A book by
William H. Calvin
A Science Masters book (BasicBooks in the US; to be available in 12 translations)
copyright ©1996 by William H. Calvin

Evolution On-The-Fly

Foresight of phenomena and power over them depend on knowledge of their sequences, and not upon any notion we may have formed respecting their origin or inmost nature.
John Stuart Mill, Auguste Comte and Positivism, 1865

One thing follows another is a fairly simple concept, one that many animals can master. Indeed, it’s what most learning is all about; for Pavlov’s dogs, it was bell tends to be followed by food.

    More than two things may be chained; many animals have elaborate song sequences, not to mention all those intricate locomotion sequences, such as gaits. Acquiring vocabulary and understanding basic word order are, as we just saw, relatively easy language tasks for both humans and bonobos.

    If sequence is so elementary, why is planning ahead so rare in the animal kingdom, except for those trivial cases of foresight that mere melatonin can handle so well? What additional mental machinery is required in order to plan for a novel contingency? (Perhaps argument structure, as in those verb-lifting handles?) How do we do something we’ve never done before, with no exact memories to guide us? How do we even imagine such a thing?

    We are always saying something we’ve never said before. The other novelty generator, operating just as frequently in our lives (though often subconsciously), is that “What happens next?” predictor, mentioned in chapter 2 in the context of humor and the distressful effects of environmental incoherence.

    Perhaps the mechanisms for foresight are similar to those used in the fancier aspects of mental grammar, the ones involving long-term dependencies, as when basic word order is replaced by the alternate forms for the who-what-when questions. Perhaps the trees used by phrase structure, or the obligatory roles of argument structure, are mental mechanisms that are useful for foresight in a more general way.

    Mental grammar provides our most detailed set of insights into those mental structures that might be handy for intelligent guessing. This chapter will take a look at three more: chunking, sequencing, and darwinian processes.

Juggling a half-dozen things at the same time is one of those abilities measured by multiple-choice tests, particularly analogy questions (A is to B as C is to [D,E,F]). It also shows up in our ability to remember phone numbers for a long enough time to dial them. Many people can hang on to a seven-digit number for five to ten seconds, but will resort to writing it down if faced with an out-of-area number or an even longer international one.

    The limitation, it turns out, is not the number of digits; it’s the number of chunks. I remember San Francisco’s area code, 415, as a single chunk, but the number 451 means nothing to me, so I would have to remember it as three chunks: 4, 5, and 1. Chunking refers to the process of collapsing 4, 1, and 5 into the entity 415. A ten-digit San Francisco phone number, such as 4153326106, is, to me, only eight chunks; our schemes for using nondialed separators when writing down numbers — as in (415)332-6106 or 415.332.6106 — are essentially aids to chunking. Since we are already familiar with many two-digit numbers as single words — for example, “nineteen” — the Parisian 42-60-31-25 style of separators makes for more easily memorized eight-digit number strings.
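    The chunk arithmetic above can be made concrete in a toy sketch. (The greedy matcher and the set of "familiar" chunks are inventions for illustration, not a model of memory.)

```python
def count_chunks(digits, familiar):
    """Greedily group a digit string into chunks, preferring the
    longest familiar run at each position; unfamiliar digits fall
    through as single-digit chunks."""
    chunks = []
    i = 0
    while i < len(digits):
        # try the longest familiar run first
        for size in range(len(digits) - i, 0, -1):
            if digits[i:i + size] in familiar:
                chunks.append(digits[i:i + size])
                i += size
                break
        else:
            chunks.append(digits[i])  # no familiar run: a one-digit chunk
            i += 1
    return chunks

# Someone who knows "415" as a single chunk holds the ten digits
# as eight chunks; someone who doesn't must hold all ten.
print(len(count_chunks("4153326106", {"415"})))  # 8
print(len(count_chunks("4153326106", set())))    # 10
```

    The point of the sketch is only that the load on the scratch pad depends on what you already know, not on the raw digit count.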

    How many chunks can you hang onto? That varies among people, but the typical range forms the title of a famous 1956 paper by the psychologist George Miller: “The Magical Number Seven, Plus or Minus Two.” It’s as if the mind had room for only a limited number of items — at least, in the work space used for current problems. When you get close to your limit, you try to collapse several items into one chunk so as to make more room. Acronyms are a version of chunking, making one “word” from many. Indeed, many new words are just substitutes for a longer phrase, as when someone invented ambivalence as a shortcut, to save a whole paragraph of explanation. A dictionary is a compendium of chunking over the centuries. The combination of chunking and rapid speech, so that much meaning can be accommodated within the brief span of short-term memory, has surely been important for holding as much information as possible in mind at the same time.

    So one of the first lessons about working memory is that there’s seemingly a limited scratch pad, better suited to a half-dozen items than twice that number. This limitation probably has some implications for intelligence (certainly for IQ tests!), but the key feature of intelligent acts is creative divergent thinking, not memory per se. What we need is a process that will produce good guesses.

Language and intelligence are so powerful that we usually assume that more and more would be better and better. Evolutionary theorists, however, are fond of demonstrating that evolution is full of dead-end stabilities that can prevent such straightforward "progress," and they like to point out evolution's indirect routes involving multipurpose organs. Many organs are actually multipurpose and change their relative mix of functions over time. (When did that gas exchange organ in fish, known as the "swim bladder" because of its role in neutralizing buoyancy, become a lung?) And, if the analogy to computer software is to be believed, it's far easier for the brain to be multipurpose than it is for any other organ system. Some regions of the brain are surely multipurpose too.

    So, in asking about how neural machinery for foresight or language got started, we must bear in mind that the underlying mechanisms might serve multiple functions, any one of which could be driven by natural selection and so incidentally benefit the others. They might be like what architects call core facilities, such as the rooms for the photocopy machines and the mailboxes. The mouth, for example, is a multipurpose core facility involved with drinking, tasting, ingesting, vocalization, and emotional expression; in some animals, also with breathing, cooling off, and fighting. Bundling (paying for one thing, but getting something else “free”) is a familiar marketing strategy. What human abilities might come bundled together like the proverbial “free lunch” that comes with the cost of the drinks? In particular, might syntax or planning come bundled with some other ability, simply because they can make spare-time use of a core facility?

    I realize that a "free lunch" explanation is going to offend the sensibilities of the more Calvinist of the strict adaptationists in evolutionary theory — the ones who think that every little feature has to pay its own way. But strict accounting isn't always the name of the game. As noted earlier (enlarge one, enlarge them all), mammalian brain enlargements tend not to come piecemeal. And a free lunch is just another way of looking at what the original adaptationist himself emphasized: Charles Darwin reminded his readers, in a caution about his general emphasis on adaptations, that conversions of function were "so important."

    In the midst of converting function — swim bladder into lung, for example — there is likely to be a multifunctional period (indeed, the multifunctional period could last forever). During it, an anatomical feature formerly under natural selection for one function gives an enormous boost to some new function, far beyond whatever natural selection the new function has experienced so far. Lungs were "bootstrapped" by earlier buoyancy considerations. What brain functions have bootstrapped others, and does that tell us anything about intelligence?

We certainly have a passion for stringing things together in structured ways, ones that go far beyond the sequences produced by other animals. Besides words into sentences, we combine notes into melodies, steps into dances, and elaborate narratives into games with procedural rules. Might structured strings be a core facility of the brain, useful for language, storytelling, planning ahead, games, and ethics? Might natural selection for any of these abilities augment the common neural machinery, so that improved grammar incidentally serves to expand plan-ahead abilities?

In considering transitions of organs, it is so important to bear in mind the probability of conversion from one function to another....
Charles Darwin,
The Origin of Species, 1859
    Some beyond-the-apes abilities — music, for example — are puzzling, because it is hard to imagine environments that would give the musically gifted an evolutionary advantage over the tone-deaf. To some extent, music and dance are surely secondary uses of that very neural machinery that was shaped up by structured strings more exposed to natural selection, such as language.

    What other beyond-the-apes abilities were likely to have been under strong natural selection? As improbable as it initially seems, planning ballistic movements may have once promoted language, music, and intelligence. Apes have elementary forms of the rapid arm movements that we’re experts at — hammering, clubbing, and throwing — and one can imagine hunting and toolmaking scenarios that in some settings were important additions to the basic hominid gathering and scavenging strategies. If the same “structured string” core facility is used for the mouth as is used for ballistic hand movements, then improvements in language might promote manual dexterity. It could work the other way, too: accurate throwing opens up the possibility of eating meat regularly, of being able to survive winter in the temperate zone — and of talking all the better as an incidental benefit, a “free lunch.”

Choosing between hand movements involves finding a candidate movement program — likely a characteristic firing pattern of cortical neurons — and then some additional candidates. Little is yet known about how this transpires in the human brain, but a simple model involves multiple copies of each movement program, each competing for space in the brain. The program for an open palm might make copies more readily than the program for making a V-sign or a precision pincer grip.

    Ballistic movements (so named because beyond a certain point there is no opportunity to modify the command) require a surprising amount of planning, compared to most movements. They also likely require lots of clones of the movement program.

    For sudden limb movements lasting less than about an eighth of a second, feedback corrections are largely ineffective, because reaction times are so long. Nerves conduct too slowly, and decisions aren’t made quickly enough; feedback might help plan for next time, if the target hasn’t run away by then, but it’s no help in real time. For the last eighth second of clubbing, hammering, throwing, and kicking, the brain has to plan every detail of the movement and then spit it out, rather like punching a roll for a player piano and then letting it roll.

    We need nearly complete advance planning for ballistic movements during “get set,” with no reliance on feedback. Hammering requires planning the exact sequence of activation for dozens of muscles. For throwing, the problem is difficult for an additional reason: there is a launch window — a range of times when the projectile can be released and still hit the target. Release occurs shortly after the velocity peaks, as the projectile sails out of the decelerating hand. Getting this peak velocity to occur at exactly the right time, at the appropriate angle from the horizontal, is the trick.

    Given the launch-window problems, you can see why planning is so difficult for human ballistic movements. Launch windows depend on how far away the target is, and on how big it is. Let’s say that, eight tries out of ten, you can hit a rabbit-sized target from the length of one parallel parking space — that implies a launch window of 11 milliseconds. Hitting the same target from twice the distance with equal reliability means releasing within a launch window about eight times narrower, 1.4 msec. Neurons are not exactly atomic clocks when it comes to timing accuracy; there is a lot of jitter in when impulses are produced, enough so that any one neuron would have trouble hitting the broad side of a barn if it had to time the ball’s release all by itself.

    Happily, many noisy neurons are better than a few — so long as they’re all “doing their own thing,” making their own mistakes independently. Averaging their output cancels out some of the noise. You can see this principle at work in the heart, making the heartbeat more regular: a fourfold increase in the number of pacemaker cells serves to cut the heartbeat jitter in half. To reduce ballistic release jitter eightfold requires averaging the output of 64 times as many noisy neurons as you needed to program the original throw. If you want to hit that same rabbit-sized target at three times the distance with the same eight-out-of-ten reliability, count on recruiting a lot of helpers: you will require 729 times as many neurons as the number sufficient for your standard short throw. It’s redundancy, but in a different sense from, say, the three ways every large airplane has of lowering the landing gear.
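    The numbers 64 and 729 follow from the two scaling rules just stated: averaged timing jitter falls as the square root of the number of neurons, and the launch window narrows roughly as the cube of the throwing distance. A few lines of arithmetic reproduce them (the function name is mine):

```python
import math

def neurons_needed(distance_ratio):
    """Relative number of timing neurons needed to keep hitting the
    same target when the throwing distance grows by distance_ratio."""
    window_shrink = distance_ratio ** 3  # window narrows ~8x per doubling
    # jitter falls as 1/sqrt(N), so an 8x jitter cut needs 8**2 = 64x neurons
    return window_shrink ** 2

print(neurons_needed(2))  # 64: double the distance
print(neurons_needed(3))  # 729: triple the distance

# heartbeat example: four times the pacemaker cells halves the jitter
print(1 / math.sqrt(4))   # 0.5
```

    Put together, neuron count has to grow as the sixth power of distance — which is why precision throwing is such a voracious consumer of redundant circuitry.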

    So now we have a third insight into relevant brain mechanisms for fancy sequences: besides those trees and handles of syntax, besides those limited scratch pad memories that encourage chunking, we see that fancy sequences of activation, such as ballistic movements, probably share cerebral real estate with other fancy sequences — and that some need hundredfold levels of redundancy when precision timing is important.

    Lots of planning space is also needed when you are throwing at a nonstandard target distance — one for which you don’t have a stored movement plan (as you might for throwing darts or basketball free-throws). For nonstandard shots, you need to create an array of variants between two standard programs and pick the one that will come closest to hitting your target. Improvisation takes space. If, once you select the “best” variant, all the other variants change to conform to it, then you would have the redundancy needed for staying inside the launch window. Imagine a roomful of soloists, all singing somewhat different melodies and then converging on the one that they could sing as a chorus. And then, for real precision, recruiting a lot of helpers, just as the expert choir recruits the audience in the Hallelujah Chorus.

    A core facility for structured sequences could solve a lot of problems. Does one actually exist? If so, we might occasionally see some synergy or conflict between similar movements.

Charles Darwin was one of the first to suggest hand-to-mouth synergies in his 1872 book on the expression of the emotions: “Thus persons cutting anything with a pair of scissors may be seen to move their jaws simultaneously with the blades of the scissors. Children learning to write often twist about their tongues as their fingers move, in a ridiculous fashion.”

    What kind of sequences are we talking about, anyway? Rhythmic movements per se are ubiquitous: chewing, breathing, locomotion, and so forth. They can be implemented by simple circuits at the level of the spinal cord. Like the simple one-thing-follows-another of learning, there is nothing distinctively cerebral about rhythm or other sequences. But novel sequences, that’s the rub. If there is a common sequencer for the fancier novel movements, where is it located in the brain?

    Sequencing in itself doesn’t require a cerebral cortex. Much movement coordination in the brain is done at a subcortical level, in places known as the basal ganglia and the cerebellum. But novel movements tend to depend on the premotor and prefrontal cortex, in the rear two-thirds of the frontal lobes.

    There are other regions of the cerebral cortex that are likely to be involved with sequential activities. The dorsolateral portions of the frontal lobe (dorso=top, lateral=side; if you had a pair of horns growing out of your forehead, these regions would lie beneath them) are crucial for delayed-response tasks. You show a monkey some food and allow him to watch where you hide it — but force him to wait 20 minutes before being allowed to go after it. Monkeys with damage to the dorsolateral frontal cortex will fail to retain that information. It’s not really a failure of memory but a problem of formulating a lasting intention, perhaps even an “agenda.”

    The great Russian neurologist Alexander Luria described a patient in bed with his arms under the covers. Luria asked him to raise his arm. He couldn’t seem to do that. But if Luria asked him to remove his arm from under the covers, he could do that. If Luria then asked him to raise his arm up and down in the air, the patient could do that, too. His difficulty was in planning the sequence — he got stuck on the condition of working around the obstacle of the confining bedcovers. Left prefrontal damage gives patients difficulty in unfolding a proper sequence of actions — or perhaps in planning them in the first place. Patients with damage to the left premotor cortex have trouble chaining the actions together into a fluent motion — what Luria called a kinetic melody.

    Tumors or strokes in the bottom of the frontal lobe, just above the eyes, also affect sequences of activities, such as going shopping. One famous patient, an accountant, had a high IQ and did quite well on a battery of neuropsychological tests. Yet he had big problems in organizing his life: he was fired from a series of jobs, went bankrupt, and underwent two divorces in a two-year period as a result of impulsive marriages. At the same time, this man was often unable to make simple, rapid decisions — say, about which toothpaste to buy or what to wear. He would get stuck making endless comparisons and contrasts, often making no decision at all or a purely random one. If he wanted to go out for dinner, he had to consider the seating plan, the menu, the atmosphere, and the management of each possible restaurant. He might drive by them to see how busy they were, and even then would be unable to choose among them.

    There are two major lines of evidence that suggest the lateral language area above the left ear also has a lot to do with nonlanguage sequencing. The Canadian neuropsychologist Doreen Kimura and her coworkers showed that left-lateral stroke patients with language difficulties (aphasia) also have considerable difficulty executing hand and arm movement sequences of a novel sort, a condition known as apraxia. (A fancy, though not novel, sequence would be taking your keys out of your pocket, finding the right one, inserting it into the lock, turning the key, and then pushing on the door.)

    The Seattle neurosurgeon George Ojemann and his coworkers further showed, using electrical stimulation of the brain during epilepsy operations, that much of the left-lateral language specialization is involved with listening to sound sequences. These regions include the part of the frontal lobe adjacent to Broca’s Area, the top of the temporal lobe on either side of the primary auditory cortex, and some of the parietal lobe in back of the map of the skin surface. (In other words, they’re “perisylvian,” bordering the Sylvian fissure core.) The big surprise was that these exact same areas seem heavily involved in producing oral-facial movement sequences — even nonlanguage ones, such as mimicking a series of facial expressions.

    One of the hazards of naming things in the brain is that we expect something called the language cortex to be devoted to language. But data such as Ojemann’s show that, at its core, the cortical specialization is far more generalized, concerned with novel sequences of various kinds: hand as well as mouth, sensation as well as movement, mimicry as well as narrative.

Not only can many species learn abstract symbols and a simple language, but some clearly can learn categories. Indeed, animals often overgeneralize, in the same way that a baby goes through a phase of calling all adult males “Daddy.” Relationships can be learned, such as is-a or is-larger-than. A banana is a fruit, a banana is larger than a chestnut.

    Closer to intelligence is the power of analogies, metaphors, similes, parables, and mental models. They involve comparing relationships, as when we make an imperfect analogy between is-bigger-than and is-faster-than, by inferring that bigger-is-faster.

    We humans can mentally operate in a familiar domain (for example, filing a document in a file folder or throwing it in a wastebasket) and carry this relationship over to a less familiar domain (saving or deleting computer files), perhaps by means of moving icons on a screen. We can make a gesture in one mental domain and have it interpreted in another. These mappings all break down somewhere — and, in Robert Frost’s words, we have to know how far we can ride a metaphor, judge when it’s safe.

    Consider the mapping from one domain to another that Umberto Eco creates here:

     The fact is that the world is divided between users of the Macintosh computer and users of MS-DOS compatible computers. I am firmly of the opinion that the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counterreformist and has been influenced by the ’ratio studiorum’ of the Jesuits. It is cheerful, friendly, conciliatory, it tells the faithful how they must proceed step by step to reach — if not the Kingdom of Heaven — the moment in which their document is printed. It is catechistic: the essence of revelation is dealt with via simple formulae and sumptuous icons. Everyone has a right to salvation.
     DOS is Protestant, or even Calvinistic. It allows free interpretation of scripture, demands difficult personal decisions, imposes a subtle hermeneutics upon the user, and takes for granted the idea that not all can reach salvation. To make the system work you need to interpret the program yourself: a long way from the baroque community of revelers, the user is closed within the loneliness of his own inner torment.
     You may object that, with the passage to Windows, the DOS universe has come to resemble more closely the counterreformist tolerance of the Macintosh. It’s true: Windows represents an Anglican-style schism, big ceremonies in the cathedral, but there is always the possibility of a return to DOS to change things in accordance with bizarre decisions....
     And machine code, which lies beneath both systems (or environments, if you prefer)? Ah, that is to do with the Old Testament, and is Talmudic and cabalistic.

The excerpt is from an English translation of Umberto Eco's back-page column, “La bustina di Minerva,” in the Italian newsweekly Espresso (September 30, 1994).

Most mappings are simpler, as when objects are associated with a sequence of phonemes (as in naming). Chimpanzees, with some effort, can learn simple analogies, such as A is to B as C is to D. If the chimp could apply such mental manipulations to events in its everyday life instead of using them only while at the testing apparatus, it would be a more capable ape. Humans, obviously, keep mapping into more and more abstract domains, notching stratified stability up a few more levels.

You are reading an excerpt from chapter 6 of HOW BRAINS THINK.

Safety is the big problem with trial combinations, ones that produce behaviors that have never been done before. Bigger isn’t always faster. Even simple reversals in order can yield dangerous novelty, as in “Look after you leap.” In 1943, in his book The Nature of Explanation, the British psychologist Kenneth Craik proposed that:

the nervous system is... a calculating machine capable of modeling or paralleling external events.... If the organism carries a “small-scale model” of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilise the knowledge of past events in dealing with the future, and in every way to react in a much fuller, safer and more competent manner to the emergencies which face it.
Humans can simulate future courses of action and weed out the nonsense off-line; as the philosopher Karl Popper has said, this “permits our hypotheses to die in our stead.” Creativity — indeed, the whole high end of intelligence and consciousness — involves playing mental games that shape up quality.

    What sort of mental machinery might it take, to do something of the sort that Craik suggests?

The American psychologist William James was talking about mental processes operating in a darwinian manner in the 1870s, little more than a decade after Charles Darwin published On the Origin of Species. The notion of trial and error was developed by the Scottish psychologist Alexander Bain in 1855, but James was using evolutionary thinking in addition.

    Not only might darwinism shape up a better brain in two million years without the guiding hand of a master potter, but another darwinian process, operating in the brain, might shape up a more intelligent solution to a problem on the milliseconds-to-minutes time scale of thought and action. The body’s immune response also appears to be a darwinian process, whereby antibodies that are better and better fits to the invading molecule are shaped up in a series of generations spanning several weeks.

    Darwinian processes tend to start from the biological basic: reproduction. Copies are always happening. One theory of making up your mind is that you form some plans for movement — making an open hand, or a V-sign, or a precision pincer movement — and that these alternative movement plans reproductively compete with one another until one “wins.” On that theory, it takes a critical mass of command clones before any action is finally initiated.

    Darwinism requires a lot more than just reproduction and competition, however. When I try to abstract the essential features of a darwinian process from what we know about species evolution and the immune response, it appears that a Darwin Machine must possess six essential properties, all of which must be present for the process to do anything interesting:

     It involves a pattern. Classically, this is the string of DNA bases called a gene. As Richard Dawkins pointed out in The Selfish Gene, the pattern could also be a cultural one such as a melody, and he usefully coined the term meme for such patterns. The pattern could also be the brain patterns associated with thinking a thought.

     Copies are somehow made of this pattern. Cells divide. People hum or whistle a tune they’ve overheard. Indeed, the unit pattern (that’s the meme) is defined by what’s semi-reliably copied — for example, the gene’s DNA sequence is semi-reliably copied during meiosis, whereas whole chromosomes or organisms are not reliably copied at all.

     Patterns occasionally change. Point mutations from cosmic rays may be the best known alterations, but far more common are copying errors and (as in meiosis) shuffling the deck.

      Copying competitions occur for occupation of a limited environmental space. For example, several variant patterns called bluegrass and crabgrass compete for my back yard.

     The relative success of the variants is influenced by a multifaceted environment. For grass, the operative factors are nutrients, water, sunshine, how often it’s cut, and so on. We sometimes say that the environment “selects,” or that there is selective reproduction or selective survival. Charles Darwin called this biasing by the term natural selection.

     The next generation is based on which variants survive to reproductive age and successfully find mates. The high mortality among juveniles makes their environment much more important than that of adults. This means that the surviving variants place their own reproductive bets from a shifted base, not from where the center of the variations was at conception (this is what Darwin called the inheritance principle). In this next generation, a spread around the currently successful is again created. Many new variants will be worse than the parental average, but some may be even better “fitted” to the environment’s collection of features.
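     The six essentials can be compressed into a toy simulation — a caricature, not a brain model; the target string, mutation rate, and population size are all arbitrary choices of mine — in which variant letter strings compete to copy themselves into a limited population:

```python
import random

random.seed(1)  # make the toy run repeatable

TARGET = "banana"  # stands in for a multifaceted environment

def fitness(pattern):
    # 5. the environment biases relative success (here, match to TARGET)
    return sum(a == b for a, b in zip(pattern, TARGET))

def copy_with_errors(pattern, rate=0.1):
    # 2 & 3. copying, with occasional errors that create variants
    return "".join(random.choice("abn") if random.random() < rate else c
                   for c in pattern)

# 1. the patterns: a small population of letter strings
population = ["aaaaaa"] * 20

for _ in range(50):
    # 4. a copying competition for limited space: only the best five
    #    place copies into the next generation of twenty
    # 6. the next generation spreads its variants around the current
    #    survivors, not around the original center
    parents = sorted(population, key=fitness, reverse=True)[:5]
    population = parents + [copy_with_errors(random.choice(parents))
                            for _ in range(15)]

best = max(population, key=fitness)
print(best, fitness(best))
```

     Drop any one of the six ingredients — say, the copying errors, or the biased competition — and the drift toward well-fitted patterns stops, which is the point of insisting that all six be present.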

From all this, one gets that surprising darwinian drift toward patterns that almost seem designed for their environment. (There! I actually managed to work “intelligent design” into this intelligence book; maybe there’s hope yet for “military intelligence”).

    Sex (which is shuffling genes using two decks) isn’t essential to the darwinian process, and neither is climate change — but they add spice and speed to it, whether it operates in milliseconds or millennia. A third factor accelerating the darwinian process is fragmentation and the isolation that follows: the darwinian process operates more quickly on islands than on continents. For some fancy darwinian processes requiring speed (and the time scale of thought and action certainly does), that might make fragmentation processes essential. A decelerating factor is a pocket of stability that requires considerable back-and-forth rocking in order to escape from it; most stable species are trapped in such stabilizing pockets.

    People are always confusing particular parts, such as “natural selection,” with the darwinian whole. But no one part by itself will suffice. Without all six essentials, the process will shortly grind to a halt.

    People also associate the darwinian essentials exclusively with biology. But selective survival, for example, can be seen when flowing water carries away the sand and leaves the pebbles behind. Mistaking a part for the process (“Darwinism is selective survival”) is why it has taken a century for scientists to realize that thought patterns may also need to be repeatedly copied — and that copies of thoughts may need to compete with copies of alternative ones on “islands” during a series of mental “climate changes” in order to rapidly evolve an intelligent guess.

In our search for suitable brain mechanisms for guessing intelligently, we now have (1) those nested boxes of syntax that underlie strings; (2) argument structure with all its clues about probable roles; (3) those relative position words such as near-into-above; (4) the limited size of scratch pad memory and the consequent chunking tendencies; and (5) common core facilities for fancy sequences, with quite a lot of need for extra copies of the neural patterns used to produce ballistic movements. Our sixth clue, from darwinian processes, now appears to be a whole suite of features: distinctive patterns, copying them, establishing variants via errors (with most of the variants coming from the most successful), competition, and the biasing of copying competitions by a multifaceted environment. What’s more, it looks as if the multifaceted environment is partly remembered and partly current.

    Fortunately, there is some overlap of darwinian considerations with those from the ballistic movements: darwinian backyard work spaces might utilize the “get set” scratch pads, darwinian copying could help produce the jitter-reducing movement command clones. What else might correspond? In particular, what are those patterns that we might need to clone, on the time scale of thought and action?

Thoughts are combinations of sensations and memories — or, looked at another way, thoughts are movements that haven’t happened yet (and maybe never will). They’re fleeting and mostly ephemeral. What does this tell us?

    The brain produces movements by means of a barrage of nerve impulses going to the muscles, whether of the limbs or the larynx. Each muscle is activated at a somewhat different time, often only briefly; the whole sequence is timed as carefully as the finale of a fireworks display. A plan for a movement is like a sheet of music or a player piano roll. In the latter case, the plan covers 88 output channels and the times at which each key is struck, and, indeed, the ballistic movements involve almost as many muscles as the piano has notes. So a movement is a spatiotemporal pattern not unlike a musical refrain. It might repeat over and over, like the rhythms of locomotion, but it could also be more like a one-shot arpeggio, triggered by another temporal pattern.

    Some spatiotemporal patterns in the brain probably qualify for the name cerebral code. Though individual neurons are more sensitive to some features of an input than others, no single neuron represents your grandmother’s face. Just as your sense of a color depends on the relative activity in three different cone pathways from the retina, and a taste can be represented by the relative amounts of activity in about four different types of tongue receptors, so any one item of memory is likely to involve a committee of neurons. A single neuron, like any one key on the piano, is likely to play different roles in different melodies (most often, of course, its role is to keep quiet — again, like a piano key).

    A cerebral code is probably the spatiotemporal activity pattern in the brain which represents an object, an action, or an abstraction such as an idea — just as bar codes on product packages serve to represent without resembling. When we see a banana, various neurons are stirred by the sight: some of the neurons happen to specialize in the color yellow, others in the short straight lines tangent to the banana’s curve. Evoking a memory is simply reconstituting such a pattern of activity, according to the cell-assembly hypothesis put forward in 1949 by the Canadian psychologist Donald O. Hebb.

    So the banana committee is like a melody, if we imagine the neurons involved as unpacked along a musical scale. Some neurophysiologists think that the involved neurons all have to fire synchronously, as in a chord, but I think that a cerebral code is more like a short musical melody, composed of chords and individual notes; we neurophysiologists just find it easier to interpret chords than we do scattered single notes. What we really need are the families of strange attractors associated with words, but that's another book! (The Cerebral Code).

Music is the effort we make to explain to ourselves how our brains work. We listen to Bach transfixed because this is listening to a human mind.
Lewis Thomas, The Medusa and the Snail, 1979
We know that long-term memories cannot be spatiotemporal patterns. For one thing, they survive even massive shutdowns of the electrical activity in the brain, as in seizures or coma. But we now have lots of examples of how to convert a spatial pattern into a spatiotemporal one: musical notation, player pianos, phonograph records — even bumps in a washboarded road waiting for a car to come along and recreate a bouncing spatiotemporal pattern.

    This is what Donald Hebb called the dual trace memory: a short-term active version (spatiotemporal) and a long-term spatial-only version similar to a sheet of music or the grooves on a phonograph record.

    Some of these “cerebral ruts” are as permanent as those in the grooves of a phonograph record. The bumps and ruts are, essentially, the strengths of the various synapses that predispose the cerebral cortex to produce a repertoire of spatiotemporal patterns, much as the connection strengths in the spinal cord predispose it to produce the spatiotemporal patterns we know as walking, trotting, galloping, running, and so forth. But short-term memories can be either active spatiotemporal patterns (probably what is called “working memory” in the psychology literature) or transient spatial-only patterns — temporary ruts that somewhat overwrite the permanent ruts but don’t vibrate (they merely fade in a matter of minutes). They’re simply the altered synaptic strengths (what is called “facilitation” and “long-term potentiation” in the neurophysiological literature), the bumps left behind by a repetition or two of the characteristic spatiotemporal pattern.
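Hebb's dual-trace idea can be caricatured in a few lines of code, with every detail invented for illustration: the long-term store is spatial only, just a set of bump positions, and the active spatiotemporal trace is recreated by running something across it, the way a car recreates the bounce pattern from a washboarded road.

```python
# Toy dual-trace memory. The stored trace is spatial only (which
# positions have "bumps"); driving across it regenerates a pattern
# in time. Everything here is an invented illustration.

bumps = {2, 5, 6, 9}            # spatial-only long-term trace ("ruts")

def drive_across(bumps, length):
    """Recreate the spatiotemporal pattern: one bounce per bump."""
    return [1 if pos in bumps else 0 for pos in range(length)]

bounces = drive_across(bumps, 12)
assert bounces == [0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]

# A transient short-term trace: the same sort of bumps, but with
# strengths that fade toward zero in a matter of minutes.
def fade(strengths, rate=0.5):
    return {pos: s * rate for pos, s in strengths.items()}

temporary = fade({2: 1.0, 7: 0.8})   # half as strong after one step
```

The asymmetry is the point: the active pattern can be shut down (seizure, coma) and later reconstituted, because the spatial-only bumps survive the silence.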

    The truly persistent bumps and ruts are unique to the individual, even to each identical twin, as the American psychologist Israel Rosenfield explains:

Historians constantly rewrite history, reinterpreting (reorganizing) the records of the past. So, too, when the brain’s coherent responses become part of a memory, they are organized anew as part of the structure of consciousness. What makes them memories is that they become part of that structure and thus form part of the sense of self; my sense of self derives from a certainty that my experiences refer back to me, the individual who is having them. Hence the sense of the past, of history, of memory, is in part the creation of the self.

Copying is going to be needed over long distances in the brain. Like a fax machine, the brain must take a pattern and make a distant copy of it, perhaps on the other side of the brain. The pattern cannot be physically transported in the manner of a letter, so telecopying is likely to be important when the visual cortex wants to tell the language area that an apple has been seen. The need for copying suggests that the pattern we seek is the working memory, that active spatiotemporal pattern, since it is difficult to see how “ruts” would otherwise copy themselves at a distance.

    A darwinian model of mind and my analysis of the activity of throwing suggest that many clones might be needed locally, not just a few in distant places. Furthermore, in a darwinian process, an activated memory must somehow compete with other spatiotemporal patterns for occupation of a work space. And another question must be answered: what decides whether one such “melody” is better than another?

    Suppose a spatiotemporal pattern, produced in one little patch of cortex with the aid of some appropriate “ruts”, manages to induce the same melody in an adjacent cortical area that lacks those ruts. The pattern can nonetheless be performed there, driven by the active copying from next door, even if it might not sustain itself once the driving stops, the way a square dance might fizzle out without a caller. If the adjacent area has bumps and ruts that are “close enough”, the melody might catch on better, and die out less readily, than some other imposed melody. So resonating with a passive memory could be one aspect of the multifaceted environment that biases a competition.

    In this way, the permanent bumps and ruts bias the competition. But so do the fading ones that were made by spatiotemporal activity patterns in that same patch of cortex a few minutes earlier. So, too, do the current active inputs to the region from elsewhere, the ones that are (like most synaptic inputs) in themselves too weak to induce a melody or create ruts. Probably most important is the background of secretions from the four major diffuse projection systems, the ones associated with the neuromodulators serotonin, norepinephrine, dopamine, and acetylcholine. Other emotional biases surely come from the neocortical projections of such subcortical brain sites as the amygdala. Thalamic and cingulate gyrus inputs may bias competitions elsewhere, in the name of shifting your attention from external to memorized environments. Thus the current real-time environment, memories of near-past and long-past environments, emotional state, and attention all change the resonance possibilities, and all likely bias the competition that shapes up a thought. Yet they could do this without themselves forming up clones to compete for cortical territory.

The picture that emerges from such theoretical considerations is one of a quilt, some patches of which enlarge at the expense of their neighbors as one code copies more successfully than another. As you try to decide whether to pick an apple or a banana from the fruit bowl (so my theory goes), the cerebral code for apple may be having a cloning competition with the one for banana. When one code has enough active copies to trip the action circuits, you might reach for the apple.

    But the banana codes need not vanish; they could linger in the background as subconscious thoughts, undergoing variations. When you unsuccessfully try to remember someone’s name, the candidate codes might continue copying for the next half hour, until suddenly Jane Smith’s name seems to “pop into your mind,” because your variations on the spatiotemporal theme finally hit a resonance good enough to generate a critical mass of identical copies. Our conscious thought may be only the currently dominant pattern in the copying competition, with many other variants competing for dominance, one of which will win a moment later, when your thoughts seem to shift focus.
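The cloning competition described above can be sketched as a toy simulation, with all numbers and names invented: two codes copy themselves into a fixed-size work space, copying is biased by how well each code "resonates" with the stored ruts, and the first code to occupy a critical fraction of the territory wins (this is, in effect, a biased Moran process, not anything from Calvin's own modeling).

```python
import random

# Toy cloning competition between two cerebral "codes" competing for
# a work space of limited size. Copying is biased by how well each
# code resonates with stored "ruts". All numbers are invented.

def compete(bias, n_slots=100, threshold=0.75, seed=1):
    """Each step, one occupant is overwritten by a copy of another,
    chosen with probability proportional to its resonance bias.
    Returns the first code to hold `threshold` of the work space."""
    rng = random.Random(seed)
    codes = list(bias)
    space = [rng.choice(codes) for _ in range(n_slots)]
    while True:
        for code in codes:
            if space.count(code) >= threshold * n_slots:
                return code
        parent = rng.choices(space, weights=[bias[c] for c in space])[0]
        space[rng.randrange(n_slots)] = parent

# "apple" resonates a little better with the stored ruts than
# "banana", so it tends to win the territory, though the loser's
# copies linger in the background until the end.
winner = compete({"apple": 1.1, "banana": 1.0})
```

Even a small resonance advantage tilts the outcome over many copying steps, which is the sense in which a "close enough" passive memory biases the competition without dictating it.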

    It may be that darwinian processes are only the frosting on the cognitive cake; it may be that much is routine or rule-bound. But we often deal with novel situations in creative ways, as when you decide what to fix for dinner tonight. You survey what’s already in the refrigerator and on the kitchen shelves. You think about a few alternatives, keeping track of what else you might have to fetch from the grocery store. All this can flash through your mind within seconds — and that’s probably a darwinian process at work, as is speculating about what tomorrow might bring.

We build mental models that represent significant aspects of our physical and social world, and we manipulate elements of those models when we think, plan, and try to explain events of that world. The ability to construct and manipulate valid models of reality provides humans with our distinctive adaptive advantage; it must be considered one of the crowning achievements of the human intellect.

Gordon H. Bower and Daniel G. Morrow, 1990

Conflicts of representation are painful for a variety of reasons. On a very practical level, it is painful to have a model of reality that conflicts with those of the people around you. The people around you soon make you aware of that. But why should this conflict worry people, if a model is only a model, a best guess at reality that each of us makes? Because nobody thinks of it in that way. If the model is the only reality you can know, then that model is reality, and if there is only one reality, then the possessor of a different model must be wrong.

Derek Bickerton, 1990
