A book by
William H. Calvin
UNIVERSITY OF WASHINGTON
SEATTLE, WASHINGTON   98195-1800   USA
The Cerebral Symphony
Seashore Reflections on the
Structure of Consciousness

Copyright ©1989 by William H. Calvin.

You may download this for personal reading but may not redistribute or archive without permission (exception: teachers should feel free to print out a chapter and photocopy it for students).


10

Darwin on the Brain:
Self-organizing Committees

Contrary to what I once thought, scientific progress did not consist simply in observing, in accurately formulating experimental facts and drawing up a theory from them. It began with the invention of a possible world, or a fragment thereof, which was then compared by experimentation with the real world. And it was this constant dialogue between imagination and experiment that allowed one to form an increasingly fine-grained conception of what is called reality.
the French molecular biologist François Jacob, 1988

The darwinian competition of ideas, which the nineteenth century identified as a basis of thought, suggests that we might gain some insight about thinking from a study of evolutionary mechanisms that usually operate on long time scales. But in ideas, we are always dealing with a string of words or more elaborate concepts. How do we compare strings? How do masses of nerve cells interact to shape up new concepts from random noise? How does a group of nerve cells get together to initiate a movement?
      It's actually not too different from how fancy tools can be shaped up by random bashing about. And how we devise "logical" shortcuts when we want to repeat a success.

DARWINIAN TOOLMAKING is probably the simplest way of making simple stone tools. I just hauled back some potato-sized rocks from the Oyster Pond Beach, all so that I can demonstrate the late Glynn Isaac's toolmaking technique. John Pfeiffer (whose books The Emergence of Humankind and The Creative Explosion are among the most widely read of anthropology books) is always around MBL in the summer, and we fell into reminiscence the other day about Glynn. John first met him in East Africa when visiting Louis Leakey, and found Glynn in charge of the archaeology (the search for cultural artifacts such as stone tools, as opposed to bones).
      Some of the earliest toolmaking methods, even back 2 million years ago when hominids had ape-sized brains, seem based entirely on producing randomness and then selecting the useful. Glynn Isaac used to demonstrate early toolmaking techniques during his archaeology lectures by pounding together two potato-sized rocks, not delicately but furiously: Chips would soon be scattered all over the floor of the stage. After a minute, he would stop and sort through the dozens of stone flakes. And he would pick up some excellent analogs of the single-edged razor blade, just the thing for incising the tough hide of a savannah animal, or amputating a leg at the joint.
      The half-potato-shaped fragments left over after the first round of brute bashing will have some sharp edges too, and the smooth part of the remaining rock will serve as a handle, enabling real pressure to be brought to bear, handy for carving harder stuff. All this without notions of design: It serves as a far simpler toolmaking method than our usual notions of careful craftsmanship. Make lots of random variants by brute bashing about, then select the good ones. Careful craftsmanship probably developed where the raw materials were scarce; before then, a different ethic prevailed. Just try to imagine this sign posted in a modern factory:

The more Waste, the more Progress!
Yet that was probably the philosophy at one time, before planning ahead was well established (and we started throwing away the finished product, after only one use!).
      Of course, you have to recognize the sharp fragment as useful in order to accomplish the selection step following the random one. So how did hominids get the idea of a sharp tool, so they could select the variant? If this is like the "faces in the driftwood" -- children at the beach are always finding driftwood with familiar shapes, such as faces, and are hard to convince that the shape simply happened, wasn't created for their amusement -- then we haven't gained much.
      But there is clearly a simple way of developing the mental image of a sharp edge as useful. Even baboons and chimpanzees use rocks to hammer on tough nuts, to crack open their shells. And sometimes the rocks split open instead. Thus flakes and handle-sized sharp rocks might be available, just lying around during further nut-cracking. And some might be used as probes à la termite-fishing -- but in nut-cracking, one can use them to pry open cracks in tough nuts and so get at the soft innards without the danger of pulverizing them with further hammering. It seems but a small step from such serendipitous use to the purposeful toolmaking of the kind that Glynn Isaac discovered had been practiced in East Africa -- and used at some stage to carve up large chunks of grazing animal -- to make protein portable.
      Randomness plus selection is powerful, but multiple rounds of it are much more powerful as they can shape up the raw materials into things that look very nonrandom, very purposeful, even designed. Perhaps another round of brute bashing resulted in a flake splitting, two sharp edges intersecting in a point. Et voilà, a pointy instrument handy for gouging and other noncutting uses of knives. Among other uses, such intersecting edges make formidable weapons. One naturally thinks of arrowheads, but they came along very late, only during the last ice age or perhaps the one before last -- but that's because hafting, the attachment to a shaft or wooden handle, seems to have been invented late. So for more than a million years, hominids instead held on to a half-potato-shaped remnant sporting two intersecting sharp edges, and probably treated such a stone dagger as a prized possession.
      And given apelike tendencies to imitate, it probably wasn't long before other hominids were making their own stone knives and daggers, lots of handy rocks being sacrificed to the cause. Whether one calls it an arms race or just consumer mimicry not unlike the microscope mania one sees at MBL, it surely generated a lot of rounds of bashing and selecting.

SO IF TOOLMAKING AND BRAINSTORMING look, in their simple forms, like darwinism at work, maybe we had better see how darwinism applies to everything else that the brain does. While species evolution has a millennia-to-eons time scale, perhaps the brain has adopted darwinism in a big way on a seconds-to-days time scale. But what about the details -- how does something fancy and logical arise from something random?
      Certainly the familiar mechanistic example of darwinism outside of species evolution is how our body's immune system crafts a defense against foreign invaders. Most of us have two different versions of the gene that serves to get the immune system started; additionally, there is probably a lot of gene shuffling during development, which creates a wide variety of templates (attached to antibodies) for recognizing foreign molecules (called antigens) by their shapes. When an antigen comes along, one or another of the specialized B-cells most attuned to that molecular shape will bind it. And this stimulates that B-cell to produce more antibodies: not just clones of itself, but a variety of variations on the same theme. Some of these will be even better at detecting and binding the foreign molecule. And so another generation of variations on the better theme will be produced, some of which will be even better. Pretty soon all of the foreign invaders are bound, and taken out of circulation so they can't do further harm; if the antigens are attached to the surface of a cell like a bacterium, the cell too is destroyed.
      And a population of circulating antibodies specialized for that particular foreigner remains in circulation, just in case it comes back. That's how we acquire immunity, why we don't get childhood diseases a second time (or even a first time, provided that a vaccine has already stimulated the immune response with a harmless bit of the antigen sometime in the past).
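      To make that loop concrete, here is a minimal sketch of the variation-and-selection cycle in Python. Everything in it is an illustrative stand-in rather than real immunology: a molecular "shape" is reduced to a string of bits, binding strength to a count of matching bits, and the repertoire size and mutation rate are arbitrary choices.

```python
import random

SHAPE_BITS = 16          # toy stand-in for an antibody's binding-site "shape"

def affinity(antibody, antigen):
    """Count matching bits: a crude stand-in for binding strength."""
    return sum(a == b for a, b in zip(antibody, antigen))

def mutate(antibody, rate=0.1):
    """Return a variation on the same theme: each bit may flip."""
    return [b ^ (random.random() < rate) for b in antibody]

random.seed(1)
antigen = [random.randint(0, 1) for _ in range(SHAPE_BITS)]

# Start with a diverse, randomly generated repertoire of "B-cells".
repertoire = [[random.randint(0, 1) for _ in range(SHAPE_BITS)]
              for _ in range(50)]

generation = 0
while True:
    best = max(repertoire, key=lambda ab: affinity(ab, antigen))
    score = affinity(best, antigen)
    print(f"generation {generation}: best affinity {score}/{SHAPE_BITS}")
    if score == SHAPE_BITS:      # all invaders bound: problem "solved"
        break
    # The best binder proliferates -- not as exact clones, but as
    # another generation of variations on the better theme.
    repertoire = [best] + [mutate(best) for _ in range(49)]
    generation += 1
```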
      So variation and selection can work fairly quickly to explore the possibilities, even if this antigenic molecule is a completely novel one, never seen before in nature. It's essentially a successive approximation method that homes in on a molecular shape. It is very reminiscent of the Puzzle Principle that Minsky reminds us about:
     

We can program a computer to solve any problem by trial and error, without knowing how to solve it in advance, provided only that we have a way to recognize when the problem is solved.

      The disappearance of the antigen signals the solution of the problem. We can also think about a problem using similar mental trial and error and, given enough time (which means knowing enough shortcuts!), solve it without knowing in advance how to proceed.
      At the mechanistic level, might the brain do something similar to the immune system's variation-and-selection game in a matter of seconds? To fit a schema never seen before? Memorize a telephone number? Explore a creative thought? Or make an awkward movement into a skillful one? Darwinism (particularly the darwinism of committees) turns out to be involved not only in memory, but in how you move your hand -- indeed, in how you decide to move your hand.

I'm holding my hand out in front of me. It's amazing! I can open and close it! Just like that. No problem. I just decide to do it, and it's done. There's something very strange going on here!
John Hoag, 1987

You move it unconsciously, of course. The interesting thing about that is that you could learn to move it consciously if you went to a lot of trouble.

Howard Rheingold, 1987

Well, I remember how hard it was to get my hand to make fine motor movements. Remember making little circles with a pencil, in first grade? You think it's marvelous that you can move your hand just by wishing to do so, but actually that's the culmination of years of practice and training.

Corinne Cullen Hawkins, 1987

SAYING THAT THE "MIND" COMMANDS the hand to move really doesn't buy you much of an explanation. It has taken us a while to realize, however, that such a statement is little better than saying "God did it" when there is an earthquake. Whether one believes in a vengeful, absent-minded, or nonexistent deity, there is going to be a series of explanations at different levels. With an earthquake, there is fault-line slippage, which in turn is due to stored energy; that is due in turn to continental drift; due to the molten core of the earth circulating in "cells" not unlike the trade winds in the atmosphere; due in turn to radioactive heating and tidal forces from the moon; due to... God? Those are all "explanations" but at different levels.
      In the case of this mind drama, we have a series of semi-autonomous actors in the brain, nerves, and muscles. Each has some capacity for spontaneous action, versus some capacity for being told what to do by some upstream agent. Take a muscle fiber: It can spontaneously fire (and thereby twitch) but such pacemaker activity is rare (at least in skeletal muscle; the smooth muscle in the gut walls uses autonomous pacemakers all the time). Mostly a skeletal muscle fiber simply stays silent until commanded to twitch by its motorneuron, which resides back in the spinal cord; when the motorneuron "fires," an impulse speeds down to the muscle and tells the muscle to twitch. Muscle cramps result from the nerve-muscle junction developing an ectopic pacemaker.
      Motorneurons can be pacemakers, too (eyelid "tics" and similar fasciculations; the thumb flexor muscle often develops them as you get older), seemingly doing their own thing despite your attempts to tell them to shut up. Since only one motorneuron does this at a time (and the muscle has hundreds of them), usually the movements produced are small and ineffective.
      Each motorneuron responds to its plus and minus inputs, of which it has thousands; a few are feedback from sensors out in the muscle that detect its length and tension, but more than 99 percent are from "interneurons" in the spinal cord and brain. Interneurons are all the neurons except for sensors and motorneurons themselves; an interneuron has no direct connections to the outside world, and is rather like Hartline's general receiving phoned reports of a battle that he doesn't personally witness and issuing instructions to intermediaries. There are more than a million million neurons (or maybe ten times that!), nearly all of them "interneurons" in the case of primates. There are hundreds of muscles, each of which has hundreds of motorneurons, but that's only tens of thousands of motorneurons maximum. There are maybe a billion sensory neurons, most in the eye. So interneurons outnumber all other neurons by at least a thousandfold.
      Another simple numbers game will show just how different we are from an army command hierarchy: Instead of a thousand soldiers for every general, we have a thousand "generals" for every "soldier." With rare exceptions such as the Mauthner cell of a fish, no one interneuron can command an action; they can only act by forming committees, and getting up enough "momentum" somehow.
      Perhaps only movement programs with the simplest spatiotemporal patterns can get by with the single command neuron approach to orchestration. The appropriate trigger for most movement programs is likely a keylike combination. It is seldom bottlenecked into a single all-important cell pathway; indeed, it will probably be just as important which cells are inactive as which are active. And unlike a spatial pattern such as key notches, it will be a spatiotemporal pattern like that fireworks finale mentioned in Chapter 8, the order in which various neurons are activated, as well as which neurons are activated, being the adequate stimulus.
      So finding the origin of a command for a movement isn't easy, except in trivial cases (cramps, tics, tail flips, fire-alarmlike situations). You're surely going to be talking about a committee of interneurons, not a single entity, when identifying the source of a command to move your hand. So where does this committee live? And how do its members interact to achieve a consensus and implement it?

SERIAL ORDER, per se, is a capability that is widely seen in nervous systems throughout the animal kingdom. Jumping spiders, for example, may spy prey while standing on one limb of a tree, but need to move over to another limb in order to be directly over the target. The spider will return to the main trunk, drop down along the trunk, go out another limb, select the correct secondary and tertiary branches, and so arrive at a suitable launching platform. But as long as the prey remains visible, that sequence doesn't require a serial-order buffer to hold the moves, as the spider has the time to make a series of judgments, just as I use feedback corrections in picking up a coffee cup and moving it to my lips. Goal plus feedback suffices for most things.
      Neuronal networks that generate motor patterns without feedback are the real neural networks that have been analyzed in greatest detail. Walking, for example, seems to have such a stored motor program. As does swimming in a leech, or flight in a bumblebee. There are hidden actions as well, such as digestion, that operate with similar neural circuits. Computer simulations have even been done using measured values for individual neuron properties and individual interconnection strengths in the case of the lobster stomatogastric ganglion. It is a small group of thirty identified neurons that controls one of the original assembly lines (whose purpose, however, is disassembly): Sixteen of the thirty cells produce the gastric mill rhythm (lobsters have teeth in their stomachs) for masticating the food, and the other fourteen produce the pyloric rhythm that squeezes the stomach contents into the foregut.
      The simulations show how the complicated three-phase pyloric rhythm is sequenced, and demonstrate how committee properties emerge. These studies would serve admirably as a guide for how sequential readout from a large neural array (such as our premotor cortex) could be timed.

What you've got to realize is that every cell in the nervous system is not just sitting there waiting to be told what to do. It's doing it the whole darn time. If there's input to the nervous system, fine. It will react to it. But the nervous system is primarily a device for generating action spontaneously. It's an ongoing affair. The biggest mistake that people make is in thinking of it as an input-output device.
the neurobiologist Graham Hoyle (1913-1985)

POSTURAL MOVEMENTS, and even simple kinds of locomotion, are generated at the level of the spinal cord (a brainless cat can still walk on a treadmill). To the extent that hand movements were once part of walking on all fours, they're potentially spinal in origin.
      But opening and closing your hand voluntarily -- most people would say that the brain commands that, stimulating the motorneurons into a pattern of action. So hand-movement "programs" can be spinal cord alone, or brain commanding spinal cord. To come back to the earlier dualism: Every level of "agents" has the capacity for spontaneous activity and also has the capacity for being commanded (or at least persuaded) by other agents. Marvin Minsky's Society of Mind has a discussion of such collections of agents (though in the curious artificial-intelligence tradition of making up whatever seems useful rather than using the committees known from neurology; the AI folk seem to think that research is a game where it's cheating to look at the cards).
      And interneurons are seldom silent for long: Even when we are asleep and motionless, doing minimal sensory processing, most of those thousand billion interneurons are busy talking to one another. Like individually impotent generals trying to build a junta, they are politicking like crazy, all the time. Each one talks directly to about a thousand other interneurons, though it is more influential with some recipients of its message than with others. About half of the interneurons send a predominantly inhibitory message, opposing excitatory recommendations that a target interneuron might receive from its other correspondents.

IF RECOGNITION depends on a committee of neurons, and triggering a movement program also depends on a spatiotemporal pattern of activity in many neurons, then surely our darwinian variation-and-selection games are going to depend on committees too, just as biological darwinism depends on populations changing their characteristics.
      But how to think about such a nebulous matter! Reductionism is far easier, if you can keep to the agenda. Fortunately there are some analogies that help. The analogies are not to generals and soldiers or executives and departments, but to how we form committees in everyday life.
      Consider the usual jury: It is founded on the principle of random selection from among one's peers. It may be shaped up by the challenges allowed lawyers, but it starts random. Some grand juries are not even random from the beginning, but selected by the judge to be citizens with some educational or occupational background, able to comprehend the complexities of a certain kind of crime. Other committees are really panels of experts, people who know all the common mistakes and how to avoid them. Advisory committees are often combinations of experts and community representatives of the grand-juror types. But in all cases, you can form a committee by starting randomly with all possible individuals and then narrowing things down by a series of selections, each based on some criterion.
      Committees also have an identity of their own that, in most cases (juries are the usual exception), survives changes in committee membership. Some are even endowed with a legal status: The corporation still remains responsible for actions taken years ago by some now-retired group of directors and employees. Most important, a committee can usually act without all of its members present, without all of them in agreement. Furthermore, an individual may be a member of many committees, most of which have little to do with one another. Such committee analogies provide us with some ways of thinking about cerebral committees of neurons.
      Similarly, actions often require a series of permissions from various committees, often in a particular order. For Pennzoil to collect $3 billion in damages from Texaco took a whole series of committee decisions prior to the actual transfer of funds by bank wire: A board of directors approved a lawsuit, teams of lawyers argued, a court decided to award $10.53 billion. More lawyers argued, an appeals court decided, then a $3 billion compromise was worked out by a negotiating committee, a court approved, and in 1988 a rather simple action was finally taken. Though involving a record amount of money, it was little different from other decision-making processes with which we are familiar.
      Brain committees too are going to have overlapping but nonessential memberships, have to act in sequence or synchrony, reorganize themselves, try again, finally make something happen. If the same action has to be repeated many times, the committee actions may become streamlined, as when routine expenditures for office supplies can come to be authorized at lower levels in a corporation, with only retrospective oversight by other committees.
      Our problem is how such cerebral committees get organized and reorganized, how their sequences become determined and occasionally streamlined, and what constitutes "higher authority."
     

We are accustomed to think of thinking as a linear experience, as when we say "train" of thought. But subconscious thinking may be much more complicated. Just as one has simultaneous visual impressions on the retina, might there not be simultaneous, parallel, independently organized, abstract impressions in the brain itself? Something goes on in our heads in processes which are not simply strung out on one line. In the future, there might be a theory of a memory search, not by one sensor going around, but perhaps more like several searchers looking for someone lost in a forest.
the Polish-American mathematician Stanislaw M. Ulam, 1976

I must stress how little is yet known about the programs of the brain. The code has not yet been properly broken; but we begin to see the units of it.... We can see that the code is somehow a matter of sequences of neural activities, providing expectancies of what to do next.
the English neurobiologist J. Z. Young, 1987

THERE'S A PARADE OF BOATS going out of Eel Pond this morning, quite a collection of big sailboats and small rowboats, all lined up to pass through the channel one by one. At first I wondered if it was a regatta, the orderly kind of parade that boat clubs organize. But then I realized that, were that true, the boats would be in some sort of order; in Seattle, the opening day of the yachting season sees this big parade out of Portage Bay into Lake Washington, but the big sailboats get to lead, and then come the medium-sized cabin cruisers, and then the smaller ones.
      Only the order of big-medium-small trips my "regatta" schema. And that's very much the way that a bat detects his favorite food, the mosquito -- he sends out a brief chirp and then listens for matching chirps echoing amid all the other noises of the night. These chirps, upon closer inspection via a sonogram, are either rising-frequency sweeps or their opposite, a falling-frequency burst. If he sends out a chirp that starts with high notes and ends up at low frequencies, then he'll want to listen for the faint returning signals that also sweep from high to low. That's his "mosquito" schema. So how does the bat wire up his auditory system to arrive at a mosquito detector -- something that responds only to that high-to-low sweep?
      We got to talking about this in the computational neuroscience course yesterday. And that's essentially what the field is all about: how to do smart tasks with dumb elements. Watching the boats today, I was prompted to think up a postcard version of what brains do using nerve impulses. And then I realized that it would have been a lovely low-technology system for spies to use back in World War II.
      Suppose that you wanted to detect fleet movements in and out of a harbor, but without using smart spies -- only stupid spies, who don't know what the spymaster is looking for. All they do is to mail postcards to various addresses in some other city (for convenience, let us say the postcards are all sent to various residents of one small town who visit the post office twice a day to clean out their mailboxes). Some of the postcards are sent the cheapest way, which takes two days. Others are sent regular mail, which reliably takes one day, and some are sent express so that they arrive in a half-day.
      Each of the observers has a somewhat different mailing list, and each has a specialty: Observer A only mails postcards when he has seen big boats passing the harbor entrance, but sends them to each of ten different P. O. boxes. Observer B only watches for medium-sized boats, and upon seeing them he mails postcards to ten P. O. boxes, some of which are the same ones that Observer A mailed to. And Observer C likes small boats, and mails to another ten P. O. boxes, again with some overlap in recipients. Twice each day, they mail some postcards, all to this one set of P.O. boxes in the small town; there is nothing special about the postcards, no message contained in either picture or handwriting. But each name on the mailing list of each observer is annotated with whether the postcard is to be sent slow, regular, or express.
      And this allows a farmer visiting the post office to detect whether the harbor is sending out convoys, or having the fleet come home for leave, and distinguish this from a mere parade or just a lot of random traffic patterns. That's because each of the three major possibilities has a characteristic "signature": a parade passing the harbor entrance will be big-medium-small, strung out over a day's time. A convoy leaving the harbor will start with some big escort vessels, then have a daylong string of medium-sized freighters with a few small escort vessels mixed in, and finally the rear guard of big and small escort vessels. The fleet coming home for leave (as before Pearl Harbor in December 1941) will have some small escort vessels at first, then a long string of big cruisers, battleships, and aircraft carriers, followed by the rear guard of smaller escort vessels.
      They are all fairly easy patterns to detect, but the big-to-small parade pattern is the easiest one to explain. The farmer-spymaster merely has to stand on the far wall of the post office from the mailboxes and see whose mailbox has an unusual amount of mail: Two days after the parade started, P. O. Box 007 will be crammed because it was on Observer A's slow-mail list, on Observer B's regular-mail list, and on Observer C's express-mail list. Half a day before, there was very little -- and a half-day later, it will be back to very little mail in the mailbox. The big pileup of arriving postcards during the one half-day is what tells the story of "parade" (if you have a reliable postal system, which is why no one will ever stamp this proposal Top Secret!). Detecting "fleet returning" could be valuable information if you want to send over some bombers to sink them while they're bottled up inside the harbor; agent 007 could just be a stuffed post-office box.
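      The arithmetic of the scheme is easy to check. Here is a toy sketch of the parade detector in Python; the delivery delays, sighting times, and the threshold for "an unusual amount of mail" are all invented numbers, chosen only so that the big-medium-small pattern piles up in Box 007 two days (four half-days) after the parade begins.

```python
from collections import Counter

# Delivery delays in half-days: Observer A reports big boats by slow
# mail, B reports medium boats by regular mail, C reports small boats
# by express.  (Illustrative values, not anything from the text.)
DELAYS_TO_BOX_007 = {"big": 4, "medium": 2, "small": 1}

def mailbox_arrivals(sightings):
    """sightings: list of (time_in_half_days, boat_size).
    Returns a Counter of postcard-arrival times at Box 007."""
    arrivals = Counter()
    for t, size in sightings:
        arrivals[t + DELAYS_TO_BOX_007[size]] += 1
    return arrivals

# A parade: big boats first, then medium, then small -- plus one
# stray small boat as background traffic.
parade = [(0, "big"), (0, "big"), (1, "small"),
          (2, "medium"), (2, "medium"), (3, "small")]

arrivals = mailbox_arrivals(parade)
THRESHOLD = 4          # "an unusual amount of mail" for one half-day
for t in sorted(arrivals):
    flag = "  <-- parade detected!" if arrivals[t] >= THRESHOLD else ""
    print(f"half-day {t}: {arrivals[t]} postcards{flag}")
```

      The staggered delays turn a pattern spread over a day and a half into a single half-day pileup -- which is all that coincidence detection is.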
      To detect the high-to-low-frequency sweep of the mosquito echo, the bat can use a similar trick. His cochlea has specialists in each different frequency, and they can have different conduction velocities to the brain. Were the high-frequency specialists the slowpokes, and the low-frequency types the fast-conducting ones, there would be a pileup of arrivals at some cells back in the brain shortly after the high-to-low sweep was completed. If those cells had a high threshold, they would become active only when the "mosquito" pattern had recently occurred. They would, in effect, be mosquito detectors. Other cells in the brain could, I suppose, be tuned up to be detectors of the opening "dit-dit-dit-dah" notes (G-G-G-Eflat) of Beethoven's Fifth -- though the bat would be more likely to specialize in the sonar echoes of other common objects instead.
[Figure: an ascending and a descending scale]
      Now you can detect much fancier patterns with several stages of analysis. Suppose that the post-office spy isn't the true spymaster; all he knows is that when he empties a lot of postcards out of P. O. Box 007, he sends individually meaningless postcards to a variety of people in another town. Again some recipients get the express mail treatment, others the slow-boat treatment. With this, a recipient in the next city could detect fancier time patterns, such as a whole line of music: You could detect the opening "dit-dit-dit-dah" notes with just one stage of analysis, and the whole first line of Beethoven's Fifth with several more. All you need is to adjust the mailing lists of the various stupid spies along the way.
      Now all this is without inhibition -- just excitation. If you allow some observers to send red postcards and others to send green instead, we can give the owner of P.O. Box 007 a new instruction: red postcards cancel green ones, just throw them both away. If there are more than a dozen green ones left over, send your own postcards to everyone on your mailing list. Otherwise, just throw them all away. This, together with a variety of delivery delays, is a very economical way to detect patterns such as small-large-medium-medium-medium-big-small-small (how a convoy leaving port might stream past).
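      In the sketch's terms, the red-postcard rule is just a subtraction before the threshold test. A hedged guess at how one might write it down (the counts and the dozen-card threshold come straight from the story; nothing else does):

```python
def box_007_decision(green_cards, red_cards, threshold=12):
    """Red postcards cancel green ones; act only if enough green remain.
    The neural analog: net excitation minus inhibition must exceed
    threshold before the cell sends its own messages onward."""
    net = green_cards - red_cards
    if net > threshold:
        return "forward postcards to everyone on the mailing list"
    return "throw them all away"

print(box_007_decision(green_cards=20, red_cards=5))   # fires
print(box_007_decision(green_cards=20, red_cards=15))  # stays silent
```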
      Now real nervous systems would do two additional things. One is to adjust the strengths of the messages sent, equivalent to sending lots of postcards to some recipients and only one to others. The other is to automatically adjust those "synaptic strengths" so as to spontaneously create a Beethoven's Fifth detector without a designer of mailing lists: They (as a group) will self-organize upon hearing the same thing repeatedly, so that they will become more sensitive to that pattern, able to detect it even when it is almost obscured by noise (equivalent to lots of nonrelated ships passing through the harbor entrance at the same time as the fleet-convoy-parade-whatever). How do they automatically adjust their interconnection strengths (mailing lists) to accomplish that? Tune in next year; if anyone discovers it in the meantime, you'll probably read about it in the newspapers, because that's a question whose answer will likely win someone a Nobel Prize.
      This is presumably how we tune ourselves up as infants to detect the phonemes in the speech of the adults around us: Each phoneme has various frequencies present at the same time (analogous to a chord in music), and the tonal combination varies with time (as would the successive chords in a musical phrase). Detecting /a/ and /k/ (and the other phonemes of English) is exactly the same kind of problem as detecting a short musical phrase, but compressed into a tenth of a second. Once we have gotten used to the three dozen or so basic speech sounds in English, we'll tend to categorize sounds into those phoneme pigeonholes. Any sound phrase that comes along that is some admixture will be labeled "strange" or just "nonspeech" unless it is right on the borderline between two phonemes, in which case it will be heard as either one or the other of them ("categorical perception" is what this is called in the speech-and-hearing literature).
      Human language is characterized by the ability to detect (and produce) such phonemes, to sequence a series of them (usually no more than a half-dozen) into recognizable "words." The order of the phonemes is important; some sequences yield one word, the reverse sequence another word (but most permutations of those phonemes may be nonsense). We assign certain simple meanings to such words; the average educated person reading this book probably knows 100,000 of them, and can look up many more in a dictionary. Translating phoneme sequences into recognizable words, in itself, is not uniquely human: Though your dog doesn't usually produce meaningful sequences of sounds, he can probably learn to associate a simple meaning to quite a few of the phoneme sequences that he hears, including stock phrases such as "Come here."
      In human language, there is a "duality of patterning": In this second level of patterning beyond phoneme sequencing, we have word sequencing for an additional meaning. And often these word sequences are unique, never before encountered (like this sentence). We judge groups of words for a new meaning, depending on their ordering (we have conventions called "grammar," sometimes involving word order, called "syntax"), and decode a much more complicated message (such as this sentence hopefully conveys). The analysis of serial-sequential events is the basis of human language. We will need to understand the neural machinery that goes beyond such simple circuits as the mosquito detector and the Beethoven's Fifth detector, and how it modifies itself.

COMMITTEES CAN ORGANIZE THEMSELVES, especially if they are given some "feedback" about how well they are doing: Rather in the same way that a language teacher corrects a student's pronunciation, one can correct a committee and it will get better next time. You can shape up a committee by rote learning.
      There was a spectacular demonstration of this at the computational neuroscience course, and it also showed the power of neurallike committees to produce speech. Not to decide what to say, or to get the grammar straight, but simply how to pronounce a written text without sounding like a scratchy record inside a child's toy. You can't just string together phonemes without paying attention to what comes next, because it may alter things. For example, in pronouncing the digits 6-7-5, the vocalizations for 7 usually start before the 6 is completed; they may sound distinctly separated to you, but the sounds actually overlap when spoken by a human rather than one of those automated-announcement machines that describes arriving airline Flight 675 as "flight-(pause)-six-(pause)-seven-(pause)-five." The rules for how to fluently pronounce English are many, and there are so many exceptions that one imagines that the brain needs to carry around a list of exceptions to the rules. As people have begun programming computers to "talk," many logical schemes have been attempted by programmers, but they always have to first look through a table of many hundreds of exceptions.
      The brain doesn't have to do things according to the linguists' logic. You can show that by training a committee of neuronlike "cells" to pronounce English text. You don't have to give it rules in the way that many linguists originally insisted was necessary. You don't have to give it a list of exceptions. All you have to do is patiently monitor its pronunciation for a few thousand times and tell it, "You're getting closer," or "That's worse," rather as you might prompt the blindfolded person in the game of Blind Man's Bluff by saying "hotter" or "colder" if his random turn took him closer or farther away from the target.
      Terry Sejnowski (who was one of Steve Kuffler's last collaborators) and his co-worker, Charles Rosenberg, programmed their computer to mimic a network of a few hundred neuronlike cells. They let most of them look at the sentence to be pronounced: If there are 26 letters plus a space, comma, and period possible, then 29 cells will easily handle the job of representing the alphabet (actually, five would do the job committee-style, but let's just assume specialist cells for convenience). Rather than looking at only one letter at a time, the device looked at seven letters at a time, three letters either side of the current one, with the string of letters slowly shifted along. So a letter was always seen in the context of what preceded it and what followed it.
      Each of those 203 input cells talked to all 80 "interneurons," but not with the same strength: An input cell might inhibit some, excite others, and with a strength that was initially randomly set. Each of the 80 interneurons talked to all 26 of the output "motorneurons," again with random excitatory or inhibitory strengths for their "synapses." The "motorneurons" were specialized according to the sound they produced: Seventeen of them for 17 phonemes (elementary speech sounds, corresponding to a certain position of lips, tongue, and such), 4 for punctuation (silent, elide, pause, and full stop), and another 5 for stresses and syllable boundaries. Those are the kinds of instructions that one has to give a speech synthesizer.
[Figure: a neurallike network for phonemes]
      So it was a simple three-layered arrangement, using cells without any dynamic features such as adaptation or rebounds (a simplification to avoid settling times) and without any feedback connections (ditto) -- a "neurallike network" with specialized inputs and specialized outputs, but a totally uncommitted set of interneurons whose input strengths and output strengths could be varied to achieve the goal of making a sentence come out sounding right.
      But who wants to spend a lifetime sitting around twiddling the strengths of 18,629 synaptic connections? Enter the back-propagation algorithm, an invention of another group of neural-like network researchers. Initially, when the network is fed a sentence like this one, the speech synthesizer's loudspeaker sounds like complete garbage. That's what you'd expect from all those randomized synapses. But after each error, the correct pronunciation was set into the 26 "motorneurons," and each interneuron-to-motorneuron synapse's strength was altered in the direction of trying to make the error a little less. Or if perchance that motorneuron had done the right thing, its input synapses were strengthened: If excitatory, they were made even more excitatory; if inhibitory, even more inhibitory. Not only that, but each of the 80 interneurons was examined and its input synapses from the letter-detector cells were twiddled once, all in the direction of minimizing the final error.
      Next time through the sentence it sounds a little better, getting the pauses and stresses a bit better; after a few more passes, it begins to sound like a baby babbling and getting some vowels correct (maybe /a/ and /e/), but probably saying "u" to both /o/ and /u/ -- some such collection of confusions. Then it begins sounding like a Mark I robot, sort of flat and occasionally garbled. After a few thousand words of practice with rote corrections made by back-propagation, it is right about 90 percent of the time -- all without being given any linguistic rules, or table of exceptions. Tested with an unfamiliar text with a similar vocabulary, it does almost as well.
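      For readers who want to see the skeleton of such a network, here is a bare-bones Python sketch (using the numpy library) of a three-layer, feed-forward arrangement trained by back-propagation. It keeps the 203-80-26 layer sizes from the text, but the training data are random stand-ins; it is a cartoon of the method, not Sejnowski and Rosenberg's NETtalk program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 7 letter positions x 29 symbols = 203 inputs,
# 80 hidden "interneurons", 26 output "motorneurons".
N_IN, N_HID, N_OUT = 7 * 29, 80, 26

# Start with randomized "synapses", some excitatory, some inhibitory.
W1 = rng.normal(0, 0.1, (N_IN, N_HID))
W2 = rng.normal(0, 0.1, (N_HID, N_OUT))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)
    return hidden, output

def train_step(x, target, rate=0.5):
    """One round of back-propagation: nudge every synapse in the
    direction that makes the output error a little smaller."""
    global W1, W2
    hidden, output = forward(x)
    err_out = (target - output) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 += rate * np.outer(hidden, err_out)
    W1 += rate * np.outer(x, err_hid)
    return np.abs(target - output).mean()

# Stand-in training data: random seven-letter windows paired with
# arbitrary phoneme/stress targets (real NETtalk used transcribed text).
data = [(rng.integers(0, 2, N_IN).astype(float),
         rng.integers(0, 2, N_OUT).astype(float)) for _ in range(20)]

for epoch in range(200):
    errors = [train_step(x, t) for x, t in data]
print(f"mean output error after training: {np.mean(errors):.3f}")
```

      Nothing in the sketch knows any rules of pronunciation; all of the "knowledge" ends up smeared across the 18,000-odd synaptic strengths.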
      So how did the network achieve correct pronunciation? Did one of the 80 interneurons become an /e/ specialist? Rarely. If one peers inside the workings of the interneuron layer in the same way that we record from single cerebral neurons with a microelectrode, one finds that a cell will respond to a lot of different letters: it's on the /e/ committee as well as the /i/ committee, but another cell will be a member of the /k/, /i/, /o/, and /u/ committees, and so forth. One gets the correct pronunciation of /e/ because of a committee triggering that phoneme specialist, but each committee member also helps trigger some other phoneme-specialist motorneuron too.
      Now in the real world, there probably aren't any pure specialists in the input layer, or in the output layer -- they're used here for simplicity, just to show how committees are formed in the one intermediate layer's input and output synaptic strengths. But how robust is this committee arrangement? Suppose that you disable a few randomly selected interneurons? Does the pronunciation retreat to babbling? Somewhat, but it recovers with only a little additional training, achieving 90 percent correct pronunciation during retraining much more quickly than during original training. This is very much like stroke damage in humans, and the way that language performance recovers.
      It's all a good example of how trial and error will get you to a goal if only some intermediate results can be fed back into the system to let it know how it's doing. All those people who thought that explicit rules were needed were surprised when they heard the babbling turn into quite reasonable English pronunciation. The network seems to have discovered the rules, and quite without establishing specialist cells.

What is the nature of categorization, generalization, and memory, and how does their interaction mediate the continually changing relationships between experience and novelty?
the American immunologist Gerald M. Edelman, 1987

There must be a trick to the train of thought, a recursive formula. A group of neurons starts working automatically, sometimes without external impulse. It is a kind of iterative process with a growing pattern. It wanders around the brain, and the way it happens must depend on the memory of similar patterns.

the Polish mathematician Stanislaw M. Ulam, 1976

MANY OTHER TASKS have been given to similarly naive neurallike networks; they can discover some rules of grammar, for example, another thing that was supposed to have been built in by the genes. While this doesn't prove that human language cortex does things the same way, it shows that networks can organize themselves, discover by trial and error the way to detect a schema, pronounce a syllable, or tap a finger in a complicated rhythm. The genes need only carry enough information to give them a head start on pure randomness; the individual's successive interactions with the environment will shape up the fancier kinds of organization automatically.
      Such discoveries have caused much excitement among the neurobiologists, developmental biologists, and the cognitive cognoscenti (even the artificial intelligentsia). And it has stirred up a great many hopeful technologists, who are flocking to the banner of neurallike networks as an alternative way of shaping up smart machines. Why worry with logical rules and careful computer programming when a randomized network and a little rote instruction will make a pretty good machine? Behaviorism isn't dead after all: The tabula rasa has been reincarnated in silico!

BUT WHAT IF THERE IS NO TEACHER to correct your errors? Some committees form anyway: Just as snowflakes form fancy patterns while falling through moist air, just as a pan of oatmeal cooking unstirred on your stove will soon have its surface furrowed into a collection of squares and hexagons, so too will interneurons with initially random synapses tend to achieve patterns. They often correspond to simple kinds of order in the environment.
      This comes through clearly in a neurallike network studied by John Pearson, Leif Finkel, and Gerald Edelman that resembles the somatosensory cortical maps of the hand. Their layer of input cells consists simply of sensors scattered around the skin of a model hand. Like the thalamic projection to the real somatosensory cortex, they let each point on the "hand" connect randomly across the entire "cortex." Some "synapses" were randomly excitatory, some randomly inhibitory -- but the initially random strengths could be later changed by experience.
      There is no "output" layer (the cells talk to one another within the interneuron layer, unlike the previous examples, where cells only "feed-forward" to the next layer). And there is no "error correcting teacher" -- one just sits back and waits to see what happens to the interneurons and their relative synaptic strengths. One watches on a color computer display. It reminds me of the back side of a colorful tapestry, little threads running here and there, as if they were axons in a tangential section of brain; their colors denote synaptic strengths. Initially, thanks to the randomized initial conditions, the picture is so haphazard as to suggest that Jackson Pollock had finally designed a true tabula rasa.
      If there is no input at all, not much happens -- but give the "hand" some experience with the world. Go around touching each "finger," one at a time, maybe stroking it. The computer provides the neurallike network with a simple synaptic property rather like one of the glutamate receptors: Any one synapse's strength is affected by the simultaneous activity of neighboring synapses (it's not synergy but more like lingering hypersensitivity). So when you stroke a finger, activating one patch after another of adjacent skin, you activate a series of "cortical synapses." Some, thanks to the initial randomization, happen to be adjacent to one another on a particular "cortical neuron." So the stroking of that finger tends to strengthen the within-cell neighbors; next time they'll respond even better to the stroke, even if it is in the opposite direction down the finger surface.
      But these clusters of enhanced "synapses" are upon all sorts of cells in the cortical array of 1500. Where by chance there are cells with more enhanced clusters than otherwise, one will start to get "neuron" clusters responding well to that finger. And pretty soon, that Jackson Pollock randomness seems to look like a map of the hand: There will be a big cluster of "neurons" for the thumb, another for the forefinger, and so forth. As the neurallike network gains experience, one starts to see red patches of strongly connected cells emerging from an increasingly blue boundary area where cells are weakly interconnected. Groups emerge, the physiological boundaries becoming far sharper than the underlying smear of anatomical connections -- and all without instruction.
      If one has stimulated the top side of the "finger" at different times than when stimulating the bottom surface, then there will be ten big irregular patches in the 1500 "neuron" array: five top surfaces and five bottom surfaces. With nothing more than some lightly organized experience with the external world, this neurallike network has organized itself into a "cortical map" that looks remarkably like those recorded by Merzenich, Kaas, Nelson, and friends from monkey somatosensory cortex.
[Figure: the Matthew Effect on fingers]
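      A toy version of such self-organization can be sketched in a few dozen lines of Python. The rule below is closer to a Kohonen-style self-organizing map than to the particular synaptic rule Pearson, Finkel, and Edelman used, and the grid size, learning rate, and neighborhood radius are all arbitrary -- but it shows the same thing: contiguous finger territories emerging from initially random connections, with nothing more than stroking one "finger" at a time.

```python
import numpy as np

rng = np.random.default_rng(42)

GRID = 12                       # a 12 x 12 "cortical" array (toy scale)
N_FINGERS = 5

# Random initial "anatomical" connections: every cortical cell
# receives a little input from every finger.
W = rng.random((GRID, GRID, N_FINGERS))
W /= W.sum(axis=2, keepdims=True)

coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

def stroke(finger, rate=0.3, radius=2.0):
    """Touch one finger: the best-responding cell and its cortical
    neighbors strengthen their synapses from that finger, at the
    expense of the others (normalization supplies the competition)."""
    response = W[:, :, finger]
    winner = np.unravel_index(response.argmax(), response.shape)
    dist2 = ((coords - np.array(winner)) ** 2).sum(axis=-1)
    neighborhood = np.exp(-dist2 / (2 * radius ** 2))
    W[:, :, finger] += rate * neighborhood
    W /= W.sum(axis=2, keepdims=True)

# "Lightly organized experience": stroke one finger at a time.
for _ in range(400):
    stroke(rng.integers(N_FINGERS))

# Each cell now prefers a single finger, in contiguous patches.
preferred = W.argmax(axis=2)
for row in preferred:
    print("".join(str(f) for f in row))
```

      Printing each cell's preferred finger as a digit shows patchy territories, far sharper than the underlying smear of random connections -- and all without instruction.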
      Naturally, one wants to know if the California-Oregon game (see chapter 8) works with the neurallike network in the same way that it works in real monkeys. And so Finkel, Edelman, and Pearson tried overstimulating the middle finger. Sure enough, the second and fourth finger maps in the "cortex" shrank as the third finger map enlarged. Distant boundaries between fingers (like Oregon-Washington and Washington-Canada boundaries in the analogy) didn't also shift, however, in the manner of the real cortex, but the basic result was perfectly clear: There was a battle going on to see whether a "neuron" was going to prefer digit 2 or digit 3 (it received anatomical connections from all five, but most were turned down by experience), and the outcome was influenced by the amount of "exercise" a digit received.

Them that's got shall get
Them that's not shall lose.
So the Bible says
and it still is news.

the singer Billie Holiday's version of the Matthew Effect

      Some call this neural darwinism -- but not everything that involves random initial conditions and selective survival deserves to be called darwinism. The dance we call the Darwinian Two Step, randomness then selection continuing back and forth for many rounds to increasingly shape up nonrandom-looking results, is the key. These repeated injections of randomness lie at the heart of what some would consider as delimiting darwinism from simpler forms of self-organization such as clumping and zero-sum matthewism. This New Testament exemplar of zero-sum thinking (Matthew 25:29) is usually paraphrased as "the rich get richer while the poor get poorer"; it's one example of how random initial conditions can come to result in a pattern without any additional injections of randomness. Something even simpler than darwinism might account for some elementary forms of neural patterning too.
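      Richard Dawkins's well-known "weasel" demonstration (from the 1986 book quoted at the end of this chapter) makes the Darwinian Two Step concrete: a one-shot random guess at a 28-character phrase would essentially never succeed, but repeated rounds of mutation and selection home in on it in a modest number of rounds. Here is a minimal Python version; the brood size and mutation rate are arbitrary choices of mine.

```python
import random, string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):                   # the selection step: closeness to target
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)

step = 0
while parent != TARGET:
    # The randomness step: a brood of mutated copies...
    brood = ["".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                     for c in parent) for _ in range(100)]
    # ...then the selection step: keep the best, and repeat the dance.
    parent = max(brood + [parent], key=score)
    step += 1
print(f"reached the target in {step} rounds of the Darwinian Two Step")
```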

NEITHER THE SENSORY TEMPLATE favored by a model "interneuron," nor the cortical "map" seen in their interconnection strengths, is a schema. But you can begin to see that perceptual categorization could occur via shaping up initially random cortical interconnection strengths. That has some profound consequences for our concepts of reality: As Gerald Edelman notes, in consequence of the random element, "we must look at all acts of perception as acts of creativity." We create the world we see: We surely modify it with experience, but it's an invented world. How we emotionally react to something may, in turn, affect how we see it in the future. Literally.
     

[This is in a sense] a theory of the "natural selection" of behaviour-patterns. Just as in the species the truism that the dead cannot breed implies that there is a fundamental tendency for the successful to replace the unsuccessful, so in the nervous system does the truism that the unstable tends to destroy itself imply that there is a fundamental tendency for the stable to replace the unstable. Just as the gene pattern in its encounters with the environment tends towards ever better adaptation of the inherited form and function, so does a system of step- and part-functions tend toward ever better adaptation of learned behaviour.
the brain theorist W. Ross Ashby, 1952

There is a popular cliché... which says that you cannot get out of computers any more than you have put in..., that computers only do exactly what you tell them to, and that therefore computers are never creative. This cliché is true only in a crashingly trivial sense, the same sense in which Shakespeare never wrote anything except what his first schoolteacher taught him to write -- words.

the English sociobiologist Richard Dawkins, 1986