A book by
William H. Calvin
The Cerebral Symphony
Seashore Reflections on the
Structure of Consciousness

Copyright ©1989 by William H. Calvin.

You may download this for personal reading but may not redistribute or archive without permission (exception: teachers should feel free to print out a chapter and photocopy it for students).


Simulations of Reality:
Of Augmented Mammals and Conscious Robots

Perhaps I am no one.
True, I have a body
and I cannot escape from it.
I would like to fly out of my head,
but that is out of the question.
It is written on the tablet of destiny
that I am stuck here in this human form.
That being the case,
I would like to call attention to my problem.

Anne Sexton, The Poet of Ignorance

The stars are all shining brightly. And there is half of a moon about to set in the west. I see both Jupiter and Mars in the southern sky, Ursa Major in the northern sky, and Cassiopeia overhead. The ancient people gave those star configurations names as they played connect-the-dots; they imagined animals and everyday objects up there in the heavens. Now we know that constellations aren't real, merely chance configurations of nearby and distant stars which, from our viewpoint on a minor arm of the Milky Way galaxy, just happen to take on a familiar shape. We select among the chance configurations (Where have I heard that before? The schema strikes again). Constellations are human creations -- now, that would certainly surprise an ice age hunter.
      On one hand, the human brain is Procrustean, always trying to force something to fit its preconceptions. On the other hand, it's always looking for new ways to piece things together, new categories that can be created. We worry about whether something has a real identity, or is just a figment of our imaginations. Faces in the clouds.
      Or in the rippling waves underfoot. In the quiet obliquely-moonlit waters, I seem to see something disturb the surface, then disappear again. My imagination, or reality?
      I keep looking around -- and, a moment later, a furry round head appears. Then big round eyes. Finally (so I imagine in the moonlight) whiskers emerge from beneath the waves.
      I find myself holding my breath. It is indeed a harbor seal.
      He is fishing near the dock on the incoming tide. This time he cruises along the surface and looks around.
      And he sees me standing nearby. I see his eyes focus on me in the moonlight. Our eyes meet. Another creature in my universe asking, "Who am I?"
      I endeavor to look harmless (even if it is misleading advertising for my species). Not finding me any more interesting than the setting moon, the seal slips under the surface. And I am again alone with sky and water.
      Suppose we neurophysiologists can find some secondary uses for the specializations of other mammalian brains, such as the fancy hearing of the bat and whale? Perhaps a way to train them to use the sequencing abilities they possess? Echo-locating animals decipher fancy sound sequences, and whale song demonstrates that marine mammals can learn and modify sequence "traditions." Surely some animals' neural circuits aren't so hard-wired that we can't, with a little training from an early age, induce them to achieve limited Darwin Machine abilities. And so we might be able to augment the language of such species -- indeed, even their consciousness -- and so enable them to develop a distinctive culture. Perhaps they might someday take over their own evolution as we humans have ours. Though, of course, increased look-ahead would also greatly augment their limited abilities to worry and suffer -- and so they might not thank us.

THAT SEAL AND I -- we're two individuals, each covered by a skin that encapsulates a whole collection of physiological processes operating more or less independently, but incidentally for the good of the whole organism. That's because an individual organism lives and dies as a unit, and components that were too inconsiderate probably didn't leave many offspring. Blades of grass aren't individuals in the same sense, nor are coral colonies; they're more like the surface cells of my skin, where losing a few won't change my identity as a person.
      But it's been very unsatisfactory to define self that way: It seems to miss the really interesting things, such as my sense of being the narrator of my life's story, of being the focus of a lot of things going on in my subconscious that my "self" occasionally gets to choose between, as I decide what to do next -- sometimes routine, sometimes novel. Sometimes the safe thing, sometimes risky.
      I assume that the seal's brain feels something of that sense of self, as it looks out for the collective interests of the cells inside its skin. But, if the Darwin Machine notion is even approximately correct, I may have a lot more subconscious than the seal does, have a lot more alternatives being shaped up offline on all those planning tracks, have a lot more memories about sequences that have been a part of my history as an individual. And thus a lot more imagination about what might happen next.
      And because of chunking and those higher-order schemas that I shape up when I get bored with the Procrustean bed of existing words/schemas/concepts, my Darwin Machine is often sequencing things that have no immediate movement pattern in the offing -- I can sometimes think of concepts about which I cannot yet speak. As when I contemplate the universe out there, trying to imagine it during the Big Bang, trying to imagine it as the solar system formed, trying to imagine the crystallization and clays that got organic chemistry going -- which got protein enzymes to catalyze reactions, which shaped up the DNA-RNA-protein route, and cells, and colonies, and sex, and fish, and mammals.

EVEN UNINTELLIGENT ROBOTS have long captivated humans, since they're such a puzzle: No one can readily figure out where to place them in the plant-to-animal, animal-to-human spectrum. The ancient Greeks were fascinated with automata; even Homer played around with the idea of robots.
      It's all tied up with our own view of ourselves as mechanical beings. The doctrine that men are machines, or robots, had its first clear and forceful formulation in the title of a famous book by the Cartesian physician Julien Offray de La Mettrie, L'Homme-Machine ("Man a Machine"), published in 1747. He said things such as, "The human body is a machine that winds up its own springs" (well, they didn't understand much about metabolism back then). Less than a century later came Mary Wollstonecraft Shelley's Frankenstein. Karel Capek's first use of the word robot (the Czech word robota means servitude) in his play R.U.R. (Rossum's Universal Robots) added the word to the world's vocabulary in the early twentieth century. These all antedate the industrial robots of today, and our thinking robots of tomorrow.
      Robots are creations of cultural rather than biological evolution, and their evolution will differ from hominid evolution in many ways. This is largely because biological evolution is subject to a number of constraints not likely to be shared with robots. We cannot form hybrids between the smart octopus and the smart crow -- yet robots will be hybrids of all sorts of separately successful developmental paths, cobbled together. Another difference is that biology is always standing on the shoulders of the grandparents, not the accomplished individual: It practices "planned obsolescence," destroying the accomplished individual through ageing, rather than copying him or her and carrying on from that advanced position (we pass on shuffled copies of our grandparents' genes, not the genes that our own body expresses). Unless we have an identical twin, our unique combination dies with us; certainly our unique combination of genes, which is further shaped by our individual choices during a long lifetime, is shared with no one.
      But a particularly successful robot will be cloned at some point, accumulated experiences and all. A dozen copies will then develop separately, some more successful than others at taking their parent's experience with the world and elaborating on it to reach new heights of sophistication. No matter how much we attempt to pass on our experience to our children, they usually have to make their mistakes for themselves, pass through painful adolescence, discover how to deal with the world of fickle facades.
      If only they could stand on our shoulders, combine youthful vigor with our hard-earned wisdom (minus, of course, our creeping conservatism). Occasionally one sees a twenty-year-old who seems to know how to handle people with the ease of that exceptional experienced executive who can somehow keep everyone happy and productive. Such precocious social development might not be as rare in robots: Sequentially cloned robots might accumulate such experiences from each ancestor.

HOW TO BUILD A CONSCIOUS ROBOT can now be glimpsed; it falls out of scenario-spinning considerations, out of Darwin Machines, out of neurallike networks. You just shape up the neurallike networks so as:

1) to create massively serial "candelabra";
2) to load up "cars" (sensory schemas, movement verbs, and similar words) according to recent use and associations, and word frequency, but with a random overlay too;
3) to match each track to sequential memories and "grade" the fit according to some version of Subjective Expected Utility (graded both for the goodness of fit to the local rules of grammar and for the sequence's suitability to the present situation);
4) then copy the winner (with a synonym or mutation occasionally substituted) into many of the losers' tracks, and shape up repeatedly;
5) but partition the sequencer population into subpopulations so that the initial losers get some opportunities to evolve on their own and occasionally take over the lead ("capture consciousness").
And so we will get a working Darwin Machine not unlike the one inside our heads. It will be much more than the usual roomful of monkeys typing Shakespeare because of the Darwinian Two-Step and the utility scores shaping intermediate results (random doesn't mean the interjection of complete nonsense; it means unplanned variations on themes). Such rounds of variation and selection, as Richard Dawkins showed in The Blind Watchmaker, can quickly shape up a random string of words into increasingly good matches to a Shakespearian model sentence.
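Under some loose assumptions of my own (a fixed model sentence standing in for the utility grading, a tiny vocabulary, invented population sizes), the five-step recipe and Dawkins's cumulative-selection demonstration can be sketched together in a few dozen lines:

```python
import random

random.seed(42)

# Assumption: a toy "utility" that scores word-for-word agreement with a
# model sentence. A real Darwin Machine would grade tracks against grammar
# and the present situation, not against a fixed target.
MODEL = "methinks it is like a weasel".split()
VOCAB = sorted(set(MODEL) | {"the", "cat", "sat", "on", "mat", "sea"})

def utility(track):
    """Step 3: grade the fit -- here, positions agreeing with the model."""
    return sum(a == b for a, b in zip(track, MODEL))

def random_track():
    """Step 2: load up 'cars' with a random overlay."""
    return [random.choice(VOCAB) for _ in MODEL]

def mutate(track):
    """Step 4's occasional substitution of a synonym or mutation."""
    copy = list(track)
    copy[random.randrange(len(copy))] = random.choice(VOCAB)
    return copy

def darwin_machine(n_subpops=4, tracks_per_subpop=25, generations=400):
    # Step 1: many parallel tracks, partitioned (step 5) into subpopulations.
    subpops = [[random_track() for _ in range(tracks_per_subpop)]
               for _ in range(n_subpops)]
    for gen in range(generations):
        for pop in subpops:
            winner = max(pop, key=utility)
            # Step 4: copies of the winner (occasionally mutated)
            # overwrite the losers' tracks; the winner itself survives.
            for i in range(len(pop)):
                if pop[i] is not winner:
                    pop[i] = mutate(winner)
        if gen % 50 == 0:
            # Step 5: occasionally a champion "captures consciousness" by
            # migrating into a rival subpopulation.
            best = max((max(p, key=utility) for p in subpops), key=utility)
            subpops[random.randrange(n_subpops)][0] = list(best)
    return max((max(p, key=utility) for p in subpops), key=utility)

best = darwin_machine()
print(" ".join(best))
```

Swapping the fixed target for a score built from grammatical fit and situational suitability is, of course, where all the real difficulty of a working Darwin Machine lies.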
      But it won't be a very interesting Darwin Machine until it acquires some humanlike qualities, such as shifting attention (changing the weightings) and boredom. If the "cars" are selected too randomly, with too little weighting of the ones already in short-term memories and their associations in long-term memory, then it will produce too much nonsense. If not random enough, it will merely seem to shuffle the scenarios that it starts with. If the utility scores are weighted too much toward "drives," such as goals imposed from outside (in the manner of human drives toward power, acquisitiveness, "getting the job done," perfection), it might seem more like an inefficient program for a standard computer.
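The balance described above -- too random yields nonsense, not random enough merely reshuffles -- can be loosely pictured (an analogy of mine, not a mechanism from the text) as a temperature parameter on weighted sampling, with the weights standing in for recency and association:

```python
import math
import random

random.seed(1)

# Hypothetical priming weights for candidate "cars" (words): higher means
# more strongly primed by short-term memory and its associations.
weights = {"seal": 3.0, "moon": 2.5, "tide": 2.0, "flying": 0.5, "carpet": 0.3}

def choose_car(weights, temperature):
    """Sample one word; temperature sets how much the priming matters."""
    words = list(weights)
    logits = [weights[w] / temperature for w in words]
    m = max(logits)  # subtract the max for numerical stability
    probs = [math.exp(l - m) for l in logits]
    return random.choices(words, probs)[0]

# Too random (high temperature): priming is ignored, choices near-uniform.
print([choose_car(weights, 100.0) for _ in range(5)])
# Not random enough (low temperature): the most-primed word, every time.
print([choose_car(weights, 0.01) for _ in range(5)])
```

At high temperature the priming weights wash out toward uniform choice; at very low temperature the most strongly primed word wins every time, and nothing novel ever surfaces.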
It is probably no accident that the term "machinelike" has come to have two opposite connotations. One means completely unconcerned, unfeeling, and emotionless, devoid of any interest. The other means being implacably committed to some single cause. Thus each suggests not only inhumanity, but also some stupidity. Too much commitment leads to doing only one thing; too little concern produces aimless wandering.
the computer scientist Marvin Minsky, 1986

      When will we be tempted to call such a Darwin Machine "conscious"? Probably not until its memories approximate some of our real-world experience, including our vocabularies. But earlier nonlinguistic versions that just spin scenarios (maybe trying out collision scenarios for air-traffic control, or maybe controlling downtown traffic signals almost as well as a traffic cop) might come close. For me, the criteria would probably emphasize creativity in problem-solving; for the general public, the humanlike speech and whims and cleverness will probably influence the impression of consciousness. Of course, once Hollywood techniques shape up its appearance to something out of the cartoons, such a darwinian robot will probably achieve the status of "pet" rather quickly.

I REMEMBER THE IN-FLIGHT MOVIE on my last trans-Atlantic flight. It was about a robot named "Number 5" who escapes from the robot factory, sneaks into someone's house, and watches an old movie on TV. And starts mimicking it. He first prattles like a two-year-old, then acts like a child, gradually develops an adolescent personality, and declares himself "alive." Concomitant with this, he learns about death (having seen a TV movie of cars being crushed in a recycling yard) and so develops a phobia about being repaired, for fear he'll be junked. And so the rest of the movie is a great chase scene, as the increasingly smart robot eludes pursuit and learns to enjoy life.
      He is, thanks to Hollywood techniques, even more charming than a pet cat or dog; judging from the number of people who seem to think their pets "honorary humans" (and by extension, all cats and dogs), we are going to have trouble when the first robots (or augmented seals!) come out that really do mimic human speech and mannerisms.
      There is, of course, a real philosophical issue here: At what point would we declare a machine (or a genetically engineered animal) as having human rights, including protection from slavery and murder, freedom of speech, liability to taxes, ability to own and dispose of property, the right to vote, and all the rest? Suppose that the computer-based neurallike networks get fancy enough to actually create personable robots with consciousness and individuality? What about the ethical issues that would raise? Just what would be functionally unique about human brains then? Would we reconsider the inferior status of animals and machines? And once we discount the Hollywood-type facades, what would be our new basis? We have to be careful here, as the criteria will strongly interact with criteria for human life in such areas as brain death, abortion, and the severely demented.

CONTRARY TO WHAT OTHERS MIGHT THINK, I'd bet that we will achieve speech and consciousness in robots sooner than we'll solve some of the more machinelike tasks such as driving a car in Boston rush-hour traffic. I will also bet that we'll solve problems such as robot locomotion not by a mathematical analysis and careful engineering of robots, but rather by shaping up a robot brain via much the same trial and error that children go through -- the robot will first thrash around (as a fetus does in utero), then crawl, then stand, then walk, then run, and only later ride a bicycle successfully. Once we've trained such a robot (or it has trained itself by attempting to mimic what it observes in people), we will then clone the robot brain -- not necessarily understanding what goes on in that copied robot brain to produce locomotion any more than two parents understand how they've produced a child that can walk.
      This isn't to say that robots with our speech and mannerisms will be "human" -- they'll lack our primate heritage, for one thing, all those joys and fears and drives that determine much of our social life, mating habits, and ambitions. Even if the robots were to mimic the behaviors of the people that they see around them while "growing up," the robot brain will still lack all of our unexpressed behaviors, those instincts in us that come out only when the setting is right. We were shaped up by the ice ages, and when glaciers return, we'll become different people because of those ancient behavioral patterns emerging. The robots will lack that genetic library of useful-on-occasion behaviors.
      But we'll give them additional behaviors, ones that we have sometimes and wish we had more reliably. Altruism. Stewardship of the environment. Avoiding endangering others by recklessness. We'll build in protections against mob behaviors, book-burning, and making obscene phone calls at four in the morning.

THIS VIEW OF ROBOTS is, of course, closely shaped by my analysis of what makes humans unique among the animals. And there has been little agreement on that subject, especially as regards language and consciousness. Descartes in 1664 did provide us a target:

If there were machines that had the organs and the features of a monkey... we would have no means of recognizing that they were not of the same nature as these animals. However, if there were machines that resembled our bodies, and which imitated as many of our actions as might morally be possible, we would always have two certain means for recognizing that they were not real men: [they would lack language and consciousness].

      The fanciest machines of Descartes' acquaintance were pneumatic automata; it was two more centuries before electrical machines started to appear. Descartes' understandable lack of mechanical imagination did, however, create a polarity in thinking about thought that engendered three more centuries of dualism.
      The physiologist Emil DuBois-Reymond, who in 1848 fulfilled the century-long dream of physicists and physiologists by showing that nerves actually ran on electricity rather than some elusive "nervous principle," nonetheless by 1872 had proclaimed that there were absolute limits to our knowledge of nature: "Ignoramus, ignorabimus" -- that we not only didn't know the link between energy and matter, or between consciousness and movement, but that we could never know and would always be ignorant. (The phrase recalls the practice of English juries: if they didn't have sufficient information to decide guilt or innocence, they could always declare ignoramus; and if there was no hope of ever improving on ignorance of the facts -- perhaps the only witness had died -- they could use the extreme ignorabimus form of dismissal of the charges.) In less than four decades, DuBois-Reymond's pessimism regarding energy and matter had become untenable, what with Einstein's success in finding the simple proportionality E=mc². We have not yet found a similarly simple relationship between consciousness and movement, but shaping-up selections among stochastic sequences in a command buffer now offers a candidate mechanism for us to contemplate.

CONSCIOUSNESS CONTEMPLATES, both the present (such as this beach, the surf, the sky of stars) and the remote (such as the possibility of life elsewhere). That's why we have so much trouble imagining a machine with our kind of consciousness. We have no trouble imagining a machine with willpower, such as a self-propelled lawn mower with a runaway appetite for the neighbor's flower garden. We have no trouble imagining a cormorant making an economistlike rational choice, between sunning and fishing and flying off to another pond. But contemplating the universe, thinking about how consciousness itself could arise -- that seems special, quite unlikely to be achieved by any programming genius, no matter how elaborate the computer.
      Yet that is exactly what I am saying is possible, that we could indeed create another contemplative but nonbiological form of sentient life. Thanks to successive selection among neural sequencers, which can be mechanically mimicked by the Darwin Machines, we should be able to create non-biological machines that not only will and choose but also contemplate, that have most of what we call consciousness. They could regret the past and learn from their mistakes. They could evolve on their own, perhaps even without further design help from us, and pretty soon there might be intelligent robots with whom we could converse, compare perspectives on the universe.

The Earth is just too small and fragile a basket for the human race to keep all its eggs in.
the American writer Robert A. Heinlein

TO EXPORT OUR GENES to other heavenly bodies has already been done (though the Moon is fairly close by, and our gene representatives hurried home). We contemplate space stations as a next logical move for the human race, and even that is being done on a temporary-visit basis (though self-sufficiency is surely far off, and that is the more appropriate criterion).
      But intelligent contemplation per se might be exported to places inhospitable to life -- send the software without the wetware. If our consciousness is, following Shelley's insight, inherent in the ways in which the molecules of our brains are organized, rather than the molecules and electrical signals themselves, why not export the organization detached from flesh and blood?
      And sentimental liking for real humans aside, why not make that a major way in which we expand? We primates have to put up with a long food chain, from sunlight to steak, that is easily broken or contaminated. We breathe a fragile atmosphere, easily polluted by volcanoes and our throwaways -- plus big meteor impacts that throw lots of dust into the stratosphere on occasion. We are unable to live in this universe in general, only on one delicate green planet.
      And one of these days, a really big rock is going to hit the Earth -- and if humanity hasn't learned to launder the atmosphere by then, the Earth is going to be a pretty uninhabitable place for a while. Humanity stands a very good chance of eventually going the way of the dinosaurs -- if we haven't established ourselves elsewhere by then.
      Why not just initially export contemplative intelligence, our highest product, to live in space, getting its power from solar cells, reproducing itself using raw materials from asteroids? It could exist in the colder reaches of space where heat doesn't constantly threaten to disorganize things. There, it wouldn't compete directly with humans for niche space. While we're trying to make superhumans, why not go all the way and free intelligence from this fragile dependency on the green machines? So we can fly out of our heads, escape the prison of our human form? Silico sapiens, and all that?
      Perhaps. But there are some very good reasons for doing it slowly, lest we create monsters. If we are fearful and want to take out some Heinleinlike insurance against catastrophe by exporting contemplative robots as well as humans, we need to at least make sure that there are numerous fail-safe leashes via which we can recall our robot creations and replace them with improved models.
      First reason: We don't understand intelligence very well yet; whatever the attractions of Darwin Machines, we will want to remember Mary Midgley's analysis:

What we normally mean by "intelligence" is not just cleverness. It includes such things as imagination, sensibility, good sense, and sane aims: things far too complex to appear in tests or to be genetically isolated.... Certainly we need our nerves and brain to think with. But the power of thought to which they contribute is not something which can be sliced off and packaged separately. It is not an ingredient to be measured out into the stew, but an aspect of the whole personality.

      Sensibility and sane aims may be pretty hard to build into a robot, because they're derived from evolutionary sources quite different from the Darwin Machine we use to contemplate. Consider Jane Goodall's description of the chimpanzee, and think about how many of their traits one would also want to include in a robot, just in order to help insure sane aims:
Chimps... show a capacity for intentional communication that depends, in part, on their ability to understand the motives of the individual with whom they are communicating. Chimps are capable of empathy and altruistic behavior. They show emotions that are undoubtedly similar, if not identical, to human emotions -- joy, pleasure, contentment, anxiety, fear and rage. They even have a sense of humor.

      Those are all part of Midgley's stew. And so our "intelligent contemplation", in its broad sense rather than the narrow Darwin Machine sense, is going to be a hard stew to concoct. It isn't going to be easy to decide what is safe to let loose on the world -- and other worlds as well.
      Second, there is an important principle from evolution and ecology that says that the first species to fill a new niche has an enormous advantage -- because it's hard to displace once it occupies the new "territory" (being not just space but also ways of making a living, ways of interacting with other species). In the terminology of battlefield tactics, it "occupies the high ground." A Mark I robot colony might be so thoroughly ensconced that neither human settlers nor a new, improved Mark IV robot team could displace the Mark I without major warfare.
      Just as dictators are hard to displace once they come to rule a roost here on Earth, so a robot colony might acquire a strong central power that does things only one way -- but does them so well that it inhibits any variations within, and repels any improved versions from without. A Hitler or Stalin eventually goes the way of all flesh, but I think that we're going to want to build protections stronger than mere planned obsolescence into our intelligent robots before we let them loose. After all, they might learn to circumvent the planned obsolescence, just as we've doubled our life span via improved sanitation, nutrition, and science.
Will we become the "contented cows" or the "household pets" of the new computer kingdom of life? Or will Homo sapiens be exterminated as Homo sapiens has apparently exterminated all the other species of Homo?
the American theologian Ralph Wendell Burhoe, 1971

"DOWNLOADING" A HUMAN BRAIN into a computer work-alike has captured the imagination of those preoccupied with immortality: A person could live on, reconstituted in silico, thereby totally circumventing planned obsolescence.
      I think that it is significant that this reconstitution proposal comes from the AI community (and science-fiction authors) rather than from the neuroscientists. We don't have the slightest idea of how to "read out" (even destructively, as by slicing up a brain) the complete wiring diagram, connection strengths, nonlinear characteristics of each and every cell -- or how to mimic the parahormonal influences between near neighbors that don't rely on proper synapses, or the glial-cell influences on excitability, etc. Nor how to test subassemblies, tune them for stability, and prevent the whole thing from going into wild oscillations or otherwise "locking up." Physical scientists don't know either; they just assume, with Laplace, that if it is a deterministic system, it can be mimicked. But we have seen, with the advent of chaos studies ("sensitive dependence on initial conditions"), how such expectations about the atmospheric dynamics we call weather have been wrong; little "chance" alterations can make big mode-switching differences sometimes, and who would want to be reconstituted only to be warehoused with the criminally insane or those in persistent vegetative states?
      Now training a Darwin Machine is quite another matter. Were a Darwin Machine used as a personal auxiliary brain (as I described in The River that Flows Uphill on Day 13) that did some "pre-thinking" for you about the facts you'd stored there, it would gradually acquire some of the judgment ability of the person who trained it. You might even be able to let it run the shop for a week while you went on vacation.
      After its human trainer died, the auxiliary might live on, a repository of many of the facts, and ways of thinking about them, that were in the departed brain. It could continue thinking about them, armed with new facts from other sources. The auxiliary might more readily acquire humanlike ways of looking at things, including ethics (or sociopathic behavior, if trained by a sociopath), than conventional robots. And it would be nice to have Einstein's auxiliary around to ask questions of -- the next best to the real thing. We might be able to clone it (assuming that it will be easier to go from silicon to silicon than from organic molecules to silicon). Some versions will likely be fixed to learn nothing new after the trainer's death (so as to continue to approximate Einstein's 1955 working habits and knowledge base) and others allowed to keep up with developments. If an auxiliary were good enough at freewheeling without human guidance, it might discover new research strategies that are beyond the abilities of human brains.

WE UNDERSTAND NEITHER OURSELVES nor evolutionary principles well enough in the present century to safely spin off colonies of intelligent robots, human-trained auxiliaries included. Yet we are in a race: Between overpopulation and overpollution, the Earth may soon become a failing enterprise, with the barriers to innovation that are often constructed by bureaucracies attempting to stretch strained resources.
      And it isn't merely potential failures of ecosystems and economics: Just consider the history of civilizations and how often they decline from their vigorous peak into mere ornamentation or into Dark Ages. Before someone says, "That's a worry for the next century, not ours," consider how quickly we have retreated from space exploration despite a growing economy that has doubled the number of jobs. Consider how quickly we have retreated from public responsibility in the care of the mentally ill and the homeless, in the provision of quality public education. Consider the resurgence of don't-bother-me-with-the-facts fundamentalist religions, not just in the Islamic world but in highly technological societies, and their propensity for arrogantly telling other people what to do (not merely burning books but even commanding their followers to kill the offending author). Consider the people who happily utilize the benefits of modern medicine but who don't want any biomedical research done (some don't seem to mind eating animals and feeding them to their pets; they just don't want anyone to seek knowledge via an anesthetized animal). How many more mindless retreats are we about to witness?
      So many recent examples of self-imposed tunnel vision make one want to take out some insurance against failures of the spirit. Such could leave us with insufficient energy for responding to an ecological crisis by then moving some of our biological or intellectual eggs elsewhere. The time to buy insurance is before things get tight -- and hope that it turns out to be wasted money, just as I hope my homeowner's insurance premiums are wasted.
      I can see that it's time for the Grand March from Aida again. Or maybe explaining consciousness.

MY MINIMALIST MODEL FOR MIND suggests that consciousness is primarily a Darwin Machine, using utility estimates to evaluate projected sequences of words/schemas/movements that are formed up off-line in a massively serial neural device. The best candidate becomes what "one is conscious of" and sometimes acts upon. What's going on in mind isn't really a symphony but is more like a whole rehearsal hall of various melodies being practiced and composed; it is our ability to focus attention upon one well-shaped scenario that allows us to hear a cerebral symphony amid all the fantasy.
      What's going on in an animal's mind, in comparison to ours? Probably a lot less fantasy in the background: just choosing between well-trodden paths, not imagining all sorts of fantastic things, especially about tomorrow. Yes, my cat dreams about chasing mice, but I doubt that either she or that harbor seal has flying-carpet fantasies forming up on another parallel track at the same time. Fantasy is the making up of novel scenarios; merely imagining familiar ones -- running a well-worn movement program with the muscles inhibited -- is not the same thing.
      Maybe things are occasionally more complicated than successive generations of shaping up: should one suspect that a complicated mental construct (say, arithmetic) is better described by a hierarchical model using some postulated multilayered representation, we now have a null hypothesis against which to test it. If successive generations of a Darwin Machine seem excessively cumbersome, or too slow, or won't make the characteristic mistakes, we will have a better basis for believing in a higher-level proposal for a brain mechanism. This is essentially how we now evaluate candidates for higher-order evolutionary mechanisms in the rest of biology (including, in the extreme, the "argument from design" and similar top-down proposals): We try our best to see if a standard darwinian explanation won't suffice instead.
      Is this Darwin Machine the most minimal mechanism for contemplation? Is there anything even simpler that would make a better null hypothesis for serial-ordered behaviors? One-round variation followed by a long period of selection can be simpler than darwinism, as I noted (Chapter 10) in regard to cortical maps that could have been formed up by a "rich getting richer causing the poor to become poorer" Matthew Principle (it is, for example, the explanation for why there are two sexes, via gamete dimorphism). Self-organization in physical systems, such as crystallization, can be even simpler (such processes may account for the hexagonal packing pattern of photoreceptors in the retina), and we do not yet know how many of the orderly phenomena in the brain owe their order to such elementary processes. But for dynamic phenomena such as consciousness and language, where there is time for a Darwinian Two-Step to operate (and, indeed, so much noise that it is unreasonable to postulate one-round randomness), the massively serial Darwin Machine is likely to be the appropriate null hypothesis. With it, we are applying Occam's razor ("entities shall not be multiplied beyond necessity") to hypotheses about mind.
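The Darwinian Two-Step lends itself to a toy simulation. In the sketch below (Python; the schema alphabet, the stored "good" sequence, and all the numbers are invented for illustration, not drawn from any neural data), dozens of candidate tracks are repeatedly mutated and then ranked by a crude utility estimate -- a memory-match score -- over many generations of shaping up:

```python
import random

SCHEMAS = list("abcdefgh")   # stand-ins for movement/word schemas (invented)
TARGET = list("cabbag")      # a hypothetical remembered "good" sequence

def utility(seq):
    # Crude utility estimate: +1 per position agreeing with the stored memory.
    return sum(1 for s, t in zip(seq, TARGET) if s == t)

def mutate(seq):
    # Variation step: copy a track and randomly alter one schema in it.
    out = list(seq)
    out[random.randrange(len(out))] = random.choice(SCHEMAS)
    return out

def darwin_machine(tracks=50, generations=40, length=6, seed=0):
    random.seed(seed)
    population = [[random.choice(SCHEMAS) for _ in range(length)]
                  for _ in range(tracks)]
    for _ in range(generations):
        # Selection step: rank candidate tracks by their utility estimates.
        population.sort(key=utility, reverse=True)
        survivors = population[: tracks // 5]
        # Variation step: repopulate the tracks with mutants of the survivors,
        # keeping the survivors themselves unchanged (so the best never worsens).
        population = [mutate(random.choice(survivors)) for _ in range(tracks)]
        population[: len(survivors)] = [list(s) for s in survivors]
    # The best track is "what one is conscious of" and may be acted upon.
    return max(population, key=utility)

best = darwin_machine()
print("".join(best), utility(best))
```

Even this caricature shows the essential point: many rounds of variation-then-selection converge on a high-utility sequence that one-round randomness would almost never produce.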

THE NULL HYPOTHESIS is usually the dull, uninteresting alternative ("mere chance"). But there is nothing trivial about randomness-and-selection alternating for many generations of shaping up; that process is far more than random. Pure one-shot randomness is too trivial to be a useful null hypothesis. Indeed, the Darwin Machine as the null hypothesis may be even more interesting than the alternatives being tested!
      Indeed, a Darwin Machine in left brain would seem to provide a natural mechanical foundation for many of the uniquely human functions:

      Versatile ballistic movements would be the simplest use of this Darwin Machine in the left brain, especially hammering and throwing (the most strongly right-handed actions): creating a variety of muscle-activation scenarios, judging each against memory by calculating a utility estimate for each combination, and then using the best to make the ballistic movement of arm and hand. One could switch from a Variations-on-a-Theme Mode to a precision Choral Mode by loading the same sequence into all of the serial buffers.
      Consciousness would, in this model, simply be the massive extension of this planning sequencer in its Random Thoughts Mode or its Variations-on-a-Theme Mode. It would often involve no action -- a kind of free-wheeling device that was always making dozens of scenarios on sidetracks, preferentially incorporating schemas that had recently been used for something (and so were in short-term memory) but also sampling from linked long-term memories. Nearly all of the random bashing about is now done off-line and, indeed, subconsciously. The best track would be all that one was "aware of," accounting for the unitary sense of consciousness and experience that constitutes the narrator. Only occasionally would the best track be gated out into the production of actual movement.
      Language production would simply be consciousness, but preferentially involving word-codeable sequences rather than nonvocal ones. We could sequence movement schemas ("verbs") as well as sensory schemas ("nouns") and state-of-being schemas ("adjectives," such as happy and hungry). Deciding what to say next would just be a special case of deciding what to do next. Grammar would be one set of rules by which proposed sequences were judged; syntax would not have a special status compared to other sequences in memory -- yet, because of frequent use, might appear special. This again is the minimalist position against which candidates for special status (such as Chomsky's) can be judged.
      Language reception uses a serial buffer to hold the incoming sentence while it is analyzed: its arbitrary phonemes recognized and, from the chain of phonemes, its words recognized (sometimes in groups). Again, a Darwin Machine can compare this word chain to sequential memories and come up with interpretations by constructing an equivalent sentence in one's own words, equivalence being signaled when the utility estimates finally suggest a good fit. This "simultaneous translation" model for reception may become transparent to the user, just as meanings in a foreign language become intuitive once one truly "learns" the language. So the best track of the Darwin Machine would be the deep meaning envisaged by transformational grammars, and a separate transformational level in the brain would be unnecessary.
      Poetry is like language, but with a superposition of some additional structural requirements (rhyme, for example), much as dance is a format superimposed on more standard locomotion. Poetry is essentially a more elaborate version of prosody, the inflections that right brain tends to impose upon speech. Alliteration involves such structural patterning as well; the surprise ending in humor may involve a violation of an expected relationship. Poetry's tendency to repeat the same number of syllables on each line is reminiscent of chunking, the tendency to handle only a half-dozen items in a chain and, when tempted to exceed this, to collapse several items into a single higher-order item ("apples and oranges" into "fruit"). Apes also generalize, but we are often forced to do so (and thus to expand our vocabularies) by our frequent use of a buffer that is too short for some secondary tasks.
      Logic and reasoning are uses of the consciousness version of the Darwin Machine that have a particularly rigorous structure, with many more constraints than are usually present in syntax or poetry, just to ensure that entailment is reliable. But fundamentally a "grammatical sentence" is the model for a "logical argument" -- just as legions of high school English teachers have been saying all along.
      Music listening is where "notes" and "chords" substitute for phonemes, "melodies" for sentences, and "musical phrases" for the slightly higher-order ideas expressed in multiple sentences. Perfect pitch might be the "ganged-into-lockstep" Choral Mode of the Darwin Machine. There are many other recreations that seem to exercise serial planning skills, such as card, board, and video games; indeed, it is difficult to identify games that do not.
      The Darwin Machine is thus sometimes operating on "abstract" schemas that no longer have a one-to-one correspondence with the individual things we see, or the actual movement commands we would have to issue; like the higher-level computer languages, the higher-order schemas can have their own rules of sequence that are developed through training, as when we say "two plus three equals five."
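The payoff of the precision Choral Mode can be illustrated with the law of large numbers: if each serial buffer issues its launch command with some timing jitter, then averaging the commands of many buffers ganged into lockstep shrinks the jitter by roughly the square root of their number. A toy sketch (the buffer counts and noise level here are invented for illustration, not physiological measurements):

```python
import random
import statistics

def launch_jitter(n_buffers, trials=2000, sigma=1.0, seed=1):
    """Spread (std dev) of the averaged launch command across n ganged buffers."""
    random.seed(seed)
    averaged = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_buffers))
                for _ in range(trials)]
    return statistics.stdev(averaged)

solo = launch_jitter(1)      # a single buffer: full timing jitter
choir = launch_jitter(100)   # a hundred buffers in lockstep
print(solo / choir)          # roughly sqrt(100), i.e., about a tenfold gain in precision
```

This is why ganging redundant sequencers into a chorus would pay off for ballistic movements such as throwing, where small timing errors translate into large errors at the target.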
      Darwin Machines are not especially well suited for explaining other beyond-the-apes mental specializations such as depiction. Increased visualization abilities might, however, have arisen from the increase in occipital and parietal lobe structures that came along with the larger frontal and temporal lobes, so handy for sequencing activities exposed to considerable selection pressures. Evolution conceivably could have increased the frontal and temporal lobes selectively, without simultaneous enlargement elsewhere, but a general rescaling of cerebral-cortical developmental parameters might have been the cheap-and-easy way to implement it. And so better visualization could have come "for free" -- except for the power requirements (not an insignificant problem, when the brain requires 25 percent of what the heart pumps), and except that the brain threatens to overheat disastrously whenever we run too long in summer sunshine.
      Nor does the Darwin Machine solve the problem of "value," that which determines the Subjective Expected Utility scores. But it does show the level at which value might act, demonstrating one plausible mechanical basis for mental evolution. Knowing "DNA makes RNA makes Proteins" and the double helix did not tell us what the organism's environment valued either, but it did illuminate the mechanics that implement (and constrain) reproduction and inheritance and so create long-lasting memories of what worked well for similar organisms in past environments. Value is a property of the virtual environment that one has created inside one's head, a set of "initial conditions" that one applies to new situations via the Darwin Machine.
[Is mind] primary or an accidental consequence of something else? The prevailing view among biologists seems to be that the mind arose accidentally out of molecules of DNA or something. I find that very unlikely. It seems more reasonable to think that mind was a primary part of nature from the beginning and we are simply manifestations of it at the present stage of history. It's not so much that mind has a life of its own but that mind is inherent in the way the universe is built....
the theoretical physicist Freeman Dyson, 1988

DARWIN MACHINES are not the whole story of the brain's function, but they do seem to handle aspects of imagination, language, and the "self" -- that narrator who has been so troublesome. We start by assuming the unitary hypothesis, that a single Darwin Machine can account for all the serial-order specialties, since the rules of science require me to put forward as simple a theory as will account for the most phenomena -- and one that is vulnerable to disproof; for example, someone could conceivably show that the brain regions involved in planning novel sentences do not overlap with those involved in planning novel throws. While the unitary hypothesis is a good working strategy, we have to remember that simplicity is not one of Nature's principles: The neural machinery may turn out to be somewhat different for some of the aforementioned traits, thanks to adaptations having separately shaped an early version of a neural sequencer into several different versions existing in parallel.
      Darwinism seems to be a "Maxwell's demon" that bootstraps complexity on multiple levels in open systems with a throughput of energy. Undoubtedly, we will discover in the realm of mental phenomena, as Darwin did for biological species in general, that there are circumstances in which selection temporarily plays a minor role -- as when a new niche is discovered, or when a conversion of function is possible. Because the rules of cultural evolution are considerably more flexible than those of biological evolution, we will likely discover situations in which Darwin Machines can be superseded by an efficient algorithm.
      But the basic phenomena that allow each of us to have a sense of self, to contemplate the world, to forecast the future and make ethical choices, to feel dismay on seeing a tragedy unfold, to enjoy music if not too preoccupied with talking or planning -- these things we may owe to the same kind of process that gives the earth abundant life. Each of us now has under our control a miniature world, evolving away, making constructs that are unique to our own head. There may or may not be life evolving on some planet near one of those thousands of stars that I see in tonight's sky, but comparable evolution is taking place inside the heads of everyone in Woods Hole tonight. The ability of this mental darwinism to simulate the future is the foundation of our ethics, what sets us apart from the rest of the animal kingdom.
      Like Johann Sebastian Bach, many scientists have been deeply motivated by religious principles; they have considered scientific research an attempt to understand their Creator's works more fully. This is surely true of William James, who a century ago did so much to infuse evolutionary thinking into the new science of psychology. We have sought links between the laws by which the world was created and those that created the human mind.
      And here we seem to glimpse one: The darwinian principles that shaped life on earth over the billions of years, that daily reshape the immune systems in our bodies, have again blossomed in human heads on an even more accelerated time scale. In much the manner that life itself unfolded, our mental life is progressively enriched, enabling each of us to create our own world. To paraphrase Charles Darwin, there is grandeur in this view of mind.

No one can possibly simulate you or me with a system that is less complex than you or me. The products that we produce may be viewed as a simulation, and while products can endure in ways that our bodies cannot, they can never capture the richness, complexity, or depth of purpose of their creator. Beethoven once remarked that the music he had written was nothing compared with the music that he had heard.
Heinz Pagels (1939-1988)

There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Charles Darwin (1809-1882)

The Cerebral Symphony (Bantam 1989) is my book on animal and human consciousness, using the setting of the Marine Biological Labs and Cape Cod. AVAILABILITY is limited.