A book by
William H. Calvin
A Science Masters book (BasicBooks in the US; to be available in 12 translations)
copyright ©1996 by William H. Calvin

Prospects for a
Superhuman Intelligence

Of course, if my “self” is a mere bundle of instincts of known number and exact dimension, then let me tie the bundle up neatly and make the best of it; but if this elusive personality, with its queer and satisfying aspirations and relapses and struggles and touches of the eternal, is not just a machine with wheels that get out of order and a definitive maximum horsepower, but a living thing infinitely variable, constantly readjusting itself to circumstances, capable of incalculable achievement or of pathetic meanness, in some sense master of its fate; if its freedom is not an illusion, and its possibility of spiritual experience not a lie, then we must not allow ourselves to fall back into the old error of the mechanistic materialist.
Charles E. Raven, The Creator Spirit, 1928

We have a life of the mind, and it is because of the dynamic darwinism of our mental lives that we can invent — and daily reinvent — ourselves. That life of the mind, a muddle at the beginning of this book, perhaps can now be imagined as a darwinian process — a high-level one, up near the top of those levels of stratified stability — that is capable of implementing Charles Raven’s sense of self. Such depth and versatility could emerge from cerebral codes cloning away, competing for territory with other cerebral codes, and spinning out new variations.

    It’s not a computer, at least not in our usual sense of a reliable machine that can faithfully repeat its actions. For most people, it’s something new in the mechanistic realm, utterly without good analogies — except for the other known darwinian processes. But you can get a feeling for what it’s like: looking down on the (virtually flattened) surface of the cortex would be like seeing a mosaic — a dynamic patchwork quilt, with the “patches” never at rest. On closer inspection, each patch would appear like a wallpaper pattern that repeated, but each unit pattern would be dynamic, a twinkling spatiotemporal pattern, rather than the traditional static one. The boundaries between adjacent patches of the quilt would sometimes be stable, sometimes moving, like a battlefront. Sometimes the unit patterns would fade from an area, the triangular arrays no longer synchronizing homologous points — and another unit pattern, unopposed, might quickly colonize the disorganized territory.

    The current winner of that copying competition, the one with the biggest chorus vying for the attention of the output pathways, looks like a good candidate for what we term consciousness. Our shifting focus could be another clone coming to the fore. Our subconscious could be the other active patterns not currently dominant. No particular area in cortex is the “center of consciousness” for very long before another takes over.

    The shifting mosaics also seem to provide a good candidate for intelligence. Among the spatiotemporal patterns that they shape up are the commands for novel movements. The evolving mosaics can discover new order à la Horace Barlow, since spatiotemporal patterns can vary to find new resonances. The mosaics can simulate actions in the real world à la Kenneth Craik, since the cerebral code for a movement schema can be judged against the resonances of long-term memories and the current sensory inputs. They have Jean Piaget’s feature of handling situations in which it isn’t obvious what to do next.

    And the mosaics have the open-ended aspect of our mental lives — as when we invent new levels of complexity, like crossword puzzles, or (as can be the case with poems) compound symbols to embody new levels of meaning. Because the cerebral codes can represent not just sensory and movement schemas but also ideas, we can imagine metaphors of quality emerging, can imagine how Coleridge’s “willing suspension of disbelief” takes place when we enter into an imaginary realm of fiction.

    Cerebral codes and darwinian processes were what I had in mind back at the beginning of this book, when I suggested that by its end the reader might be able to imagine a process that could result in consciousness and could operate fast enough to constitute a quick intelligence, good at guessing. This last chapter is about the implications of augmenting our brains and creating artificial approximations. But first, a sideways glance at competing styles of explanation.

The gold standard of explanation — the one to which all the sciences aspire (though sometimes inappropriately) — is abstract and mathematical. It is surely impressive when someone can unfold, from a set of abstract definitions and axioms, a forward-leaping chain of inferences. From Plato’s ideal, both Descartes and Kant tried to understand how the mind could operate mathematically. We finally seem on the threshold of answering some such questions.

    But there have long been challenges to the whole scientific enterprise — challenges that will come strongly into play once again, as science tries to explain the human mind. The mystic’s and irrationalist’s visions of truth spring from illumination, not deduction; the truths of science are seen by them as second-rate and impatient, compared to those achieved by pure contemplation. A second challenge is from dogma; Galileo got into trouble not over his astronomy but because his scientific methods of constant challenge and revision threatened the very concept of revealed truth that religions used to make their world view seem everlasting and internally coherent. Then there is what the literary critic George Steiner calls the challenge from “romantic existential polemic” — Nietzsche’s preference for instinctive wisdom over sterile deduction, for instance, or Blake’s critique of Newton’s optics of the rainbow. A fourth line of attack sees ulterior motives everywhere, or claims that truth is relative to political viewpoints.

    These are fundamentally challenges from outside the scientific tradition; their modern-day adherents will surely seize upon our everyday scientific confusions and try to exploit them, in the manner of the fundamentalist Christian attack on evolutionary biology itself. Such styles of explanation have long competed with science, with a few short-term wins (such as La Mettrie’s exile) and many long-term losses. Threads from all four can be found today in the movements founded by the drop-outs from the age of reason.

    So we must try to be clear about our scientific explanations and not create false oppositions — like the supposed conflict between the principles of evolution by genetic mutations and natural selection, a needless confusion that lasted for decades until resolved in the 1940s by the Modern Synthesis. We must avoid using mathematical concepts to dazzle rather than enlighten; we must watch out for “proofs by want of imagination,” as when we conclude, out of arrogance or impatience, that there are no other alternatives to the answers we have found. When it comes to the brain, in particular, we must be careful to pitch our theories at the right level of mechanistic explanation.

Accordingly, the neuron level of description that provides the currently fashionable picture of the brain and mind is a mere shadow of the deeper level of cytoskeletal action — and it is at this deeper level where we must seek the physical basis of mind!
Roger Penrose, Shadows of the Mind, 1994

I’m sure some consciousness physicist or ecclesiastical neuroscientist will say, despite all the prior chapters, that a ghost in the machine is still necessary, leaping over those dozen intermediate levels of stratified stability to provide a guiding role for enigmatic quantum mechanics, down there in the microtubules of the neuron’s cytoskeleton, where some immaterial spirit can interface with the brain’s biological machinery. Actually, such theorists usually avoid the word “spirit” and say something about quantum fields. I’ll be happy to compromise on “mystery” using Dan Dennett’s definition: a phenomenon that people don’t know how to think about. All that the consciousness physicists have accomplished is the replacement of one mystery by another; so far, there are no parts and pieces of their explanations, the combinations of which can explain other things.

    And even if they improve on their combinations, any effects from synchronized microtubules would only provide us with another candidate for the unitary nature of our conscious experience — one that will have to compete in mechanistic detail with explanations at other levels, and which will have to compete with them for sheer coverage. The darwinian process, thus far, seems to have the right parts and pieces to explain the successes and malfunctions of important aspects of consciousness.

    I think we’ll continue to see those tiresome debates in which one philosopher tries to hog-tie another philosopher (or at least paint him into a corner, brick him up with a wall of words) over the issue of whether machines can ever truly understand anything, whether they will ever be able to have our kind of consciousness. Unfortunately, even if all scientists and philosophers agreed about how mind arises from brain, the complexity of the subject would still cause most people to abstract that complexity, using some simpler-to-imagine concept such as “spirit.” And perhaps to feel like the book reviewer who said (perhaps rhetorically), “Is the digital computer merely a simpler version of the human brain, as many theorists contend? If in fact it is, the implications are scary.”

    Scary? Personally, I find ignorance scary. It has a substantial track record, what with demonic possession “explaining” mental illness, and all those witch trials and inquisitions. We badly need a metaphor more useful than a quantum-mechanical mystery; we need a metaphor that successfully bridges the gap between our perceived mental life and the neural mechanisms responsible for it.

So far, we’ve actually needed two metaphors: a top-down metaphor that maps thoughts onto ensembles of neurons, and a bottom-up metaphor that accounts for how ideas emerge from those apparently chaotic neuron ensembles. But the neocortical Darwin Machine may well do for both metaphors — if it really is the creative mechanism within.

The neocortical Darwin Machine theory seems to me to be at the right level of explanation; it’s not down in the synapse or cytoskeleton but up at the level of dynamics involving tens of thousands of neurons, generating the spatiotemporal patterns that are the precursors of movement — of behavior in the world outside the brain. Moreover, the theory is consistent with a lot of phenomena from a century of brain research, and it’s testable (with some improvement in the spatial and temporal resolution of brain imaging or microelectrode arrays).

    The darwinian process at its core is, at least among biologists, widely understood as a creative mechanism. We’ve had well over a century to realize just how powerful such copying competitions can be, when it comes to shaping up quality from random variations on a timescale of millennia. In recent decades, we’ve been able to see the same process operating on the timescale of days and weeks, as the immune response creates a better-fitting antibody. That this neocortical Darwin Machine can operate in milliseconds to minutes is only another change in scale; we should be able to carry over our understanding of what the darwinian process can accomplish from evolutionary biology and immunology to the timescale of thought and action.

    It seems to me that the adoption of the William James viewpoint about our mental life is long overdue. But many people, including scientists, still hold to a cardboard view of darwinism as mere selective survival (Darwin, alas, contributed to the confusion by naming his theory for only the fifth of the six essentials, natural selection). What I hope I have done in this book is to pull together all of the essentials, as well as the accelerating aspects, of a darwinian process, and then describe a specific neural mechanism that could implement such a process in primate neocortex. As mechanism rather than improved metaphor, the best thing going for my neocortical Darwin Machine at this point is that the cortical neuroanatomy and the entrained oscillators principles provide a nice fit to those six essentials of a darwinian process and the accelerating factors.
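
Those six essentials (patterns that copy, vary, and compete for a limited workspace, judged by a multifaceted environment, with the more successful patterns seeding the next round of variants) can be caricatured in a few lines of code. The toy loop below illustrates only the darwinian cycle itself, not any neural mechanism; the target word, workspace size, and mutation rate are arbitrary stand-ins of my own choosing:

```python
import random

TARGET = "darwin"  # a stand-in for the environment's many criteria
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
WORKSPACE = 50     # limited territory: only this many patterns coexist

def fitness(pattern):
    """Judge a pattern against the 'environment' (here, closeness to TARGET)."""
    return sum(a == b for a, b in zip(pattern, TARGET))

def mutate(pattern, rate=0.1):
    """Copying is imperfect, producing variants."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in pattern)

def generation(population):
    """Copy, compete for the workspace, recenter the next round on winners."""
    ranked = sorted(population, key=fitness, reverse=True)
    winners = ranked[:WORKSPACE // 5]  # competition: most patterns lose territory
    # the next round of variants clusters around the current winners
    return [mutate(random.choice(winners)) for _ in range(WORKSPACE)]

random.seed(0)
pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
       for _ in range(WORKSPACE)]
for gen in range(200):
    if any(p == TARGET for p in pop):
        break
    pop = generation(pop)
print(gen, max(pop, key=fitness))
```

Remove the competition (let every pattern reproduce, not just the winners) and the same loop wanders aimlessly; it is the judging against a multifaceted environment that shapes quality from random variation.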

    Whether this is the most important process going on in the brain, or whether another process dominates consciousness and guessing, is hard to tell; there might be one without antecedents in biology or computer science — one we cannot yet imagine without first discovering some intermediate metaphors. Indeed, I suspect that the process of “managing” the cloning competitions in order to avoid psychosis or stagnation is going to require its own metalevel of description. (I’m not thinking of a manager in the usual sense of the term but something like the way that global weather patterns are strongly influenced by jet streams or El Niño.) In psychological terminology, such management might be something like Raven’s “elusive personality, with its queer and satisfying aspirations and relapses and struggles.”

    Composite cerebral codes, shaped up by darwinian copying competitions, could explain much of our mental lives. Copying competitions suggest why we humans can get away with many more novel behaviors than other animals (we have offline evolution of nonstandard movement plans). They suggest how we can engage in analogical reasoning (relationships themselves can have codes that can compete). Because cerebral codes can be formed from pieces, you can imagine a unicorn and form a memory of it (bumps and ruts can reactivate the spatiotemporal code for unicorn). Best of all, a darwinian process provides a machine for metaphor: you can code relationships between relationships and shape them up into something of quality.

Such an explanation for intelligent consciousness gives us some insight into metaphor and operations in an imaginary realm. And it ought to tell us about the kinships between thought and other mental operations. In the case of my proposed explanation, the ballistic movements and music seem intimately related to thought and language. We’ve already seen that the emphasis on novel sequences allows for nonlanguage natural selection that benefits language (and vice versa). Those overlaps between oral-facial sequencing and hand-arm sequencing (the apraxic aphasics) suggest that both are using the same neural machinery.

    The important secondary use of the neocortical Darwin Machine would be for prospective movements other than the ballistic ones: planning on the time scale of seconds, hours, days, careers. It allows for trying out combinations, judging what’s wrong with them, refining them, and so forth. Individuals who are good at this are known as intelligent.

Any explanation of intelligence also ought to give us some insight into other paths to intelligence than the ones followed by life on Earth: it ought, in short, to have implications for artificial intelligence (AI), for augmenting animal and human intelligence, and perhaps for finding signals from exotic intelligences. Not much can yet be said on the “intelligence elsewhere” subject, but let me suggest an ethological perspective that may also help us think about AI and augmented intelligence.

    An intelligence freed from the necessity of finding food and avoiding predators might (like artificial intelligence) not need to move — and so such an intelligence might well lack the what-happens-next orientation of animal intelligence. We solve movement problems, and only later, in both phylogeny and ontogeny, do we graduate to the pondering of more abstract problems, acting to preempt the future by guessing what lies ahead.

    There may be other ways in which high intelligence can be achieved, but up-from-movement is the paradigm we know about. It is, curiously, seldom mentioned in the literature of psychology or artificial intelligence. Though there is a long intellectual thread in brain research that emphasizes up-from-movement, it is far more common to see discussions of cognitive function that emphasize a passive observer who intellectually analyzes the sensory world. Contemplation of the world still dominates most approaches to the mind, and — by itself — it can be thoroughly misleading. The exploration of the person’s world, with its constant guessing and intermittent decisions about what to do next, must be included in the way we intellectually frame the issues.

    It is difficult to estimate how often high intelligence might emerge in evolutionary systems — both here on earth and elsewhere in the universe. The main limitation, which makes most speculations meaningless, is our present ignorance about how dead ends in nature are overcome: it’s easy to get trapped in an equilibrium, stuck in a rut. And then there’s that continuity requirement: that, at each step along the way, the species remains stable enough not to self-destruct and competitive enough not to lose out to a streamlined specialist.

    Lists of intelligence attributes can, if carried far enough, be little better than stand-ins for giving a human IQ test to the other species (or computer). But we now can say something about what kinds of physiological mechanisms would aid a brain in guessing right and discovering new order.

We could assess promising species (or artificial creations, or augmentation schemes) by counting how many building blocks of intelligence each had managed to assemble, and the number of stumbling blocks each had managed to avoid. My current assessment list would emphasize:

   A wide repertoire of movements, concepts such as words, and other tools. But even with a large vocabulary from cultural sharing over a long lifespan, high intelligence still needs additional elements in order to make novel combinations of quality.
   A tolerance for creative confusion, which would allow an individual to occasionally escape old categories and create new ones.
   More than a half-dozen simultaneous work spaces (“windows”) per individual — enough so that you can pick and choose between analogies but not so many as to obviate the tendency to chunk and thereby create new vocabulary.
   Ways of establishing new relationships between the concepts in those work spaces — relations fancier than the is-a and is-larger-than, which many animals can grasp. Treelike relationships seem particularly important for our kind of linguistic structures. Our ability to compare two relationships (analogy) enables operations in a metaphorical space.
   The ability to shape up off-line before acting in the real world — a shaping-up that somehow incorporated the six darwinian essentials (patterns that copy, vary, compete judged by multifaceted environments, with the more successful patterns providing the center for the next round of variants) and some accelerating factors (equivalents of recombination, climate change, islands), with shortcuts so that the darwinian process can operate at the level of ideas rather than movements.
   The ability to formulate long-term strategies as well as short-term tactics, making intermediate moves that help set the stage for a future feat. Evolving agendas, and monitoring their progress, helps even more.
Chimps and bonobos may be missing a few elements, but they’ve got more of them than the present generation of AI programs.

    Another implication of my darwinian theory is that, even with all the elements, we would expect considerable variation in intelligence because of individual differences in implementing shortcuts, in finding the appropriate level of abstraction when using analogies, in processing speed, and in perseverance (more is not always better, as when boredom allows better variants a chance to develop).

   “Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else — if you ran very fast for a long time, as we’ve been doing.”
   “A slow sort of country!” said the [Red] Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

Lewis Carroll, Through the Looking Glass, 1871

Why aren’t there more species with complex mental states? There is, of course, a fantasy nourished by the comic strips that attributes silent wisdom even to insects. But the apes would be the terror of Africa if they had even a tenth of our plan-ahead mental states.

    I suspect that the reason there aren’t more highly intelligent species is that there’s a hump to get over. And it’s not just a Rubicon of brain size, or a body image that permits you to imitate others, or a dozen other beyond-the-apes improvements seen in the hominids. A little intelligence can be a dangerous thing — whether it be exotic, artificial, or human. A beyond-the-apes intelligence must constantly navigate between twin hazards, just as the ancient mariners had to cope with a rock named Scylla and a whirlpool named Charybdis. The turbulence of dangerous innovation is the more obvious hazard.

    The peril posed by the rock is more subtle: business-as-usual conservatism ignores what the Red Queen explained to Alice about running to stay in the same place. For example, when you’re running rapids in a small boat, the way you usually get pushed against a hard rock is when you fail to maintain your speed in the main channel. Intelligence, too, is in a race with its own byproducts.

    Foresight is our special form of running, essential for the intelligent stewardship that the evolutionary biologist Stephen Jay Gould warns is needed for longer-term survival: “We have become, by the power of a glorious evolutionary accident called intelligence, the stewards of life’s continuity on earth. We did not ask for this role, but we cannot abjure it. We may not be suited to it, but here we are.”

Speaking of other intelligent species, what about the ones we might create ourselves? A human mind embedded in silico, a copy of the detailed structure of one individual’s brain, is a possibility which has received some attention.

    I suspect that such an “immortality machine” — the downloading of an individual’s brain to a workalike computer — is unlikely to function well. Even if we neuroscientists should eventually solve the readout problem, as some physicists and computer scientists blithely assume can be done, I think that dementia, psychosis, and seizures are all too likely, unless the workalike circuits are well tuned (and stay that way). Just think of the human beings who suffer from obsessions and compulsions: “Stuck in an endless loop” takes on new meaning when the asylum is timeless, no longer limited by the human life span. Who wants to gamble on that kind of Hell?

    Far better, I think, to recognize the essential nature of copying across successive generations, both of genes and memes. Richard Dawkins saw these copying relations clearly in The Selfish Gene, as did my friend, the futurist Thomas F. Mandel, in addressing his cyberspace friends while coping with his increasingly dim prospects of surviving lung cancer:

   I had another motive in opening this topic, to tell the truth, one that winds its way through almost everything I’ve done online in the five months since my cancer was diagnosed.
   I figured that, like everyone else, my physical self wasn’t going to survive forever and I guess I was going to have less time than actuarials allocate us. But if I could reach out and touch everyone I knew on-line... I could toss out bits and pieces of my virtual self and the memes that make up Tom Mandel, and then when my body died, I wouldn’t really have to leave... Large chunks of me would also be here, part of this new space.
   Not an original idea, but what the hell, worth the try, and maybe one day someone can reconstruct all of the pieces in some sort of mandelbot and I can be arrogant and obstinate and affectionate and compassionate and everything else that you all seem to feel I am.

The ad-hoc schemes of AI might also produce intelligent robots. But I think that with the aid of principles seen in neuroscience, we can build a computer that talks like a human, is as endearing as our pets, thinks in metaphor, and manages multiple levels of abstraction.

    The first-order human workalike would, at a minimum, reason, categorize, and understand speech. I think that even the first-order workalike will be recognizably “conscious,” and likely as self-centered as we are. I don’t mean trivial aspects of consciousness such as aware, awake, sensitive, and arousable. And I don’t mean self-aware, which seems insignificant. Self-centered consciousness is, I think, going to be easy to achieve; getting it to contribute to intelligence will be harder.

    It seems to me that progressive generations of workalikes will come to acquire aspects of intelligent consciousness, such as steerable attention, mental rehearsal, language production guided by syntax, abstraction, imagery, subconscious processing, “what-if” planning, strategic decision making — and especially the narratives we humans tell ourselves while we are awake or dreaming.

    Though running on principles closely analogous to those used in our brains, a workalike would be carefully engineered so that it could be rebooted when difficulties arose. I can already see one way of engineering this, using those darwinian essentials and the cortical wiring patterns that lead to triangular arrays and thus to hexagonal copying competitions among variants and hybrids. To the extent that such functions can operate far faster than they do in our own millisecond-scale brains, we’ll see an aspect of “superhuman” abilities emerging from the “workalike.” If workalikes are able to achieve new levels of organization (meta-metaphors!), it may point the way to educate humans to make the same step.

    But that’s the easy part — the extrapolation of existing trends in computing technology, AI, and the neuropsychological and neurophysiological understanding of human brains. Refining wisdom out of knowledge does, of course, take a lot longer than refining knowledge out of data. And there are at least three hard parts.

One hard part will be to make sure a superhuman intelligence fits into an ecology composed of animal species. Such as us.

    Especially us. That’s because competition is most intense between closely related species — which is the reason that none of our Australopithecine and Homo erectus cousins are still around, the reason why only two omnivorous ape species have survived. (The other apes are vegetarians, with long guts to extract the meager calories from all that high-bulk food.) Our more immediate ancestors probably wiped out the other ape and hominid species as competitors, if climate change didn’t do the job.

The world of the future will be an even more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
Norbert Wiener, 1950

    “To keep every wheel and cog,” said the environmentalist Aldo Leopold in 1948, “is the first precaution of intelligent tinkering.” Introducing a powerful new species into the ecosystem is not a step to be taken lightly.

    When automation rearrangements occur so gradually that no one starves, they are often beneficial. Everyone used to gather or hunt their own food, but agricultural technologies have gradually reduced the percentage of the population that farms to about 3 percent in the industrialized countries. And that’s freed up many people to spend their time at other pursuits. The relative mix of those occupations changes over time, as in the shift from manufacturing jobs to service jobs in recent decades. A century ago, the two largest occupational groups in the developed countries were farm workers and household servants. Now they’re a small fraction of the total.

    Workalikes, however, will displace even some of the more educated workers; those of poor education or below-average intelligence will have even bleaker prospects than they do now. But there could be some significant benefits to humans: imagine a superhuman teaching machine as a teacher’s assistant, one that could hold actual conversations with students, never got bored with drills, always remembered to provide the necessary variety to keep the students interested, could tailor the offerings to a student’s particular needs, and could routinely scan for signs of such developmental disorders as dyslexia or poor attention span.

    Silicon superhumans could also apply their talents to teaching the next generation of superhumans, evolving still smarter ones just by variation and selection: after all, their star silicon pupil could be cloned. Each offspring would be educated somewhat differently thereafter. With varied experiences, some might acquire desirable traits — values such as sociability or concern for human welfare. Again, we could select the star pupil for cloning. Since the copying includes memories to date (that’s the other advantage of intelligence in silico besides rebooting: you can include readout capabilities for use in cloning), experience would be cumulative and truly Lamarckian: the offspring wouldn’t have to repeat the parent’s mistakes.

Values are the second hard part: agreeing on them and implementing them in silico.

    The first-order workalikes will be just as amoral as our pets or a young child — just raw intelligence and language ability. They won’t even come with the inherited qualities that make our pets safe to be around. We humans tend to be treated by our pets as either their mother (in the case of cats) or as their pack leader (in the case of dogs); they defer to us. This cognitive confusion on their part allows us to benefit from their inborn social behaviors. We’ll probably want something similar in our intelligent machines, but since they’ll be a lot more capable of doing mischief than our pets are, we’ll probably want real safeguards — something fancier than muzzles, leashes, and fences.

    How do we build in safeguards as abstract as Isaac Asimov’s Laws of Robotics? My guess is that it will require a lot of star-pupil cloning, a process not unlike the domestication of the dog. This gradual evolution over many superhuman generations might partially substitute for biological inheritance at birth, perhaps minimizing any possible sociopathic tendencies in silicon superhumans and limiting their risk-taking behaviors.

    If that’s true, it will take many decades to get from raw intelligence (that first-order workalike) to a safe-without-constant-supervision superhuman. The early models could be smart and talkative without being cautious or wise — a very risky combination, potentially sociopathic. They would have the top-end abilities without those abilities’ well-tested evolutionary predecessors as the underpinning.

Declare the past, diagnose the present, foretell the future.
Hippocrates of Cos (460-377 b.c.), advice to physicians

The third hard part is moderating the reactions of humanity to the perceived challenge. Just as an overenthusiastic reaction by your immune system to an antigen can cripple you via allergies and autoimmune diseases (and perhaps kill you by anaphylactic shock), so human reactions to silicon superhumans could create enormous strains in our present civilization. A serious reaction, once workalikes were already playing a significant role in the economy, could disrupt the system that allows the farmers to feed the other 97 percent of us. Remember that famines kill because the distribution system fails, not because there isn’t enough food grown somewhere in the world.

    But the Luddites and saboteurs of the twenty-first century will be aided by some very basic features of human ethology — ones that played little role in nineteenth-century Europe. Groups try to distinguish themselves from others. Despite the benefits of a common language, most tribes in history have exaggerated linguistic differences with their neighbors, so as to tell friend from foe. You can be sure that the Turing Test will be in regular use, with people trying to determine whether a real human is at the other end of the phone line. Machines could be required to speak in a characteristic voice to dampen this anxiety, but that won’t be enough to prevent “us and them” tensions.

    Workalikes and superhumans could also be restricted to certain occupations. Their entry into other areas could be subject to an evaluation process that carefully tested a new model against a sample of real human society. When the potential for serious side effects is so great, and the rate of introduction is potentially rapid, we would be well advised to adopt procedures similar to those the FDA uses to test new drugs and medical instruments for efficacy, safety, and side effects. This would not slow the development of the technology so much as it would slow its widespread use, and allow for a retreat before too great a dependency developed.

    Workalikes might be restricted to a limited sphere of interactions; they might require stringent licensing to use the Internet or telephone networks. There might be a one-day-delay rule for distributing output from superhumans that only had a beginner’s license, to address some of the “program trading” hazards. For some fledgling workalikes, we might want the computer equivalent of a biohazard containment for lethal viruses.

The search for truth is predatory. It is a literal hunt, a conquest. There is that exemplary instant in Book IV of The Republic, when Socrates and his companions in discourse corner an abstract truth. They halloo, like hunters who have unearthed and run down their quarry.... [Even if enjoined from the scientific quest,] somewhere, at some moment, a man alone, a group of men addicted to the drug of absolute thought, will be seeking to create organic tissue, to determine the nature of heredity, to produce the cloud-chamber trail of quarks. Not for renown, not for the benefit of the human species, not in the name of social justice or profit, but because of a drive stronger than love, stronger than even hatred, which is to be interested in something. For its own enigmatic sake. Because it is there.
George Steiner, 1978
These considerations do start to raise the question: “Just what is the proper business of this society of ours?” Making humans “all they can be” by removing shackles and optimizing upbringing? Or making computers better than humans? Maybe we can do both (as in those teacher’s assistants), but during our headlong rush to produce superhumans — a major form of tinkering — we need to protect humanity.

    The ways in which we could introduce caution, however, are constrained by the various drives that are leading us to this intelligence transition:

   Curiosity is my own primary motivation — how does intelligence come about? — and surely that of many computer scientists. But even if because-it-is-there curiosity were somehow hobbled (as various religions have attempted), other drives lead us in the same direction.

   The technology version of the Red Queen Effect. If we don’t improve the technology, someone else will. Historically, losing technological races has often meant being taken over (or eliminated) by your competitor — and on the scale of nations, not just companies. Given the doubling-every-eighteen-months growth curves in the speed and memory of digital computers over the last several decades, the rest of the world probably wouldn’t slow down even if the majority decided to do so. As the phrase goes in the biotech business, “They’ll just do it offshore.”

   Serious environmental threats to civilization demand the development of huge computing resources: our climate can “shift gears” in only a few years when a rearrangement of ocean currents occurs. Such a sudden flip now (and global warming appears to make a flip more likely, not less) would set off World War III, as everyone (not just the Europeans) struggled for Lebensraum. It is urgent, for our own survival, that we learn how to postpone those climatic gearshifts. The big computers needed for global climatic modeling are very similar to what one would need for simulating brain processes.

I don’t see realistic ways of buying time to make this superhuman transition at a more deliberate pace. And so the problems of superintelligent machines will simply need to be faced head-on in the next several decades, not somehow postponed by slowing technological progress.

    Our civilization will, of course, be “playing God” in an ultimate sense of the phrase: evolving a greater intelligence than currently exists on earth. It behooves us to be a considerate creator, wise to the world and its fragile nature, sensitive to the need for stable footings that will prevent backsliding — and keep that house of cards we call civilization from collapsing.

Only two centuries ago, we could explain everything about everything, out of pure reason, and now most of that elaborate and harmonious structure has come apart before our eyes. We are dumb.... We have discovered how to ask important questions, and now we really do need, as an urgent matter, some answers. We now know that we cannot do this any longer by searching our minds, for there is not enough there to search, nor can we find the truth by guessing at it or by making up stories for ourselves. We cannot stop where we are, stuck with today’s level of understanding, nor can we go back. I do not see that we have any real choice in this, for I can see only the one way ahead. We need science, more and better science, not for its technology, not for leisure, not even for health and longevity, but for the hope of wisdom which our kind of culture must acquire for its survival.
Lewis Thomas, 1979
