William H. Calvin, "Brains and the World of 2025." Potomac Institute for Policy Studies conference, Washington DC, 27 June 2000.

[Figure: a pyramidal neuron of cerebral cortex.  The axon exiting at bottom travels long distances, eventually splitting up into 10,000 small branchlets to make synapses with other brain cells.]

Brains and the World of 2025


William H. Calvin
University of Washington, Seattle


I have two tales to tell.  Well, maybe three.  Presently I will get to the good news concerning what knowledge of the brain will do to education and training by 2025, making adults far more mentally capable than most of us are now, with all its implications for warfare and other less lethal forms of competition.

            But first, the bad news.  Bill Joy of Sun Microsystems, in his WIRED article a few months ago, had a lot to say on the subject of robotic trends.  And at the recent Highlands Forum meeting, which some of you probably attended, Bill Joy was quoting something that I said in response to his article, so let me tell the whole story now in fuller detail.

            It’s easy to see the slower insidious trends about robots competing with humans, but I tend to agree with what Danny Hillis said about them in his response to Bill Joy:  We’ll get used to it.  I’d worry a lot more about the low-percentage possibilities, about the sociopathic outliers rather than the main trends.

And so the bad news bit is about how brain malfunctions will cause big trouble - not because the average brain will have changed for the worse, but because of changes in what just one person will be able to accomplish with rare but bad motivations.

            As Bill Joy introduced the subject, “We’re lucky Kaczynski was a mathematician, not a molecular biologist.”  Most of the mentally ill are harmless.  Those who aren’t are usually too dysfunctional to do organized harm; if they plan it, they get distracted along the way.  But I’d point out that there is a class of people with what we call “delusional disorder” who can remain employed and pretty functional for decades, despite their jealous-grandiose-paranoid-somatic delusions.  Like the Unabomber, they usually don’t seek medical attention, making their numbers hard to estimate.  Even if they are only one percent in the population (and I’ve seen much higher estimates), that’s still about 20,000 delusional people in the DC area, and mostly untreated.

            You don’t have to be mentally ill to do malicious things, and few of the mentally ill do them, but in an anonymous big city one percent of sociopaths or delusional types is sure different from one percent in a small town, where everyone knows one another and word gets around to humor them and take precautions.  And bare fists are sure different from the same person equipped with technology.  Note I didn’t say high technology, though that makes the situation much worse.

            As we’ve seen several times in recent years, it doesn’t take special skills or intelligence to create the fuel-oil-and-fertilizer bombs. You don’t even have to be delusional, just mean.  Many fewer of the delusional will have the intelligence or education to intentionally create sustained or widespread harm from biological or info-network terrorism.  Even if that number is only one percent of the one percent, it’s still a pool of 200 high-performing sociopathic or delusional techies just in the DC area alone (and you can scale that up to the nation - 20,000 - and to the world).  That bad things happen so infrequently from the few Unabomber or terrorist types among them isn’t too comforting when the capability of that tiny fraction is growing enormously with our technologies.  Small relative numbers still add up to enough absolute numbers to be worrisome, given the biological terrorism that will become possible, given the high densities at which people crowd themselves these days.  Ditto for network mischief and terrorism.
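The scaling in the paragraph above is simple back-of-envelope arithmetic; here is a minimal sketch, with the population figures (roughly 2 million for the metro DC area, an assumed 200 million for the nation) treated as rough assumptions rather than census data:

```python
# Back-of-envelope scaling of rare-but-capable outliers, as in the talk.
# Population figures are illustrative assumptions, not census data.

def outlier_pool(population: int, prevalence: float,
                 capable_fraction: float) -> tuple[int, int]:
    """Return (delusional pool, technically capable subset of that pool)."""
    delusional = int(population * prevalence)
    capable = int(delusional * capable_fraction)
    return delusional, capable

dc_area = 2_000_000       # assumed metro DC population
nation = 200_000_000      # assumed, order of magnitude

print(outlier_pool(dc_area, 0.01, 0.01))   # → (20000, 200), matching the text
print(outlier_pool(nation, 0.01, 0.01))    # → (2000000, 20000)
```

The point of the sketch is that even two successive one-percent filters leave an absolute pool large enough to worry about once each member's technological reach grows.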

            More generally, the issue here is managing rare-but-high-risk situations, and few people in our society know the art and science of high-risk management.   In the military and in medicine, people are at least trained in the subject at the higher levels, but politicians and the public often look only at average trends, not the outliers.

            You can provide your own military examples, but let me say that there are many everyday examples in medicine of where you worry more about the off-chance, those situations where treatments must be promptly started for a condition that isn’t the most likely disease.  You might have a set of signs and symptoms and lab findings for which there is an 80% chance of a mild outcome, but with a 20% chance of a disease that can kill you in six months, like lymphoma.  And with the diagnosis still uncertain, you are likely to get treated with chemotherapy “just in case.”  The physician who waits until “dead certain” of a diagnosis may well wind up with a dead patient.
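The "treat just in case" logic above is an expected-loss comparison; a minimal sketch follows, where only the 20 percent figure comes from the text and all loss values are invented purely for illustration:

```python
# Decision-under-uncertainty sketch of the "treat just in case" logic.
# Only p_serious=0.20 comes from the text; the loss values are invented.

def expected_loss(p_serious: float, loss_if_treated: float,
                  loss_untreated_serious: float,
                  loss_untreated_mild: float) -> dict:
    """Compare expected losses of treating now vs. waiting for certainty."""
    treat_now = loss_if_treated                      # chemo side effects either way
    wait = (p_serious * loss_untreated_serious
            + (1 - p_serious) * loss_untreated_mild)
    return {"treat_now": treat_now, "wait": wait}

# 20% chance of lymphoma; death is far costlier than chemo side effects.
losses = expected_loss(p_serious=0.20, loss_if_treated=10,
                       loss_untreated_serious=100, loss_untreated_mild=0)
print(losses)   # waiting carries the higher expected loss (20 vs. 10)
```

Whenever the rare outcome is costly enough, acting on incomplete knowledge beats waiting until "dead certain," which is the physician's point and, Calvin argues, the civilization's.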

            So it may be with civilizations.  Considering the minority possibilities, and acting on still-incomplete knowledge, is likely going to be the name of the game.  Fatalism, which is essentially what Bill Joy is describing among the technologists, is one way of dealing with the future. But with it may go an abdication of responsibility for seeing that things go on and that everything turns out well.

            The future is arriving more quickly than it used to.  But our reaction times are as slow as ever, given how long it takes (thirty years so far in the case of the greenhouse) to educate politicians and to build political consensus for taking effective action.  This makes foresight more important than ever.  We should have started in on greenhouse gases a hundred years ago.  I will come back to this point about improving foresight, if time permits.

And now for the good news, at least good if we don’t flub it and let big inequalities appear in the world for someone to take advantage of by making war rather than, well, trade pacts.

            Intelligence gets framed in surprisingly narrow terms most of the time, as if it were some more-is-better number that could be assigned to a person in the manner of a batting average.  It has always been measured by a varied series of subtests that give you a sampling of spatial abilities, verbal comprehension, word fluency, number facility, inductive reasoning, perceptual speed, deductive reasoning, rote memory, and the like.  In recent decades, there has been a tendency to talk about these various subtests as “multiple intelligences.”  Indeed, why conflate these abilities by trying to boil intelligence down to a single number like IQ?

            The short answer is that the single number seems to tell us something additional - while hazardous when overgeneralized, it's an interesting bit of information.  Here’s why:  Doing well on one kind of intelligence subtest never predicts that you'll do poorly on another; one ability never seems to be at the expense of another.  Rather, an individual who does well on any one test will often perform better than average on the other subtests.

            It's as if there were some common factor at work, such as test-taking ability.  The so-called “general factor g” expresses this interesting correlation between subtests.  The psychologist Arthur Jensen likes to point out that the two strongest influences on g are speed (such as how many questions you can answer in a fixed amount of time) and the number of items you can mentally juggle at the same time.  Analogy questions (A is to B as C is to [D, E, F]) typically require at least six concepts to be kept in mind simultaneously and compared.  Some people can manage eight, but those who can manage only five don’t score very well on questions that presuppose six.

            Together, they make high IQ sound like a job description for a high-volume short-order cook, juggling the preparation of six different meals at the same time, hour after hour.  Or a barista at your favorite Seattle-coffeeshop clone, keeping all the verbal orders straight (I wish someone would tell me what making airplanes, espresso, software, grunge, and higher education all have in common; maybe they all thrive on the Seattle rain).  Most of us can manage a half-dozen mental objects at the same time, even seven-digit phone numbers.  We have trouble with ten digits, and certainly 15-digit international numbers.  We have to write the longer ones down to keep from scrambling them.

            Intelligence is much more than just IQ score, and intelligence itself is only one aspect of a mix of any individual’s important personal traits.  Thus high IQ might be without significance for the kind of lives that most people lead, or important only on those rare occasions demanding a particularly quick versatility.  In our society, a high IQ is usually necessary to perform well in very complex or fluid jobs (for example, being a doctor), and high IQ is an advantage in moderately complex occupations (secretarial or police work), but it provides little advantage in work that requires only routine, unhurried decision making or simple problem solving (for example, for clerks and cashiers, whose reliability and social skills are likely to be far more important than their speed-and-numbers IQ).

            And high IQ may be helpful, but not essential, for professors and the like.  We can usually think about a problem for a week before deciding.  We don’t have to make a lot of decisions in a 15-minute office visit like a physician does.  And sometimes the quality is higher when you dwell longer on the problem.  Speed can be addictive.  We’ve all seen physicians in a hurry to move on to the next problem; they like things snap-snap-snap.  “Keeping busy?” seems to be their favorite greeting to one another.

            While it’s only part of intelligence in our everyday sense of the word, let me focus on the two things.  The speed of decision making.  And the number of mental balls you can keep in the air at the same time.  Now, suppose we could enhance either or both through training, particularly in youth - training based on a scientific knowledge of how the brain actually works.  Very little education or training is currently based on any scientific knowledge of brain mechanisms.  But that will change by 2025.  Let me give you some examples from medicine.

            Two centuries ago, medicine was largely empirical; vaccination for smallpox was invented in 1796, and the circulation of the blood was known, but that’s about it.  Digitalis was used for congestive heart failure because someone tried it and it worked.  Or so they thought.  They often thought incorrectly, and it took forever to get rid of bleeding and purging.  Generations of physicians were convinced they worked, but now we know they just prevented suffering by killing patients more quickly than the disease would have done.  Even when they guessed correctly, these physicians didn’t know how their treatment worked, the physiological mechanism of the drug action or vaccination.  When you do understand mechanism, you can make all sorts of improvements or guess far better schemes.  That’s what adding science gets you.

            One century ago, medicine was still largely empirical and only maybe a tenth had been modified by science; now it is more like half and half.

            Today, education and training are largely empirical and only slightly scientific.  We know some empirical truths, like the traditional advice to briefing officers (tell them what you’re going to tell them, then tell them, then tell them what you’ve just told them).  But we don’t know how the successful ones are implemented in the brain, and thus we don’t know rational ways of improving on them.   In another quarter-century, education will be half empirical and half scientific in the manner of medicine today.  We will not only know more about what works, but we’ll know why it works and where in the brain it does it.  We will thus know when to rehearse, when to present new material, when to play around, and how to consolidate progress.  Imagine teaching machines used to tailor the presentation of new material and the repetition of old.  Just extrapolate from your favorite touch-typing software to, say, learning how to juggle a dozen mental objects at once instead of only a half-dozen.

            I suggest that this will change everything.  We'll look the same as newborns but as adults we will be far more capable.

            Consider those major elements of IQ that I mentioned, sheer speed and the number of mental balls that you can keep in the air.  Suppose we knew, on a scientific basis from brain mapping during training or some such, how to speed up quality decision making.  Or how to expand the number of balls in the air, so we could juggle 15 elements at once.  Just imagine an officer or physician able to juggle twice the average number of mental objects.  They’ll think their way around the competition.  And if they also did it twice as fast, you’ll be able to spot them from across the room, as a real standout.  You all know how parents fight to get their kids into good schools now.  Well, just imagine the competition for a school that had the techniques to routinely turn out such standouts.  And imagine the differences between a country with such an educational system for youth and adults, and a country still stuck with empirical techniques like today’s.

            There will be some other hazards as well, when we start educating effectively for speed and numbers.  We won’t have cured the more-is-better fallacy.  Physicians in ancient Greece were familiar with the tendency of patients to think that if some was good, more was surely better -- thereby converting a useful medicinal herb into a deadly poison.

            Faster is better could have its problems, too, especially in early childhood education.  Speaking in complete sentences by 11 months of age, rather than 36 months, could turn out to preempt some later developmental stages.  We might see a result much like that blues lament, about when “Your mind is on vacation while your mouth is working overtime.”

            You have to remember that fast is always relative to some other speed, and to a particular point of view.  Both are well illustrated by the story of the two guys being chased by the bear - you don’t have to run faster than the bear, only faster than the other guy, and only for a little while.  And so all these mental improvements are just as capable of destabilizing the world as more familiar forms of technology.  We have to worry about it for all the same reasons as why we worry about the information technology have-nots.


Now [time permitting] a few final words about foresight, and how badly we’re going to need it.  Maybe those improvements in mental juggling ability will help more people think productively so as to head off trouble before it happens.  My favorite example involves what we need to do about abrupt climate change.  And no, I don’t mean the gradual greenhouse warming over the next few centuries, ramping up like a dimmer switch.  I mean mode-switching climate change, like an ordinary light switch flips from one stable state to another, very quickly.  Such abrupt climate change is what could set off a series of wars that could, in about a decade’s time, dramatically reduce the human population and largely destroy civilization as we know it.

            Since the greenhouse warming was discovered about thirty years ago, the geophysics folks have also discovered that climate can abruptly cool in just several years, ushering in a madhouse century of climate jitters.  Climate has several stable states, just like a flickering fluorescent light tube, and the transitions can be very fast, abruptly cooling and abruptly rewarming.  They’ve happened frequently, every few thousand years on average (though not in the last 11,000 years).  And what we know of the underlying oceanographic and atmospheric mechanisms suggests that greenhouse warming could set off one of these disastrous cool droughts.  As Ray Pierrehumbert recently said in the Proceedings of the National Academy of Sciences (the February 15 issue, where you’ll find a series of articles on the subject), "If one is tugging on the dragon's tail with little notion of how much agitation is required to wake him, one must be prepared for the unexpected."

            Our world is now so crowded that it is dreadfully dependent on high agricultural productivity and efficient supply lines.  Were a cold flip to happen now, much of civilization would be ruined over the next decade in a series of wars over the remaining food.  With death all around, life would become cheap.  Millions of humans would survive, but what was left would be a series of small countries under despotic rule, all hating their neighbors for good reasons because of mutual atrocities during the downsizing.  Recovery from such antagonistic gridlock would be very slow, Balkanization writ large.

            Surprisingly, these large fast climate changes may be easier to prevent than a greenhouse warming or an El Niño.  Maybe.  That’s the good news.  I reported on this several years ago in a cover story that I wrote for The Atlantic Monthly called “The great climate flip-flop.”  While such abrupt coolings and worldwide droughts have happened hundreds of times in the past several million years, their Gulf-Stream-shutdown mechanism is so simple and so localized that it is, at least, conceivable that we could stabilize it over the next century and so buy time, heading off much trouble.

            Climate scientists don’t talk in such terms yet, and indeed they have not had to cope with managing high-risk situations because, heretofore, they’ve had few interventions to offer.  As that changes, thanks to the magnificent science now being done on climate, they’ll need some appreciation for how to manage situations described 2,500 years ago by the Hippocratic aphorism, “Life is short, the art long, opportunity fleeting, experience treacherous, judgment difficult.  The physician must be ready [to act].”  It’s a good motto for politicians and the military too, especially that “opportunity fleeting” bit.

APPENDIX:  The Future’s Intelligence Test for Humans

It has been 8,200 years since an abrupt cooling of even half the magnitude of the big ones (the Little Ice Age starting about 700 years ago was an order of magnitude smaller).  Everything we know about the geophysical mechanisms (see Broecker 1999, Calvin 1998a) suggests that another abrupt cooling-drought could easily happen – indeed, it looks as if our greenhouse-effect warming could trigger an abrupt cooling in several different ways.

            Because such a cooling would occur too quickly for us to make readjustments in agricultural productivity and associated supply lines, it would be a potentially civilization-shattering affair, likely to cause a population crash far worse than those seen in the wars and plagues of history.

            The best understood part of the flip-flop tendencies involves what happens to the warm Gulf Stream waters, with the flow of about a hundred Amazon Rivers, once they split off Ireland into the two major branches of the North Atlantic Current.  They sink to the depths of the Greenland-Norwegian Sea and the Labrador Sea because so much evaporation takes place (warming the cold dry winds from Canada before they eventually reach Europe, which is why Europe is unlike Canada and Siberia) that the surface waters become cold and hypersaline – and therefore more dense than the underlying waters.  At some sinking sites, giant whirlpools 15 km in diameter can be found, carrying surface waters down into the depths.  Routinely flushing the cold waters in this manner makes room for more warm waters to flow far north.

            But this flushing mechanism can fail if fresh water accumulates on the surface, diluting the dense waters.  The increased rainfall that occurs with global warming causes more rain to fall into the oceans at the high latitudes.  Ordinarily, rain falling into the ocean is not a problem -- but at these sites in the Labrador and Greenland-Norwegian Seas, it can be catastrophic.  So can meltwater from the nearby Greenland ice cap, especially when it comes out in surges.  By shutting down the high-latitude parts of this “Nordic Heat Pump,” these consequences of global warming can abruptly change Europe’s climate.  If Europe’s agriculture reverted to the productivity of Canada’s (at the same latitudes but lacking a preheating for winds off the Pacific Ocean), 22 out of 23 Europeans would starve.
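The "22 out of 23" figure is the arithmetic of a productivity ratio; a minimal sketch, assuming (as the text implies) that Europe's agricultural output is roughly 23 times what Canada manages at the same latitudes:

```python
# The "22 out of 23 Europeans would starve" figure follows from a
# productivity ratio.  The 23x ratio is the one implied by the text,
# not an independently measured value.

def survivable_fraction(productivity_ratio: float) -> float:
    """Fraction of the population the degraded output could still feed."""
    return 1.0 / productivity_ratio

frac = survivable_fraction(23)
print(f"fed: {frac:.1%}, starving: {1 - frac:.1%}")  # fed: 4.3%, starving: 95.7%
```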

            The surprise was that it isn’t just Europe that gets hit hard.  Most of the habitable parts of the world have similarly cooled during past episodes.  Another failure would cause a population crash that would take much of civilization with it, all within a decade.

            Ways to postpone such a climatic shift are conceivable, however — cloud-seeding to create rain shadows in critical locations is just one possibility.  Although we can't do much about everyday weather or greenhouse warming, we may nonetheless be able to stabilize the climate enough to prevent an abrupt cooling.

            Devising a long-term scheme for stabilizing the flushing mechanism has now become one of the major tasks of our civilization, essential to prevent a drastic downsizing whose wars over food would leave a world where everyone hated their neighbors for good reasons.  Human levels of intelligence allow us both foresight and rational planning.  Civilization has enormously expanded our horizons, allowing us to look far into the past and learn from it.  But it remains to be seen whether humans are capable of passing this intelligence test that the climate sets for us.
