Only now are 
			we beginning to sense a hinge in history, a time when the earth is 
			beginning to move beneath our feet.  In the near term [of an 
			exponential increase in technology affecting human capabilities], 
			the world could divide up into three kinds of humans.  One would be 
			the Enhanced, who embrace these opportunities.  A second would be 
			the Naturals, who have the technology available but who, like 
			today’s vegetarians, choose not to indulge for moral or esthetic 
			reasons.  Finally, there would be The Rest – those without access to 
			these technologies for financial or geographic reasons, lagging 
			behind, envying or despising those with ever-increasing choices.  
			Especially if the Enhanced can easily be recognized because of the 
			way they look, or what they can do, this is a recipe for conflict 
			that would make racial or religious differences quaintly obsolete.
			
			
– Joel Garreau, 2003
			
			 
			
			There is no 
			more powerful law of nature than that of unintended consequences.  
			However carefully we might think out the possible results of our 
			actions, they are likely to give rise to difficulties we hadn’t 
			thought of – and fixing secondary problems of our own making is 
			often more difficult than addressing those presented to us by 
			Nature.
			
			
– Ian Tattersall, 2002
			
			  
 
		
		
 
		
		
14

The Future of the Augmented Mind

A combustible mixture of ignorance and power?
		 
		
		
		 
		
		
		Are there 
		genetically engineered prospects 
		of super genius – maybe even a do-it-ourselves successor species to 
		Homo sapiens sapiens?  Or some lash-up of computers and people that 
		will create a hybrid?
		
		
		            Not soon, I suspect – and to get to the long term, 
		civilization has to first survive the short term.  (For example, how 
		would we avoid genocides in the transition period?)   I have already 
		expressed, in chapter nine, my doubts about the course-plotting skills 
		of the “not ready for prime time” prototype that escaped prematurely 
		from the African cradle and took over the world.
		
		
		            
		Most popular 
		speculations about mind’s future (say, those mind-liberated-from-body 
		enthusiasms featured in slick magazines advertising cooler-than-cool 
		gadgets) lack any cognitive perspective on the limitations of our 
prototype.  Nor do such articles offer any anthropological perspective on the evolutionary trajectory we’ve been on, or any neurobiological concern with the stability problems already evident in seizures and mental illness.  And the implications of the growing divide between The Enhanced and The Rest are seldom addressed.
		
		            This 
		brief history of the mind is not the place to critique these blinkered 
		views of the future, nor the place to sketch out why genetic 
		manipulations may not turn out to be quite what we would hope.  By 
		offering a rather low-tech glimpse of the future, I can focus on 
		patching up the prototype and addressing what we will build atop it.  
		This final chapter is not a speculation about specific futures, though I 
		will mention some cautions.
		
		            
		Scientists have no special wisdom in areas of ethics and stewardship, 
just a strong tradition of skepticism and theorizing.  And, in common with technologists such as Bill Joy, we sometimes have the knowledge
		from which to give early warnings of trouble ahead.  That’s different 
		from knowing what to do about it, or where wisdom lies.
		
		            But we 
do have a major responsibility:  to get across to more general audiences
		and policymakers the nature of the challenges coming up, so that 
		initiatives can properly focus on the long term.  
		Some things really 
		are important and it has proven easy to lose sight of them in the 
		gee-whiz version of the future.
		
		            
Unfortunately, no one seems able to discuss the future of the mind without marveling at the exhilarating speed of technology and the power of human-computer hybrids.  They do tend to grab attention.  So perhaps I should first offer some perspective on the setting in which mind’s future might unfold – graying, speedups, wireheads, pumping up IQ, and emergent properties more generally – before I tackle the properties of future mind per se.
		
		 
		
		
One of the more pervasive changes in the average mind may occur simply because the average mind becomes much older.  More experienced and less prone to beginners’ mistakes, perhaps, but likely far less energetic and adventuresome.
		
		
		            I was recently asked to imagine a day in my life, assuming 
		that I lived until the year 3000.  
		I thought about 
		walking around the neighborhood on my artificial hips and my artificial 
		knees. I suspected that my mood would be sad. I would be thinking about 
		how we had dug ourselves into a deep hole, and that it would be hard to 
		escape from it.
		
		            On 
		average, people turn more and more conservative with age, self-centered 
		and disinclined to rock the boat.  By the year 3000, I would be 
		experiencing the loneliness of the last liberal.
		
		 
		
		
Extrapolating speed is easy to do, and exponential growth curves abound – such as the Moore’s-law doubling of memory chip capacity and processor speeds every few years over recent decades.
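To make the “explode into infinity” intuition concrete, here is a minimal sketch of compounding doublings in Python (the 18-month doubling period is an illustrative assumption, not a figure from the text):

```python
# Relative capacity after steady doubling, starting from 1.0.
# The 18-month doubling period is an assumed, illustrative value.
def capacity(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

for y in (3, 15, 30):
    print(f"after {y:2d} years: {capacity(y):,.0f}x")
# after  3 years: 4x
# after 15 years: 1,024x
# after 30 years: 1,048,576x
```

A curve like that looks gentle at first and then nearly vertical, which is why linear extrapolation from the present fails so badly.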
		
		            Many 
		technology aficionados suggest that as exponential technological change 
		continues to accelerate into the first half of the twenty-first century, 
		“it will appear to explode into infinity, at least from the limited and 
		linear perspective of contemporary humans,” as the inventor Raymond 
		Kurzweil says of “the singularity,” resulting in “technological change 
		so rapid and profound that it represents a rupture in the fabric of 
human history.”  Like Malthus and the population bomb, it’s possible – unless something else slows it down before it rips the social fabric, or unless someone uses the new technological capabilities, in the manner of Aum Shinrikyo’s sarin attack in Tokyo, to hasten some rapture-promoting Armageddon.
		
		            Things 
		usually happen in the meantime to interrupt or re-channel exponential 
		growth.  Recall those 1950s extrapolations of leisure time, where the 
		wage earner would get a shorter and shorter work week.  Back then, one 
		salary often supported an average family of five.  And what happened to 
		this vision of enhanced leisure time?  Now it takes two salaries to 
		support a family of four.  Someone forgot about the Red Queen principle
		in Lewis 
		Carroll’s Through the Looking Glass.
		
		 
		
		    
		 “Well, in our country,” said Alice, still panting a little, 
		“you’d generally get to somewhere else — if you ran very fast for a long 
		time, as we’ve been doing.”
		     “A slow sort of country!” said the [Red] Queen. “Now, here, 
		you see, it takes all the running you can do, to keep in the same 
		place. If you want to get somewhere else, you must run at least twice as 
		fast as that!”
		
		 
		
		            
		
		Our technological 
		lifestyle has already begun changing so rapidly that a person’s working 
		lifetime has to include one career after another after another.  But 
		only the best and the brightest can cope with such frequent retraining, 
		leaving most of the population constantly battered by insecurity and 
		lack of job satisfaction, alienated by the situation in which they are 
		trapped.
		
		            So 
		there are big problems with the speed of change.  The faster you go, the 
		more easily a pothole can spin you out of control.  But as I earlier 
		noted, it usually isn’t speed by itself that matters; it is relative 
speed.  Army generals love the blitzkrieg concept of overrunning the
		enemy before it can effectively react.  People (and societies) can 
		overrun themselves, too.  Your own speed of travel must be judged 
		relative to your speed of reaction.  If you can’t shorten your reaction 
		time commensurate with your faster speed, or cannot find better 
		headlights to give you a longer view, then things that would give you no 
		trouble at normal speeds can give you a lot of trouble at higher speeds.
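A back-of-envelope stopping-distance calculation shows why reaction time, not raw speed, is what bites (a sketch with assumed, illustrative numbers: a 1.5-second reaction time and a constant braking deceleration of 7 m/s²):

```python
REACTION_S = 1.5   # assumed driver reaction time, seconds
BRAKE_MS2 = 7.0    # assumed constant braking deceleration, m/s^2

def stopping_distance_m(speed_kmh: float) -> float:
    v = speed_kmh / 3.6                 # km/h -> m/s
    reaction = v * REACTION_S           # ground covered before braking starts
    braking = v ** 2 / (2 * BRAKE_MS2)  # v^2 / (2a), basic kinematics
    return reaction + braking

for kmh in (50, 100, 150):
    print(f"{kmh} km/h: about {stopping_distance_m(kmh):.0f} m to stop")
# ~35 m at 50 km/h, ~97 m at 100 km/h, ~187 m at 150 km/h
```

Doubling the speed nearly triples the distance your “headlights” must cover; tripling it demands more than five times the view ahead.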
		
		            And 
		reaction times are only the simplest application of speed differences.  
		When innovation operates in one area faster than in related ones, when 
		one is nimble and the other is ponderous, things can bend and break. 
		Contrast the speed of technological advance with that of societal 
		consensus.  It took less than a decade to put together an atomic bomb, 
		once the physics was understood.  It took only four years after the 
		first free web browser appeared until there were a billion web pages 
		worldwide, indexed by a free search engine that even children could use 
		without formal instruction.  Compare those technological spurts to one 
		of the best examples of political progress, short of shaky revolutions: 
		the European Union took 50 years, two generations of politicians, to get 
		to the stage where the Euro started to circulate.  And that’s fast for 
		consensus building.
		
The science and technology of mind may move far more quickly than we can create consensus about what to do – say, for ensuring that things go on, that individuals’ independence and upward mobility in society are maintained, that costs and benefits are equitably distributed, and that stratification does not develop in society and harden into a caste system.
		
		            We do 
		get better headlights from science – something increasingly important as 
things speed up – but political reaction times are so slow that the better view hasn’t helped much.  It has been clear for at least 30 years that
		greenhouse problems were upon us, but denial still reigns in high places 
		(so too does ignorance of science).  The same nimble-ponderous problem 
		will likely be seen as the future of the mind unfolds and its societal 
		implications become manifest.
		
		 
		
		
		Wireheads are technology enthusiasts 
who want to plug their brains into a computer.  I’m not one of them, but three decades ago I was among the neurophysiologists who regularly
		wiretapped individual nerve cells in the brains of awake patients and 
		used computers to analyze the meaning of their conversations among 
		themselves.  (It’s in Conversations with Neil’s Brain.)  The 
		technology of doing that hasn’t improved very much since then.  Each 
		time some press release offers yet another photograph of a brain slice 
		in a dish with wires attached, I get phone calls from reporters wanting 
		to know about this exciting new prospect.
		
		            What I 
		tell them is that I have been seeing such “news” every few years since 
		1964 and that, while it is nearly always competent state-of-the-art 
		research, it hasn’t yet provided much of a foundation from which to wire 
		a wirehead – that the problems of a permanent interface are 
		considerable, that bandwidth is still narrow (about at the Morse Code 
		stage), and that we still don’t know how to “talk” the language of the 
		brain well enough to get across conceptual-level stuff.
		
		            The 
		problems of doing something useful for an awake human from a 
carry-it-around computer are severe in the cognitive realm, though more approachable in assisted-movement applications.  Someday we’ll
		see a cognitive adjunct (probably in the area of supplementing memory, 
		as in the “Brain in a Biceps” described in my The River That Flows 
		Uphill where all that silicon memory does double-duty as a silicon 
		augmentation of a breast or biceps).  But, until we solve the 
		interface problems, I’d bet on educational technologies.  I also think 
		improved education in the early years is what will influence far more 
		people than either genetic engineering or the wirehead approaches.
		
		 
		
		
“All the children are above average” in Lake Wobegon.  Though the line was intended as humor, you’d think from much of the speculation that we were heading for such an impossible utopia.  Everyone tends to talk about the average as a stand-in for the whole bell-shaped distribution, but the average need not change for important effects to occur – some of which we will surely see long before anything shifts the whole curve to the right.
		
		            
Indeed, the bell curve of IQ probably began to spread early in the last century, when coeducational colleges began to supplement separate men’s and women’s colleges.  In the typical years for finding partners and settling down, the handy choices became those who could also pass the college’s entrance exam.  Where there would otherwise have been more genetic mixing between average and high (as in Israel, say, where everyone spends several years in the army before college, and many mates are found in that wider, less selected population), the high-entry-requirement college tends to produce more high-high matings.
		
It isn’t necessarily changing the average of the population.  When the cream rises to the top, what’s left behind is thinner – but there’s no change in the milk bottle’s average fat content.  There are things like that, with no attempt at manipulating average intelligence, which nonetheless affect its distribution.
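A toy simulation makes the point that pairing-up patterns can widen the spread while leaving the mean untouched (a sketch under simple assumptions: each child’s score is the parents’ midpoint plus random noise; all numbers are illustrative):

```python
import random
import statistics

random.seed(1)
parents = [random.gauss(100, 15) for _ in range(100_000)]

def next_gen(pool):
    # Pair adjacent entries; child = midparent average + noise.
    return [(a + b) / 2 + random.gauss(0, 10)
            for a, b in zip(pool[0::2], pool[1::2])]

random.shuffle(parents)
random_kids = next_gen(parents)          # random pairing
assort_kids = next_gen(sorted(parents))  # like pairs with like

for name, kids in (("random", random_kids), ("assortative", assort_kids)):
    print(f"{name:11s} mean={statistics.mean(kids):6.1f}"
          f"  sd={statistics.stdev(kids):5.1f}")
# Both means stay near 100; the assortative standard deviation
# comes out noticeably larger (about 18 versus about 15 here).
```

The mean is preserved because averaging parents never moves the center; only the pairing rule changes how far the tails reach.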
		
		
		            
The Luke Effect (the biblical “the rich get richer”) is likely to occur when parents try to ensure the best for their baby via germline gene technology and elective abortions of low-IQ fetuses.  But the same exaggeration of differences can happen with education via private schools – even if the public schools catch up a decade later, you will still have an ongoing disparity, The Enhanced always well ahead of The Rest.  So for at least two reasons, the IQ average just doesn’t tell you what you need to know.
		
		
		            
There’s a third big reason:  variability is the real stuff of evolution.  There isn’t a standard type, but always a highly variable population of unique individuals, and the distribution is capable of being biased this way and that.  But it isn’t easy to engage in what we call “population thinking.”  It takes years to train biologists to think in terms of a variable population of unique individuals instead of a type (Platonic essences are what we default to).  Without achieving that viewpoint, it can be difficult to appreciate how evolution occurs over time.  “He who does not understand the uniqueness of individuals is unable to understand the working of natural selection,” said Ernst Mayr.
		
		 
		
		
		There are many surprises that emerge 
		from the intrinsically unpredictable aspects of the world.  Small 
		changes can produce big effects, and the future of mind will surely 
		include some novelties arising from self-organization tendencies.
		
Some examples from the simpler world of geology:  when there is a high throughput of energy, things like convection cells form.  Whenever you see cliffs of basalt with hexagonal columns, remember that there are emergent properties lurking in anything that produces a steep gradient.  A steep hot-to-cool gradient may be what causes the hexagons to form (you can see the same cells in cooking oatmeal, when you forget to stir the pan), and I can imagine softwiring emergents in the brain arising from intensively engaging in structured activities at earlier ages.  The steeper gradients between rich and poor may likewise produce surprising social effects unless we do something about the rich getting richer.  Emergents are hard to predict, and they are not all beneficial – gridlock, for example.
		
		            But 
		many of the surprises aren’t even emergents.  Mentally we can invent a 
		scheme that makes a difficult task easy.  Consider trying to move a big, 
		heavy object like the box containing a new refrigerator.  You cannot 
		lift it.  You cannot easily push it across the floor because the 
		friction is considerable.  It seems an impossible task.  But in trying 
		to maneuver it, you discover the technique of walking it across the 
		room:  you tilt it back onto the near edge, then rotate it around one 
		corner, then the other corner, “walking” it across the room with little 
		more effort than it takes to keep it on edge.  It is much like sailing 
		into the wind at an angle,  tacking back and forth – something else that 
		initially seems counterintuitive.  Our mental life often makes such 
		shortcut discoveries on more abstract levels, and we might get even 
		better at it in the future.
		
		            
		So much for the 
		general principles and the gee-whiz settings that usually distract us.  
		What about the properties of future mind per se?
		
		 
		
		Speculation is 
		never a waste of time.  It clears away the deadwood in the thickets of 
		deduction.
		
		   – the novelist 
		Elizabeth Peters, 2000
		
		 
		
		
		Where does mind go from here, 
		its powers extended by science-enhanced education and new tools – but 
		with its slowly evolving gut instincts still firmly anchored to the ice 
		ages?  With the mental hardware still full of the shortcomings of the 
		rough-around-the-edges prototype, the preliminary version that evolution 
		never got a chance to further improve before the worldwide distribution 
		occurred?
		
		            
		Perhaps we will come to manage our minds better, as some Buddhists 
		aspire to do, learning how to put things on the back burner and revisit 
		them, rather than worrying continuously.
		
		            
		Evolutionary psychiatry will probably give us some alternative ways of 
		looking at common disorders – and perhaps offer us some paths to 
		improving mental functioning.  Mood disorders like depression are, of 
		course, the most common of problems, exerting a pervasive bias on what 
interests us.  Of all the mental illnesses, depression is the easiest to appreciate as an evolutionary adaptation, because it seems widespread in mammals.  A wounded animal holes up, doesn’t move much, loses its
		appetite and interest in sex – all of which makes perfect sense if there 
		is a broken bone or wound to heal.
		
		            While 
		the mood disorders do not seem related to higher intellectual function 
		in basic mechanism, the behaviorally modern transition and its 
		imperfections may have made mood disorders more common in settings not 
		part of the usual evolutionary rationale for depression, triggering the 
		reclusive reaction when there is nothing broken.  Stress is also a 
		possible setup for depression.  Adding a layer of intellect modifies 
this simple picture; some think that many clinical cases of depression involve the pending failure of something to which the patient is emotionally committed, and that depression serves to help disengage.
		
		           
		
		Hallucinations, delusions and dementia are also the stuff of our 
		nighttime dreams, where we see cognitive processes freewheeling without 
		much quality control.    Fortunately our movement command centers are 
		inhibited during most dreams, so we don’t get into trouble acting on 
		dangerous nonsense.  When similar incoherence is the best thing our 
		consciousness has available during waking hours, it may be part of a 
		thought disorder.
		
		            
		Obsessions and compulsions are lower-level stuff, somewhere between 
		thought and mood disorders, but they seem related to agendas and their 
		updating.  Our cat may have some instincts for keeping track of the 
		field mice in the back yard and revisiting each of our closets every few 
		weeks, but behaviorally modern humans have very versatile, structured 
		agendas.  Yet we sometimes get stuck and fail to move on, with respect 
to both thought (obsessions) and action (compulsions).  Just imagine the “Give him…” advertising agency able to craft an ad that causes an otherwise normal person to obsess over the product and then go out and compulsively buy it.
		
		 
		
		
		Real mental illness 
		may be prominent in the future of the mind.  Much as I think that we 
		will learn how to treat mental illness better, one must also consider 
		that the number of cases might rise at the same time, perhaps just 
		because of the speed and complexity of everyday life.  And new types of 
		malfunction may appear.
		
		            There 
		have been several disturbing trends of late.  The age of onset of major 
		disorders has been dropping, so that psychoses are seen earlier and 
		earlier in life.  And the number of cases of autism has greatly 
		increased in the last two decades.  It takes a long time to sort out the 
		causes of such things and, to some extent, we must suspend judgment.  
		But the possibility of society having to cope with much more mental 
		illness is real.
		
		            Our 
society has also changed in ways that make us much more vulnerable to even rare acts of mental illness.  As Bill Joy said of the Unabomber, “We’re
		lucky Kaczynski was a mathematician, not a molecular biologist.”  Most 
		of the mentally ill are harmless.  Those who aren’t are usually too 
		dysfunctional to do organized harm.
		
		            But 
		I’d point out that there is a class of patients with what is called 
		“delusional disorder.”  They differ greatly from schizophrenics and 
		untreated manic-depressives because they can remain employed and pretty 
		functional for decades, despite their jealous-grandiose-paranoid-somatic 
		delusions.  Like the sociopaths, they usually don’t seek medical 
attention, making their numbers hard to estimate.  Even if they are only 1 percent of the population (and I’ve seen much higher estimates),
		that’s a lot of mostly untreated delusional people.  You don’t have to 
		be mentally ill to do malicious things, and few of the mentally ill 
		perform them, but 1 percent of sociopaths or delusional types in an 
		anonymous big city is sure different from 1 percent in a small town 
		where everyone knows one another and can keep tabs on the situation.  
		And bare fists are quite different from the same person equipped with 
		technology.
		
		            As 
		we’ve seen several times in recent years, it doesn’t take special skills 
or intelligence to create fuel-oil-and-fertilizer bombs.  Far fewer will have the intelligence or education to intentionally create sustained or widespread harm using high-tech means.  But even if that is only 1 percent of the 1 percent, it’s still a pool of 3,400 high-performing sociopathic or delusional techies in California alone – and you can scale that up to the nation and the world.  That bad things
		happen so infrequently from the few Unabomber types among them isn’t too 
		comforting when the capability of that tiny fraction is growing 
		enormously.  Small relative numbers still add up to enough absolute 
		numbers to be worrisome.  With cults, you may get some warning.  But 
		here we are talking about the escalating power of the often suicidal 
		one-person cult where deterrence doesn’t work.
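The arithmetic behind that 3,400 figure is worth making explicit (a back-of-envelope sketch; the roughly 34 million California population it implies is an assumption consistent with the text’s result, and close to circa-2003 estimates):

```python
population = 34_000_000           # assumed California population
one_percent = population * 0.01   # the delusional/sociopathic fraction
capable = one_percent * 0.01      # the 1 percent of those with the skills
print(int(capable))               # 3400
```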
		
		            
		Fatalism, which is essentially what Bill Joy describes among the 
		technologists, is one way of dealing with the future.  But with it may 
		go an abdication of responsibility for seeing that things go on and that 
		everything turns out well.
		
		            It is 
		important to distinguish between science and technology here, because 
		the connection between them is so often oversold and simplified.  Even 
		without more new science, technology would continue producing many new 
		ways in which society could get in trouble from unintended 
consequences.  (The explosion of the World Wide Web didn’t require any
		new science.)  Science, whatever it may also do in occasionally seeding 
		new technology, tends to provide society’s long-range headlights.  It is 
		science that can detect instabilities before they cause collapse.  And 
		in combination with such technological marvels as massively parallel 
		computers, science can provide the working models that show us the 
		probable consequences of our actions, an important ingredient of ethical 
		choices.
		
		            The 
		future is arriving more quickly than it used to, and, since our reaction 
		time is slowed by the necessary consensus building, it makes foresight 
		more important than ever.
		
		 
		
		
		Will we also shift mental gears again, 
		into more-and-faster – juggling more concepts simultaneously, making 
		decisions even faster?  As a mundane example, consider how we struggle 
with remembering even seven-digit telephone numbers – then imagine your grandchildren able to recall 15-digit numbers a day later, and even say them backwards.
		
		            It 
		probably doesn’t take genetic engineering to do this.  Better training 
		in childhood, based on understanding brains and childhood development 
		better – as in my softwiring examples for syntax and reading – could do 
		a great deal in preparing us to deal with more things at the same time, 
		to hold more agendas and revisit them, and to make decisions more 
		reliably.
		
		            Very 
		little education or training is currently based on scientific knowledge 
		of brain mechanisms.  But that will change in the next several decades.  
		To imagine what a difference it could make, consider the history of 
		medicine.
		
		            Two 
		centuries ago, medicine was largely empirical; vaccination for smallpox 
		was invented in 1796, and the circulation of the blood was known, but 
		scientific contributions were a tiny proportion of medicine.  Digitalis 
		was used for congestive heart failure because someone tried foxglove 
		extracts and they worked. 
		
		            
		Physicians often overgeneralized and it took forever to get rid of 
		bleeding and purging.  Generations of physicians were convinced that 
		bleeding worked, but now we know it just weakened patients more quickly 
		than the disease would have done – unless you were one of the few 
		patients who had an iron overload disease, where bleeding could be 
		lifesaving.  Purging works for acute poisoning but not much else.  A 
		“grain of truth” is often massively misleading.
		
		            Even 
		when they guessed correctly and avoided overgeneralization, these early 
		physicians didn’t know how their treatment worked, the 
		physiological mechanism of the drug action or vaccination.  When you do 
		understand mechanism, you can make all sorts of improvements and guess 
		far better schemes of intervention.  That’s what adding science gets 
		you.
		
		            One 
century ago, medicine was still largely empirical, and only maybe a tenth of it had been modified by science.  It wasn’t until 1896, for example, that
		Emil Kraepelin proposed the separation of the psychoses into 
		schizophrenia and manic-depressive types.
		
		            These 
		days, medicine is perhaps half empirical and half scientific (where you 
		know not only what works, but a lot about how and why 
		it works).  It is only a slight exaggeration to say that the transition 
		from an empirical to a semi-scientific medicine has doubled lifespan and 
		reduced suffering by half. 
		
		            Now 
		consider education.  Today, it is largely empirical and only slightly 
		scientific, much as medicine before 1800.  We know some empirical truths 
		about education but we don’t know how the successful ones are 
		implemented in the brain, and thus we don’t know rational ways of 
		improving on them.
		
		            Yet 
		once education has the techniques and technology to incorporate what is 
		being learned about brain plasticity and inborn individual differences, 
		we are likely to produce many more adults of unusual abilities, able to 
		juggle twice as many concepts at once, able to follow a longer chain of 
		reasoning, able to shore up the lower floors of their mental house of 
		cards to allow fragile new levels to be tried out, meta-metaphors and 
		beyond – the survival of the stable but on a higher level yet again.
		
		            We may 
expose students to common beginners’ mistakes in computer simulations, for example, so that they become sensitized to logical fallacies and hone their critical-thinking skills.  We
		already do advanced versions of this; medical students now learn the 
		consequences of not thinking ahead in simulated emergency-room 
		situations.  (“Because you didn’t order a CT scan an hour ago to check 
		for a bleeder in the brain, the patient’s hidden hemorrhage has now 
		progressed to the point of irreversible brain damage.  You missed the 
		window of opportunity to save the patient.  THE END.”  At least it’s not 
a real patient.)  Such simulation of common errors will trickle down to educating ten-year-olds about how advertising manipulates them; done in small groups, where repeatedly getting fooled causes some embarrassment, critical-thinking skills might become a more regular feature of the teenage mindset.
		
		            
		 Such education, 
		perhaps more than any of the imagined genetic changes, could make for a 
		very different adult population. We would still look the same coming out 
		of the womb, would still have the same genetics, but adults could be 
		substantially different.  A lot of the elements of human intelligence 
		are things that, while they also have a genetic basis, are malleable; we 
		ought to be able to educate for superior performance. 
		
		            I 
		think that as we move into a new generation of creative teachers 
		augmented by teaching machines to handle the more rote aspects, they 
		will tune into the individual’s weak points and strong points.  We will 
		have children coming out of the school system who will perform very 
		differently from the ones today – maybe not uniformly, but the high end 
		may be substantially higher.  We might bring up the bottom by more 
		timely interventions.
		
		            Maybe 
		those improvements in mental juggling ability will help many people 
		think more productively, so as to head off trouble before it happens.  
		Ethics might be a beneficiary of such improved foresight, and so might 
		stewardship.  
The amount of time we spend considering the possibilities, versus rushing to judgment, is one such trainable variable: with practice, you move away from beginner’s mistakes toward a much more nuanced view of things.  Our higher education pushes people in that direction, and science trains for skepticism, but there is quite a lot that we could do in childhood.
		
		            Will 
		only the rich get smarter, or will everyone’s children gain from the new 
		flowering of education?
		
		
		 
		
		
		What will happen to consciousness?  
		And to those related things called conscience and self-consciousness?  
		There are many subconscious aspects of mind operating in the background, 
		such as our agendas, but in the foreground is something much more 
		personal, the narrator of the life story capable of aspirations and 
		reflections, capable of great achievement and pathetic meanness.  To 
		some degree, we can invent — and daily reinvent — ourselves. 
		
		            And 
		what about higher consciousness, 
		you may ask?  I’m not sure what it is (you may have noticed that I tend 
		to talk instead about higher intellectual functions and the 
		decision-making process), but can we jack “it” up even higher?
		
		            A 
		great deal of our consciousness involves guessing well, as we try to 
		make a coherent story out of fragments.  The neurologist Adam Zeman 
		lumps it all into the search for meaning:  “Eye and brain run ahead of 
		the evidence, making the most of inadequate information – and, 
		unusually, get the answer wrong…. What we see resonates in the memory of 
		what we have seen; new experience always percolates through old, leaving 
		a hint of its flavor as it passes. We live, in this sense, in a 
		‘remembered present.’”   
		
		            The 
		neurologist Antonio Damasio speaks of an extended consciousness having 
		an enhanced level of detail and time span.  But note that this likely 
		could not be achieved without an equivalent in thought of syntax’s past 
		and future tenses and the long sentences made unambiguous by 
		structuring.  In short, Damasio’s extended consciousness needs syntax’s 
		structuring aspect, even without overt planning or speech, just to keep 
		mental life from blending everything like a summer drink.  And to keep 
		from getting muddled when more than maybe three concepts have to be 
		juggled at the same time.  Nor can you speculate about the future 
		without an ability to improve novel thoughts into something of quality.
		
		            To 
		come back to what I said at the beginning of this brief history of mind, 
		we tend to see ourselves situated as the 
		narrator of a life story, always at a crossroads between past and 
		future, swimming in speculation.  I think that some people today have a 
		lot of this sense of being a narrator-in-charge, while others have less 
		of the creative imagination needed to analyze the past and speculate 
		about the future.  In the future, we might see enhanced 
		conscience, with the higher-order emotions like embarrassment, envy, 
		pride, guilt, shame, and humiliation changing as well.
		
		
		
		            
		But at the high 
		end, what might pump us up even higher?  If our consciousness is a house 
		of cards, perhaps there are techniques, equivalent to bending the cards, 
		that will allow us to spend more time at the more abstract levels.  Can 
we shore up our mental edifices to build much taller “buildings,” or discover the right mental “steel”?
		
		
		 
		
		
		We could certainly use some help, 
		as we have some 
giant problems to solve soon, problems of vulnerability that civilization faces because of its very success.  Even if we manage to fix all the
		rough spots and augment the higher-order stuff, we will still need to 
		cope with two major products of higher intellectual function so far.  
		One is population size, associated with the metaphor “the bigger they 
		are, the harder they fall.” The second is relative cultural 
		speed, as in my earlier discussion about “speed kills” and “we need 
		better headlights.”
		
		Thanks to simple 
		planning applied to farming, population size has gone up about 
		6,000-fold since the beginning of agriculture.  This productivity is 
		what makes big cities possible, but we usually forget the unfortunate 
		consequences of size.  For example, if a mouse falls off a cliff, it is 
		likely to land and get up, shake itself, and scamper off into the 
		undergrowth.  An object the size of a dog that falls off a cliff is 
		likely to break half the bones in its body.  Anything the size of a 
		horse will splatter.  To apply this to civilization, recall the earth 
		scientists who say, “Earthquakes don’t kill people, but buildings do.” 
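The physical reasoning here is the square-cube law: the energy of a fall grows with an animal’s volume, while the bone cross-section that must absorb it grows only with area (a rough illustration; the relative body lengths below are made-up numbers):

```python
# Stress on landing scales like volume / area = length, so a
# creature 20 times longer takes roughly 20 times the stress.
for animal, length in (("mouse", 1), ("dog", 6), ("horse", 20)):
    impact_energy = length ** 3   # ~ mass ~ volume
    bone_area = length ** 2       # ~ load-bearing cross-section
    print(f"{animal:5s} relative stress: {impact_energy / bone_area:g}x")
```

The bigger the structure, the harder it falls – which is the point being made about cities and civilization.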
		
		
		            
		Lurches can come from many things that last longer than hurricanes and 
earthquakes, which are over in a day and localized enough that the rest of the country can bail you out.  But droughts can last for decades – far
		longer than the Dust Bowl of the 1930s in the United States – and affect 
		wide areas.  Some even last for centuries (North Dakota had one that 
		lasted 700 years).  
		
		            Will 
		the farmers still be able to support 70 times their own population if we 
		have a widespread drought?  What about droughts that are both 
		century-long and widespread?  Alas, five of the last 20 centuries in 
		North America have featured widespread droughts in the Great Plains and 
		West that each lasted for more than a hundred years.  So, just from the 
		paleoclimate records, the present century has at least a 25 percent 
		chance of suffering from drought conditions in which agriculture could 
		no longer feed our large cities.
		
Or what happens when an agricultural monoculture gets in trouble from a widespread disease, as happened in the Irish potato famine of the 1840s?  (Eliminating local seed varieties via the efficiency of centrally manufactured seed – already a problem, and one that genetically modified seed will make worse – may put too many eggs in one basket.)  Will the city populations quietly starve in place, or flee and further disrupt the agricultural areas?
		
		            What 
		if such a lurch were so widespread and long-lasting that it affected 
		much of civilization?  (Worldwide droughts – usually known under the 
		name of “abrupt cooling episodes” – have occurred many times, the 
		average interval being 3,000 years but with the most recent one 12,000 
		years ago.)  A collapse of civilization would not merely reduce world 
		population size to what it was a few centuries ago.  The attendant 
		genocides during downsizings might also take us into an 
		everyone-hates-their-neighbors hole from which it would be difficult to 
		escape.  The harder we fall, the deeper the hole we will create.
		
		            A 
collapse can be augmented by speed: stampedes can kill many more people than their direct cause could have done (urban panic was what Aum Shinrikyo tried, but failed, to stir up with its Tokyo subway attack).  Our speed of communication helps set us up for panics, where
		a lot of people head for the door at exactly the same time.
		
		            The 
economic sphere is probably just as vulnerable to abrupt impacts as climate: the 1997 currency crisis in Indonesia caused a lot of
		starvation even though food production was still working.  We can now 
		have widespread panics in the world’s economies, accelerated by having 
		24/7 markets. 
		
		            Our 
		transportation systems are now moving a lot of insects and viruses 
		around the world.  Sometimes they find a new niche and are off and 
		running.  It takes time to detect them and even longer to devise 
		effective strategies to contain the problem; this slowness of response 
		allows a major epidemic to establish itself.
		
		            The 
		number of people an epidemic directly kills may be only part of the 
		problem.  If you cannot get truck drivers to go into a contaminated 
		city, a lot of people die from starvation.  Lawlessness springs up and 
		amplifies the problem.  (Recall how even Baghdad hospitals were 
		inexplicably looted when the police disappeared for a few days in 2003.)
		
		            
		Gradual change (as in our notions of gradual greenhouse warming) seems 
		to be the default setting for our minds, even though evidence abounds 
		for whiplashes.  Some people assume that a free marketplace of ideas and 
		products will solve any problem, given enough time, without realizing 
		that many natural causes are more like a 1940 blitzkrieg invader than 
		like a 1916 back-and-forth battlefront.  We are very vulnerable to a 
		lurch, whether from climate, disease, or economic panic.  Yet we 
		continue to treat these problems as if simple extrapolation from 
		present-day conditions will suffice.  Sustainability must also encompass 
		surviving the lurches.
		
		            For a 
		lurch, only a lot of organized prevention will head off the consequences 
		– and defense is expensive, having to cover so many routes to collapse, 
		all at the same time.  (The generals say that offense is much easier 
		than defense because you get to choose the time and route.)  Judging 
		from the past, creeping climate could suddenly turn into a blitzkrieg 
		against civilization.  Some things are too important to be left to 
		on-the-fly improvisation and competition – and that now must include the 
		abrupt aspects of public health, economic stability, and climate change.
		
		 
		
		
		Though I’m generally an optimist, 
		it is easy for me to sound pessimistic when forced to list the hazards.  
It’s a fundamental asymmetry:  a pessimist can be much more concrete about the downside than an optimist can ever be about the upside, because the possibilities an optimist imagines will always seem fuzzy next to known dangers with a substantial track record.
		
		            Yes, 
		in the face of the “not ready for prime time” aspects of our intellects 
		that I earlier mentioned, we have some serious problems.  Yet much the 
		same could have been said in earlier periods – and civilization 
		nonetheless improved greatly in both technological and humanistic 
		terms.  As David Brin observed, “In two or three centuries our levels of 
		education, health, liberation, tolerance and confident diversity have 
		been momentously, utterly transformed.” We cannot neglect the creeping 
		trends and incipient lurches that endanger us, but we can also feel 
		hopeful, given our frequent ability to transcend our apparent 
		limitations, once we have a clear view of the challenges.  Fatalism is a 
		cheap copout.
		
		            We 
		need to shore up civilization’s foundations to deal with any type of 
		lurch, whether climatic or economic or epidemic.  And humanity has done 
		it before:  there is a famous example of shoring up your foundations 
		called the flying buttress, and it is emblematic of our situation today.
		
		            
Consider that prime example of the large-scale projects that western civilizations have undertaken in the past:  the amount of energy and labor – the percentage of the GNP, if you like – that went into building cathedrals, and then what it took a century later to retrofit them with flying buttresses.  This example from a thousand years ago gives us some perspective on the situation we face today, where we cannot even find the money to pay for high-quality public schools and long-term projects like coping with climate change, because we are so overcommitted to less essential things.
		
		            Like 
		some other commentators on the future, I think that we are heading into 
		a dangerous period – full of opportunity, but precarious.  We may not be 
		gods, but it is as if we were – in our impact on the world and 
		our own evolution – so perhaps, as Stewart Brand once said, we had 
		better get good at the god business.
		
		
		            It’s 
		not that we need to create a new successor species, a Homo sapiens 
		sapiens sapiens.  But we must become far more competent at managing 
		our situation, and become more conscientious about our long-term 
		responsibilities to keep things going.  Certainly, it is juvenile of us 
		to think that someone else is going to clean up after us, or pick us up 
		after we fall.
		
		