FUTURE OF INTELLIGENCE


Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that won't be constrained to work at the crawling pace of "real time." The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.

But could such beings really exist? Many thinkers firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they'll always lack some vital ingredient. They call this essence by various names--like sentience, consciousness, spirit, or soul. Philosophers write entire books to prove that, because of this deficiency, machines can never feel or understand the sorts of things that people do. However, every proof in each of those books is flawed by assuming, in one way or another, the thing that it purports to prove--the existence of some magical spark that has no detectable properties.

I have no patience with such arguments. We should not be searching for any single missing part. Human thought has many ingredients, and every machine that we have ever built is missing dozens or hundreds of them! Compare what computers do today with what we call "thinking." Clearly, human thinking is far more flexible, resourceful, and adaptable. When anything goes even slightly wrong within a present-day computer program, the machine will either come to a halt or produce some wrong or worthless results. When a person thinks, things constantly go wrong as well--yet this rarely thwarts us. Instead, we simply try something else. We look at our problem a different way, and switch to another strategy. The human mind works in diverse ways. What empowers us to do this?

On my desk lies a textbook about the brain. Its index has about 6000 lines that refer to hundreds of specialized structures. If you happen to injure some of these, you could lose your ability to remember the names of animals. Another injury might leave you unable to make any long-range plans. Yet another kind of impairment could render you prone to suddenly utter dirty words, because of damage to the machinery that normally censors that sort of expression. We know from thousands of similar facts that the brain contains diverse machinery.

Thus, your knowledge is represented in various forms that are stored in different regions of the brain, to be used by different processes. What are those representations like? In the brain, we do not yet know. However, in the field of Artificial Intelligence, researchers have found several useful ways to represent knowledge, each better suited to some purposes than to others. The most popular ones use collections of "If-Then" rules. Other systems use structures called 'frames'--which resemble forms that are filled out. Yet other programs use web-like networks, or schemes that resemble tree-like scripts. Some systems store knowledge in language-like sentences, or in expressions of mathematical logic. A programmer starts any new job by trying to decide which representation will best accomplish the task at hand. Typically, then, a computer program uses only a single representation, and if this should fail, the system breaks down. This shortcoming justifies the common complaint that computers don't really "understand" what they're doing.
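To make the contrast concrete, here is a minimal sketch (not from the article; all names are illustrative) of two of the representations mentioned above: a collection of "If-Then" rules, and a frame whose slots hold default values that can be overridden, like a partly pre-filled form.

```python
# "If-Then" rules: each rule pairs a condition (a test on known facts)
# with an action to take when the condition holds.
rules = [
    (lambda facts: "hungry" in facts, "seek food"),
    (lambda facts: "tired" in facts, "rest"),
]

def apply_rules(facts):
    """Return the action of every rule whose condition matches the facts."""
    return [action for condition, action in rules if condition(facts)]

# A frame: named slots with defaults, resembling a form to be filled out.
bird_frame = {"name": None, "covering": "feathers", "locomotion": "flies"}

robin = dict(bird_frame, name="robin")                        # accept defaults
penguin = dict(bird_frame, name="penguin", locomotion="swims")  # override one slot

actions = apply_rules({"hungry"})
```

Either representation alone is brittle: the rule system knows nothing a rule doesn't mention, and the frame can only describe what its slots anticipate, which is exactly the single-representation fragility described above.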

But what does it mean to understand? Many philosophers have declared that understanding (or meaning, or consciousness) must be a basic, elemental ability that only a living mind can possess. To me, this claim appears to be a symptom of "physics envy"--that is, they are jealous of how well physical science has explained so much in terms of so few principles. Physicists have done very well by rejecting all explanations that seem too complicated, and searching, instead, for simple ones. However, this method does not work when we're dealing with the full complexity of the brain. Here is an abridgment of what I said about understanding in my book, The Society of Mind. "If you understand something in only one way, then you don't really understand it at all. This is because, if something goes wrong, you get stuck with a thought that just sits in your mind with nowhere to go. The secret of what anything means to us depends on how we've connected it to all the other things we know. This is why, when someone learns 'by rote,' we say that they don't really understand. However, if you have several different representations then, when one approach fails you can try another. Of course, making too many indiscriminate connections will turn a mind to mush. But well-connected representations let you turn ideas around in your mind, to envision things from many perspectives until you find one that works for you. And that's what we mean by thinking!"

I think that this flexibility explains why thinking is easy for us and hard for computers, at the moment. In The Society of Mind, I suggest that the brain rarely uses only a single representation. Instead, it always runs several scenarios in parallel so that multiple viewpoints are always available. Furthermore, each system is supervised by other, higher-level ones that keep track of their performance, and reformulate problems when necessary. Since each part and process in the brain may have deficiencies, we should expect to find other parts that try to detect and correct such bugs.

In order to think effectively, you need multiple processes to help you describe, predict, explain, abstract, and plan what your mind should do next. The reason we can think so well is not because we house mysterious spark-like talents and gifts, but because we employ societies of agencies that work in concert to keep us from getting stuck. When we discover how these societies work, we can put them inside computers too. Then if one procedure in a program gets stuck, another might suggest an alternative approach. If you saw a machine do things like that, you'd certainly think it was conscious.
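The fallback idea above can be sketched in a few lines. This is a toy illustration with hypothetical strategy names, not an implementation of the Society of Mind: several problem-solving procedures are tried in turn, so that when one "gets stuck" (returns nothing), another takes over.

```python
def solve_by_lookup(problem):
    """First strategy: consult a small table of memorized answers."""
    table = {"2+2": 4}
    return table.get(problem)  # None signals "stuck"

def solve_by_search(problem):
    """Second strategy: fall back to brute-force evaluation."""
    try:
        return eval(problem, {"__builtins__": {}})
    except Exception:
        return None

strategies = [solve_by_lookup, solve_by_search]

def solve(problem):
    """Try each strategy in turn; accept the first that isn't stuck."""
    for strategy in strategies:
        answer = strategy(problem)
        if answer is not None:
            return answer
    return None  # every strategy got stuck
```

Here `solve("2+2")` succeeds by lookup, while `solve("3*7")` is "stuck" in the first strategy and recovered by the second, which is the whole point: no single procedure needs to be reliable if another stands behind it.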

The Failures of Ethics

This article bears on our rights to have children, to change our genes, and to die if we so wish. No popular ethical system yet, be it humanist or religion-based, has shown itself able to face the challenges that already confront us. How many people should occupy Earth? What sorts of people should they be? How should we share the available space? Clearly, we must change our ideas about making additional children. Individuals now are conceived by chance. Someday, though, they could be 'composed' in accord with considered desires and designs. Furthermore, when we build new brains, these need not start out the way ours do, with so little knowledge about the world. What sorts of things should our mind-children know? How many of them should we produce--and who should decide their attributes?

Traditional systems of ethical thought are focused mainly on individuals, as though they were the only things of value. Obviously, we must also consider the rights and the roles of larger-scale beings--such as the super-persons we call cultures, and the great, growing systems called sciences that help us to understand other things. How many such entities do we want? Which are the kinds that we most need? We ought to be wary of ones that get locked into forms that resist all further growth. Some future options have never been seen: Imagine a scheme that could review both your and my mentalities, and then compile a new, merged mind based upon that shared experience.

Whatever the unknown future may bring, already we're changing the rules that made us. Although most of us will be fearful of change, others will surely want to escape from our present limitations. When I decided to write this article, I tried these ideas out on several groups and had them respond to informal polls. I was amazed to find that at least three quarters of the audience seemed to feel that our life spans were already too long. "Why would anyone want to live for five hundred years? Wouldn't it be boring? What if you outlived all your friends? What would you do with all that time?" they asked. It seemed as though they secretly feared that they did not deserve to live so long. I find it rather worrisome that so many people are resigned to die. Might not such people be dangerous--those who feel that they do not have much to lose?

My scientist friends showed few such concerns. "There are countless things that I want to find out, and so many problems I want to solve, that I could use many centuries," they said. Certainly, immortality would seem unattractive if it meant endless infirmity, debility, and dependency upon others--but we're assuming a state of perfect health. Some people expressed a sounder concern--that the old ones must die because young ones are needed to weed out their worn-out ideas. However, if it's true, as I fear, that we are approaching our intellectual limits, then that response is not a good answer. We'd still be cut off from the larger ideas in those oceans of wisdom beyond our grasp.

Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste.

Further Reading

THE SOCIETY OF MIND, Marvin Minsky. Simon and Schuster, 1986.

MIND CHILDREN, Hans Moravec. Harvard University Press, 1988.

NANOSYSTEMS, K. Eric Drexler. John Wiley & Sons, 1992.

THE TURING OPTION, Marvin Minsky and Harry Harrison. Warner Books, 1992.