Isaac Asimov had a pretty good handle on how to deal with smart machines: create them with built-in ethics. Thus Asimov's three laws of robotics, which make it possible to live with creatures smarter than ourselves.
The first law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The second: "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." And the third: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
The question is, can we build such laws into machinery? The answer may be important, for if Vernor Vinge is right, we're rapidly moving to a time when computer intelligence will be so far beyond our own that predicting the future will become impossible. Vinge calls this the Singularity.
Vinge, a retired computer scientist, writes superb science fiction of his own, his best being the 1992 novel "A Fire Upon the Deep." Since he first laid out the Singularity at a 1993 NASA symposium, his essay has been the source of endless speculation about the nature of intelligence and about the ramifications of current technology trends as they follow what seem to be unstoppable, exponential laws.
"An ultraintelligent machine could design even better machines," wrote the statistician I.J. Good in a passage Vinge quotes approvingly. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make ..."
The arrival of such an intelligence -- which Vinge thinks could occur before 2030, either when large computer networks "wake up" as an entity with superhuman intelligence or when researchers develop one in the lab -- would be a "singularity" because we would not be able to use human reason to see beyond it. As for using it to our advantage, Vinge says that it would not be our tool, "any more than humans are the tools of rabbits or robins or chimpanzees."
And what of the human race as the Singularity approached? Steven Spielberg explored one outcome in his movie "A.I.," in which a humanity being replaced by intelligent machines begins to realize its predicament and destroys the androids in almost orgiastic spectacles known as "flesh fairs." Another scenario might see the Asimov solution, with human rules being followed by computers even as the machines move well beyond our limited understanding.
But there is one school of thought -- count me in this camp -- that says human brains are too complex for the Singularity to form any time soon. This might be the case, as Vinge himself notes, if the brain were 10 orders of magnitude more powerful than today's computer hardware, rather than the three that some theorists have supposed.
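The gap between three and 10 orders of magnitude matters arithmetically. As a rough sketch -- assuming computing power doubles every 18 months, a Moore's-law pace that is my illustration rather than a figure from Vinge -- closing a three-order gap takes about a decade and a half of doublings, while a 10-order gap takes roughly half a century:

```python
from math import log2

def years_to_close_gap(orders_of_magnitude, doubling_years=1.5):
    """Years of steady doubling needed to gain the given number of
    orders of magnitude in raw computing power."""
    doublings = log2(10 ** orders_of_magnitude)  # e.g. 1000x is ~10 doublings
    return doublings * doubling_years

# A three-order gap closes in roughly 15 years of doublings;
# a ten-order gap takes about 50.
print(round(years_to_close_gap(3)))   # 15
print(round(years_to_close_gap(10)))  # 50
```

The point of the sketch is only that the disputed exponent, not the exponential trend itself, decides whether the timetable reads "soon" or "generations away."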
If that is the case, the Singularity will be delayed: computer performance will begin to level off before machines reach the kind of automated self-design that would make runaway improvement possible. The idea that we may be approaching a plateau in computer power is a useful corrective to the breathless talk of machine takeover that animates the artificial intelligence debate.
Instead of our artifacts "awakening," we may find ourselves awakening to the fact that technology remains a matter of tough choices based on human ethics. That's not a "singularity," but merely a call to stay in the game, carefully considering the consequences of all the technology we build. It's also a call for human dignity to reassert itself and stop waiting for an all-too-hypothetical day when machines will take moral choice out of our hands.
Paul A. Gilster, a local author and technologist, can be reached at firstname.lastname@example.org.