Puppetry vs. Creationism: Why AI Must Cross the Chasm


Histories & Futures, IEEE Intelligent Systems, September/October 2006

Rick Hayes-Roth, Naval Postgraduate School

Artificial intelligence began with an enthusiastic embrace of newly available computing machinery and the basic question of what kinds of problems we could solve with it. The first 50 years focused on programming computers to perform tasks that previously only humans could do. Then, people began comparing machines to humans as problem solvers, and the race was on to see where machines could match or even surpass human performance. Success in solving math word problems, winning checkers and chess championships, understanding natural language, and generating plans and schedules reinforced our efforts to build supercapable machines. I call these puppets, not to derogate the machines but to respect the importance of the programmers and builders who were actually responsible for their accomplishments.

From time to time, many of us have recognized the field’s rate-limiting factor under various names and viewpoints, such as the knowledge-acquisition bottleneck and the challenges of machine learning, system bootstrapping, artificial life, and self-organizing systems. Some have focused on efforts to create a large corpus of off-the-shelf knowledge that would enable the next puppet to stand on the shoulders of its predecessors. Mostly, however, these efforts have had limited success. The little bit of learning and adaptation they’ve demonstrated has paled in comparison to the puppeteers’ laborious inputs.

I believe we’re on a local maximum, making better and better puppets with no apparent increase in speed or acceleration. We’re stuck, and puppet making is a technology begging to be leapfrogged.

Problem and analysis

The best systems of our times have been mostly handcrafted by great engineers. These puppet makers have analyzed the task environments, knowledge requirements, and reasoning skills necessary for successful applications, and they’ve addressed them with better and better tools over time. This approach can work for any well-defined and sufficiently narrow task. If the puppets failed, the engineers would diagnose and debug the errors. They would determine what knowledge to add or modify, how to program it, and how to modify and rebalance the preexisting programs to accommodate the new performance without harming the parts that already worked well. Automation in adaptation, learning, and knowledge acquisition was very limited: a tiny fraction of the overall knowledge required, which the engineers mostly prepared manually.

We haven’t yet figured out how to make the puppets responsible for their own debugging and improvement. Because we’re mostly labor intensive, we’re on a curve of diminishing returns. Efforts to address this productivity decline through reusable knowledge bases have had limited success, chiefly because human engineers must comprehend the problem, analyze the knowledge requirements, determine how to adapt the available knowledge to the new application requirements, and conduct the essentially experimental cycle of modifying the implementation, then testing, diagnosing, refining the knowledge, and adapting the code. So, even though the puppets get more marvelous, the credit goes to the puppeteers. Moreover, the time intervals between significant improvements aren’t decreasing. The rate-limiting factor is the speed with which human engineers can change the puppets.

Vision and opportunity

On the ceiling of the Sistine Chapel, Michelangelo Buonarroti’s fresco shows the creation of Adam with a pregnant touch between the hand of God and the hand of Adam. Many interpret this as illustrating the simultaneous animating of Adam and the bestowal of free will. In AI jargon, Adam was granted autonomy and agency, as well as responsibility. This is a bit of a conundrum for a naïve, ignorant, or merely inexpert creature. Given the lack of knowledge and experience, such creatures are bound to commit errors. To thrive, they must incorporate a strong drive to improve as well as an effective process for continuous improvement.


Continuous improvement is widely taken for granted as the sine qua non of organizational excellence, but we can be pretty sure that the biblical creation story took a lot of this for granted. Nevertheless, the idea that creation would launch many sentient, autonomous, continuously improving, and responsible creatures suggests the way forward for AI. If we want our puppets to fulfill AI’s potential, we need to launch a new category of critters that have similar capabilities. Making better and better clockwork puppets won’t do it for us; we simply can’t evolve them fast enough.

Many researchers have gone after this idea by focusing on simple creatures that could adapt and evolve through relatively simple, mechanistic techniques. They’re making good progress, albeit low on the evolutionary tree. We should be launching our evolutionary efforts on a much higher plane of understanding and sophistication so that we can avoid the long process of recreating the kind of human knowledge that engineers now regularly transfer into puppets.

I believe the high-reward, low-risk approach is straightforward. We should shift most of our R&D to self-improving, high-competence, knowledge-intensive systems. We should launch a new age of creationism, borrowing from the creation of Adam to focus on sentient, intelligent, self-improving, and responsible agents.

Efficient thought as a blueprint

We want critters that can plan and act in the world with continuously improving results. These agents will make many decisions based on their beliefs about how the world works, which I term their “world model.” That model lets them interpret observations by instantiating their parameterized models to match the observations. In other words, the agents can perform analysis through synthesis. Their world model lets them predict likely outcomes of actions and dynamic processes by computing the implications of hypothetical model states. With this capability, the agents can choose promising plans by selecting those that lead to favorable predicted outcomes.
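To make the world-model idea concrete, here is a minimal Python sketch of the interpret-then-predict pattern the paragraph describes. It is an illustration only, not code from the article: the toy dynamics (a single velocity parameter), the class name WorldModel, and the helper choose_plan are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """Toy parameterized model of the world: position drifts by `velocity` each step."""
    velocity: float = 0.0

    def interpret(self, observations: list[float]) -> None:
        """Analysis through synthesis (illustrative): fit the parameter so the
        synthesized trajectory matches the observed positions."""
        if len(observations) >= 2:
            steps = len(observations) - 1
            self.velocity = (observations[-1] - observations[0]) / steps

    def predict(self, position: float, action: float, horizon: int = 3) -> float:
        """Project a hypothetical state forward: apply an action (an added velocity)
        and compute the implied position after `horizon` steps."""
        return position + (self.velocity + action) * horizon


def choose_plan(model: WorldModel, position: float, goal: float,
                candidate_actions: list[float]) -> float:
    """Select the action whose predicted outcome lies closest to the goal."""
    return min(candidate_actions,
               key=lambda a: abs(model.predict(position, a) - goal))


if __name__ == "__main__":
    model = WorldModel()
    model.interpret([0.0, 1.0, 2.0])        # inferred velocity = 1.0
    best = choose_plan(model, position=2.0, goal=11.0,
                       candidate_actions=[-1.0, 0.0, 1.0, 2.0])
    print(best)                             # 2.0: predicted position 2 + (1 + 2) * 3 = 11
```

The point is not the toy physics but the division of labor: the same model supports interpretation of observations, prediction of outcomes, and selection among plans.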

I call this entire cycle efficient thought [1].

Figure 1 illustrates it in eight steps numbered in a typical sequence, although most complex organizations perform all eight steps in parallel. The intelligent being (1) observes what’s happening in the environment, (2) assesses the situation for significant threats and opportunities, (3) determines what changes would be desirable, (4) generates candidate plans for making those changes, (5) projects the likely outcomes of those plans, (6) selects the best plan, and (7) communicates that plan to key parties before implementing it. Throughout the process, the intelligent being (8) validates and improves its model. The model supports all eight activities, although only steps 1, 2, 7, and 8 directly update and modify the model.
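Read as a control loop, the eight steps can be sketched in a few dozen lines. The following Python fragment is a hypothetical illustration of the sequence in Figure 1, using the same toy velocity dynamics as the sketch above; every helper name here (assess_situation, generate_candidate_plans, and so on) is an assumed stand-in for domain code, not an interface from the article.

```python
def observe(env):                               # (1) observe the environment
    return env["position"]

def assess_situation(position, goal):           # (2) assess threats and opportunities
    return {"error": goal - position}

def determine_desired_changes(situation):       # (3) decide what should change
    return situation["error"]

def generate_candidate_plans(desired, believed_velocity):    # (4) propose candidate actions
    return [desired - believed_velocity, 0.0, desired]

def project_outcome(believed_velocity, position, action):    # (5) project likely outcomes
    return position + believed_velocity + action

def select_best(projections, goal):             # (6) pick the plan with the best prediction
    return min(projections, key=lambda a: abs(projections[a] - goal))

def communicate(plan):                          # (7) announce the plan before acting
    print(f"executing plan: adjust velocity by {plan:+.1f}")


def run_agent(goal=10.0, steps=4):
    env = {"position": 0.0, "true_velocity": 1.0}   # the real world, unknown to the agent
    believed_velocity = 0.0                          # the agent's initially wrong world model
    for _ in range(steps):
        position = observe(env)                                             # (1)
        situation = assess_situation(position, goal)                        # (2)
        desired = determine_desired_changes(situation)                      # (3)
        candidates = generate_candidate_plans(desired, believed_velocity)   # (4)
        projections = {a: project_outcome(believed_velocity, position, a)
                       for a in candidates}                                 # (5)
        plan = select_best(projections, goal)                               # (6)
        communicate(plan)                                                   # (7)
        env["position"] += env["true_velocity"] + plan                      # act in the world
        believed_velocity = (env["position"] - position) - plan             # (8) validate and improve
    print(f"final position: {env['position']:.1f}, learned drift: {believed_velocity:.1f}")


if __name__ == "__main__":
    run_agent()
```

Step 8 is the shift the article argues for: the loop improves its own model from observed outcomes rather than waiting for a puppeteer to repair it.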

The singularity and its risks

Futurologists, science fiction writers, and other visionaries often foresee events. Although no one has an accurate crystal ball, many visions are ultimately realized. Several long-range forecasts about AI have proved true. We have champion game players, autonomous vehicles, mobile robots, expert systems, speech transcription systems, and so forth. In other cases, practice has come up short, or the visions themselves have turned out to be absurd.

Most of the visions for self-modifying and self-improving AI have been a bit scary or a bit shallow. Much discussion has occurred around the concept of the singularity, first explained by Vernor Vinge [2] and now a theme popularized by Ray Kurzweil [3], among others (for example, see www.aleph.se/Trans/Global/Singularity). Roughly, the singularity is a point in history when technology accelerates beyond human capacity to master it. With regard to AI, this could mean that computers learn and communicate with one another faster than they can with humans. At that point, they might not be willing to slow down or engage further. In such a case, the machines would reach a kind of escape velocity, enabling them to leave human culture behind.

I’m suggesting that we should actively seek to create the capabilities underlying such a possibility, because the potential gains are exponentially greater than what traditional puppetry can produce. But this surely entails risks, as readily suggested in most of the scary movies about rogue AI and robots gone haywire.

We obviously have limited insight into both the positive and negative capabilities of self-improving AI systems. Although the road will be somewhat long, we ought to consider ways to identify and mitigate the risks before they afflict us. Fortunately, many people interested in the singularity and future visions of robots have given these issues serious consideration. We’ll want to incorporate their ideas and related technologies into the new creationism agenda. This will help prevent predictable problems and provide additional insurance against the unforeseen.

Although I consider these risks serious, I think we can de-emphasize or defer them a bit. To get out of the puppetry business will require a major shift in investment, orientation, and technology. By focusing mostly on the required new capacities for model-based learning and improvements in operational contexts, we put first things first.


[Figure 1. Efficient thought employs eight key functions supported by a world model: (1) observe, (2) assess situation, (3) determine desired changes, (4) generate candidate plans, (5) project likely outcomes, (6) select best alternative plan, (7) communicate and implement chosen plan, and (8) validate and improve model.]


Like the master puppet makers of the classic folktales, AI engineers have built some marvelous machines. These machines are surely valuable, but our puppetry won’t cross the chasm separating us from a world of artificially intelligent creatures. To get there, we need a singularity of artificial creationism, where we launch artificial beings that can adapt, learn, and evolve. We need to emphasize the development of continuously self-improving systems that interact with and perform tasks in the physical world. Creation of those systems will mark a singularity in the punctuated evolution of artificial intelligence.

References

1. R. Hayes-Roth, Hyper-Beings: How Intelligent Organizations Attain Supremacy through Information Superiority, Booklocker.com, 2006; www.hyper-beings.com.

2. V. Vinge, “Essay on the Singularity,” presented at VISION-21 Symp., 1993; http://mindstalk.net/vinge/vinge-sing.html.

3. R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Viking, 2005.



Rick Hayes-Roth is a professor in the Information Sciences Dept. at the Naval Postgraduate School in Monterey, California. Contact him at [email protected].
