Introduction to Artificial Intelligence

Eugene Charniak, Brown University
Drew McDermott, Yale University
Addison-Wesley Publishing Company
Reading, Massachusetts · Menlo Park, California · Don Mills, Ontario · Wokingham, England · Amsterdam · Bonn · Sydney · Singapore · Tokyo · Madrid · Bogota · Santiago · San Juan

Contents

1 AI and Internal Representation 1
1.1 Artificial Intelligence and the World 1
1.2 What Is Artificial Intelligence? 6
1.3 Representation in AI 8
1.4 Properties of Internal Representation 11
1.5 The Predicate Calculus 14
1.5.1 Predicates and Arguments 15
1.5.2 Connectives 16
1.5.3 Variables and Quantification 18
1.5.4 How to Use the Predicate Calculus 18
1.6 Other Kinds of Inference 21
1.7 Indexing, Pointers, and Alternative Notations 22
1.7.1 Indexing 24
1.7.2 The Isa Hierarchy 25
1.7.3 Slot-Assertion Notation 27
1.7.4 Frame Notation 28
1.8 References and Further Reading 28
Exercises 29

2 Lisp 33
2.1 Why Lisp? 33
2.2 Lisps 34


2.3 Typing at Lisp 34
2.4 Defining Programs 37
2.5 Basic Flow of Control in Lisp 39
2.6 Lisp Style 42
2.7 Atoms and Lists 44
2.8 Basic Debugging 48
2.9 Building Up List Structure 53
2.10 More on Predicates 56
2.11 Properties 58
2.12 Pointers, Cell Notation, and the Internals (Almost) of Lisp 59
2.13 Destructive Modification of Lists 65
2.14 The for Function 70
2.15 Recursion 72
2.16 Scope of Variables 73
2.17 Input/Output 76
2.18 Macros 78
2.19 References and Further Reading 80
Exercises 80

3 Vision 87
3.1 Introduction 87
3.2 Defining the Problem 89
3.3 Overview of the Solution 93
3.4 Early Processing 97
3.4.1 Gray-Level Image to Primal Sketch 99
3.4.2 Convolution with Gaussians (Optional) 101
3.4.3 Virtual Lines 111
3.4.4 Stereo Disparity 113
3.4.5 Texture 120
3.4.6 Intrinsic Image 126
3.4.7 Cooperative Algorithms 131
3.4.8 Vertex Analysis and Line Labeling 137
3.5 Representing and Recognizing Scenes 143
3.5.1 Shape Description 144
3.5.2 Matching Shape Descriptions 150
3.5.3 Finding a Known Shape to Match Against 153
3.5.4 Describing a Seen Shape 155


3.6 References and Further Reading 159
Exercises 160

4 Parsing Language 169
4.1 Levels of Language 169
4.2 Expressing the Rules of Syntax 174
4.2.1 Why We Need Rules of Syntax 174
4.2.2 Diagraming Sentences 175
4.2.3 Why Do We Care about Sentence Structure? 177
4.2.4 Context-Free Grammars 179
4.2.5 Dictionaries and Features 183
4.2.6 Transformational Grammar (Optional) 188
4.3 Syntactic Parsing 194
4.3.1 Top-Down and Bottom-Up Parsing 194
4.3.2 Transition Network Parsers 197
4.3.3 Augmented Transition Networks (Optional) 206
4.3.4 Movement Rules in ATN Grammars (Optional) 210
4.4 Building an ATN Interpreter (Optional) 214
4.4.1 A Non-Backtracking ATN Interpreter 215
4.4.2 A Backtracking ATN Interpreter 218
4.4.3 Alternative Search Strategies 222
4.5 From Syntax to Semantics 223
4.5.1 The Interpretation of Definite Noun Phrases 223
4.5.2 Case Grammar and the Meaning of Verbs 230
4.6 When Semantics, When Syntax? 238
4.6.1 The Syntactic Use of Semantic Knowledge 238
4.6.2 The Organization of Parsing 241
4.7 References and Further Reading 245
Exercises 246

5 Search 255
5.1 Introduction 255
5.1.1 The Need for Guesswork 255
5.1.2 Search Problems 257
5.2 A Search Algorithm 259
5.3 Goal Trees 270
5.3.1 Formal Definition 271
5.3.2 Searching Goal Trees 274


5.3.3 Formalism Revisited 275
5.3.4 ATN Parsing as a Search Problem (Optional) 278
5.4 Game Trees (Optional) 281
5.4.1 Game Trees as Goal Trees 281
5.4.2 Minimax Search 286
5.4.3 Actual Game Playing 290
5.5 Avoiding Repeated States 294
5.6 Transition-Oriented State Representations (Optional) 297
5.7 GPS 300
5.8 Continuous Optimization (Optional) 306
5.9 Summary 309
5.10 References and Further Reading 310
Exercises 311

6 Logic and Deduction 319
6.1 Introduction 319
6.2 Using Predicate Calculus 321
6.2.1 Syntax and Semantics 321
6.2.2 Some Abstract Representations 323
6.2.3 Quantifiers and Axioms 333
6.2.4 Encoding Facts as Predicate Calculus 337
6.2.5 Discussion 343
6.3 Deduction as Search 344
6.3.1 Forward Chaining and Unification 345
6.3.2 Skolemization 349
6.3.3 Backward Chaining 351
6.3.4 Goal Trees for Backward Chaining 353
6.4 Applications of Theorem Proving 360
6.4.1 Mathematical Theorem Proving 361
6.4.2 Deductive Retrieval and Logic Programming 365
6.5 Advanced Topics in Representation 369
6.5.1 Nonmonotonic Reasoning 369
6.5.2 Using λ-Expressions as Descriptions (Optional) 371
6.5.3 Modal and Intensional Logics (Optional) 375
6.6 Complete Resolution (Optional) 378
6.6.1 The General Resolution Rule 379
6.6.2 Search Algorithms for Resolution 382


6.7 References and Further Reading 385
Exercises 386

7 Memory Organization and Deduction 393
7.1 The Importance of Memory Organization 393
7.2 Approaches to Memory Organization 396
7.2.1 Indexing Predicate-Calculus Assertions 396
7.2.2 Associative Networks 400
7.2.3 Property Inheritance 405
7.3 Data Dependencies 411
7.4 Reasoning Involving Time 416
7.4.1 The Situation Calculus 417
7.4.2 Temporal System Analysis 420
7.4.3 Time-Map Management 429
7.5 Spatial Reasoning 433
7.6 Rule-Based Programming 437
7.7 References and Further Reading 440
Exercises 442

8 Abduction, Uncertainty, and Expert Systems 453
8.1 What Is Abduction? 453
8.1.1 Abduction and Causation 453
8.1.2 Abduction and Evidence 454
8.1.3 Expert Systems 455
8.2 Statistics in Abduction 457
8.2.1 Basic Definitions 457
8.2.2 Bayes's Theorem 460
8.2.3 The Problem of Multiple Symptoms 461
8.3 The Mycin Program for Infectious Diseases 465
8.4 Search Considerations in Abduction 468
8.4.1 Search Strategy in Mycin 468
8.4.2 Bottom-Up Abduction 469
8.4.3 Search in Caduceus 471
8.5 Multiple Diseases 471
8.5.1 Multiple Diseases According to Bayes 471
8.5.2 Heuristic Techniques 472
8.6 Caduceus 474
8.7 Bayesian Inference Networks 477


8.8 Still More Complicated Cases 480
8.9 References and Further Reading 482
Exercises 482

9 Managing Plans of Action 485
9.1 Introduction 485
9.2 A Basic Plan Interpreter 489
9.3 Planning Decisions 499
9.3.1 Anticipating Protection Violations 499
9.3.2 Choosing Objects to Use 511
9.3.3 Temporally Restricted Goals 512
9.3.4 Planning by Searching through Situations 514
9.3.5 Shallow Reasoning about Plans 518
9.3.6 Decision Theory 519
9.4 Execution Monitoring and Replanning 524
9.5 Domains of Application 527
9.5.1 Robot Motion Planning 527
9.5.2 Game Playing 535
9.6 References and Further Reading 542
Exercises 543

10 Language Comprehension 555
10.1 Story Comprehension as Abduction 555
10.2 Determining Motivation 557
10.2.1 Motivation Analysis = Plan Synthesis in Reverse 557
10.2.2 Deciding Between Motivations 560
10.2.3 When to Stop 566
10.3 Generalizing the Model 567
10.3.1 Abductive Projection 567
10.3.2 Understanding Obstacles to Plans 569
10.3.3 Subsumption Goals and repeat-until 572
10.4 Details of Motivation Analysis (Optional) 573
10.4.1 Abductive Matching 573
10.4.2 Finding Possible Motivations 578
10.5 Speech Acts and Conversation 581
10.5.1 Speech Acts in Problem Solving 584
10.5.2 The Recognition of Speech Acts 586
10.5.3 Conversations 589


10.6 Disambiguation of Language 591
10.6.1 Referential Ambiguity and Context 592
10.6.2 Conversation and Reference 597
10.6.3 Word Sense Disambiguation 598
10.7 Where We Have Been, and Where We Are Going 601
10.8 References and Further Reading 603
Exercises 604

11 Learning 609
11.1 Introduction 609
11.2 Learning as Induction 610
11.2.1 The Empiricist Algorithm 611
11.2.2 Generalization and Specialization 614
11.2.3 Matching 621
11.2.4 A Matching Algorithm (Optional) 624
11.2.5 Analogy 626
11.2.6 Indexing for Learning 629
11.2.7 Assessment 633
11.3 Failure-driven Learning 635
11.4 Learning by Being Told 638
11.5 Learning by Exploration 642
11.6 Learning Language 650
11.6.1 An Outline of the Problem 650
11.6.2 Determining the Internal Representation 651
11.6.3 Learning Phrase-structure Rules 652
11.6.4 Learning Transformational Rules 654
11.7 References and Further Reading 659
Exercises 660

References 663
Index 687