
Page 1: Introduction to AI - Second Lecture

Introduction to AI – 2nd Lecture
1950’s – The Inception of AI

Wouter Beek
[email protected]

15 September 2010

Page 2: Introduction to AI - Second Lecture

OVERVIEW OF THE 1950’S
Part I

Page 3: Introduction to AI - Second Lecture

1948 - Information Theory

• Shannon 1948, A Mathematical Theory of Communication

• Source: thought → message
• Transmitter: message → signal
• Channel: signal → signal’ – because of noise
• Receiver: signal’ → message’
• Destination: message’ → thought’

Page 4: Introduction to AI - Second Lecture

Information Entropy

• Quantifies the information contained in a message.
• Discrete random variable X with possible outcomes x1, …, xn.
• Entropy: H(X) = −Σi p(xi) log2 p(xi).
• The base of the logarithm is 2 for bit encoding.
• We say that 0 · log 0 = 0 (limit).
• Coin toss: p(heads) = 1 − p(tails)

– If you know that the coin has heads on both sides, then telling you the outcome of the next toss tells you nothing, i.e. H(X) = 0.

– If you know that the coin is fair, then telling you the outcome of the next toss tells you the maximum amount of information, i.e. H(X) = 1.

– If you know that the coin has any other bias, then you receive information with entropy between 0 and 1.
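The three coin cases above can be checked directly from the entropy formula; a minimal sketch (the function name is illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p_i * log2(p_i), with 0*log(0) = 0 by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two-headed coin: the outcome tells you nothing.
print(entropy([1.0, 0.0]))   # 0.0
# Fair coin: one full bit per toss.
print(entropy([0.5, 0.5]))   # 1.0
# Any other bias: entropy strictly between 0 and 1.
print(entropy([0.9, 0.1]))   # ~0.469
```

Skipping the `p > 0` terms implements the limit convention 0 · log 0 = 0.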

Page 5: Introduction to AI - Second Lecture

1946 - ENIAC

• The first general-purpose, electronic computer.
• Electronic Numerical Integrator And Computer.
• Turing-completeness, i.e. able to simulate a Turing Machine.

Page 6: Introduction to AI - Second Lecture

1937 – Turing Machine

• Unbounded tape on which you can read/write 0 or 1.
• Reading/writing head can traverse Left or Right.
• Formalism for natural numbers: sequence of 1’s.
• Convention: start at the first 1 of the first argument; separate arguments by a single 0.
• Software for addition:

From state | Observe | Act | To state
START      | 1       | 0   | 1
1          | 0       | R   | 2
2          | 1       | R   | 2
2          | 0       | 1   | 3
3          | 1       | L   | 3
3          | 0       | R   | END
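The addition program above can be run on a small simulator; a sketch under the slide’s conventions (the table is from the slide, the simulator itself is illustrative):

```python
def run_tm(program, tape, head=0, state="START"):
    """Simulate a one-tape Turing machine.

    program maps (state, observed symbol) -> (action, next state),
    where an action is 'L'/'R' (move the head) or '0'/'1' (write a symbol).
    """
    tape = list(tape)
    while state != "END":
        action, state = program[(state, tape[head])]
        if action == "L":
            head -= 1
        elif action == "R":
            head += 1
            if head == len(tape):
                tape.append("0")      # extend the tape with blanks on demand
        else:
            tape[head] = action       # write '0' or '1' in place
    return tape, head

# The addition program from the slide's table.
ADD = {
    ("START", "1"): ("0", "1"),
    ("1", "0"): ("R", "2"),
    ("2", "1"): ("R", "2"),
    ("2", "0"): ("1", "3"),
    ("3", "1"): ("L", "3"),
    ("3", "0"): ("R", "END"),
}

# 2 + 3: unary arguments '11' and '111', separated by a single '0'.
tape, head = run_tm(ADD, "110111")
print("".join(tape[head:]).rstrip("0"))  # '11111', i.e. 5 in unary
```

The machine erases one 1 from the first argument and writes a 1 over the separator, leaving the sum as one unbroken block of 1’s under the head.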

Page 7: Introduction to AI - Second Lecture

1937 – Turing Machine – Computational implications

• Effective computation: a method of computation, each step of which is precisely predetermined and is certain to produce the answer in a finite number of steps.

• Church-Turing Thesis: Every effectively computable function can be computed by a Turing Machine.

Page 8: Introduction to AI - Second Lecture

1955 – Logic Theorist (LT)

• “Over Christmas, Al[len] Newell and I invented a thinking machine.” [Herbert Simon, January 1956]

• LT proved 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica.

• The proof for one theorem was shorter than the one in Principia.

• The editors of the Journal of Symbolic Logic rejected a paper about the LT, coauthored by Newell and Simon.

Page 9: Introduction to AI - Second Lecture

Philosophical Ramifications

• “[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.” [Simon]

• Opposes the traditional mind-body dichotomy:
– Plato’s Forms
– Christian concept of the separation of body and soul, due to St. Paul in the Letter to the Romans.
• Only under the following presupposition is Simon right:
– “A physical symbol system has the necessary and sufficient means for general intelligent action.” [Newell and Simon, 1976, Computer Science as an Empirical Inquiry]

Page 10: Introduction to AI - Second Lecture

Cartesian dualism

• Descartes: immaterial mind and material body are
– ontologically distinct, yet
– causally related
• Compare this to the Turing Test:
– behavioral or functional interpretation of thought, and
– mechanical devices will pass the test

Page 11: Introduction to AI - Second Lecture

1956 - Dartmouth Conference (1/2)

• Organizers: John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon

• “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” [Dartmouth Conference Proposal, 1955, italics added]

Page 12: Introduction to AI - Second Lecture

1956 - Dartmouth Conference (2/2)

• John McCarthy coined the term ‘Artificial Intelligence’ to designate the field.

• Newell and Simon showed off their LT.
• AI@50 / Dartmouth Artificial Intelligence Conference: The Next Fifty Years
– July 13–15, 2006
– 50th anniversary commemoration.

Page 13: Introduction to AI - Second Lecture

NEWELL, SHAW, SIMON 1958 – GPS
Part II

Page 14: Introduction to AI - Second Lecture

General Problem Solver (GPS)

• Problem: the perceived difference between the desired object and the current object.
• Objects: the things that the problem is about. (E.g. theorems in logic.)
– Differences exist between pairs of objects.
• Operator: something that can be applied to objects in order to produce different objects. (E.g. the rules of inference in logic.)
– Operators are restricted to apply to only certain kinds of objects.
– Indexed with respect to the differences that these operators are able to mitigate.
• Heuristic information: that which aids the problem-solver in solving a problem.
– Relating operators to differences between objects.
– What is and what is not (heuristic) information is relative to the problem at hand.
• Theory of problem solving: discovering and understanding systems of heuristics.
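The GPS vocabulary above (objects, differences, restricted operators indexed by the differences they mitigate) can be rendered as a small data structure. This is an illustrative sketch, not the actual GPS implementation; all names and the toy task environment are invented:

```python
from dataclasses import dataclass
from typing import Any, Callable, Set

@dataclass
class Operator:
    """An operator: restricted to certain kinds of objects, and indexed
    by the (categories of) differences it is able to mitigate."""
    name: str
    applies_to: Callable[[Any], bool]   # restriction to certain kinds of objects
    reduces: Set[str]                   # index: differences this operator mitigates
    apply: Callable[[Any], Any]         # produces a different object

def relevant_operators(operators, difference, obj):
    """Heuristic information: relate an observed difference to the
    operators that can mitigate it on this kind of object."""
    return [op for op in operators
            if difference in op.reduces and op.applies_to(obj)]

# Toy task environment: objects are integers, differences are named categories.
double = Operator("double", applies_to=lambda n: n > 0,
                  reduces={"too-small"}, apply=lambda n: 2 * n)
halve = Operator("halve", applies_to=lambda n: n % 2 == 0,
                 reduces={"too-large"}, apply=lambda n: n // 2)

print([op.name for op in relevant_operators([double, halve], "too-small", 3)])
# ['double']  ('halve' is indexed under a different difference, and 3 is odd)
```

The `reduces` index is what lets a problem solver avoid trying every operator against every object.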

Page 15: Introduction to AI - Second Lecture

General Problem Solver (GPS) – Generalized reasoning

1. Task environment vocabulary
– proper nouns → common nouns
2. Problem-solving vocabulary
3. Conversion between 1 and 2
4. Correlative definitions

Page 16: Introduction to AI - Second Lecture

Means-Ends Analysis (MEA) –Ancient Origin

“We deliberate not about ends, but about means. […] They assume the end and consider how and by what means it is attained, and if it seems easily and best produced thereby; while if it is achieved by one means only they consider how it will be achieved by this and by what means this will be achieved, till they come to the first cause, which in the order of discovery is last …”[Aristotle, Nicomachean Ethics, III.3.1112b]

Page 17: Introduction to AI - Second Lecture

Means-Ends Analysis (MEA) –Modern Origin

• “I want to take my son to nursery school. What’s the difference between what I have and what I want? One of distance. What changes distance? My automobile. My automobile won’t work. What is needed to make it work? A new battery. What has new batteries? An auto repair shop. I want the repair shop to put in a new battery; but the shop doesn’t know I need one. What is the difficulty? One of communication. What allows communication? A telephone . . . and so on.” [Newell and Simon]

• Principle of subgoal reduction.
– Part of every heuristic.

Page 18: Introduction to AI - Second Lecture

Means-Ends Analysis (MEA) –What it is

• A way of controlling search in problem solving.
• Input: current state, goal state.
• Output: sequence of operators that, when applied to the current state, delivers the goal state.
• The output is derived from the input by mapping operators onto differences.
– Presupposes a criterion of two states being the same.
– Presupposes a criterion of identifying the difference between two states.
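The input/output description above can be sketched as code. This is a minimal illustration, not Newell and Simon's implementation: it assumes a sameness criterion (`==`), a difference criterion passed in as a function, and a toy set of operators on integers, and it deliberately looks only one step ahead, which is the limitation discussed below:

```python
def means_ends(state, goal, operators, difference, max_steps=10):
    """Means-ends analysis, one-step-lookahead variant: repeatedly pick the
    operator that most reduces the difference between current and goal state.
    Returns the sequence of operators applied, or None on failure."""
    plan = []
    for _ in range(max_steps):
        if state == goal:                       # criterion of states being the same
            return plan
        diff = difference(state, goal)          # criterion of the difference
        best = min(operators, key=lambda op: difference(op(state), goal))
        if difference(best(state), goal) >= diff:
            return None                         # no operator reduces the difference
        state = best(state)
        plan.append(best)
    return None

# Toy example: states are integers, operators add or subtract.
ops = [lambda s: s + 1, lambda s: s - 1, lambda s: s + 5]
plan = means_ends(0, 7, ops, difference=lambda a, b: abs(a - b))
print(len(plan))  # 3  (+5, +1, +1)
```

Because each step greedily reduces the current difference, this variant can get stuck whenever a solution requires temporarily increasing the difference.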

Page 19: Introduction to AI - Second Lecture

Means-Ends Analysis (MEA)

• Presupposition to make MEA always succeed:
– For every two objects A and B there exists a sequence F1, …, Fn such that Fn(…F1(A)…) = B.
– Sequence F1, …, Fn is finite.
– In the search space of finite sequences, F1, …, Fn can be lifted out in finite time.
• The subject of search techniques.

Page 20: Introduction to AI - Second Lecture

Means-Ends Analysis (MEA) –Performance Limitations

• Brute force variant: has to try every operator w.r.t. every object.

• Include operator restrictions, i.e. an operator only works on specific kinds of objects.

• Include operator indexing w.r.t. categories of differences that they mitigate.– Requires a preliminary categorization of differences.

• Impose a partial order (PO) on the set of differences (or categories of differences).
– Prefer operators that reduce complex differences to simpler differences.
• But regardless of all this: it can only see one step ahead.

Page 21: Introduction to AI - Second Lecture

Planning

• Constructing a proposed solution in general terms before working out the details.
• Ingredients:
– T: original task environment (from object A to B).
– T’: abstracted task environment (from object A’ to B’).
– Translation from problems in T to problems in T’ (A to A’ and B to B’).
– Translation from solutions in T’ to plans for solutions in T (sequence of operators F’1, …, F’m to F1, …, Fn).

• In both T and T’ we use MEA.

Page 22: Introduction to AI - Second Lecture

Planning

• Presupposition to make planning always succeed:
– Every operator F in T is covered by an abstracted operator F’ in T’, such that for every object A in T there is an object A’ in T’ such that [if F’(A’) = B’, then F(A) = B].
– Under the condition that the problem can always be solved in T, of course…