
Page 1

Introduction to artificial intelligence

Chapter 1

Page 2

AI definitions

Thinking Humanly

"The exciting new effort to make computers think . . . machines with minds, in the full and literal sense." (Haugeland, 1985)

"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . ." (Bellman, 1978)

Thinking Rationally

"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)

"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly

"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)

"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally

"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)

"AI . . . is concerned with intelligent behavior in artifacts." (Nilsson, 1998)

Page 3

AI approaches

• Acting humanly: The Turing Test approach

– Compare intelligent behaviour with human behaviour

• Thinking humanly: The cognitive modeling approach

– Look inside to understand the human mind

• Thinking rationally: The “laws of thought” approach

– Computational logic to codify the “laws of thought”

• Acting rationally: The rational agent approach

– Behave so as to achieve the best expected outcome

Page 4

Foundations of AI

• Philosophy

• Mathematics

• Economics

• Neuroscience

• Psychology

• Computer engineering

• Control theory and cybernetics

• Linguistics

Page 5

History of AI

• The gestation of artificial intelligence (1943–1955)

• The birth of artificial intelligence (1956)

• Early enthusiasm, great expectations (1952–1969)

• A dose of reality (1966–1973)

• Knowledge-based systems: The key to power? (1969–1979)

• AI becomes an industry (1980–present)

Page 6

Agents

• Artificial Intelligence = analysis and design of rational agents.

• Agent = anything that can be viewed as:

– perceiving its environment through sensors and

– acting upon that environment through actuators.

Page 7

Agent function

• Agent function = a mapping from percept histories to actions. It completely defines the agent's behaviour.
– 𝑃 = set of percepts
– 𝑃∗ = set of sequences of percepts. The percept history is a sequence of percepts.
– 𝐴 = set of actions
– 𝑓_𝑎𝑔𝑒𝑛𝑡 : 𝑃∗ → 𝐴 is the agent function

• Agent function = external specification of the agent
– Mathematical (formal) description

• Agent program = internal implementation of the agent
– Concrete implementation based on a physical architecture

• The agent function can, in principle, be described by a very large table (a sketch follows below).
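The mapping 𝑓_𝑎𝑔𝑒𝑛𝑡 : 𝑃∗ → 𝐴 can be made concrete with a short sketch. This is a minimal Python illustration, not from the slides; the class name TableDrivenAgent and its interface are assumptions.

```python
# A minimal sketch of an agent function f_agent : P* -> A realized as an
# explicit lookup table. TableDrivenAgent and its interface are
# illustrative assumptions, not the course's reference implementation.
from typing import Any

class TableDrivenAgent:
    def __init__(self, table: dict[tuple, str]):
        self.table = table             # maps percept histories (tuples) to actions
        self.percepts: list[Any] = []  # the percept history observed so far

    def act(self, percept: Any) -> str:
        self.percepts.append(percept)
        # Look up the action prescribed for the whole percept history.
        return self.table[tuple(self.percepts)]
```

The table grows exponentially with the length of the history, which is exactly why the slide calls it a "very large table" and only an in-principle description.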

Page 8

Vacuum-cleaner world

• Environment:
– 2 locations, 𝐴 and 𝐵
– Status of the current location: 𝐶𝑙𝑒𝑎𝑛 or 𝐷𝑖𝑟𝑡𝑦.

• Actions:
– 𝐿𝑒𝑓𝑡, 𝑅𝑖𝑔ℎ𝑡, 𝑆𝑢𝑐𝑘, 𝑁𝑜𝑂𝑝, for "move left", "move right", "suck dirt", or "do nothing".

• Percepts:
– Pair [𝑙𝑜𝑐𝑎𝑡𝑖𝑜𝑛, 𝑠𝑡𝑎𝑡𝑢𝑠], for example [𝐴, 𝐷𝑖𝑟𝑡𝑦]

Page 9

Vacuum-cleaner agent

• What is the right way to define the agent function?

• What makes an agent good or bad, intelligent or stupid?

Percept sequence                 Action
[A, Clean]                       Right
[A, Dirty]                       Suck
[B, Clean]                       Left
[B, Dirty]                       Suck
[A, Clean], [A, Clean]           Right
[A, Clean], [A, Dirty]           Suck
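For illustration, the partial table above can be encoded directly and plugged into the hypothetical TableDrivenAgent sketched earlier; the tuple encoding of percepts is an assumption.

```python
# The partial percept-sequence table above, encoded as a Python dict.
# Percepts are (location, status) pairs; the encoding is illustrative.
vacuum_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

agent = TableDrivenAgent(vacuum_table)
print(agent.act(("A", "Clean")))  # -> Right
print(agent.act(("A", "Dirty")))  # -> Suck
```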

Page 10

Rationality

• A rational agent is one that does the right thing. But what does it mean "to do the right thing"?

• We consider the consequences of the agent's actions:
– in an environment, the agent generates a sequence of actions based on the percepts it receives
– the actions cause the environment to pass through a sequence of environment states
– if this sequence is desirable, then the agent performed well.

• Desirability = a performance score or measure that evaluates any given sequence of environment states.
– 𝑠 ∶ 𝐸∗ → ℝ
– 𝐸 = set of environment states
– ℝ = set of real numbers

The performance score is independent of the agent!
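As a concrete (assumed) example of such a score 𝑠 ∶ 𝐸∗ → ℝ, here is the measure used on the "Rationality of vacuum cleaner" slide below, awarding one point per clean square per time step; the state encoding is an illustrative assumption.

```python
# Performance score s : E* -> R for the vacuum world: one point for each
# clean square at each time step. The state encoding is an assumption.
def performance_score(state_history) -> int:
    # Each state maps a location ("A", "B") to "Clean" or "Dirty".
    return sum(
        sum(1 for status in state.values() if status == "Clean")
        for state in state_history
    )

history = [{"A": "Dirty", "B": "Clean"}, {"A": "Clean", "B": "Clean"}]
print(performance_score(history))  # 1 + 2 = 3
```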

Page 11

Rational agent

• Rationality depends on:
– The performance measure that defines the criterion of success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.

• Rational agent ⇔ it should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Page 12

Rationality of vacuum cleaner

• We must specify the performance score. Let us assume:
– Performance score = one point for each clean square at each time step, over a lifetime of 𝑁 steps.
– The agent knows "a priori" the geometry of the environment.
– The agent does not know the initial distribution of the dirt or its own initial location.
– The only actions are 𝐿𝑒𝑓𝑡, 𝑅𝑖𝑔ℎ𝑡 and 𝑆𝑢𝑐𝑘.

Homework:
– Show that this agent is rational.
– What happens if the performance score also includes one penalty point for each 𝐿𝑒𝑓𝑡 or 𝑅𝑖𝑔ℎ𝑡 move?
– What happens if locations can become dirty again?
– What happens if the environment geometry is not known?

Page 13

Omniscience, learning, and autonomy

• An omniscient agent knows the actual outcome of its actions and can act accordingly; omniscience is impossible in reality.

• Rational ≠ omniscient
– percepts may not supply all relevant information.

• Rational ≠ perfect
– rationality maximizes expected performance, while perfection maximizes actual performance.
– a rational agent may fail.

• Rational ⇒ information gathering, exploration, learning, autonomy
– Information gathering = taking actions to modify future percepts.
– Exploration is needed in an unknown environment.
– The more an agent relies on the prior knowledge given by its designer rather than on (learning from) its own percepts, the less autonomous it is.
– Autonomous agent = learns to compensate for partial or incorrect prior knowledge. Its behavior is determined by its own experience.

Page 14

Task environment

• A task environment defines the "problems" for which agents are the "solutions".

• Task environments are given by PEAS descriptions:
– Performance, Environment, Actuators, Sensors

• Example: automated taxi driver
– Performance measure: safety, destination, profits, legality, comfort, …
– Environment: streets/freeways, traffic, pedestrians, weather, …
– Actuators: steering, accelerator, brake, horn, speaker/display, …
– Sensors: video, accelerometers, infrared, sonar, engine sensors, keyboard, GPS, …
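A PEAS description is just a four-field record. A minimal sketch for the taxi example above; the dataclass and field names are illustrative assumptions.

```python
# A PEAS description as a simple record; names are illustrative.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list[str]  # how success is measured
    environment: list[str]  # what the agent operates in
    actuators: list[str]    # how the agent acts
    sensors: list[str]      # how the agent perceives

taxi = PEAS(
    performance=["safety", "destination", "profits", "legality", "comfort"],
    environment=["streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "infrared", "sonar",
             "engine sensors", "keyboard", "GPS"],
)
```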

Page 15

Other agent types

Homework: PEAS = ? for the following agent types:

– Diagnostic assistant for medical or technical problems

– Part-picking robot

– Delivery robot

– Refinery controller

– Interactive English tutor

– Satellite image analysis system

– Internet shopping agent

– Library information agent (infobot)

– … others?

Page 16

Environment types

• Fully observable vs. partially observable vs. unobservable
– Noisy and inaccurate sensors; immeasurable parts of the environment

• Single agent vs. multiagent
– (Partly) competitive vs. (partly) cooperative; randomized behaviour; communication

• Deterministic vs. stochastic
– Deterministic if the next environment state is completely determined by the current state and the agent's action; nondeterministic if the probabilities of the next states are missing; uncertain if partially observable and stochastic

• Episodic vs. sequential
– Episodes are independent (e.g. classification); sequential ⇒ current decisions can affect all future decisions

• Static vs. dynamic
– Dynamic ⇒ the environment changes while the agent is deliberating; semidynamic ⇒ the agent's performance score changes over time

• Finite vs. infinite; discrete vs. continuous
– Applies to environment states, time, percepts, and agent actions

• Known vs. unknown
– Refers to the agent's (or designer's) knowledge about the "physics" of the environment

Page 17

Examples of task environments

Task environment            Observable   Agents   Deterministic   Episodic   Static   Discrete
Crossword puzzle            Fully        Single   Yes             No         Yes      Yes
Poker                       Partially    Multi    No              No         Yes      Yes
Backgammon                  Fully        Multi    No              No         Yes      Yes
Taxi driving                Partially    Multi    No              No         No       No
Image analysis              Fully        Single   Yes             Yes        Semi     No
Part-picking robot          Partially    Single   No              Yes        No       No
Refinery controller         Partially    Single   No              No         No       No
Interactive English tutor   Partially    Multi    No              No         No       Yes

Page 18

Agent program

• AI is concerned with the design of agent programs.

• An agent program runs on a physical agent architecture = computing device + sensors + actuators.

• Agent = architecture + program

Page 19

Agent types

• Four basic types in order of increasing generality:

– simple reflex agents

– model-based reflex agents

– goal-based agents

– utility-based agents

• All these can be turned into learning agents

Page 20

Simple reflex agents

• An SRA selects an action based only on the current percept.

• A vacuum cleaner with a history horizon of size 𝑛 has 𝑂(4^𝑛) possible histories (why?). The simple reflex vacuum cleaner cuts these possibilities down to 4.

• An SRA can use condition-action rules: if 𝑐𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛 then 𝑎𝑐𝑡𝑖𝑜𝑛.

Page 21

Simple reflex agent program

• INTERPRET-INPUT maps the current 𝑝𝑒𝑟𝑐𝑒𝑝𝑡 to an abstract 𝑠𝑡𝑎𝑡𝑒 description.

• RULE-MATCH returns the first rule that matches the 𝑠𝑡𝑎𝑡𝑒 description (see the sketch after this list).

• SRAs are simple.

• SRAs work ⇔ the environment is fully observable.

• SRAs have very limited intelligence.

• Sometimes SRAs might have to randomize their actions.

• What happens if the location sensor of the vacuum cleaner is removed?
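The slide's agent-program figure did not survive extraction. Below is a minimal Python sketch of the simple reflex scheme in the spirit of INTERPRET-INPUT and RULE-MATCH, instantiated for the vacuum world; the rule encoding and function names are assumptions, not the original program.

```python
# A hedged sketch of a simple reflex vacuum agent; the rule format and
# helper names are illustrative assumptions.
rules = {
    # state -> action (condition-action rules)
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    # Map the current percept [location, status] to a state description.
    # In the vacuum world the percept already is the state.
    return tuple(percept)

def rule_match(state, rules):
    # Return the action of the rule whose condition matches the state.
    return rules[state]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, rules)

print(simple_reflex_agent(["A", "Dirty"]))  # -> Suck
```

Note that the agent keeps no history; removing the location sensor would leave only two distinguishable states, 𝐶𝑙𝑒𝑎𝑛 and 𝐷𝑖𝑟𝑡𝑦, which is exactly the question the last bullet raises.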

Page 22

World model

• The agent handles partial observability by keeping track of the part of the world it can't currently see.

• The agent maintains an internal state that depends on the history of its percepts.

• The agent needs two types of information:
– How does the world evolve independently of the agent?
– How do the agent's actions affect the world?

• This knowledge is called a world model, and the corresponding agent is called a model-based agent.

Page 23

Model-based reflex agents

Page 24

Model-based reflex agent program

• UPDATE-STATE creates the new internal 𝑠𝑡𝑎𝑡𝑒 description (see the sketch after this list).

• A less obvious point about the "internal state" of a model-based agent is that it does not have to describe "what the world is like now" in a literal sense.

• For example, the taxi may have a rule telling it to fill up with gas on the way home unless it has at least half a tank. "Driving back home" is then an aspect of the agent's internal state: the taxi could be in the same location while driving in a different direction.
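The original agent-program figure is also missing here. A minimal Python sketch of the model-based reflex scheme follows, with UPDATE-STATE folding the previous state, the last action, and the new percept into a fresh internal state; the class and all names are illustrative assumptions.

```python
# A hedged sketch of a model-based reflex agent; names are illustrative.
class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.rules = rules                # condition-action rules
        self.update_state = update_state  # the world model (UPDATE-STATE)
        self.state = None                 # internal state description
        self.last_action = None

    def act(self, percept):
        # UPDATE-STATE: combine old state, last action, and new percept,
        # using knowledge of how the world evolves and how actions affect it.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules[self.state]
        return self.last_action
```

Unlike the simple reflex agent, the chosen action may depend on the remembered state and the last action, not only on the current percept; this is what lets the agent cope with partial observability.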

Page 25

Goal-based agent

• Often, knowing the current state of the environment is not enough.

• The agent may need a description of desirable situations, called goal information.

• Achieving a goal often requires finding a sequence of actions, using search or planning.

• A GBA is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.

Page 26

Utility

• Goals are limited, i.e. they provide only a crude binary distinction between "good" and "bad" states.

• Economists and computer scientists proposed the term utility to quantify the "goodness" of a state.

• An agent's utility function is essentially an internalization of the performance measure.

• If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.

• Utility is very useful when:
– There are conflicting goals, like speed and safety.
– There are several achievable goals with different certainties; the likelihood of success can then be weighed against the importance of the goals.

• Decision making in uncertain environments (partially observable and stochastic) is based on expected utility = the average utility over the possible outcomes, weighted by the probability of each individual outcome. (A small worked example follows.)
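Expected utility, as described in the last bullet, is just a probability-weighted average. A minimal sketch with made-up outcome probabilities and utilities:

```python
# Expected utility: EU(a) = sum over outcomes s of P(s | a) * U(s).
# The outcomes, probabilities, and utilities below are made-up numbers.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

# "Drive fast": probably arrives early, small chance of an accident.
fast = [(0.95, 10.0), (0.05, -100.0)]
# "Drive safely": always arrives, a bit later.
safe = [(1.00, 8.0)]

print(expected_utility(fast))  # 0.95*10 - 0.05*100 = 4.5
print(expected_utility(safe))  # 8.0 -> higher expected utility
```

Here the conflicting goals of speed and safety are traded off through a single number, which is exactly the role the slide assigns to utility.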

Page 27

Utility-based agent

Page 28

Goal-based and utility-based agent programs

• HOMEWORK!

Page 29

Learning agent

Page 30

Components of learning agents

• The performance element is responsible for selecting external actions; it was previously considered to be the entire agent.

• The learning element is responsible for making improvements.
– It takes feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
– It can make changes to any of the "knowledge" components shown in the agent diagrams.

• The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. It is necessary because the percepts themselves provide no indication of the agent's success.

• The problem generator is responsible for suggesting actions that will lead to new and informative (exploratory) experiences.

Page 31

Environment representations

• In an atomic representation, each state of the world is indivisible: it has no internal structure.

• A factored representation splits up each state into a fixed set of variables or attributes, each of which can have a value.

• In a structured representation, objects and their various and varying relationships can be described explicitly. Structured representations underlie relational databases and first-order logic. (A small illustration follows.)
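To make the three styles concrete, here is the same vacuum-world state in each representation; the concrete encodings are illustrative assumptions, not notation from the course.

```python
# The same vacuum-world state in three representation styles.
# The concrete encodings below are illustrative assumptions.

# Atomic: the state is an opaque label with no internal structure.
atomic_state = "s3"

# Factored: a fixed set of variables, each with a value.
factored_state = {"location": "A", "status_A": "Dirty", "status_B": "Clean"}

# Structured: objects and their relationships described explicitly,
# here as first-order-logic-style facts.
structured_state = {
    ("At", "Robot", "A"),
    ("Dirty", "A"),
    ("Clean", "B"),
}
```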

Page 32

Expressiveness and performance

• There is a tradeoff between the expressiveness and the performance of (knowledge) representations.

• In increasing order of expressiveness: 𝑎𝑡𝑜𝑚𝑖𝑐 < 𝑓𝑎𝑐𝑡𝑜𝑟𝑒𝑑 < 𝑠𝑡𝑟𝑢𝑐𝑡𝑢𝑟𝑒𝑑.

• A more expressive representation can capture, at least as concisely, everything a less expressive one can capture, plus some more.

• More expressive ⇒ much more concise.

• On the other hand, reasoning and learning become more complex, i.e. performance is lower, as the expressive power of the representation increases.

• More expressive ⇒ lower performance.