Artificial intelligence and philosophy


Uploaded by gaganrism, 12-Nov-2014.

Page 1: Artificial intelligence and philosophy


Philosophy of AI: The Turing Test

The Turing Test

• Human (tester) communicates with a human and a machine via a typing interface

• Conversation is totally unconstrained: any subject, any duration, any language (including slang), lying allowed, etc.

• Tester must determine which is the machine. If no better than chance, must grant that the machine is intelligent (according to Turing).
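The "no better than chance" criterion can be made concrete with a little statistics. This sketch is my own illustration (not part of Turing's paper, and the trial counts are hypothetical): it computes the probability that a judge who guesses at random would score at least as well as an observed judge.

```python
from math import comb

def p_at_least(correct: int, trials: int, p: float = 0.5) -> float:
    """Probability of >= `correct` successes in `trials` fair guesses."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: a judge identifies the machine in 7 of 10 rounds.
# How likely is a score at least this good under pure guessing?
print(round(p_at_least(7, 10), 3))  # 0.172 – not convincingly better than chance
```

A result like 7/10 is quite compatible with guessing, which is why serious tests would need many rounds before granting (or denying) the verdict.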

What’s the point?

• To cut through philosophical discussions of “what is intelligence?” and “can a machine think?” with a simple test.

• A common misconception is that the Turing Test is too easy – in fact, it is very difficult, and no program has ever passed it.

A mini-Turing Test

• Sam, as Alice (human) and Alice (chatbot)

• Sam (human) will do all the typing (so that you can’t tell by typing speed)

• Neither has to tell the truth!

• Later, check out alicebot.org to play with Alice.

Objections to the Turing Test

The Theological Objection

• Thinking is a function of the soul

• Machines don’t have a soul, therefore they can’t think

Page 2: Artificial intelligence and philosophy


The “heads in the sand” objection

• Thinking machines would be terrible, so let’s not think about it.

The mathematical objection

Gödel: All consistent axiomatic formulations of number theory include undecidable propositions.

(e.g. “G is not a theorem of formal system X”)

Lucas (and Penrose): Machines are formal systems, so there will be a formula the machine will be unable to produce as true, although a mind can see that it is true. And so the machine will not be an adequate model of the mind.

"Lucas cannot consistently assert this formula". ?

The argument from consciousness

Two parts:

• Self-awareness

• Qualia: ‘really feeling’ some sensation or emotion.

Turing’s response:

• Can’t know about consciousness unless you are the thinker (even with other humans)

• If there are any external manifestations, they will show up in the Turing Test – and if there aren’t, who cares? [paraphrased ☺]

Argument from various disabilities

• Turing lists a number: machines can’t… be kind… have a sense of humor… make mistakes… fall in love… use language… be creative…

• Turing’s response is more or less that these are areas for research, but that he doesn’t see any particular reason why they can’t do these things.

Lady Lovelace’s Objection

• That is, machines can only do what they’re programmed to do.

Turing responds that machines surprise him all the time…

Also, what if they learn? Evolve? Change in such a way that their behavior is surprising even to their programmers?

Argument from continuity

• The nervous system is not a discrete state machine.

• Turing argues that a discrete system can mimic a continuous system well enough to pass the TT.

• Penrose returns to this argument – and so will we, next week.

Page 3: Artificial intelligence and philosophy


Argument from informality of behavior

• Humans don’t strictly follow rules; computers do.

Turing responds:

• Humans *do* follow the laws of physics, at least, and probably higher level laws of behavior.

• Machines can break ‘rules of conduct’ as easily as humans can.

Argument from ESP

• If humans have ESP and machines don’t, then the TT could distinguish.

Turing gives up on this one!

Other objections

• Ignores other kinds of intelligence (e.g. a dolphin couldn’t pass, nor a very clever Mars rover)

• Overemphasizes linguistic fluency

• Why should an intelligent computer pretend to be a human?

Constrained Turing Tests

• Not used to establish intelligence – but used to support claims that a program is ‘human-like’ in some specific way

• Can be limited by time (e.g. < 5 min interaction), subject matter (e.g. must talk about sports), or medium (e.g. art, music, etc.)

• Often used to evaluate domain-specific AI apps.

• Check out The Loebner Prize… (www.loebner.org)

A super-mini TT for humor

Joke Analysis and Production Engine (JAPE)

JAPE is a program that generates jokes from scratch (see “An implemented model of punning riddles” on my home page if you are interested).

Look at the following jokes, and decide which ones are JAPE-generated, and which ones come from a published joke book.

Discuss online!
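JAPE works by filling fixed riddle schemas from a structured lexicon of homophones and compounds; the real model is in the cited paper. The sketch below shows only the final schema-filling step, using description/pun pairs lifted from the slide's own examples.

```python
def make_riddle(description: str, pun: str) -> str:
    """Fill the 'What do you call X? Y' riddle schema with a
    description/pun pair (the pair itself is where the real work is)."""
    return f"What do you call {description}? {pun.capitalize()}."

# Pairs taken from the slide's examples; a system like JAPE would derive
# them automatically from lexical relations (homophones, compound nouns).
pairs = [("a sour assistant", "a lemon aide"),
         ("Martian beer", "an ale-ien")]
for description, pun in pairs:
    print(make_riddle(description, pun))
```

The hard part, of course, is not the template but finding pairs where the surface form of the answer puns on the description, which is what JAPE's lexicon encodes.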

Page 4: Artificial intelligence and philosophy


Spot the JAPE joke I

• What’s the difference between money and a bottom? One you spare and bank, the other you bare and spank.

• What do you give a hurt lemon? Lemon aid.

• What do you call a sour assistant? A lemon aide.

• What do you call Martian beer? An ale-ien.

Spot the JAPE joke II

• What kind of pig can you ignore at a party? A wild bore.

• What animal runs round the forest making the other animals yawn? A wild bore.

• What do you get when you cross jewelry and a bobcat? Cuff lynx.

• What do you get when you cross the Atlantic with the Titanic? About halfway.

What do you think?

• Is the Turing Test a reasonable way to establish whether or not a machine is intelligent? Why or why not?

• What would you propose instead?

The Physical Symbol System Hypothesis

The Physical Symbol System (PSS) Hypothesis

“A physical symbol system has the necessary and sufficient means for general intelligent action.” – Newell & Simon

• Belief in the PSSH is also referred to as “strong AI”.

What makes up a Physical Symbol System?

• Symbols: physical patterns that can be part of expressions.

• Expressions: instances of symbols related in some physical way.

• Processes: act on expressions – creation, modification, reproduction and destruction.

So, a Physical Symbol System is a “machine that produces through time an evolving collection of symbol structures.” [Newell & Simon] It must exist in a greater world of external objects.
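The three definitions above can be rendered as a minimal sketch. The names and the substitution "process" are my own illustration, not Newell & Simon's notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    """A symbol: a physical pattern, represented here as a string token."""
    token: str

# An expression: symbols related in some physical way (here, simply ordered).
Expression = tuple

def substitute(expr: Expression, old: Symbol, new: Symbol) -> Expression:
    """A 'process': produces a modified expression from an existing one."""
    return tuple(new if s == old else s for s in expr)

# An evolving collection of symbol structures, one step at a time.
expr = (Symbol("on"), Symbol("block-A"), Symbol("table"))
print(substitute(expr, Symbol("table"), Symbol("block-B")))
```

Creation, reproduction, and destruction of expressions would be further processes of the same kind; the hypothesis is that intelligent action needs nothing beyond such a machine.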

Page 5: Artificial intelligence and philosophy


The Turing Machine

• A read/write head that scans a (possibly infinite) tape divided into squares, inscribed with 0 or 1. The head scans a square, erases it, prints a 0 or 1, moves back or forward, and goes into a new state.

• Behavior completely determined by:

– the state the machine is in

– the number on the square it is scanning, and

– a finite table of instructions, specifying, for each state and binary input, what the machine should write, which direction it should move in, and which state it should go into.
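That definition translates almost line-for-line into code. This is a minimal sketch (the interpreter, the sparse-tape representation, and the example "flip" machine are my own invention):

```python
def run_tm(table, tape, state="q0", head=0, halt="halt", max_steps=1000):
    """Run a Turing machine. `table` maps (state, scanned symbol) to
    (symbol to write, move 'L'/'R', next state) – the finite instruction
    table from the slide. Blank squares read as 0."""
    cells = dict(enumerate(tape))           # sparse tape
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = table[(state, cells.get(head, 0))]
        cells[head] = write                 # erase and print
        head += 1 if move == "R" else -1    # move back or forward
    return [cells[i] for i in sorted(cells)]

# Example machine: flip 1s to 0s until the first 0, flip it to 1, then halt.
flip = {("q0", 1): (0, "R", "q0"),
        ("q0", 0): (1, "R", "halt")}
print(run_tm(flip, [1, 1, 0]))  # [0, 0, 1]
```

Note that everything machine-specific lives in the instruction table; the interpreter itself is fixed.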

The Turing Machine

• An abstract representation of a computing device

• All Turing Machines are PSSs

• All modern digital computers are Turing Machines

So, the PSSH says that a digital computer is sufficient for intelligence (but not that digital computers are necessarily intelligent).

What’s so special about a digital computer?

• A digital computer is a Universal Turing Machine

• So, given the right program, it can mimic the behavior of any discrete-state machine

• So, we don’t need to worry about the physical instantiation of the computer, just its software.

Summary: the PSSH/Strong AI

• “running the right sort of program necessarily results in a mind” [Russell and Norvig, p. 959]

Weak vs Strong AI

Weak AI: not committed to the PSSH…

• Maybe physical instantiation matters?

• Maybe sub-symbolic representations necessary? (e.g. neural networks?)

• Maybe probabilistic/quantum effects necessary?

…but still holds that AI is possible.

Searle’s Chinese Room

Page 6: Artificial intelligence and philosophy


Objections to the PSSH: Searle’s Chinese Room

• Take a machine that passes the Chinese imitation game, and reimplement it as a giant library, with a non-Chinese-speaking human as the ‘processor’. (Compare with the Turing Machine.)

• The inputs and outputs would still pass the Turing Test – but where is the understanding? Who or what understands Chinese?

What is intentionality?

• A quality of mental states: their aboutness.

• Intentional states include believing, knowing, desiring, fearing…

• In particular, Searle claims that the Chinese Room thought experiment shows that machines cannot have the intentional state of “understanding”.

• Conceptually linked to ideas about consciousness and qualia.

The systems reply

• The human in the Chinese room is only one part of the system – it is the whole system (including the room itself) that does the understanding.

• Compare with the human brain: Do neurons ‘understand’?

Searle’s response

• Let the man memorize all the rules, and carry them out in his head. Now the whole system is held within his brain, but there is still no understanding.

• If you ask the man in Chinese if he understands Chinese, he will say “yes” (in Chinese), but if you ask in English, he will say “no”…

The robot reply

• Add a camera, and manipulators, and relate the formal symbols to objects in the real world (grounding).

Searle:

• Still no intentional states – where would they be?

• Robot response acknowledges need for more than formal symbol manipulation.

What do you think?

The brain simulator reply

• The program simulates the neurons in a Chinese speaker’s brain.

Searle:

• Replace the books with water pipes, and have the man open and close pipes. The man still doesn’t understand, nor do the pipes.

Page 7: Artificial intelligence and philosophy


The combination (grounding + brain simulator) reply

Searle’s response here is interesting:

It would be “rational and indeed irresistible to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it”. However, “…if we knew independently how to account for its behavior…we would not attribute intentionality to it, especially if we knew it had a formal program”. (1980, p421)

If we came to understand how the human brain worked, in its entirety, would we still attribute intentionality?

The other minds reply

• We don’t know how anyone actually thinks aside from behavioral tests, so those will have to do.

Searle’s response:

• Not about tests, but about what mental states really are

• “…it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state.”

• i.e. the zombie argument

Many mansions reply

• If programming is not enough to produce cognitive states, then perhaps there are other ways

Searle:

• Strong AI says: "mental processes are computational processes over formally defined elements."

• If it’s not Strong AI, then Chinese Room argument does not apply.

Hofstadter’s reply

• Searle’s argument is based on the “intuitively obvious” lack of understanding in the Chinese Room.

• If the system is a) in a body, with sensors and actuators, b) functioning in real time, and c) behaving as if it had intentional states, would those intuitions still hold?

Where does intentionality come from, according to Searle?

• Implementation of the right kind of programming in the right kind of stuff – “specific biochemistry” (1980, p. 424)

• Simulation of that biochemistry will not do – must have the real meat.

• Not easily distinguished from simple dualism (mind and matter are different ‘substances’)

Consciousness and intelligence

• Is intelligence sufficient to produce consciousness?

• Is consciousness necessary for intelligence?

Or, rephrased, are “zombies” and “zimbos” (Dennett) possible?

Page 8: Artificial intelligence and philosophy


Zombies and Zimbos

• Zombie: Behaviorally indistinguishable from a human, but no consciousness.

• Zimboes: A type of zombie which has explicit representations of its own mental states, and representations of those representations, and so on up. Behaviorally indistinguishable from zombies.

Dennett (1998) mocks the idea that either is possible:

“Zimboes thinkz they are conscious, thinkz they have qualia, thinkz they suffer pains--they are just "wrong" (according to this lamentable tradition), in ways that neither they nor we could ever discover!”

Philosophy vs. science

What level of similarity is required to produce intelligent behavior?

• Symbolic models of intelligent reasoning

• Neural models of the brain

• Biochemical models…

• Quantum models…

• Or other: evolutionary models, etc.

Does intelligence really require a living human brain in a human body?

The COG project (MIT AI lab)

• An attempt to build a robot (body, actuators, sensors), with a “neural network” type ‘brain’, that can learn in the real world.

• Claim: Because it is grounded in its environment, its symbols have real meaning – so it is capable of real thought.

• Implementation of the “combination” reply to Searle.

• Doesn’t (yet?) work well enough to be evidence in either direction.

Bibliography

• John Searle (1980). “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3: 417-424.

• Daniel Dennett (1998). Brainchildren: Essays on Designing Minds. MIT Press and Penguin.

• A. M. Turing (1950). “Computing Machinery and Intelligence.” Mind 59: 433-460.

• A. Newell and H. A. Simon (1976). “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19(3): 113-126.

• Russell & Norvig (2002). Artificial Intelligence: A Modern Approach (2nd edition). Prentice-Hall.

• There is an EXCELLENT bibliography of Turing/chatbot related material at http://cogsci.ucsd.edu/~asaygin/tt/ttest.html