
Running Head: FORMALIZING GOFAI’S ONTOLOGICAL ASSUMPTION

Clarifying Dreyfus’ Critique of GOFAI’s Ontological Assumption: a Formalization

Aaron Prosser

996234338

PHL342H1

Dr. Ronald De Sousa

April 12th, 2011


Introduction

Hubert L. Dreyfus (1965, 1972, 1979, 1992, 2007) is credited with the first sustained, and most scathing, criticism of “Good Old Fashioned Artificial Intelligence” (GOFAI). GOFAI’s project was to design and build physical systems that could manipulate symbolic representations according to formal rules, otherwise known as computation, in order to solve problems (Newell & Simon 1976). This is because problem solving was considered to constitute the essence of intelligence (Newell & Simon 1972, 1976). Of all his criticisms, Dreyfus’ deepest concerns GOFAI’s so-called “ontological assumption”. Dreyfus formulated this assumption differently throughout his work, but it is essentially this: GOFAI assumes that the world in which intelligent systems are situated consists in the total set of discrete entities. Entities are discrete because they are specified in advance by symbolic descriptions which unambiguously define what they are. Much of Dreyfus’ work has been devoted to demonstrating that this assumption is false and that its falsity prevented GOFAI from producing machines with intelligence comparable to that of human beings, that is, strong AI (artificial intelligence).

The purpose of this paper is to formalize the ontological assumption in order to clarify Dreyfus’ critique. I argue that GOFAI had the conceptual resources at the time to formalize its own assumption and to demonstrate why it is false more precisely than Dreyfus’ argument does. This formalization clarifies not only Dreyfus’ critique but also the alternative view of ontology he argued for.

This paper is divided into two sections. The first reviews Dreyfus’ critique of the ontological assumption through GOFAI’s use of “micro-worlds” during the 1970s. The second reviews the conceptual framework behind GOFAI’s first attempt at strong AI in order to formalize the assumption. I end by clarifying Dreyfus’ critique and alternative view of ontology.


The ontological assumption

Dreyfus (1972, 1992) argues that GOFAI’s ontological assumption is rooted in its preceding philosophical tradition. Arguably, this tradition culminated in Wittgenstein (1922/1999), who began his Tractatus Logico-Philosophicus with the famous claim that “The world is everything that is the case” (p. 29). He goes on to claim in the Tractatus that what is the case is the totality of atomic facts, which are expressed in discrete primitive propositions. Under this view, it is therefore possible to have a complete description of reality once an edifice of propositions which pick out true states of affairs has been built up from these primitives. GOFAI inherited this vision of the world and turned it into a well-funded research project, because the world in which intelligent systems are situated was assumed to be much like the vision sketched out in the Tractatus.

A paradigmatic example of this comes from GOFAI’s use of “micro-worlds” during the 1970s. A micro-world is an artificial and highly restricted domain in which a computer program operates (Haugeland 1985). Micro-worlds are artificial and restricted because the objects, properties, events and relations contained inside them are explicitly specified in advance by the programmer, using descriptions built up from primitive symbols. These descriptions specify the necessary and sufficient conditions of the micro-world by providing a complete description of it. Once this comprehensive description of the micro-world has been completed, the computer program can function inside it.
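To make this concrete, here is a minimal sketch of a blocks-world micro-world in the spirit of the systems cited below. It is a hypothetical illustration, not the vocabulary of any actual GOFAI program: the domain is closed, and every object, property and relation is enumerated in advance by the programmer.

    # A toy micro-world: everything that exists is listed explicitly.
    OBJECTS = {"block1", "block2", "pyramid1", "table"}

    FACTS = {
        ("on", "block1", "table"),
        ("on", "block2", "block1"),
        ("on", "pyramid1", "block2"),
        ("colour", "block1", "red"),
        ("colour", "block2", "green"),
    }

    def holds(predicate, *args):
        """True iff the fact is explicitly listed; under the closed-world
        assumption, everything unlisted is simply false."""
        return (predicate, *args) in FACTS

    print(holds("on", "block2", "block1"))   # True
    print(holds("on", "pyramid1", "table"))  # False: never specified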

The micro-world strategy was employed to simulate a variety of cognitive functions, such as natural-language understanding (Winograd 1972), visual scene parsing (Guzman 1968; Waltz 1972), and learning and categorization (Winston 1975). AI researchers employed the micro-world strategy because they believed it would allow them to discover the basic principles of intelligence. They believed this was possible because micro-worlds abstract away the “distracting” complexities of the real world, simplifying it into well-defined, isolated domains (Boden 1977; Dreyfus 1972; Minsky & Papert 1972). The idea was that the basic principles of intelligence would be discovered once these distractions were removed. Researchers predicted that they could then implement these principles in another machine, which could be “scaled up” from the micro-world to the real world and possess human intelligence.

Micro-worlds exhibit the ontological assumption for two reasons. First, they continue the tradition, found in the Tractatus, that one can provide a complete specification of the world using primitive symbols or propositions; the only difference is that this is manifested in a highly simplified world. This leads to the second reason for attributing the assumption to micro-worlds: the fact that GOFAI researchers predicted that micro-worlds could eventually scale up to the real world assumes a relevant similarity between the two. The assumed similarity is that the real world is likewise constituted by a set of discrete entities which are fully specified in advance. Therefore, the fact that micro-worlds are comprehensively described in advance implies that GOFAI researchers believed the same about the real world.

Dreyfus (1972, 1979, 1992) takes issue with the ontological assumption because it treats the world like any other object. If the world consists solely in the total set of discrete entities, each fully specified in advance, then it follows that the world forms a set of discrete entities that we can (1) have knowledge of and (2) interact with in the same way we do with tables and chairs. Dreyfus argues that this is false because the world in which intelligent systems are situated forms an implicit “background” which is presupposed by all our knowledge of, and interactions with, particular objects in the world. This entails that we can never explicitly represent or describe the world, because every representation or description will presuppose it (Thompson 2007). Dreyfus views GOFAI’s persistently frustrated attempts to produce strong AI as evidence against this assumption and thus in favour of his alternative view of ontology.

The problem with Dreyfus’ critique is that he is not clear about what his alternative to the ontological assumption is. If the world in which intelligent systems are situated is not the way GOFAI believed (viz., like any other object), what is it? I believe that Dreyfus’ criticism remains unclear until this question is answered, because without an answer it tells only half the story. I argue that we can clarify not just Dreyfus’ critique and GOFAI’s ontological assumption, but also the alternative view of ontology, once the assumption is formalized using the very framework Dreyfus criticizes.

Formalizing the Ontological Assumption

GOFAI proper began in the autumn of 1955, when Allen Newell, Cliff Shaw and Herbert Simon launched their vision of intelligence as being essentially the ability to solve problems. Problem solving was viewed as a search through a problem space in order to find a pathway that leads to a goal state. This developed into a project to design and build a machine with a set of broad problem solving procedures which would allow it to solve a wide variety of problems in a wide variety of contexts. This machine was called the General Problem Solver (GPS; Newell, Shaw & Simon 1958; Newell & Simon 1972). The GPS was undoubtedly GOFAI’s first and most ambitious attempt at strong AI (for a review, see Newell & Simon 1976).

The project was abandoned after continuous failures to produce a genuinely general problem solving machine. In Dreyfus’ (1972, 1992) eyes, this is because the project operated on the false ontological assumption. What concerns us here is that these researchers developed a framework which provides the conceptual resources to formalize the ontological assumption and to demonstrate why it is false more precisely than Dreyfus did.


The cornerstone of this framework was Newell and Simon’s (1972) formalization of the “problem space”. They argued that all problems can be analyzed into four elements. The first is the representation of the initial/current state (IS). The second is the representation of the goal state (GS). The third is the set of operators that alter the current state of the problem-solver, moving it through the various intermediate states. The fourth is the set of path constraints imposed on the problem solving procedure besides finding the solution (e.g., solving the problem in the fewest steps). The problem space, therefore, is the set of all possible states that can be reached by applying all the available operators while obeying the path constraints. A solution is a path through the problem space from the IS to the GS. A problem solving method is a procedure for searching through the problem space in order to find this path.
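These four elements can be written down directly. The following sketch casts a stock water-jug puzzle in exactly these terms, with an exhaustive breadth-first search standing in for the problem solving method. It is a textbook illustration, not an example from Newell and Simon’s text, and all names in it are hypothetical.

    from collections import deque

    INITIAL_STATE = (0, 0)   # IS: a 4-litre and a 3-litre jug, both empty
    GOAL_STATE = (2, 0)      # GS: exactly 2 litres left in the large jug

    def operators(state):
        """The operators applicable in a given state: fill, empty, or pour."""
        a, b = state
        return {
            (4, b), (a, 3),                          # fill either jug
            (0, b), (a, 0),                          # empty either jug
            (max(a - (3 - b), 0), min(a + b, 3)),    # pour large into small
            (min(a + b, 4), max(b - (4 - a), 0)),    # pour small into large
        }

    def solve():
        """Breadth-first search for a path from IS to GS. Using BFS also
        honours a path constraint: the path found has the fewest steps."""
        frontier = deque([[INITIAL_STATE]])
        visited = {INITIAL_STATE}
        while frontier:
            path = frontier.popleft()
            if path[-1] == GOAL_STATE:
                return path
            for nxt in operators(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])

    print(solve())   # e.g. [(0,0), (0,3), (3,0), (3,3), (4,2), (0,2), (2,0)]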

This formalization allowed Newell and Simon to quantify the size of a problem space in terms of F^D, where F is the number of operators that can be applied at any given state and D is the depth of the search, that is, the number of successive states along a path. They learned that most problem spaces are incomprehensibly vast. For example, in an average chess game there are approximately thirty legal moves available at every turn and sixty turns in a game. Given this, the size of the search space through the alternative moves in an average chess game is on the order of 30^60. This number is preposterous. It is larger than the estimated number of electrons in the universe (Haugeland 1985). And that is just a game of chess!
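The arithmetic is easy to verify. The snippet below is merely a back-of-the-envelope check, not part of the original analysis; the figure of roughly 10^80 electrons is the commonly cited estimate.

    F, D = 30, 60                  # ~30 legal moves per turn, ~60 turns
    size = F ** D
    print(f"30^60 has {len(str(size))} digits")   # 89 digits, i.e. ~4 x 10^88
    # Against the oft-cited estimate of ~10^80 electrons in the observable
    # universe, the chess space is larger by a factor of roughly 10^8.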

The vastness of problem spaces has been aptly termed “combinatorial explosion” (Newell & Simon 1972; Holyoak 1995). The problem of managing exponentially explosive problem spaces is a central issue in AI, which is why problem solving was reconceptualised as “heuristically guided search” (Newell & Simon 1976). Specifically, the search through the problem space must be selectively guided by heuristics, rather than exhaustively searching the entire set of possibilities to find a solution.
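The difference from the exhaustive search sketched earlier is small but decisive: a heuristic function ranks the frontier so that only promising states are expanded. The following greedy best-first sketch is a generic illustration; the toy number space and all names are hypothetical, and this is only one of many heuristic search schemes.

    import heapq

    def heuristic_search(initial, is_goal, successors, h):
        """Expand the frontier state with the lowest heuristic score h,
        instead of enumerating every alternative."""
        frontier = [(h(initial), initial, [initial])]
        visited = {initial}
        while frontier:
            _, state, path = heapq.heappop(frontier)   # most promising first
            if is_goal(state):
                return path
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
        return None   # space exhausted without reaching the goal

    # Toy space: states are integers, operators are +1 and x2, goal is 24.
    print(heuristic_search(1, lambda s: s == 24,
                           lambda s: [s + 1, s * 2] if s <= 24 else [],
                           lambda s: abs(24 - s)))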

Problems were further categorized into well- and ill-structured problems (Newell 1969; Newell & Simon 1972; Pretz, Naples & Sternberg 2003; Reitman 1964, 1965; Simon 1973; Voss & Post 1988). Well-structured problems are those in which the IS, GS, operators and constraints are all clearly defined and represented in the problem space. This is because (1) the parameters that specify the boundaries of the problem, (2) the description of all possible states, and (3) the operators and constraints that specify the permissible operations are determined a priori and are clearly displayed. This entails that well-structured problems contain no vagueness or ambiguity in their structure. For example, consider the following well-structured problem: ∀x(Fx → Gx). Fa. ∴ Ga. The IS comprises the premises and the GS is Ga. The operators are the derivation rules of first-order predicate calculus, and the constraints are the forms of derivation and sub-derivation.
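Spelled out as a search path, the solution takes only two derivation steps. The following is a standard natural-deduction sketch of that path:

    1. ∀x(Fx → Gx)     Premise
    2. Fa              Premise
    3. Fa → Ga         ∀-Elimination, from 1
    4. Ga              →-Elimination (modus ponens), from 2 and 3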

Ill-structured problems are those in which the IS, GS, operators and/or constraints are not clearly defined and represented in the problem space. This is because (1) the parameters that specify the boundaries of the problem, (2) the description of all possible states, and (3) the operators and constraints that specify the permissible operations are not determined a priori and are not clearly displayed. This means that ill-structured problems are vague, and thus ambiguous, because they allow for multiple possible interpretations of the problem. For example, consider instructing a student to “write a good, one-page paper”. This is an ill-structured problem. The IS, a blank page, is completely uninformative as to what the student should do; a blank page could equally serve as the IS of a painting or a love letter. More seriously, there are no non-arbitrary standards for deciding between these interpretations. The student will demand more information about the content of the paper and about the criterion of “good”.

There is a qualitative difference between these two kinds of problems. The former are completely formulated in advance, whereas the latter are formulated only minimally or not at all. Given this, it is natural to see why ill-structured problems contain vagueness and ambiguity in their structure. This is why the defining characteristic of solving ill-structured problems is to first formulate the problem, in order to clarify what is vague and resolve any ambiguities (DeYoung, Flanders & Peterson 2008). Problem formulation commits the problem-solver to one interpretation of the problem by framing the problem space in a particular way. Once the problem is formulated, a problem-solver can effectively apply their procedures to it to find a solution (Vervaeke 1997). Indeed, Einstein argued that “[t]he mere formulation of a problem is far more often essential than its solution” (Einstein & Infeld, 1938, p. 83).
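In problem-space terms, formulation is what supplies the four elements in the first place. The sketch below illustrates this under one assumption, namely that to formulate an ill-structured problem is to commit to one interpretation of it. Every parameter is a hypothetical illustration, since nothing in the task itself fixes these values.

    def formulate_paper_assignment():
        """Frame 'write a good, one-page paper' as a well-structured
        problem by deciding, arbitrarily, what the vague terms mean."""
        return {
            "initial_state": "",                   # the blank page
            "goal_test": lambda text:              # 'good, one page', interpreted
                250 <= len(text.split()) <= 400 and "thesis" in text.lower(),
            "operators": ["draft paragraph", "revise paragraph", "cut paragraph"],
            "path_constraints": ["submit before the deadline"],
        }

    space = formulate_paper_assignment()
    print(space["goal_test"]("My thesis is... " + "word " * 300))   # True, under this framing

A different problem-solver could commit to a different framing (a different length, a different standard of “good”), and nothing in the task itself decides between them.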

Haugeland (1985) argues that the reason the GPS project failed to be a genuinely general problem solver is that it could not formulate problems and thus could only solve well-structured problems. The reason for this is simple: the problems it was given were always formulated in advance. Even Herbert Simon (1973) admits this limitation when he writes that, before the GPS can begin working on a solution to a problem, it must be given:

1. A description of the solution state, or a test to determine if that state has been reached;
2. A set of terms for describing and characterizing the initial state, goal state and intermediate states;
3. A set of operators to change one state into another, together with conditions for the applicability of these operators;
4. A set of differences, and tests to detect the presence of these differences between pairs of states;
5. A table of connections associating with each difference one or more operators that is relevant to reducing or removing that difference. (pp. 183-184)
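Items (4) and (5) are the machinery of GPS-style means-ends analysis, and they are easy to render concretely. The sketch below is a loose, hypothetical illustration of a difference table, not code from GPS itself; the travel domain and every name in it are invented for the example.

    def differences(state, goal):
        """Item 4: tests that detect differences between a pair of states."""
        diffs = []
        if state["city"] != goal["city"]:
            diffs.append("wrong-city")
        if state["money"] < goal.get("min_money", 0):
            diffs.append("insufficient-funds")
        return diffs

    # Item 5: the table of connections (difference -> relevant operators).
    CONNECTIONS = {
        "wrong-city": ["take-train", "take-plane"],
        "insufficient-funds": ["withdraw-cash"],
    }

    def means_ends_step(state, goal):
        """Detect the first outstanding difference and propose the
        operators the table deems relevant to reducing it."""
        diffs = differences(state, goal)
        return CONNECTIONS[diffs[0]] if diffs else []

    print(means_ends_step({"city": "Toronto", "money": 50},
                          {"city": "Montreal"}))
    # ['take-train', 'take-plane']

Note that every element of this machinery, down to the difference table itself, must be handed to the machine ready-made.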


Two paragraphs later he writes, “Thus, it would appear at first blush that all problems that can be put in proper form for GPS can be regarded as WSPs [well-structured problems]” (p. 184). This is problematic for the GPS because real-world problems are ill-structured (DeYoung et al. 2008; Peterson & Flanders 2002; Voss & Post 1988). If the GPS could never formulate problems, then it could never solve ill-structured problems and thus could not solve real-world problems. This is tantamount to saying that it failed to be a genuinely general problem solving machine.

Newell and Simon’s formalization of problem solving is important for two reasons. First, it reveals that the GPS also made the ontological assumption. Well-structured problems are like micro-worlds because they are fully specified in advance using symbolic descriptions. The GPS researchers assumed that all problems in the world are well-structured, which is why they neglected to endow their machines with the capacity to formulate problems (Haugeland 1985; Vervaeke 1997); the fact that they felt no need to give machines this vital capacity implies the assumption. Second, if real-world problems are ill-structured, this suggests a close relationship between the concept of an ill-structured problem and the real world.

The work of Jordan Peterson and Joseph Flanders identifies the source of this relationship in what they call environmental complexity (Peterson 1999; Peterson & Flanders 2002). They describe environmental complexity in the same terms as an ill-structured problem. They argue that real-world environments are (1) exponentially explosive in the amount of information that is potentially available at any given moment and (2) vague and ambiguous. They are vague because (i) the parameters that specify the boundaries of an environment and (ii) the descriptions of the environment are either impoverished or lacking altogether. The presence of vagueness makes complex environments ambiguous because it implies that they can be framed in multiple ways.

Given this connection to the concept of an ill-structured problem, “environmental complexity” can be defined by saying that real-world environments are ill-structured domains. This is in contrast to well-structured domains like chess games, which lack vagueness and ambiguity because they come formulated and specified in advance.

If Peterson and Flanders’ arguments are correct, GOFAI can formalize its ontological assumption. In short, GOFAI assumes that the world is a well-structured domain. That is, it assumes that the world lacks vagueness and is unambiguous because the world and the various entities within it come formulated and specified in advance. If that is the case, then the whole of reality can be fully represented and described in precisely the way Wittgenstein argued in the Tractatus.

This formalization is theoretically useful because it clarifies Dreyfus’ critique of the ontological assumption. His argument can be restated as follows:

The ontological assumption is false because intelligent systems are situated in ill-structured domains, not well-structured domains.

This also clarifies Dreyfus’ claims about why we cannot explicitly represent or describe the world, and thus why the world cannot be treated like any other object. First, the ambiguity of an ill-structured domain means that there are always alternative ways to represent or describe the world. This implies that it is impossible to provide a comprehensive description of the world which unambiguously defines it and the various entities contained within it. Second, the vagueness of the boundaries of ill-structured domains clarifies what Dreyfus means by the implicit “background”. Vagueness implies that we cannot non-arbitrarily draw a boundary around the domain we are in; that is, there are no a priori boundaries to ill-structured domains which would allow us to isolate them from each other. If that is the case, then ill-structured domains “open out into the rest of human activities” (Dreyfus 1979, p. 147), because their boundaries blend into the myriad other domains we find in the real world. These other domains constitute what Dreyfus calls the implicit “background”.

In light of these considerations, we can describe Dreyfus’ alternative view of ontology as the opposite of GOFAI’s: Dreyfus’ ontology views the world as an ill-structured domain. This formalization also explains why GOFAI failed to produce strong AI: it has yet to design and build a machine that can formulate the ill-structured domains found in the real world (for a similar argument, see Vervaeke 1997; Vervaeke et al. 2009). The great irony is that GOFAI could state this fact more precisely than Dreyfus, yet failed to see the implication that derives from its own conceptual framework.

Conclusion

This brief paper is certainly not the last word on these issues. However, I believe that the formalization of GOFAI’s ontological assumption translates Dreyfus’ often philosophically abstruse criticisms into concepts AI researchers can understand. It is my hope that this will establish new lines of research in light of these clarified criticisms. In doing so, perhaps it will help fulfill the promise that stood at the beginning of GOFAI: to produce strong AI.


References

Boden, M. (1977). Artificial intelligence and natural man. Basic Books.

DeYoung, C. G., Flanders, J. L., & Peterson, J. B. (2008). Cognitive abilities involved in insight problem solving: An individual differences model. Creativity Research Journal 20:278-290.

Dreyfus, H. L. (1965). Alchemy and artificial intelligence. RAND Corporation.

Dreyfus, H. L. (1972). What Computers Can't Do. MIT Press.

Dreyfus, H. L. (1979). From micro-worlds to knowledge representation: AI at an impasse. In: Mind Design 2, ed. J. Haugeland, pp. 143-182. MIT Press.

Dreyfus, H. L. (1992). What Computers Still Can't Do. MIT Press.

Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philosophical Psychology 20(2):247-268.

Einstein, A., & Infeld, L. (1938). The evolution of physics. Simon & Schuster.

Guzman, A. (1968). Decomposition of a visual scene into three-dimensional bodies. Fall Joint Computer Conference 291-304.

Haugeland, J. (1985). Artificial intelligence: The very idea. MIT Press.

Holyoak, K. J. (1995). Problem solving. In: An invitation to cognitive science: Thinking, eds. D. N. Osherson & E. E. Smith, pp. 267-296. MIT Press.

Minsky, M. & Papert, S. (1972). Progress report on artificial intelligence. MIT AI Lab Memo 252.

Newell, A. (1969). Heuristic programming: Ill-structured problems. In: Progress in Operations Research Vol. 3, ed. J. Aronofsky, pp. 360-414. Wiley.

Newell, A., Shaw, J. C., & Simon, H. A. (1958). Chess-playing programs and the problem of complexity. IBM Journal of Research and Development 2:320-335.

Newell, A. & Simon, H. A. (1972). Human problem solving. Prentice-Hall.

Newell, A. & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. In: Mind Design 2, ed. J. Haugeland, pp. 81-110. MIT Press.

Peterson, J. B. (1999). Maps of meaning: The architecture of belief. Routledge.

Peterson, J. B. & Flanders, J. (2002). Complexity management theory: Motivation for ideological rigidity and social conflict. Cortex 38:429-458.

Pretz, J. E., Naples, A. J., & Sternberg, R. J. (2003). Recognizing, defining, and representing problems. In: The Psychology of Problem Solving, eds. J. E. Davidson & R. J. Sternberg, pp. 3-30. Cambridge University Press.

Reitman, W. R. (1964). Heuristic decision procedures, open constraints, and the structure of ill-defined problems. In: Human Judgments and Optimality, eds. M. W. Shelley & G. L. Bryan, pp. 282-315. Wiley.

Reitman, W. R. (1965). Cognition and Thought. Wiley.

Simon, H. A. (1973). The structure of ill structured problems. Artificial Intelligence 4:181-201.

Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of mind. Harvard University Press.

Vervaeke, J. (1997). The Naturalistic Imperative in Cognitive Science. Doctoral thesis, Graduate Department of Philosophy, University of Toronto. https://tspace.library.utoronto.ca/handle/1807/10696

Vervaeke, J., Lillicrap, T., & Richards, B. (2009). Relevance realization and the emerging framework in cognitive science. Journal of Logic and Computation 19:1-52.

Voss, J. F., & Post, T. A. (1988). On the solving of ill-structured problems. In: The nature of expertise, eds. M. T. H. Chi, R. Glaser, & M. J. Farr, pp. 261-286. Lawrence Erlbaum.

Waltz, D. L. (1972). Generating semantic descriptions from drawings of scenes with shadows. MIT Artificial Intelligence Laboratory Technical Report 271.

Winograd, T. (1972). Understanding natural language. Cognitive Psychology 3:1-191.

Winston, P. H. (1975). Learning structural descriptions from examples. MIT Artificial Intelligence Laboratory Technical Report 231.

Wittgenstein, L. (1922/1999). Tractatus Logico-Philosophicus. Trans. C. K. Ogden. Dover Publications.