
9/9

Num iterations: (d+1)

Asymptotic ratio of # nodes expanded by IDDFS vs DFS: (b+1)/(b-1) (approaches 1 when b is large)


IDDFS: Review

[Figure: search graph over nodes A, B, C, D, G]

DFS: A, B, G

BFS: A, B, C, D, G

IDDFS: (A), (A, B, G)

Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
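As a concrete illustration, here is a minimal IDDFS sketch in Python, assuming a goal test and a successors function; the small graph at the end is only an illustrative stand-in, not the exact graph from the slide.

```python
def depth_limited_dfs(node, goal_p, successors, limit, expanded):
    """Depth-first search that does not descend below the given depth limit."""
    expanded.append(node)
    if goal_p(node):
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        path = depth_limited_dfs(child, goal_p, successors, limit - 1, expanded)
        if path is not None:
            return [node] + path
    return None

def iddfs(root, goal_p, successors, max_depth=50):
    """Run DFS with limits 0, 1, 2, ...: (d+1) iterations to reach a goal at depth d."""
    for limit in range(max_depth + 1):
        expanded = []                      # nodes expanded in this iteration
        path = depth_limited_dfs(root, goal_p, successors, limit, expanded)
        print(f"limit {limit}: expanded {expanded}")
        if path is not None:
            return path
    return None

# Illustrative graph (an assumption, not the exact slide example):
graph = {"A": ["B", "G"], "B": ["C"], "C": ["D"], "D": ["G"], "G": []}
print(iddfs("A", lambda n: n == "G", lambda n: graph[n]))   # ['A', 'G']
```

On this toy graph the per-iteration expansions are (A) and then (A, B, G), matching the IDDFS trace shown above.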

[Figure: search graph over nodes A, B, C, D, G, containing a cycle]

DFS: A, B, A, B, A, B, A, B, A, B, …

BFS: A, B, G

IDDFS: (A), (A, B, G)

Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.

Search on undirected graphs… Cycles galore…

Graph (instead of tree) Search: Handling repeated nodes

Main points:
-- Repeated expansions are a bigger issue for DFS than for BFS or IDDFS
-- Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible
   -- Space becomes exponential
   -- Duplicate checking can also be exponential
-- Partial reduction in repeated expansion can be done by
   -- Checking to see if any children of a node n have the same state as the parent of n
   -- Checking to see if any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n); a sketch of this check follows below
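A minimal sketch of that ancestor check, assuming each search node keeps a parent pointer; the Node class and function names here are hypothetical.

```python
class Node:
    """Search node that remembers its parent, so the path back to the root is available."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent

def children_without_ancestor_repeats(node, successors):
    """Generate children of `node`, dropping any whose state already appears
    among node's ancestors (at most d comparisons for a node at depth d)."""
    ancestors = set()
    n = node
    while n is not None:                 # walk the current path back up to the root
        ancestors.add(n.state)
        n = n.parent
    return [Node(s, parent=node)
            for s in successors(node.state) if s not in ancestors]
```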

[Figure: search graph with edge costs A-B = 1, B-C = 1, C-D = 1, D-G = 2, and A-G = 9]

Uniform Cost Search

N0: A(0)

N1: B(1)   N2: G(9)

N3: C(2)

N4: D(3)

N5: G(5)

Completeness? Optimality? If d < d', then paths with cost d are explored before those with cost d'.

Branch & bound argument (as long as all operator costs are positive).

Efficiency? (as bad as blind search…)
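A minimal uniform cost search sketch in Python, run on the example graph above with the edge costs read off the trace; the dictionary encoding of the graph is an assumption.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of g(n); returns (cost, path) for the cheapest path."""
    frontier = [(0, start, [start])]                 # entries are (g, node, path)
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                             # optimal on pop if all costs > 0
            return g, path
        for child, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

# Edge costs read off the trace: A-B=1, B-C=1, C-D=1, D-G=2, A-G=9
graph = {"A": [("B", 1), ("G", 9)], "B": [("C", 1)], "C": [("D", 1)], "D": [("G", 2)]}
print(uniform_cost_search(graph, "A", "G"))          # (5, ['A', 'B', 'C', 'D', 'G'])
```

The pop order reproduces the trace above: A(0), B(1), C(2), D(3), and finally G(5) rather than G(9).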

[Figure: "Bait & Switch" graph with edge costs A-B = 0.1, B-C = 0.1, C-D = 0.1, D-G = 25, and A-G = 9]

Notation: C(n,n’) cost of the edge between n and n’ g(n) distance of n from root dist(n,n’’) shortest distance between n and n’’

Proof of Optimality of Uniform search

Proof of optimality: Let N be the goal node we output. Suppose there is another goal node N'. We want to prove that g(N') >= g(N). Suppose this is not true, i.e., g(N') < g(N) -- Assumption A1.

When N was picked up for expansion, either N' itself, or some ancestor of N', say N'', must have been on the search queue.

If we picked N instead of N'' for expansion, it was because

g(N) <= g(N'') --- Fact f1

But g(N') = g(N'') + dist(N'',N'), so g(N') >= g(N''). So from f1, we have g(N) <= g(N'). But this contradicts our assumption A1.

This holds only because dist(N'',N') >= 0, which will hold if every operator has positive cost.

“Informing” Uniform search…

[Figure: the Bait & Switch graph again, with edge costs A-B = 0.1, B-C = 0.1, C-D = 0.1, D-G = 25, and A-G = 9]

N0: A(0)

N1: B(.1)   N2: G(9)

N3: C(.2)

N4: D(.3)

N5: G(25.3)

It would be nice if we could tell that N2 is better than N1:
-- Need to take into account not just the distance so far, but also the distance to the goal
-- Computing the true distance to the goal is as hard as the full search
-- So, try "bounds" h(n): prioritize nodes in terms of f(n) = g(n) + h(n)
Two bounds: h1(n) <= h*(n) <= h2(n). Which guarantees optimality?
If h1(n) <= h2(n) <= h*(n), which is the better function?

Admissibility

Informedness

A* (if there are multiple goal nodes, we consider the distance to the nearest goal node)

[Figure: heuristics h1, h2, h3, h4, h5, and max(h2,h3) placed on an admissibility/informedness scale relative to h*]


Proof of Optimality of A* search

Proof of optimality: Let N be the goal node we output. Suppose there is another goal node N'. We want to prove that g(N') >= g(N). Suppose this is not true, i.e., g(N') < g(N) -- Assumption A1.

When N was picked up for expansion, either N' itself, or some ancestor of N', say N'', must have been on the search queue.

If we picked N instead of N'' for expansion, it was because

f(N) <= f(N'') --- Fact f1
i.e., g(N) + h(N) <= g(N'') + h(N''). Since N is a goal node, h(N) = 0, so g(N) <= g(N'') + h(N'').

But g(N') = g(N'') + dist(N'',N'). Given h(N'') <= h*(N'') = dist(N'',N') (a lower bound), we have g(N') = g(N'') + dist(N'',N') >= g(N'') + h(N'') -- Fact f2. So from f1 and f2 we have g(N) <= g(N'). But this contradicts our assumption A1.

This holds only because h(N'') is a lower bound on dist(N'',N').

In remembrance of all the lives and liberties lost to the wars by and on terror

Guernica, Picasso

9/11

A* (if there are multiple goal nodes, we consider the distance to the nearest goal node)

Several proofs:
1. Based on branch and bound -- g(N) is better than f(N''), and f(N'') <= cost of the best path through N''
2. Based on contours -- f() contours are more goal-directed than g() contours
3. Based on contradiction


[Figure: the Bait & Switch graph with edge costs A-B = .1, B-C = .1, C-D = .1, D-G = 25, and A-G = 9]

A* Search

N0: A(0)

N1: B(.1+8.8)   N2: G(9+0)

N3: C(max(.2+0, 8.8))

N4: D(.3+25)

[Figure: the graph above annotated with heuristic values; the trace uses h(B) = 8.8, h(C) = 0, h(D) = 25, h(G) = 0]

f(B) = .1 + 8.8 = 8.9; f(C) = .2 + 0 = 0.2. This doesn't make sense, since we are reducing the estimate of the actual cost of the path A-B-C-D-G. To make f(.) monotonic along a path, we say f(n) = max(f(parent), g(n) + h(n)).

[Figure: the same graph with more informed heuristic values: h(B) = 25.2, h(C) = 25.1, h(D) = 25, h(G) = 0]

N0: A(0)

N1: B(.1+25.2)   N2: G(9+0)

A* will not expand nodes with f > f* (f* is the f-value of the optimal goal).

[Figure: nodes expanded by uniform cost search vs A*]


IDA*: do iterative-deepening depth-first search, but set the threshold in terms of f (not depth).

(h*-h)/h*

IDA* to handle the A* memory problem

• Basically IDDFS, except instead of the iterations being defined in terms of depth, we define them in terms of f-value

– Start with the f cutoff equal to the f-value of the root node

– Loop:
  • Generate and search all nodes whose f-values are less than or equal to the current cutoff
  • Use depth-first search to search the trees in the individual iterations
  • Keep track of the node N' which has the smallest f-value that is still larger than the current cutoff; let this f-value be next-largest-f-value
  • If the search finds a goal node, terminate. If not, set cutoff = next-largest-f-value and go back to Loop

Properties: Linear memory. #Iterations in the worst case? = b^d !! (Happens when all nodes have distinct f-values. There is such a thing as too much discrimination…)
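A minimal IDA* sketch along those lines, in Python; here `successors` yields (child, edge cost) pairs and `h` is the heuristic (both names are assumptions).

```python
import math

def ida_star(start, goal_p, successors, h):
    """IDA*: repeated depth-first searches bounded by an f-value cutoff."""
    def dfs(node, g, cutoff, path):
        f = g + h(node)
        if f > cutoff:
            return None, f                       # smallest f seen past the cutoff
        if goal_p(node):
            return path, f
        next_largest = math.inf
        for child, cost in successors(node):
            if child in path:                    # cheap cycle check along this path
                continue
            found, t = dfs(child, g + cost, cutoff, path + [child])
            if found is not None:
                return found, t
            next_largest = min(next_largest, t)
        return None, next_largest

    cutoff = h(start)                            # start with the root's f-value
    while cutoff < math.inf:
        found, next_largest = dfs(start, 0, cutoff, [start])
        if found is not None:
            return found
        cutoff = next_largest                    # the "next-largest-f-value"
    return None
```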

Where do heuristics (bounds) come from?

From relaxed problems (the more relaxed the problem, the easier the heuristic is to compute, but the less accurate it is)

For path planning on the plane (with obstacles)? Assume away the obstacles; the distance is then the straight-line distance (see the next slide for other abstractions).

For the 8-puzzle problem? Assume the ability to move a tile directly to its place: distance = # misplaced tiles. Assume the ability to move only one position at a time: distance = sum of Manhattan distances (a sketch of both follows below).

For the traveling salesperson problem? Relax the "circuit" requirement: minimum spanning tree.
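A minimal sketch of the two 8-puzzle heuristics just mentioned, assuming states are length-9 tuples with 0 for the blank; the representation and the goal layout are assumptions.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)      # assumed goal layout, 0 is the blank

def misplaced_tiles(state, goal=GOAL):
    """Relaxation: a tile can jump straight to its place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL, width=3):
    """Relaxation: a tile moves one position at a time, ignoring the other tiles."""
    goal_pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, width)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

state = (1, 0, 2, 3, 4, 5, 6, 7, 8)      # one move away from the assumed goal
print(misplaced_tiles(state), manhattan_distance(state))   # 1 1
```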

Different levels of abstraction for shortest path problems on the plane

[Figure: start I and goal G under the "circular abstraction" (hC), the "Polygonal abstraction" (hP), and the "disappearing-act abstraction" (hD), compared with the true cost h*]

The obstacles in the shortest path problem can be abstracted in a variety of ways.
-- The more the abstraction, the cheaper it is to solve the problem in abstract space
-- The less the abstraction, the more "informed" the heuristic cost (i.e., the closer the abstract path length is to the actual path length)

[Figure: cost of computing the heuristic, cost of searching with the heuristic, and the total cost incurred in search, plotted for heuristics ranging from h0 through hD, hC, hP to h* (actual)]

Not always clear where the total minimum occurs
• Old wisdom was that the global minimum was closer to the cheaper heuristics
• Current insights are that it may well be far from the cheaper heuristics for many problems

• E.g., pattern databases for the 8-puzzle
• Polygonal abstractions for SP
• Plan-graph heuristics for planning

How informed should the heuristic be?

[Figure: the circular, polygonal, and disappearing-act abstractions again, with the resulting heuristics hD, hC, hP ordered by informedness against h* (actual)]

Advanced Topic: Pattern Databases

Moral: Heuristics don't always have to be neat and tidy closed-form solutions… They can be pretty large lookup tables computed off-line.

Performance on 15 Puzzle

• Random 15 puzzle instances were first solved optimally using IDA* with Manhattan distance heuristic (Korf, 1985).

• Optimal solution lengths average 53 moves.

• 400 million nodes generated on average.

• Average solution time is about 50 seconds on current machines.

Limitation of Manhattan Distance

• To solve a 24-Puzzle instance, IDA* with Manhattan distance would take about 65,000 years on average.

• Assumes that each tile moves independently

• In fact, tiles interfere with each other.

• Accounting for these interactions is the key to more accurate heuristic functions.

More Complex Tile Interactions

[Figure: three example 15-puzzle states where Manhattan distance underestimates the true solution length:
-- M.d. is 19 moves, but 31 moves are needed
-- M.d. is 20 moves, but 28 moves are needed
-- M.d. is 17 moves, but 27 moves are needed]

Pattern Database Heuristics

• Culberson and Schaeffer, 1996

• A pattern database is a complete set of such pattern positions (configurations of a chosen subset of the tiles), with the associated number of moves needed to solve each.

• e.g. a 7-tile pattern database for the Fifteen Puzzle contains 519 million entries.

Applications of Pattern Databases

• On 15 puzzle, IDA* with pattern database heuristics is about 10 times faster than with Manhattan distance (Culberson and Schaeffer, 1996).

• Pattern databases can also be applied to Rubik’s Cube.

Heuristics from Pattern Databases

[Figure: the 15-puzzle goal state (blank, then tiles 1-15 in row order: _ 1 2 3 / 4 5 6 7 / 8 9 10 11 / 12 13 14 15) and an example scrambled state (rows 5 10 14 7 / 8 3 6 1 / 15 12 9 / 2 11 4 13, with the blank in the third row)]

31 moves is a lower bound on the total number of moves needed to solve this particular state.

Precomputing Pattern Databases

• Entire database is computed with one backward breadth-first search from goal.

• All non-pattern tiles are indistinguishable, but all tile moves are counted.

• The first time each state is encountered, the total number of moves made so far is stored.

• Once computed, the same table is used for all problems with the same goal state.
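A minimal sketch of that backward breadth-first precomputation, on a toy pattern over a few 8-puzzle tiles; the pattern-tile choice and the state encoding are assumptions, and a real 7-tile database for the 15-puzzle is built the same way but is far larger.

```python
from collections import deque

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # 8-puzzle goal used for this toy example
PATTERN = {1, 2, 3}                       # assumed pattern tiles

def abstract(state, pattern=PATTERN):
    """Make all non-pattern tiles indistinguishable (mapped to -1)."""
    return tuple(t if t == 0 or t in pattern else -1 for t in state)

def blank_moves(state, width=3):
    """Yield states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, width)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < width and 0 <= nc < width:
            j = nr * width + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def build_pattern_database(goal=GOAL):
    """One backward breadth-first search from the abstracted goal.
    All tile moves are counted; the first time each abstract state is
    reached, the number of moves made so far is stored."""
    start = abstract(goal)
    db = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in blank_moves(state):
            if nxt not in db:
                db[nxt] = db[state] + 1
                queue.append(nxt)
    return db

# Lookup: abstract the concrete state and read off the stored move count.
db = build_pattern_database()
state = (1, 4, 2, 3, 0, 5, 6, 7, 8)       # two moves away from the goal
print(db[abstract(state)])                # 2
```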

Used while discussing A* alg

Invoking the code with the simple 8-puzzle configuration (the one which is one move away from the goal)

Tracing puz-expand and puz-goal-p on the simple problem

Tracing puz-f1-val to show how f-values get set in my code

Example of a larger problem being solved

What is needed:
-- A neighborhood function: the larger the neighborhood you consider, the less myopic the search (but the more costly each iteration)
-- A "goodness" function: it needs to give a value to non-solution configurations too (for 8-queens: the inverse of the number of pairwise conflicts)

Solution(s): Random-restart hill-climbing; do the non-greedy thing with some probability p > 0; use simulated annealing. (A sketch of random-restart hill-climbing for 8-queens follows below.)
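A minimal sketch of random-restart hill-climbing on 8-queens, minimizing the pairwise-conflict count (equivalently, maximizing its inverse); the neighborhood moves one queen within its column, and all names here are assumptions.

```python
import random

N = 8

def conflicts(board):
    """Pairwise conflicts; board[c] is the row of the queen in column c."""
    count = 0
    for c1 in range(N):
        for c2 in range(c1 + 1, N):
            same_row = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == c2 - c1
            count += same_row or same_diag
    return count

def best_neighbor(board):
    """Neighborhood: move one queen within its column; return the best neighbor found."""
    best, best_score = board, conflicts(board)
    for col in range(N):
        for row in range(N):
            if row == board[col]:
                continue
            neighbor = board[:col] + [row] + board[col + 1:]
            score = conflicts(neighbor)
            if score < best_score:
                best, best_score = neighbor, score
    return best, best_score

def random_restart_hill_climbing(max_restarts=100):
    """Greedy hill-climbing on the conflict count, restarting from random boards."""
    for _ in range(max_restarts):
        board = [random.randrange(N) for _ in range(N)]
        while True:
            neighbor, score = best_neighbor(board)
            if score >= conflicts(board):        # local minimum: no improving move
                break
            board = neighbor
        if conflicts(board) == 0:
            return board
    return None

print(random_restart_hill_climbing())
```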
