AI MODULE: 1
BY: ABHINAV KISHORE
THE MONKEY & BANANAS PROBLEM
A monkey is in a cage, and bananas are suspended from the ceiling. The monkey wants to eat a banana but cannot reach them. In the room are a chair and a stick. If the monkey stands on the chair and waves the stick, he can knock a banana down to eat it. What are the actions the monkey should take?
Initial state: monkey on the ground with empty hand, bananas suspended
Goal state: monkey eating
Actions: climb chair / get off, grab X, wave X, eat X
SEARCH
Given a problem expressed as a state space (whether explicitly or implicitly) with operators/actions, an initial state, and a goal state, how do we find the sequence of operators needed to solve the problem? This requires search.
Formally, we define a search space as [N, A, S, GD]:
N = set of nodes or states of a graph
A = set of arcs (edges) between nodes that correspond to the steps in the problem (the legal actions or operators)
S = a nonempty subset of N that represents start states
GD = a nonempty subset of N that represents goal states
Our problem becomes one of traversing the graph from a node in S to a node in GD. We can use any of the numerous graph traversal techniques for this, but in general they divide into two categories:
brute force – unguided search
heuristic – guided search
CONSEQUENCES OF SEARCH
As shown a few slides back, the 8-puzzle has over 40,000 different states. What about the 15-puzzle?
A brute force search means trying all possible states blindly until you find the solution. For a state space in which solving the problem requires n moves, where each move consists of m choices, there are m^n possible states.
Two forms of brute force search are depth-first search and breadth-first search.
A guided search examines a state and uses some heuristic (usually a function) to determine how good that state is (how close you might be to a solution), to help determine which state to move to. Examples: hill climbing, best-first search, the A/A* algorithm, minimax.
While a good heuristic can reduce the complexity from m^n to something tractable, there is no guarantee, so any form of search is O(2^n) in the worst case.
FORWARD VS BACKWARD SEARCH
The common form of reasoning starts with data and leads to conclusions. For instance, diagnosis is data-driven: given the patient's symptoms, we work toward disease hypotheses. We often think of this form of reasoning as “forward chaining” through rules.
Backward search reasons from goals to actions. Planning and design are often goal-driven (“backward chaining”).
DEPTH-FIRST SEARCH
Starting at node A, our search gives us: A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R
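This depth-first order can be reproduced with a short sketch. The adjacency list below is a reconstruction chosen to be consistent with the traversal orders quoted on these slides (the original figure is not included), so treat it as an assumption:

```python
# Adjacency list reconstructed to match the slide's traversal orders (assumption).
GRAPH = {
    'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'], 'D': ['I', 'J'],
    'E': ['K', 'L'], 'F': ['M'], 'G': ['N'], 'H': ['O', 'P'],
    'I': ['Q'], 'J': ['R'], 'K': ['S'], 'L': ['T'], 'P': ['U'],
}

def dfs(graph, start):
    """Return the nodes in depth-first order, using an explicit stack."""
    visited, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # Push children in reverse so the leftmost child is expanded first.
            stack.extend(reversed(graph.get(node, [])))
    return visited

print(', '.join(dfs(GRAPH, 'A')))
# A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R
```

Pushing children in reverse order is what makes the stack expand the leftmost branch first, matching the slide.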
DEPTH-FIRST SEARCH EXAMPLE
TRAVELING SALESMAN PROBLEM
BREADTH-FIRST SEARCH
Starting at node A, our search would generate the nodes in alphabetical order from A to U
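Breadth-first order on the same reconstructed tree (again, the adjacency list is an assumption consistent with the stated A-to-U order, not the original figure):

```python
from collections import deque

# Adjacency list reconstructed to match the slide's traversal orders (assumption).
GRAPH = {
    'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'], 'D': ['I', 'J'],
    'E': ['K', 'L'], 'F': ['M'], 'G': ['N'], 'H': ['O', 'P'],
    'I': ['Q'], 'J': ['R'], 'K': ['S'], 'L': ['T'], 'P': ['U'],
}

def bfs(graph, start):
    """Return the nodes in breadth-first order, level by level."""
    visited, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.append(node)
            queue.extend(graph.get(node, []))
    return visited

print(', '.join(bfs(GRAPH, 'A')))
# A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U
```

The only change from the depth-first sketch is swapping the stack for a FIFO queue.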
BREADTH-FIRST SEARCH EXAMPLE
BACKTRACKING SEARCH ALGORITHM
The monkey and the banana
The purpose of this example is to show the use of variables.
Description
A monkey enters a room via the door. In the room, near the window, is a box. In the middle of the room hangs a banana from the ceiling. The monkey wants to grasp the banana, and can do so after climbing on the box in the middle of the room.
States
For each state, we need to record:
- the position of the monkey (door, window, middle, ...)
- the position of the box
- if the monkey is on the box
- if the monkey has the banana
The initial state is (door, window, no, no).
The set of goal states is (*, *, *, yes).
Moves
walk(P): from (M, B, no, H) to (P, B, no, H).
push(P): from (M, M, no, H) to (P, P, no, H).
climb: from (M, M, no, H) to (M, M, yes, H).
grasp: from (middle, B, yes, no) to (middle, B, yes, yes).
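The four move schemas can be turned into a small searchable state space. This is a sketch, not part of the original slides; the position names and the breadth-first planner are illustrative:

```python
from collections import deque

POSITIONS = ['atdoor', 'atwindow', 'middle']     # illustrative position names

def moves(state):
    """Yield (action, next_state) pairs following the four move schemas."""
    monkey, box, onbox, has = state
    if not onbox:
        for p in POSITIONS:                          # walk(P)
            yield 'walk(%s)' % p, (p, box, onbox, has)
        if monkey == box:
            for p in POSITIONS:                      # push(P): box moves too
                yield 'push(%s)' % p, (p, p, onbox, has)
            yield 'climb', (monkey, box, True, has)  # climb onto the box
    elif monkey == 'middle' and box == 'middle' and not has:
        yield 'grasp', (monkey, box, onbox, True)    # grasp the banana

def plan(start):
    """Breadth-first search for a shortest action sequence to 'has banana'."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state[3]:                                 # goal: (*, *, *, yes)
            return actions
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

print(plan(('atdoor', 'atwindow', False, False)))
# ['walk(atwindow)', 'push(middle)', 'climb', 'grasp']
```

Run on the initial state (door, window, no, no), it returns the only four-step plan: walk to the box, push it to the middle, climb, grasp.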
State space
Without variables, the state space and search space can be very large (how many positions are there?). With variables, we can represent the reachable part as follows.
Monkey and Banana Example
There is a monkey at the door of a room. In the middle of the room a banana hangs from the ceiling. The monkey wants it, but cannot jump high enough from the floor. At the window of the room there is a box that the monkey can use.
The monkey can perform the following actions:
- Walk on the floor
- Climb the box
- Push the box around (if it is beside the box)
- Grasp the banana (if it is standing on the box directly under the banana)
We define the state as a 4-tuple: (monkey at, on floor, box at, has banana)
move( state( middle, onbox, middle, hasnot ),     % Before move: on box, has no banana
      grasp,                                      % Move
      state( middle, onbox, middle, has ) ).      % After move: has banana
move( state( P, onfloor, P, H ),
      climb,                                      % Climb the box
      state( P, onbox, P, H ) ).
move( state( P1, onfloor, P1, H ),
      push( P1, P2 ),                             % Push the box from P1 to P2
      state( P2, onfloor, P2, H ) ).
move( state( P1, onfloor, B, H ),
      walk( P1, P2 ),                             % Walk from P1 to P2
      state( P2, onfloor, B, H ) ).

canget( state( _, _, _, has ) ).                  % The monkey already has the banana
canget( State1 ) :-
    move( State1, Move, State2 ),
    canget( State2 ).

?- canget( state( atdoor, onfloor, atwindow, hasnot ) ).
INTRODUCTORY PROBLEM: TIC-TAC-TOE
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 1:
Data structures:
Board: a nine-element vector representing the board, with squares numbered 1–9. An element contains 0 if the square is blank, 1 if it is filled by an X, or 2 if it is filled by an O.
Movetable: a large vector of 19,683 (3^9) elements, each element of which is a nine-element vector.
Algorithm:
1. View the Board vector as a ternary number and convert it to a decimal number.
2. Use the computed number as an index into the Movetable and access the vector stored there.
3. Set the new board to that vector.
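Steps 1–2 can be sketched as follows. The 19,683-entry Movetable itself is not shown, since its contents (the playing strategy) are not given on the slides:

```python
def board_index(board):
    """Step 1: view the nine-element board (0 blank, 1 X, 2 O) as a
    ternary number and convert it to decimal."""
    index = 0
    for square in board:
        index = index * 3 + square
    return index

# An empty board indexes entry 0; an all-O board indexes the last entry.
print(board_index([0] * 9))                      # 0
print(board_index([2] * 9))                      # 19682 == 3**9 - 1
# X in square 1 and O in square 5 (ternary 100020000):
print(board_index([1, 0, 0, 0, 2, 0, 0, 0, 0]))  # 6723
```

Step 2 would then simply be `new_board = movetable[board_index(board)]` against a precomputed table.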
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
This program is very efficient in time. However:
1. It takes a lot of space to store the Movetable.
2. It takes a lot of work to specify all the entries in the Movetable.
3. It is difficult to extend.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
The board squares are numbered:
1 2 3
4 5 6
7 8 9
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 2:
Data structure: A nine-element vector representing the board, but instead of using 0, 1 and 2 in each element, we store 2 for blank, 3 for X and 5 for O.
Functions:
Make2: returns 5 if the center square is blank; otherwise returns the number of some other blank square.
Posswin(p): returns 0 if player p cannot win on his next move; otherwise it returns the number of the square that constitutes a winning move. It checks the product of the values in each row, column and diagonal: if the product is 18 (3 × 3 × 2), then X can win; if the product is 50 (5 × 5 × 2), then O can win.
Go(n): makes a move in square n.
Strategy:
Turn = 1: Go(1)
Turn = 2: If Board[5] is blank, Go(5), else Go(1)
Turn = 3: If Board[9] is blank, Go(9), else Go(3)
Turn = 4: If Posswin(X) ≠ 0, then Go(Posswin(X)) ...
...
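Posswin's product test can be sketched like this; the line list is the standard rows, columns and diagonals, and the example position is illustrative:

```python
X, O, BLANK = 3, 5, 2   # the encoding used by Program 2

# All rows, columns and diagonals, as 1-based square numbers.
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
         (1, 4, 7), (2, 5, 8), (3, 6, 9),
         (1, 5, 9), (3, 5, 7)]

def posswin(board, p):
    """Return the square where player p (3 for X, 5 for O) can win on the
    next move, or 0 if there is none.  board[i-1] holds square i."""
    target = p * p * BLANK          # 18 for X, 50 for O
    for line in LINES:
        values = [board[i - 1] for i in line]
        if values[0] * values[1] * values[2] == target:
            return line[values.index(BLANK)]   # the blank square completes the line
    return 0

# Illustrative position: X on squares 1 and 2, O on squares 5 and 9.
board = [X, X, BLANK, BLANK, O, BLANK, BLANK, BLANK, O]
print(posswin(board, X))   # 3: X completes the top row
print(posswin(board, O))   # 0: O has no immediate win
```

The prime encoding (2, 3, 5) is what makes a single multiplication distinguish "two X's and a blank" from every other combination.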
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Not as efficient in time, since it has to check several conditions before making each move.
2. Easier to understand the program’s strategy.
3. Hard to generalize.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 2 can use the magic square numbering of the board:
8 3 4
1 5 9
6 7 2
Every row, column and diagonal sums to 15, so the square that completes a line can be found by subtraction, e.g. 15 - (8 + 5) = 2.
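A sketch of the subtraction trick; the range test used to reject non-lines is an observation that happens to hold for this particular magic square, not something stated on the slide:

```python
# Magic square numbering of the board: every line sums to 15.
MAGIC = [8, 3, 4,
         1, 5, 9,
         6, 7, 2]

def winning_square(a, b):
    """If squares a and b lie on a common line of the magic square, return
    the square completing it (15 minus their sum); otherwise return None.
    For this square, the range test rejects every non-collinear pair."""
    c = 15 - (a + b)
    return c if 1 <= c <= 9 and c not in (a, b) else None

print(winning_square(8, 5))   # 2, as in 15 - (8 + 5)
print(winning_square(8, 9))   # None: 8 and 9 share no line
```

So "can this pair be completed into a line, and where?" becomes one subtraction and a bounds check instead of a row scan.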
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Checking for a possible win is quicker.
2. Humans find the row-scan approach easier, while computers find the number-counting approach more efficient.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 3:
To choose its next move, the program rates the board positions that would result from each possible move:
1. If a position is a win, give it the highest rating.
2. Otherwise, consider all the moves the opponent could make next. Assume the opponent will make the move that is worst for us, and assign the rating of that move to the current node.
3. The best node is then the one with the highest rating.
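This rating rule is the minimax idea. A minimal sketch on a hypothetical two-ply game tree (the leaf ratings are invented for illustration):

```python
def minimax(tree, maximizing=True):
    """Rate a node: a leaf rates as itself; otherwise assume the opponent
    picks the child worst for us (step 2) and we pick the best (step 3)."""
    if isinstance(tree, (int, float)):
        return tree
    ratings = [minimax(child, not maximizing) for child in tree]
    return max(ratings) if maximizing else min(ratings)

# Three candidate moves for us, each answered by three opponent replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], maximizing=False))
print(best, minimax(tree))   # 0 3: move 0 guarantees a rating of 3
```

Move 2 contains the tempting leaf 14, but a worst-case opponent drives it down to 2; move 0 guarantees at least 3.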
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Requires much more time, since it considers all possible moves.
2. Could be extended to handle more complicated games.
STATE SPACE SEARCH: PLAYING CHESS
- Each position can be described by an 8-by-8 array.
- The initial position is the game opening position.
- A goal position is any position in which the opponent does not have a legal move and his or her king is under attack.
- Legal moves can be described by a set of rules: left sides are matched against the current state; right sides describe the new resulting state.
STATE SPACE SEARCH: PLAYING CHESS
- The state space is the set of legal positions.
- Start at the initial state.
- Use the set of rules to move from one state to another.
- Attempt to end up in a goal state.
STATE SPACE SEARCH: WATER JUG PROBLEM
“You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?”
STATE SPACE SEARCH: WATER JUG PROBLEM
State: (x, y), where x = 0, 1, 2, 3 or 4 is the amount in the 4-litre jug and y = 0, 1, 2 or 3 is the amount in the 3-litre jug.
Start state: (0, 0).
Goal state: (2, n) for any n.
STATE SPACE SEARCH: WATER JUG PROBLEM
1. (x, y) → (4, y)      if x < 4
2. (x, y) → (x, 3)      if y < 3
3. (x, y) → (x - d, y)  if x > 0
4. (x, y) → (x, y - d)  if y > 0
STATE SPACE SEARCH: WATER JUG PROBLEM
5. (x, y) → (0, y)              if x > 0
6. (x, y) → (x, 0)              if y > 0
7. (x, y) → (4, y - (4 - x))    if x + y ≥ 4, y > 0
8. (x, y) → (x - (3 - y), 3)    if x + y ≥ 3, x > 0
STATE SPACE SEARCH: WATER JUG PROBLEM
9. (x, y) → (x + y, 0)    if x + y ≤ 4, y > 0
10. (x, y) → (0, x + y)   if x + y ≤ 3, x > 0
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)
STATE SPACE SEARCH: WATER JUG PROBLEM
1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
- Apply a rule whose left side matches the current state
- Set the new current state to be the resulting state
(0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
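The control loop and rules can be combined into a small breadth-first solver. This sketch merges the four pour rules (7–10) into two min() computations and skips the partial-pour rules 3–4 (they are never needed); it finds a shortest six-step solution, though not necessarily the exact trace above:

```python
from collections import deque

def successors(state):
    """Next states under rules 1, 2, 5-10 (pour rules collapse via min)."""
    x, y = state
    result = {
        (4, y),                      # rule 1: fill the 4-litre jug
        (x, 3),                      # rule 2: fill the 3-litre jug
        (0, y),                      # rule 5: empty the 4-litre jug
        (x, 0),                      # rule 6: empty the 3-litre jug
    }
    pour = min(y, 4 - x)             # rules 7/9: pour the 3-litre into the 4-litre
    result.add((x + pour, y - pour))
    pour = min(x, 3 - y)             # rules 8/10: pour the 4-litre into the 3-litre
    result.add((x - pour, y + pour))
    result.discard(state)            # drop no-op transitions
    return result

def solve(start=(0, 0), goal_x=2):
    """Breadth-first search until the 4-litre jug holds exactly goal_x litres."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:
            path = []                # reconstruct the path via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)

print(solve())   # a seven-state (six-step) path from (0, 0) to x == 2
```

Because it is breadth-first, the returned path is guaranteed to be a shortest one.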
STATE SPACE SEARCH: WATER JUG PROBLEM
The condition on the left side of a rule restricts the application of the rule, making the search more efficient:
1. (x, y) → (4, y)    if x < 4
2. (x, y) → (x, 3)    if y < 3
STATE SPACE SEARCH: WATER JUG PROBLEM
Special-purpose rules capture special-case knowledge that can be used at some stage in solving a problem:
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)
SEARCH STRATEGIES
Requirements of a good search strategy:
1. It causes motion. Otherwise, it will never lead to a solution.
2. It is systematic. Otherwise, it may use more steps than necessary.
3. It is efficient. It finds a good, but not necessarily the best, answer.
SEARCH STRATEGIES
1. Uninformed search (blind search): having no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search): more efficient than uninformed search.
SEARCH STRATEGIES
(0, 0)
├── (4, 0) → (1, 3), (0, 0), (4, 3)
└── (0, 3) → (3, 0), (0, 0), (4, 3)
SEARCH STRATEGIES: BLIND SEARCH Breadth-first search
Expand all the nodes of one level first.
Depth-first searchExpand one of the nodes at the deepest level.
SEARCH STRATEGIES: BLIND SEARCH
| Criterion | Breadth-First | Depth-First |
|---|---|---|
| Time | b^d | b^m |
| Space | b^d | bm |
| Optimal? | Yes | No |
| Complete? | Yes | No |

b: branching factor, d: solution depth, m: maximum depth
SEARCH STRATEGIES: HEURISTIC SEARCH
Heuristic: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods. (Merriam-Webster’s dictionary)
A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.
SEARCH STRATEGIES: HEURISTIC SEARCH
- Heuristics help to combat combinatorial explosion.
- Optimal solutions are rarely needed.
SEARCH STRATEGIES: HEURISTIC SEARCH
The Travelling Salesman Problem: “A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.”
(Figure: five cities A–E with the road distances between them.)
SEARCH STRATEGIES: HEURISTIC SEARCH
Nearest neighbour heuristic:
1. Select a starting city.
2. Select the unvisited city closest to the current city, and go to it.
3. Repeat step 2 until all cities have been visited.
This heuristic runs in O(n²) time, versus O(n!) for examining all possible tours.
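A sketch of the heuristic; the distance table is hypothetical, since the slide's figure is not fully legible:

```python
def nearest_neighbour_tour(distances, start):
    """Greedy round trip: repeatedly go to the closest unvisited city.
    O(n^2) work, versus O(n!) for checking every possible tour."""
    tour, current, total = [start], start, 0
    unvisited = set(distances) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda city: distances[current][city])
        total += distances[current][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    total += distances[current][start]    # close the loop back to the start
    tour.append(start)
    return tour, total

# Hypothetical symmetric distances for cities A-E (illustrative only).
D = {
    'A': {'B': 1, 'C': 5, 'D': 10, 'E': 15},
    'B': {'A': 1, 'C': 5, 'D': 4, 'E': 20},
    'C': {'A': 5, 'B': 5, 'D': 5, 'E': 5},
    'D': {'A': 10, 'B': 4, 'C': 5, 'E': 15},
    'E': {'A': 15, 'B': 20, 'C': 5, 'D': 15},
}
print(nearest_neighbour_tour(D, 'A'))   # (['A', 'B', 'D', 'C', 'E', 'A'], 30)
```

Note that the greedy tour is not guaranteed to be the shortest one; that trade-off is exactly what the heuristic buys.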
HILL CLIMBING
Searching for a goal state = Climbing to the top of a hill
HILL CLIMBING
- Generate-and-test + direction to move.
- A heuristic function estimates how close a given state is to a goal state.
SIMPLE HILL CLIMBING
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - Select and apply a new operator
   - Evaluate the new state:
     - if it is a goal state → quit
     - if it is better than the current state → make it the new current state
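A sketch of simple hill climbing as a higher-order function; the integer landscape in the example is invented for illustration:

```python
def simple_hill_climb(state, neighbours, value, is_goal):
    """Apply operators one at a time; take the first neighbour that is
    better than the current state, stop at a goal or when none improves."""
    while True:
        for nxt in neighbours(state):
            if is_goal(nxt):
                return nxt
            if value(nxt) > value(state):
                state = nxt              # better: make it the current state
                break
        else:
            return state                 # no operator improves: local maximum

# Toy landscape: climb toward the peak at x = 7 on the integers (illustrative).
peak = simple_hill_climb(
    0,
    neighbours=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 7) ** 2,
    is_goal=lambda x: x == 7,
)
print(peak)   # 7
```

Unlike steepest ascent below, this variant accepts the first improving move it finds rather than comparing all of them.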
SIMPLE HILL CLIMBING
The evaluation function is a way to inject task-specific knowledge into the control process.
STEEPEST-ASCENT HILL CLIMBING (GRADIENT SEARCH)
Considers all the moves from the current state.
Selects the best one as the next state.
STEEPEST-ASCENT HILL CLIMBING (GRADIENT SEARCH)
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
   - Let SUCC = a state such that any possible successor of the current state will be better than SUCC (the worst state).
   - For each operator that applies to the current state, evaluate the new state:
     - if it is a goal state → quit
     - if it is better than SUCC → set SUCC to this state
   - If SUCC is better than the current state → set the current state to SUCC.
HILL CLIMBING: DISADVANTAGES
Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.
HILL CLIMBING: DISADVANTAGES
Plateau: a flat area of the search space in which all neighbouring states have the same value.
HILL CLIMBING: DISADVANTAGES
Ways out:
- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get into a new section of the search space.
- Move in several directions at once.
HILL CLIMBING: DISADVANTAGES
Hill climbing is a local method: it decides what to do next by looking only at the “immediate” consequences of its choices. Global information might be encoded in heuristic functions.
BEST-FIRST SEARCH
Depth-first search has the advantage that not all competing branches have to be expanded. Breadth-first search has the advantage of not getting trapped on dead-end paths. Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
BEST-FIRST SEARCH
(Figure: successive stages of a best-first search; at each step the open node with the lowest heuristic value is expanded.)
BEST-FIRST SEARCH
OPEN: nodes that have been generated but have not yet been examined. This is organized as a priority queue.
CLOSED: nodes that have already been examined. Whenever a new node is generated, check whether it has been generated before.
BEST-FIRST SEARCH
Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
   - Pick the best node in OPEN
   - Generate its successors
   - For each successor:
     - if it has not been generated before → evaluate it, add it to OPEN, and record its parent
     - if it has been generated before → change the parent if the new path is better, and update its successors
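A sketch of the algorithm with OPEN as a priority queue (via heapq); for brevity it omits the parent re-linking step for nodes generated twice. The graph and heuristic values are hypothetical:

```python
import heapq

def best_first(start, successors, h, is_goal):
    """OPEN is a priority queue ordered by the heuristic h; CLOSED records
    nodes already examined.  Parent re-linking of regenerated nodes is omitted."""
    open_heap = [(h(start), start)]
    parent, closed = {start: None}, set()
    while open_heap:
        _, node = heapq.heappop(open_heap)     # pick the best node in OPEN
        if node in closed:
            continue
        if is_goal(node):
            path = []                          # reconstruct the path via parents
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for succ in successors(node):
            if succ not in parent:             # new node: record its parent
                parent[succ] = node
                heapq.heappush(open_heap, (h(succ), succ))
    return None

# Hypothetical graph and heuristic values, purely for illustration.
G = {'A': 'BC', 'B': 'D', 'C': 'DE', 'D': 'G', 'E': 'G', 'G': ''}
H = {'A': 3, 'B': 2, 'C': 1, 'D': 2, 'E': 1, 'G': 0}
path = best_first('A', lambda n: list(G[n]), H.get, lambda n: n == 'G')
print(path)   # ['A', 'C', 'E', 'G']
```

With h as the ordering key this behaves as greedy best-first search; ordering by g(n) instead would give uniform-cost search, as described below.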
BEST-FIRST SEARCH
Greedy search: h(n) = estimated cost of the cheapest path from node n to a goal state. Neither optimal nor complete.
Uniform-cost search: g(n) = cost of the cheapest path from the initial state to node n. Optimal and complete, but very inefficient.
PROBLEM REDUCTION
AND-OR Graphs
Goal: Acquire TV set
- Goal: Steal TV set, OR
- Goal: Earn some money AND Goal: Buy TV set
Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980)
PROBLEM REDUCTION: AO*
(Figure: successive stages of AO* on an AND-OR graph, with revised cost estimates propagated upward at each step.)
PROBLEM REDUCTION: AO*
(Figure: an AO* example in which a newly discovered node H forces necessary backward propagation of revised cost estimates.)