lisp recitation after class in the same room
TRANSCRIPT
Time for Buyer’s Remorse?
• Final class tally:
– Total: 46 (room capacity)
– CSE 471: 28 [72%; 1 sophomore; 3 juniors; 24 seniors]
– CSE 598: 18 [28%; 1 PhD; 13 MS; 4 MCS]
This is one of the most exciting courses in the department! Unlike other courses that form the basis of a field of study, this course is (sort of) at the top of the food chain, so the concepts that we learnt in various other fields are applied here to solve practical problems and create systems that are truly useful.
--Unedited comment of a student from Spring 2009 class
How the course topics stack up…
• Representation mechanisms: logic (propositional; first-order); probabilistic logic
• Search: blind, informed
• Planning
• Inference: logical resolution; Bayesian inference
• Learning the models
Agent Classification in Terms of State Representations

Type: Atomic
State representation: States are indivisible; no internal structure.
Focus: Search on atomic states.

Type: Propositional (aka Factored)
State representation: States are made of state variables that take values (propositional, multi-valued, or continuous).
Focus: Search + inference in logical (prop. logic) and probabilistic (Bayes nets) representations.

Type: Relational
State representation: States describe the objects in the world and their inter-relations.
Focus: Search + inference in predicate logic (or relational prob. models).

Type: First-order
State representation: Relational + functions over objects.
Focus: Search + inference in first-order logic (or first-order probabilistic models).
Illustration with Vacuum World

Atomic:
S1, S2, …, S8
Each state is seen as an indivisible snapshot.
All actions are S×S matrices.
If you add a second roomba, the state space doubles.
If you want to consider noisiness of the rooms, the representation quadruples.
Propositional/Factored:
States are made up of 3 state variables:
Dirt-in-left-room: T/F
Dirt-in-right-room: T/F
Roomba-in-room: L/R
Each state is an assignment of values to the state variables: 2^3 = 8 different states.
Actions can just mention the variables they affect.
Note that the representation is compact (logarithmic in the size of the state space).
If you add a second roomba, the representation grows by just one more state variable.
If you want to consider “noisiness” of rooms, we need two more variables, one for each room.
Relational:
World made of objects: Roomba, L-room, R-room, dirt.
Relations: In(<robot>, <room>); Dirty(<room>).
If you add a second roomba, or more rooms, only the objects increase.
If you want to consider noisiness, you just need to add one more relation.
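The factored representation can be made concrete in code; a minimal sketch of the propositional vacuum world (variable names are illustrative, not from the slides):

```python
from itertools import product

# Factored (propositional) vacuum world: 3 state variables.
# A state is an assignment of values to the variables.
VARIABLES = {
    "dirt_left": [True, False],
    "dirt_right": [True, False],
    "roomba_in": ["L", "R"],
}

# 2^3 = 8 states, enumerated from the variable domains.
states = [dict(zip(VARIABLES, values))
          for values in product(*VARIABLES.values())]
assert len(states) == 8

def suck(state):
    """An action need only mention the variables it affects."""
    new = dict(state)
    if state["roomba_in"] == "L":
        new["dirt_left"] = False
    else:
        new["dirt_right"] = False
    return new

# Adding a second roomba adds just one more variable here,
# while the atomic state space would double in size.
```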
Simple goal: Both rooms should be clean.
What happens when the domain is inaccessible?
Search in the Multi-State (Inaccessible) Version
The set of states is called a “belief state,” so we are searching in the space of belief states.
Notice that actions can sometimes reduce state uncertainty.
Sensing reduces state uncertainty.
The space of belief states is exponentially larger than the space of states. If you throw in the likelihood of states within a belief state, the resulting state space is infinite!
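What it means to search over belief states can be sketched minimally, assuming a toy deterministic vacuum world (names here are illustrative):

```python
# A belief state is a set of states the agent might be in.
# Applying an action maps it to the set of successor states;
# some actions shrink the set (reduce state uncertainty).

def apply_action(belief, transition):
    """Image of a belief state under a (deterministic) action."""
    return frozenset(transition(s) for s in belief)

# Toy world: a state is just the roomba's room, 'L' or 'R'.
move_left = lambda s: "L"          # moving left always ends in L

belief = frozenset({"L", "R"})     # agent doesn't know where it is
after = apply_action(belief, move_left)
# after is frozenset({"L"}): the action reduced state uncertainty
```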
Will we really need to handle multiple-state problems?
• Can’t we just buy better cameras, so our agents can always tell what state they are in?
• It is not just a question of having a good pair of eyes. Otherwise, why do malls have maps with “You are here” annotations?
– The problem of localizing yourself in a map is a non-trivial one.
If we can solve problems without sensors, then why have sensing?
Medicate without killing…
• A healthy (and alive) person accidentally walked into the Springfield nuclear plant and got irradiated, which may or may not have given her a disease D.
• The medication M will cure her of D if she has it; otherwise, it will kill her.
• There is a test T which, when done on patients with disease D, turns their tongues red (R).
• You can observe with Look sensors to see whether the tongue is red or not.
• We want to cure the patient without killing her.
[Figure: contingency plan in belief space; states are written as combinations of D (disease), A (alive), R (red tongue)]
Radiate from (A) is a non-deterministic action, giving the belief state {(D,A), (~D,A)}.
Medicating right away on {(D,A), (~D,A)} yields {(~D,A), (~D,~A)}: she may end up dead.
Instead: Test on {(D,A), (~D,A)} yields {(D,A,R), (~D,A,~R)}.
Is the tongue red? Sensing “partitions” the belief state:
y: {(D,A,R)} → Medicate → {(~D,A,R)}
n: {(~D,A,~R)} → already healthy
Non-deterministic actions are normal edges in belief space (but hyper-edges in the original state space).
Unknown State Space
• When you buy a Roomba, does it have the “layout” of your home?
– Fat chance! For $200, they aren’t going to customize it to everyone’s place!
• When the map is not given, the robot needs to both “learn” the map and “achieve the goal”
– Integrates search/planning and learning
• Exploration/exploitation tradeoff
– Should you bother learning more of the map when you have already found a way of satisfying the goal?
– (At the end of elementary school, should you go ahead and “exploit” the 5 years of knowledge you gained by taking up a job, or explore a bit more by doing high school, college, grad school, post-doc…?)
Most relevant sub-area: Reinforcement learning
1/17
Project 0 due on Thursday… Makeup class on Friday (TIME?)… Tuesday’s class time will be an optional recitation for the project.
Given a state space of size n (or 2^v, where v is the number of state variables):
– the single-state problem searches for a path in a graph of size n (2^v)
– the multiple-state problem searches for a path in a graph of size 2^n (2^(2^v))
– the contingency problem searches for a sub-graph in a graph of size 2^n (2^(2^v))
The utility of eyes (sensors) is reflected in the size of the effective search space!
In general, a sub-graph rather than a tree (loops may be needed; consider re-closing a faulty door).
2^n is the EVIL that every CS student’s nightmares should be made of.
The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit (i.e., generate only the part of the graph that is absolutely needed to get the optimal path). This is VERY important since, for most problems, the graphs are ginormous, tending to infinite.
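Keeping the graph implicit amounts to passing a successor function instead of an adjacency structure; a minimal sketch (the integer-graph example is illustrative):

```python
from collections import deque

# Keep the graph implicit: generate successors on demand instead of
# building the (possibly infinite) graph up front.

def bfs_implicit(start, successors, is_goal):
    """BFS over an implicitly defined graph; returns a shortest path."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):   # expand only when needed
            if child not in seen:
                seen.add(child)
                frontier.append(path + [child])
    return None

# Example on an infinite graph: integers, with edges n -> n+1 and n -> 2n.
path = bfs_implicit(1, lambda n: [n + 1, 2 * n], lambda n: n == 10)
```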
Example: Robotic Path-Planning
• States: free-space regions
• Operators: movement to neighboring regions
• Goal test: reaching the goal region
• Path cost: number of movements (distance traveled)
[Figure: grid map with initial position I and goal region G]
General Search
Search algorithms differ based on the specific queuing function they use
All search algorithms must do goal-test only when the node is picked up for expansion
We typically analyze properties of search algorithms on uniform trees --with uniform branching factor b and goal depth d (tree itself may go to depth dt )
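The point about queuing functions can be made concrete: a sketch where BFS and DFS share one loop and differ only in how newly generated children enter the frontier (tree search only, no repeated-state check; the example tree is illustrative):

```python
from collections import deque

# One generic search; strategies differ only in the queuing function,
# i.e., where newly generated nodes are inserted into the frontier.

def generic_search(start, successors, is_goal, enqueue):
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if is_goal(node):                 # goal-test at expansion time
            return node
        enqueue(frontier, successors(node))
    return None

bfs = lambda q, kids: q.extend(kids)                 # FIFO: children at the back
dfs = lambda q, kids: q.extendleft(reversed(kids))   # LIFO: children at the front

# Usage on a small tree (no cycles, so tree search is safe):
tree = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
found = generic_search("A", lambda n: tree[n], lambda n: n == "G", bfs)
```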
Evaluating search algorithms
For the tree below, b = 3, d = 4.
Breadth-first search on a uniform tree with b = 10. Assume 1000 nodes expanded/sec and 100 bytes/node.
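The arithmetic behind this kind of table can be reproduced; a sketch using the slide’s figures (1000 nodes/sec, 100 bytes/node) at a few illustrative depths:

```python
# Back-of-the-envelope cost of BFS on a uniform tree with b = 10,
# expanding 1000 nodes/sec at 100 bytes/node (figures from the slide).

b, rate, node_size = 10, 1_000, 100

for d in [2, 4, 8]:
    nodes = sum(b**i for i in range(d + 1))   # 1 + b + ... + b^d
    secs = nodes / rate
    mem_mb = nodes * node_size / 1e6
    print(f"d={d}: {nodes:,} nodes, {secs:,.1f} s, {mem_mb:,.1f} MB")
# Even at modest depths the memory (all frontier/expanded nodes kept)
# grows as fast as the time does.
```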
Qn: Is there a way of getting linear memory search that is complete and optimal?
The search is “complete” now (since there is a finite space to be explored), but still suboptimal.
Num iterations: (d+1)
Asymptotic ratio of the number of nodes expanded by IDDFS vs. DFS: (b+1)/(b-1), which approaches 1 when b is large.
IDDFS: Review
Example 1: a tree-shaped space with nodes A, B, C, D, G (G is the goal).
DFS: A, B, G
BFS: A, B, C, D, G
IDDFS: (A), (A, B, G)
Example 2: a graph-shaped space with nodes A, B, G, where A and B have edges back to each other.
DFS: A, B, A, B, A, B, … (can loop forever among repeated states)
BFS: A, B, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
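A minimal IDDFS sketch on the cyclic example (the graph encoding and function names are illustrative):

```python
# Iterative-deepening DFS: repeated depth-limited DFS with a growing limit.
# Linear memory like DFS; complete, and optimal for unit step costs.

def depth_limited(node, successors, is_goal, limit):
    if is_goal(node):
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        sub = depth_limited(child, successors, is_goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iddfs(start, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):   # (d+1) iterations to reach depth d
        path = depth_limited(start, successors, is_goal, limit)
        if path is not None:
            return path
    return None

# The cyclic graph from Example 2: plain DFS would loop A, B, A, B, ...,
# but the depth bound cuts the cycle off.
graph = {"A": ["B"], "B": ["A", "G"], "G": []}
path = iddfs("A", lambda n: graph[n], lambda n: n == "G")
```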
Search on undirected graphs or directed graphs with cycles… Cycles galore…
Graph (instead of tree) Search: Handling repeated nodes
Main points:
– Repeated expansions are a bigger issue for DFS than for BFS or IDDFS.
– Trying to remember all previously expanded nodes and comparing new nodes against them is infeasible:
  – space becomes exponential
  – duplicate checking can also be exponential
– Partial reduction in repeated expansion can be done by:
  – checking whether any children of a node n have the same state as the parent of n
  – checking whether any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
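The ancestor-check idea can be sketched as follows (illustrative names; this prunes cycles along the current path but not all repeats across different branches):

```python
# Partial repeated-state pruning: skip a child whose state already
# appears among its ancestors (at most d checks for a node at depth d).

def dfs_ancestor_check(node, successors, is_goal, ancestors=()):
    if is_goal(node):
        return [node]
    for child in successors(node):
        if child in ancestors or child == node:   # parent/ancestor check
            continue                              # prune the repeat
        sub = dfs_ancestor_check(child, successors, is_goal,
                                 ancestors + (node,))
        if sub is not None:
            return [node] + sub
    return None

# On an undirected edge A-B, plain DFS would bounce A, B, A, B, ...;
# the ancestor check breaks the cycle.
graph = {"A": ["B"], "B": ["A", "G"], "G": []}
path = dfs_ancestor_check("A", lambda n: graph[n], lambda n: n == "G")
```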