Planning Under Uncertainty:
POMDPs
McGill COMP 765
Oct 19th, 2017
Uncertainty is king
• Real robots in the field do not know their state exactly – this is a universal truth
• Our control and planning methods so far have assumed knowledge of the state, x!
• Was this a giant waste of time?
• Robots can be designed such that uncertainty is quite low in practice: we are sometimes safe
• This will always be a limitation, causing “dumb” actions in some cases.
Trajectory control
• When state information is not available exactly
• Recall, even giving a single best estimate for localization is difficult
• Now, consider what our Optimal Control algorithms required:
• Estimate the future outcomes (as a cost-to-go Value function) over the entire planning horizon so we can select an optimal action
• Do this without precise knowledge of where we are
Planning to build maps
• Now there is uncertainty both in the robot’s current position within the map and in our estimate of the map’s shape
• Multiple objectives to balance:
• Maintain the robot’s positional accuracy
• Explore new unseen space
• Ensure that the map we create is as accurate as possible (e.g., by re-visiting areas)
![Page 6: Planning Under Uncertainty: POMDPsdmeger/comp765/slides/COMP765... · •Partially Observable Markov Decision Processes: •The natural extension of our Optimal Control ideas •Learning](https://reader030.vdocuments.us/reader030/viewer/2022040307/5ed099dce2e54a54101262ba/html5/thumbnails/6.jpg)
Problem formulations
• There is not one single tidy objective in this world; rather, it touches many problem-specific formalisms:
• Maximize reward under uncertainty from sensing
• Maximize reward under model uncertainty
• Learn the model as its own end goal
• Ensure we have gathered all possible observations
• Act conservatively: maximize the minimum possible reward
• And many more…
![Page 7: Planning Under Uncertainty: POMDPsdmeger/comp765/slides/COMP765... · •Partially Observable Markov Decision Processes: •The natural extension of our Optimal Control ideas •Learning](https://reader030.vdocuments.us/reader030/viewer/2022040307/5ed099dce2e54a54101262ba/html5/thumbnails/7.jpg)
Outline of next few lectures
• Partially Observable Markov Decision Processes:
• The natural extension of our Optimal Control ideas
• Learning to build uncertainty-aware models from data
• Planning under uncertainty
• Robotic applications and current research
![Page 8: Planning Under Uncertainty: POMDPsdmeger/comp765/slides/COMP765... · •Partially Observable Markov Decision Processes: •The natural extension of our Optimal Control ideas •Learning](https://reader030.vdocuments.us/reader030/viewer/2022040307/5ed099dce2e54a54101262ba/html5/thumbnails/8.jpg)
Recall MDP Formulation
• Defined by a tuple (S, A, T, R):
• S, a set of states
• A, a set of actions
• T, a transition model that describes how we move between states
• R, a reward function
• We defined Optimal Control as the search for a policy that maximizes expected reward over a known MDP
• We saw algorithms: Value Iteration, LQR, DDP, and LQR tree search
• Each one selects the action that maximizes the Value function (cost-to-go) for every state
POMDPs
In POMDPs we apply the very same idea as in MDPs.
Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states.
Let b be the belief of the agent about the state under consideration.
POMDPs compute a value function over belief space:
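A standard form of this belief-space backup (a reconstruction, with discount γ) is:

$$V_T(b) = \max_u \Big[ r(b, u) + \gamma \int V_{T-1}(b')\, p(b' \mid u, b)\, db' \Big]$$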
Problems
Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution.
This is problematic, since the space of probability distributions is continuous.
Additionally, we have to deal with the huge complexity of belief spaces.
For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.
An Illustrative Example
[Figure: a two-state example with states x1, x2. Terminal actions u1, u2 with payoffs r(x1, u1) = -100, r(x2, u1) = +100, r(x1, u2) = +100, r(x2, u2) = -50. Sensing action u3 with transitions p(x1′ | x1, u3) = 0.2, p(x2′ | x1, u3) = 0.8, p(x1′ | x2, u3) = 0.8, p(x2′ | x2, u3) = 0.2. Measurements z1, z2 with p(z1 | x1) = 0.7, p(z2 | x1) = 0.3, p(z1 | x2) = 0.3, p(z2 | x2) = 0.7.]
The Parameters of the Example
The actions u1 and u2 are terminal actions.
The action u3 is a sensing action that potentially leads to a state transition.
The horizon is finite, with T = 1.
Payoff in POMDPs
In MDPs, the payoff (or return) depended on the state of the system.
In POMDPs, however, the true state is not exactly known.
Therefore, we compute the expected payoff by integrating over all states:
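$$r(b, u) = E_x\big[r(x, u)\big] = \int r(x, u)\, b(x)\, dx$$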
Payoffs in Our Example (1)
If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100.
If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100.
In between, the payoff is the linear combination of the extreme values, weighted by the probabilities:
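Writing p1 for the belief in x1, and using the example’s payoffs (with r(x1, u2) = +100 and r(x2, u2) = -50, as reconstructed above):

$$r(b, u_1) = -100\, p_1 + 100\,(1 - p_1) = 100 - 200\, p_1$$

$$r(b, u_2) = 100\, p_1 - 50\,(1 - p_1) = 150\, p_1 - 50$$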
Payoffs in Our Example (2)
[Figure: the payoff functions r(b, u1) and r(b, u2) plotted over the belief space]
The Resulting Policy for T=1
Given we have a finite POMDP with T=1, we would use V1(b) to determine the optimal policy.
In our example, the optimal policy for T = 1 is:
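Equating the two payoff lines above (100 - 200 p1 = 150 p1 - 50) gives the switch point p1 = 3/7, so:

$$\pi_1(b) = \begin{cases} u_1 & \text{if } p_1 \le 3/7 \\ u_2 & \text{if } p_1 > 3/7 \end{cases}$$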
This is the upper thick graph in the diagram.
Piecewise Linearity, Convexity
The resulting value function V1(b) is the maximum of the three payoff functions at each point.
It is piecewise linear and convex.
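That is, assuming the constant sensing payoff r(b, u3) = -1 used in the standard version of this example:

$$V_1(b) = \max_u\, r(b, u) = \max\big\{\, 100 - 200\, p_1,\;\; 150\, p_1 - 50,\;\; -1 \,\big\}$$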
Pruning
If we carefully consider V1(b), we see that only the first two components contribute.
The third component can therefore safely be pruned away from V1(b).
Increasing the Time Horizon
Assume the robot can make an observation before deciding on an action.
[Figure: V1(b)]
Increasing the Time Horizon
Assume the robot can make an observation before deciding on an action.
Suppose the robot perceives z1, for which p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3.
Given the observation z1 we update the belief using Bayes’ rule:
$$p_1' = \frac{0.7\, p_1}{p(z_1)}, \qquad p_2' = \frac{0.3\,(1 - p_1)}{p(z_1)}$$

$$p(z_1) = 0.7\, p_1 + 0.3\,(1 - p_1) = 0.4\, p_1 + 0.3$$
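As a minimal Python sketch of this update for the two-state example (the measurement model is the one reconstructed in the figure above):

```python
# Assumed measurement model from the example: (p(z | x1), p(z | x2)).
P_Z_GIVEN_X = {"z1": (0.7, 0.3), "z2": (0.3, 0.7)}

def belief_update(p1: float, z: str) -> float:
    """Bayes' rule: posterior Pr(x1) after observing z, given prior Pr(x1) = p1."""
    pz_x1, pz_x2 = P_Z_GIVEN_X[z]
    p_z = pz_x1 * p1 + pz_x2 * (1.0 - p1)  # normalizer p(z)
    return pz_x1 * p1 / p_z

print(belief_update(0.5, "z1"))  # uniform prior + observation z1 -> 0.7
```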
Value Function
[Figure: the belief update b′(b | z1) maps V1(b) into V1(b | z1)]
Increasing the Time Horizon
Assume the robot can make an observation before deciding on an action.
Suppose the robot perceives z1, for which p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3.
Given the observation z1 we update the belief using Bayes’ rule.
Thus V1(b | z1) is given by:
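Substituting the updated belief into the payoff lines (a reconstruction from the example’s payoffs):

$$V_1(b \mid z_1) = \frac{1}{p(z_1)} \max\big\{ -70\, p_1 + 30\,(1 - p_1),\;\; 70\, p_1 - 15\,(1 - p_1) \big\}$$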
Expected Value after Measuring
Since we do not know in advance what the next measurement will be, we have to compute the expected belief:

$$\bar{V}_1(b) = E_z\big[V_1(b \mid z)\big] = \sum_{i=1}^{2} p(z_i)\, V_1(b \mid z_i) = \sum_{i=1}^{2} p(z_i)\, V_1\!\left(\frac{p(z_i \mid x_1)\, p_1}{p(z_i)}\right) = \sum_{i=1}^{2} V_1\big(p(z_i \mid x_1)\, p_1\big)$$
Resulting Value Function
The four possible combinations yield the following function, which can then be simplified and pruned.
Value Function
[Figure: the expectation combines p(z1) V1(b | z1) and p(z2) V1(b | z2) under the belief update b′(b | z1)]
State Transitions (Prediction)
When the agent selects u3, its state potentially changes.
When computing the value function, we have to take these potential state changes into account.
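Using the example’s transition probabilities for u3 (as reconstructed above), the predicted belief is:

$$p_1' = 0.2\, p_1 + 0.8\,(1 - p_1)$$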
Resulting Value Function after executing u3
Taking the state transitions into account, we finally obtain:
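A reconstruction: each linear piece of the expected value function is composed with the prediction above, i.e.

$$\bar{V}_1(b \mid u_3) = \bar{V}_1\big(0.2\, p_1 + 0.8\,(1 - p_1)\big)$$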
Value Function after executing u3
Value Function for T=2
Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain (after pruning):
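A reconstruction, assuming the sensing payoff r(b, u3) = -1 used above:

$$V_2(b) = \max\big\{\, r(b, u_1),\;\; r(b, u_2),\;\; r(b, u_3) + \bar{V}_1(b \mid u_3) \,\big\}$$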
Graphical Representation of V2(b)
[Figure: V2(b) over the belief space, with a region where u1 is optimal, a region where u2 is optimal, and an unclear middle region where the outcome of the measurement is important]
Deep Horizons and Pruning
We have now completed a full backup in belief space.
This process can be applied recursively.
The value functions for T = 10 and T = 20 are:
[Figure: pruned value functions for T = 10 and T = 20]
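To make the recursive backup concrete, here is a minimal Python sketch for the two-state example. It assumes the reconstructed parameters above, including a sensing payoff of -1 for u3 and no discounting, and it prunes approximately on a belief grid rather than with the exact linear-program test. With pruning the set should stay small (the counts quoted below give 12 linear components at T = 20), while the un-pruned count explodes.

```python
from itertools import product

# Reconstructed parameters (see the example figure above); p1 = Pr(x1).
# An alpha-vector (a1, a2) represents the line a1*p1 + a2*(1 - p1).
R = {"u1": (-100.0, 100.0), "u2": (100.0, -50.0), "u3": (-1.0, -1.0)}
P_Z_X = {"z1": (0.7, 0.3), "z2": (0.3, 0.7)}  # p(z | x1), p(z | x2)
P_X_X = ((0.2, 0.8), (0.8, 0.2))              # row i: p(x'_j | x_i, u3)
OBS = ("z1", "z2")

def prune(alphas, grid=1001):
    """Approximate pruning: keep vectors maximal at some sampled belief.
    (The exact test uses a linear program; a grid suffices for a sketch.)"""
    keep = set()
    for g in range(grid):
        p1 = g / (grid - 1)
        keep.add(max(alphas, key=lambda a: a[0] * p1 + a[1] * (1.0 - p1)))
    return list(keep)

def backup(alphas):
    """One full belief-space backup: alpha-vectors of V_T from V_{T-1}."""
    new = [R["u1"], R["u2"]]  # terminal actions: immediate payoff only
    # u3: transition, observe z, then follow one V_{T-1} constraint per z.
    for choice in product(alphas, repeat=len(OBS)):
        vec = []
        for i in range(2):  # current state x_i
            v = R["u3"][i]
            for zi, z in enumerate(OBS):
                v += sum(P_X_X[i][j] * P_Z_X[z][j] * choice[zi][j]
                         for j in range(2))
            vec.append(v)
        new.append(tuple(vec))
    return prune(new)

V = prune([R["u1"], R["u2"], R["u3"]])  # V_1: immediate payoffs only
for T in range(2, 21):
    V = backup(V)
print(len(V), "alpha-vectors in the pruned V_20")
```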
Why Pruning is Essential
Each update introduces additional linear components to V.
Each measurement squares the number of linear components.
Thus, an un-pruned value function for T = 20 includes more than 10^547,864 linear functions.
At T = 30 we have 10^561,012,337 linear functions.
The pruned value function at T = 20, in comparison, contains only 12 linear components.
This combinatorial explosion of linear components in the value function is the major reason why exact POMDPs are impractical for most applications.
POMDP Summary
POMDPs compute the optimal action in partially observable, stochastic domains.
For finite horizon problems, the resulting value functions are piecewise linear and convex.
In each iteration the number of linear constraints grows exponentially.
Exact POMDP solutions have so far only been applied successfully to very small state spaces with small numbers of possible observations and actions.
POMDP Approximations
Point-based value iteration
QMDPs
AMDPs
Point-based Value Iteration
Maintains a set of example beliefs.
Only considers constraints (alpha-vectors) that maximize the value function for at least one of the examples (see the sketch below).
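A sketch of the point-based pruning rule in Python, reusing the two-state alpha-vector representation from the example above (the belief set B is a hypothetical choice):

```python
def pbvi_prune(alphas, beliefs):
    """Point-based pruning: keep only alpha-vectors (a1, a2) that are
    optimal for at least one sampled belief p1 in `beliefs`."""
    keep = set()
    for p1 in beliefs:
        keep.add(max(alphas, key=lambda a: a[0] * p1 + a[1] * (1.0 - p1)))
    return list(keep)

# Hypothetical belief set: a coarse grid over Pr(x1).
B = [0.0, 0.25, 0.5, 0.75, 1.0]
```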
Point-based Value Iteration
[Figure: exact value function vs. PBVI approximation; value functions for T = 30]
Example Application
QMDPs
QMDPs only consider state uncertainty in the first step.
After that, the world is assumed to become fully observable:
$$Q(x_i, u) = r(x_i, u) + \sum_{j=1}^{N} V(x_j)\, p(x_j \mid u, x_i)$$

$$\pi(b) = \operatorname{argmax}_u \sum_{i=1}^{N} p_i\, Q(x_i, u)$$
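A compact sketch of this rule in Python (using numpy; V would come from solving the underlying MDP with standard value iteration):

```python
import numpy as np

def qmdp_action(b, R, T, V):
    """QMDP action selection.
    b: belief over N states; R[i, u]: rewards; T[u, i, j] = p(x_j | u, x_i);
    V: MDP value function over states."""
    Q = R + np.einsum("uij,j->iu", T, V)  # Q[i, u] = r(x_i, u) + sum_j p(x_j | u, x_i) V(x_j)
    return int(np.argmax(b @ Q))          # maximize expected Q under the belief
```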
Augmented MDPs
Augmentation adds an uncertainty component to the state space; e.g., the belief is summarized by its most likely state and its entropy:

$$\bar{b} = \begin{pmatrix} \operatorname{argmax}_x\, b(x) \\ H_b(x) \end{pmatrix}, \qquad H_b(x) = -\int b(x) \log b(x)\, dx$$

Planning is performed as an MDP in the augmented state space.
The transition, observation, and payoff models have to be learned.
Coastal Navigation: plans that “hug the coast” of known features, trading path length for low localization uncertainty.
Dimensionality Reduction on Beliefs
Monte Carlo POMDPs
Represent beliefs by samples
Estimate value function on sample sets
Simulate control and observation transitions between beliefs
Derivation of POMDPs: Value Function Representation
Piecewise linear and convex:
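In the standard alpha-vector form, V is a finite set of linear constraints and their pointwise maximum:

$$V(b) = \max_{k}\ \alpha^{k} \cdot b = \max_{k} \sum_{i} \alpha^{k}_{i}\, p_i$$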
Value Iteration Backup
Belief update is a function of the prior belief, action, and measurement: b′ = B(b, u, z).
Backup in belief space:
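In standard form (a reconstruction consistent with the belief-space backup above):

$$V_T(b) = \max_u \Big[ r(b, u) + \gamma \sum_z p(z \mid u, b)\; V_{T-1}\big(B(b, u, z)\big) \Big]$$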
Derivation of POMDPs
Break into two components
Finite Measurement Space
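With finitely many measurements, the expectation over observations becomes a finite sum:

$$E_z\big[V(b \mid z)\big] = \sum_{z} p(z \mid b)\, V(b \mid z)$$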
Starting at Previous Belief
*: constant; **: linear function in the parameters of belief space
Putting it Back in
Maximization over Actions
Getting max in Front of Sum
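The step named here is the exchange that keeps the result piecewise linear: because the maximizing constraint can be chosen independently for each measurement, the max moves in front of the sum (a reconstruction):

$$\sum_{z} \max_{k} f_k(z) = \max_{(k_z)} \sum_{z} f_{k_z}(z)$$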
Final Result
Individual constraints:
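A reconstruction of the standard result: each action u and each per-measurement assignment z ↦ k(z) of a previous constraint yields one new linear constraint:

$$\alpha^{\,u, k(\cdot)}_{i} = r(x_i, u) + \gamma \sum_{z} \sum_{j} \alpha^{k(z)}_{j}\; p(z \mid x_j)\; p(x_j \mid u, x_i)$$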