
Page 1:

Reinforcement Learning: Q-Learning

1

10-601 Introduction to Machine Learning

Matt Gormley
Lecture 24
Apr. 13, 2020

Machine Learning Department
School of Computer Science
Carnegie Mellon University

Page 2:

Reminders

- Homework 8: Reinforcement Learning
  - Out: Fri, Apr 10
  - Due: Wed, Apr 22 at 11:59pm
- Today's In-Class Poll
  - http://poll.mlcourse.org

3

Page 3:

VALUE ITERATION

5

Page 4:

Value Iteration

11

Variant 2: without Q(s,a) table

Page 5:

Value Iteration

12

Variant 1: with Q(s,a) table

Page 6:

Synchronous vs. Asynchronous Value Iteration

13

asynchronous updates: compute and update V(s) for each state one at a time

synchronous updates: compute all the fresh values of V(s) from all the stale values of V(s), then update V(s) with the fresh values
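To make the distinction concrete, here is a minimal NumPy sketch of both update styles; it is not code from the lecture, and the transition tensor P[s, a, s'], reward matrix R[s, a], and discount gamma are assumed inputs:

```python
import numpy as np

def value_iteration(P, R, gamma, synchronous=True, tol=1e-6):
    """P[s, a, s'] = transition probability, R[s, a] = expected reward.
    Returns the estimated value function V (assumes a tabular MDP)."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        if synchronous:
            # Compute ALL fresh values from the stale V, then swap them in.
            Q = R + gamma * (P @ V)            # Q[s, a]
            V_new = Q.max(axis=1)
            delta = np.abs(V_new - V).max()
            V = V_new
        else:
            # Asynchronous: update V(s) in place, one state at a time,
            # so later states already see the freshly updated values.
            delta = 0.0
            for s in range(n_states):
                v_old = V[s]
                V[s] = max(R[s, a] + gamma * (P[s, a] @ V)
                           for a in range(n_actions))
                delta = max(delta, abs(V[s] - v_old))
        if delta < tol:                        # simple stopping criterion
            return V
```

The only difference between the two variants is whether the max-backup reads stale or freshly updated values of V.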

Page 7:

Value Iteration Convergence

14

Notes on the (very abridged) convergence theorem shown on the slide:
- Provides a reasonable stopping criterion for value iteration
- Often the greedy policy converges well before the value function does
- Holds for both asynchronous and synchronous updates
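The theorem itself appears on the slide only as an image. One standard form of the bound it abridges, stated here as an assumption rather than quoted from the slide:

\[
\lVert V_{k+1} - V_k \rVert_\infty < \epsilon
\;\Longrightarrow\;
\lVert V_{k+1} - V^{*} \rVert_\infty < \frac{\epsilon\,\gamma}{1-\gamma},
\]

and the greedy policy \(\pi\) extracted from \(V_{k+1}\) satisfies \(\lVert V^{\pi} - V^{*} \rVert_\infty < \frac{2\,\epsilon\,\gamma}{1-\gamma}\), which is why greedy-policy convergence can precede value-function convergence.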

Page 8:

Value Iteration Variants

15

Question: True or False: The value iteration algorithm shown below is an example of synchronous updates. [The algorithm itself appears only as an image on the slide.]

Page 9:

POLICY ITERATION

16

Page 10:

Policy Iteration

18

Page 11:

Policy Iteration

19

Annotations on the algorithm shown on the slide:
- Computing the value function for a fixed policy is easy: it is a system of |S| equations in |S| variables
- The improvement step takes the greedy policy w.r.t. the current value function
- The greedy policy might remain the same for a particular state if there is no better action
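A minimal sketch of these two alternating steps, under the same assumed P[s, a, s'] and R[s, a] inputs as the value-iteration sketch above; evaluation is the |S|-by-|S| linear solve, improvement is the greedy step:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    """Alternate exact policy evaluation (a linear solve) with greedy
    policy improvement until the policy stops changing."""
    n_states, n_actions = R.shape
    pi = np.zeros(n_states, dtype=int)          # arbitrary initial policy
    while True:
        # Evaluation: V = R_pi + gamma P_pi V  <=>  (I - gamma P_pi) V = R_pi,
        # i.e. the system of |S| equations in |S| variables noted above.
        P_pi = P[np.arange(n_states), pi]       # (|S|, |S|) transitions under pi
        R_pi = R[np.arange(n_states), pi]       # (|S|,)  rewards under pi
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Improvement: greedy policy w.r.t. the current value function.
        pi_new = (R + gamma * (P @ V)).argmax(axis=1)
        if np.array_equal(pi_new, pi):          # no state found a better action
            return pi, V
        pi = pi_new
```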

Page 12:

Policy Iteration Convergence

20

In-Class Exercise: How many policies are there for a finite-sized state and action space?

In-Class Exercise: Suppose policy iteration is shown to improve the policy at every iteration. Can you bound the number of iterations it will take to converge? If yes, what is the bound? If no, why not?

Page 13:

Value Iteration vs. Policy Iteration

- Value iteration requires O(|A| |S|^2) computation per iteration
- Policy iteration requires O(|A| |S|^2 + |S|^3) computation per iteration (the |S|^3 term is the linear solve in policy evaluation)
- In practice, policy iteration converges in fewer iterations

21

Page 14:

Learning Objectives
Reinforcement Learning: Value and Policy Iteration

You should be able to…
1. Compare the reinforcement learning paradigm to other learning paradigms
2. Cast a real-world problem as a Markov Decision Process
3. Depict the exploration vs. exploitation tradeoff via MDP examples
4. Explain how to solve a system of equations using fixed point iteration (a small sketch follows this list)
5. Define the Bellman Equations
6. Show how to compute the optimal policy in terms of the optimal value function
7. Explain the relationship between a value function mapping states to expected rewards and a value function mapping state-action pairs to expected rewards
8. Implement value iteration
9. Implement policy iteration
10. Contrast the computational complexity and empirical convergence of value iteration vs. policy iteration
11. Identify the conditions under which the value iteration algorithm will converge to the true value function
12. Describe properties of the policy iteration algorithm

22
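Objective 4 above is easy to demonstrate in a few lines; a generic sketch (not the lecture's example), solving x = Ax + b by repeated substitution:

```python
import numpy as np

def fixed_point_iteration(F, x0, tol=1e-10, max_iters=1000):
    """Iterate x <- F(x) until successive iterates stop changing."""
    x = x0
    for _ in range(max_iters):
        x_new = F(x)
        if np.abs(x_new - x).max() < tol:
            return x_new
        x = x_new
    return x

# Solve x = A x + b; the iteration converges when the spectral radius
# of A is < 1 -- exactly the gamma * P_pi situation in policy evaluation.
A = np.array([[0.0, 0.5], [0.25, 0.0]])
b = np.array([1.0, 2.0])
x = fixed_point_iteration(lambda x: A @ x + b, np.zeros(2))
print(x)   # agrees with np.linalg.solve(np.eye(2) - A, b)
```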

Page 15:

Today's lecture is brought to you by the letter….

23

Page 16:

Q-LEARNING

24

Page 17:

Q-Learning

Whiteboard:
- Motivation: What if we have zero knowledge of the environment?
- Q-Function: Expected Discounted Reward

25

Page 18:

Example: Robot Localization

26
Figure from Tom Mitchell

Page 19:

Q-Learning

Whiteboard:
- Q-Learning Algorithm
  - Case 1: Deterministic Environment
  - Case 2: Nondeterministic Environment
- Convergence Properties
- Exploration Insensitivity
- Ex: Re-ordering Experiences
- ε-greedy Strategy

27
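As a concrete companion to the whiteboard portion, a minimal sketch of tabular Q-learning with an ε-greedy behavior policy; the Gym-style env interface (reset/step) is an assumption, not something defined in the lecture:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy.
    Assumes env.reset() -> s and env.step(a) -> (s_next, r, done)."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore with probability epsilon
            if rng.random() < epsilon:
                a = rng.integers(n_actions)
            else:
                a = int(Q[s].argmax())
            s_next, r, done = env.step(a)
            # TD update toward r + gamma * max_a' Q(s', a')
            target = r + gamma * (0.0 if done else Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

For Case 1 (a deterministic environment) the update simplifies to the α = 1 rule Q(s, a) ← r + γ max_a' Q(s', a'); a learning rate α < 1 is what handles Case 2 (nondeterministic transitions).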

Page 20:

Reordering Experiences

28

Page 21:

Designing State Spaces

29

Q: Do we have to retrain our RL agent every time we change our state space?

A: Yes. But whether your state space changes from one setting to another is determined by your design of the state representation.

Two examples:
- State Space A: <x, y> position on map, e.g. s_t = <74, 152>
- State Space B: window of pixel colors centered at the current Pac-Man location, e.g.
  s_t = [[0, 1, 0],
         [0, 0, 0],
         [1, 1, 1]]
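A small sketch of the contrast, with a hypothetical occupancy-grid world; the design point is that representation B is defined relative to the agent, so it keeps the same form across different maps:

```python
import numpy as np

def state_a(x, y):
    """State Space A: absolute position, tied to one specific map."""
    return (x, y)

def state_b(world, x, y, k=1):
    """State Space B: (2k+1) x (2k+1) window of cells centered on the
    agent; zero-padding lets the window hang off the map edge."""
    padded = np.pad(world, k, constant_values=0)
    return padded[x:x + 2 * k + 1, y:y + 2 * k + 1]

# Hypothetical 4x5 grid: 1 = wall, 0 = free cell (chosen so the window
# below reproduces the slide's example matrix).
world = np.array([[0, 1, 0, 0, 1],
                  [0, 0, 0, 1, 0],
                  [1, 1, 1, 0, 0],
                  [0, 0, 0, 0, 0]])
print(state_a(1, 1))         # (1, 1)
print(state_b(world, 1, 1))  # [[0,1,0],[0,0,0],[1,1,1]]
```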

Page 22:

DEEP RL EXAMPLES

30

Page 23:

TD-Gammon → AlphaGo

Learning to beat the masters at board games

31

"…the world's top computer program for backgammon, TD-GAMMON (Tesauro, 1992, 1995), learned its strategy by playing over one million practice games against itself…"

(Mitchell, 1997)

[Two images, labeled THEN (TD-Gammon) and NOW (AlphaGo)]

Page 24:

Playing Atari with Deep RL

[Figure: agent-environment loop, with the agent emitting action A_t and receiving observation O_t and reward R_t]

- Setup: RL system observes the pixels on the screen
- It receives rewards as the game score
- Actions decide how to move the joystick / buttons

32
Figures from David Silver (Intro RL lecture)

Page 25:

Playing Atari with Deep RL

Videos:
- Atari Breakout: https://www.youtube.com/watch?v=V1eYniJ0Rnk
- Space Invaders: https://www.youtube.com/watch?v=ePv0Fs9cGgU

33

Figure: Screen shots from five Atari 2600 games. (Left-to-right) Pong, Breakout, Space Invaders, Seaquest, Beam Rider.

Figures from Mnih et al. (2013)

Page 26:

Playing Atari with Deep RL

34

Figure: Screen shots from five Atari 2600 games. (Left-to-right) Pong, Breakout, Space Invaders, Seaquest, Beam Rider.

Figures from Mnih et al. (2013)

Page 27:

Deep Q-Learning

35

Question: What if our state space S is too large to represent with a table?

Examples:
- s_t = pixels of a video game
- s_t = continuous values of the sensors in a manufacturing robot
- s_t = sensor output from a self-driving car

Answer: Use a parametric function to approximate the table entries.

Key Idea:
1. Use a neural network Q(s, a; θ) to approximate Q*(s, a)
2. Learn the parameters θ via SGD with training examples <s_t, a_t, r_t, s_{t+1}>
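A minimal sketch of that key idea; the framework choice (PyTorch) is ours, not the lecture's. A small network maps a state to one q-value per action, and each experience tuple yields one SGD step on the squared TD error:

```python
import torch
import torch.nn as nn

n_state_features, n_actions = 8, 4        # hypothetical sizes
# Q(s, .; theta): state in, one q-value per action out
q_net = nn.Sequential(nn.Linear(n_state_features, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
gamma = 0.9

def dqn_update(s, a, r, s_next, done):
    """One SGD step on a single <s_t, a_t, r_t, s_{t+1}> example."""
    with torch.no_grad():                 # the target is treated as a constant
        target = r + gamma * (0.0 if done else q_net(s_next).max().item())
    pred = q_net(s)[a]                    # Q(s_t, a_t; theta)
    loss = (pred - target) ** 2           # squared TD error: a regression loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The state → all-action-q-values output shape used here is one of the two function-approximator shapes contrasted on the next slide.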

Page 28:

Deep Q-Learning

Whiteboard:
- Strawman loss function (i.e. what we cannot compute)
- Approximating the Q function with a neural network
- Approximating the Q function with a linear model
- Deep Q-Learning
- Function approximators (<state, action_i> → q-value vs. state → all action q-values)

36

Page 29:

not-so-Deep Q-Learning

37

Page 30:

Experience Replay

- Problems with online updates for Deep Q-learning:
  - not i.i.d. as SGD would assume
  - quickly forget rare experiences that might later be useful to learn from
- Uniform Experience Replay (Lin, 1992):
  - Keep a replay memory D = {e_1, e_2, …, e_N} of the N most recent experiences e_t = <s_t, a_t, r_t, s_{t+1}>
  - Alternate two steps:
    1. Repeat T times: randomly sample e_i from D and apply a Q-Learning update to e_i
    2. Agent selects an action using an epsilon-greedy policy to receive a new experience, which is added to D
- Prioritized Experience Replay (Schaul et al., 2016):
  - similar to Uniform ER, but sample so as to prioritize experiences with high error

38
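A minimal sketch of the uniform replay memory D, with the slide's two alternating steps noted in comments; the dqn_update helper from the earlier sketch is assumed:

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity buffer D of the N most recent experiences."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)    # oldest experiences fall off

    def add(self, experience):                  # e = (s, a, r, s_next, done)
        self.buffer.append(experience)

    def sample(self, batch_size):
        # Uniform ER: every stored experience equally likely; Prioritized
        # ER would instead weight the draw by TD error.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# The alternation from the slide, as pseudocode comments:
#   1. repeat T times: for e in memory.sample(1): dqn_update(*e)
#   2. act epsilon-greedily in the environment; memory.add(new_experience)
```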

Page 31:

AlphaGo
Game of Go (围棋)
- 19x19 board
- Players alternately play black/white stones
- Goal is to fully encircle the largest region on the board
- Simple rules, but extremely complex game play

39
Figure from Silver et al. (2016)

Page 32:

AlphaGo
- State space is too large to represent explicitly, since the # of sequences of moves is O(b^d)
  - Go: b=250 and d=150
  - Chess: b=35 and d=80
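A quick back-of-the-envelope check of those numbers (our arithmetic, not the slide's):

\[
250^{150} = 10^{150 \log_{10} 250} \approx 10^{360} \ \text{(Go)}, \qquad
35^{80} = 10^{80 \log_{10} 35} \approx 10^{123} \ \text{(chess)},
\]

so even the chess game tree, let alone Go's, rules out any explicit table over move sequences.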

- Key idea:
  - Define a neural network to approximate the value function
  - Train by policy gradient

40
Figure from Silver et al. (2016)

Page 33:

AlphaGo

- Results of a tournament
- From Silver et al. (2016): "a 230 point gap corresponds to a 79% probability of winning"

41
Figure from Silver et al. (2016)
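The quoted figure is consistent with the standard Elo win-probability model (our derivation, not wording from the paper):

\[
P(\text{win}) = \frac{1}{1 + 10^{-\Delta/400}}
= \frac{1}{1 + 10^{-230/400}}
\approx \frac{1}{1.266}
\approx 0.79 .
\]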

Page 34:

Learning Objectives
Reinforcement Learning: Q-Learning

You should be able to…
1. Apply Q-Learning to a real-world environment
2. Implement Q-learning
3. Identify the conditions under which the Q-learning algorithm will converge to the true value function
4. Adapt Q-learning to Deep Q-learning by employing a neural network approximation to the Q function
5. Describe the connection between Deep Q-Learning and regression

42

Page 35:

Q-Learning

Question: For the R(s,a) values shown on the arrows below, which are the corresponding Q*(s,a) values? Assume discount factor γ = 0.5.

46

Answer:

[Figure: a small deterministic MDP with R(s,a) labels on the arrows, followed by several candidate grids of Q*(s,a) values as answer options; the grids do not survive this transcript intact.]
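The recurrence needed to fill in such a grid is the Bellman optimality equation for Q* in a deterministic environment, with the slide's γ = 0.5:

\[
Q^{*}(s,a) = R(s,a) + \gamma \max_{a'} Q^{*}\bigl(\delta(s,a), a'\bigr), \qquad \gamma = 0.5,
\]

where δ(s,a) is the successor state. For example (an illustrative number, not necessarily one of the slide's options), an action with immediate reward 0 leading to a state whose best q-value is 8 gets Q* = 0 + 0.5 · 8 = 4.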

Page 36:

ML Big Picture

47

Learning Paradigms: What data is available and when? What form of prediction?
- supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, active learning, imitation learning, domain adaptation, online learning, density estimation, recommender systems, feature learning, manifold learning, dimensionality reduction, ensemble learning, distant supervision, hyperparameter optimization

Problem Formulation: What is the structure of our output prediction?
- boolean → Binary Classification
- categorical → Multiclass Classification
- ordinal → Ordinal Classification
- real → Regression
- ordering → Ranking
- multiple discrete → Structured Prediction
- multiple continuous → (e.g. dynamical systems)
- both discrete & cont. → (e.g. mixed graphical models)

Theoretical Foundations: What principles guide learning?
- probabilistic, information theoretic, evolutionary search, ML as optimization

Facets of Building ML Systems: How to build systems that are robust, efficient, adaptive, effective?
1. Data prep
2. Model selection
3. Training (optimization / search)
4. Hyperparameter tuning on validation data
5. (Blind) Assessment on test data

Big Ideas in ML: Which are the ideas driving development of the field?
- inductive bias, generalization / overfitting, bias-variance decomposition, generative vs. discriminative, deep nets, graphical models, PAC learning, distant rewards

Application Areas: Key challenges? NLP, Speech, Computer Vision, Robotics, Medicine, Search

Page 37:

Learning Paradigms

48