
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments
Maruan Al-Shedivat 1, Trapit Bansal 2, Yuri Burda 4, Ilya Sutskever 4, Igor Mordatch 4, Pieter Abbeel 3,4

Abstract

The ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence.

Approach:

Problem. We define a setup for continuous adaptation in a realistic few-shot regime.

Algorithm. A variant of gradient-based meta-learning. Training is done on pairs of temporally shifted tasks. The agent learns to anticipate and adapt to nonstationary transitions.

Evaluation. Use nonstationary locomotion and competitive multi-agent environments. Define iterated games to consistently evaluate adaptation in the multi-agent setting.

1. Motivation

• Nonstationary worlds require fast continuous adaptation. Examples: multi-agent systems, and any machine learning system deployed in the wild.

• A step towards continual, never-ending learning [1]. A system that can keep learning and improving over a lifetime.


2. Background

Learning to learn for fast adaptation

Given a task description, a good adaptation rule must generate a model suitable for the task at hand:

\hat{\theta} = \arg\min_{\theta} \; \mathbb{E}_{T \sim P}\big[\mathcal{L}_T[g_\theta(T)]\big], \quad \text{where} \quad \mathcal{L}_T[g_\theta(T)] := \mathbb{E}_{z \sim D_T}\big[\mathcal{L}(f_\phi(z))\big], \quad \phi := g_\theta(T)

Here P is the distribution over tasks/datasets, g_θ is the adaptation rule (it takes a task description and outputs a model), and D_T is the distribution over examples for task T.

[Diagram: tasks T_1, T_2, ..., T_n, each with a training set and a test set; the adaptation rule maps each training set to a task-specific model that is evaluated on the corresponding test set.]

In few-shot classification, tasks are described by small labeled datasets.
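To make the objective above concrete, here is a minimal, self-contained sketch (illustrative only, not from the poster) of estimating E_{T∼P}[L_T[g_θ(T)]] for a toy family of 1-D regression tasks; the names `sample_task`, `adapt`, and `meta_objective` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' T ~ P: a small labeled train set plus held-out test examples (z ~ D_T)."""
    slope = rng.normal()                       # task-specific ground truth
    x = rng.uniform(-1.0, 1.0, size=20)
    y = slope * x + 0.1 * rng.normal(size=20)
    return (x[:5], y[:5]), (x[5:], y[5:])      # 5-shot train set, 15-point test set

def adapt(theta, train_set, alpha=0.5):
    """Adaptation rule g_theta: one gradient step on the task's training loss."""
    x, y = train_set
    grad = 2.0 * np.mean((theta * x - y) * x)  # d/dtheta of the mean squared error
    return theta - alpha * grad                # phi = g_theta(T)

def meta_objective(theta, num_tasks=1000):
    """Monte-Carlo estimate of E_T[ L_T[g_theta(T)] ]: test loss after adaptation."""
    losses = []
    for _ in range(num_tasks):
        train_set, (x_tst, y_tst) = sample_task()
        phi = adapt(theta, train_set)
        losses.append(np.mean((phi * x_tst - y_tst) ** 2))
    return float(np.mean(losses))

print(meta_objective(theta=0.0))               # a good theta makes this small
```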

Model-agnostic meta-learning (MAML)

• Adaptation via a gradient step on a task-specific loss:

\phi_i = g_\theta(T_i) := \theta - \alpha \nabla_\theta \mathcal{L}_{T_i}(f_\theta)

• At meta-training, search for a good parameter initialization:

\min_\theta \sum_{T_i \sim P} \mathcal{L}^{\mathrm{tst}}_{T_i}\!\Big(f_{\theta - \alpha \nabla_\theta \mathcal{L}^{\mathrm{trn}}_{T_i}(f_\theta)}\Big)

Intuition: see the figure from [2].
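A minimal sketch of the MAML update for a toy scalar model f_θ(x) = θ·x (illustrative; the toy tasks and the finite-difference meta-gradient are assumptions made for brevity, standing in for autodiff through the inner step):

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHA, BETA = 0.5, 0.05                  # inner (adaptation) and outer (meta) step sizes

def sample_task():
    """Toy task: 5 training and 15 test points from a random 1-D linear problem."""
    slope = rng.normal()
    x = rng.uniform(-1.0, 1.0, size=20)
    y = slope * x + 0.1 * rng.normal(size=20)
    return (x[:5], y[:5]), (x[5:], y[5:])

def loss(w, data):
    x, y = data
    return np.mean((w * x - y) ** 2)

def inner_step(theta, train_set):
    """phi_i = theta - alpha * grad_theta L_trn_Ti(f_theta): one adaptation step."""
    x, y = train_set
    return theta - ALPHA * 2.0 * np.mean((theta * x - y) * x)

def meta_loss(theta, batch):
    """Sum over tasks of the *test* loss of the adapted parameters (the MAML objective)."""
    return sum(loss(inner_step(theta, trn), tst) for trn, tst in batch)

theta, eps = 0.0, 1e-4
for _ in range(200):
    batch = [sample_task() for _ in range(8)]
    # Differentiate *through* the inner step; central differences stand in for autodiff.
    grad = (meta_loss(theta + eps, batch) - meta_loss(theta - eps, batch)) / (2 * eps)
    theta -= BETA * grad
print("meta-learned initialization:", theta)
```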


3. Adaptation as Inference

[Graphical model over the policy parameters θ, the adapted parameters φ, the trajectories τ_θ and τ_φ, and the task T.]

Meta-learning for RL. The data are trajectories: τ := (x1, a1, r1, . . . , xH, aH, rH).

• Treat policy parameters, tasks, and all trajectories as random variables.

• In this view, adaptation = inference and meta-learning = learning a prior.

• Brings in compositionality of probabilistic modeling:
  • Different priors and inference algorithms ⇒ new meta-learning methods (cf. [3]).
  • Different dependencies between the variables ⇒ new adaptation methods.
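As a sketch of what this view buys us, one way to write the joint distribution implied by the graphical model above; the exact conditioning structure here is assumed from the diagram rather than spelled out on the poster:

```latex
% Joint over policy parameters, trajectories, and adapted parameters for a task T.
% Assumed dependencies: theta -> tau_theta -> phi -> tau_phi, with T driving both trajectories.
p(\theta, \tau_\theta, \phi, \tau_\phi \mid T)
  = p(\theta)\;
    p(\tau_\theta \mid \theta, T)\;
    p(\phi \mid \theta, \tau_\theta)\;
    p(\tau_\phi \mid \phi, T)
```

Under this factorization, adaptation amounts to (point-estimate) inference of φ from τ_θ, and meta-learning amounts to fitting the prior p(θ).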


4. Meta-learning for Continuous Adaptation

[Diagram: a chain of tasks ... T_{i−1}, T_i, T_{i+1} ... with the corresponding adapted policies φ_{i−1}, φ_i, φ_{i+1} and trajectories τ_{i−1}, τ_i, τ_{i+1}.]

Real tasks are rarely i.i.d.: there are often relationships that we can exploit. Assuming that the tasks change over time consistently, we can learn to anticipate the changes and adapt to the temporal shifts.

Meta-learn on pairs of tasks by solving:

\min_\theta \; \mathbb{E}_{P(T_0),\, P(T_{i+1} \mid T_i)}\left[\sum_{i=1}^{L} \mathcal{L}_{T_i,T_{i+1}}(\theta)\right], \quad \text{where}

\mathcal{L}_{T_i,T_{i+1}}(\theta) := \mathbb{E}_{\tau_{i,\theta}^{1:K} \sim P_{T_i}(\tau \mid \theta)}\Big[\mathbb{E}_{\tau_{i+1,\phi} \sim P_{T_{i+1}}(\tau \mid \phi)}\big[\mathcal{L}_{T_{i+1}}(\tau_{i+1,\phi}) \;\big|\; \tau_{i,\theta}^{1:K}, \theta\big]\Big]

The algorithm

Meta-learning at training time:
• Sample a batch of task pairs, {(T_i, T_{i+1})}_{i=1}^{n}.
• Roll out trajectories τ^{1:K}_θ for T_i (the first task in each pair) using π_θ.
• Compute φ(τ^{1:K}_θ, θ, α) and roll out τ_φ for each T_{i+1} using π_φ.
• Update θ and α using the stochastic gradient of the meta-loss (see the sketch below).
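The sketch below (illustrative Python, not the authors' code) instantiates this training loop for a toy nonstationary problem: a one-step Gaussian policy π_θ = N(θ, σ²), task pairs whose reward target drifts by a fixed amount from T_i to T_{i+1}, a REINFORCE inner step for the adaptation φ, and a finite-difference outer update standing in for the stochastic meta-gradient. The step size α is kept fixed here for brevity, although the poster also meta-learns it.

```python
import numpy as np

rng = np.random.default_rng(2)
SIGMA, ALPHA, BETA, K = 0.3, 0.05, 0.02, 16   # policy std, inner/outer steps, rollouts per task

def sample_task_pair():
    """A pair (T_i, T_{i+1}): the reward target drifts by a consistent amount over time."""
    target = rng.uniform(-1.0, 1.0)
    return target, target + 0.3

def rollout(mean, target, noise):
    """K one-step 'trajectories' from the Gaussian policy N(mean, SIGMA^2)."""
    actions = mean + SIGMA * noise
    rewards = -(actions - target) ** 2
    return actions, rewards

def adapt(theta, actions, rewards):
    """phi = theta + ALPHA * REINFORCE gradient estimate from trajectories on T_i."""
    return theta + ALPHA * np.mean(rewards * (actions - theta) / SIGMA**2)

def meta_objective(theta, pairs, noises):
    """Average reward on T_{i+1} after adapting with trajectories collected on T_i."""
    total = 0.0
    for (t_i, t_next), (n_i, n_next) in zip(pairs, noises):
        actions, rewards = rollout(theta, t_i, n_i)
        phi = adapt(theta, actions, rewards)
        total += rollout(phi, t_next, n_next)[1].mean()
    return total / len(pairs)

theta, eps = 0.0, 1e-3
for _ in range(500):
    pairs = [sample_task_pair() for _ in range(8)]
    noises = [(rng.normal(size=K), rng.normal(size=K)) for _ in pairs]  # common random numbers
    # Finite differences stand in for the unbiased stochastic meta-gradient from the poster.
    g = (meta_objective(theta + eps, pairs, noises)
         - meta_objective(theta - eps, pairs, noises)) / (2 * eps)
    theta += BETA * g
print("meta-learned initialization:", theta)
```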

[Diagram: computation graph of the meta-update, with policy, loss, and trajectory nodes, the intermediate adaptation steps, and deterministic vs. stochastic gradient paths.]

Unbiased estimator of the gradient of the meta-loss

\nabla_{\theta,\alpha}\,\mathcal{L}_{T_i,T_{i+1}}(\theta,\alpha) = \mathbb{E}_{\substack{\tau_{i,\theta}^{1:K} \sim P_{T_i}(\tau \mid \theta) \\ \tau_{i+1,\phi} \sim P_{T_{i+1}}(\tau \mid \phi)}}\left[\mathcal{L}_{T_{i+1}}(\tau_{i+1,\phi})\left(\nabla_{\theta,\alpha}\log\pi_\phi(\tau_{i+1,\phi}) + \nabla_\theta\sum_{k=1}^{K}\log\pi_\theta(\tau_{i,\theta}^{k})\right)\right]

N.B.: The second term, ∇_θ Σ_{k=1}^{K} log π_θ(τ^k_{i,θ}) (highlighted on the poster), was missing in the original derivation of the policy gradients for MAML-RL, which made the gradient estimators biased [2]. A general solution for such issues is developed in [4].
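The pre-update trajectories τ^{1:K}_{i,θ} are sampled from π_θ, so θ affects the meta-loss through the sampling distribution itself; the missing term is exactly the score function that accounts for this. Below is a toy numerical illustration of that general point (not the authors' derivation; here the integrand has no explicit θ dependence at all, so the naive gradient is zero):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 0.5, 200_000

# Toy objective J(theta) = E_{x ~ N(theta, 1)}[x^2]; the true gradient is 2*theta = 1.0.
x = theta + rng.normal(size=n)
f = x ** 2

# Naive estimate: differentiate f(x) w.r.t. theta while ignoring that x was *sampled*
# from a theta-dependent distribution. Here f has no explicit theta dependence, so this is 0.
naive_grad = 0.0

# The score-function term restores the dependence through the sampling distribution,
# analogous to grad_theta sum_k log pi_theta(tau_k) in the meta-loss gradient above.
score = x - theta                       # d/dtheta log N(x; theta, 1)
unbiased_grad = np.mean(f * score)      # ~= 2*theta

print(naive_grad, unbiased_grad)
```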

Adaptation at execution time:
• Interact with the environment using π_φ. Store all trajectories and importance weights, π_θ/π_φ, in the experience buffer.
• Before each episode, compute φ via importance-corrected adaptation updates on trajectories from the buffer (a sketch follows the update equation below).

\phi_i := \theta - \alpha\,\frac{1}{K}\sum_{k=1}^{K}\left(\frac{\pi_\theta(\tau^k)}{\pi_{\phi_{i-1}}(\tau^k)}\right)\nabla_\theta\mathcal{L}(\tau^k), \qquad \tau^{1:K} \sim \text{ExperienceBuffer}
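A minimal sketch of this update for a one-step Gaussian policy (illustrative; `adapt_from_buffer` and the toy buffer are assumptions, and the per-trajectory gradient is the usual score-function estimator):

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA, ALPHA, K = 0.3, 0.05, 16

def log_prob(mean, actions):
    """Log-density of the actions under the Gaussian policy N(mean, SIGMA^2)."""
    return -0.5 * ((actions - mean) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))

def adapt_from_buffer(theta, phi_prev, actions, losses):
    """phi_i = theta - ALPHA * (1/K) * sum_k w_k * grad_theta L(tau_k), where the weight
    w_k = pi_theta(tau_k) / pi_{phi_{i-1}}(tau_k) corrects for the trajectories having
    been collected while acting with pi_{phi_{i-1}} rather than pi_theta."""
    w = np.exp(log_prob(theta, actions) - log_prob(phi_prev, actions))
    grad = losses * (actions - theta) / SIGMA**2     # score-function gradient of L(tau_k)
    return theta - ALPHA * np.mean(w * grad)

# Illustrative experience buffer: K one-step trajectories collected under pi_{phi_{i-1}}.
theta, phi_prev, target = 0.0, 0.2, 0.6
actions = phi_prev + SIGMA * rng.normal(size=K)
losses = (actions - target) ** 2                      # loss = negative reward in this toy
phi_i = adapt_from_buffer(theta, phi_prev, actions, losses)
print("adapted parameters:", phi_i)
```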


5. Environments & Setup

Iterated adaptation games

[Diagram: rounds 1, 2, 3, ..., K, each consisting of several episodes; the opponent's version increases from 1 to K across rounds while the agent adapts between rounds.]

A multi-round game in which the agent must adapt to opponents of increasing competence. The outcome of each round is a win, a loss, or a draw. Opponents are either pre-trained or also adapting.
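A schematic of the evaluation protocol (toy Python; the scalar "skill" game and the adaptation rule are stand-ins for the actual environments and the meta-learned update):

```python
import numpy as np

rng = np.random.default_rng(5)
ROUNDS, EPISODES_PER_ROUND = 10, 3

def play_episode(agent_skill, opponent_skill):
    """Toy stand-in for one episode: the outcome is driven by the skill gap plus noise."""
    margin = agent_skill - opponent_skill + rng.normal(scale=0.5)
    if margin > 0.25:
        return "win"
    if margin < -0.25:
        return "loss"
    return "draw"

def adapt_agent(skill, results):
    """Stand-in for adaptation from limited experience: nudge skill up after losses."""
    return skill + 0.15 * results.count("loss") / len(results)

agent_skill, round_outcomes = 1.0, []
for k in range(ROUNDS):
    opponent_skill = 0.5 + 0.1 * k                   # opponent version k: rising competence
    results = [play_episode(agent_skill, opponent_skill) for _ in range(EPISODES_PER_ROUND)]
    wins, losses = results.count("win"), results.count("loss")
    round_outcomes.append("win" if wins > losses else "loss" if losses > wins else "draw")
    # The agent may only adapt *between* rounds, using the few episodes it just played.
    agent_skill = adapt_agent(agent_skill, results)

print(round_outcomes)
```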


6. Experiments

Nonstationary locomotion

[Figure 1: three panels (back two legs, middle two legs, front two legs) plotting total episodic reward against consecutive episodes 1–7. Legend (policy + adaptation method): MLP, MLP + PPO-tracking, MLP + meta-updates, LSTM, LSTM + PPO-tracking, LSTM + meta-updates, RL².]

Figure 1: Episodic rewards for 7 consecutive episodes in 3 held out nonstationary locomotion environments.

Multi-agent competition

[Figure 2: three panels (opponent: Ant, Bug, Spider) plotting the Ant agent's win rate against consecutive rounds 0–100. Legend: RL², LSTM + PPO-tracking, LSTM + meta-updates.]

Figure 2: Win rates for different adaptation strategies in iterated games with opponents pretrained via self-play. The competence of the opponents increased from round to round, based on precomputed policies from different stages of self-play.

[Figure 3: Top panel: proportion (%) of Ants, Bugs, and Spiders vs. creature generation 0–10, by policy + adaptation method (MLP, MLP + PPO-tracking, MLP + meta-updates, LSTM, LSTM + PPO-tracking, LSTM + meta-updates, RL²). Bottom panels: TrueSkill of MLP and LSTM agents within the Ant, Bug, and Spider sub-populations, under no adaptation, PPO-tracking, meta-updates, and RL².]

Figure 3: Top: Evolution of a population of 1050 agents. Bottom: TrueSkill of the top-performing agents in the population.

Discussion

Limitations:

• Gradient-based adaptation requires estimating second-order derivatives. This is computation- and sample-inefficient (needs large batches).

• Unlikely to work with sparse rewards.

Future work:

• Adaptation + model-based RL.

• Adaptation + curriculum learning/generation.

• Multi-step adaptation (i.e., planning with tasks); better use of historical information.

References
[1] Ring '94, '97; Mitchell et al. '15.
[2] Finn et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML 2017.
[3] Grant et al. Recasting Gradient-Based Meta-Learning as Hierarchical Bayes. ICLR 2018.
[4] Foerster et al. DiCE: The Infinitely Differentiable Monte Carlo Estimator. ICLR Workshop 2018.

Acknowledgements
Harri Edwards, Jakob Foerster, Aditya Grover, Aravind Rajeswaran, Vikash Kumar, Yuhuai Wu, Carlos Florensa, anonymous reviewers, and the OpenAI team.

Videos & highlights:

International Conference on Learning Representations (ICLR) 2018, Vancouver, Canada
1 Carnegie Mellon University, 2 UMass Amherst, 3 UC Berkeley, 4 OpenAI
Correspondence: [email protected]