Agent Activation Regimes
Rob Axtell
Brookings Institution, Santa Fe Institute, George Mason University


Page 1

Agent Activation Regimes

Rob Axtell
Brookings Institution

Santa Fe Institute

George Mason University

Page 2

Agents as Simulation Systems

• Discrete Event Simulation
 – Population of objects
 – Objects ‘wired’ together in some fashion
 – Objects act/interact upon ‘events’
   • External events
   • Internal events
 – Objects not (usually) autonomous

• Discrete Time Simulation
 – Often a discretization of a continuous-time system

Agent models have features of both

Page 3

Who Interacts with Whom?

• Need to specify how agents are activated: serial or parallel?
 – Serial: uniform, random, Poisson clock
 – Parallel: synchronous or asynchronous?

• Need to specify the graph of interaction

• If data available, use it!

In what follows, assume a population of A agents

Page 4

Review of terms…

• Serial: Agents act one at a time

• Parallel:
 – Synchronous: All agents act in lock-step, with the previous period’s state information (e.g., CAs)
 – Partially asynchronous: Agents act in parallel and communicate as possible (delays bounded)
 – Fully asynchronous: Agents act in parallel without any guarantees on delays


Page 6

Nowak and May vs. Huberman and Glance, I

• Context: Early ‘90s, microcomputer color graphics just possible, spatial games a new idea

• Nowak and May in Nature: Ostensibly showed that a spatially-extended PD game could support large-scale cooperation

• Theory? Screen snapshot!

Page 7

Nowak and May

Red and yellow are defectors, green and blue are cooperators

Page 8

Nowak and May vs. Huberman and Glance, II

• Huberman and Glance (PNAS): This result is an artifact of synchronous updating in the model

Page 9

Nowak and May vs. Huberman and Glance, III

• Nowak and May responded that synchronization was common in biological systems

• Huberman and Glance answered that this rationale does not apply to human social systems

Page 10

Uniform Activation: Idea

• A period is defined by all A agents being activated exactly once
• Feature: No agent inactive
• Problem: Calling agents in the same order could create artifacts
• Fix: Randomize the order of agent activation
• Cost: Expensive to randomize?
• Unknown: How much randomization is OK?
• Examples: Sugarscape, many early agent models

Page 11

Uniform Activation: Implementation, I

[Diagram: an agent array in system memory; each of Agents 1 through A holds state 1, state 2, …, behavior1(), behavior2()]

Page 12

Uniform Activation: Implementation, I

[Diagram: the same agent array in system memory]

Activate the population starting at agent 1, sequentially.
Problem: Agent 1 always gets to move first.

Page 13

Uniform Activation: Implementation, I

[Diagram: the same agent array in system memory]

Pick a random starting point.
Problem: Agent i always moves before agent i+1.

Page 14

Uniform Activation: Implementation, I

[Diagram: the same agent array in system memory]

Pick a random starting point AND a random direction.
Problem: There is still correlation between i and i+1.

Page 15

Uniform Activation: Implementation, I

[Diagram: the same agent array in system memory]

Solution: Array/list randomization, together with a random starting point and a random direction.
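The shuffled-order scheme above can be sketched in Python; names here are illustrative, and `random.shuffle` performs an in-place Fisher-Yates randomization:

```python
import random

def run_uniform(agents, periods, step, rng=random):
    """Uniform activation: every agent acts exactly once per period,
    in a freshly randomized order each period (removes call-order artifacts)."""
    order = list(agents)
    for _ in range(periods):
        rng.shuffle(order)        # in-place Fisher-Yates shuffle, O(A)
        for agent in order:
            step(agent)           # each agent activated exactly once

# Illustrative check: every agent is activated exactly `periods` times
counts = {a: 0 for a in range(5)}
run_uniform(range(5), 10, lambda a: counts.__setitem__(a, counts[a] + 1))
```

Whatever the shuffle, the defining property of uniform activation holds: each agent's activation count equals the number of periods.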

Page 16

Uniform Activation: Implementation, II—Efficiency

[Diagram: the agent array in system memory, plus a separate array of pointers to Agent 1, Agent 2, Agent 3, Agent 4, …, Agent A; randomizing the small pointer array is cheaper than moving the agent objects themselves]

Page 17

Uniform Activation: Implementation, II—Efficiency

[Diagram: the same structure after shuffling; the pointer array now reads Agent 1, Agent 4, Agent 3, Agent 2, …, Agent A, while the agent objects stay in place in system memory]

Page 18

Uniform Activation: Implementation, III

How Much ‘Shuffling’ to Do?

[Diagram: pointer array for Agents 1-6]

Case 1: Neighbors swapped.
Result: 4 agents have 1 new neighbor each.

Page 19

Uniform Activation: Implementation, III

How Much ‘Shuffling’ to Do?

[Diagram: pointer array for Agents 1-6]

Case 2: Agents with a common neighbor swapped.
Result: 2 agents have 1 new neighbor each; 2 (moving) agents have 2 new neighbors each.

Page 20

Uniform Activation: Implementation, III

How Much ‘Shuffling’ to Do?

[Diagram: pointer array for Agents 1-6]

Case 3: Agents distant from one another swapped.
Result: 4 agents have 1 new neighbor each; 2 (moving) agents have 2 new neighbors each.

Page 21

Uniform Activation: Implementation, III—Shuffling

[Plot: percentage of agents with ≥1 new neighbor, and with 2 new neighbors, vs. percentage of the array rearranged]

To give 1/2 of the agents 1 new neighbor, shuffle ~25% of agents.
To give 1/2 of the agents 2 new neighbors, shuffle ~50% of agents.

Page 22

Random Activation: Idea

• Agents are selected to be active with uniform probability
• A period is defined by A agents being activated
• Feature: Super efficient to implement
• Cost: Not all agents are active each period
• Unknown: When is this different from uniform activation?
• Examples: Zero-intelligence traders, bilateral exchange, Axelrod culture model, many others
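A minimal sketch of random activation (illustrative names): each ‘click’ draws one agent uniformly at random, and a period is A clicks, so the total number of activations per period is fixed even though individual counts vary.

```python
import random

def run_random(agents, periods, step, rng=random):
    """Random activation: each 'click' picks one agent uniformly at
    random; a period is A clicks, so some agents act several times
    per period and some not at all."""
    A = len(agents)
    for _ in range(periods):
        for _ in range(A):
            step(rng.choice(agents))

random.seed(0)
counts = {a: 0 for a in range(100)}
run_random(list(range(100)), 50,
           lambda a: counts.__setitem__(a, counts[a] + 1))
total = sum(counts.values())   # always A * periods = 5000
```

Note there is no shuffling or bookkeeping at all, which is why this regime is so cheap to implement.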

Page 23

Random Activation: Analysis, I

At each ‘click’ the probability that a specific agent is activated is 1/A. Over K activations—K/A periods—the number of activations of a given agent is Binomial(K, 1/A), so the probability it is activated exactly i times is

Pr[i activations] = C(K, i) (1/A)^i ((A−1)/A)^(K−i)

The mean number of activations is K/A.
The variance in the number of activations across the agent population is K(A−1)/A² ≈ K/A, the mean.
The skewness is (A−2)/√(K(A−1)).
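The binomial claims can be checked numerically; a small sketch with arbitrary parameter values:

```python
from math import comb

def pr_activations(i, K, A):
    # Pr[a given agent is activated exactly i times over K clicks]:
    # Binomial(K, 1/A)
    return comb(K, i) * (1 / A) ** i * ((A - 1) / A) ** (K - i)

K, A = 1000, 100   # arbitrary values for the check
total = sum(pr_activations(i, K, A) for i in range(K + 1))
mean = sum(i * pr_activations(i, K, A) for i in range(K + 1))
# total ≈ 1 (pmf sums to one), mean ≈ K/A = 10
```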

Page 24

Random Activation: Analysis, II

Call w the number of ‘clicks’ an agent has to wait to be activated, the waiting time. Its distribution is geometric:

Pr[w = K] = (1/A) ((A−1)/A)^K

The expected waiting time is A−1 and the variance is A(A−1).

In terms of time: say that T periods have gone by, so K = TA.
The mean number of activations is T, as we expect.
The variance is now T(A−1)/A ≈ T.
The variance-to-mean ratio is (A−1)/A ≈ 1.
The skewness becomes (A−2)/√(TA(A−1)) ≈ 1/√T.
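The geometric waiting-time moments can be checked by summing the (truncated) series; A here is an arbitrary value for the check:

```python
A = 20                                   # population size, arbitrary
p = 1 / A                                # per-click activation probability
ks = range(5000)                         # truncate the infinite series
pmf = [p * (1 - p) ** k for k in ks]     # Pr[w = k]: k clicks waited
total = sum(pmf)
mean_w = sum(k * q for k, q in zip(ks, pmf))
var_w = sum((k - mean_w) ** 2 * q for k, q in zip(ks, pmf))
# total ≈ 1, mean_w ≈ A - 1 = 19, var_w ≈ A(A - 1) = 380
```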

Page 25

Random Activation: Analysis, III

Generalization: k agents active at each ‘click’. One period still consists of A clicks.
Pr[a specific agent is active at any click] = k/A.
Pr[agent activated exactly i times over T periods (TA clicks)]:

Pr[i activations] = C(TA, i) (k/A)^i ((A−k)/A)^(TA−i)

The mean number of activations is kT.
The variance is kT(A−k)/A.
The variance-to-mean ratio is (A−k)/A.
The skewness is (A−2k)/√(kTA(A−k)).

Page 26

Random Activation: Example

Bilateral exchange model: Distribution of agent activations with random pairings; k = 2, A = 100, T = 1000

Page 27

Comparison of Uniform and Random Activation Regimes

• Example 1: Axelrod culture model
 – Replication with the Sugarscape model
 – Qualitative replication succeeded, quantitative replication failed
 – Converted Sugarscape agent activation from uniform to random, then quantitative replication worked!

• Example 2: Firm formation model

Page 28

Poisson Clock Activation: Idea

• Story: Each agent has an internal clock that wakes it up periodically
• A period is defined as the amount of ‘wall time’ such that A agents are active on average
• Feature: ‘True’ agent autonomy
• Disadvantage: Agents must be sorted each period
• Examples: Sugarscape reimplementation, game theory models

Page 29

Poisson Clock Activation: Implementation

• Specify at the beginning that the model will be run for T periods
• At time 0, for each agent draw ~T successive activation times as

t_{i+1} = t_i − ln(U),  U ~ Uniform(0, 1]

• Now, sort these ~AT random numbers to develop a schedule of agent activation:
 – A naïve sort scales like N²
 – Quicksort goes like N log(N)
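One way to avoid sorting the full set of draws is to merge the A per-agent clocks with a heap; a sketch under the exponential inter-activation assumption above (names illustrative):

```python
import heapq
import math
import random

def poisson_schedule(A, T, rng=random):
    """Merge A Poisson-clock agents into one time-ordered activation
    schedule over [0, T), using a heap instead of a full sort."""
    # Each clock's first tick: t_1 = -ln(U); (1 - random()) lies in (0, 1]
    heap = [(-math.log(1.0 - rng.random()), a) for a in range(A)]
    heapq.heapify(heap)
    schedule = []
    while heap:
        t, agent = heapq.heappop(heap)
        if t >= T:
            continue                      # this clock is past the horizon
        schedule.append((t, agent))       # activation event
        # Next tick of the same clock: t_{i+1} = t_i - ln(U)
        heapq.heappush(heap, (t - math.log(1.0 - rng.random()), agent))
    return schedule

random.seed(1)
events = poisson_schedule(A=50, T=100)
```

Because each pushed time exceeds the time just popped, the schedule comes out time-ordered, and on average about A events occur per unit of ‘wall time’.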

Page 30

Poisson Clock Activation: Analysis

Over T periods, the mean number of activations per agent is T.
The variance is also T.
The skewness is 1/√T and the excess kurtosis is 1/T (each equal to 1 for a single period).

Now the number of agents, n, active each period is a random variable having pmf

f(n; A) = e^(−A) A^n / n!

which assumes a nearly Gaussian shape for large A.

Page 31

Comparison of Uniform and Poisson Activation Regimes

• Sexual reproduction runs of Sugarscape can yield large-amplitude fluctuations

• Computer scientists reimplementing Sugarscape attempted to replicate this finding

• Results negative, i.e., not robust to activation regime

Page 32

Agent Activation Intercomparison

Regime   | Summary                                         | Advantage             | Disadvantage
---------|-------------------------------------------------|-----------------------|--------------------------------------
Uniform  | Every agent active every period                 | Simple                | Requires randomization of call order
Random   | Fixed # of agents active, some more than others | Fast                  | Least credible behaviorally
Poisson  | Variable # active, some more active than others | Gives agents autonomy | Scheduling the agents is costly

Page 33

Preferential Activation

• What if agent activation were not so egalitarian?

• What if agents could use their resources to ‘buy’ activations?

• Could successful agents gain further (positive feedback)?

• Perhaps a firm can be thought of in this way

• No definitive results but clearly this matters

Page 34

Lessons

• Activation regime may matter, especially for the quantitative character of your results

• Dr. Pangloss world: the robustness of each result would be tested with each activation regime
 – Easy in Ascape, MASON, RePast
 – Not easy in NetLogo

Page 35

Does Agent Activation Regime Always Matter?

• Gacs [1997]: gives technical requirements under which updating does not matter

• Istrate [forthcoming, Games and Economic Behavior]: When models are formally ergodic, the asymptotic states are shown to be independent of agent activation schemes

Page 36

How to Activate Agents with Many Rules?

• So far, agents have had only 1 behavior

• Now, agents have multiple behaviors, e.g., movement, gathering, trading, procreation

• The previous problem is now intra-agent:
 – Uniform activation
 – Random activation
 – Poisson clock activation

Agent i: rule A rule B rule C

Page 37

Agents with Multiple Rules

• In Ascape and other agent-modeling platforms, there are software switches to either
 – Execute all of an agent’s rules when the agent is activated

Agent k: rule A rule B rule C

Agent j: rule A rule B rule C

Agent i: rule A rule B rule C

Page 38

Agents with Multiple Rules

• In Ascape and other agent-modeling platforms, there are software switches to either
 – Execute all of an agent’s rules when the agent is activated
 – Execute all agents on a particular rule, then repeat for the other rules

Agent k: rule A rule B rule C

Agent j: rule A rule B rule C

Agent i: rule A rule B rule C
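The two switches amount to two loop orders; a sketch with illustrative names:

```python
def run_by_agent(agents, rules):
    # Switch 1: execute all of an agent's rules when it is activated
    for agent in agents:
        for rule in rules:
            rule(agent)

def run_by_rule(agents, rules):
    # Switch 2: execute all agents on one rule, then repeat per rule
    for rule in rules:
        for agent in agents:
            rule(agent)

trace = []
rules = [lambda a: trace.append(("A", a)),
         lambda a: trace.append(("B", a))]
run_by_agent([1, 2], rules)
order_by_agent = list(trace)
trace.clear()
run_by_rule([1, 2], rules)
order_by_rule = list(trace)
```

With two agents and two rules, the first switch yields A1, B1, A2, B2 while the second yields A1, A2, B1, B2; which interleaving is right is a modeling choice, not a technical one.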


Page 47

Reality: Each Agent on Own ‘Thread’

• Serial execution is getting ‘messy’, so why not just move to asynchronous parallel execution?
 – Now we need rules for collisions:
   • Avoidance: a collision is imminent, so flip a coin, say
   • Adjudication: a collision has happened, so resolve it
 – Social institutions ‘solve’ these problems in reality
 – Debugging can be difficult

• Multi-threading: debugging very difficult