TRANSCRIPT
Distributed Constraint Optimization for the Internet-of-Things
Pierre Rust, Gauthier Picard
Orange [email protected]
MINES Saint-Étienne, LaHC UMR CNRS 5516
Internet-of-Things (IoT) and its Control
Huge (marketing?) trend today: 25 billion connected objects in 2020? (Gartner)
Hardware and communication are cheaper and cheaper
Constrained devices:
- limited CPU and memory resources
- limited communication capabilities

Connected things' actions should be coordinated
Current approach: centralizing decisions. Issues:
- Communications
- Resilience
- Scalability
- Cost
Pierre Rust, Gauthier Picard Distributed Constraint Optimization for the Internet-of-Things 2
Distributed Coordination and Decision Making
Autonomous and spontaneous
Coordinating objects to achieve objectives. Coordination is:
- Decentralized
- Spontaneous
- Autonomous
No central point
Self-adaptation to environmental changes
Self-repair in case of one component failure
About decisions
xi? s.t. "I'm happy with xi"
xj? s.t. "agent i is fine with xj"
How can agents autonomously make their decisions in a coordinated way, without external control?
⇒ Decentralized decision making
Agents have to coordinate to perform the best actions. Agents form a team → best actions for the team.
Application Domains
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods (DPOP, Max-Sum, DSA, MGM)
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
Some Installation
Install VirtualBox
Import the pyDCOP Virtual Machine (http://bit.ly/pyDCOP)
- It's a Debian image with everything preinstalled: python3, pyDCOP, matplotlib, glpk, etc.
Alternatively, follow https://pydcop.readthedocs.io/en/latest/installation.html
Tutorials:
1. https://pydcop.readthedocs.io/en/latest/tutorials/getting_started.html
2. https://pydcop.readthedocs.io/en/latest/tutorials/analysing_results.html
Some Installation: Virtual Machine Setup
Before starting the VM:
- "Bridged adapter" mode
- Select the wifi network adapter
- Reset the MAC address

Then:
- Start the VM
- Login: dcop / pyDCOP
- Launch a terminal
- Note down the IP with `ip address`
DCOP
Distributed Constraint Optimization Problem
Definition (DCOP). A DCOP is a tuple ⟨A, X, D, C, µ⟩, where:
- A = {a1, ..., a|A|} is a set of agents
- X = {x1, ..., xn} is a set of variables
- D = {Dx1, ..., Dxn} is a set of finite domains for the variables xi
- C = {f1, ..., fm} is a set of soft constraints, where each fi defines a cost in ℝ ∪ {∞} for each combination of assignments to a subset of variables
- µ is a function mapping variables to their associated agent
Definition (Solution). A solution to the DCOP is an assignment A to all variables that minimizes Σi fi(A).
1. Some contents taken from OPTMAS 2011 and OPTMAS-DCR 2014.
DCOP: Example and Graphical Representation
[Figures: three graphical representations of the same problem over variables x1..x4, held by agents a1..a4: (a) constraint graph, (b) another arrangement of the same graph, (c) factor graph with factors f123 and f24.]
Objective Function

F(A) = Σ_{xi,xj ∈ X} fij, where fij = (xi + xj + 1) mod 3

In figure (a):
F({(x1, 0), (x2, 0), (x3, 0), (x4, 0)}) = 4
F({(x1, 1), (x2, 1), (x3, 1), (x4, 1)}) = 0
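These two values can be checked with a few lines of Python. The edge set below is an assumption read off the figures: the clique x1, x2, x3 behind factor f123, plus the edge x2-x4 behind f24.

```python
# Objective F(A) = sum over constrained pairs of f_ij,
# with f_ij = (x_i + x_j + 1) mod 3.
# Assumed edge set: triangle x1-x2-x3 plus edge x2-x4.
EDGES = [(1, 2), (1, 3), (2, 3), (2, 4)]

def f(xi, xj):
    return (xi + xj + 1) % 3

def F(assignment):
    """Total cost of an assignment {variable_index: value}."""
    return sum(f(assignment[i], assignment[j]) for i, j in EDGES)

print(F({1: 0, 2: 0, 3: 0, 4: 0}))  # 4: each of the 4 edges costs 1
print(F({1: 1, 2: 1, 3: 1, 4: 1}))  # 0: (1 + 1 + 1) mod 3 = 0 on every edge
```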
But first: how to solve DCOPs?
DCOP Algorithms
Figure 5: Classical DCOP Algorithm Taxonomy.
- Complete algorithms:
  - Partially decentralized: OptAPO, PC-DPOP
  - Fully decentralized, synchronous: SyncBB (search), DPOP and variants (inference)
  - Fully decentralized, asynchronous: AFB, ADOPT and variants (search)
- Incomplete algorithms (fully decentralized):
  - Search: DSA, MGM, Region Optimal
  - Inference: Max-Sum and variants
  - Sampling: D-Gibbs
3.3. Algorithms
The field of classical DCOPs is mature and a number of different algorithms have been proposed. DCOP algorithms can be classified as being either complete or incomplete, based on whether they can guarantee the optimal solution or they trade optimality for smaller execution times, producing approximated solutions. In addition, each of these classes can be categorized into several groups, such as: (1) partially or fully decentralized, depending on the degree of locality exploited by the algorithms; and (2) synchronous or asynchronous, based on the way local information is updated. Finally, the resolution process adopted by each algorithm can be classified in three categories [16]:
- Search-based methods, which are based on the use of search techniques to explore the space of possible solutions. These techniques are often derived from corresponding search techniques developed for centralized AI search problems, such as best-first search and depth-first search.
- Inference-based methods, which are inspired from dynamic programming and belief propagation techniques. These techniques allow agents to exploit the structure of the constraint graph to aggregate rewards from their neighbors, effectively reducing the problem size at each step of the algorithm.
- Sampling-based methods, which are incomplete approaches that sample the search space to approximate a function (usually a probability distribution) as a product of statistical inference.
Figure 5 illustrates a taxonomy of classical DCOP algorithms. In the following subsections, we briefly describe some representative complete and incomplete algorithms of each of the classes introduced above. A detailed description of the DCOP algorithms is out of the scope of this document. We refer the interested readers to the original articles that introduce each algorithm.
Throughout this document, we will often refer to the following notation when discussing the complexity of the algorithms: the size of the largest domain is denoted by d = max_{Di ∈ D} |Di|, and w* refers to the induced width of the pseudo-tree.
COMPLETE ALGORITHMS
SynchBB [17]. Synchronous Branch-and-Bound (SynchBB) is a complete, synchronous, search-based algorithm that can be considered as a distributed version of a branch-and-bound algorithm. It uses a complete ordering of the agents in order to extend a Current Partial Assignment (CPA) via a synchronous communication process. The CPA holds the assignments of all the variables controlled by all the visited agents, and, in addition, functions as a mechanism to propagate bound information. The algorithm prunes those parts of the search space whose solution quality is sub-optimal, by exploiting the bounds that are updated at each step of the algorithm. SynchBB agents' space requirement and maximum message size are in O(n), while they require, in the worst case, O(d^m) operations. The network load is also in O(d^m).
AFB [18]. Asynchronous Forward Bounding (AFB) is a complete, asynchronous, search-based algorithm that can be considered as the asynchronous version of SynchBB. In this algorithm, agents communicate their reward estimates,
[Fioretto et al., 2018]
Some issues related to IoT
Internet-of-Things is a physical network infrastructure
[Figures: the constraint graph, its mapping to agents, and the factor graph (factors f123, f24) over variables x1..x4, placed on the physical network of things a1..a4.]
Things are interconnected and very heterogeneous.
Where to place computations (variables and constraints/factors)?
This is a decision problem by itself, constrained by the things' capacities (memory, communication, CPU, ...).
Some issues related to IoT (cont.)
Internet-of-Things is an open system
[Figures: the same graphs, now with things appearing and disappearing.]
How to cope with things’ (dis)appearance?
Disappearance: one solution is to replicate computations.
- Where are replicas placed?
- Which replica to activate after a disappearance?
Newcoming things: an opportunity to load-balance, but...
- Which computations to move?
This tutorial will thus focus on...
Using DCOPs to model IoT applicative problems
Modeling the specific problem of distributing decisions/computations
All this will be illustrated using the pyDCOP framework
Hands on PyDCOP I
Files for the tutorials are in /home/dcop/tutorials.
$ cd /home/dcop/tutorials/hands-on_1
Hands on PyDCOP I: Graph Coloring DCOP
[Figures: (a) constraint graph over v1, v2, v3; (b) factor graph with preference factors p1, p2, p3 and difference factors d1,2 and d2,3.]
Objective: minimize
Domain: 2 colors, R and G
Variables: v1, v2, v3
Constraints: neighbors must have different colors + preferences
Agents: 3 agents
Yaml representation
Hands on PyDCOP I: pyDCOP yaml format

graph_coloring.yaml:

name: graph coloring
objective: min

domains:
  colors:
    values: [R, G]

variables:
  v1:
    domain: colors
  v2:
    domain: colors
  v3:
    domain: colors

constraints:
  pref_1:
    type: extensional
    variables: v1
    values:
      -0.1: R
      0.1: G
  pref_2:
    type: extensional
    variables: v2
    values:
      -0.1: G
      0.1: R
  pref_3:
    type: extensional
    variables: v3
    values:
      -0.1: G
      0.1: R
  diff_1_2:
    type: intention
    function: 10 if v1 == v2 else 0
  diff_2_3:
    type: intention
    function: 10 if v3 == v2 else 0

agents: [a1, a2, a3, a4, a5]
Hands on PyDCOP I: Solving the Graph Coloring DCOP
Command:
$ pydcop solve --algo dpop graph_coloring.yaml
Output:
...
"assignment": {
  "v1": "R",
  "v2": "G",
  "v3": "R"
},
"cost": -0.1,
...
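The reported cost can be recomputed by hand from the yaml model. The sketch below hard-codes the preference and difference constraints; the dictionaries and the cost function are illustrative, not part of the pyDCOP API.

```python
# Recompute the cost of the assignment reported by DPOP,
# using the constraints from graph_coloring.yaml.
prefs = {"v1": {"R": -0.1, "G": 0.1},
         "v2": {"G": -0.1, "R": 0.1},
         "v3": {"G": -0.1, "R": 0.1}}

def cost(assign):
    c = sum(prefs[v][val] for v, val in assign.items())
    c += 10 if assign["v1"] == assign["v2"] else 0   # diff_1_2
    c += 10 if assign["v3"] == assign["v2"] else 0   # diff_2_3
    return c

print(round(cost({"v1": "R", "v2": "G", "v3": "R"}), 3))  # -0.1
```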
With other algorithms:
$ pydcop --timeout 2 solve --algo dsa graph_coloring.yaml
$ pydcop solve --algo mgm --algo_params stop_cycle:20 \
    graph_coloring.yaml
Hands on PyDCOP I: Results
Full results:

{
  "agt_metrics": {...},
  "assignment": {"v1": "R", "v2": "G", "v3": "R"},
  "cost": -0.1,
  "cycle": 20,
  "msg_count": 158,
  "msg_size": 158,
  "status": "FINISHED",
  "time": 0.03201029699994251,
  "violation": 0
}
Look at the results from mgm and dsa, compared to dpop's results!
Hands on PyDCOP I: Logs

Simple: use -v 0..3
$ pydcop -v 3 solve --algo dsa --algo_params stop_cycle:20 graph_coloring.yaml

Precise: use --log <log.conf>
$ pydcop --log log.conf solve --algo dsa --algo_params stop_cycle:10 \
    graph_coloring.yaml
Now, look at algo.log
Hands on PyDCOP I: Run-time metrics
periodic: "--collect_on period --period <p>"
$ pydcop --log log.conf -t 10 solve \
    --collect_on period --period 1 --run_metric ./metrics.csv \
    --algo dsa graph_coloring.yaml
cycle: "--collect_on cycle_change"
Only supported with synchronous algorithms!
$ pydcop solve --algo mgm --algo_params stop_cycle:20 \
    --collect_on cycle_change --run_metric ./metrics.csv \
    graph_coloring_50.yaml
value: "--collect_on value_change"
$ pydcop -t 5 solve --algo mgm --collect_on value_change \
    --run_metric ./metrics_on_value.csv \
    graph_coloring_50.yaml
Hands on PyDCOP I: Run-time metrics
With a bigger graph coloring problem:
$ pydcop solve --algo mgm --algo_params stop_cycle:20 \
    --collect_on cycle_change \
    --run_metric ./metrics.csv \
    graph_coloring_50.yaml
Plotting with matplotlib
$ python3 plot_cost.py ./metrics.csv
Do the same thing with DSA, look at the result. What do you see?
Hands on PyDCOP I: Run-time metrics
[Plot of cost over cycles: MGM (1720) and DSA (1647), both with 30 cycles.]
Hands on PyDCOP I: Web UI

Web-based agent graphical interface. Run the web application:
$ cd ~/pydcop-ui
$ python3 -m http.server

Launch a browser on http://127.0.0.1:8000
Solve the DCOP with the option --uiport <port> (also, use --delay <delay>):
$ pydcop -v 3 solve -a mgm -d adhoc --delay 2 --uiport 10000 \
    ./graph_coloring_3agts_10vars.yaml

Each agent exposes a websocket; the web application connects to these websockets and displays the agents' state.
Distributed Pseudotree Optimization Procedure (DPOP) [Petcu and Faltings, 2005]

A 3-phase distributed algorithm:
1. DFS tree construction; messages: token passing
2. Util phase, from leaves to root; messages: util (child → parent: the constraint table with the child's variable projected out)
3. Value phase, from root to leaves; messages: value (parent → children ∪ pseudochildren: the parent's value)
DFS Tree Phase

Distributed DFS graph traversal, using a token, IDs, and neighbors(X):
1. X owns the token: it adds its own ID and sends the token in turn to each of its neighbors, which become its children.
2. Y receives the token from X: it marks X as visited. The first time Y receives the token, parent(Y) = X. Other IDs in the token that are also in neighbors(Y) are pseudoparents of Y. If Y receives the token from a neighbor W to which it never sent the token, W is a pseudochild.
3. When all of neighbors(X) are visited, X removes its ID from the token and sends the token back to parent(X).
A node is selected as root, which starts the traversal. When all neighbors of the root are visited, the DFS traversal ends.
DFS Tree Phase: Example

[Figure: DFS traversal of the constraint graph over x1..x4, yielding the pseudo-tree x1 → x2 → x3 → x4.]
Root: x1. Token [x1]: x1 becomes parent of x2.
Token [x1, x2]: x2 becomes parent of x3; x1 is a pseudoparent of x3.
Token [x1, x2, x3]: x3 becomes parent of x4; x3 is a pseudoparent of x1.
Util Phase

Agent X:
- receives from each child Yi a cost function C(Yi)
- combines (adds, joins) all these cost functions with the cost functions it shares with parent(X) and pseudoparents(X)
- projects X out of the resulting cost function, and sends the result to parent(X)
The phase proceeds from the leaves to the root.
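The combine-and-project step can be sketched with plain dictionaries; this mirrors the worked example that follows, but the table representation and names are assumptions, not pyDCOP's internals.

```python
from itertools import product

# One identical binary cost table for each of X's relations
# with T (parent) and with its children Y and Z.
C = {("a", "a"): 1, ("a", "b"): 2, ("b", "a"): 2, ("b", "b"): 0}
DOM = ["a", "b"]

# Join: sum the applicable costs over all value combinations.
joined = {(x, y, z, t): C[(x, y)] + C[(x, z)] + C[(x, t)]
          for x, y, z, t in product(DOM, repeat=4)}

# Project X out: for each (y, z, t), keep the minimum cost over x.
projected = {}
for (x, y, z, t), cost in joined.items():
    projected[(y, z, t)] = min(cost, projected.get((y, z, t), float("inf")))

print(joined[("a", "a", "a", "a")])   # 3
print(projected[("b", "b", "b")])     # 0
```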
Util Phase: Example

X has parent T and children Y and Z. The three cost tables are identical:

X T | cost    X Y | cost    X Z | cost
a a | 1       a a | 1       a a | 1
a b | 2       a b | 2       a b | 2
b a | 2       b a | 2       b a | 2
b b | 0       b b | 0       b b | 0

Add (join): all value combinations; costs are the sum of the applicable costs:

X Y Z T | cost
a a a a | 3
a a a b | 4
a a b a | 4
a a b b | 5
a b a a | 4
a b a b | 5
a b b a | 5
a b b b | 6
b a a a | 6
b a a b | 4
b a b a | 4
b a b b | 2
b b a a | 4
b b a b | 2
b b b a | 2
b b b b | 0

Project X out: remove X, remove duplicates, keep the minimum cost.
Value Phase

1. The root finds the value that minimizes the cost function received in the util phase, and informs its descendants (children ∪ pseudochildren).
2. Each agent waits to receive the values of its parent / pseudoparents.
3. Keeping the values of its parent/pseudoparents fixed, it finds the value that minimizes the cost function received in the util phase.
4. It informs its children/pseudochildren of this value.
This process starts at the root and ends at the leaves.
DTREE: DPOP for DCOPs without backedges

Tree: X is the root, Y is X's child, Z and W are Y's children. Every edge has the same cost table:

X Y | cost    Y Z | cost    Y W | cost
a a | 1       a a | 1       a a | 1
a b | 2       a b | 2       a b | 2
b a | 2       b a | 2       b a | 2
b b | 0       b b | 0       b b | 0

Util messages:
Z → Y: (Y=a: 1, Y=b: 0)
W → Y: (Y=a: 1, Y=b: 0)
Y → X: (X=a: 2, X=b: 0)

Value messages: X ← b; Y ← b; Z ← b, W ← b

Optimal solution:
- linear number of messages
- message size: linear
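Both phases of this DTREE run fit in a few lines of Python; a centralized sketch using the tables of the example, not pyDCOP's message-passing implementation.

```python
# Tree: X is the root, Y is X's child, Z and W are Y's children.
# Every edge shares the same cost table.
C = {("a", "a"): 1, ("a", "b"): 2, ("b", "a"): 2, ("b", "b"): 0}
DOM = ["a", "b"]

# Util phase (leaves -> root): each leaf projects itself out.
msg_Z = {y: min(C[(y, z)] for z in DOM) for y in DOM}   # {'a': 1, 'b': 0}
msg_W = {y: min(C[(y, w)] for w in DOM) for y in DOM}   # {'a': 1, 'b': 0}
# Y adds the child messages to its table with X and projects Y out.
msg_Y = {x: min(C[(x, y)] + msg_Z[y] + msg_W[y] for y in DOM) for x in DOM}
# msg_Y == {'a': 2, 'b': 0}: the Y -> X message of the slide.

# Value phase (root -> leaves): fix the parent's value, pick the argmin.
x_v = min(DOM, key=lambda x: msg_Y[x])
y_v = min(DOM, key=lambda y: C[(x_v, y)] + msg_Z[y] + msg_W[y])
z_v = min(DOM, key=lambda z: C[(y_v, z)])
w_v = min(DOM, key=lambda w: C[(y_v, w)])
print(x_v, y_v, z_v, w_v)   # b b b b
```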
DPOP for any DCOP

Same tree, plus a backedge X-Z carrying the same cost table:

X Z | cost
a a | 1
a b | 2
b a | 2
b b | 0

Because of the backedge, Z cannot project X away: its util message to Y is a table over both X and Y:

Z → Y:   Y=a  Y=b
X=a:      2    2
X=b:      2    0

W → Y: (Y=a: 1, Y=b: 0)
Y → X: (X=a: 2, X=b: 0)

Value messages: X ← b; Y ← b; Z ← b, W ← b

Optimal solution:
- linear number of messages
- message size: exponential
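The effect of the backedge on Z's message can be checked directly (a sketch, same cost-table assumption as before). Since X now appears in Z's separator, Z must keep one entry per (X, Y) pair, which is where the exponential message size comes from.

```python
# Same cost table on every edge, including the backedge X-Z.
C = {("a", "a"): 1, ("a", "b"): 2, ("b", "a"): 2, ("b", "b"): 0}
DOM = ["a", "b"]

# Z minimizes over its own value only; X and Y stay in the message.
msg_Z = {(x, y): min(C[(y, z)] + C[(x, z)] for z in DOM)
         for x in DOM for y in DOM}
for x in DOM:
    print(x, [msg_Z[(x, y)] for y in DOM])
# a [2, 2]
# b [2, 0]   -> matches the Z -> Y table on the slide
```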
GDL-based approaches

Generalized Distributive Law [Aji and McEliece, 2000]:
- Unifying framework for inference in graphical models
- Builds on basic mathematical properties of semi-rings
- Widely used in information theory, statistical physics, probabilistic models
Max-Sum:
- DCOP settings: maximise social welfare
[Excerpt shown from Aji and McEliece (2000), "The Generalized Distributive Law": Table I lists commutative semirings (including the min-sum semiring), and the text defines the "marginalize a product function" (MPF) problem over local domains and local kernels, which the GDL solves efficiently.]
Pierre Rust, Gauthier Picard Distributed Constraint Optimization for the Internet-of-Things 40
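The min-sum semiring from the excerpt above can be checked directly; a minimal sketch (the function names are ours, not from the GDL paper) verifying the two identities and the distributive law min(c + a, c + b) = c + min(a, b):

```python
import math

# Min-sum semiring: "addition" is min (identity +inf),
# "multiplication" is ordinary + (identity 0).
INF = math.inf

def semiring_add(a, b):
    return min(a, b)

def semiring_mul(a, b):
    return a + b

def check_distributivity(a, b, c):
    # c "times" (a "plus" b) == (c "times" a) "plus" (c "times" b)
    lhs = semiring_mul(c, semiring_add(a, b))
    rhs = semiring_add(semiring_mul(c, a), semiring_mul(c, b))
    return lhs == rhs

assert semiring_add(5.0, INF) == 5.0   # +inf is the identity for min
assert semiring_mul(5.0, 0.0) == 5.0   # 0 is the identity for +
assert check_distributivity(2.0, 7.0, 3.0)  # min(2+3, 7+3) == min(2,7) + 3
```

The same check with max and + (and identity −∞) gives the max-sum semiring used by the Max-Sum algorithm below.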
Max-Sum [Farinelli et al., 2008]

Agents iteratively compute local functions that depend only on the variable they control.

[Figure: graph over variables x1, x2, x3, x4, exchanging messages m1→2, m2→1, m2→3, m3→2, m3→4, m4→3, m4→1, m1→4.]

Message from x1 to x2 (F12 is the shared constraint; m4→1 carries all incoming messages except those coming from x2):

    m1→2(x2) = max_{x1} (F12(x1, x2) + m4→1(x1))

Each agent also combines all its incoming messages into a local function:

    z1(x1) = m4→1(x1) + m2→1(x1)

and chooses argmax_{x1} z1(x1).
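The message equation above is just a table computation; a minimal sketch over a binary domain (the constraint F12 and the incoming message table are invented for illustration — this is not pyDCOP's Max-Sum code):

```python
# Computing m1->2(x2) = max over x1 of ( F12(x1, x2) + m4->1(x1) )
# on a binary domain {0, 1}.
DOMAIN = [0, 1]

def F12(x1, x2):
    # Example shared constraint: reward agreeing values.
    return 1.0 if x1 == x2 else -1.0

# Incoming message from x4 to x1, as a table over x1's domain.
m_4_to_1 = {0: 0.5, 1: -0.5}

def message_1_to_2(x2):
    # Maximize out x1: combine the shared constraint with every
    # incoming message except the one coming from x2.
    return max(F12(x1, x2) + m_4_to_1[x1] for x1 in DOMAIN)

m_1_to_2 = {x2: message_1_to_2(x2) for x2 in DOMAIN}
# x2 = 0: max(1.0 + 0.5, -1.0 - 0.5) = 1.5
# x2 = 1: max(-1.0 + 0.5, 1.0 - 0.5) = 0.5
```

The z functions are built the same way, by summing all incoming message tables over the agent's own domain.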
Max-Sum on acyclic graphs

Max-sum is optimal on acyclic graphs
I Different branches are independent
I Each agent can build a correct estimation of its contribution to the global problem (z functions)

Message equations are very similar to Util messages in DPOP
I Sum messages from children and shared constraint
I Maximize out agent variable
I GDL generalizes DPOP [Vinyals et al., 2011]

[Figure: acyclic graph over x1, x2, x3, x4]

    m1→2(x2) = max_{x1} (F12(x1, x2) + m4→1(x1))
Max-Sum Performance

Good performance on loopy networks [Farinelli et al., 2008]
I When it converges, very good results
I Interesting results when there is only one cycle [Weiss, 2000]
I We could remove cycles but pay an exponential price (see DPOP)
Max-Sum for low power devices

Low overhead
I Message number/size
Asynchronous computation
I Agents take decisions whenever new messages arrive
Robust to message loss
Local Greedy Approaches

Greedy local search
I Start from a random solution
I Do local changes if the global solution improves
I Local: change the value of a subset of variables, usually one

[Figure: a small constraint graph with edge costs, starting from a random solution of cost −4; improving single-variable flips (e.g. a local move from −2 to 0) are applied one at a time until no move improves the solution.]
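The greedy loop above can be sketched in a few lines; a centralized toy version (the chain of variables and the conflict cost are our own example, not taken from the slides):

```python
import random

def local_search(variables, domain, constraints, max_steps=100, seed=0):
    # constraints: {(var_i, var_j): cost_function} on a minimization problem.
    rnd = random.Random(seed)
    assignment = {v: rnd.choice(domain) for v in variables}

    def total_cost(assign):
        return sum(c(assign[i], assign[j]) for (i, j), c in constraints.items())

    for _ in range(max_steps):
        improved = False
        for v in variables:
            best_val, best_cost = assignment[v], total_cost(assignment)
            for val in domain:  # try every value for this single variable
                candidate = dict(assignment, **{v: val})
                cost = total_cost(candidate)
                if cost < best_cost:
                    best_val, best_cost = val, cost
                    improved = True
            assignment[v] = best_val
        if not improved:  # local minimum: no single-variable move helps
            break
    return assignment, total_cost(assignment)

# Graph-coloring-style example: neighboring variables should differ.
def conflict(a, b):
    return 1 if a == b else 0

constraints = {("x1", "x2"): conflict, ("x2", "x3"): conflict, ("x3", "x4"): conflict}
solution, final_cost = local_search(["x1", "x2", "x3", "x4"], [0, 1], constraints)
```

As the next slide shows, the loop stops at a local minimum, which is not necessarily the global one.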
Local Greedy Approaches

Problems
I Local minima
I Standard solutions: Random Walk, Simulated Annealing

[Figure: a solution of cost −2 where every single-variable flip leaves the cost unchanged (−1 −1 on each candidate move); greedy moves cannot escape, only a random or temperature-driven move can.]
Distributed Local Greedy approaches

Local knowledge
Parallel execution
I A greedy local move might be harmful/useless
I Need coordination

[Figure: neighboring agents each see the same improving local move (0 −2) and execute it simultaneously; the combined moves cancel each other and the global cost stays at −4.]
Distributed Stochastic Search Algorithm (DSA) [Zhang et al., 2005]

Greedy local search with an activation probability, to mitigate issues with parallel executions
DSA-1: change the value of one variable at a time
Initialize agents with a random assignment and communicate values to neighbors
Each agent:
I Generates a random number and executes only if rnd is less than the activation probability
I When executing, changes its value to maximize local gain
I Communicates any variable change to neighbors
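One DSA-1 round can be sketched as follows; a standalone toy version (the graph, cost function and helper names are ours, and this is not the pyDCOP implementation shown later):

```python
import random

def dsa_round(assignment, neighbors, cost, domain, p=0.5, rnd=random):
    # One synchronous DSA-1 round: every agent decides against its
    # neighbors' last communicated values.
    new_assignment = dict(assignment)
    for agent, value in assignment.items():
        if rnd.random() >= p:  # activation probability check
            continue
        def local_cost(v):
            return sum(cost(v, assignment[n]) for n in neighbors[agent])
        best = min(domain, key=local_cost)
        if local_cost(best) < local_cost(value):
            new_assignment[agent] = best  # greedy move on local gain
    return new_assignment

# Tiny graph-coloring example: adjacent agents should take different values.
neighbors = {"a1": ["a2"], "a2": ["a1", "a3"], "a3": ["a2"]}
def conflict(a, b):
    return 1 if a == b else 0

assignment = {"a1": 0, "a2": 0, "a3": 0}
for _ in range(20):
    assignment = dsa_round(assignment, neighbors, conflict, [0, 1])
```

With p = 1 every agent would move every round and neighbors could oscillate forever; the activation probability p < 1 is what breaks this symmetry.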
DSA-1: Execution Example

[Figure: four agents on a cycle with edge costs −1. In each round, every agent draws rnd and checks rnd ?> 1/4; an activated agent changes its value when the local gain is positive (e.g. −2 → 0), communicates it, and the round repeats.]
DSA-1: Discussion

Extremely “cheap” (computation/communication)
Good performance in various domains
I e.g. target tracking [Fitzpatrick and Meertens, 2003; Zhang et al., 2003]
I Shows an anytime property (not guaranteed)
I Benchmarking technique for coordination

Problems
I Activation probability must be tuned [Zhang et al., 2003]
I No general rule, hard to characterise results across domains
Maximum Gain Message (MGM-1) [Maheswaran et al., 2004]

Coordinate to decide who is going to move
I Compute and exchange possible gains
I Agent with maximum (positive) gain executes

Analysis
I Empirically, similar to DSA
I More communication (but still linear)
I No threshold to set
I Guaranteed to be monotonic (anytime behavior)
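The gain-exchange round can be sketched like so; again a standalone toy version with invented helpers (ties are broken by agent name so that two neighbors never move together, one common way to implement the rule):

```python
def mgm_round(assignment, neighbors, cost, domain):
    # Phase 1: every agent computes its best local move and the gain.
    gains, best_values = {}, {}
    for agent in assignment:
        def local_cost(v):
            return sum(cost(v, assignment[n]) for n in neighbors[agent])
        current = local_cost(assignment[agent])
        best = min(domain, key=local_cost)
        gains[agent] = current - local_cost(best)  # possible gain (>= 0)
        best_values[agent] = best

    # Phase 2: gains are exchanged; only a neighborhood maximum moves,
    # which is what makes MGM monotonic.
    new_assignment = dict(assignment)
    for agent in assignment:
        if gains[agent] > 0 and all(
            (gains[agent], agent) > (gains[n], n) for n in neighbors[agent]
        ):
            new_assignment[agent] = best_values[agent]
    return new_assignment

neighbors = {"a1": ["a2"], "a2": ["a1", "a3"], "a3": ["a2"]}
def conflict(a, b):
    return 1 if a == b else 0

assignment = {"a1": 0, "a2": 0, "a3": 0}
for _ in range(5):
    assignment = mgm_round(assignment, neighbors, conflict, [0, 1])
# a2 has the largest gain and moves first; the run settles on a conflict-free
# assignment {"a1": 0, "a2": 1, "a3": 0}.
```

Because two neighbors never move in the same round, every executed move realizes exactly the gain it announced, hence the monotonic (anytime) behavior.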
MGM-1: Example

[Figure: agents on a shared constraint (edge costs −1) each evaluate their best local move (−2 0) and compute the corresponding gain (G = −2, G = 0, G = 2); gains are exchanged between neighbors, only the agent with the maximum positive gain (G = 2) changes its value, and the process repeats until every gain is ≤ 0.]
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
Hands on PyDCOP II
Developing with pyDCOP

pyDCOP is designed to make it easy to implement new DCOP algorithms
All the infrastructure is provided:
I agents,
I messaging,
I metrics,
I etc.
Base classes and utility functions for:
I constraints,
I variables,
I domains,
I etc.
Plugin mechanism to define new algorithms for DCOP, distribution and replication.
Support for synchronous and asynchronous algorithms
Hands on PyDCOP II
Implementing a DCOP algorithm with pyDCOP

Create a new python module in pydcop.algorithms
Define a constant indicating the graph representation used by your algorithm: GRAPH_TYPE = 'constraints_hypergraph'
Define your message(s): message_type(<name>, [fields])
Create a new class:
I For asynchronous algorithms: subclass VariableComputation
I For synchronous algorithms: subclass SynchronousComputationMixin and VariableComputation

class MyComputation(SynchronousComputationMixin, VariableComputation):
    ...
Implementing a DCOP algorithm with pyDCOP
Simple DSA implementation - Synchronous

Declare your message handlers with @register(<name>)
Send messages to your neighbors using self.post_msg or self.post_to_all_neighbors
Select a new value with self.value_selection
Handle the messages of a cycle with on_new_cycle(self, messages, cycle_id) -> Optional[List]
Hands on PyDCOP II
Simple DSA implementation

One class, 3 main methods:

GRAPH_TYPE = 'constraints_hypergraph'
algo_params = []

DsaMessage = message_type("dsa_value", ["value"])


class DsaTutoComputation(SynchronousComputationMixin, VariableComputation):

    def __init__(self, computation_definition):
        ...

    def on_start(self):
        ...

    @register("dsa_value")
    def on_value_msg(self, variable_name, recv_msg, t):
        pass

    def on_new_cycle(self, messages, cycle_id) -> Optional[List]:
        ...
Hands on PyDCOP II
Simple DSA implementation - Hints

Two useful utility methods, in pydcop.dcop.relations:

def assignment_cost(
    assignment: Dict[str, Any],
    constraints: Iterable["Constraint"],
    consider_variable_cost=False,
    **kwargs,
):
    """
    Compute the cost of an assignment over a set of constraints.
    """

def find_optimal(
    variable: Variable, assignment: Dict, constraints: Iterable[Constraint], mode: str
):
    """
    Find the best values for a set of constraints under an assignment.

    Find the best values for `variable` for the set of `constraints`, given an
    assignment for all other variables these constraints depend on.
    """
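To make their contract concrete, here is a minimal standalone analogue of the two helpers, with constraints modeled as plain (scope, function) pairs rather than pyDCOP Constraint objects (an illustration of the idea, not pyDCOP's actual implementation):

```python
from typing import Any, Callable, Dict, Iterable, List, Tuple

# A constraint is a scope (list of variable names) plus a cost function.
Constraint = Tuple[List[str], Callable[..., float]]

def assignment_cost(assignment: Dict[str, Any],
                    constraints: Iterable[Constraint]) -> float:
    # Sum each constraint's cost on the values its scope takes in `assignment`.
    return sum(f(*(assignment[v] for v in scope)) for scope, f in constraints)

def find_optimal(variable: str, domain: List[Any], assignment: Dict[str, Any],
                 constraints: Iterable[Constraint], mode: str = "min"):
    # Evaluate every value of `variable`, all other variables being fixed.
    constraints = list(constraints)
    costs = {}
    for val in domain:
        candidate = dict(assignment, **{variable: val})
        costs[val] = assignment_cost(candidate, constraints)
    best = min(costs.values()) if mode == "min" else max(costs.values())
    return [v for v, c in costs.items() if c == best], best

# Example: one "different values" constraint between x1 and x2.
diff = (["x1", "x2"], lambda a, b: 0.0 if a != b else 1.0)
values, cost = find_optimal("x1", [0, 1], {"x2": 0}, [diff])
# x1 = 1 avoids the conflict: values == [1], cost == 0.0
```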
Hands on PyDCOP II
Simple DSA implementation - Solution

Creating the computation instance.
A computation_definition contains: constraints, variable, neighbors, parameters, etc.

class DsaTutoComputation(VariableComputation):

    def __init__(self, variable, constraints, computation_definition):
        super().__init__(computation_definition.node.variable,
                         computation_definition)
        self.constraints = computation_definition.node.constraints
Hands on PyDCOP II
Simple DSA implementation - Solution

On startup, select a value at random and send it to all neighbors.

def on_start(self):
    self.random_value_selection()
    self.post_to_all_neighbors(DsaMessage(self.current_value))
Hands on PyDCOP II
Simple DSA implementation - Solution

Receiving values from our neighbors, for the current cycle.
DSA is a synchronous algorithm!

def on_new_cycle(self, messages, cycle_id) -> Optional[List]:
    assignment = {self.variable.name: self.current_value}
    for sender, (message, t) in messages.items():
        assignment[sender] = message.value

    current_cost = assignment_cost(assignment, self.constraints)
    arg_min, min_cost = find_optimal(
        self.variable, assignment, self.constraints, self.mode
    )

    if current_cost - min_cost > 0 and 0.5 > random.random():
        self.value_selection(arg_min[0])

    self.post_to_all_neighbors(DsaMessage(self.current_value))
Hands on PyDCOP II
Simple DSA implementation

Very simple, but fully functional, DSA implementation
Look at dsa.py for a full implementation with the algorithm's parameters: variant, thresholds, cycle stop, etc.
Look at adsa.py for an asynchronous implementation of DSA.

Hands on PyDCOP II
Simple DSA implementation

We can now use this new algorithm directly through the command line interface:

$ pydcop --log log.conf -t 20 solve --algo dsatuto \
      --collect_on value_change \
      --run_metric ./metrics_tuto.csv \
      graph_coloring_50.yaml

Of course, it also works with the metrics, web-ui, etc.
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
SECP model
Smart Environment Configuration Problem [Rust et al., 2016]

Example of applying DCOPs to a "real" problem
Coordinate objects in the building
Model
I objects
I relations between objects and environment
I user objectives and requirements

Formulate the problem as an optimization problem
SECP model
Smart Environment Configuration Problem [Rust et al., 2016]

Focus on smart lighting use cases
Objects: anything that can produce light: light bulbs, windows with rolling shutters, etc.
User preferences: having a predefined luminosity level in a room, under some conditions
Energy efficiency

Linking objects and user preferences:
How to model the luminosity in a room? A variable
How to model the dependency between the light sources and the luminosity? A function / constraint
SECP model
Example application to ambient intelligence scenario

Actuators
I Connected light bulbs, TV, rolling shutters, ...
Sensors
I Presence detector, luminosity sensor, etc.
Physical Dependency Models
I E.g. living-room light model
User Preferences
I Expressed as rules:

IF presence_living_room = 1
AND light_sensor_living_room < 60
THEN light_level_living_room ← 60
AND shutter_living_room ← 0
SECP model
Example application to ambient intelligence scenario

Actuators
I Decision variable x_i, domain D_{x_i}
I Cost function c_i : D_{x_i} → R

Sensors
I Read-only variable s_l, domain D_{s_l}

Physical Dependency Models 〈y_j, φ_j〉
I Give the expected state of the environment from the set of actuator variables influencing this model
I Variable y_j representing the expected state of the environment
I Function φ_j : ∏_{ς ∈ σ(φ_j)} D_ς → D_{y_j}

User Preferences
I Utility function u_k
I Distance from the current expected state to the target state of the environment
Formulating SECP as a DCOP

Multi-objective optimization problem

    min_{x_i ∈ ν(A)} ∑_{i ∈ A} c_i   and   max_{x_i ∈ ν(A), y_j ∈ ν(Φ)} ∑_{k ∈ R} u_k

    s.t. φ_j(x_j^1, …, x_j^{|σ(φ_j)|}) = y_j   ∀ y_j ∈ ν(Φ)

Then mono-objective DCOP formulation

    max_{x_i ∈ ν(A), y_j ∈ ν(Φ)}   ω_u ∑_{k ∈ R} u_k − ω_c ∑_{i ∈ A} c_i + ∑_{ϕ_j ∈ Φ} ϕ_j

with reformulation of hard constraints φ_j into soft ones:

    ϕ_j(x_j^1, …, x_j^{|σ(φ_j)|}, y_j) = 0 if φ_j(x_j^1, …, x_j^{|σ(φ_j)|}) = y_j, and −∞ otherwise
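The hard-to-soft reformulation is easy to render in code; a sketch with an invented two-bulb luminosity model (the mean of the two bulb levels stands in for a real physical model φ_j):

```python
import math

# The model constraint phi_j becomes a factor returning 0 when the
# environment variable y_j is consistent with the actuators, and -inf
# otherwise, so any inconsistent assignment is excluded from the
# maximization.

def phi(x1, x2):
    # Hypothetical physical dependency model: expected luminosity
    # as the mean of two light-bulb levels.
    return (x1 + x2) / 2

def soft_phi(x1, x2, y):
    # Soft factor added to the maximization objective.
    return 0.0 if phi(x1, x2) == y else -math.inf

assert soft_phi(40, 80, 60) == 0.0        # y consistent with the model
assert soft_phi(40, 80, 10) == -math.inf  # inconsistent assignment forbidden
```

Because the objective is maximized, the −∞ value makes the soft factor behave exactly like the original hard constraint.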
Formulating SECP as a DCOP
Representing a DCOP as a factor graph

[Figure: factor graph with variable nodes x1, x2, x3, y1 and s1, and factor nodes c1, c2, c3, ϕ1 and u1.]

Factor graph: bipartite graph with nodes for variables and constraints
SECP Factor Graph
in a house (without rules)

[Figure: house plan with rooms Desk, Living Room, TV, Kitchen, Entrance, Stairs, and light bulbs ld1, ld2, llv1–llv3, ltv1–ltv3, lk1–lk3, le1, ls1.]
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
Distribution of computations
Allocating computations to agents

DCOP: 〈A, X, D, C, µ〉
µ: function mapping variables to their associated agent

Why is distribution needed?

Common assumptions:
I computation ≡ variable
I each agent controls exactly one variable (bijection)
I binary constraints

Real distributed problems:
I agents must be hosted on real devices
I the set of devices might be given by the problem
I for some variables the relation with an agent is obvious, but not always
Distribution of computations

Several graph representations for the same DCOP.
Nodes in the graph = computations

[Figures: (a) simple constraint graph over x1–x4; (b) factor graph over x1–x4; (c) factor graph with explicit factor nodes f123 and f24.]
Distributing computations

Computations
I belong to an agent: "natural" link, problem characteristics
I shared decisions: modeling artifact, with no obvious agent relation (e.g. distributed meeting scheduling, SECP, etc.)
I factors, in a factor graph: not representing a decision variable

[Figure: agents a1–a4 hosting the variable computations x1–x4 and the factor computations f123 and f24.]
Distribution of computations
Allocating computations to agents

Distributing computations
I computations depend on the graph model used by the algorithm
I variables and / or factors

Distribution impacts the system characteristics
I speed,
I communication load,
I hosting costs, etc.

Computing a distribution
I heuristics
I optimal?
Distribution of computations
Optimal definition

Optimal distribution?
Problem dependent
Optimization problem: find the best distribution for your problem's criteria
Optimal distribution ≡ graph partitioning, NP-hard in general [Boulle, 2004]
Distribution of computations
Better definition

SECP distribution problem
Devices have limited memory
Communication is expensive and has limited bandwidth
Variables related to an actuator are hosted by it
Objective: minimize overall communication between agents

Optimization problem: define an ILP for it!
Binary ILP for computation distribution

x_i^m: binary variables mapping computations to agents, with α_ij^mn = x_i^m · f_j^n

    ∀ x_i ∈ X, ∑_{a_m ∈ A} x_i^m = 1   (1)

Message size between variable x_i and factor f_j: msg(i, j)

    minimize over x_i^m:   ∑_{(i,j) ∈ D} ∑_{(m,n) ∈ A²} msg(i, j) · α_ij^mn   (2)

Memory footprint of a computation: weight(x_i), and memory capacity of a device: cap(a_m)

    ∀ a_m ∈ A, ∑_{x_i ∈ D} weight(x_i) · x_i^m ≤ cap(a_m)   (3)

and a few linearization constraints
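For a feel of what this ILP optimizes, a brute-force sketch on a tiny invented instance (two agents, two variables, one factor): enumerate every mapping of computations to agents, reject those that violate the capacity constraint (3), and keep the one minimizing the communication objective (2).

```python
from itertools import product

computations = ["x1", "x2", "f12"]
agents = ["a1", "a2"]
weight = {"x1": 1, "x2": 1, "f12": 2}
cap = {"a1": 2, "a2": 3}
# Factor-graph edges, with message sizes msg(i, j).
msg = {("x1", "f12"): 10, ("x2", "f12"): 10}

def comm_cost(mapping):
    # Only messages that cross agents cost anything.
    return sum(size for (i, j), size in msg.items() if mapping[i] != mapping[j])

best = None
for placement in product(agents, repeat=len(computations)):
    mapping = dict(zip(computations, placement))
    load = {a: 0 for a in agents}
    for c, a in mapping.items():
        load[a] += weight[c]
    if any(load[a] > cap[a] for a in agents):
        continue  # violates the memory capacity constraint (3)
    cost = comm_cost(mapping)
    if best is None or cost < best[1]:
        best = (mapping, cost)

best_mapping, best_cost = best
# Co-locating f12 with one of its variables is optimal: best_cost == 10
```

The ILP reaches the same optimum without enumeration, which is why it scales (a little) further than this sketch.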
Binary ILP for computation distribution

More generic case:

Add route costs: com(i, j, m, n)

    ∀ x_i, x_j ∈ X, ∀ a_m, a_n ∈ A,
    com(i, j, m, n) = msg(i, j) · route(m, n) if (i, j) ∈ D and m ≠ n, 0 otherwise   (4)

    minimize over x_i^m:   ∑_{(i,j) ∈ D} ∑_{(m,n) ∈ A²} com(i, j, m, n) · α_ij^mn   (5)

Add hosting costs: host(a_m, x_i)

    minimize over x_i^m:   ∑_{(x_i,a_m) ∈ X×A} x_i^m · host(a_m, x_i)   (6)
Binary ILP for computation distribution

    minimize over x_i^m:
        ω_com · ∑_{(i,j) ∈ D} ∑_{(m,n) ∈ A²} com(i, j, m, n) · α_ij^mn
      + ω_host · ∑_{(x_i,a_m) ∈ X×A} x_i^m · host(a_m, x_i)   (7)

subject to

    ∀ a_m ∈ A, ∑_{x_i ∈ D} weight(x_i) · x_i^m ≤ cap(a_m)   (8)
    ∀ x_i ∈ X, ∑_{a_m ∈ A} x_i^m = 1   (9)
    ∀ (i, j) ∈ D, ∀ (m, n) ∈ A²:
        α_ij^mn ≤ x_i^m   (10)
        α_ij^mn ≤ f_j^n   (11)
        α_ij^mn ≥ x_i^m + f_j^n − 1   (12)
Solving the ILP for computation deployment

NP-hard, but can be solved with branch-and-cut
LP solvers are very good at this
Yet, only possible for small instances
Gives us a reference for optimality: benchmarking
When not solvable, still gives us a metric to compare heuristics
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
Hands on PyDCOP III

Distribution and deployment

1. Deploy on several machines
https://pydcop.readthedocs.io/en/latest/tutorials/deploying_on_machines.html

2. Running a single agent
https://pydcop.readthedocs.io/en/latest/usage/cli/run.html

3. Distributing computations / tasks
https://pydcop.readthedocs.io/en/latest/usage/cli/distribute.html
Hands on PyDCOP III
SECP

A very simple SECP: single room
3 light bulbs, 1 model and 3 rules
/tutorials/hands-on_3/single_room.yaml

Solve with

pydcop --log log.conf -t 10 solve \
    --algo amaxsum --algo_params damping:0.8 \
    --dist gh_secp_fgdp single_room.yaml

Result: "cost": 702.3000000000004, ...
I not that good ...
I Look at the yaml definition
I the rules contradict each other!
I We need all rules when distributing, but not at runtime...

Change the yaml definition
I comment out rules to keep only one active
I could be done with 'read-only' variables
I solve it again
Hands on PyDCOP III
SECP - Running on several machines

We used solve
I great for testing
I everything runs locally, in the same process

Launching several agents:
I One agent for each light bulb a1, a2 and a3 (change the port for each agent)

pydcop -v3 agent -n a1 -p 9001 \
    --orchestrator 127.0.0.1:9000

I an orchestrator

pydcop --log log.conf -t 10 orchestrator \
    --algo amaxsum --algo_params damping:0.8 \
    --dist gh_secp_fgdp single_room.yaml

I run the agents on different virtual machines, different computers
Hands on PyDCOP III
A bigger SECP

in /tutorials/hands-on_3/SimpleHouse.yml
13 light bulbs, 6 models

[Figure: house plan with rooms Desk, Living Room, TV, Kitchen, Entrance, Stairs, and light bulbs ld1, ld2, llv1–llv3, ltv1–ltv3, lk1–lk3, le1, ls1.]
Hands on PyDCOP III
Distributing a SECP

$ pydcop --output dist_house_fg_ilp.yaml distribute -d oilp_secp_fgdp \
      -a maxsum SimpleHouse.yml

Need to specify the algorithm, used to deduce:
I the computation graph
I the computations' weight
I the size of computations' messages

On such a small system, we can compute the optimal distribution!
Hands on PyDCOP III
Distributing a SECP

communication_cost: 90
cost: 90
distribution:
  a_d1: [l_d1, c_l_d1]
  a_d2: [l_d2, c_l_d2, mv_desk, c_mv_desk, r_work]
  a_e1: [l_e1, c_l_e1, mv_stairs, c_mv_stairs]
  a_e2: [l_e2, c_l_e2, mv_entry, c_mv_entry, r_entry]
  a_k1: [l_k1, c_l_k1, mv_tv, c_mv_tv]
  a_k2: [l_k2, c_l_k2]
  a_k3: [l_k3, c_l_k3]
  a_lv1: [l_lv1, c_l_lv1]
  a_lv2: [l_lv2, c_l_lv2, mv_livingroom, c_mv_livingroom, r_lunch]
  a_lv3: [l_lv3, c_l_lv3]
  a_tv1: [l_tv1, c_l_tv1]
  a_tv2: [l_tv2, c_l_tv2]
  a_tv3: [l_tv3, c_l_tv3, mv_kitchen, c_mv_kitchen, r_homecinema, r_cooking]
[...]
dist_algo: oilp_secp_fgdp
duration: 0.19037652015686035
graph: factor_graph
status: SUCCESS
Menu
Some Installation
DCOP Framework
Hands on PyDCOP I
Focus on Some Solution Methods
Hands on PyDCOP II
Focus on Smart Environment Configuration Problems
Distributing Computations
Hands on PyDCOP III
Conclusion
To sum up
What we've seen today
- Some generic concepts:
  - How to model coordination problems using the DCOP formalism
  - Some solution methods (complete and incomplete) to solve DCOPs
- Some specificities of IoT-based apps:
  - How to model a specific smart environment configuration problem as a DCOP
  - How to use pyDCOP to model, run, solve, and distribute DCOPs

What we've not seen today
- How to equip a system with resilience using replication and DCOP-based reparation

Want to go deeper into DCOPs?
→ OPTMAS-DCR workshop series (AAMAS/IJCAI), other tutorials at AAMAS/IJCAI
pyDCOP
Source code: https://github.com/Orange-OpenSource/pyDcop
Documentation: https://pydcop.readthedocs.io
SECP and DCOP
So far we have:
- Designed a model for SECP
- Formulated this model as a DCOP
- Distributed the computation of the DCOP on devices/agents (bootstrap)
- Run our system to get self-configured devices
But what happens in dynamic environments?
What if objects appear and disappear?
SECP is a dynamic problem
Dynamics in the infrastructure
- Devices can disappear
- New devices can be added to the system

At runtime:
- No powerful device is available to solve the ILP
- The deployment must be repaired: self-adaptation
- Only consider a portion of the factor graph: the neighborhood
[Factor-graph figure: variables x1–x4, y1, y2 and factors c1–c4, ϕ1, ϕ2, u1, u2, distributed over agents a1–a4]
k-resilience
Dynamics in the infrastructure
Definition (k-resiliency)
k-resiliency is the capacity of a system to repair itself and operate correctly even in the case of the disappearance of up to k agents.
Two parts:
- Do not lose the definition of the computations: replication
- Migrate the orphaned computations to another agent: selection/activation

Applies to any graph of computations, not only DCOPs
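A minimal sketch of checking the definition above: a placement is k-resilient when every computation's definition is held by at least k agents besides its active host, so losing any k agents leaves at least one copy. The data structures are illustrative, not pyDCOP's own:

```python
# Sketch: verify that a replica placement is k-resilient, i.e. each
# computation's definition is held by at least k agents other than its
# active host, so the disappearance of any k agents leaves a copy.

def is_k_resilient(active, replicas, k):
    """active: {computation: hosting agent};
    replicas: {computation: iterable of agents holding a replica}."""
    for comp, host in active.items():
        backups = set(replicas.get(comp, ())) - {host}
        if len(backups) < k:
            return False
    return True
```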
Replication of computations
Replica distribution
- For each computation, place k replicas on k other agents
- A replica is the definition of the computation
- Must be distributed!
- Optimal replication? It impacts the set of agents available when repairing; which criteria? Too hard (quadratic multiple knapsack problem)...

Distributed Replica Placement Method (DRPM)
- Heuristic: place replicas on agents close (in the network) to the active computation, while respecting capacity
- Distributed version of iterative lengthening (aka uniform-cost search based on path costs)
Replication of computations
Iterative lengthening on route and hosting costs

[Example over four agents a1–a4, replicating a computation xi from a1.
Route costs: route(a1, a2) = 1, route(a2, a3) = 3, route(a2, a4) = 1, route(a1, a4) = 1.
Hosting costs: host(a2, xi) = 1, host(a3, xi) = 1, host(a4, xi) = 5.]
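The search idea can be reproduced in a few lines. DRPM itself runs distributed among the agents; this is a centralized sketch under our own illustrative data structures: explore agents in increasing route-cost order from the agent hosting the active computation, rank each reachable agent by path cost plus hosting cost, and keep the k cheapest ones with enough spare capacity.

```python
# Centralized sketch of the replica-placement idea: uniform-cost search
# over route costs from the hosting agent, candidates ranked by
# path cost + hosting cost, keeping the k cheapest with spare capacity.
import heapq

def place_replicas(start, routes, host_cost, capacity, size, k):
    """routes: {agent: {neighbor: route cost}}; host_cost: {agent: cost};
    capacity: {agent: spare capacity}; size: footprint of one replica."""
    dist = {start: 0}
    frontier = [(0, start)]
    candidates = []  # (path cost + hosting cost, agent)
    while frontier:
        d, a = heapq.heappop(frontier)
        if d > dist.get(a, float("inf")):
            continue  # stale queue entry
        if a != start and capacity.get(a, 0) >= size:
            candidates.append((d + host_cost.get(a, 0), a))
        for b, c in routes.get(a, {}).items():
            if d + c < dist.get(b, float("inf")):
                dist[b] = d + c
                heapq.heappush(frontier, (d + c, b))
    return [a for _, a in sorted(candidates)[:k]]
```

With the route and hosting costs of the example above and k = 2, the selected agents are a2 (cost 1 + 1 = 2) and a3 (cost 4 + 1 = 5), while a4 (cost 1 + 5 = 6) is left out.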
Migrating computationsSelecting an agent
Migrating a set Xc of computations
- Set of candidate agents Ac
- Migrating a computation must not exceed the agent's capacity
- For each computation, select the agent that minimizes hosting and communication costs

Same optimization problem as for the initial distribution, but on a subset of the graph

Distributed process!
Migrating computations
Selecting an agent

Distributed optimization problem ⇒ let's use a DCOP!
- A is the set of candidate agents Ac
- X are the binary decision variables $x^m_i$
- C are the constraints ensuring that all computations are hosted, agents' capacities are respected, and hosting and communication costs are minimized
Migrating computations
Selecting an agent

$\sum_{a_m \in A^i_c} x^m_i = 1 \quad (13)$

$\sum_{x_i \in X^m_c} weight(x_i) \cdot x^m_i + \sum_{x_j \in \mu^{-1}(a_m) \setminus X_c} weight(x_j) \le cap(a_m) \quad (14)$

$\sum_{x_i \in X^m_c} host(a_m, x_i) \cdot x^m_i \quad (15)$

$\sum_{(x_i, x_j) \in X^m_c \times N_i \setminus X_c} x^m_i \cdot com(i, j, m, \mu^{-1}(x_j)) + \sum_{(x_i, x_j) \in X^m_c \times N_i \cap X_c} x^m_i \cdot \sum_{a_n \in A^j_c} x^n_j \cdot com(i, j, m, n) \quad (16)$

(13): each orphaned computation is hosted by exactly one candidate agent; (14): each agent's capacity is respected; (15) and (16): hosting and communication costs to be minimized.
Decentralized reparation
When agents are removed:
- Computations to migrate = computations that were hosted on these agents
- Candidate agents = remaining agents that possess a replica of these orphaned computations
Solving the migration DCOP
Which algorithm should we use?
Criteria:
- lightweight
- fast (even if not optimal!)
- monotonic: mix of hard and soft constraints

MGM-2: like MGM, with 2-coordination
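MGM-2 extends MGM with pairwise (2-)coordination. To give the flavour of this monotonic local-search family, here is a minimal sketch of plain MGM for binary constraints, run synchronously, with ties broken by variable name (that tie-break is our own simplification):

```python
# Sketch of plain MGM (1-coordination): each round, every variable
# computes the gain of its best unilateral move; a variable moves only
# if its gain beats all its neighbors' gains (ties broken by name),
# which makes the total cost decrease monotonically.

def mgm(variables, domains, neighbors, cost, assignment, rounds=50):
    """cost(i, vi, j, vj): cost of the constraint between i and j;
    neighbors: {var: [vars]}; assignment is updated in place."""
    def local_cost(i, vi, assign):
        return sum(cost(i, vi, j, assign[j]) for j in neighbors[i])
    for _ in range(rounds):
        gains, moves = {}, {}
        for i in variables:
            current = local_cost(i, assignment[i], assignment)
            best_v = min(domains[i], key=lambda v: local_cost(i, v, assignment))
            gains[i] = current - local_cost(i, best_v, assignment)
            moves[i] = best_v
        changed = False
        for i in variables:
            wins = all(gains[i] > gains[j]
                       or (gains[i] == gains[j] and i < j)
                       for j in neighbors[i])
            if gains[i] > 0 and wins:
                assignment[i] = moves[i]
                changed = True
        if not changed:
            break
    return assignment
```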
How does it behave, experimentally?
[Figure: solution cost over time (s) for DSA during reparation, on coloring problems (scale-free, uniform, and random graphs) and Ising problems; curves show single-instance cost with perturbation, average cost with perturbation, and average cost without perturbation]
How does it behave, experimentally? (cont.)
[Figure: solution cost over time (s) for MaxSum during reparation, on coloring problems (scale-free, uniform, and random graphs) and Ising problems; curves show single-instance cost with perturbation, average cost with perturbation, and average cost without perturbation]