Distributed Markov Chains
P S Thiagarajan
School of Computing, National University of Singapore
Joint work with Madhavan Mukund, Sumit K Jha and Ratul Saha
Probabilistic dynamical systems
• Rich variety and theories of probabilistic dynamical systems
  – Markov chains, Markov Decision Processes (MDPs), Dynamic Bayesian networks
• Many applications
• Size of the model is a bottleneck
  – Can we exploit concurrency theory?
• We explore this in the setting of Markov chains.
Our proposal

• A set of interacting sequential systems.
  – Synchronize on common actions.
  – This leads to a joint probabilistic move by the participating agents.
  – More than two agents can take part in a synchronization.
  – More than two probabilistic outcomes are possible.
  – There can also be just one agent taking part in a synchronization.
    • Viewed as an internal probabilistic move (as in a Markov chain) by the agent.

[Diagram: two agents synchronize on action a; the joint move has outcomes with probabilities 0.8 and 0.2.]
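The joint probabilistic move described above can be illustrated in code. This is a minimal sketch, not an implementation from the talk; all names (`joint_move`, the state dictionaries) are hypothetical:

```python
import random

def joint_move(states, participants, outcomes, rng=random.random):
    """One synchronization: a joint probabilistic move.
    states: dict mapping each agent to its local state.
    participants: the agents taking part in the synchronization.
    outcomes: list of (probability, {agent: new_state}) pairs summing to 1.
    One outcome is drawn, and ALL participants update their states together."""
    r, acc = rng(), 0.0
    for prob, update in outcomes:
        acc += prob
        if r < acc:
            for agent in participants:
                states[agent] = update[agent]
            return states
    return states

states = {"i": "s", "j": "t"}
joint_move(states, ["i", "j"],
           [(0.8, {"i": "s1", "j": "t1"}),
            (0.2, {"i": "s2", "j": "t2"})],
           rng=lambda: 0.1)  # deterministic draw for illustration
```

With the draw fixed at 0.1 (< 0.8), the first outcome is taken, so both agents move together to their first alternative.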
Our proposal

• This type of system has been explored by Pighizzini et al. (“Probabilistic asynchronous automata”, 1996)
  – Language-theoretic study.
• Our key idea:
  – Impose a “determinacy of communications” restriction.
  – Study formal verification problems using partial order based methods.
• We study here just one simple verification method.
Some notations
Determinacy of communications.

[Diagram: component i in state s can synchronize on a with component j in state s’ or in state s’’; all three states are labelled with the action set {a}, loc(a) = {i, j}, and both (s, s’) and (s, s’’) en a.]
Not allowed!

[Diagram: component i in state s communicating with component j (state s’) and component k (state s’’) on the action set {a}.]

act(s) will have more than one action.
Some notations
Example
– Two players each toss a fair coin
– If the outcome is the same, they toss again
– If the outcomes are different, the one who tosses Heads wins
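As a sanity check, the game can be simulated directly (an illustrative sketch, not part of the slides): by symmetry each player should win with probability 1/2, and the number of rounds is geometric with mean 2.

```python
import random

def play(rng):
    """One game: both players toss fair coins until the outcomes differ;
    the player who tossed Heads wins. Returns (winner, rounds)."""
    rounds = 0
    while True:
        rounds += 1
        c1, c2 = rng.random() < 0.5, rng.random() < 0.5  # True = Heads
        if c1 != c2:
            return (1 if c1 else 2), rounds

rng = random.Random(0)
games = [play(rng) for _ in range(100_000)]
p1 = sum(1 for w, _ in games if w == 1) / len(games)
mean_rounds = sum(r for _, r in games) / len(games)
```

Over 100,000 games the empirical win rate of player 1 and the mean round count settle near 0.5 and 2 respectively.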
Example
Two component DMC
Interleaved semantics.
Coin tosses are local actions; deciding a winner is a synchronized action.
Goal
• We wish to analyze the behavior of a DMC in terms of its interleaved semantics.
• Follow the Markov chain route.
  – Construct the path space.
    • The set of infinite paths from the initial state.
    • Basic cylinder: a set of infinite paths with a common finite prefix.
    • Close under countable unions and complements.
The transition system view

[Diagram: a Markov chain over states 1–4 with edge probabilities 1, 2/5 and 3/5.]

B – the set of all paths that have the prefix 3 4 1 3 4
Pr(B) = 1 · 2/5 · 1 · 1 = 2/5
Concurrency
• Events can occur independently of each other.
• Interleaved runs can be (concurrency) equivalent.
• We use Mazurkiewicz trace theory to group together equivalent runs: trace paths.
• Infinite trace paths do not suffice.
• We work with maximal infinite trace paths.
[Diagram: interleaved semantics of the coin example. From (in1, in2), local tosses h1, t1, h2, t2 (each with probability 0.5) lead via intermediate states such as (H1, in2) and (in1, T2) to (H1, H2), (T1, H2), (H1, T2), (T1, T2); synchronized moves w1, l2 then announce the result, e.g. (W1, L2) or (L1, W2).]
The trace space

• A basic trace cylinder is the one generated by a finite trace.
• Construct the σ-algebra by closing under countable unions and complements.
• We must construct a probability measure over this σ-algebra.
• For a basic trace cylinder, we want its probability to be the product of the probabilities of all the events in the trace.
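The intended assignment on basic trace cylinders is a finite product, as a small sketch makes concrete (the helper name is illustrative):

```python
from functools import reduce
from operator import mul

def cylinder_prob(event_probs):
    """Probability of a basic trace cylinder: the product of the
    probabilities of the events in the generating finite trace."""
    return reduce(mul, event_probs, 1.0)

# In the coin example, a trace consisting of two independent 0.5-moves:
p = cylinder_prob([0.5, 0.5])
```

The empty trace generates the full space, so its cylinder gets probability 1.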
[Diagram: the same interleaved semantics, with a basic trace cylinder B marked. Pr(B) = 0.5 · 0.5 = 0.25]
The probability measure over the trace space.

• But proving that this extends to a unique probability measure over the whole σ-algebra is hard.
• To solve this problem:
  – Define a Markov chain semantics for a DMC.
  – Construct a bijection between the maximal traces of the interleaved semantics and the infinite paths of the Markov chain semantics.
    • Using the Foata normal form.
  – Transport the probability measure over the path space to the trace space.
The Markov chain semantics.
Markov chain semantics
What if there were many players? Parallel probabilistic moves combine into a large number of global moves. This has a bearing on simulation time.
Probabilistic Product Bounded LTL

Local Bounded LTL
• Each component i has a local set of atomic propositions
  – Interpreted over Si
• Formulas are built from these atomic propositions with boolean connectives and bounded temporal operators, interpreted locally for component i.
Probabilistic Product Bounded LTL

Local Bounded LTL
• Each component has a local set of atomic propositions
• Formulas are built from atomic propositions with bounded temporal operators
  – The bound counts the (local) moves of the component

Product Bounded LTL
• Boolean combinations of Local Bounded LTL formulas

Probabilistic Product Bounded LTL
• Formulas Pr≥r(ψ), where ψ is a Product Bounded LTL formula
• Close under boolean combinations
PBLTL over interleaved runs

• Define i-projections for interleaved runs.
• Define satisfaction for local BLTL formulas and for product BLTL formulas.
• Use the measure on traces to define the probability of satisfaction.
Statistical model checking…
SPRT based model checking
• In our setting, each local BLTL formula for a component fixes a bound on the number of steps that the component needs to make; by then one will be able to decide whether the formula is satisfied.
• A product BLTL formula induces a vector of bounds.
• Simulate the system till each component meets its bound.
  – A little tricky: we cannot try to achieve this bound greedily.
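The SPRT loop itself is short. Below is a generic sketch of Wald's sequential probability ratio test for deciding whether Pr(φ) ≥ θ with indifference region (θ−δ, θ+δ); the `sample` stub stands in for simulating one bounded run and checking the formula, and all names are illustrative rather than taken from the tool described here:

```python
import math
import random

def sprt(sample, theta, delta=0.05, alpha=0.05, beta=0.05):
    """Wald's SPRT: decide between H0: p >= theta + delta and
    H1: p <= theta - delta, where p is the probability that one
    simulated run satisfies the property. sample() returns one
    Bernoulli verdict (True iff the run satisfies the formula)."""
    p0, p1 = theta + delta, theta - delta
    accept_h0 = math.log(beta / (1 - alpha))   # lower stopping bound
    accept_h1 = math.log((1 - beta) / alpha)   # upper stopping bound
    llr = 0.0                                  # log-likelihood ratio of H1 vs H0
    while True:
        x = sample()
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= accept_h0:
            return True    # property holds with probability >= theta
        if llr >= accept_h1:
            return False

rng = random.Random(1)
# stub sampler: runs satisfy the formula with true probability 0.9
verdict = sprt(lambda: rng.random() < 0.9, theta=0.6)
```

Since the true satisfaction probability (0.9) is well above θ = 0.6, the test accepts H0 after a modest number of sampled runs.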
Case study

Distributed leader election protocol [Itai-Rodeh]
• n identical processes in a unidirectional ring
• Each process randomly chooses an id from a fixed range and propagates it
• When a process receives an id
  – If it is smaller than its own, suppress the message
  – If it is larger than its own, drop out and forward
  – If it is equal to its own, mark a collision and forward
• If you get your own message back (the message hop count is n, which is known to all processes)
  – If no collision was recorded, you are the leader
  – If a collision occurred, these nodes go to the next round
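A round-level sketch of the protocol just described (this abstracts away the ring message passing entirely; `n`, `k` and the helper name are illustrative): in each round the active processes draw fresh random ids, a unique maximum elects its holder, and a collision at the maximum sends exactly those processes into the next round.

```python
import random

def elect(n, k, rng):
    """Round-abstracted Itai-Rodeh election over n processes drawing
    ids from 1..k. Returns (leader, number_of_rounds)."""
    active = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        ids = {p: rng.randint(1, k) for p in active}
        top_id = max(ids.values())
        top = [p for p in active if ids[p] == top_id]
        if len(top) == 1:
            return top[0], rounds   # unique maximum: its holder leads
        active = top                # collision: only these enter the next round

rng = random.Random(7)
elections = [elect(n=5, k=4, rng=rng) for _ in range(1000)]
```

Termination is almost sure: whenever at least two processes are active and k ≥ 2, each round resolves with positive probability, and by symmetry every process is equally likely to end up leader.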
Case study…

• In the Markov chain semantics:
  – The initial choice of identity is a probabilistic move with many alternatives
  – Building the global Markov chain to analyze the system is expensive
  – The asynchronous semantics allows interleaved exploration
Case study…
Distributed leader election protocol [Itai-Rodeh]
Case study
Dining Philosophers Problem
• n philosophers (processes) at a round table
• Each process tries to eat when hungry, and needs both the forks to its right and left
• The steps for a process are
  – move from thinking to hungry
  – when hungry, randomly choose to try to pick up the left or right fork
  – wait until the fork is down and then pick it up
  – if the other fork is free, pick it up; otherwise, put the original fork down (and return to step 1)
  – eat (since in possession of both forks)
  – when finished eating, put both forks down in any order and return to thinking
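The randomized fork-picking step above can be sketched as a single attempt by one philosopher (a simplified shared-memory sketch with illustrative names; real runs would interleave many concurrent attempts):

```python
import random

def try_to_eat(phil, forks, n, rng):
    """One attempt by a hungry philosopher. forks[i] is None when fork i
    is on the table, otherwise the index of its holder.
    Returns True if `phil` managed to eat on this attempt."""
    left, right = phil, (phil + 1) % n
    # randomly choose which fork to try first
    first, second = (left, right) if rng.random() < 0.5 else (right, left)
    if forks[first] is not None:
        return False                         # chosen fork busy: keep waiting
    forks[first] = phil                      # pick up the chosen fork
    if forks[second] is None:
        forks[second] = phil                 # got both forks: eat
        forks[first] = forks[second] = None  # finished: put both down
        return True
    forks[first] = None                      # other fork taken: back off
    return False

rng = random.Random(0)
forks = [None] * 5
ate = [try_to_eat(p, forks, 5, rng) for p in range(5)]
```

Because the attempts here run one after another, each philosopher finds both forks free and eats; the random first-fork choice matters only under true concurrency, where it breaks the symmetric deadlock.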
Case study…
Dining Philosophers Problem
Other examples
• Other PRISM case studies of randomized distributed algorithms
  – consensus protocols, gossip protocols…
  – Need to “translate” shared variables using a protocol
• Probabilistic choices in typical randomized protocols are local
• The DMC model allows communication to influence probabilistic choices
  – We have not exploited this yet!
  – Not represented in standard PRISM benchmarks
Summary and future work
• The interplay between concurrency and probabilistic dynamics is subtle and challenging.
• But concurrency theory may offer new tools for factorizing stochastic dynamics.
  – Earlier work on probabilistic event structures [Katoen et al., Abbes et al., Varacca et al.] also attempts to impose probabilities on concurrent structures.
  – Our work shows that formal verification as the goal offers valuable guidelines.
• Need to develop other model checking methods for DMCs.
  – Finite unfoldings
  – Stubborn sets for PCTL-like specifications