
Page 1:

DISTRIBUTED SYSTEMS II

AGREEMENT - COMMIT (2-3 PHASE COMMIT)

Prof Philippas Tsigas, Distributed Computing and Systems Research Group

Page 2:

“I can’t find a solution, I guess I’m just too dumb”

Picture from Computers and Intractability, by Garey and Johnson

Page 3:

“I can’t find an algorithm, because no such algorithm is possible”

Picture from Computers and Intractability, by Garey and Johnson

Page 4:

“I can’t find an algorithm, but neither can all these famous people.”

Picture from Computers and Intractability, by Garey and Johnson

Page 5:

Atomic commit protocols

Transaction atomicity requires that, at the end, either all of a transaction's operations are carried out or none of them are.

In a distributed transaction, the client has requested operations at more than one server.

One-phase atomic commit protocol:
– the coordinator tells the participants whether to commit or abort
– what is the problem with that?
– it does not allow a server to decide to abort, e.g. because it has discovered a deadlock or has crashed and been restarted

Two-phase atomic commit protocol:
– designed to allow any participant to choose to abort a transaction
– phase 1: each participant votes. If it votes to commit, it is prepared and cannot change its mind; in case it crashes, it must save its updates in permanent storage (see the sketch below)
– phase 2: the participants carry out the joint decision. The decision can be commit or abort; participants record it in permanent storage
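As a rough sketch of the participant's side of phase 1 (the class, the log format and the method names are illustrative, not from the slides): before voting Yes, the participant forces its updates to permanent storage, because once it has voted it may not change its mind.

    # Hypothetical participant voting step (Python).
    import json, os

    class Participant:
        def __init__(self, log_path):
            self.log_path = log_path   # permanent storage for prepared updates
            self.pending = {}          # trans_id -> updates not yet durable

        def can_commit(self, trans_id):
            updates = self.pending.get(trans_id)
            if updates is None:        # already abandoned locally, e.g. after
                return False           # a deadlock or a crash-restart: vote No
            # Before voting Yes, become 'prepared': force the updates to
            # permanent storage so the vote survives a crash.
            with open(self.log_path, "a") as log:
                log.write(json.dumps({"trans": trans_id, "updates": updates}) + "\n")
                log.flush()
                os.fsync(log.fileno())
            return True                # prepared: the vote cannot be withdrawn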

Page 6:

Failure model for the commit protocols

Failure model for transactions:
– this applies to the two-phase commit protocol

Commit protocols are designed to work in:
– a synchronous system: it is a failure when a message does not arrive on time
– servers may crash, but a crashed server is replaced by a new process whose state is set from information saved in permanent storage and information held by other processes
– messages may NOT be lost
– corrupt and duplicated messages are assumed to be removed
– no byzantine faults: servers either crash or obey the requests they are sent

2PC is an example of a protocol for reaching consensus:
– consensus cannot be reached in an asynchronous system if processes sometimes fail
– however, 2PC does reach consensus under those conditions
– because crash failures of processes are masked by replacing a crashed process with a new process whose state is set from information saved in permanent storage and information held by other processes

Page 7:

Operations for two-phase commit protocol

Participant interface: canCommit?, doCommit, doAbort
Coordinator interface: haveCommitted, getDecision

canCommit?(trans) -> Yes / No
Call from coordinator to participant to ask whether it can commit a transaction. Participant replies with its vote. (This is a request with a reply.)

doCommit(trans)
Call from coordinator to participant to tell participant to commit its part of a transaction. (Asynchronous request, to avoid delays.)

doAbort(trans)
Call from coordinator to participant to tell participant to abort its part of a transaction. (Asynchronous request, to avoid delays.)

haveCommitted(trans, participant)
Call from participant to coordinator to confirm that it has committed the transaction. (Asynchronous request.)

getDecision(trans) -> Yes / No
Call from participant to coordinator to ask for the decision on a transaction after it has voted Yes but has still had no reply after some delay. Used to recover from a server crash or delayed messages. (This is a request with a reply.)
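One way to write the table down as interfaces (a Python sketch; the operation names follow the table, with the '?' of canCommit? dropped since it is not a legal identifier):

    from typing import Protocol

    class ParticipantInterface(Protocol):
        def canCommit(self, trans: str) -> bool: ...   # request with a reply: the vote
        def doCommit(self, trans: str) -> None: ...    # asynchronous, no reply awaited
        def doAbort(self, trans: str) -> None: ...     # asynchronous, no reply awaited

    class CoordinatorInterface(Protocol):
        def haveCommitted(self, trans: str, participant: str) -> None: ...  # asynchronous
        def getDecision(self, trans: str) -> bool: ...  # asked after voting Yes and hearing nothing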

Page 8:

The two-phase commit protocol

• Phase 1 (voting phase):
• 1. The coordinator sends a canCommit? request to each of the participants in the transaction.
• 2. When a participant receives a canCommit? request it replies with its vote (Yes or No) to the coordinator. Before voting Yes, it prepares to commit by saving its objects in permanent storage. If the vote is No, the participant aborts immediately.

• Phase 2 (completion according to outcome of vote):
• 3. The coordinator collects the votes (including its own).
(a) If there are no failures and all the votes are Yes, the coordinator decides to commit the transaction and sends a doCommit request to each of the participants.
(b) Otherwise the coordinator decides to abort the transaction and sends doAbort requests to all participants that voted Yes.
• 4. Participants that voted Yes wait for a doCommit or doAbort request from the coordinator. When a participant receives one of these messages it acts accordingly and, in the case of commit, makes a haveCommitted call as confirmation to the coordinator.
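The coordinator's side of both phases fits in a few lines. A rough in-process sketch (remote calls, vote timeouts and the crash handling of the following slides are left out):

    def two_phase_commit(coordinator_votes_yes, participants, trans):
        # Phase 1 (voting): collect the votes, including the coordinator's own.
        votes = [p.canCommit(trans) for p in participants]
        commit = coordinator_votes_yes and all(votes)

        # Phase 2 (completion according to the outcome of the vote).
        if commit:
            for p in participants:
                p.doCommit(trans)
        else:
            # doAbort need only go to participants that voted Yes;
            # those that voted No aborted immediately.
            for p, yes in zip(participants, votes):
                if yes:
                    p.doAbort(trans)
        return commit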

Page 9:

Two-Phase Commit Protocol

[Message diagram: (1) the coordinator sends canCommit? to each participant; (2) each participant replies yes or no; (3) if all vote yes the coordinator sends doCommit, otherwise doAbort; (4) the participants complete the transaction accordingly.]

Page 10:

TimeOut Protocol

[Diagram: the same message exchange, with timeout points 1-4 marked.]

At steps 2 and 3 no commit decision has been made yet, so it is OK to abort: the coordinator will either fail to collect all the commit votes or will itself vote to abort.

Page 11:

TimeOut Protocol

At step 4 (a cohort has voted commit but heard no decision):
– the cohort cannot communicate with the coordinator
– the coordinator may already have decided
– the cohort must block until communication is re-established
– it might ask the other cohorts (see the sketch below)
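A sketch of that blocked wait (illustrative; asking the other cohorts assumes they expose getDecision too, and that one which does not know the outcome raises LookupError):

    import time

    def await_decision(coordinator, other_cohorts, trans, poll=1.0):
        # A cohort that voted Yes cannot unilaterally abort: it must block
        # until it learns the decision from somewhere.
        while True:
            try:
                return coordinator.getDecision(trans)   # True = commit, False = abort
            except ConnectionError:
                for c in other_cohorts:                 # maybe another cohort knows
                    try:
                        return c.getDecision(trans)
                    except (ConnectionError, LookupError):
                        pass
                time.sleep(poll)                        # still uncertain: keep blocking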

Page 12:

Restart Protocol

[Diagram: the 2PC exchange (canCommit?, yes/no, doCommit/doAbort) with recovery points 1-4 marked.]

If the recovering site:
• has decided, it just picks up from where it left off
• is a cohort that had not voted, it decides abort
• is a coordinator that has not decided, it decides abort
• is a cohort that crashed after voting commit, it must block until it discovers the decision

These rules are restated in the sketch below.
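A sketch as a function over what the recovering site finds in its log (the role and state names are assumptions about what the recovery log records):

    def restart_action(role, logged_state):
        if logged_state in ("committed", "aborted"):
            return logged_state        # decided: pick up from where it left off
        if role == "cohort" and logged_state == "no_vote":
            return "aborted"           # never voted, so commit is impossible
        if role == "coordinator":
            return "aborted"           # undecided coordinator decides abort
        return "blocked"               # voted commit: must discover the decision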

Page 13:

Blocking

[Diagram: the 2PC exchange with points 1-4 marked.]

Blocking can occur between points 2 and 4 if:
• the coordinator crashes
• a cohort cannot communicate with the coordinator

Page 14:

Three-Phase Commit Protocol

[Message diagram: (1) the coordinator sends canCommit?; (2) participants reply yes or no, and a no leads to doAbort; (3) on all-yes the coordinator sends preCommit; (4) participants acknowledge (ack); (5) the coordinator sends commit; (6) participants commit.]
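Read as code, the coordinator's side of the diagram looks roughly like this (a sketch: each preCommit is assumed to return only once the participant's ack has arrived, and timeout handling is omitted):

    def three_phase_commit(participants, trans):
        # Phase 1: voting (steps 1-2 in the diagram).
        if not all(p.canCommit(trans) for p in participants):
            for p in participants:
                p.doAbort(trans)
            return False
        # Phase 2: announce the tentative outcome and gather acks (steps 3-4);
        # nobody commits while another participant may still be uncertain.
        for p in participants:
            p.preCommit(trans)
        # Phase 3: the actual commit (steps 5-6).
        for p in participants:
            p.doCommit(trans)
        return True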

Page 15:

Three-phase commit protocol

Page 16:

Performance of the two-phase commit protocol

If there are no failures, 2PC involving N participants requires:
– N canCommit? messages and replies, followed by N doCommit messages
– so the cost in messages is proportional to 3N, and the cost in time is three rounds of messages (the haveCommitted messages are not counted)

But:
– there may be arbitrarily many server and communication failures
– 2PC is guaranteed to complete eventually, but it is not possible to specify a time limit within which it will be completed
– participants in the uncertain state suffer delays
– some 3PC variants are designed to alleviate such delays, but they require more messages and more rounds in the normal case

Page 17:

What to read from your Book

17.1 Introduction
17.2 Flat and nested distributed transactions
17.3 Atomic commit protocols
17.4 Concurrency control in distributed transactions
17.5 Distributed deadlocks
17.6 Transaction recovery

Page 18:

17.3.2 Two-phase commit protocol for nested transactions

Recall Fig 13.1b: top-level transaction T and subtransactions T1, T2, T11, T12, T21, T22.

A subtransaction starts after its parent and finishes before it. When a subtransaction completes, it makes an independent decision either to commit provisionally or to abort.
– A provisional commit is not the same as being prepared: it is a local decision and is not backed up in permanent storage.
– If the server crashes subsequently, its replacement will not be able to carry out a provisional commit.

A two-phase commit protocol is therefore needed for nested transactions:
– it allows servers of provisionally committed transactions that have crashed to abort them when they recover.

Page 19:

DISTRIBUTED SYSTEMS II REPLICATION

Prof Philippas Tsigas, Distributed Computing and Systems Research Group

Page 20:

Distributed Systems Course Replication

18.1 Introduction to replication

18.2 System model and group communication

18.3 Fault-tolerant services

18.4 Highly available services

18.5 Transactions with replicated data

Page 21:

Introduction to replication

Replication of data: the maintenance of copies of data at multiple computers.

Replication can provide:

Performance enhancement
– e.g. several web servers can have the same DNS name and the servers are selected in turn, to share the load
– replication of read-only data is simple, but replication of changing data has overheads

Fault-tolerant service
– guarantees correct behaviour in spite of certain faults (can include timeliness)
– if f of f+1 servers crash, then 1 remains to supply the service
– if f of 2f+1 servers have byzantine faults, then the remaining servers can supply a correct service

High availability
– hindered by server failures: replicate data at failure-independent servers, so that when one fails a client may use another
– hindered by network partitions and disconnected operation: users of mobile computers deliberately disconnect, and then on re-connection resolve conflicts

Page 22:

Availability

Availability is used for repairable systems.

It is the probability that the system is operational at a random time t.

It can also be specified as the proportion of time that the system is available for use in a given interval (0, T).
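In symbols (a standard reliability formulation; the steady-state MTTF/MTTR form is the usual one, though it is not on the slide):

    A(t)   = \Pr[\text{system is operational at time } t]
    A(0,T) = \frac{1}{T} \int_0^T A(t)\,dt
    A      = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}

where MTTF is the mean time to failure and MTTR the mean time to repair.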

Page 23:

Requirements for replicated data

Replication transparency
– clients see logical objects (not several physical copies)
– they access one logical item and receive a single result

Consistency
– specified to suit the application
– e.g. when a user of a diary disconnects, their local copy may be inconsistent with the others and will need to be reconciled when they connect again; but connected clients using different copies should get consistent results
– these issues are addressed in Bayou and Coda

Page 24:

A basic architectural model for the management of replicated data

[Figure 14.1: clients (C) send requests and receive replies via front ends (FE), which communicate with the replica managers (RM) that together provide the service.]

A collection of RMs provides a service to clients

Clients see a service that gives them access to logical objects, which are in fact replicated at the RMs

Clients request operations: those without updates are called read-only requests; the others are called update requests (they may include reads).

Client requests are handled by front ends. A front end makes replication transparent.

Page 25:

14.2.1 System model

Each logical object is implemented by a collection of physical copies called replicas:
– the replicas are not necessarily consistent all the time (some may have received updates not yet conveyed to the others)

We assume an asynchronous system where processes fail only by crashing, and we generally assume no network partitions.

Replica managers:
– an RM contains replicas on a computer and accesses them directly
– RMs apply operations to replicas recoverably, i.e. they do not leave inconsistent results if they crash
– objects are copied at all RMs unless we state otherwise
– static systems are based on a fixed set of RMs
– in a dynamic system, RMs may join or leave (e.g. when they crash)
– an RM can be a state machine, which has the following properties:

Page 26:

State Machine Semantic Characterization

Outputs of a state machine are completely determined by the sequence of requests it processes, independent of time and of any other activity in the system.

The characterization is vague about internal structure.

Page 27:

State Machine: Examples

A state machine server:

    Word store[N]
    Read(int loc)            { send store[loc] to client; }
    Write(int loc, Word val) { store[loc] = val; }

A client of it:

    memory.write(100, 4)
    memory.read(100)
    receive v from memory

Not a state machine (its output depends on the sensor and on timing, not only on the requests it processes):

    while true do
        read sensor
        q := compute adjustment
        send q to actuator
    end while

Page 28:

State Machine without Replication: Response Guarantees

[Diagram: a client exchanging requests and responses with a single server.]

Page 29:

Response Guarantees

1) Requests issued by a single client to a state machine are processed in the order issued (FIFO request delivery)

2) If request r, issued to state machine s by client c1, could have caused request r' issued to s by client c2, then s processes r before r' (causal request delivery)


Page 31:

Requests are buffered until they become stable and can be processed

Page 32:

All replicas process the same sequence of requests

1. Uniquely identify the requests.

2. Order the requests. Do not forget the guarantees that we expect.

3. Servers have to know when to service a request (when a request is stable).

Page 33:

When to process a request – Stability Detection

Three methods:
– Logical clocks
– Real-time clocks
– Server-generated ids

Page 34:

Logical Clocks

Assign an integer T(e, p) to each event e at processor p. The clock is updated:
– if e is the sending of a message
– if e is the receiving of a message
– if e is an important local event

Properties (see the sketch below):
– for any two events, T(e, p) < T(e1, q) or vice versa
– if e could have caused e1, then T(e, p) < T(e1, q)
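A minimal sketch of such a clock, following Lamport's rules (the slide lists only the three event cases; the max-plus-one rule on receipt and the tie-break by processor id are the standard construction):

    class LamportClock:
        def __init__(self, proc_id):
            self.t = 0
            self.proc_id = proc_id         # breaks ties between equal timestamps

        def local_event(self):             # an important local event
            self.t += 1
            return (self.t, self.proc_id)

        def send(self):                    # timestamp piggybacked on the message
            self.t += 1
            return (self.t, self.proc_id)

        def receive(self, msg_t):          # advance past the sender's timestamp
            self.t = max(self.t, msg_t) + 1
            return (self.t, self.proc_id)

Comparing the (t, proc_id) pairs lexicographically gives the total order required by the first property.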

Page 35:

[Example figure: processors ordered p < q < r; the ordering is used to break timestamp ties.]

Page 36:

Synchronized Real-Time Clocks

– A message sent with uid t will be received no later than t + D by the local clock.

– Uids differ by at most D at any time.

Page 37:

Server-generated ids

Clients first obtain an id from the server and then attach it to the request they issue (the server acts like a sequencer), as in the sketch below.
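A sketch of such a sequencer (one reading of the scheme: a request with id n is stable once all requests with smaller ids have been processed):

    import threading

    class Sequencer:
        def __init__(self):
            self._lock = threading.Lock()
            self._next = 0

        def new_id(self):
            # Unique, totally ordered ids; the client attaches the id
            # to the request it then issues.
            with self._lock:
                self._next += 1
                return self._next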

Page 38:

State Machine

[Diagram: a client issuing requests to a server.]


Page 40:

State Machine approach to Replication

Each RM:
– applies operations atomically
– its state is a deterministic function of its initial state and the operations applied
– all replicas start identical and carry out the same sequence of operations
– its operations must not be affected by clock readings etc.

Page 41:

Replication

Place a copy of the server state machine on multiple network nodes.

How are the requests communicated to the copies? How do the copies coordinate?

Want:
• All replicas start in the same state
• All replicas receive the same set of requests
• All replicas process the same sequence of requests

Page 42:

Four phases in performing a request

Issue request: the FE either
– sends the request to a single RM that passes it on to the others, or
– multicasts the request to all of the RMs

Coordination + agreement:
– the RMs decide whether to apply the request, and decide on its ordering relative to other requests (according to FIFO, causal or total ordering)

Execution:
– the RMs execute the request (sometimes tentatively)

Response:
– one or more RMs reply to the FE; e.g. for high availability, give the first response to the client; to tolerate byzantine faults, take a vote

FIFO ordering: if a FE issues r then r', then any correct RM handles r before r'.
Causal ordering: if r happened-before r', then any correct RM handles r before r'.
Total ordering: if a correct RM handles r before r', then any correct RM handles r before r'.

The RMs agree, i.e. reach a consensus as to the effect of the request. In Gossip, all RMs eventually receive updates.

Page 43:

13.3.2. Active replication for fault tolerance

The RMs are state machines, all playing the same role and organised as a group:
– all start in the same state and perform the same operations in the same order, so that their state remains identical

If an RM crashes, it has no effect on the performance of the service, because the others continue as normal.

It can tolerate byzantine failures, because the FE can collect and compare the replies it receives.

[Figure 14.5: clients (C) communicate through FEs, which multicast requests to the group of RMs.]

An FE multicasts each request to the group of RMs (and FEs); the RMs process each request identically and reply.

This requires totally ordered reliable multicast, so that all RMs perform the same operations in the same order.

What sort of system do we need to perform totally ordered reliable multicast?

Page 44:

Active replication - five phases in performing a client request

Request:
– the FE attaches a unique id and uses totally ordered reliable multicast to send the request to the RMs. The FE can, at worst, crash. It does not issue requests in parallel.

Coordination:
– the multicast delivers requests to all the RMs in the same (total) order.

Execution:
– every RM executes the request. They are state machines and receive requests in the same order, so the effects are identical. The id is put in the response.

Agreement:
– no agreement is required, because all RMs execute the same operations in the same order, due to the properties of the totally ordered multicast.

Response:
– the FEs collect responses from the RMs. An FE may use just one response or several. If it is only trying to tolerate crash failures, it gives the client the first response (see the sketch below).
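A sketch of the response phase at the FE under the two fault assumptions above (illustrative code; replies are assumed to be hashable values):

    from collections import Counter

    def deliver_response(replies, f, tolerate_byzantine):
        if not tolerate_byzantine:
            return replies[0]              # crash faults only: the first reply will do
        # Byzantine faults: take a vote. With at most f faulty RMs,
        # f + 1 identical replies are guaranteed to be correct.
        value, count = Counter(replies).most_common(1)[0]
        if count >= f + 1:
            return value
        raise RuntimeError("not enough matching replies yet")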

Page 45:

The passive (primary-backup) model for fault tolerance

There is at any time a single primary RM and one or more secondary (backup, slave) RMs.

FEs communicate with the primary, which executes the operations and sends copies of the updated data to the backups.

If the primary fails, one of the backups is promoted to act as the primary.

[Figure 14.4: clients (C) communicate through FEs with the primary RM, which propagates updates to the backup RMs.]

The FE has to find the primary, e.g. after it crashes and another takes over

Page 46:

Passive (primary-backup) replication. Five phases.

The five phases in performing a client request are as follows (sketched in code below):

1. Request:
– a FE issues the request, containing a unique identifier, to the primary RM.

2. Coordination:
– the primary performs each request atomically, in the order in which it receives it relative to other requests.
– it checks the unique id; if it has already done the request, it re-sends the response.

3. Execution:
– the primary executes the request and stores the response.

4. Agreement:
– if the request is an update, the primary sends the updated state, the response and the unique identifier to all the backups. The backups send an acknowledgement.

5. Response:
– the primary responds to the FE, which hands the response back to the client.
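A sketch of the primary's handling of one request, covering phases 2-5 (the backup's update method and the shape of operations are assumptions):

    class PrimaryRM:
        def __init__(self, state, backups):
            self.state = state
            self.backups = backups
            self.responses = {}            # unique request id -> stored response

        def handle(self, req_id, operation):
            # Coordination: a duplicate request gets the stored response re-sent.
            if req_id in self.responses:
                return self.responses[req_id]
            # Execution: performed atomically, in arrival order.
            response = operation(self.state)
            # Agreement: send updated state, response and id to all backups.
            for b in self.backups:
                b.update(req_id, self.state, response)
            self.responses[req_id] = response
            # Response: reply to the FE, which hands it back to the client.
            return response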

Page 47:

Passive (primary-backup) replication (discussion)

This system implements linearizability, since the primary sequences all the operations on the shared objects.

If the primary fails, the system remains linearizable if a single backup takes over exactly where the primary left off, i.e.:
– the primary is replaced by a unique backup
– the surviving RMs agree which operations had been performed at the time of take-over

View-synchronous group communication can achieve this:
– when the surviving backups receive a view without the primary, they use an agreed function to calculate which of them is the new primary
– the new primary registers with the name service
– view synchrony also allows the processes to agree which operations were performed before the primary failed
– e.g. when a FE does not get a response, it retransmits the request to the new primary
– the new primary continues from phase 2 (coordination): it uses the unique identifier to discover whether the request has already been performed