
Page 1: Lecture XIII: Replication-II

CMPT 431 2008

Dr. Alexandra Fedorova

Lecture XIII: Replication-II

Page 2: Lecture XIII: Replication-II

Outline

• Harp – a replicated research file system
• Google File System – a real replicated file system
• Amazon distributed data store – a distributed database

Page 3: Lecture XIII: Replication-II

Questions about Harp

• Does Harp use the two-phase commit protocol? If so, when and how? How does it differ from the 2PC protocol we studied in class?
• How many replicas that keep copies of the data do we need to survive n failures? How many total participants must we have to survive n failures?
• Describe normal operation in Harp. Explain the following:
  – What the primary does
  – What the replica does
  – What the witness does
• How does Harp survive failures without flushing updates to disk before responding to the client?

Page 4: Lecture XIII: Replication-II

More Questions

• What kind of logging does Harp use – redo or undo?
• When are log records applied to disk?
• Which state is volatile, and which state is not volatile?
• How fast are view changes?
• What happens if a component crashes during the view change?
• Summarize the contributions of Harp compared to other replication systems that existed at the time.

Page 5: Lecture XIII: Replication-II

Overview of Harp

• Uses primary-copy replication for:
  – Reliability
  – Availability
• Single primary server, backups, and a witness
• Accessed via the NFS interface
• Performance was a concern – the operations log is kept in memory only:
  – To guard against machine failures: the other replicas hold the log in memory
  – To guard against power failures: each machine has a UPS, so upon a power failure there is time to flush the log to persistent storage

Page 6: Lecture XIII: Replication-II

Access via NFS Interface

[Diagram: the user application and NFS client run on the client OS; the NFS server runs on the server OS and is backed by the replicated file system – primary, backup, and witness.]

Page 7: Lecture XIII: Replication-II

Failover Transparent to Clients

[Diagram: one NFS client sends requests to three NFS servers – primary, backup, and witness – via a shared address, e.g., 192.168.51.2.]

• Data is sent to a multicast address
• It reaches all potential primaries
• It is discarded by the hardware at all servers except the primary

Page 8: Lecture XIII: Replication-II

Goals and Environment of Harp

• Provide a highly available file system service via replication
• Assume fail-stop failures
• Survive network partitions
• Assume a synchronous system (do you agree?)
• In many systems, replication caused performance degradation – replica communication slowed down the response to the client
• Harp's goal was to provide reliability and availability without a performance loss

Page 9: Lecture XIII: Replication-II

Harp's Components

• In the presence of network partitions, we must have 2n + 1 replicated components to survive n failures
• The quorum (the majority, n + 1 servers) gets to form a new group and elect a new primary
• Usually data is replicated on 2n + 1 replicas
• In Harp, data is replicated on only n + 1 servers
• The other servers are used to form the quorum
• They are called witnesses

Page 10: Lecture XIII: Replication-II

Harp's Witness

[Diagram: primary, backup, and witness, with the link between the primary and the backup broken.]

• The backup and the primary cannot communicate – who should be the primary?
• If the witness can reach the primary, it resolves the tie in favor of the primary; data survives at the primary
• If the witness can reach only the backup, it resolves the tie in favor of the backup; data survives at the backup

Page 11: Lecture XIII: Replication-II

Harp: Normal Operation

[Diagram: client, primary, backup, and witness during normal operation.]

1. The client sends the request to the primary
2. The primary records the operation in its in-memory log
3. The primary forwards the request to the backup
4. The backup records the operation in its in-memory log
5. The backup responds to the primary
6. The primary "commits" the operation – marks it as committed in memory
7. The primary responds to the client
8. The primary tells the backup to commit

Page 12: Lecture XIII: Replication-II

Two-Phase Protocol for Updates

• Phase 1:
  – Send updates to all backups
  – Wait for the backups to respond
  – Send the response to the client
• Phase 2:
  – Backups are informed about the commit
  – Backups commit the operation locally
• Phase 1 is in the critical path
• Phase 2 happens in the background
• Phase 1 is quick, because updates do not have to be written to disk (see the sketch below)
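A minimal sketch of the normal-case update path on the primary, under the two-phase scheme above. The backup stub methods (append, commit) are hypothetical names used only for illustration; only phase 1 is in the critical path.

```python
import threading

class HarpPrimary:
    def __init__(self, backups):
        self.backups = backups          # backup replicas (the witness is not contacted normally)
        self.log = []                   # in-memory operation log
        self.commit_point = -1          # index of the most recently committed record

    def handle_update(self, op):
        # Phase 1 (critical path): log in memory, replicate to backups, reply to the client.
        index = len(self.log)
        self.log.append(op)
        for b in self.backups:
            b.append(index, op)         # backup logs the record in memory and acknowledges
        self.commit_point = index       # "commit" = mark as committed in memory
        # Phase 2 (background): tell the backups to commit; disk writes happen later.
        threading.Thread(target=self._phase2, args=(index,)).start()
        return "ok"                     # the client reply does not wait for any disk write

    def _phase2(self, index):
        for b in self.backups:
            b.commit(index)
```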

Page 13: Lecture XIII: Replication-II

In-Memory Logging

• Client operations are recorded in the in-memory logs (at the primary and at the backup) when the response is sent to the client
• Operations are applied to the file system later, in the background
• This removes disk access from the critical path when communicating with the client
• What if the primary fails?
  – That's okay, because the in-memory log survives at the backup
• What if there is a power failure?
  – The machines will operate for a while on UPS – this time is used to apply the operations in the log to the file system

Page 14: Lecture XIII: Replication-II

Write-Ahead Redo Logging

[Diagram: a log of records n through n+6 with four pointers into it.]

• CP – commit pointer: the most recently committed record
• AP – application pointer: the most recently applied record
• LB – the most recent record whose effects have reached the local disk
• GLB – the most recent record whose effects have reached the local disk at both the primary and the backup
• On failure, the server restores the log and re-does all committed operations in the log

Page 15: Lecture XIII: Replication-II

Log Updates: Commit Pointer

• The primary receives the client request
  – A log record is created at the primary
• The primary forwards the request to the backups
  – The backups add records to their logs
• The backups acknowledge receipt of the records to the primary
• The primary commits the operation:
  – It advances the commit pointer (CP)
  – It sends the commit decision to the backup
• The backup advances its own CP

Page 16: Lecture XIII: Replication-II

Log Updates: Application Pointer

• The "apply" process:
  – Runs in the background
  – Applies committed records to disk
  – Advances the AP pointer
• Can we discard records older than the AP pointer?
• No! Writes are asynchronous – a committed record may not necessarily be on disk yet

Page 17: Lecture XIII: Replication-II

Log Updates: LB and GLB Pointers

• Another process checks when the writes associated with log records have been applied to the file system
• When the writes have finished, it advances the LB pointer
• GLB (global LB) pointer: all records up to this pointer have been applied to disk at both the primary and the backup
• Records below the GLB pointer can be discarded
• Log invariant: GLB ≤ LB ≤ AP ≤ CP (see the sketch below)
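A minimal sketch of the four log pointers and the invariant they maintain. The class and method names are illustrative, not from the Harp paper.

```python
class RedoLog:
    def __init__(self):
        self.records = []   # redo records, oldest first
        self.cp = -1        # most recently committed record
        self.ap = -1        # most recently applied record (writes issued asynchronously)
        self.lb = -1        # most recent record known to be on the local disk
        self.glb = -1       # most recent record on disk at BOTH primary and backup

    def check_invariant(self):
        # Records are committed before they are applied, applied before they are
        # stable on the local disk, and stable locally before they are stable everywhere.
        assert self.glb <= self.lb <= self.ap <= self.cp

    def truncate(self):
        # Only records at or below GLB are safe to discard.
        drop = self.glb + 1
        self.records = self.records[drop:]
        # (Indices would be rebased accordingly in a real implementation.)
```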

Page 18: Lecture XIII: Replication-II

Non-modification Operations

• Performed entirely at the primary
• No communication with the backups
• Problem: what if the backup becomes disconnected from the primary and forms a new view?
  – The primary may respond to a read operation with old state (i.e., it may not know that a file has been updated)
• Solution:
  – The backup sends the primary a promise not to change the view within time t + σ. Within that time, the primary can respond to read operations without talking to the backup.
  – After that, the primary must contact the backup before performing a non-modification operation, to get a new promise (see the sketch below).
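A minimal sketch of the promise (lease) idea, assuming hypothetical names and a simple wall-clock check; the σ margin stands in for clock and message-delay uncertainty.

```python
import time

class LeasedPrimary:
    def __init__(self, backup, sigma):
        self.backup = backup
        self.sigma = sigma              # margin covering clock skew / message delay
        self.lease_expires = 0.0        # until when reads may be served locally

    def renew_promise(self, t):
        # The backup promises not to change the view for t + sigma seconds;
        # the primary only trusts the promise for t seconds.
        self.backup.promise_no_view_change(duration=t + self.sigma)
        self.lease_expires = time.time() + t

    def read(self, path):
        if time.time() >= self.lease_expires:
            self.renew_promise(t=5.0)   # must re-contact the backup first
        return self.local_read(path)    # safe: no newer view can have formed yet

    def local_read(self, path):
        ...
```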

Page 19: Lecture XIII: Replication-II

Handling Failures: View Changes

• A view is a composition of the group and the roles of its members
• When some members fail, the view has to change
• A view change selects the members of the new view and makes sure that the state of the new view reflects all committed operations from previous views
• The designated primary and backup monitor the other group members to detect changes in communication ability
• If they cannot communicate with some of the members, a view change is needed
• Either the primary or the backup can initiate a view change (not the witness)

Page 20: Lecture XIII: Replication-II

View Change

[Diagram: primary, backup, and witness under a partition.]

• If the primary cannot reach the backup, but can reach the witness, the primary initiates a view change
• If the backup cannot reach the primary, but can reach the witness, the backup initiates the view change

Page 21: Lecture XIII: Replication-II

Causes and Outcomes of View Changes

• A primary fails, so a new primary is needed
  – A backup will become the primary after a view change
• A backup fails, so someone else needs to replicate the state at the primary
  – The witness is configured to act as a backup – the witness is promoted
• A primary that had failed comes back
  – It will bring itself up to date (using the other servers' logs) and will become the primary again
• A backup that had failed comes back
  – It will bring itself up to date; the previously promoted witness will no longer act as a backup – the witness is demoted

Page 22: Lecture XIII: Replication-II

View Change: The Algorithm

• The node that starts the view change acts as coordinator
• Phase 1:
  – The coordinator tells the others that it wants to start a view change
  – The others stop processing operations and send the coordinator their state, i.e., the log records that the coordinator does not already have
  – The coordinator applies the log records to bring itself up to date

Page 23: Lecture XIII: Replication-II

View Change: The Algorithm (cont.)

• Phase 2:
  – The coordinator writes the new view number to disk
  – It sends the view state to all participants
  – If both the backup and the witness responded, the witness will be demoted
  – If only the witness responded, the witness will be promoted
  – The other nodes write the view number to disk (see the sketch below)
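A minimal sketch of the two-phase view change from the coordinator's side, assuming hypothetical member stubs (start_view_change, install_view, write_view_number_to_disk) that stand in for Harp's actual messages.

```python
class ViewChangeCoordinator:
    def __init__(self, me, others, log, view_number):
        self.me = me
        self.others = others            # the other group members this node can reach
        self.log = log
        self.view_number = view_number

    def run_view_change(self):
        # Phase 1: collect missing log records and bring ourselves up to date.
        responders = []
        for m in self.others:
            records = m.start_view_change(self.view_number + 1,
                                          have_up_to=self.log.cp)
            if records is not None:     # member reachable; it stops processing operations
                self.log.apply_missing(records)
                responders.append(m)

        # Phase 2: persist the new view number and install the new view everywhere.
        self.view_number += 1
        self.me.write_view_number_to_disk(self.view_number)
        # The witness is promoted if it is the only responder, demoted otherwise.
        witness_only = all(m.is_witness for m in responders)
        for m in responders:
            m.install_view(self.view_number, promote_witness=witness_only)
```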

Page 24: Lecture XIII: Replication-II

A Promoted Witness

• Does not have a copy of the file system state
• Under normal operation, it does not update the file system
• A promoted witness begins logging file system state
• Upon promotion it receives all log records that have not made it to disk (everything later than the GLB pointer)
• A promoted witness never discards log records
• When the log becomes too large, it is stored on disk or tape

Page 25: Lecture XIII: Replication-II

Simultaneous View Changes

• Suppose the primary and the backup cannot communicate with each other
• They both initiate a view change simultaneously
• One view change will be redundant – we don't want to waste time and resources on a useless view change
• Solution: delay the view change at the backup
• This way the primary is most likely to "win the race" for the view change
• What happens if simultaneous view changes do take place?

Page 26: Lecture XIII: Replication-II

Optimizations for Fast View Changes

• User operations are not processed during a view change, so view changes must be fast
• A view change may be slow if a server must bring itself up to date – it may have to receive many log records from the other servers
• Therefore, a server that must bring itself up to date in a new view (e.g., a primary that comes back after a failure) does so before initiating the view change
• If the server's disk is intact, it gets the log records from the witness
• If the disk is damaged, it gets the file system state from the backup and then the log records from the witness

Page 27: Lecture XIII: Replication-II

Other Optimizations

• When the witness is promoted, it must receive all log entries beyond the GLB
• The number of entries is likely to be large, so the view change may be slow
• To expedite the view change, the witness is kept in hot standby
• The primary sends all updates to the witness; the witness logs them but does not acknowledge them, and it discards old entries from memory without writing them to disk or tape

Page 28: Lecture XIII: Replication-II

Guarding Against a "Killer Packet"

• Many crashes are due to software bugs
• Some bugs may cause a simultaneous failure at the primary and the backup – e.g., an OS bug triggered by a certain FS operation
• To guard against this, the backup delays applying changes to the FS until they have been applied at the primary:

  AP_backup ≤ AP_primary

• If the primary fails after applying a certain change, the backup will likely initiate the view change and send its log to the witness
• So even if the backup then fails after applying the same operation that crashed the primary, the record of that operation won't be lost

Page 29: Lecture XIII: Replication-II

A Potential Failure Scenario

[Diagram: primary and backup.]

1. The primary receives an operation from the client
2. The primary forwards it to the backup
3. The backup records the operation in its log
4. The backup responds to the primary
5. The primary commits the operation
6. The primary responds to the client
7. The primary crashes

• The backup does not know whether the operation was committed
• Does it assume it was not committed and discard the log entries?
• Does it assume it was committed and apply the results?

Page 30: Lecture XIII: Replication-II

Questions about Harp

• Does Harp use the two-phase commit protocol? If so, when and how? How does it differ from the 2PC protocol we studied in class?
• How many replicas that keep copies of the data do we need to survive n failures? How many total participants must we have to survive n failures?
• Describe normal operation in Harp. Explain the following:
  – What the primary does
  – What the replica does
  – What the witness does
• How does Harp survive failures without flushing updates to disk before responding to the client?

Page 31: Lecture XIII: Replication-II

More Questions

• What kind of logging does Harp use – redo or undo?
• When are log records applied to disk?
• Which state is volatile, and which state is not volatile?
• How fast are view changes?
• What happens if a component crashes during the view change?
• Summarize the contributions of Harp compared to other replication systems that existed at the time.

Page 32: Lecture XIII: Replication-II

Summary

• A primary-copy replicated file system
• Unlike other replicated file systems, it provides good performance, because disk writes are not in the critical path
• Needs at least 2n + 1 participants to handle n failures
• Data is replicated on only n + 1 servers, to save disk space
• Things to understand and discuss:
  – How the system works through view changes
  – What happens if a component crashes during a view change?
  – What happens to the log records of uncommitted operations?

Page 33: Lecture XIII: Replication-II

Google File System

• A real, massive distributed file system
• Hundreds of servers and clients
  – The largest cluster has over 1000 storage nodes, over 300 TB of disk storage, and hundreds of clients
• Metadata replication
• Data replication
• Design driven by the application workload and the technological environment
• Avoided many of the difficulties traditionally associated with replication by designing for a specific use case

Page 34: Lecture XIII: Replication-II

Specifics of the Google Environment

• The FS consists of hundreds of storage machines built from inexpensive commodity parts
• Component failures are the norm:
  – Application and OS bugs
  – Human errors
  – Hardware failures: disks, memory, network, power supplies
• Millions of files, each 100 MB or larger
• Multi-GB files are common
• Applications are written for GFS
• This allows co-design of the file system and the applications

Page 35: Lecture XIII: Replication-II

Specifics of the Google Workload

• Google applications:
  – Data analysis programs that scan through data repositories
  – Data streaming applications
  – Archiving
  – Indexing applications that produce (intermediate) search results
• Most files are mutated by appending new data – large sequential writes
• Random writes are very uncommon
• Files are written once, then they are only read
• Reads are sequential
• Large streaming reads and small random reads
• High bandwidth is more important than low latency

Page 36: Lecture XIII: Replication-II

GFS Architecture

[Figure: the GFS architecture – clients, a single master (metadata), and many chunkservers (data), with file data flowing directly between clients and chunkservers.]

Page 37: Lecture XIII: Replication-II

GFS Architecture (cont.)

• Single master
• Multiple chunkservers
• Multiple clients
• Each is a commodity Linux machine; a server is a user-level process
• Files are divided into chunks
• Each chunk has a handle (an ID assigned by the master)
• Each chunk is replicated (on three machines by default)
• The master stores metadata, manages chunks, does garbage collection, etc.
• What is metadata?
• Clients communicate with the master for metadata operations, but with chunkservers for data operations
• No additional caching (besides the Linux in-memory buffer cache)

Page 38: Lecture XIII: Replication-II

Client/GFS Interaction

• Client:
  – Takes a file name and an offset
  – Translates the offset into a chunk index within the file
  – Sends a request to the master containing the file name and the chunk index
• Master:
  – Replies with the corresponding chunk handle and the locations of the replicas (the master must know where the replicas are)
• Client:
  – Caches this information
  – Contacts one of the replicas (i.e., a chunkserver) for the data (see the sketch below)

Page 39: Lecture XIII: Replication-II

Master

• Stores metadata:
  – The file and chunk namespaces
  – The mapping from files to chunks
  – The locations of each chunk's replicas
• Interacts with clients
• Creates chunk replicas
• Orchestrates chunk modifications across multiple replicas:
  – Ensures atomic concurrent appends
  – Locks concurrent operations
• Deletes old files (via garbage collection)

Page 40: Lecture XIII: Replication-II

Metadata On Master

• Metadata is data about the data:
  – File names
  – The mapping of file names to chunk IDs
  – Chunk locations
• Metadata is kept in memory
• File names and chunk mappings are also kept persistent in an operation log
• Chunk locations are kept in memory only
  – They are lost in a crash
  – The master asks the chunkservers about their chunks at startup and builds a table of chunk locations

Page 41: Lecture XIII: Replication-II

Why Keep Metadata In Memory?

• To keep master operations fast
• The master can periodically scan its internal state in the background, in order to implement:
  – Garbage collection
  – Re-replication (in case of chunkserver failures)
  – Chunk migration (for load balancing)
• But isn't the file system size limited by the amount of memory on the master?
  – This has not been a problem for GFS – the metadata is compact

Page 42: Lecture XIII: Replication-II

Why Not Keep Chunk Locations Persistent?

• Chunk location: which chunkserver has a replica of a given chunk
• The master polls the chunkservers for that information at startup
• Thereafter, the master keeps itself up to date:
  – It controls all initial chunk placement, migration, and re-replication
  – It monitors chunkserver status with regular HeartBeat messages
• Motivation: simplicity
• This eliminates the need to keep the master and the chunkservers synchronized
• Synchronization would be needed when chunkservers:
  – Join and leave the cluster
  – Change names
  – Fail and restart

Page 43: Lecture XIII: Replication-II

Operation Log

• A historical record of metadata changes
• Maintains the logical order of concurrent operations
• The log is used for recovery – the master replays it in the event of failures
• The master periodically checkpoints the log
• A checkpoint is a B-tree data structure:
  – It can be loaded into memory
  – It is used for namespace lookup without extra parsing
• Checkpointing can be done in the background

Page 44: Lecture XIII: Replication-II

Updates of Replicated Data (cont.)

1. The client asks the master for the replica locations
2. The master responds
3. The client pushes the data to all replicas; the replicas store it in a buffer cache
4. The client sends a write request to the primary (identifying the data that had been pushed)
5. The primary forwards the request to the secondaries (identifying the order)
6. The secondaries respond to the primary
7. The primary responds to the client (see the sketch below)
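A minimal sketch of this write path from the client's point of view, assuming hypothetical master, primary, and secondary stubs. Data is pushed to all replicas first; the control request then goes only to the primary, which orders the mutation and forwards that order to the secondaries.

```python
def write_chunk(client, master, filename, chunk_index, data):
    # Steps 1-2: ask the master which chunkservers hold the chunk.
    handle, primary, secondaries = master.lookup(filename, chunk_index)

    # Step 3: push the data to all replicas (in practice, pipelined along a chain);
    # each replica buffers it under a data id without applying it yet.
    data_id = client.push_to_all(handle, data, [primary] + secondaries)

    # Steps 4-7: one control request to the primary; the primary assigns a serial
    # order, applies the mutation, forwards the order to the secondaries, waits
    # for their acknowledgements, and replies.
    ok = primary.write(handle, data_id, secondaries)
    return ok          # on failure, the client retries steps 3-7
```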

Page 45: Lecture XIII: Replication-II

Failure Handling During Updates

• If a write fails at the primary:
  – The primary may report the failure to the client – the client will retry
  – If the primary does not respond, the client retries from Step 1 by contacting the master
• If a write succeeds at the primary, but fails at several replicas:
  – The client retries several times (Steps 3-7)

Page 46: Lecture XIII: Replication-II

Primary Replica in GFS

• Each mutation (modification) is performed at all the replicas
• Modifications are applied in the same order across all replicas
• The master grants a chunk lease to one replica – that replica is the primary
• The primary picks a serial order for all mutations to the chunk
• The client pushes the data to all replicas
• The primary tells the replicas in which order they should apply the modifications

Page 47: Lecture XIII: Replication-II

Data Consistency in GFS

• Loose data consistency – applications are designed for it
• Applications may see inconsistent data – the data is different on different replicas
• Applications may see data from partially completed writes – an undefined file region
• After a successful modification, the file region is consistent
• Replicas are not guaranteed to be bytewise identical (we'll see why later, and how clients deal with this)

Page 48: Lecture XIII: Replication-II

Implications of Loose Data Consistency for Applications

• Applications are designed to handle loose data consistency
• Example 1: a file is generated from beginning to end
  – The application creates the file with a temporary name
  – It atomically renames the file when done
  – It may periodically checkpoint the file while it is written
  – The file is written via appends – more resilient to failures than random writes
• Example 2: a producer-consumer file
  – Many writers concurrently append to one file (to merge results)
  – Each record is self-validating (contains a checksum)
  – The client filters out padding and duplicate records

Page 49: Lecture XIII: Replication-II

Atomic Record Appends

• An atomic append is a write where:
  – The primary chooses the offset at which the append happens
  – The offset is returned to the client
• This way GFS can decide on a serial order for concurrent appends without client synchronization
• If an append fails at some replicas, the client retries
• As a result, the file may contain multiple copies of the same record, and replicas may be bytewise different
• But after a successful update all replicas will be defined – they will all have the data written by the client at the same offset (see the sketch below)

Page 50: Lecture XIII: Replication-II

Non-Identical Replicas

• Because of failed and retried record appends, replicas may not be bytewise identical
• Some replicas may have duplicate records (because of failed and retried appends)
• Some replicas may have padded file space (empty space filled with junk) – if the primary chooses a record offset higher than the first available offset at a replica
• Clients must deal with this: they write self-identifying records so they can distinguish valid data from junk
• If clients cannot tolerate duplicates, they must insert version numbers into records
• GFS pushes this complexity to the client; without it, a complex failure-recovery scheme would be needed (see the sketch below)
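A minimal sketch of what a self-identifying record might look like on the client side, assuming a simple length-plus-checksum framing with a record id; readers skip padding and junk and drop duplicates left by retried appends.

```python
import struct, zlib

def encode_record(record_id, payload):
    body = struct.pack(">Q", record_id) + payload
    header = struct.pack(">II", len(body), zlib.crc32(body) & 0xFFFFFFFF)
    return header + body

def scan_records(blob):
    seen, pos = set(), 0
    while pos + 8 <= len(blob):
        length, checksum = struct.unpack_from(">II", blob, pos)
        body = blob[pos + 8 : pos + 8 + length]
        if len(body) == length and zlib.crc32(body) & 0xFFFFFFFF == checksum:
            record_id = struct.unpack_from(">Q", body)[0]
            if record_id not in seen:          # filter duplicates from retried appends
                seen.add(record_id)
                yield body[8:]                 # the payload
            pos += 8 + length
        else:
            pos += 1                           # junk or padding: resynchronize byte by byte
```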

Page 51: Lecture XIII: Replication-II

Data Flow

• Data flow is decoupled from control flow
• Data is pushed linearly across the chunkservers in a pipelined fashion (not necessarily from client to primary and from primary to secondaries)
• The client forwards the data to the closest replica; that replica forwards it to the next closest replica, and so on
• Pipelined fashion: while the data is still arriving, the server already begins forwarding it to the next replica
• This design ensures good network utilization

Page 52: Lecture XIII: Replication-II

Load Balancing

• Goals:
  – Maximize data availability and reliability
  – Maximize network bandwidth utilization
• Google infrastructure:
  – A cluster consists of hundreds of racks
  – Each rack has a dozen machines
  – Racks are connected by network switches
  – A rack is on a single power circuit
• Must balance load across machines and across racks

Page 53: Lecture XIII: Replication-II

Creation, Re-replication, Rebalancing

• Creation (initial replica placement):
  – On chunkservers with low disk-space utilization
  – Limit the number of recent creations on each chunkserver – recent creations mean heavy write traffic
  – Spread replicas across racks
• Re-replication:
  – When the number of replicas falls below the replication target
  – When a chunkserver becomes unavailable
  – When a replica becomes corrupted
  – A new replica is copied directly from an existing one
• Rebalancing:
  – The master periodically examines the replica distribution and moves replicas to meet the load-balancing criteria

Page 54: Lecture XIII: Replication-II

Fault Tolerance

• Fast recovery:
  – No distinction between normal and abnormal shutdown
  – Servers are routinely restarted by "killing" the server process
  – Servers are designed for fast recovery – all state can be recovered from the log
• Chunk replication
• Master replication
• Data integrity
• Diagnostic tools

Page 55: Lecture XIII: Replication-II

Chunk Replication

• Each chunk is replicated on multiple chunkservers on different racks
• Users can specify different replication levels for different parts of the file namespace (the default is 3)
• The master clones existing replicas as needed to keep each chunk fully replicated

Page 56: Lecture XIII: Replication-II

Single Master

• Simplifies the design
• The master can make sophisticated load-balancing and chunk-placement decisions using global knowledge
• To prevent the master from becoming a bottleneck:
  – Clients communicate with the master only for metadata
  – The master keeps metadata in memory
  – Clients cache metadata
  – File data is transferred from the chunkservers

Page 57: Lecture XIII: Replication-II

Master Replication

• Master state is replicated on multiple machines, so a new server can become the master if the old master fails
• What is replicated: operation logs and checkpoints
• A modification is considered successful only after it has been logged on all master replicas
• A single master is in charge; if it fails, it restarts almost instantaneously
• If the machine fails and the master cannot restart itself, a failure detector outside GFS starts a new master with a replicated operation log (there is no master election)
• The other master replicas are the master's "shadows" – they operate like the master with respect to updating the log, the in-memory metadata, and polling the chunkservers

Page 58: Lecture XIII: Replication-II

GFS Summary

• A real replicated file system
• Uses commodity hardware – hundreds of commodity PCs and disks
• Two levels of replication:
  – Metadata is replicated via replicated masters
  – Data is replicated on replicated chunkservers
• Designed for a specific use case – Google's applications
  – And the applications are designed for GFS
• This is why it is simple and it actually works

Page 59: Lecture XIII: Replication-II

Outline

• Harp – a replicated research file system
• Google File System – a real replicated file system
• Amazon distributed data store – a distributed database

Page 60: Lecture XIII: Replication-II

Dynamo: Amazon's Key-Value Store

• A distributed database
• Contains data about:
  – Customer shopping carts
  – Customer sessions
  – The Amazon search engine
• Highly replicated:
  – Across data centers
  – Across continents
  – "A customer must be able to update a shopping cart even if the world is being destroyed by a tornado."

Page 61: Lecture XIII: Replication-II

Dynamo: A Database?

• It is basically a database – but not a conventional one
• A conventional (relational) database:
  – Data is organized in tables
  – Primary and secondary keys
  – Tables are sorted by primary/secondary keys
  – Designed to answer any imaginable query
  – Does not scale to thousands of nodes
  – Difficult to replicate
• Amazon's Dynamo:
  – Access by primary key only

Page 62: Lecture XIII: Replication-II

ACID Properties

• Atomicity – yes
  – Updates are atomic by definition
  – There are no transactions
• Consistency – no
  – Data is eventually consistent
  – Loose consistency is tolerated
  – Reconciliation is performed by the client
• Isolation – no
  – No isolation – one update at a time
• Durability – yes
  – Durability is provided via replication

Page 63: Lecture XIII: Replication-II

High Availability

• Good service time is key for Amazon
• It is not good when a credit-card transaction times out
• Service-level agreement: the client's request must be answered within 300 ms
• This must hold for 99.9% of requests at a load of 500 requests/second

Page 64: Lecture XIII: Replication-II

The Cost of Respecting the SLA

• Loose consistency
• Synchronous replica reconciliation during the request cannot be done
• We contact a few replicas; if some do not reply, the request is considered failed
• When should conflicting updates be resolved – during reads or during writes?
• They are usually resolved during writes
• Dynamo resolves them during reads
• Motivation: Dynamo must be an always-writable data store (it cannot lose customer shopping-cart data)

Page 65: Lecture XIII: Replication-II

System Interface

• get(key)
  – Locates the object's replicas
  – Returns:
    • A single object, or a list of objects with conflicting versions
    • A context (opaque information about object versioning)
• put(key, value, context)
  – Determines where the replicas should be placed
  – Writes them to disk (see the sketch below)
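A minimal sketch of this interface with an in-memory stand-in for the real replicated store; the class and field names are illustrative. The context returned by get() carries version information opaquely and is handed back on put().

```python
class InMemoryDynamo:
    def __init__(self):
        self.versions = {}                      # key -> list of (value, vector_clock)

    def get(self, key):
        """Return (list of conflicting values, opaque context)."""
        versions = self.versions.get(key, [])
        values = [value for value, _ in versions]
        context = [clock for _, clock in versions]   # opaque to the caller
        return values, context

    def put(self, key, value, context):
        """Store a new version of `key`; `context` ties it to the versions it replaces."""
        # A real implementation would compute a new vector clock from `context`,
        # pick N replicas via consistent hashing, and write to at least W of them.
        self.versions[key] = [(value, context)]
```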

Page 66: Lecture XIII: Replication-II

Key System Architecture Components

• Partitioning
• Replication
• Versioning
• Membership
• Failure handling
• Scaling

Page 67: Lecture XIII: Replication-II

Partitioning

• How do we partition data among the nodes?
• Use consistent hashing
• The output of the hash function maps to a circular space: the largest hash value wraps around to the smallest
• Each node is assigned a random value in the space – this is its position on the ring

Page 68: Lecture XIII: Replication-II

Assigning a Key to a Node

• Hash the key
• Find the ring position corresponding to the hash
• Walk the ring clockwise to find the first node with a position greater than that of the key
• Similar lookup algorithms are used in distributed hash tables

Page 69: Lecture XIII: Replication-II

Problems with Consistent Hashing

• May lead to an unbalanced data and load distribution
• Solution:
  – Each position on the ring is a virtual node
  – Multiple virtual nodes are assigned to one physical node (see the sketch below)

Page 70: Lecture XIII: Replication-II

Replication

• Each key has a coordinator – the node determined by the consistent hash
• The coordinator replicates the key at N other nodes: the nodes that follow the coordinator clockwise on the ring
• Virtual nodes are skipped to ensure that the replicas are located on different physical nodes

Page 71: Lecture XIII: Replication-II

Versioning

• Dynamo stores multiple versions of each data item
• Each update creates a new, immutable version of the data item
• Versions are reconciled:
  – By the system
  – By the client
• Versioning is achieved using vector clocks (see the sketch below)
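A minimal sketch of vector-clock versioning, assuming clocks are dictionaries that map a node id to a counter; the helper names are illustrative.

```python
def advance(clock: dict, node: str) -> dict:
    """New clock for a version written at `node`, superseding `clock`."""
    new = dict(clock)
    new[node] = new.get(node, 0) + 1
    return new

def descends(a: dict, b: dict) -> bool:
    """True if the version with clock `a` supersedes (or equals) the one with clock `b`."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def conflict(a: dict, b: dict) -> bool:
    """Neither version descends from the other: the client must reconcile them."""
    return not descends(a, b) and not descends(b, a)

# Example: two clients update the same key via different coordinators.
v1 = advance({}, "nodeA")            # {'nodeA': 1}
v2 = advance(v1, "nodeB")            # {'nodeA': 1, 'nodeB': 1} – descends from v1
v3 = advance(v1, "nodeC")            # {'nodeA': 1, 'nodeC': 1} – concurrent with v2
assert descends(v2, v1) and conflict(v2, v3)
```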

Page 72: Lecture XIII: Replication-II

Routing of Requests

• Through a generic load balancer:
  – It may forward the request to a node that is NOT the coordinator
  – The recipient node then forwards the request to the coordinator
• Through a partition-aware client library that selects the coordinator directly

Page 73: Lecture XIII: Replication-II

Maintaining Consistency Via Quorum

• Dynamo is configured with two parameters: R and W
• R is the minimum number of nodes that must participate in a successful read operation
• W is the minimum number of nodes that must participate in a successful write operation
• Request-handling protocol (for writes):
  – The coordinator receives the request
  – The coordinator computes the vector clock and writes the new version to disk
  – The coordinator sends the new version and the vector clock to the N replicas
  – If at least W-1 of them respond, the request is successful (see the sketch below)
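A minimal sketch of the coordinator's quorum write, reusing the `advance` helper from the vector-clock sketch above and assuming hypothetical replica stubs. The coordinator counts as one of the W writers, so it waits for W-1 acknowledgements from the others.

```python
def coordinate_write(coordinator, replicas, key, value, context, w):
    clock = advance(context or {}, coordinator.node_id)   # new vector clock for this version
    coordinator.store_locally(key, value, clock)

    acks = 0
    for r in replicas:                        # the other nodes in the key's preference list
        try:
            r.store(key, value, clock)
            acks += 1
        except TimeoutError:
            continue                          # an unresponsive replica simply doesn't count
    return acks >= w - 1                      # success once W nodes (including us) have it
```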

Page 74: Lecture XIII: Replication-II

Sloppy Quorum

• What if some of the N replicas are temporarily unavailable?
• This could limit the system's availability
• A strict quorum cannot be used, so Dynamo uses a sloppy quorum
• If one of the N replicas is unavailable, another node that is not a replica is used instead
• That node stores the data temporarily
• It forwards the data to the real replica when that replica comes back up

Page 75: Lecture XIII: Replication-II

Replica Synchronization

• Uses Merkle trees
• The leaves are hashes of keys
• Trees can be compared incrementally, without transferring the whole tree
• If a part of the tree is not modified, the parent nodes' hashes will be identical
• So parts of the tree can be compared without sending data between the two replicas
• Only the keys that are out of sync are transferred (see the sketch below)
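A minimal sketch of Merkle-tree comparison for anti-entropy, assuming both replicas build a binary hash tree of the same shape over the same key range; names are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaf_hashes):
    """Return a list of levels, from the leaves up to the root."""
    levels = [leaf_hashes]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[min(i + 1, len(prev) - 1)])
                       for i in range(0, len(prev), 2)])
    return levels

def diff(tree_a, tree_b, level=None, index=0):
    """Yield leaf indices whose hashes differ, descending only into unequal subtrees."""
    if level is None:
        level = len(tree_a) - 1                       # start at the root
    if tree_a[level][index] == tree_b[level][index]:
        return                                        # identical subtree: nothing to transfer
    if level == 0:
        yield index                                   # an out-of-sync key range
        return
    for child in (2 * index, 2 * index + 1):
        if child < len(tree_a[level - 1]):
            yield from diff(tree_a, tree_b, level - 1, child)
```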

Page 76: Lecture XIII: Replication-II

Membership

• Membership is always explicit
• Nodes are added and removed by the operator
• So there is no need for "coordinator election"
• If a node is unavailable, this is considered temporary
• A node that starts up chooses a set of tokens (virtual nodes) and maps its virtual nodes to its physical node
• Membership information is eventually propagated via a gossip protocol
• The mapping is persisted on disk

Page 77: Lecture XIII: Replication-II

Preventing Logical Partitions

• A new node may be unaware of other nodes before the membership information is propagated
• If several such nodes are added simultaneously, we may get a logical partition
• Partitions are prevented using seed nodes
• Seed nodes are obtained from a static source and are known to everyone
• Membership information is propagated to everyone via the seed nodes

Page 78: Lecture XIII: Replication-II

Failure Detection

• Failure discovery is local
• Node A discovers that node B has failed if node B does not respond to its requests
• Failures (like membership changes) are propagated via the gossip protocol

Page 79: Lecture XIII: Replication-II

Problem Solving (I)

• Design GFS over Dynamo:
  – A system layer that presents the GFS interface on top of the Dynamo key-value store
  – Present your design
  – How would you write a GFS-over-Dynamo application? Would you need to change it?
• Dynamo over GFS:
  – As above
• Discussion:
  – Is this a good idea?
  – What system properties make this a fundamentally good or bad idea?

Page 80: Lecture XIII: Replication-II

Problem Solving (II)

• Can you name similarities between GFS and Dynamo?
• Can you name the differences?
• Play in teams!