CSC 536 Lecture 10

Page 1:

CSC 536 Lecture 10

Page 2:

Outline

Case study: Google Spanner

Consensus, revisited: Raft Consensus Algorithm

Page 3:

Google Spanner

Page 4:

Google Spanner

Scalable, globally distributed, multi-versioned database

Main features:

Focus on cross-datacenter data replication for availability and geographical locality

Automatic sharding and shard migration for load balancing and failure tolerance (see the sharding sketch after this list)

Scales to millions of servers across hundreds of datacenters and to database tables with trillions of rows

Schematized, semi-relational (tabular) data model to handle more structured data (than Bigtable, say)

Strong replica consistency model: synchronous replication
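
Automatic sharding typically means range-partitioning the key space and routing each key to the shard that owns its range. A minimal Python sketch under that assumption (the class and names here are hypothetical illustrations, not Spanner's actual API):

import bisect

# Hypothetical range-based sharding: each shard owns a contiguous key
# range; a sorted list of split points routes lookups to the owner.
class ShardedTable:
    def __init__(self, split_points, shard_ids):
        # shard_ids must have exactly one more entry than split_points
        assert len(shard_ids) == len(split_points) + 1
        self.split_points = split_points
        self.shard_ids = shard_ids

    def find_shard(self, key):
        # index of the first split point greater than key is exactly
        # the shard whose range contains the key
        return self.shard_ids[bisect.bisect_right(self.split_points, key)]

table = ShardedTable(["g", "p"], ["shard-0", "shard-1", "shard-2"])
print(table.find_shard("alice"))    # -> shard-0
print(table.find_shard("mallory"))  # -> shard-1

Migrating a shard for load balancing then amounts to moving a key range to another server and updating the split-point routing table.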

Page 5:

Google Spanner

Scalable, globally distributed database

Follow-up to Google's Bigtable and Megastore

Detailed DB features:

SQL-like query interface to support the schematized, semi-relational (tabular) data model

General-purpose distributed ACID transactions, even across distant datacenters

Externally (strongly) consistent global write transactions with synchronous replication

Lock-free read-only transactions

Timestamped multiple versions of data (see the multi-version read sketch below)
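
To see why timestamped multi-versioning makes read-only transactions lock-free: a reader picks a timestamp and returns the newest version at or before it, so it never blocks or is blocked by writers. A minimal sketch; the names are hypothetical, not Spanner's interface, and writes are assumed to arrive in timestamp order:

# Hypothetical multi-version store: each key maps to a list of
# (timestamp, value) versions kept oldest-first.
class MultiVersionStore:
    def __init__(self):
        self.versions = {}  # key -> list of (ts, value)

    def write(self, key, ts, value):
        # sketch assumption: writes arrive in increasing timestamp order
        self.versions.setdefault(key, []).append((ts, value))

    def read(self, key, ts):
        # lock-free snapshot read: newest version at or before ts
        for vts, value in reversed(self.versions.get(key, [])):
            if vts <= ts:
                return value
        return None

store = MultiVersionStore()
store.write("x", 10, "a")
store.write("x", 20, "b")
print(store.read("x", 15))  # -> "a": the ts=15 snapshot ignores the later write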

Page 6:

Google Spanner

Scalable, globally distributed database

Follow-up to Google's Bigtable and Megastore

Detailed DS features:

Auto-sharding, auto-rebalancing, automatic failure response

Replication and external (strong) consistency model

App/user control of data replication and placement (see the policy sketch below):

number of replicas and replica locations (datacenters)

how far away the closest replica can be (to control read latency)

how distant replicas are from each other (to control write latency)

Wide-area system
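
A hedged sketch of what such app/user placement control might look like as a policy object; the field names and values are made up for illustration, not Spanner's actual configuration API:

from dataclasses import dataclass
from typing import List

# Hypothetical placement policy mirroring the three knobs above.
@dataclass
class ReplicationPolicy:
    num_replicas: int           # how many copies to keep
    datacenters: List[str]      # which datacenters hold them
    max_read_distance_ms: int   # bound on RTT to the closest replica (read latency)
    max_replica_spread_ms: int  # bound on RTT between replicas (write latency)

critical_policy = ReplicationPolicy(
    num_replicas=5,
    datacenters=["us-east", "us-central", "us-west", "eu-west", "asia-east"],
    max_read_distance_ms=20,
    max_replica_spread_ms=150,
)
print(critical_policy)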

Page 7:

Google Spanner

Scalable, globally distributed database

Follow-up to Google's Bigtable and Megastore

Key implementation design choices:

Integration of concurrency control, replication, and 2PC

Transaction serialization via global, wall-clock timestamps

using the TrueTime API

The TrueTime API uses GPS receivers and atomic clocks to get accurate time

It acknowledges clock uncertainty and guarantees a bound on it
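
The Spanner paper describes TrueTime as an interface whose now() returns an interval [earliest, latest] guaranteed to contain the true time. A minimal Python sketch of that idea, with a fixed uncertainty bound standing in for the real GPS/atomic-clock machinery (the 7 ms epsilon is illustrative, not a measured value):

import time
from collections import namedtuple

TTInterval = namedtuple("TTInterval", ["earliest", "latest"])

EPSILON = 0.007  # assumed uncertainty bound in seconds; the real value varies

def tt_now():
    # the true absolute time is guaranteed to lie inside this interval
    t = time.time()
    return TTInterval(t - EPSILON, t + EPSILON)

def tt_after(t):
    # True only if t has definitely passed, even under worst-case clock error
    return tt_now().earliest > t

def tt_before(t):
    # True only if t has definitely not yet arrived
    return tt_now().latest < t

# Commit wait: assign the commit timestamp at the top of the current
# uncertainty interval, then delay until it is guaranteed to be in the
# past before making the commit visible.
s = tt_now().latest
while not tt_after(s):
    time.sleep(0.001)
print("commit timestamp", s, "is now safely in the past")

Commit wait is how Spanner turns the uncertainty bound into external consistency: a transaction's effects only become visible once its timestamp is guaranteed to have passed.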

Page 8:

Google Spanner

Scalable globally-distributed databaseFollow-up to Google’s Bigtable and Megastore

Production use:

Rolled out in Fall 2012

Used by Google F1, Google's advertising backend

Replaced a sharded MySQL database

5 replicas across the US

A less critical app may need only 3 replicas in a single region, which would decrease latency (but also availability)

Future use: Gmail, Picasa, Calendar, Android Market, AppEngine, etc.

Page 9:

Google Spanner

spanner-osdi2012.pptx

Page 10:

Raft Consensus Algorithm