Hadoop and Voldemort @ LinkedIn



DESCRIPTION

Bhupesh Bansal, LinkedIn

TRANSCRIPT

Page 1: Hadoop and Voldemort @ LinkedIn


Hadoop and Voldemort @ LinkedIn

Bhupesh Bansal, 20 January 2010

Page 2: Hadoop and Voldemort @ LinkedIn

The plan

What is Project Voldemort?
– Motivation
– New features
– In production

Hadoop & LinkedIn
– The LinkedIn Hadoop ecosystem
– Hadoop & Voldemort

References

Q&A

Page 3: Hadoop and Voldemort @ LinkedIn

Introduction

Project Voldemort is a distributed, scalable, highly available key/value storage system.
– Inspired by the Amazon Dynamo paper and memcached
– An online storage solution that scales horizontally
– High throughput, low latency

What does it do for you?
– Provides a simple key/value API for clients
– Data partitioning and replication
– Consistency guarantees even in the presence of failures
– Scales well (in amount of data and number of clients)

Page 4: Hadoop and Voldemort @ LinkedIn

Motivation I: Big Data


Reference: algo2.iti.kit.edu/.../fopraext/index.html

Page 5: Hadoop and Voldemort @ LinkedIn

Motivation II: Data Driven Features

Page 6: Hadoop and Voldemort @ LinkedIn

Motivation III


Page 7: Hadoop and Voldemort @ LinkedIn

Motivation IV


Page 8: Hadoop and Voldemort @ LinkedIn

Why Is This Hard?

• Failures in a distributed system are much more complicated
  • "A can talk to B" does not imply "B can talk to A"

• Nodes will fail and come back to life with stale data

• I/O has high request-latency variance
  • Is the node down, or just slow?

• Intermittent failures are common
  • Should I try this node now?

• There are fundamental trade-offs between availability and consistency
  • CAP theorem

• Users must be isolated from these problems

Page 9: Hadoop and Voldemort @ LinkedIn

Some problems we worked on lately

Performance improvements
– Push computation to data (server-side views)
– Data compression
– Failure detection

Reliable testing
– Testing is hard; distributed systems make it harder

Voldemort rebalancing
– Dynamically add nodes to a running cluster

Administration
– Often ignored, but very important

Page 10: Hadoop and Voldemort @ LinkedIn

Server-side views

Motivation: push computation to data

Create custom views on the server side for specific transformations.
– Avoids network transfer of big blobs
– Avoids serialization CPU and I/O cost

Examples:
1. Range query over a denormalized list stored as a value
2. Append() operation for list values
3. Filters/aggregates on specific fields, etc.
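To make the first example concrete, here is a minimal sketch in Java of a range view over a denormalized list. The class and method names are hypothetical, not the actual Voldemort view API; the point is only that the transformation runs on the storage node, so the full blob never crosses the network.

import java.util.ArrayList;
import java.util.List;

public class RangeViewSketch {
    /**
     * Runs on the storage node: given a denormalized list stored as one
     * value, return only the elements in [from, to), so the client pays
     * neither the network transfer nor the serialization cost of the blob.
     */
    public static List<Long> range(List<Long> storedValue, int from, int to) {
        List<Long> slice = new ArrayList<Long>();
        for (int i = Math.max(0, from); i < Math.min(to, storedValue.size()); i++) {
            slice.add(storedValue.get(i));
        }
        return slice;
    }
}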

Page 11: Hadoop and Voldemort @ LinkedIn

Failure Detection

Need to maintain up-to-date availability status for each server.

Detect failures earlier
– Avoid waiting on failed nodes while serving

Reduce false positives
– Maintains proper load balancing

Allow for intermittent or temporary failures
– Re-establish node availability asynchronously

Contributed by Kirk True
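A hedged sketch of the availability-tracking idea described above (class and constant names are illustrative, not Voldemort's actual failure detector): a node that fails a request is taken out of rotation for a back-off window and re-admitted when the window expires, which tolerates intermittent failures without repeated blocking.

import java.util.concurrent.ConcurrentHashMap;

public class FailureDetectorSketch {
    private static final long BACKOFF_MS = 30000; // retry window for a failed node

    // node id -> wall-clock time until which the node is considered down
    private final ConcurrentHashMap<String, Long> bannedUntil =
            new ConcurrentHashMap<String, Long>();

    /** Record a failed request: take the node out of rotation for a while. */
    public void recordFailure(String nodeId) {
        bannedUntil.put(nodeId, System.currentTimeMillis() + BACKOFF_MS);
    }

    /** Record a successful request: the node is healthy again. */
    public void recordSuccess(String nodeId) {
        bannedUntil.remove(nodeId);
    }

    /** Available unless it failed recently; expiry re-admits it asynchronously. */
    public boolean isAvailable(String nodeId) {
        Long until = bannedUntil.get(nodeId);
        return until == null || System.currentTimeMillis() >= until;
    }
}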


Page 12: Hadoop and Voldemort @ LinkedIn

EC2-based testing

Testing “in the cloud”
– Distributed systems have to be tested on multi-node clusters
– Distributed systems have complex failure scenarios
– A storage system, above all, must be stable
– Automated testing allows rapid iteration while maintaining confidence in the system’s correctness and stability

EC2-based testing framework
– Tests are invoked programmatically
– Adaptable to other cloud hosting providers
– Will run on a regular basis
– Contributed by Kirk True

Page 13: Hadoop and Voldemort @ LinkedIn

Coming this Jan (finally): Rebalancing

Voldemort rebalancing – the capability to add/delete nodes and move data around in an online Voldemort cluster.

Features
– No downtime
– Transparent to the client
– Maintains data consistency guarantees
– Push-button user interface

Page 14: Hadoop and Voldemort @ LinkedIn

Administration

Monitoring
– View statistics (how many queries are made? how long are they taking?)
– Perform diagnostic operations

Administrative functionality – functionality which is needed, but shouldn’t be performed by regular store clients. Examples:
– Update and retrieve cluster/store metadata
– Efficient streaming of keys and values
– Delete entries in bulk
– Truncate an entire store
– Restore a node’s data from replicas

Page 15: Hadoop and Voldemort @ LinkedIn

Present day

In production use
– At LinkedIn: multiple clusters, a variety of customers
– Outside of LinkedIn: Gilt Group, KaChing, others

Active developer community, inside and outside LinkedIn

Monthly release cycle
– Continuous testing environment
– Daily performance tests

Page 16: Hadoop and Voldemort @ LinkedIn

Performance

LinkedIn cluster: web event tracking, logging, and online lookups
– 6 nodes, 400 GB of data, 12 clients
– Mixed load (67% GET, 33% PUT)

Throughput
– 1433 QPS per node, 4299 QPS for the cluster

Latency
– GET: 50th percentile 0.05 ms, 95th percentile 36.07 ms, 99th percentile 60.65 ms
– PUT: 50th percentile 0.09 ms, 95th percentile 0.41 ms, 99th percentile 1.22 ms

Page 17: Hadoop and Voldemort @ LinkedIn

Hadoop @ LinkedIn

Page 18: Hadoop and Voldemort @ LinkedIn

Batch Computing at LinkedIn

• Some questions we want to answer
  • What do we use Hadoop for?
  • How do we store data?
  • How do we manage workflows?
  • How do we do ETL?
  • How do we prototype ideas?

Page 19: Hadoop and Voldemort @ LinkedIn

What do we use Hadoop for?


Page 20: Hadoop and Voldemort @ LinkedIn

How do we store data?

• Compact, compressed, binary data (something like Avro)

• Types can be any combination of int, double, float, String, Map, List, Date, etc.

• Example member definition:

  {
    'member_id': 'int32',
    'first_name': 'string',
    'last_name': 'string',
    'age': 'int32',
    …
  }

• Data is stored in Hadoop as sequence files, serialized with this format

• The schema of the data is saved in the sequence files as metadata

• The schema is read dynamically by Java/Pig jobs on the fly
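For illustration, a small sketch of reading such a schema back using the standard Hadoop SequenceFile metadata API. The metadata key name "schema" is an assumption for this sketch, not taken from the slides.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SchemaReader {
    /** Pull the schema string out of a sequence file's metadata block. */
    public static String readSchema(Path path) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            // The schema rides along in the file's metadata, so a job can
            // discover field names and types at runtime without a registry.
            Text schema = reader.getMetadata().get(new Text("schema"));
            return schema == null ? null : schema.toString();
        } finally {
            reader.close();
        }
    }
}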

Page 21: Hadoop and Voldemort @ LinkedIn

How do we manage workflows?

• We wrote a workflow management tool (code name: Azkaban)

• Dependency management (Hadoop, ETL, Java, Unix jobs)
  • Maintains a dependency directed acyclic graph
  • All dependencies must complete before the job itself can run
  • If a dependency fails, the job itself fails

• Scheduling
  • Workflows can be scheduled to run on a repeating schedule

• Configuration system (simple properties files; see the sketch after this list)
• GUI for visualizing and controlling jobs
• Historical logging and job success data retained for each run
• Alerting on failures

• Will be open sourced soon (APL)!
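For flavor, a hypothetical two-job chain written as the simple properties files described above. The exact property keys (type, command, dependencies) and file names are assumptions for this sketch, not taken from the slides.

# extract.job -- pull member data out of the source database
type=command
command=java -jar transmogrify.jar --reader jdbc --writer hdfs

# build-index.job -- depends on extract; runs only after it succeeds
type=command
dependencies=extract
command=hadoop jar build-readonly-store.jar /data/members

The dependencies line is what builds the directed acyclic graph: Azkaban will not start build-index until extract completes, and a failure in extract fails the chain.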

Page 22: Hadoop and Voldemort @ LinkedIn

Introducing Azkaban

Page 23: Hadoop and Voldemort @ LinkedIn

How do we do ETL? Getting data in

Two kinds of data:

From databases (user data, news, jobs, etc.)
– Need a way to get data reliably and periodically
– Need tests to verify data
– Support for incremental replication
– Our solution: Transmogrify, a driver program which accepts an InputReader and an OutputWriter
  InputReaders: JDBCReader, CSVReader
  OutputWriters: JDBCWriter, HDFS writers

From web logs (page views, search, clicks, etc.)
– Web log files are rsynced and loaded into HDFS
– Hadoop jobs for data cleaning and transformation

Page 24: Hadoop and Voldemort @ LinkedIn

ETL II: Getting data out

Batch jobs generate output in the 100s of GBs. How do we finally serve this data to users?

Some constraints:
– Want to show data to users ASAP
– Should not impact online serving
– Should have quick rollback capabilities
– Should be horizontally scalable
– Should be fault tolerant (replication)
– High throughput, low latency

Page 25: Hadoop and Voldemort @ LinkedIn

ETL II: Getting Data Out: Existing Solutions

JDBC upload to Oracle
– Very slow (3 days to push 20 GB of data)
– Long-running transactions cause issues

“LOAD DATA” statement in MySQL
– Tried many, many tuning settings
– Was still very slow
– Almost unresponsive for online serving

Memcached
– Need to have everything in memory
– No support for batch inserts
– Need to re-push if a server dies

Oracle SQL*Loader
– The whole process took 2-3 days
– 20-24 hours of actual data-loading time
– Needed someone to babysit the process
– What if we want to do this daily?

HBase (didn’t try)


Page 26: Hadoop and Voldemort @ LinkedIn

ETL II: Getting Data Out: Our Solution

Our solution: we wrote a special read-only storage engine for Voldemort. Data is built on Hadoop and copied to Voldemort:

1. Index build runs 100% in Hadoop
2. A MapReduce job outputs Voldemort stores to HDFS
3. Job control initiates a fetch request to the Voldemort nodes
4. Voldemort nodes copy data from Hadoop in parallel
5. Atomic swap to make the data live
6. Heavily optimized storage engine for read-only data
7. I/O throttling on the transfer to protect the live servers

Page 27: Hadoop and Voldemort @ LinkedIn

Voldemort read-only store: version I


Simple file-based storage engine
– Two files: a key file and a value file
– The key file holds sorted MD5 key hashes and, for each, the offset into the value file of the corresponding value
– The value file stores each value as (size, data)

Advantages
– The index is built on Hadoop: no load on production servers
– Files are copied in parallel to the Voldemort cluster
– Supports rollback by keeping multiple copies
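A minimal sketch of the version-I lookup, assuming a fixed record layout of a 16-byte MD5 digest followed by an 8-byte value-file offset (the exact layout is an assumption for this sketch):

import java.nio.ByteBuffer;

public class ReadOnlyIndexV1 {
    private static final int KEY_SIZE = 16;               // MD5 digest
    private static final int RECORD_SIZE = KEY_SIZE + 8;  // digest + value offset

    /** Binary search the sorted key file; returns the value-file offset or -1. */
    public static long lookup(ByteBuffer keyFile, byte[] md5) {
        int lo = 0, hi = keyFile.capacity() / RECORD_SIZE - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int cmp = compare(keyFile, mid * RECORD_SIZE, md5);
            if (cmp == 0) return keyFile.getLong(mid * RECORD_SIZE + KEY_SIZE);
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
        }
        return -1; // key not present
    }

    /** Unsigned byte-wise comparison of the record's digest against md5. */
    private static int compare(ByteBuffer buf, int pos, byte[] md5) {
        for (int i = 0; i < KEY_SIZE; i++) {
            int d = (buf.get(pos + i) & 0xff) - (md5[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }
}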

Page 28: Hadoop and Voldemort @ LinkedIn

Voldemort read-only store: version II


The version I read-only stores have a few issues:
– Only one reducer per node
– Binary search can potentially take 32 steps

Version II format
– Multiple key-file/value-file pairs (multiple reducers per node)
– Mmap all key files
– Use interpolation (“predicted”) binary search
  Keys are MD5 hashes and thus very uniformly distributed
  Much faster performance

Future work
– Group values by frequency, so that frequent values stay in the operating system cache
– Transfer only the delta
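A sketch of the “predicted” search idea: because MD5 output is uniform, the leading bits of the target hash estimate where its record sits in the sorted key file, so the first probe usually lands close to the answer. Layout assumptions match the version-I sketch above.

import java.nio.ByteBuffer;

public class ReadOnlyIndexV2 {
    private static final int KEY_SIZE = 16, RECORD_SIZE = KEY_SIZE + 8;

    /** Interpolation-first search over an mmap'd, sorted key file. */
    public static long lookup(ByteBuffer keyFile, byte[] md5) {
        int lo = 0, hi = keyFile.capacity() / RECORD_SIZE - 1;
        // First probe is predicted from the key's leading 32 bits: uniform
        // hashes mean the prefix is proportional to the record's position.
        int mid = (int) ((prefix32(md5) * (long) (hi + 1)) >>> 32);
        while (lo <= hi) {
            mid = Math.max(lo, Math.min(hi, mid)); // clamp prediction to range
            int cmp = compare(keyFile, mid * RECORD_SIZE, md5);
            if (cmp == 0) return keyFile.getLong(mid * RECORD_SIZE + KEY_SIZE);
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
            mid = (lo + hi) >>> 1; // later probes fall back to plain bisection
        }
        return -1;
    }

    /** The key's first four bytes as an unsigned 32-bit value. */
    private static long prefix32(byte[] md5) {
        return ((md5[0] & 0xffL) << 24) | ((md5[1] & 0xffL) << 16)
             | ((md5[2] & 0xffL) << 8) | (md5[3] & 0xffL);
    }

    private static int compare(ByteBuffer buf, int pos, byte[] md5) {
        for (int i = 0; i < KEY_SIZE; i++) {
            int d = (buf.get(pos + i) & 0xff) - (md5[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }
}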

Page 29: Hadoop and Voldemort @ LinkedIn

Performance

Three performance numbers matter here:

1. Hadoop time to build the read-only store indexes
   – 10 minutes to build 20 GB of data

2. File transfer time
   – Limited only by network and disk throughput

3. Online serving performance
   – The read-only store is fast (1 ms – 5 ms)
   – The operating system page cache helps a lot
   – Very predictable performance based on data size, disk speeds, RAM, and cache hit ratios

Page 30: Hadoop and Voldemort @ LinkedIn

Batch Computing at LinkedIn

Page 31: Hadoop and Voldemort @ LinkedIn

Infrastructure at LinkedIn

Last year:
– QA lab: 20 machines, cheap Dell Linux servers
– Production: 20 machines, heavy machines

The “QA” cluster is for dev, analysis, and reporting uses
– Pig and Hadoop streaming for prototyping ideas
– Ad hoc jobs compete with scheduled jobs
– Tried different Hadoop schedulers

Production is for jobs that produce user-facing data

Hired Allen Wittenauer as our Hadoop architect in Sep 2009
– 100 Hadoop machines

Page 32: Hadoop and Voldemort @ LinkedIn

References

– Amazon Dynamo paper
– project-voldemort.com
– NoSQL presentations at Last.fm (2009)
– Voldemort presentation by Jay Kreps


Page 33: Hadoop and Voldemort @ LinkedIn

The End

Page 34: Hadoop and Voldemort @ LinkedIn

Core Concepts

Page 35: Hadoop and Voldemort @ LinkedIn

Core Concepts - I

ACID
– Great for a single centralized server

CAP theorem
– Consistency (strict), Availability, Partition tolerance
– Impossible to achieve all three at the same time on a distributed platform
– Can choose 2 out of 3
– Dynamo chooses high availability and partition tolerance, sacrificing strict consistency for eventual consistency

Consistency models
– Strict consistency
  Two-phase commit
  PAXOS: a distributed algorithm to ensure quorum for consistency
– Eventual consistency
  Different nodes can have different views of a value
  In a steady state the system will return the last written value, BUT it can have much stronger guarantees


Page 36: Hadoop and Voldemort @ LinkedIn

Core Concepts - II

Consistent hashing
– The key space is partitioned into many small partitions
– Partitions never change; partition ownership can change

Replication
– Each partition is stored by ‘N’ nodes

Node failures
– Transient (short term)
– Long term: needs faster bootstrapping
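A minimal sketch of the scheme, assuming a fixed partition count and a mutable partition-to-node ownership table; rebalancing then just rewrites entries of that table while the partitions themselves never change. The stand-in hash function is illustrative (a real implementation would hash the key bytes, e.g. with MD5).

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartitionRingSketch {
    private final int[] owner;  // partition id -> owning node id (remappable)
    private final int n;        // replication factor N

    public PartitionRingSketch(int[] owner, int replicationFactor) {
        this.owner = owner;
        this.n = replicationFactor;
    }

    /** The N distinct nodes responsible for a key: the owner of the key's
     *  partition, plus owners of the next partitions walking the ring. */
    public List<Integer> nodesFor(byte[] key) {
        int partitions = owner.length;
        int p = Math.floorMod(Arrays.hashCode(key), partitions); // stand-in hash
        List<Integer> nodes = new ArrayList<Integer>();
        for (int i = 0; nodes.size() < n && i < partitions; i++) {
            int node = owner[(p + i) % partitions];
            if (!nodes.contains(node)) nodes.add(node); // one replica per node
        }
        return nodes;
    }
}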


Page 37: Hadoop and Voldemort @ LinkedIn

Core Concepts - III

• N – the replication factor
• R – the number of blocking reads
• W – the number of blocking writes

• If R + W > N
  • then we have a quorum-like algorithm
  • which guarantees that we will read the latest writes OR fail

• R, W, N can be tuned for different use cases
  • W = 1: highly available writes
  • R = 1: read-intensive workloads
  • Knobs to tune performance, durability, and availability
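The overlap argument in one line of arithmetic (a sketch, not Voldemort code): when R + W > N, any read set and any write set must share at least one node, so some node in every read quorum has seen the latest write.

public class QuorumMath {
    /** R + W > N forces the read and write sets to share >= 1 node. */
    static boolean readsSeeLatestWrite(int n, int r, int w) {
        return r + w > n;
    }

    public static void main(String[] args) {
        System.out.println(readsSeeLatestWrite(3, 2, 2)); // true: quorum reads and writes
        System.out.println(readsSeeLatestWrite(3, 1, 1)); // false: stale reads possible
        System.out.println(readsSeeLatestWrite(3, 3, 1)); // true: W = 1 favors write availability
    }
}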


Page 38: Hadoop and Voldemort @ LinkedIn

Core Concepts - IV

• A vector clock [Lamport] provides a way to order events in a distributed system.

• A vector clock is a tuple {t1, t2, ..., tn} of counters
• Each value update has a master node
  • When data is written with master node i, it increments ti
  • All the replicas will receive the same version
  • Helps resolve consistency between writes on multiple replicas

• If you get network partitions
  • you can have a case where two vector clocks are not comparable
  • in this case Voldemort returns both values to clients for conflict resolution
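A simplified stand-in for the comparison logic (not Voldemort's actual VectorClock class): clock A precedes B if every counter in A is less than or equal to the matching counter in B; if each clock is ahead on some counter, the writes are concurrent and both values go back to the client.

import java.util.HashMap;
import java.util.Map;

public class VectorClockSketch {
    public enum Order { BEFORE, AFTER, EQUAL, CONCURRENT }

    /** Compare two clocks, each mapping node id -> counter. */
    public static Order compare(Map<Integer, Long> a, Map<Integer, Long> b) {
        boolean aAhead = false, bAhead = false;
        Map<Integer, Long> all = new HashMap<Integer, Long>(a);
        all.putAll(b); // union of node ids; missing counters default to 0
        for (Integer node : all.keySet()) {
            long ta = a.getOrDefault(node, 0L), tb = b.getOrDefault(node, 0L);
            if (ta > tb) aAhead = true;
            if (tb > ta) bAhead = true;
        }
        if (aAhead && bAhead) return Order.CONCURRENT; // not comparable
        if (aAhead) return Order.AFTER;
        if (bAhead) return Order.BEFORE;
        return Order.EQUAL;
    }
}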


Page 39: Hadoop and Voldemort @ LinkedIn

Implementation

Page 40: Hadoop and Voldemort @ LinkedIn

Voldemort Design

Page 41: Hadoop and Voldemort @ LinkedIn

Client API

• Data is organized into “stores”, i.e. tables
  • Key-value only
  • But values can be arbitrarily rich or complex: maps, lists, nested combinations, …

• Four operations
  • PUT(Key k, Value v)
  • GET(Key k)
  • MULTI-GET(Iterator<Key> keys)
  • DELETE(Key k) / DELETE(Key k, Version ver)
  • No range scans
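Typical client usage looks roughly like this. The bootstrap URL and store name are placeholders, and while the calls match the open-source Voldemort client of that era, treat the exact signatures as approximate.

import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;
import voldemort.versioning.Versioned;

public class ClientExample {
    public static void main(String[] args) {
        // Bootstrap cluster metadata from one node, then open a store.
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
        StoreClient<String, String> client = factory.getStoreClient("test");

        client.put("member:42", "bhupesh");                  // PUT
        Versioned<String> value = client.get("member:42");   // GET, value + version
        System.out.println(value.getValue());
        client.delete("member:42");                          // DELETE
    }
}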

Page 42: Hadoop and Voldemort @ LinkedIn

Versioning & Conflict Resolution

• Eventual consistency allows multiple versions of a value
  • Need a way to understand which value is the latest
  • Need a way to say values are not comparable

• Solutions
  • Timestamps
  • Vector clocks
    • Provide a global ordering
    • No locking or blocking necessary

Page 43: Hadoop and Voldemort @ LinkedIn

Serialization

• Really important
• A few considerations
  • Schema-free?
  • Backward/forward compatible
  • Real-life data structures
  • Bytes <=> objects <=> strings?
  • Size (no XML)

• Many ways to do it -- we allow anything
  • Compressed JSON, Protocol Buffers, Thrift, Voldemort custom serialization

Page 44: Hadoop and Voldemort @ LinkedIn

Routing

• The routing layer hides a lot of complexity
  • Hashing schema
  • Replication (N, R, W)
  • Failures
  • Read repair (online repair mechanism)
  • Hinted handoff (long-term recovery mechanism)

• Easy to add domain-specific strategies
  • e.g. only do synchronous operations on nodes in the local data center

• Client side / server side / hybrid

Page 45: Hadoop and Voldemort @ LinkedIn

Voldemort Physical Deployment

Page 46: Hadoop and Voldemort @ LinkedIn

Routing With Failures

• Failure detection requirements
  • Needs to be very, very fast
  • The view of server state may be inconsistent
    • A can talk to B, but C cannot
    • A can talk to C, B can talk to A but not to C

• Currently done by the routing layer (request timeouts)
  • Periodically retries failed nodes
  • All requests must have hard SLAs

• Other possible solutions
  • Central server
  • Gossip protocol
  • Need to look into this more

Page 47: Hadoop and Voldemort @ LinkedIn

Repair Mechanism

Read repair
– Online repair mechanism
– The routing client receives values from multiple nodes; notify a node if you see an old value
– Only works for keys which are read after failures

Hinted handoff
– If a write fails, write it to any random node
– Just mark the write as a special write
– Each node periodically tries to get rid of all special entries

Bootstrapping mechanism (we don’t have it yet)
– If a node was down for a long time, hinted handoff can generate a ton of traffic
– Need a better way to bootstrap and clear the hinted-handoff tables
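A toy sketch of the hinted-handoff bookkeeping (names are illustrative, not Voldemort internals): a failed write is parked locally, tagged with its intended destination, and re-tried by a background task until delivery succeeds.

import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Predicate;

public class HintedHandoffSketch {
    /** A write that could not reach its intended node. */
    public static class Hint {
        public final int intendedNode;
        public final String key, value;
        public Hint(int intendedNode, String key, String value) {
            this.intendedNode = intendedNode;
            this.key = key;
            this.value = value;
        }
    }

    private final ConcurrentLinkedQueue<Hint> hints =
            new ConcurrentLinkedQueue<Hint>();

    /** On a failed write: park it locally, tagged with where it belongs. */
    public void storeHint(int intendedNode, String key, String value) {
        hints.add(new Hint(intendedNode, key, value));
    }

    /** Background task: retry delivery; keep hints that fail again. */
    public void drain(Predicate<Hint> tryDeliver) {
        for (Iterator<Hint> it = hints.iterator(); it.hasNext(); ) {
            if (tryDeliver.test(it.next())) {
                it.remove();
            }
        }
    }
}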


Page 48: Hadoop and Voldemort @ LinkedIn

Network Layer

• The network is the major bottleneck in many uses
  • Client performance turns out to be harder than server performance (the client must wait!)
  • Lots of issues with socket buffer sizes / socket pools

• The server is also a client
• Two implementations
  • HTTP + servlet container
  • Simple socket protocol + custom server

• The HTTP server is great, but the HTTP client is 5-10x slower
• The socket protocol is what we use in production
• Recently added a non-blocking version of the server

Page 49: Hadoop and Voldemort @ LinkedIn

Persistence

• Single-machine key-value storage is a commodity
• Plugins are better than tying yourself to a single strategy
• Different use cases
  • Optimize reads
  • Optimize writes
  • Large vs. small values

• SSDs may completely change this layer
• A couple of different options
  • BDB, MySQL, and mmap’d file implementations
  • Berkeley DB is the most popular
  • An in-memory plugin for testing

• B-trees are still the best all-purpose structure
• No flush on write is a huge, huge win