Apache Kafka Reliability Guarantees (Strata+Hadoop NYC 2015)


TRANSCRIPT

When it absolutely, positively, has to be there
Reliability Guarantees in Apache Kafka

@jeffholoman @gwenshap

Gwen Shapira, Confluent
Jeff Holoman, Cloudera

Kafka: high throughput, low latency, scalable, centralized, real-time.

“If data is the lifeblood of high technology, Apache Kafka is the circulatory system.”
-- Todd Palino, Kafka SRE @ LinkedIn

If Kafka is a critical piece of our pipeline: Can we be 100% sure that our data will get there? Can we lose messages? How do we verify? Whose fault is it?

Distributed Systems: things fail. Systems are designed to tolerate failure.

We must expect failures and design our code and configure our systems to handle them.

Data Flow

[Diagram: the write path from client machine to broker machine. An application thread hands data to the Kafka client asynchronously; the data crosses the O/S socket buffer and NIC on the client, the network, then the broker's NIC, O/S socket buffer, and page cache before reaching disk. The ack or exception travels back to the application via a callback. Failure points (✗) are marked at every hop.]

[Diagram: the same path with further failure points highlighted, including a page-cache miss on the broker.]

[Diagram: data flows to and from Kafka while offsets go to ZooKeeper; a failure (✗) can hit either system.]

Replication is your friend
- Kafka protects against failures by replicating data
- The unit of replication is the partition
- One replica is designated as the leader
- Follower replicas fetch data from the leader
- The leader holds the list of “in-sync” replicas

Replication and ISRs

[Diagram: partitions 0, 1, and 2 of my_topic, each replicated across Brokers 100, 101, and 102, with a producer writing to the partition leaders.]

Topic: my_topic   Partitions: 3   Replicas: 3

Partition | Leader | ISR
    0     |  100   | 101,102
    1     |  101   | 100,102
    2     |  102   | 101,100

ISR: two things make a replica in-sync:
- Lag behind the leader
  - replica.lag.time.max.ms – replica that didn’t fetch or is behind
  - replica.lag.max.messages – will go away in 0.9
- Connection to ZooKeeper

Terminology

Acked:
- The producer will not retry sending.
- Depends on the producer setting.

Committed:
- Consumers can read it.
- Only when the message got to all ISRs.

replica.lag.time.max.ms: how long can a dead replica prevent consumers from reading?

Replication

acks = all only waits for in-sync replicas to reply.

[Diagram: Replicas 1, 2, and 3 each hold message 100; everything is in sync.]

[Diagram: Replica 3 stopped replicating for some reason. Replicas 1 and 2 hold messages 100 and 101; Replica 3 holds only 100. Message 100 is acked under acks = all and “committed”; message 101 is acked under acks = 1 but not “committed”.]

Replication

[Diagram: one replica drops out of the ISR, or goes offline. Replicas 1 and 2 hold messages 100 and 101. All messages are now acked and committed, because the ISR has shrunk to the two live replicas.]

Replication

[Diagram: the second replica drops out, or is offline. Replica 1, now alone, keeps accepting messages 101–104.]

Now we’re in trouble.

Replication

[Diagram: meanwhile Replica 3 holds only message 100 and Replica 2 holds 100 and 101. If either becomes leader, messages 102–104 are gone, and all of those were “acked” and “committed”.]

So what to do?
- Disable unclean leader election: unclean.leader.election.enable = false
- Set the replication factor: default.replication.factor = 3
- Set minimum ISRs: min.insync.replicas = 2
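A minimal sketch of those three settings as broker-side defaults in server.properties (property names are from the slide; comments are our own reading of them):

    # server.properties -- reliability settings from this talk
    # Never let an out-of-sync replica become leader (no silent data loss)
    unclean.leader.election.enable=false
    # New topics get 3 replicas unless specified otherwise
    default.replication.factor=3
    # acks=all writes require at least 2 in-sync replicas
    min.insync.replicas=2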

Warning: min.insync.replicas is applied at the topic level. If a topic was created before the server-level change, you must alter the topic configuration manually; before 0.9.0 you must alter the topic manually in any case (KAFKA-2114).

Replication with replication factor = 3, min ISR = 2

[Diagram: Replicas 1, 2, and 3 each hold message 100.]

[Diagram: one replica drops out of the ISR, or goes offline. Replicas 1 and 2 hold messages 100 and 101; with two in-sync replicas, writes still succeed.]

[Diagram: the second replica fails out, or is out of sync, leaving only Replica 1 with messages 101–104 pending. With min.insync.replicas = 2, acks = all writes are now rejected with an error instead of being accepted by a single replica.]

Buffers in the Producer

Producer Internals: the producer collects messages into batches in a buffer.

[Diagram: application threads call send(); messages M0–M3 accumulate into Batch 1–3 in the buffer. A sender thread drains the batches to the broker. On failure, batches are retried; on response, the producer updates the Future and fires the callback with metadata or an exception.]

Basics

Durability can be configured with the producer configuration request.required.acks:
- 0: the message is written to the network (buffer)
- 1: the message is written to the leader
- all: the producer gets an ack after all ISRs receive the data; the message is committed

Make sure the producer doesn’t just throw messages away!
- block.on.buffer.full = true
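A minimal sketch of these settings with the Java producer (the new producer names the setting simply acks; the broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DurableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder broker
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");                  // ack only once committed to all ISRs
            props.put("block.on.buffer.full", "true"); // block, don't drop, when buffer fills

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("my_topic", "key", "value"));
            producer.close(); // blocks until in-flight requests complete
        }
    }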

All calls are non-blocking and async. Two options for checking for failures (sketched below):
- Immediately block for the response: send().get()
- Do follow-up work in a Callback; close the producer after an error threshold
  - Be careful about buffering these failures. Future work? KAFKA-1955
  - Don’t forget to close the producer! producer.close() will block until in-flight requests complete

retries (producer config) defaults to 0; message.send.max.retries (old producer config) defaults to 3. In-flight requests could lead to message re-ordering.
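A sketch of both options (the error threshold and topic name are invented for illustration):

    import java.util.Properties;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SendOptions {
        private static final AtomicInteger errors = new AtomicInteger();

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");
            props.put("retries", "3"); // retry transient failures (default is 0)

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            ProducerRecord<String, String> record =
                new ProducerRecord<>("my_topic", "value");

            // Option 1: block immediately for the result
            RecordMetadata meta = producer.send(record).get();
            System.out.println("written at offset " + meta.offset());

            // Option 2: keep working, handle the result in a callback
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata m, Exception e) {
                    if (e != null && errors.incrementAndGet() > 10) {
                        // invented threshold: time to stop producing and investigate
                        System.err.println("too many send failures: " + e);
                    }
                }
            });

            producer.close(); // blocks until in-flight requests complete
        }
    }

Note that retries > 0 combined with multiple in-flight requests is exactly what can reorder messages, which is why the closing checklist suggests max.in.flight.requests.per.connection = 1.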

Consumer

Two choices for the Consumer API:
- Simple Consumer
- High-Level Consumer

Consumer Offsets

[Diagram sequence: one consumer process with Threads 1–4 reading partitions P0, P2–P6. Where should offsets be committed? With auto-commit enabled, commits fire on a timer and cover the offsets of all threads at once, whether or not each thread has finished processing. If the process crashes (✗) right after such a commit, the restarted consumer picks up at the committed offsets, and messages that were read but not yet processed are skipped. Final slide: auto-commit DISABLED and each partition set handled by its own consumer (Consumers 1–4), with offsets committed explicitly.]

Consumer Recommendations
- Set auto.commit.enable = false
- Manually commit offsets after the message data is processed / persisted: consumer.commitOffsets();
- Run each consumer in its own thread

New Consumer!
- No ZooKeeper! At all!
- Rebalance listener
- Commit: commit / commit async / commit(offset)
- seek(offset)
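A minimal sketch of manual commits with the new (0.9) consumer; the topic, group, and processing step are placeholders:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "my_group");
            props.put("enable.auto.commit", "false"); // we commit after processing
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("my_topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);       // persist / process first...
                }
                consumer.commitSync();     // ...then commit: a crash re-reads, never skips
            }
        }

        private static void process(ConsumerRecord<String, String> r) {
            System.out.println(r.offset() + ": " + r.value());
        }
    }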

Exactly-Once Semantics
- At most once is easy
- At least once is not bad either – commit after you are 100% sure the data is safe
- Exactly once is tricky:
  - Commit data and offsets in one transaction
  - Idempotent producer
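A hedged sketch of the “commit data and offsets in one transaction” idea, using a relational store via JDBC; the connection string and table names are invented, and on startup a real implementation would seek() each partition to the offset stored in the DB (omitted here):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ExactlyOnceSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder DB; offsets live here instead of in Kafka/ZooKeeper
            Connection db = DriverManager.getConnection("jdbc:postgresql://dbhost/example");
            db.setAutoCommit(false);

            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "exactly_once_group");
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("my_topic"));

            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(1000)) {
                    // Data and offset are written in the SAME transaction:
                    PreparedStatement data =
                        db.prepareStatement("INSERT INTO events (value) VALUES (?)");
                    data.setString(1, r.value());
                    data.executeUpdate();

                    PreparedStatement off = db.prepareStatement(
                        "UPDATE offsets SET next_offset = ? WHERE topic = ? AND part = ?");
                    off.setLong(1, r.offset() + 1);
                    off.setString(2, r.topic());
                    off.setInt(3, r.partition());
                    off.executeUpdate();

                    db.commit(); // both land, or neither does
                }
            }
        }
    }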

Monitoring for Data Loss
- Monitor for producer errors – watch the retry numbers
- Monitor consumer lag – MaxLag or via offsets
- Standard schema: each message should contain a timestamp and the originating service and host
- Each producer can report message counts and offsets to a special topic
- A “monitoring consumer” reports message counts to another special topic
- “Important consumers” also report message counts
- Reconcile the results
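As a rough illustration of the count-reporting idea (the monitoring topic name and report format are invented):

    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    /** Sketch: wrap a producer and report send counts to a monitoring topic. */
    public class CountingProducer {
        private final KafkaProducer<String, String> producer;
        private final AtomicLong sent = new AtomicLong();

        public CountingProducer(KafkaProducer<String, String> producer) {
            this.producer = producer;
        }

        public void send(String topic, String value) {
            producer.send(new ProducerRecord<>(topic, value));
            sent.incrementAndGet();
        }

        /** Call periodically; a monitoring consumer reconciles these counts. */
        public void report(String service, String host) {
            String line = System.currentTimeMillis() + "," + service + ","
                          + host + "," + sent.getAndSet(0);
            producer.send(new ProducerRecord<>("_monitoring_counts", line)); // invented topic
        }
    }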

Be Safe, Not Sorry
- acks = all
- block.on.buffer.full = true
- retries = MAX_INT
- (max.in.flight.requests.per.connection = 1)
- producer.close()
- replication factor >= 3
- min.insync.replicas = 2
- unclean.leader.election.enable = false
- auto-commit disabled; commit after processing
- Monitor!
