microservices on top of kafka
TRANSCRIPT
Microservices on top of Kafka
How we got lost during the transition to Microservices,
and how we found our way out using Kafka
Vladi Feigin, Software Architect, [email protected]
“This is an attempt to share our experience in breaking a monolith and
moving to a microservices world”
… Using Kafka
Preface
Active customers
Mission critical application
Complex business logic
Very high uptime SLA
The Monolith
Develop a new system from scratch
OR
Gradually break the existing monolith
We believe the second option is the way to go!
How can it be broken?
Monolith
Monolith + first app out (shadow mode)
Monolith + first app out
Monolith without app
Gradual progress
Events to Kafka. Tests! App out in shadow mode. Tests! App out in production mode
The Gradual Approach
Define every service responsibility
Design services behavior first (the actions it does)
Define Data Model (see next slide)
Apply Domain Driven Design principles
Revise Data Model
Command (Request) is an action, addressed to, and consumed by, one single
service
Fact is a declaration to the rest of the world about a service state change
(triggered by a command).
Entities - the business objects
Define Commands, Facts and Entities
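The Command/Fact/Entity split above can be sketched as plain data types. This is a minimal illustration, not the talk's actual model; the "billing" service and the field names are invented for the example.

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class Command:
    """An action addressed to, and consumed by, one single service."""
    target_service: str
    action: str
    payload: dict
    command_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class Fact:
    """A declaration to the rest of the world about a service state change."""
    entity_id: str      # the business Entity this fact is about
    event_type: str
    payload: dict
    triggered_by: str   # id of the Command that caused the change
    timestamp: float = field(default_factory=time.time)

# A command flows into one service; the resulting fact is published for everyone.
cmd = Command(target_service="billing", action="CloseInvoice", payload={"invoice": "42"})
fact = Fact(entity_id="42", event_type="InvoiceClosed", payload={}, triggered_by=cmd.command_id)
```

Note the asymmetry: a Command has exactly one consumer, while a Fact carries no addressee at all.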
A schema defines the language the services use to talk to each other
It can be JSON, Avro or Protobuf
Define and enforce rules for schema compatibility
Every schema change must be validated
Use a schema
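A toy sketch of schema enforcement, assuming a hand-rolled field-to-type map; a real deployment would use Avro or Protobuf with a schema registry, which also automates the compatibility checks mentioned above.

```python
# Hypothetical fact schema, version 1: required fields and their types.
FACT_SCHEMA_V1 = {"entity_id": str, "event_type": str, "payload": dict}

def validate(event: dict, schema: dict) -> bool:
    # Backward-compatible rule: the event may carry extra (newer) fields,
    # but every field the schema requires must be present with the right type.
    return all(k in event and isinstance(event[k], t) for k, t in schema.items())

ok = validate({"entity_id": "42", "event_type": "InvoiceClosed", "payload": {}}, FACT_SCHEMA_V1)
bad = validate({"entity_id": "42"}, FACT_SCHEMA_V1)  # missing required fields
```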
Separate the write from the read operations
Facts written chronologically into dedicated Kafka (event-sourced) topics
Create read-optimized views in external DB for queries
Use Event Sourcing and CQRS
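The write/read separation above can be shown in miniature: facts are appended chronologically to a log (the event-sourced topic), and a read-optimized view is folded from them (the external query DB). The account/balance domain here is purely illustrative.

```python
event_log = []  # stands in for an event-sourced Kafka topic (append-only)

def write(fact: dict) -> None:
    """Write side: facts are appended in chronological order."""
    event_log.append(fact)

def build_read_view(log) -> dict:
    """Read side (CQRS): fold facts into a query-friendly view,
    which in production would live in an external DB."""
    balances = {}
    for fact in log:
        if fact["event_type"] == "Deposited":
            balances[fact["account"]] = balances.get(fact["account"], 0) + fact["amount"]
        elif fact["event_type"] == "Withdrawn":
            balances[fact["account"]] = balances.get(fact["account"], 0) - fact["amount"]
    return balances

write({"event_type": "Deposited", "account": "a1", "amount": 100})
write({"event_type": "Withdrawn", "account": "a1", "amount": 30})
view = build_read_view(event_log)
```

Because the view is derived, it can always be dropped and rebuilt from the log, which is what makes reprocessing (later slide) possible.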
Every service is allowed to write only on the topics it owns
Services are allowed to read from other service topics
Follow Single Writer Principle
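One way to enforce the Single Writer Principle at the application layer is a produce wrapper that refuses writes to topics the service does not own; Kafka ACLs can enforce the same rule at the broker. The topic and service names here are made up.

```python
# Declared once, per deployment: who is the single writer of each topic.
TOPIC_OWNERS = {"orders.facts": "orders-service", "billing.facts": "billing-service"}

def produce(service: str, topic: str, event: dict, send) -> None:
    """Refuse to write to a topic this service does not own."""
    if TOPIC_OWNERS.get(topic) != service:
        raise PermissionError(f"{service} is not the single writer of {topic}")
    send(topic, event)  # send() would wrap the real Kafka producer

sent = []
produce("orders-service", "orders.facts", {"type": "OrderPlaced"},
        lambda t, e: sent.append((t, e)))
```

Reads need no such guard: any service may consume any topic.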
Service topics should reflect the service data model
Topics are integral to all business flows
Every service usually has its own topic
Topic design is similar to designing a DB table
Carefully design Kafka topics
● Validate events before you write them to Kafka
● Validate against schema
● Validate against local state
Validate, Validate, Validate!
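Both checks from the bullets above, sketched together: an event is rejected before produce if it fails the schema check or contradicts the service's local state. The account invariant is an invented example.

```python
# Hypothetical local state held by the service.
local_state = {"accounts": {"a1": 100}}

def validate_event(event: dict) -> bool:
    # 1. Validate against schema: required fields must be present.
    if not all(k in event for k in ("event_type", "account", "amount")):
        return False
    # 2. Validate against local state: the business invariant must hold.
    balance = local_state["accounts"].get(event["account"])
    if balance is None:
        return False  # unknown account
    if event["event_type"] == "Withdrawn" and event["amount"] > balance:
        return False  # would overdraw
    return True

ok = validate_event({"event_type": "Withdrawn", "account": "a1", "amount": 50})
overdraw = validate_event({"event_type": "Withdrawn", "account": "a1", "amount": 500})
```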
Data crucial for managing the core business flows should be managed in
Kafka
Avoid having other “moving parts”, such as a database for managing the
core business flow
Make your critical data mobility simple
Kafka is Single Source of Truth
Your application should be able to reprocess historical data from Kafka
This means replaying the facts stored in the event-sourced topics
Design for reprocessing
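Reprocessing means re-running the same fold over the topic's full history. With a real Kafka consumer this is a seek back to offset 0 followed by normal consumption; here the topic is modeled as a list so the idea stays self-contained.

```python
def reprocess(topic_history, apply_fact, initial_state):
    """Rebuild service state from offset 0 by replaying every fact in order."""
    state = initial_state
    for offset, fact in enumerate(topic_history):
        state = apply_fact(state, fact)
    return state

# Illustrative history: the derived state is fully determined by the facts.
history = [{"delta": 5}, {"delta": -2}, {"delta": 10}]
state = reprocess(history, lambda s, f: s + f["delta"], 0)
```

This only works if fact handlers are deterministic and side-effect-aware; designing for reprocessing is exactly the discipline of keeping them that way.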
Monitor
You must have full visibility over what’s going on in the system, from the
very beginning of the process
Constantly look for unexpected behaviour and anomalies
Be a smart pessimist
Be prepared for non-happy and edge case scenarios!
List all possible difficult scenarios and use your architecture to test them
Ensure you have a solution for every scenario
Postscript
Microservices Cons:
● Microservices are hard and challenging
● Operational costs are high
● Hard to debug in production
Microservices Pros:
● Unleash development speed
● Services scale-out is easier
● Clear roles and responsibilities of services
LivePerson is hiring! In Tel-Aviv and Raanana
https://www.liveperson.com/company/careers
We’re hiring
Consider Kafka Compacted topics
Long topic retention
Deletion by fact key
Smaller local state
Custom retention mechanisms
Practically the only option for supporting GDPR deletion by key
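What compaction does can be simulated in a few lines: Kafka keeps only the latest value per key, and a null value (a "tombstone") removes the key entirely, which is how per-user data can be purged for GDPR. The user records below are invented.

```python
def compact(log):
    """Reduce a keyed event log to the latest value per key, as Kafka
    log compaction eventually does for a compacted topic."""
    latest = {}
    for key, value in log:
        if value is None:          # tombstone: deletion by fact key
            latest.pop(key, None)
        else:
            latest[key] = value
    return latest

log = [("user-1", {"name": "Ada"}),
       ("user-2", {"name": "Bob"}),
       ("user-1", {"name": "Ada L."}),
       ("user-2", None)]           # tombstone: forget user-2
state = compact(log)
```

This also explains the "smaller local state" bullet: a consumer rebuilding state from a compacted topic replays one record per live key, not the whole history.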
Producer parameters:
● acks=all
● Many retries on failure
If event order is critical, set:
● max.in.flight.requests.per.connection=1
If your data throughput allows it, use synchronous send
Server configuration:
● unclean.leader.election.enable=false
● min.insync.replicas=2
● replication factor at least 3
Configuration
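The slide's settings, written out as Kafka property names (Java-client style keys; shown as Python dicts only so the fragment is self-contained). The concrete retry count is an assumption, standing in for "many retries".

```python
# Producer side: durability first.
producer_config = {
    "acks": "all",                 # wait for all in-sync replicas to ack
    "retries": 2147483647,         # "many retries on failure" (illustrative value)
    # Only when strict event ordering is required:
    "max.in.flight.requests.per.connection": 1,
}

# Broker / topic side: never trade consistency for availability.
broker_config = {
    "unclean.leader.election.enable": "false",  # no out-of-sync replica may lead
    "min.insync.replicas": 2,
    "default.replication.factor": 3,            # replication factor at least 3
}
```

With acks=all and min.insync.replicas=2, a write succeeds only once at least two replicas have it, so a single broker loss cannot lose acknowledged facts.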