The Economics of Scale: Promises and Perils of Going Distributed
TRANSCRIPT
The Economics of Scale: Promises and Perils of Going Distributed
Tyler Treat, Workiva
September 19, 2015
About The Speaker
• Backend engineer at Workiva
• Messaging platform tech lead
• Distributed systems
• bravenewgeek.com @tyler_treat [email protected]
About The Talk
• Why distributed systems?
• Case study
• Advantages/Disadvantages
• Strategies for scaling and resilience patterns
• Scaling Workiva
What does it mean to “scale” a system?
Scale Up vs. Scale Out
Vertical Scaling
❖ Add resources to a node
❖ Increases node capacity; load is unaffected
❖ System complexity unaffected
Horizontal Scaling
❖ Add nodes to a cluster
❖ Decreases load; node capacity is unaffected
❖ Availability and throughput w/ increased complexity
Okay, cool—but what does this actually mean?
Let’s look at a real-world example…
How does Twitter work?
Just write tweets to a big database table.
Getting a timeline is a simple join query.
But…this is crazy expensive.
Joins are sloooooooooow.
Prior to 5.5, MySQL's default storage engine (MyISAM) used table-level locking. The current default (InnoDB) uses row-level locking. Either way, lock contention everywhere.
As the table grows larger, lock contention on indexes becomes worse too.
And UPDATES get higher priority than SELECTS.
So now what?
Distribute!
Specifically, shard.
Partition tweets into different databases using some consistent hash scheme
(put a hash ring on it).
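A minimal sketch of what "put a hash ring on it" means, with virtual nodes so keys spread evenly. The shard names and parameters are illustrative, not Twitter's actual scheme:

```python
import bisect
import hashlib

class HashRing:
    """Map keys to shards via a consistent hash ring with virtual nodes."""

    def __init__(self, shards, vnodes=100):
        self.ring = []  # sorted list of (point, shard)
        for shard in shards:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{shard}:{i}"), shard))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first ring point at or past the key's hash.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["db0", "db1", "db2"])
shard = ring.shard_for("tweet:12345")
```

Because each shard owns many small arcs of the ring, adding or removing a database only remaps the keys on its arcs instead of rehashing everything.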
This alleviates lock contention and improves throughput…
but fetching timelines is still extremely costly
(now scatter-gather query across multiple DBs).
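A scatter-gather timeline read over sharded tweets might look like the sketch below, with toy in-memory shards standing in for real databases (all names and data are illustrative):

```python
import heapq
from itertools import islice

# Toy shards: each holds (timestamp, tweet_id) pairs sorted newest-first.
SHARDS = {
    "db0": [(105, "t7"), (101, "t3")],
    "db1": [(104, "t6"), (100, "t1")],
    "db2": [(103, "t5"), (102, "t4")],
}

def query_shard(shard, limit):
    # Stand-in for a real per-shard SQL query.
    return SHARDS[shard][:limit]

def fetch_timeline(limit=4):
    # Scatter: query every shard (in a real system, in parallel).
    results = [query_shard(s, limit) for s in SHARDS]
    # Gather: merge the sorted per-shard results, newest first.
    merged = heapq.merge(*results, reverse=True)
    return [tweet_id for _, tweet_id in islice(merged, limit)]

print(fetch_timeline())  # ['t7', 't6', 't5', 't4']
```

Every timeline read now touches every shard, which is exactly why this stays expensive even after sharding fixes the write contention.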
Observation: Twitter is a consumption mechanism more than an ingestion one…
i.e. cheap reads > cheap writes
Move tweet processing to the write path rather than the read path.
Ingestion/Fan-Out Process
1. Tweet comes in
2. Query the social graph service for followers
3. Iterate through each follower and insert tweet ID into their timeline (stored in Redis)
4. Store tweet on disk (MySQL)
• Lots of processing on ingest, no computation on reads
• Redis stores timelines in memory—very fast
• Fetching timeline involves no queries—get timeline from Redis cache and rehydrate with multi-get on IDs
• If timeline falls out of cache, reconstitute from disk
• O(n) on writes, O(1) on reads
• http://www.infoq.com/presentations/Twitter-Timeline-Scalability
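The four-step ingest process above can be sketched with in-memory structures standing in for the social graph service, Redis, and MySQL (names and the timeline cap are illustrative):

```python
from collections import defaultdict, deque

TIMELINE_LEN = 800  # timelines are capped; this size is illustrative

followers = defaultdict(set)   # stands in for the social graph service
timelines = defaultdict(lambda: deque(maxlen=TIMELINE_LEN))  # stands in for Redis
tweets = {}                    # stands in for the MySQL tweet store

def ingest(tweet_id, author, text):
    # 1. Tweet comes in.
    # 2. Query the social graph for the author's followers.
    # 3. Fan out: push the tweet ID onto each follower's in-memory timeline.
    for follower in followers[author]:
        timelines[follower].appendleft(tweet_id)
    # 4. Store the full tweet on disk.
    tweets[tweet_id] = (author, text)

def read_timeline(user):
    # O(1)-style read: no queries or joins, just a cache hit
    # plus a multi-get to rehydrate tweet IDs into full tweets.
    return [tweets[tid] for tid in timelines[user]]

followers["justin"] = {"alice", "bob"}
ingest("t1", "justin", "hello world")
print(read_timeline("alice"))  # [('justin', 'hello world')]
```

The write does O(n) work over n followers so that the read does none, which is the access-pattern trade the next slide calls out.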
Key Takeaway: think about your access patterns and design accordingly.
Optimize for the critical path.
Let’s Recap…
• Advantages of single database system:
• Simple!
• Data and invariants are consistent (ACID transactions)
• Disadvantages of single database system:
• Slow
• Doesn’t scale
• Single point of failure
Going distributed solved the problem, but at what cost?
(hint: your sanity)
Distributed systems are all about trade-offs.
By choosing availability, we give up consistency.
This problem happens all the time on Twitter.
For example, you tweet, someone else replies, and I see the reply before the original tweet.
Can we solve this problem?
Sure, just coordinate things before proceeding…
“Have you seen this tweet? Okay, good.” “Have you seen this tweet? Okay, good.” “Have you seen this tweet? Okay, good.” “Have you seen this tweet? Okay, good.” “Have you seen this tweet? Okay, good.” “Have you seen this tweet? Okay, good.”
Sooo what do you do when Justin Bieber tweets to his 67 million followers?
Coordinating for consistency is expensive when data is distributed
because processes can't make progress independently.
Source: Peter Bailis, 2015 https://speakerdeck.com/pbailis/silence-is-golden-coordination-avoiding-systems-design
Key Takeaway: strong consistency is slow and distributed coordination is expensive (in terms of
latency and throughput).
Sharing mutable data at large scale is difficult.
If we don’t distribute, we risk scale problems.
Let’s say we want to count the number of times a tweet is retweeted.
“Get, add 1, and put” transaction will not scale.
If we do distribute, we risk consistency problems.
What do we do when our system is partitioned?
If we allow writes on both sides of the partition, how do we resolve conflicts when the partition
heals?
Distributed systems are hard!
But lots of good research going on to solve these problems…
CRDTs Lasp
SyncFree RAMP transactions
etc.
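The retweet counter from a few slides back is the canonical CRDT example: a grow-only counter (G-Counter) lets each replica increment independently and still converge without coordination. A minimal sketch:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica, merge takes the max."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        # Each replica only ever bumps its own slot, so increments commute.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge regardless of message order or duplication.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()   # two retweets seen by replica a
b.increment()                  # one retweet seen by replica b
a.merge(b); b.merge(a)         # exchange state; both now read 3
```

No "get, add 1, and put" transaction, and no lock: replicas accept writes independently and reconcile by merging state.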
Twitter has 316 million monthly active users. Facebook has 1.49 billion monthly active users. Netflix has 62.3 million streaming subscribers.
How do you build resilient systems at this scale?
Embrace failure.
Provide partial availability.
If an overloaded service is not essential to core business, fail fast to prevent availability or
latency problems upstream.
It’s better to fail predictably than fail in unexpected ways.
Use backpressure to reduce load.
Flow-Control Mechanisms
• Rate limit
• Bound queues/buffers
• Backpressure - drop messages on the floor
• Increment stat counters for monitoring/alerting
• Exponential back-off
• Use application-level acks for critical transactions
Bounding resource utilization and failing fast helps maintain predictable performance and
prevents cascading failures.
Going distributed means more wire time. How do you improve performance?
Cache everything.
–Phil Karlton
“There are only two hard things in computer science: cache invalidation and naming things.”
Embrace asynchrony.
Distributed systems are not just about workload scale, they’re about organizational scale.
In 2010, Workiva released a product to streamline financial reporting.
A specific solution to solve a very specific problem, originally built by a few dozen
engineers.
Fast forward to today: a couple hundred engineers,
more users, more markets, more solutions.
How do you ramp up new products quickly?
You stop thinking in terms of products and start thinking in terms of platform.
From Product to Platform
At this point, going distributed is all but necessary.
Service-Oriented Architecture allows us to independently build, deploy, and scale discrete
parts of the platform.
Loosely coupled services let us tolerate failure. And things fail constantly.
Shit happens — network partitions, hardware failure, GC pauses, latency, dropped packets…
Build resilient systems.
–Ken Arnold
“You have to design distributed systems with the expectation of failure.”
Design for failure.
Consider the trade-off between consistency and availability.
Most important?
Don’t distribute until you have a reason to!
Scale up until you have to scale out.
–Paul Barham
“You can have a second computer once you’ve shown you know how to use the first one.”
And when you do distribute, don’t go overboard.
Walk before you run.
Remember, when it comes to distributed systems…
for every promise there’s a peril.
Thanks!
@tyler_treat
github.com/tylertreat
bravenewgeek.com