In-Memory Data Grids (with examples of Hazelcast)
TRANSCRIPT
SUPERCOMPUTING FOR THE REST OF US
IN-MEMORY DATA-GRIDS
RALPH WINZINGER, SENACOR TECHNOLOGIES
WHO AM I?
• Senior Technical Leader @ Senacor Technologies
• evaluating new technologies
• planning of the Senacor-internal academy
• hacker :-)
• Senacor Technologies, several offices in Germany
• partner for large IT transformations @ big brands
• finance, logistics, industry, insurance, government
NEW CHALLENGES DUE TO DIGITIZATION: PHYSICAL MEETS DIGITAL
• nature of applications is changing
• increased amount of connectivity and communication
• increased amount of data collected from sensors and apps
• expectations of customers are changing
• services have to be available at any time
• responses need to be delivered immediately
MORE DATA, MORE REQUESTS, HIGHER AVAILABILITY, HIGHER PERFORMANCE?
MASSIVE PARALLEL APPROACH: DOES IT SCALE?
[timeline 2000-2010, starting from Web 2.0]
• 2003: Google Distributed Filesystem
• 2004: Google MapReduce
• 2006: Google BigTable
• 2007: Amazon Dynamo
• 2009: Facebook Cassandra
scaling to billions of requests per day with commodity hardware
- scaling out -
SCALE UP VS. SCALE OUT: DOES IT SCALE?
• scaling up is easy but quickly gets expensive
• every piece of technology has upper limits
• scaling out is cheap but has certain drawbacks
• clustering has been a commodity for many years now, but it primarily addresses logic, not data
• synchronization issues
EVOLUTION OF PERFORMANCE AND PRICING: CAPABILITIES & COSTS
[chart: evolution of network latency, memory capacity, and price over time]
THIS IS THE BASIS FOR IN-MEMORY DATA GRIDS
IN-MEMORY DATA GRIDS: JUST KEEP IT IN MIND
• IN-MEMORY DATA
• all data needed is supposed to be kept in memory
• HEAP / RAM is becoming a first class citizen
• GRID
• too big for one node, so data is distributed across the cluster
• already a couple of players out there
• Hazelcast, Oracle, Terracotta, Infinispan, GridGain, …
MEMORY x1 / NETWORK x100 / DISK x1000 (relative access latency)
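To make this concrete, here is a minimal sketch of an embedded Hazelcast member writing to a distributed map; the map name "customers" and the entry are made up for illustration, and every further JVM running the same code would join the grid:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class GridHello {
    public static void main(String[] args) {
        // start an embedded cluster member; further JVMs running the same
        // code discover each other and form a grid
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // a distributed map - entries are partitioned across all members
        Map<String, String> customers = hz.getMap("customers");
        customers.put("42", "Jane Doe");

        System.out.println(customers.get("42"));

        hz.shutdown();
    }
}
```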
EMBEDDED OR CLIENT/SERVER IMDG APPROACH: MAKI AND NIGIRI
• embedded IMDG
• every node instance of an app contributes to the overall memory
• client / server
• dedicated memory cluster, separate from the application
[diagram: embedded - NODE1..NODE4 each run the APP and a slice of the grid MEMORY in the same JVM; client/server - NODE1..NODE4 run only the APP and talk to a dedicated memory cluster]
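A rough sketch of both topologies in code, assuming default configuration so the client can reach the embedded member on localhost; the hazelcast-client artifact is needed for the client part, and the map name is made up:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class Topologies {
    public static void main(String[] args) {
        // embedded: this JVM is both application and grid member,
        // contributing its own heap to the cluster
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // client/server: the application only connects to a memory cluster
        // that runs in separate JVMs (here: the member started above)
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        member.getMap("orders").put("1", "embedded write");
        System.out.println(client.getMap("orders").get("1"));

        client.shutdown();
        member.shutdown();
    }
}
```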
DISTRIBUTED DATA AND THE CAP THEOREM: GO, CHOOSE TWO OF THEM!
or even better: "drop one of them"
• partition tolerance: actually no choice - as long as we are in a network, partitions will happen
• favour consistency: use a quorum - if there are enough nodes with the same data, that is the truth; might get expensive
• favour availability: tolerate a "split brain" and keep on working; might get hard to merge
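The quorum idea maps to what Hazelcast 3.x calls a quorum (later renamed split-brain protection). A hedged sketch of how a map could require a minimum cluster size before serving operations; the quorum name, map name and member count are illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.QuorumConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class QuorumExample {
    public static void main(String[] args) {
        Config config = new Config();

        // the map only accepts operations while at least 3 members are visible,
        // so the minority side of a split brain stops serving (consistency over availability)
        QuorumConfig quorum = new QuorumConfig("at-least-three", true, 3);
        config.addQuorumConfig(quorum);

        MapConfig mapConfig = new MapConfig("accounts");
        mapConfig.setQuorumName("at-least-three");
        config.addMapConfig(mapConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```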
HIGH-DENSITY DATA: HONEY, I SHRUNK THE DATA
• serialization has massive impact on
• performance - how fast can data be de-/serialized?
• throughput - how big is data on the wire?
• volume - how much data can be put in memory?
• go & compare Java, XML, JSON, Protobuf, Cap'n Proto, Thrift, …
• … and be surprised!
• "Hypercast" = Hazelcast + C24 Preon
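Not an IMDG feature per se, but a quick way to see why the format matters: a small, self-contained sketch comparing default Java serialization with a hand-written JSON string for the same (made-up) Trade record:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class SerializationFootprint {
    static class Trade implements Serializable {
        private static final long serialVersionUID = 1L;
        String isin = "DE0001234567";
        long quantity = 1_000;
        double price = 101.25;
    }

    public static void main(String[] args) throws Exception {
        Trade trade = new Trade();

        // default Java serialization
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(trade);
        }

        // hand-written JSON, just for comparison
        String json = "{\"isin\":\"DE0001234567\",\"quantity\":1000,\"price\":101.25}";

        System.out.println("java serialization: " + bos.size() + " bytes");
        System.out.println("json:               "
                + json.getBytes(StandardCharsets.UTF_8).length + " bytes");
    }
}
```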
OFF-HEAP MEMORY: LEAVING THE SANDBOX
• IMDGs keep lots of data in memory - say hello to our friend, the garbage collector!
• organizational overhead will be present if millions of objects are stored on the heap
• tuning and a deep understanding of garbage collection are mandatory
• off-heap memory to the rescue
• data is not stored on the heap but in explicitly allocated areas
• IMDG is responsible for deallocating memory
sun.misc.Unsafe
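A minimal sketch of what off-heap storage looks like underneath, using sun.misc.Unsafe directly; real grids hide this behind their storage engines (or use direct ByteBuffers), but the point about explicit deallocation stands:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffHeapSketch {
    public static void main(String[] args) throws Exception {
        // Unsafe is not meant for application code - IMDG vendors use it
        // (or ByteBuffer.allocateDirect) to manage memory outside the heap
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long address = unsafe.allocateMemory(8);   // 8 bytes, invisible to the GC
        unsafe.putLong(address, 42L);
        System.out.println(unsafe.getLong(address));

        // nobody collects this for us - the owner must free it explicitly
        unsafe.freeMemory(address);
    }
}
```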
DATA SHARDING & ELASTICITY: WHERE DID IT GO?
• scaling out with distributed data only makes sense when data is partitioned - how to find the right partition?
• an IMDG is quite close to a HashMap - partitions are buckets
• partitionId = hashCode() % num_partitions
• now think of a distributed HashMap - partitions are scattered over our cluster
[diagram: partitions 1 … P spread across NODE 1 to NODE N; when NODE N+1 joins, partitions migrate so every node ends up owning a roughly equal share]
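A sketch of the partition lookup, assuming Hazelcast's default of 271 partitions; the key is hypothetical, and the grid itself hashes the serialized key rather than calling hashCode(), so the naive value only approximates the idea:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Partition;

public class WhereDidItGo {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        String key = "customer-4711";

        // the naive formula from the slide
        int numPartitions = 271;   // Hazelcast's default partition count
        int naive = Math.abs(key.hashCode()) % numPartitions;
        System.out.println("naive partition id: " + naive);

        // what the grid actually decides (hash of the serialized key)
        Partition partition = hz.getPartitionService().getPartition(key);
        System.out.println("partition " + partition.getPartitionId()
                + " owned by " + partition.getOwner());

        hz.shutdown();
    }
}
```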
ELDEST MEMBER VS. CENTRAL MANAGEMENT: HAVING A PARTY
• there is no central management instance in a (Hazelcast) IMDG cluster, no single point of failure
• autodiscovery via network broadcast
• there is always one node which knows all other members - like the first person at a party who gets introduced to all the other guests
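A sketch of the corresponding join configuration; multicast autodiscovery is shown enabled, and the TCP/IP member list (the addresses are placeholders) is the usual fallback where multicast is not available:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;

public class Discovery {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();

        // multicast autodiscovery: new members announce themselves and the
        // oldest member coordinates partition assignment - no central server
        join.getMulticastConfig().setEnabled(true);

        // alternative for environments without multicast: a fixed member list
        // (kept disabled here, addresses are placeholders)
        join.getTcpIpConfig().setEnabled(false);
        join.getTcpIpConfig().addMember("10.0.0.1");
        join.getTcpIpConfig().addMember("10.0.0.2");

        Hazelcast.newHazelcastInstance(config);
    }
}
```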
FAILOVER: AND IF I PULLED THE PLUG???
• data is not only sharded but also stored redundantly, so the cluster can recover from failing nodes
[diagram sequence: NODE 1 to NODE 4 each own primary partitions 1, 2, 3 and additionally hold backup partitions (1', 2', 3') belonging to other nodes; when one node fails, the surviving nodes promote their backups to primaries and create fresh backup copies]
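How many backup copies exist per partition is configurable. A sketch using the map configuration API; the map name and counts are illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;

public class BackupSetup {
    public static void main(String[] args) {
        Config config = new Config();

        MapConfig mapConfig = new MapConfig("orders");
        mapConfig.setBackupCount(1);        // one synchronous backup copy per partition
        mapConfig.setAsyncBackupCount(1);   // plus one asynchronous copy, if desired
        config.addMapConfig(mapConfig);

        Hazelcast.newHazelcastInstance(config);
    }
}
```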
DISTRIBUTED COMPUTING IN AN IMDG: DIVIDE AND CONQUER
• reading data from the cluster and processing it is a straightforward approach - but not always clever
• it might also be feasible to send algorithms to the cluster and distribute processing
• MapReduce
• Hazelcast has built-in support for distributed executors
• think of serializable Runnables or Callables which can be sent to and executed on a different node
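A sketch of a distributed executor in Hazelcast; the executor name and the WordCount task are made up, but the pattern is the one described above: a serializable task shipped to some member and executed there:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class DistributedWork {
    // the task travels over the wire, so it must be serializable
    static class WordCount implements Callable<Integer>, Serializable {
        private final String text;
        WordCount(String text) { this.text = text; }
        @Override public Integer call() { return text.split("\\s+").length; }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        IExecutorService executor = hz.getExecutorService("workers");
        Future<Integer> result = executor.submit(new WordCount("divide and conquer"));
        System.out.println("words: " + result.get());

        hz.shutdown();
    }
}
```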
HAZELCAST FEATURES: NOW, WHAT'S INSIDE?
CODE DEMO: LET'S GET OUR HANDS DIRTY!
• not production grade!
• check it out on GitHub
THANKS!
@rwinz [email protected] https://github.com/rwinzinger