DISTRIBUTED ANALYTIC FRAMEWORKS
6.830 / 6.814 LECTURE 19, TIM KRASKA

TRANSCRIPT

Page 1:

DISTRIBUTED ANALYTIC FRAMEWORKS
6.830 / 6.814 LECTURE 19
TIM KRASKA

Page 2:

RECAP: PARALLEL QUERY PROCESSING

Name three ways to parallelize a query

Page 3:

PARALLEL QUERY PROCESSING

Three main ways to parallelize

1. Run multiple queries, each on a different thread
2. Run operators in different threads ("pipeline")

3. Partition data, process each partition in a different processor

[Diagram: pipeline parallelism runs A → filter (Processor 1) → sort (Processor 2); partitioned parallelism runs A1 → filter → sort on Processor 1 and A2 → filter → sort on Processor 2, followed by a merge that runs on one of the processors.]

Page 4:

PARALLEL AND DISTRIBUTED DATABASES

Parallel and Distributed Databases:

Data is fragmented and replicated to different nodes (i.e., servers)

Fragmentation/Replication should be transparent for the user

[Diagram: a heterogeneous setup (Postgres, Oracle, DB2, MS SQL) vs. a homogeneous cluster (DB2 on every node).]

Homogeneous vs. Heterogeneous

Master/Slave vs. P2P

Fragmented vs. Replicated


Page 5:

SCALE-OUT

[Plot: speedup vs. #servers, contrasting ideal linear speedup with the sub-linear speedup seen in practice.]

Page 6:

SCALE-OUT AND ELASTICITY

Scale-out and scale-down in cloud computing = elasticity:

the ability of a system to adapt to changes in load

[Plot: load / throughput over time; an elastic system's throughput tracks the changing load.]

Page 7:

FRAGMENTATION: HORIZONTAL AND VERTICAL PARTITIONING

Horizontal Partitioning: Assigns tuples of one table to individual partitions

Vertical Partitioning: Assigns columns of one table to individual partitions

Goal: Provide intra- and inter-query parallelism


[Diagram: horizontal partitioning splits table T row-wise into partitions P0(T), P1(T), …, Pn-1(T); vertical partitioning splits T column-wise into partitions P0(T), P1(T), …, Pn-1(T).]

Page 8:

HORIZONTAL PARTITIONING: METHODS


Page 9:

CLICKER

You have a data warehouse about sales data. The customer dimension table is mostly accessed using the customer id. What partitioning method would you use for the customer table?

a) Round-robin

b) Hash

c) Range

Page 10:

CLICKER

You have a data warehouse about sales data. The fact table (containing the orders) is mostly filtered by time. What partitioning method would you use?

a) Round-robin

b) Hash

c) Range

Page 11:

HORIZONTAL PARTITIONING: METHODS


Random / Round Robin

• Evenly distributes data (no skew)
• Requires us to repartition for joins
• No pruning

Hash partitioning

• Allows us to perform joins without repartitioning, when tables are partitioned on join attributes

• Only subject to skew when there are many duplicate values

• No pruning

Range partitioning

• Allows us to perform joins without repartitioning, when tables are partitioned on join attributes

• Data can be pruned
• Subject to skew
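These trade-offs can be made concrete with a small sketch (illustrative Python, not from the lecture; function and variable names are invented):

def round_robin_partition(tuples, n):
    # Tuple i goes to partition i % n: even distribution, no skew, no pruning.
    parts = [[] for _ in range(n)]
    for i, t in enumerate(tuples):
        parts[i % n].append(t)
    return parts

def hash_partition(tuples, n, key):
    # hash(key) % n: co-partitions tables that share a join key, but cannot prune ranges.
    parts = [[] for _ in range(n)]
    for t in tuples:
        parts[hash(key(t)) % n].append(t)
    return parts

def range_partition(tuples, bounds, key):
    # First range whose upper bound covers the key: enables pruning, but is subject to skew.
    parts = [[] for _ in range(len(bounds) + 1)]
    for t in tuples:
        k = key(t)
        for i, b in enumerate(bounds):
            if k <= b:
                parts[i].append(t)
                break
        else:
            parts[-1].append(t)
    return parts

orders = [(1, 2009), (2, 2010), (3, 2011), (4, 2012), (5, 2012)]
print(range_partition(orders, [2009, 2010, 2011], key=lambda t: t[1]))
# [[(1, 2009)], [(2, 2010)], [(3, 2011)], [(4, 2012), (5, 2012)]]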

Page 12:

CLICKER

You have a data warehouse about sales data. The fact table (containing the orders) is mostly filtered by time but highly skewed (e.g., many more orders in December). What partitioning method would you use? Can you think of a way to deal with the skew?

a) Round-robin

b) Hash

c) Range

Page 13:

QUESTION

Can you think of a reason why it might NOT be a good idea to range partition on time?

Page 14:

EXAMPLE: RANGE-PARTITIONING

Partition fact table by ranges: <=2009, 2010, 2011, 2012


[Diagram: the Orders table is split into partitions Orders<=2009, Orders=2010, Orders=2011, Orders=2012.]

Page 15:

EXAMPLE: HASH-PARTITIONING

Partition table using hash-function: h(k)=k%4


[Diagram: the fact table (orders) is split into orders0 … orders3 using h(o_cid) = o_cid % 4, and the dimension table (product) into product0 … product3 using h(c_cid) = c_cid % 4.]

Page 16:

PARTITIONING FOR STAR SCHEMA

[Diagram: star schema with fact table Lineitem in the center and dimension tables D1: Payment, D2: Date, D3: Region, D4: Product, each in an N:1 relationship from Lineitem.]

Page 17:

APPROACH: PARTITIONING AND REPLICATION IN STAR SCHEMA

1. Fact table and largest dimension table are co-partitioned

2. Other smaller dimension tables are replicated


Page 18:

EXAMPLE: REPLICATION AND PARTITIONING

Step 1: Co-partition fact and biggest dimension table


[Diagram: Lineitem and the biggest dimension table D4: Product are co-partitioned into the pairs (Lineitem1, Product1) … (Lineitem4, Product4); D1: Payment, D2: Date, and D3: Region stay unpartitioned.]

Page 19:

EXAMPLE: REPLICATION AND PARTITIONING

Step 2: Partitions are allocated to nodes


[Diagram: the co-partitioned pairs are allocated to nodes: Node 1 holds (Lineitem1, Product1), Node 2 (Lineitem2, Product2), Node 3 (Lineitem3, Product3), Node 4 (Lineitem4, Product4).]

Page 20:

EXAMPLE: REPLICATION AND PARTITIONING

Step 3: Replicate non-partitioned dimension tables (D1-D3)

[Diagram: each node additionally receives full copies of D1: Payment, D2: Date, and D3: Region alongside its (Lineitem_i, Product_i) pair.]

Page 21:

EXAMPLE: QUERY PROCESSING IN PDBMS

Compilation: SQL query → logical plan
Localization: logical plan → distributed logical plan

SELECT SUM(F.price), D3.region
FROM F, D3
WHERE F.d3 = D3.id
  AND D3.country = 'Germany'
GROUP BY D3.region

[Plan: γ(D3.region, SUM(F.price)) over σ(D3.country = 'Germany') over (F ⨝(F.d3 = D3.id) D3)]

Page 22:

EXAMPLE: QUERY PROCESSING IN PDBMS

Compilation: SQL query → logical plan
Localization: logical plan → distributed logical plan → naïve logical plan (parallel)

[Diagram: naïve parallel plan running on node N1: γ(D3.region, SUM(F.price)) over σ(D3.country = 'Germany') over the union of (P1(F) ⨝ P1(D3)) on N1 with the partitions P2(F) on nodes N2, N3, N4.]

Page 23:

EXAMPLE: DISTRIBUTED QUERY PROCESSING (GLOBAL OPTIMIZATION)

Global Optimization: minimize network costs

[Diagram: on Node 1, γ(D3.region, SUM(F.price)) over σ(D3.country = 'Switzerland') over (P1(F) ⨝ P1(D3)); on Node 2, the same subplan over (P2(F) ⨝ P2(D3)); a union combines the two partial results.]

Page 24:

SELECTIONS IN PDBMS

SELECT * FROM Emp WHERE salary > 1000

[Diagram: each partition applies σ(salary > 1000) locally, one over {(Helga, 2000), (Hubert, 150)} and one over {(Peter, 20), (Rhadia, 15000)}; a union combines the results.]

Page 25:

SELECTIONS IN PDBMS

Approach

each server has a (horizontal) partition of the DB

each server carries out selection (scan+filter) locally

each server sends results to dedicated server

dedicated server carries out union (U) for final result

Assessment

scales almost linearly

skew in communicating results may limit scalability


Page 26:

RECAP: DISTRIBUTED JOIN STRATEGIES

If tables are partitioned properly, just run local joins

• Also, if one table is replicated everywhere, no data movement is needed for the join

Otherwise, several options:

• Collect all tables at one node
  • Inferior except in extreme cases, i.e., very small tables

• Re-partition one or both tables
  • Depending on initial partitioning

• Replicate (smaller) table on all nodes

Page 27:

JOIN: REPLICATE

SELECT * FROM R JOIN S ON R.b = S.c

[Diagram: S's partitions are broadcast so that one node computes P0(R) ⨝ (P0(S) ∪ P1(S)) and the other P1(R) ⨝ (P0(S) ∪ P1(S)); a union combines the local join results.]

Page 28:

JOIN: REPLICATE

Approach

Ship (all partitions of) Table 2 to all servers

carry out join of Pi(T1) and T2 at each server

compute union of all local joins

Assessment

scales well if there is an efficient broadcast and Table 2 is small

even better if Table T2 is already replicated everywhere

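A minimal sketch of this strategy in Python (illustrative names; a real system would ship T2 over the network):

def broadcast_join(partitions_T1, T2, key1, key2):
    # Replicate join: T2 is shipped to every worker; each worker joins its
    # partition of T1 against the full T2 locally; the results are unioned.
    result = []
    for part in partitions_T1:                  # one iteration = one worker
        small = {}
        for s in T2:                            # T2 arrives via broadcast
            small.setdefault(key2(s), []).append(s)
        for r in part:                          # local hash join
            for s in small.get(key1(r), []):
                result.append((r, s))
    return result

R_parts = [[(1, 'a'), (2, 'b')], [(3, 'c'), (1, 'd')]]   # R across 2 nodes
S = [(1, 'x'), (3, 'y')]                                 # small table
print(broadcast_join(R_parts, S, key1=lambda r: r[0], key2=lambda s: s[0]))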

Page 29:

CO-LOCATION

• SELECT * FROM A, B WHERE A.a = B.b

• Suppose we have hashed A on a, using hash function F to get F(A.a) → 1..n (n = # machines)

• Also hash B on b using same F

[Diagram: Processor i holds co-partitioned Ai and Bi for i = 1..n; each processor joins its pair locally and a merge combines the results.]
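Why co-location makes the join local can be seen in a tiny sketch (Python's built-in hash stands in for F here):

def node_of(key, n):
    # Both tables use the same hash function F, so A- and B-tuples that
    # share a join key always land on the same machine.
    return hash(key) % n

n = 4
for key in [1, 5, 9]:
    print(key, '->', node_of(key, n))
# Matching keys of A and B map to the same node, so no tuples move for the join.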

Page 30:

REPARTITION

• Suppose A pre-partitioned on a, but B needs to be repartitioned

[Diagram: Processor i holds Ai and Bi; each processor runs a split operator that routes its B-tuples, by join key, to the processor owning the matching A partition.]

Page 31:

REPARTITION

• Suppose A pre-partitioned on a, but B needs to be repartitioned

[Diagram: each Processor i splits its Bi by join key and ships the pieces to their target processors; after the shuffle, Processor i computes Ai ⨝ Bi' and a merge combines the results.]

Generalizes to the case of repartitioning both tables

Page 32:

REPARTITION

• Suppose A pre-partitioned on a, but B needs to be repartitioned

[Diagram: the same shuffle, annotated with costs: each processor holds |B|/n bytes of B and sends |B|/n/n bytes to each other processor.]

Each node sends and receives |B|/n * (n-1)/n bytes

Page 33:

Replication vs. Repartitioning

• Replication requires each node to send the smaller table to all other nodes
  • |T|/n * (n-1) bytes sent by each node

• When would replication be preferred over repartitioning for joins?
  • If the size of the smaller table < the data sent to repartition one or both tables
  • Should also account for the cost of the join itself, since it will be higher with a replicated table

• E.g., |B| = 1 MB, |A| = 100 MB, n = 3, need to repartition A
  • Data to repartition A is |A|/3 * 2/3 = 22.22 MB per node; each node then joins .33 MB of B against 33 MB of A
  • Data to broadcast B is |B|/3 * 2 = .66 MB per node; each node then joins 1 MB of B against 33 MB of A
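A quick sanity check of these numbers (Python; formulas as on the slide):

MB = 1.0
A, B, n = 100 * MB, 1 * MB, 3

# Repartition: each node holds |A|/n and keeps 1/n of that share locally.
print(A / n * (n - 1) / n)   # 22.22 MB shuffled per node

# Broadcast: each node ships its |B|/n share to the other n-1 nodes.
print(B / n * (n - 1))       # ≈ 0.66 MB sent per node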

Page 34:

Aggregation

[Diagram: A1 → filter → agg on Processor 1 and A2 → filter → agg on Processor 2; a merge, running on one of the processors, combines the partial aggregates.]

In general, each node will have data for the same groups

So merge will need to combine groups, e.g.:

MAX(MAX1, MAX2)
SUM(SUM1, SUM2)

What about average? Maintain SUMs and COUNTs, and combine them in the merge step

Page 35:

Generalized Parallel Aggregates

• Express aggregates as 3 functions:
  • INIT – create a partial aggregate value
  • MERGE – combine 2 partial aggregates
  • FINAL – compute the final aggregate

• E.g., AVG:
  • INIT(tuple) → (SUM=tuple.value, COUNT=1)
  • MERGE(a1, a2) → (SUM=a1.SUM + a2.SUM, COUNT=a1.COUNT + a2.COUNT)
  • FINAL(a) → a.SUM / a.COUNT
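A runnable sketch of the AVG example (the driver lines at the bottom stand in for the framework):

from functools import reduce

def init(value):
    return {'SUM': value, 'COUNT': 1}            # partial aggregate for one tuple

def merge(a1, a2):
    return {'SUM': a1['SUM'] + a2['SUM'],        # combine two partials
            'COUNT': a1['COUNT'] + a2['COUNT']}

def final(a):
    return a['SUM'] / a['COUNT']                 # finish AVG

partition1, partition2 = [10, 20], [30]          # data on two processors
p1 = reduce(merge, (init(v) for v in partition1))
p2 = reduce(merge, (init(v) for v in partition2))
print(final(merge(p1, p2)))                      # 20.0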

Page 36:

What does MERGE do?

• For aggregate queries, receives partial aggregates from each processor, MERGEs and FINALizes them

• For non-aggregates, just UNIONs results

Page 37:

Page 38:

WHAT IF WE HAVE TOO MUCH DATA?

Internet Scale :)

To analyze large datasets

Using nodes (i.e. servers) in a cluster in parallel


Page 39:

WHAT IS MAPREDUCE?

Simple programming model for data-intensive computing on large commodity clusters

Pioneered by Google

• Processes PBs of data per day (e.g., processing user logs, web crawls, …)

Popularized by Apache Hadoop project (Yahoo)

• Used by Yahoo!, Facebook, Amazon, …

Many other MapReduce-based frameworks today

Page 40:

WHAT IS MAPREDUCE USED FOR?

• At Google:
  – Index building for Google Search
  – Analyzing search logs (Google Trends)
  – Article clustering for Google News
  – Statistical language translation

• At Facebook:
  – Data mining
  – Ad optimization
  – Machine learning (e.g., spam detection)

Page 41:

EXAMPLE: GOOGLE TRENDS

https://www.google.com/trends/

Page 42:

WHAT ELSE IS MAPREDUCE USED FOR?

In research:

• Topic distribution in Wikipedia (PARC)
• Natural language processing (CMU)
• Climate simulation (UW)
• Bioinformatics (Maryland)
• Particle physics (Nebraska)
• <Your application here>

Page 43:

MAPREDUCE GOALS

• Scalability to large data volumes:
  – Scan 100 TB on 1 node @ 50 MB/s = 24 days
  – Scan on 1000-node cluster = 35 minutes

• Cost-efficiency:
  – Commodity nodes (cheap, but unreliable)
  – Commodity network (low bandwidth)
  – Automatic fault-tolerance (fewer admins)
  – Easy to use (fewer programmers)
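The scan-time arithmetic checks out (a quick sketch; the slide rounds the results):

TB, MBps = 10**12, 10**6                 # bytes, bytes/second
seconds = 100 * TB / (50 * MBps)         # one node scanning 100 TB
print(seconds / 86400)                   # ≈ 23 days on 1 node
print(seconds / 1000 / 60)               # ≈ 33 minutes on 1000 nodes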

Page 44:

ARCHITECTURE & PROGRAMMING MODEL

MAPREDUCE & HADOOP

Page 45:

HADOOP 1.0: ARCHITECTURE

Open-Source MapReduce System:

• High-level Programming: Hive ("SQL"), Pig (Data-Flow), Others (Mahout, Giraph)
• Distributed Execution: MapReduce (Programming Model and Runtime System)
• Distributed Storage: Hadoop Distributed Filesystem (HDFS)

Page 46:

MAIN HADOOP COMPONENTS

HDFS = Hadoop Distributed File System

• Single namespace for entire cluster
• Replicates data 3x for fault-tolerance

MapReduce framework

• Runs jobs submitted by users
• Manages work distribution & fault-tolerance
• Co-locates work with the file system

Page 47:

TYPICAL HADOOP CLUSTER

• 40 nodes/rack, 1000-4000 nodes in cluster
• 1 Gbps bandwidth in rack, 8 Gbps out of rack
• Node specs (Facebook): 8-16 cores, 32 GB RAM, 8×1.5 TB disks, no RAID

[Diagram: racks of nodes connected through rack switches to an aggregation switch.]

Page 48:

TYPICAL HADOOP CLUSTER

Page 49:

CHALLENGES OF COMMODITY CLUSTERS

Cheap nodes fail, especially when you have many
• Mean time between failures for 1 node = 3 years
• MTBF for 1000 nodes = 1 day
• Solution: Build fault tolerance into the system

Commodity network = low bandwidth
• Solution: Push computation to the data

Programming in a cluster is hard
• Parallel programming is hard
• Distributed parallel programming is even harder
• Solution: Restricted programming model: users write data-parallel map and reduce functions, and the system handles work distribution and failures

Page 50:

HADOOP DISTRIBUTED FILE SYSTEM

• Files split into 64 MB blocks

• Blocks replicated across several datanodes (often 3)

• Namenode stores metadata (file names, locations, etc)

• Optimized for large files, sequential reads

• Files are append-only

[Diagram: the Namenode holds the block list for File1 (blocks 1-4); the Datanodes each store a subset of the blocks (e.g., 1,2,4 / 2,1,3 / 1,4,3 / 3,2,4), so every block exists on 3 nodes.]

Page 51:

MAPREDUCE PROGRAMMING MODEL

Data type: key-value records

Map function:
(K_in, V_in) → list(K_inter, V_inter)

Reduce function:
(K_inter, list(V_inter)) → list(K_out, V_out)

Page 52:

EXAMPLE: WORD COUNT

def mapper(line):
    for word in line.split():
        emit(word, 1)

def reducer(key, values):   # values = {1, 1, 1, …}
    emit(key, count(values))

Page 53:

WORD COUNT EXECUTION

[Diagram: Input → Map → Shuffle & Sort → Reduce → Output. Three input splits ("the quick brown fox", "the fox ate the mouse", "how now brown cow") are mapped to pairs like (the,1), (brown,1), (fox,1), …; the shuffle groups them, e.g. brown,{1,1} and fox,{1,1}; the reducers output (brown,2), (fox,2), (how,1), (now,1), (the,3) and (ate,1), (cow,1), (mouse,1), (quick,1).]

Page 54:

AN OPTIMIZATION: THE COMBINER

Local reduce function for repeated keys produced by same map

• For associative operations like sum, count, max

• Decreases amount of intermediate data

Example: local counting for Word Count:

def combiner(key, values):
    output(key, sum(values))

def reducer(key, values):
    output(key, sum(values))

Page 55:

WORD COUNT WITH COMBINER

[Diagram: the same word count, but each map runs the combiner locally, so the map over "the fox ate the mouse" emits (the,2) instead of two separate (the,1) pairs; the shuffle then groups e.g. the,{2,1}, and the reduce output is unchanged.]

Page 56:

CLICKER QUESTION

You want to implement the dot product a1*b1 + a2*b2 + … + an*bn of two large vectors a=[a1, a2,…,an] and b=[b1, b2,…,bn]. The input to the mapper is a pair (ai,bi).

Which of the following implementations is correct?

A)
def mapper(a, b):
    emit(a*b, 1)

def reducer(key, values):
    emit(key, sum(values))

B)
def mapper(a, b):
    emit(1, a+b)

def reducer(key, values):
    emit(key, sum(values))

C)
def mapper(a, b):
    emit(1, a*b)

def reducer(key, values):
    emit(key, mult(values))

D)
def mapper(a, b):
    emit(1, a*b)

def reducer(key, values):
    emit(key, sum(values))

Page 57:

MAPREDUCE EXECUTION DETAILS

Mappers preferentially scheduled on same node or same rack as their input block

• Minimize network use to improve performance

Mappers save outputs to local disk before serving to reducers

• Allows recovery if a reducer crashes

Reducers save outputs to HDFS

Page 58:

FAULT TOLERANCE IN MAPREDUCE

1. If a task crashes:

• Retry on another node
  • OK for a map because it had no dependencies
  • OK for a reduce because map outputs are on disk
• If the same task repeatedly fails, fail the job or ignore that input block

Note: For the fault tolerance to work, user tasks must be deterministic and side-effect-free

Page 59:

FAULT TOLERANCE IN MAPREDUCE

2. If a node crashes:

• Relaunch its current tasks on other nodes
• Relaunch any maps the node previously ran
  • Necessary because their output files were lost along with the crashed node

Page 60:

FAULT TOLERANCE IN MAPREDUCE

3. If a task is going slowly (straggler):

• Launch a second copy of the task on another node
• Take the output of whichever copy finishes first, and kill the other one

Critical for performance in large clusters (many possible causes of stragglers)

Page 61:

TAKEAWAYS

By providing a restricted data-parallel programming model, MapReduce can control job execution in useful ways:

• Automatic division of job into tasks
• Placement of computation near data
• Load balancing
• Recovery from failures & stragglers

Page 62:

SAMPLE APPLICATIONS

MAPREDUCE & HADOOP

Page 63:

INVERTED INDEX

Input: (filename, text) records
Output: list of files containing each word

Map:

for word in text.split():
    output(word, filename)

Combine: uniquify filenames for each word

Reduce:

def reduce(word, filenames):
    output(word, sort(filenames))

Page 64:

INVERTED INDEX EXAMPLE

[Diagram: hamlet.txt ("to be or not to be") maps to (to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt); 12th.txt ("be not afraid of greatness") maps to (be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt). The reduce output is:

afraid, (12th.txt)
be, (12th.txt, hamlet.txt)
greatness, (12th.txt)
not, (12th.txt, hamlet.txt)
of, (12th.txt)
or, (hamlet.txt)
to, (hamlet.txt)]

Page 65:

IN CLASS TASK

1. Sort all words of all documents alphabetically

2. Return the most popular words of all documents

Page 66:

SORT

Input: (key, value) records

Output: same records, sorted by key

Map: identity function

Reduce: identity function

Trick: Pick a partitioning function p such that k1 < k2 => p(k1) < p(k2)

[Diagram: the maps pass through the words pig, sheep, yak, zebra, aardvark, ant, bee, cow, elephant; the partitioner routes keys in [A-M] to one reducer and [N-Z] to the other, so the reducer outputs (aardvark, ant, bee, cow, elephant) and (pig, sheep, yak, zebra) concatenate into a global sort.]
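A sketch of such an order-preserving partitioner (illustrative Python; two reducers as in the figure):

def partition(word):
    # k1 < k2 implies partition(k1) <= partition(k2), so concatenating the
    # reducers' sorted outputs yields a globally sorted result.
    return 0 if word[0].lower() <= 'm' else 1    # [A-M] -> 0, [N-Z] -> 1

words = ['pig', 'sheep', 'yak', 'zebra', 'aardvark', 'ant', 'bee', 'cow', 'elephant']
buckets = [[], []]
for w in words:
    buckets[partition(w)].append(w)
print(sorted(buckets[0]) + sorted(buckets[1]))   # globally sorted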

Page 67:

MOST POPULAR WORDS

Input: (filename, text) records
Output: the 100 words occurring in most files

Two-stage solution:

• MapReduce Job 1: Create inverted index, giving (word, list(file)) records

• MapReduce Job 2: Map each (word, list(file)) to (count, word); sort these records by count as in the sort job (see the sketch below)
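In plain Python, the two stages amount to this (a sketch; a real deployment would run each stage as a MapReduce job):

from collections import defaultdict

docs = {'hamlet.txt': 'to be or not to be',
        '12th.txt': 'be not afraid of greatness'}

# Job 1: inverted index, (word, set of files)
index = defaultdict(set)
for fname, text in docs.items():
    for word in text.split():
        index[word].add(fname)

# Job 2: map to (count, word), then sort by count as in the sort job
ranked = sorted(((len(files), word) for word, files in index.items()), reverse=True)
print([word for count, word in ranked[:3]])   # top 3 instead of top 100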

Page 68:

HADOOP APIS

MAPREDUCE & HADOOP

Page 69:

INTRODUCTION TO HADOOP

Download from hadoop.apache.org

To install locally, unzip and set JAVA_HOME

Docs: hadoop.apache.org/common/docs/current

Three ways to write jobs:

• Java API
• Hadoop Streaming (for Python, Perl, etc.): uses stdin and stdout
• Pipes API (C++)

Page 70:

WORD COUNT IN JAVA

public static class MapClass extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable ONE = new IntWritable(1);

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      output.collect(new Text(itr.nextToken()), ONE);
    }
  }
}

Page 71:

WORD COUNT IN JAVA

public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}

Page 72:

WORD COUNT IN JAVA

public static void main(String[] args) throws Exception {
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");

  conf.setMapperClass(MapClass.class);
  conf.setCombinerClass(Reduce.class);
  conf.setReducerClass(Reduce.class);

  FileInputFormat.setInputPaths(conf, args[0]);
  FileOutputFormat.setOutputPath(conf, new Path(args[1]));

  conf.setOutputKeyClass(Text.class);           // out keys are words (strings)
  conf.setOutputValueClass(IntWritable.class);  // values are counts

  JobClient.runJob(conf);
}

Page 73:

HADOOP STREAMING

Mapper.py:

import sys

for line in sys.stdin:
    for word in line.split():
        print(word.lower() + "\t" + "1")

Reducer.py:

import sys

counts = {}
for line in sys.stdin:
    word, count = line.split("\t")
    counts[word] = counts.get(word, 0) + int(count)

for word, count in counts.items():
    print(word + "\t" + str(count))
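These scripts would typically be launched along the following lines (the path to the streaming jar varies by installation, so treat this as a sketch):

hadoop jar hadoop-streaming.jar \
    -input input_dir -output output_dir \
    -mapper mapper.py -reducer reducer.py \
    -file mapper.py -file reducer.py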

Page 74:

PIG & HIVE

MAPREDUCE & HADOOP

Page 75:

MOTIVATION

MapReduce is powerful: many algorithms can be expressed as a series of MR jobs

But it’s fairly low-level: must think about keys, values, partitioning, etc.

Can we capture common “job patterns”?

Page 76:

PIG

Started at Yahoo! Research

Features:

• Expresses sequences of MapReduce jobs
• Data model: nested bags of items
• Provides relational (SQL) operators (JOIN, GROUP BY, etc.)
• Easy to plug in Java functions

Page 77:

AN EXAMPLE PROBLEM

Suppose you have user data in one file, website data in another, and you need to find the top 5 most visited pages by users aged 18-25.

[Dataflow: Load Users, Load Pages → Filter by age → Join on name → Group on url → Count clicks → Order by clicks → Take top 5]

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 78:

IN MAPREDUCE

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 79:

IN PIG LATIN

Users = load 'users' as (name, age);
Filtered = filter Users by age >= 18 and age <= 25;
Pages = load 'pages' as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group, count(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;

store Top5 into 'top5sites';

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 80:

TRANSLATION TO MAPREDUCE

Notice how naturally the components of the job translate into Pig Latin.

[Diagram: the dataflow steps map onto the Pig Latin statements (Users = load …, Filtered = filter …, Pages = load …, Joined = join …, Grouped = group …, Summed = … count() …, Sorted = order …, Top5 = limit …), which Pig compiles into three MapReduce jobs: Job 1 for load/filter/join, Job 2 for group/count, Job 3 for order/limit.]

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 81:

HIVE

Developed at Facebook

Used for most Facebook jobs

SQL-like interface built on Hadoop

• Maintains table schemas
• SQL-like query language (which can also call Hadoop Streaming scripts)
• Supports table partitioning, complex data types, sampling, some query optimization

Page 82:

SUMMARY

MapReduce’s data-parallel programming model hides complexity of distribution and fault tolerance

Principal philosophies:

• Make it scale, so you can throw hardware at problems
• Make it cheap, saving hardware, programmer, and administration costs (but necessitating fault tolerance)

Hive and Pig further simplify programming

MapReduce is not suitable for all problems, but when it works, it may save you a lot of time

Page 83:

OTHER SYSTEMS

More general execution engines
• Dryad (Microsoft): general task DAG
• Pregel (Google): in-memory iterative graph algorithms
• Spark (Berkeley): general in-memory computing

Language-integrated interfaces
• Run computations directly from the host language
• DryadLINQ (MS), FlumeJava (Google), Spark

Page 84:

SPARK MOTIVATION

MapReduce simplified “big data” analysis on large, unreliable clusters

But as soon as organizations started using it widely, users wanted more:

• More complex, multi-stage applications• More interactive queries• More low-latency online processing

Page 85:

SPARK MOTIVATION

Complex jobs, interactive queries and online processing all need one thing that MR lacks:

Efficient primitives for data sharing

[Diagram: three such workloads: an iterative job (Stage 1 → Stage 2 → Stage 3), interactive mining (Query 1, Query 2, Query 3 over the same data), and stream processing (Job 1, Job 2, …).]

Page 86:

EXAMPLES

[Diagram: an iterative MR job reads from and writes to HDFS between every step (Input → HDFS read → iter. 1 → HDFS write → HDFS read → iter. 2 → …), and each interactive query (query 1, 2, 3) re-reads the input from HDFS to produce its result.]

Page 87:

EXAMPLES

[Diagram: the same iterative and interactive examples as on the previous slide.]

Problem: in MR, the only way to share data across jobs is stable storage (e.g., a file system) → slow!

Page 88:

GOAL: IN-MEMORY DATA SHARING

[Diagram: with distributed memory, the iterative job passes data between iter. 1, iter. 2, … in memory, and interactive queries (query 1, 2, 3) run against the in-memory dataset after one-time processing of the input.]

10-100× faster than network and disk

Page 89:

SOLUTION: RESILIENT DISTRIBUTED DATASETS (RDDS)

Partitioned collections of records that can be stored in memory across the cluster

Manipulated through a diverse set of transformations (map, filter, join, etc)

Fault recovery without costly replication

• Remember the series of transformations that built an RDD (its lineage) to recompute lost data

Page 90:

EXAMPLE: LOG MINING

Load error messages from a log into memory, then interactively search for various patterns (code is Scala):

lines = spark.textFile("hdfs://...")          // Base RDD
errors = lines.filter(_.startsWith("ERROR"))  // Transformed RDD
messages = errors.map(_.split('\t')(2))
messages.cache()

messages.filter(_.contains("foo")).count
messages.filter(_.contains("bar")).count
. . .

[Diagram: the Driver ships tasks to Workers, each reading one input block (Block 1-3); the workers build in-memory caches (Cache 1-3) of the messages RDD and send results back.]

Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data)

Result: scaled to 1 TB data in 5-7 sec (vs 170 sec for on-disk data)
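For comparison, a rough PySpark equivalent of the Scala snippet (a sketch; assumes an existing SparkContext sc and a reachable HDFS path):

lines = sc.textFile('hdfs://...')
errors = lines.filter(lambda l: l.startswith('ERROR'))
messages = errors.map(lambda l: l.split('\t')[2])
messages.cache()                                   # keep the RDD in memory

print(messages.filter(lambda l: 'foo' in l).count())
print(messages.filter(lambda l: 'bar' in l).count())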

Page 91:

Fault Tolerance

Page 92:

Scheduling Stages

[Diagram: the scheduler's stage DAG; partitions of a not-cached RDD must be recomputed, while cached RDD partitions cut recomputation short.]

Page 93:

EXAMPLE: LOGISTIC REGRESSION

Find best line separating two sets of points

[Diagram: two point sets (+ and -) in the plane, a random initial line, and the target separating line.]

Page 94:

LOGISTIC REGRESSION CODE

val data = spark.textFile(...).map(readPoint).cache()

var w = Vector.random(D)

for (i <- 1 to ITERATIONS) {
  val gradient = data.map(p =>
    (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient
}

println("Final w: " + w)

Page 95:

LOGISTIC REGRESSION PERFORMANCE

[Chart: Hadoop takes 127 s per iteration; Spark takes 174 s for the first iteration and 6 s for further iterations once the data is cached.]

Page 96:

OTHER PROJECTS

Hive on Spark (SparkSQL): SQL engine

Spark Streaming: incremental processing with in-memory state

MLLib: Machine learning library

GraphX: Graph processing on top of Spark

Page 97:

OTHER RESOURCES

Hadoop: http://hadoop.apache.org/common

Pig: http://hadoop.apache.org/pig

Hive: http://hadoop.apache.org/hive

Spark: http://spark-project.org

Hadoop video tutorials: www.cloudera.com/hadoop-training

Amazon Elastic MapReduce: http://aws.amazon.com/elasticmapreduce/