
Parallel Processing - MapReduce and FlumeJava

Amir H. Payberah
[email protected]

2020-09-13

Page 2: Parallel Processing - MapReduce and FlumeJava

The Course Web Page

https://id2221kth.github.io

https://tinyurl.com/f6x544h


Where Are We?


What do we do when there is too much data to process?


Scale Up vs. Scale Out

I Scale up or scale vertically: adding resources to a single node in a system.

I Scale out or scale horizontally: adding more nodes to a system.


Taxonomy of Parallel Architectures

DeWitt, D. and Gray, J. “Parallel database systems: the future of high performance database systems”. Communications of the ACM, 35(6), 85-98, 1992.


MapReduce

I A shared nothing architecture for processing large data sets with a parallel/distributed algorithm on clusters of commodity hardware.


Challenges

I How to distribute computation?

I How can we make it easy to write distributed programs?

I Machine failures.


Simplicity

I MapReduce takes care of parallelization, fault tolerance, and data distribution.

I It hides system-level details from programmers.

[http://www.johnlund.com/page/8358/elephant-on-a-scooter.asp]


MapReduce Definition

I A programming model: to batch process large data sets (inspired by functional programming).

I An execution framework: to run parallel algorithms on clusters of commodity hardware.


Programming Model


Word Count

I Count the number of times each distinct word appears in the file

I If the file fits in memory: words(doc.txt) | sort | uniq -c

I If not?


Data-Parallel Processing (1/2)

I Partition the data and process the partitions in parallel.


Data-Parallel Processing (2/2)

I MapReduce


MapReduce Stages - Map

I Each Map task (typically) operates on a single HDFS block.

I Map tasks (usually) run on the node where the block is stored.

I Each Map task generates a set of intermediate key/value pairs.


MapReduce Stages - Shuffle and Sort

I Sorts and consolidates intermediate data from all mappers.

I Happens after all Map tasks are complete and before Reduce tasks start.


MapReduce Stages - Reduce

I Each Reduce task operates on all intermediate values associated with the same intermediate key.

I Produces the final output.

MapReduce Data Flow (1/5)-(5/5) [figures]

Word Count in MapReduce

I Consider doing a word count of the following file using MapReduce

Word Count in MapReduce - Map (1/2)-(2/2), Shuffle and Sort (1/3)-(3/3), Reduce (1/2)-(2/2) [step-by-step figures]

Mapper


The Mapper

I Input: (key, value) pairs

I Output: a list of (key, value) pairs

I The Mapper may use or completely ignore the input key.

I A standard pattern is to read one line of a file at a time.
  • Key: the byte offset
  • Value: the content of the line

map(in_key, in_value) -> list of (inter_key, inter_value)

(in_key, in_value) ⇒ map() ⇒ (inter_key1, inter_value1)
                             (inter_key2, inter_value2)
                             (inter_key3, inter_value3)
                             (inter_key4, inter_value4)
                             · · ·


The Mapper Example (1/3)

I Turn input into upper case

map(k, v) = emit (k.to_upper, v.to_upper)

(kth, this is the course id2221) ⇒ map() ⇒ (KTH, THIS IS THE COURSE ID2221)
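
A minimal sketch of this example in Hadoop's Java API, assuming the input is read as (Text, Text) pairs (for instance with KeyValueTextInputFormat); the class name UpperCaseMap is illustrative:

public static class UpperCaseMap extends Mapper<Text, Text, Text, Text> {
  // imports assumed: org.apache.hadoop.io.Text, org.apache.hadoop.mapreduce.Mapper, java.io.IOException
  private final Text outKey = new Text();
  private final Text outValue = new Text();

  public void map(Text key, Text value, Context context)
      throws IOException, InterruptedException {
    outKey.set(key.toString().toUpperCase());      // kth -> KTH
    outValue.set(value.toString().toUpperCase());  // this is the course id2221 -> THIS IS THE COURSE ID2221
    context.write(outKey, outValue);
  }
}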


The Mapper Example (2/3)

I Count the number of characters in the input

map(k, v) = emit (k, v.length)

(kth, this is the course id2221) ⇒ map() ⇒ (kth, 26)


The Mapper Example (3/3)

I Turn each word in the input into pair of (word, 1)

map(k, v) = foreach w in v emit (w, 1)

(21, Hello Hadoop Goodbye Hadoop) ⇒ map() ⇒ (Hello, 1)

(Hadoop, 1)

(Goodbye, 1)

(Hadoop, 1)


Reducer


Shuffle and Sort

I After the Map phase, all intermediate (key, value) pairs are grouped by the intermediate keys.

I Each (key, list of values) is passed to a Reducer.


The Reducer

I Input: (key, list of values) pairs

I Output: a (key, value) pair or list of (key, value) pairs

I The Reducer outputs zero or more final (key, value) pairs

reduce(inter_key, [inter_value1, inter_value2, ...]) -> (out_key, out_value)

(inter_k, [inter_v1, inter_v2, · · · ]) ⇒ reduce() ⇒ (out_k, out_v)

or

(inter_k, [inter_v1, inter_v2, · · · ]) ⇒ reduce() ⇒ (out_k, out_v1)
                                                     (out_k, out_v2)
                                                     · · ·


The Reducer Example (1/3)

I Add up all the values associated with each intermediate key

reduce(k, vals) = {
  sum = 0
  foreach v in vals
    sum += v
  emit (k, sum)
}

(Hello, [1, 1, 1]) ⇒ reduce() ⇒ (Hello, 3)

(Bye, [1]) ⇒ reduce() ⇒ (Bye, 1)


The Reducer Example (2/3)

I Get the maximum value of each intermediate key

reduce(k, vals) = emit (k, max(vals))

(KTH, [5, 1, 12, 7]) ⇒ reduce() ⇒ (KTH, 12)
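
A minimal sketch of this max example as a Hadoop Reducer, assuming Text keys and IntWritable values; the class name MaxReduce is illustrative:

public static class MaxReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
  // imports assumed: org.apache.hadoop.io.*, org.apache.hadoop.mapreduce.Reducer, java.io.IOException
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int max = Integer.MIN_VALUE;
    for (IntWritable value : values)           // all values that share this key
      max = Math.max(max, value.get());
    context.write(key, new IntWritable(max));  // (KTH, [5, 1, 12, 7]) -> (KTH, 12)
  }
}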


The Reducer Example (3/3)

I Identity reducer

reduce(k, vals) = foreach v in vals emit (k, v)

(KTH, [5, 1, 12, 7]) ⇒ reduce() ⇒ (KTH, 5)

(KTH, 1)

(KTH, 12)

(KTH, 7)


Example: Word Count - map

public static class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      context.write(word, one);
    }
  }
}


Example: Word Count - reduce

public static class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values)
      sum += value.get();
    context.write(key, new IntWritable(sum));
  }
}


Example: Word Count - driver

public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  Job job = new Job(conf, "wordcount");

  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);

  job.setMapperClass(MyMap.class);
  job.setCombinerClass(MyReduce.class);
  job.setReducerClass(MyReduce.class);

  job.setInputFormatClass(TextInputFormat.class);
  job.setOutputFormatClass(TextOutputFormat.class);

  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));

  job.waitForCompletion(true);
}


Implementation


Architecture


MapReduce Execution (1/7)

I The user program divides the input files into M splits.
  • A typical split size is the size of an HDFS block (64 MB).
  • It converts them to key/value pairs.

I It starts up many copies of the program on a cluster of machines.


MapReduce Execution (2/7)

I One of the copies of the program is the master, and the rest are workers.

I The master assigns work to the workers.
  • It picks idle workers and assigns each one a map task or a reduce task.


MapReduce Execution (3/7)

I A map worker reads the contents of the corresponding input splits.

I It parses key/value pairs out of the input data and passes each pair to the user-defined map function.

I The key/value pairs produced by the map function are buffered in memory.


MapReduce Execution (4/7)

I The buffered pairs are periodically written to local disk.
  • They are partitioned into R regions (hash(key) mod R), as sketched below.

I The locations of the buffered pairs on the local disk are passed back to the master.

I The master forwards these locations to the reduce workers.
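
In Hadoop, this partitioning step corresponds to the default HashPartitioner, which essentially computes hash(key) mod R over the intermediate key; a sketch:

// Sketch of hash partitioning into R regions (R = number of reduce tasks).
public class HashPartitioner<K, V> extends Partitioner<K, V> {
  public int getPartition(K key, V value, int numReduceTasks) {
    // mask the sign bit so the result is non-negative, then take mod R
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}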


MapReduce Execution (5/7)

I A reduce worker reads the buffered data from the local disks of the map workers.

I When a reduce worker has read all intermediate data, it sorts it by the intermediate keys.


MapReduce Execution (6/7)

I The reduce worker iterates over the intermediate data.

I For each unique intermediate key, it passes the key and the corresponding set of intermediate values to the user-defined reduce function.

I The output of the reduce function is appended to a final output file for this reduce partition.


MapReduce Execution (7/7)

I When all map tasks and reduce tasks have been completed, the master wakes up the user program.


Hadoop MapReduce and HDFS


Fault Tolerance - Worker

I Detect failure via periodic heartbeats.

I Re-execute in-progress map and reduce tasks.

I Re-execute completed map tasks: their output is stored on the local disk of the failed machine and is therefore inaccessible.

I Completed reduce tasks do not need to be re-executed since their output is stored in a global filesystem.


Fault Tolerance - Master

I The master state is periodically checkpointed: a new copy of the master can start from the last checkpointed state.


MapReduce Algorithm Design


MapReduce Algorithm Design

I Local aggregation

I Joining

I Sorting


Local Aggregation - In-Map Combiner (1/2)

I In some cases, there is significant repetition in the intermediate keys produced by each map task, and the reduce function is commutative and associative.


Local Aggregation - In-Map Combiner (2/2)

I Partially merge the data before it is sent over the network to the reducer.

I Typically, the combiner and the reduce function use the same code (a sketch of the in-mapper variant follows below).
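
A sketch of in-mapper combining for word count: counts are aggregated in a HashMap inside each map task and emitted once in cleanup(), so far fewer intermediate (word, 1) pairs cross the network. Class and variable names are illustrative.

public static class InMapCombinerMap extends Mapper<LongWritable, Text, Text, IntWritable> {
  // imports assumed: java.util.*, java.io.IOException, org.apache.hadoop.io.*, org.apache.hadoop.mapreduce.Mapper
  private final Map<String, Integer> counts = new HashMap<>();

  public void map(LongWritable key, Text value, Context context) {
    for (String word : value.toString().split("\\s+"))
      counts.merge(word, 1, Integer::sum);   // aggregate locally, emit nothing yet
  }

  protected void cleanup(Context context) throws IOException, InterruptedException {
    for (Map.Entry<String, Integer> e : counts.entrySet())
      context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
  }
}

Compared with a separate combiner class, this avoids materializing the per-word (word, 1) pairs at all, at the cost of holding the map-side counts in memory.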


MapReduce Algorithm Design

I Local aggregation

I Joining

I Sorting


Joins

I Joins are relational constructs used to combine relations.

I In MapReduce, joins are applicable in situations where you have two or more datasets you want to combine.


Joins - Two Strategies

I Reduce-side join
  • Repartition join
  • Used when joining two or more large datasets together

I Map-side join
  • Replication join
  • Used when one of the datasets is small enough to cache (see the sketch below)
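
As an illustration of the map-side (replication) join, a sketch in which each map task loads the small dataset into a hash map in setup() and probes it for every record of the large dataset. The record format ("key,value" lines) and the local file small_dataset.txt are assumptions for the sketch, not part of the original slides; in practice the small file would be shipped to the workers, e.g., via the distributed cache.

public static class ReplicatedJoinMap extends Mapper<LongWritable, Text, Text, Text> {
  // imports assumed: java.io.*, java.util.*, org.apache.hadoop.io.*, org.apache.hadoop.mapreduce.Mapper
  private final Map<String, String> smallTable = new HashMap<>();

  protected void setup(Context context) throws IOException {
    // load the small (cached) dataset into memory: one "key,value" pair per line assumed
    try (BufferedReader in = new BufferedReader(new FileReader("small_dataset.txt"))) {
      String line;
      while ((line = in.readLine()) != null) {
        String[] parts = line.split(",", 2);
        smallTable.put(parts[0], parts[1]);
      }
    }
  }

  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] record = value.toString().split(",", 2);   // large-dataset record: "key,value"
    String match = smallTable.get(record[0]);
    if (match != null)                                   // inner join on the key; no shuffle needed
      context.write(new Text(record[0]), new Text(record[1] + "," + match));
  }
}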


Joins - Reduce-Side Join

[D. Miner et al., MapReduce Design Patterns, O’Reilly, 2012.]


Joins - Map-Side Join

[D. Miner et al., MapReduce Design Patterns, O’Reilly, 2012.]


MapReduce Algorithm Design

I Local aggregation

I Joining

I Sorting


Sort (1/3)

I Assume you want to have your job output in total sort order.

I Trivial with a single Reducer.
  • Keys are passed to the Reducer in sorted order.


Sort (2/3)

I What if we have multiple Reducers?


Sort (3/3)

I For multiple Reducers, we need to choose a partitioning function such that

key1 < key2 ⇒ partition(key1) ≤ partition(key2)
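
For example, a simple (hypothetical) order-preserving partitioner over text keys, which buckets keys by their first letter so that the property above holds; Hadoop's TotalOrderPartitioner does this properly, using sampled split points.

public static class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
  // imports assumed: org.apache.hadoop.io.*, org.apache.hadoop.mapreduce.Partitioner
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String s = key.toString().toLowerCase();
    char c = s.isEmpty() ? 'a' : s.charAt(0);
    int bucket = (c < 'a') ? 0 : (c > 'z') ? 25 : (c - 'a');  // 0..25, monotone in the key
    return bucket * numPartitions / 26;                       // map the 26 buckets onto the R partitions
  }
}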


FlumeJava


Motivation (1/2)

I It is easy in MapReduce:

words(doc.txt) | sort | uniq -c

I What about this one?

words(doc.txt) | grep | sed | sort | awk | perl


Motivation (2/2)

I Big jobs in MapReduce run in more than one MapReduce stage.

I Reducers of each stage write to replicated storage, e.g., HDFS.


FlumeJava

I FlumeJava is a library provided by Google to simplify the creation of pipelined MapReduce tasks.


Core Idea

I Providing a couple of immutable parallel collections
  • PCollection<T> and PTable<K, V>

I Providing a number of parallel operations for processing the parallel collections
  • parallelDo, groupByKey, combineValues and flatten

I Building an execution plan dataflow graph

I Optimizing the execution plan, and then executing it


Parallel Collections

I A few classes that represent parallel collections and abstract away the details of how data is represented.

I PCollection<T>: an immutable bag of elements of type T.

I PTable<K, V>: an immutable multi-map with keys of type K and values of type V.

I The main way to manipulate these collections is to invoke a data-parallel operation on them.


Parallel Operations (1/2)

I parallelDo(): elementwise computation over an input PCollection<T> to produce a new output PCollection<S>.

I groupByKey(): converts a multi-map of type PTable<K, V> into a uni-map of type PTable<K, Collection<V>>.


Parallel Operations (2/2)

I combineValues(): takes an input PTable<K, Collection<V>> and an associative combining function on Vs, and returns a PTable<K, V>, where each input collection of values has been combined into a single output value.

I flatten(): takes a list of PCollection<T>s and returns a single PCollection<T> that contains all the elements of the input PCollections.
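
To make these operations concrete, here is a rough sketch of the word-count pipeline built from the primitives themselves, instead of the Aggregate.count() helper used in the full example below. It follows the FlumeJava-style (Crunch-flavoured) API of that example; the exact class and method names (PGroupedTable, Pair, Aggregators, Writables.tableOf) should be read as assumptions, not a verified listing.

PCollection<String> words = ...;   // e.g., the words collection from the word-count example below

// parallelDo: elementwise map from each word to the pair (word, 1)
PTable<String, Long> ones = words.parallelDo(
    new DoFn<String, Pair<String, Long>>() {
      public void process(String word, Emitter<Pair<String, Long>> emitter) {
        emitter.emit(Pair.of(word, 1L));
      }
    }, Writables.tableOf(Writables.strings(), Writables.longs()));

// groupByKey: (word, 1), (word, 1), ... becomes (word, [1, 1, ...])
PGroupedTable<String, Long> grouped = ones.groupByKey();

// combineValues: fold each collection of 1s into a single count per word
PTable<String, Long> counts = grouped.combineValues(Aggregators.SUM_LONGS());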


Word Count in FlumeJava

public class WordCount {
  public static void main(String[] args) throws Exception {
    Pipeline pipeline = new MRPipeline(WordCount.class);

    PCollection<String> lines = pipeline.readTextFile(args[0]);

    PCollection<String> words = lines.parallelDo(new DoFn<String, String>() {
      public void process(String line, Emitter<String> emitter) {
        for (String word : line.split("\\s+")) {
          emitter.emit(word);
        }
      }
    }, Writables.strings());

    PTable<String, Long> counts = Aggregate.count(words);

    pipeline.writeTextFile(counts, args[1]);
    pipeline.done();
  }
}


Summary


Summary

I Scaling out: shared nothing architecture

I MapReduce
  • Programming model: Map and Reduce
  • Execution framework

I FlumeJava
  • Dataflow DAG
  • Parallel collections: PCollection and PTable
  • Transforms: ParallelDo, GroupByKey, CombineValues, Flatten


References

I J. Dean et al., “MapReduce: simplified data processing on large clusters”, Communications of the ACM, 2008.

I C. Chambers et al., “FlumeJava: easy, efficient data-parallel pipelines”, ACM Sigplan Notices, 2010.

I J. Lin et al., “Data-intensive text processing with MapReduce”, Synthesis Lectures on Human Language Technologies, 2010.


Questions?
