
Joint mPlane / BigFoot PhD School: DAY 1
Hadoop MapReduce: Theory and Practice

Pietro Michiardi

Eurecom

bigfootproject.eu ict-mplane.eu

Sources and Acks

Jimmy Lin and Chris Dyer, "Data-Intensive Text Processing with MapReduce," Morgan & Claypool Publishers, 2010. http://lintool.github.io/MapReduceAlgorithms/

Tom White, "Hadoop: The Definitive Guide," O'Reilly / Yahoo Press, 2012

Anand Rajaraman, Jeffrey D. Ullman, Jure Leskovec, "Mining of Massive Datasets," Cambridge University Press, 2013

Introduction and Motivations

What is MapReduce?

A programming model:
- Inspired by functional programming
- Parallel computations on massive amounts of data

An execution framework:
- Designed for large-scale data processing
- Designed to run on clusters of commodity hardware

What is this PhD School About

DAY 1: The MapReduce Programming Model
- Principles of functional programming
- Scalable algorithm design

DAY 2: In-depth description of Hadoop MapReduce
- Architecture internals
- Software components
- Cluster deployments

DAY 3: Relational Algebra and High-Level Languages
- Basic operators and their equivalence in MapReduce
- Hadoop Pig and PigLatin

What is Big Data?

Vast repositories of data
- The Web
- Physics
- Astronomy
- Finance

Volume, Velocity, Variety

It's not the algorithm, it's the data! [1]
- More data leads to better accuracy
- With more data, the accuracy of different algorithms converges

Key Principles

Scale out, not up!

For data-intensive workloads, a large number of commodity servers is preferred over a small number of high-end servers
- The cost of super-computers is not linear
- But datacenter efficiency is a difficult problem to solve [2, 4]

Some numbers (~2012):
- Data processed by Google every day: 100+ PB
- Data processed by Facebook every day: 10+ PB

Implications of Scaling Out

Processing data is quick, I/O is very slow
- 1 HDD = 75 MB/sec
- 1000 HDDs = 75 GB/sec

Sharing vs. shared nothing:
- Sharing: manage a common/global state
- Shared nothing: independent entities, no common state

Sharing is difficult:
- Synchronization, deadlocks
- Finite bandwidth to access data from a SAN
- Temporal dependencies are complicated (restarts)

Failures are the norm, not the exception

LANL data [DSN 2006]
- Data for 5000 machines, over 9 years
- Hardware: 60%, Software: 20%, Network: 5%

DRAM error analysis [Sigmetrics 2009]
- Data for 2.5 years
- 8% of DIMMs affected by errors

Disk drive failure analysis [FAST 2007]
- Utilization and temperature are major causes of failures

Amazon Web Service(s) failures [Several!]
- Cascading effect

Implications of Failures

Failures are part of everyday life
- Mostly due to the scale and the shared environment

Sources of failures
- Hardware / Software
- Electrical, Cooling, ...
- Unavailability of a resource due to overload

Failure types
- Permanent
- Transient

Move Processing to the Data

Drastic departure from the high-performance computing model
- HPC: distinction between processing nodes and storage nodes
- HPC: CPU-intensive tasks

Data-intensive workloads
- Generally not processor demanding
- The network becomes the bottleneck
- MapReduce assumes processing and storage nodes to be collocated → Data Locality Principle

Distributed filesystems are necessary

Process Data Sequentially and Avoid Random Access

Data-intensive workloads
- Relevant datasets are too large to fit in memory
- Such data resides on disks

Disk performance is a bottleneck
- Seek times for random disk access are the problem
  - Example: a 1 TB database with 10^10 100-byte records. Updating 1% of the records with random accesses requires 1 month; reading and rewriting the whole database sequentially would take 1 day (1)
- Organize computation for sequential reads

(1) From a post by Ted Dunning on the Hadoop mailing list

Implications of Data Access Patterns

MapReduce is designed for:
- Batch processing
- involving (mostly) full scans of the data

Typically, data is collected "elsewhere" and copied to the distributed filesystem
- E.g.: Apache Flume, Hadoop Sqoop, ...

Data-intensive applications
- Read and process the whole Web (e.g. PageRank)
- Read and process the whole Social Graph (e.g. Link Prediction, a.k.a. "friend suggest")
- Log analysis (e.g. network traces, smart-meter data, ...)

Hide System-level Details

Separate the what from the how
- MapReduce abstracts away the "distributed" part of the system
- Such details are handled by the framework

BUT: in-depth knowledge of the framework is key
- Custom data reader/writer
- Custom data partitioning
- Memory utilization

Auxiliary components
- Hadoop Pig
- Hadoop Hive
- Cascading/Scalding
- ... and many many more!

Seamless Scalability

We can define scalability along two dimensions
- In terms of data: given twice the amount of data, the same algorithm should take no more than twice as long to run
- In terms of resources: given a cluster twice the size, the same algorithm should take no more than half as long to run

Embarrassingly parallel problems
- Simple definition: independent (shared nothing) computations on fragments of the dataset
- How to decide if a problem is embarrassingly parallel or not?

MapReduce is a first attempt, not the final answer

The Programming Model

Functional Programming Roots

Key feature: higher-order functions
- Functions that accept other functions as arguments
- Map and Fold

[Figure: Illustration of map and fold: a row of independent f applications (map), feeding a chain of g applications (fold).]

Functional Programming Roots

map phase:
- Given a list, map takes as an argument a function f (that takes a single argument) and applies it to all elements in the list

fold phase:
- Given a list, fold takes as arguments a function g (that takes two arguments) and an initial value (an accumulator)
- g is first applied to the initial value and the first item in the list
- The result is stored in an intermediate variable, which is used as an input together with the next item to a second application of g
- The process is repeated until all items in the list have been consumed
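
As a concrete illustration (a minimal Python sketch, not part of the original slides), the built-in map and functools.reduce play exactly these two roles:

    from functools import reduce

    xs = [1, 2, 3, 4, 5]

    # map: apply a one-argument function f to every element, independently
    squares = list(map(lambda x: x * x, xs))            # [1, 4, 9, 16, 25]

    # fold: apply a two-argument function g to an accumulator and each
    # element in turn, starting from an initial value (here 0)
    total = reduce(lambda acc, x: acc + x, squares, 0)  # 55

    print(squares, total)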

Functional Programming Roots

We can view map as a transformation over a dataset
- This transformation is specified by the function f
- Each functional application happens in isolation
- The application of f to each element of a dataset can be parallelized in a straightforward manner

We can view fold as an aggregation operation
- The aggregation is defined by the function g
- Data locality: elements in the list must be "brought together"
- If we can group elements of the list, the fold phase can also proceed in parallel

Associative and commutative operations
- Allow performance gains through local aggregation and reordering

Functional Programming and MapReduce

Equivalence of MapReduce and functional programming:
- The map of MapReduce corresponds to the map operation
- The reduce of MapReduce corresponds to the fold operation

The framework coordinates the map and reduce phases:
- Grouping intermediate results happens in parallel

In practice:
- User-specified computation is applied (in parallel) to all input records of a dataset
- Intermediate results are aggregated by another user-specified computation

What can we do with MapReduce?

MapReduce "implements" a subset of functional programming
- The programming model appears quite limited and strict

There are several important problems that can be adapted to MapReduce
- We will focus on illustrative cases
- We will see "design patterns" in detail
  - How to transform a problem and its input
  - How to save memory and bandwidth in the system

Data Structures

Key-value pairs are the basic data structure in MapReduce
- Keys and values can be integers, floats, strings, raw bytes
- They can also be arbitrary data structures

The design of MapReduce algorithms involves:
- Imposing the key-value structure on arbitrary datasets (2)
  - E.g.: for a collection of Web pages, input keys may be URLs and values may be the HTML content
- In some algorithms, input keys are not used; in others, they uniquely identify a record
- Keys can be combined in complex ways to design various algorithms

(2) There's more to it: here we only look at the input to the map function.

A Generic MapReduce Algorithm

The programmer defines a mapper and a reducer as follows (3):
- map: (k1, v1) → [(k2, v2)]
- reduce: (k2, [v2]) → [(k3, v3)]

In words:
- A dataset is stored on an underlying distributed filesystem, split in a number of blocks across machines
- The mapper is applied to every input key-value pair to generate intermediate key-value pairs
- The reducer is applied to all values associated with the same intermediate key to generate output key-value pairs

(3) We use the convention [...] to denote a list.

Where the magic happens

Implicit between the map and reduce phases is a parallel "group by" operation on intermediate keys
- Intermediate data arrive at each reducer in order, sorted by the key
- No ordering is guaranteed across reducers

Output keys from reducers are written back to the distributed filesystem
- The output may consist of r distinct files, where r is the number of reducers
- Such output may be the input to a subsequent MapReduce phase (4)

Intermediate keys are transient:
- They are not stored on the distributed filesystem
- They are "spilled" to the local disk of each machine in the cluster

(4) Think of iterative algorithms.

A Simplified view of MapReduce

[Figure: Mappers are applied to all input key-value pairs, to generate an arbitrary number of intermediate pairs. Reducers are applied to all intermediate values associated with the same intermediate key. Between the map and reduce phases lies a barrier that involves a large distributed sort and group by.]

“Hello World” in MapReduce

[Figure: Pseudo-code for the word count algorithm.]
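
The original figure is an image that did not survive extraction. Below is a minimal Python sketch of the same word count logic; the tiny run_job driver is only a stand-in for the framework's shuffle, not any Hadoop API:

    from collections import defaultdict

    def mapper(docid, doc):
        # Emit (term, 1) for every token in the document
        for term in doc.split():
            yield term, 1

    def reducer(term, counts):
        # Sum all the partial counts for the same term
        yield term, sum(counts)

    def run_job(records, mapper, reducer):
        # Toy stand-in for the framework: group mapper output by key,
        # then hand each key and its list of values to the reducer
        groups = defaultdict(list)
        for k, v in records:
            for k2, v2 in mapper(k, v):
                groups[k2].append(v2)
        return [out for k2 in sorted(groups)      # keys arrive sorted
                    for out in reducer(k2, groups[k2])]

    docs = [("d1", "a rose is a rose"), ("d2", "a b")]
    print(run_job(docs, mapper, reducer))
    # [('a', 3), ('b', 1), ('is', 1), ('rose', 2)]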

“Hello World” in MapReduce

Input:
- Key-value pairs: (docid, doc) stored on the distributed filesystem
- docid: unique identifier of a document
- doc: the text of the document itself

Mapper:
- Takes an input key-value pair and tokenizes the document
- Emits intermediate key-value pairs: the word is the key and the integer is the value

The framework:
- Guarantees that all values associated with the same key (the word) are brought to the same reducer

The reducer:
- Receives all values associated to some keys
- Sums the values and writes output key-value pairs: the key is the word and the value is the number of occurrences

Basic Design Patterns

Algorithm Design

Developing algorithms involves:
- Preparing the input data
- Implementing the mapper and the reducer
- Optionally, designing the combiner and the partitioner

How to recast existing algorithms in MapReduce?
- It is not always obvious how to express algorithms
- Data structures play an important role
- Optimization is hard → the designer needs to "bend" the framework

Learn by examples
- "Design patterns"
- "Shuffle" is perhaps the most tricky aspect

Algorithm Design

Aspects that are not under the control of the designer
- Where a mapper or reducer will run
- When a mapper or reducer begins or finishes
- Which input key-value pairs are processed by a specific mapper
- Which intermediate key-value pairs are processed by a specific reducer

Aspects that can be controlled
- Construct data structures as keys and values
- Execute user-specified initialization and termination code for mappers and reducers
- Preserve state across multiple input and intermediate keys in mappers and reducers
- Control the sort order of intermediate keys, and therefore the order in which a reducer will encounter particular keys
- Control the partitioning of the key space, and therefore the set of keys that will be encountered by a particular reducer

Algorithm Design

MapReduce algorithms can be complex
- Many algorithms cannot be easily expressed as a single MapReduce job
- Decompose complex algorithms into a sequence of jobs
  - Requires orchestrating data so that the output of one job becomes the input to the next
- Iterative algorithms require an external driver to check for convergence

Basic design patterns (5)
- Local Aggregation
- Pairs and Stripes
- Order Inversion

(5) You will see them in action during the DAY 2 laboratory session.

Local Aggregation

In the context of data-intensive distributed processing, the most important aspect of synchronization is the exchange of intermediate results
- This involves copying intermediate results from the processes that produced them to those that consume them
- In general, this involves data transfers over the network
- In Hadoop, disk I/O is also involved, as intermediate results are written to disk

Network and disk latencies are expensive
- Reducing the amount of intermediate data translates into algorithmic efficiency

Combiners and preserving state across inputs
- Reduce the number and size of key-value pairs to be shuffled

Combiners

Combiners are a general mechanism to reduce the amount of intermediate data
- They can be thought of as "mini-reducers"

Back to our running example: word count
- Combiners aggregate term counts across the documents processed by each map task
- If combiners take advantage of all opportunities for local aggregation, we have at most m × V intermediate key-value pairs
  - m: number of mappers
  - V: number of unique terms in the collection
- Note: due to the Zipfian nature of term distributions, not all mappers will see all terms

Combiners: an illustration

Word Counting in MapReduce

In-Mapper Combiners

In-mapper combiners, a possible improvement
- Hadoop does not guarantee that combiners will be executed

Use an associative array to accumulate intermediate results
- The array is used to tally up term counts within a single document
- The Emit method is called only after all input records have been processed

Example (see the sketch below)
- The code emits a key-value pair for each unique term in the document

In-Memory Combiners
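
The figure on this slide did not survive extraction. A minimal Python sketch of the idea just described (tally within a single document, then emit one pair per unique term) might look like this:

    from collections import defaultdict

    def mapper(docid, doc):
        # Tally term counts inside the document before emitting:
        # one (term, count) pair per unique term, not one pair per token
        counts = defaultdict(int)
        for term in doc.split():
            counts[term] += 1
        for term, count in counts.items():
            yield term, count

    print(list(mapper("d1", "a rose is a rose")))
    # [('a', 2), ('rose', 2), ('is', 1)]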

In-Memory Combiners

Taking the idea one step further
- Exploit implementation details in Hadoop (6)
- A Java mapper object is created for each map task
- JVM reuse must be enabled

Preserve state within and across calls to the Map method
- Initialize method, used to create an across-map persistent data structure
- Close method, used to emit intermediate key-value pairs only when all the map tasks scheduled on one machine are done

(6) Forward reference! We'll see more tomorrow.

In-Memory Combiners
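
Again the figure is missing; here is a minimal Python sketch of state preserved across calls, mimicking the mapper lifecycle just described (the method names setup/map/cleanup are illustrative, not the exact Hadoop API):

    from collections import defaultdict

    class InMapperCombiningMapper:
        def setup(self):
            # Called once per map task: create the persistent associative array
            self.counts = defaultdict(int)

        def map(self, docid, doc):
            # Called for every input record: only update in-memory state
            for term in doc.split():
                self.counts[term] += 1

        def cleanup(self):
            # Called once, after all records of the task: emit aggregated pairs
            for term, count in self.counts.items():
                yield term, count

    m = InMapperCombiningMapper()
    m.setup()
    for docid, doc in [("d1", "a rose is a rose"), ("d2", "a b")]:
        m.map(docid, doc)
    print(sorted(m.cleanup()))  # [('a', 3), ('b', 1), ('is', 1), ('rose', 2)]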

In-Memory Combiners

Summing up: a first "design pattern", in-memory combining
- Provides control over when local aggregation occurs
- The designer can determine how exactly aggregation is done

Efficiency vs. combiners
- There is no additional overhead due to the materialization of key-value pairs
  - Unnecessary object creation and destruction (garbage collection)
  - Serialization and deserialization when memory bounded
- Mappers still need to emit all key-value pairs; combiners only reduce network traffic

In-Memory Combiners

Precautions
- In-memory combining breaks the functional programming paradigm due to state preservation
- Preserving state across multiple instances implies that algorithm behavior might depend on execution order
  - Ordering-dependent bugs are difficult to find

Scalability bottleneck
- The in-memory combining technique strictly depends on having sufficient memory to store intermediate results
  - And you don't want the OS to deal with swapping
- Multiple threads compete for the same resources
- A possible solution: "block" and "flush"
  - Implemented with a simple counter

Further Remarks

The extent to which efficiency can be increased with local aggregation depends on the size of the intermediate key space
- Opportunities for aggregation arise when multiple values are associated with the same keys

Local aggregation is also effective against reduce stragglers
- Reduce the number of values associated with frequently occurring keys

Algorithmic correctness with local aggregation

The use of combiners must be thought through carefully
- In Hadoop, they are optional: the correctness of the algorithm cannot depend on the computation (or even the execution) of the combiners

In MapReduce, the reducer input key-value type must match the mapper output key-value type
- Hence, for combiners, both input and output key-value types must match the output key-value type of the mapper

Commutative and associative computations
- This is a special case, which worked for word counting
  - There, the combiner code is actually the reducer code
- In general, combiners and reducers are not interchangeable

Algorithmic Correctness: an Example

Problem statement
- We have a large dataset where input keys are strings and input values are integers
- We wish to compute the mean of all integers associated with the same key
  - In practice: the dataset can be a log from a website, where the keys are user IDs and the values are some measure of activity

Next, a baseline approach
- We use an identity mapper, and the framework groups and sorts the input key-value pairs appropriately
- Reducers keep track of the running sum and the number of integers encountered
- The mean is emitted as the output of the reducer, with the input string as the key

Inefficiency problems arise in the shuffle phase

Example: basic way to compute the mean of values
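
The figure with the pseudo-code is missing; a minimal Python sketch of this baseline (identity mapper, reducer that keeps a running sum and count) could be:

    def mapper(key, value):
        # Identity mapper: pass the (string, integer) pair through unchanged
        yield key, value

    def reducer(key, values):
        # Keep a running sum and count, then emit the mean
        total, count = 0, 0
        for v in values:
            total += v
            count += 1
        yield key, total / count

    # The framework brings all integers for a key to the same reducer, e.g.:
    print(list(reducer("alice", [2, 4, 9])))  # [('alice', 5.0)]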

Algorithmic Correctness

Note: operations are not distributive
- Mean(1,2,3,4,5) ≠ Mean(Mean(1,2), Mean(3,4,5))
- Hence: a combiner cannot output partial means and hope that the reducer will compute the correct final mean

Next, a failed attempt at solving the problem
- The combiner partially aggregates results by separating the components needed to arrive at the mean
- The sum and the count of elements are packaged into a pair
- Using the same input string, the combiner emits the pair

Example: Wrong use of combiners
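
The figure is missing; a minimal Python sketch of the flawed combiner makes the type mismatch visible:

    def combiner_wrong(key, values):
        # Pre-aggregates into a single (sum, count) pair per key
        total, count = 0, 0
        for v in values:
            total += v
            count += 1
        yield key, (total, count)

    # The baseline reducer expects a list of integers; if the framework
    # decides to run the combiner, the reducer receives pairs instead:
    print(list(combiner_wrong("alice", [2, 4, 9])))  # [('alice', (15, 3))]
    # Correctness now depends on whether combiners execute at all,
    # which is exactly what MapReduce does not guarantee.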

Algorithmic Correctness: an Example

What's wrong with the previous approach?
- Trivially, the input/output keys are not correct
- Remember that combiners are optimizations; the algorithm should work even when "removing" them

Executing the code omitting the combiner phase
- The output value type of the mapper is integer
- The reducer expects to receive a list of integers
- Instead, we make it expect a list of pairs

Next, a correct implementation of the combiner
- Note: the reducer is similar to the combiner!
- Exercise: verify the correctness

Example: Correct use of combiners
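
The figure is missing; in this minimal Python sketch the mapper emits (sum, count) pairs from the start, so the combiner's input and output types match the mapper's output type:

    def mapper(key, value):
        # Emit a (sum, count) pair right away: mapper output, combiner
        # input/output, and reducer input all share the same type
        yield key, (value, 1)

    def combiner(key, pairs):
        # Component-wise sum: safe to run zero, one, or many times
        s = c = 0
        for ps, pc in pairs:
            s, c = s + ps, c + pc
        yield key, (s, c)

    def reducer(key, pairs):
        # Same aggregation as the combiner, plus the final division
        s = c = 0
        for ps, pc in pairs:
            s, c = s + ps, c + pc
        yield key, s / c

    # With or without the combiner, the reducer computes the same mean:
    pairs = [(2, 1), (4, 1), (9, 1)]          # mapper output for "alice"
    print(list(reducer("alice", pairs)))                            # 5.0
    print(list(reducer("alice", [p for _, p in combiner("alice", pairs)])))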

Advanced technique

Using in-mapper combining
- Inside the mapper, the partial sums and counts are held in memory (across inputs)
- Intermediate values are emitted only after the entire input split is processed
- Similarly to before, the output value is a pair
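
A minimal Python sketch of this in-mapper variant (lifecycle method names are illustrative, not the exact Hadoop API):

    from collections import defaultdict

    class MeanInMapperCombiner:
        def setup(self):
            # One (sum, count) pair of partial aggregates per key
            self.sums = defaultdict(int)
            self.counts = defaultdict(int)

        def map(self, key, value):
            # No emission here: just update the in-memory state
            self.sums[key] += value
            self.counts[key] += 1

        def cleanup(self):
            # Emit one (sum, count) pair per key after the whole split
            for key in self.sums:
                yield key, (self.sums[key], self.counts[key])

    m = MeanInMapperCombiner()
    m.setup()
    for k, v in [("alice", 2), ("bob", 7), ("alice", 4), ("alice", 9)]:
        m.map(k, v)
    print(sorted(m.cleanup()))  # [('alice', (15, 3)), ('bob', (7, 1))]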

Pairs and Stripes

A common approach in MapReduce: build complex keys
- Data necessary for a computation are naturally brought together by the framework

Two basic techniques:
- Pairs: similar to the example on the average
- Stripes: uses in-mapper memory data structures

Next, we focus on a particular problem that benefits from these two methods

Problem statement

The problem: building word co-occurrence matrices for large corpora
- The co-occurrence matrix of a corpus is a square n × n matrix
- n is the number of unique words (i.e., the vocabulary size)
- A cell m_ij contains the number of times word w_i co-occurs with word w_j within a specific context
- Context: a sentence, a paragraph, a document, or a window of m words
- NOTE: the matrix may be symmetric in some cases

Motivation
- This problem is a basic building block for more complex operations
- Estimating the distribution of discrete joint events from a large number of observations
- Similar problems arise in other domains:
  - Customers who buy this tend to also buy that

Observations

Space requirements
- Clearly, the space requirement is O(n^2), where n is the size of the vocabulary
- For real-world (English) corpora n can be hundreds of thousands of words, or even billions of words in some specific cases

So what's the problem?
- If the matrix can fit in the memory of a single machine, then just use whatever naive implementation
- Instead, if the matrix is bigger than the available memory, then paging would kick in, and any naive implementation would break

Compression
- Such techniques can help in solving the problem on a single machine
- However, there are scalability problems

Word co-occurrence: the Pairs approach

Input to the problem
- Key-value pairs in the form of a docid and a doc

The mapper:
- Processes each input document
- Emits key-value pairs with:
  - Each co-occurring word pair as the key
  - The integer one (the count) as the value
- This is done with two nested loops:
  - The outer loop iterates over all words
  - The inner loop iterates over all neighbors

The reducer:
- Receives pairs related to co-occurring words
  - This requires modifying the partitioner
- Computes an absolute count of the joint event
- Emits the pair and the count as the final key-value output
  - Basically, reducers emit the cells of the output matrix

Word co-occurrence: the Pairs approach
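
The pseudo-code figure is missing; here is a minimal Python sketch of the pairs approach (the window parameter is an assumption of this sketch, standing in for whatever context definition is used):

    from collections import defaultdict

    def mapper(docid, doc, window=1):
        # Two nested loops: outer over all words, inner over the
        # neighbors within the window; the co-occurring pair is the key
        words = doc.split()
        for i, w in enumerate(words):
            for u in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
                yield (w, u), 1

    def reducer(pair, counts):
        # Absolute count of the joint event: one cell of the matrix
        yield pair, sum(counts)

    # Toy shuffle, then reduce:
    groups = defaultdict(list)
    for k, v in mapper("d1", "a b a"):
        groups[k].append(v)
    print([out for k in sorted(groups) for out in reducer(k, groups[k])])
    # [(('a', 'b'), 2), (('b', 'a'), 2)]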

Word co-occurrence: the Stripes approach

Input to the problem
- Key-value pairs in the form of a docid and a doc

The mapper:
- Same two nested loops structure as before
- Co-occurrence information is first stored in an associative array
- Emits key-value pairs with words as keys and the corresponding arrays as values

The reducer:
- Receives all associative arrays related to the same word
- Performs an element-wise sum of all associative arrays with the same key
- Emits key-value output in the form of word, associative array
  - Basically, reducers emit rows of the co-occurrence matrix

Word co-occurrence: the Stripes approach
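
Again the figure is missing; a minimal Python sketch of the stripes approach, under the same assumptions as the pairs sketch above:

    from collections import defaultdict

    def mapper(docid, doc, window=1):
        # Collect co-occurrences in an associative array first
        # (one stripe per word), then emit one stripe per word
        words = doc.split()
        stripes = defaultdict(lambda: defaultdict(int))
        for i, w in enumerate(words):
            for u in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
                stripes[w][u] += 1
        for w, stripe in stripes.items():
            yield w, dict(stripe)

    def reducer(word, stripes):
        # Element-wise sum of all stripes for the same word:
        # one row of the co-occurrence matrix
        row = defaultdict(int)
        for stripe in stripes:
            for u, count in stripe.items():
                row[u] += count
        yield word, dict(row)

    print(list(mapper("d1", "a b a")))
    # [('a', {'b': 2}), ('b', {'a': 2})]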

Pairs and Stripes, a comparison

The pairs approach
- Generates a large number of key-value pairs
  - In particular, intermediate ones that fly over the network
- The benefit from combiners is limited, as it is less likely for a mapper to process multiple occurrences of the same word pair
- Does not suffer from memory paging problems

The stripes approach
- More compact
- Generates fewer and shorter intermediate keys
  - The framework has less sorting to do
- The values are more complex and have serialization/deserialization overhead
- Greatly benefits from combiners, as the key space is the vocabulary
- Suffers from memory paging problems, if not properly engineered

Computing relative frequencies

"Relative" co-occurrence matrix construction
- Similar problem as before, same matrix
- Instead of absolute counts, we take into consideration the fact that some words appear more frequently than others
  - Word w_i may co-occur frequently with word w_j simply because one of the two is very common
- We need to convert absolute counts to relative frequencies f(w_j | w_i)
  - What proportion of the time does w_j appear in the context of w_i?

Formally, we compute:

  f(w_j | w_i) = N(w_i, w_j) / Σ_{w'} N(w_i, w')

- N(·, ·) is the number of times a co-occurring word pair is observed
- The denominator is called the marginal

Computing relative frequencies

The stripes approach
- In the reducer, the counts of all words that co-occur with the conditioning variable (w_i) are available in the associative array
- Hence, the sum of all those counts gives the marginal
- Then we divide the joint counts by the marginal and we're done

The pairs approach
- The reducer receives the pair (w_i, w_j) and the count
- From this information alone it is not possible to compute f(w_j | w_i)
- Fortunately, as for the mapper, the reducer can also preserve state across multiple keys
  - We can buffer in memory all the words that co-occur with w_i and their counts
  - This is basically building the associative array of the stripes method

Computing relative frequencies: a basic approach

We must define the sort order of the pair
- In this way, the keys are first sorted by the left word, and then by the right word (in the pair)
- Hence, we can detect if all pairs associated with the word we are conditioning on (w_i) have been seen
- At this point, we can use the in-memory buffer, compute the relative frequencies and emit

We must define an appropriate partitioner
- The default partitioner is based on the hash value of the intermediate key, modulo the number of reducers
- For a complex key, the raw byte representation is used to compute the hash value
  - Hence, there is no guarantee that the pairs (dog, aardvark) and (dog, zebra) are sent to the same reducer
- What we want is that all pairs with the same left word are sent to the same reducer

Limitations of this approach
- Essentially, we reproduce the stripes method in the reducer and we need to use a custom partitioner
- This algorithm would work, but it presents the same memory-bottleneck problem as the stripes method

Computing relative frequencies: order inversion

The key is to properly sequence the data presented to reducers
- If it were possible to compute the marginal in the reducer before processing the joint counts, the reducer could simply divide the joint counts received from mappers by the marginal
- The notion of "before" and "after" can be captured in the ordering of key-value pairs
- The programmer can define the sort order of keys so that data needed earlier is presented to the reducer before data that is needed later

Computing relative frequencies: order inversion

Recall that mappers emit pairs of co-occurring words as keys

The mapper:
- additionally emits a "special" key of the form (w_i, *)
- The value associated to the special key is one, representing the contribution of the word pair to the marginal
- Using combiners, these partial marginal counts will be aggregated before being sent to the reducers

The reducer:
- We must make sure that the special key-value pairs are processed before any other key-value pairs where the left word is w_i
- We also need to modify the partitioner as before, i.e., it should take into account only the first word

Computing relative frequencies: order inversion

Memory requirements:
- Minimal, because only the marginal (an integer) needs to be stored
- No buffering of individual co-occurring words
- No scalability bottleneck

Key ingredients for order inversion (see the sketch below)
- Emit a special key-value pair to capture the marginal
- Control the sort order of the intermediate key, so that the special key-value pair is processed first
- Define a custom partitioner for routing intermediate key-value pairs
- Preserve state across multiple keys in the reducer
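
A minimal Python sketch of the whole pattern, with a toy shuffle that emulates the two framework hooks (sort order with the special key first, and partitioning on the left word only):

    from collections import defaultdict

    def mapper(docid, doc, window=1):
        words = doc.split()
        for i, w in enumerate(words):
            for u in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
                yield (w, u), 1
                yield (w, "*"), 1   # special key: contribution to the marginal

    def reducer(sorted_groups):
        # Relies on (w, "*") being seen before any (w, w_j): the marginal
        # is already known by the time the joint counts arrive
        marginal = 0
        for (w, u), counts in sorted_groups:
            if u == "*":
                marginal = sum(counts)
            else:
                yield (w, u), sum(counts) / marginal

    groups = defaultdict(list)
    for k, v in mapper("d1", "a b a"):
        groups[k].append(v)
    # Toy shuffle: sort so "*" comes first within each left word
    ordered = sorted(groups.items(),
                     key=lambda kv: (kv[0][0], kv[0][1] != "*", kv[0][1]))
    print(list(reducer(ordered)))
    # [(('a', 'b'), 1.0), (('b', 'a'), 1.0)]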

Graph Algorithms [Optional]

Motivations

Examples of graph problems
- Clustering
- Matching problems
- Element analysis: node and edge centralities

The problem: big graphs

Why MapReduce?
- Algorithms for the above problems on a single machine are not scalable
- Recently, Google designed a new system, Pregel, for large-scale (incremental) graph processing
- Even more recently, [5] indicates a fundamentally new design pattern to analyze graphs in MapReduce
- New trend: graph databases, graph processing systems (7)

(7) If you're interested, we'll discuss this off-line.

Graph Representations

Basic data structures
- Adjacency matrix
- Adjacency list

Are graphs sparse or dense?
- Determines which data structure to use
  - Adjacency matrix: operations on incoming links are easy (column scan)
  - Adjacency list: operations on outgoing links are easy
  - The shuffle and sort phase can help, by grouping edges by their destination reducer
- [6] dispelled the notion of sparseness of real-world graphs

Parallel Breadth-First Search

Single-source shortest path
- Dijkstra's algorithm uses a global priority queue
  - Maintains a globally sorted list of nodes by current distance
- How to solve this problem in parallel?
  - "Brute-force" approach: breadth-first search

Parallel BFS: intuition
- Flooding
- Iterative algorithm in MapReduce
- Shoehorn message-passing style algorithms

Parallel Breadth-First Search

Assumptions
- Connected, directed graph
- Data structure: adjacency list
- The distance to each node is stored alongside the adjacency list of that node

The pseudo-code
- We use n to denote the node id (an integer)
- We use N to denote the node adjacency list and current distance
- The algorithm works by mapping over all nodes
- Mappers emit a key-value pair for each neighbor on the node's adjacency list
  - The key: node id of the neighbor
  - The value: the current distance to the node plus one
  - If we can reach node n with a distance d, then we must be able to reach all the nodes connected to n with distance d + 1

Parallel Breadth-First Search

The pseudo-code (continued)
- After shuffle and sort, reducers receive keys corresponding to the destination node ids and distances corresponding to all paths leading to that node
- The reducer selects the shortest of these distances and updates the distance in the node data structure

Passing the graph along
- The mapper: emits the node adjacency list, with the node id as the key
- The reducer: must distinguish between the node data structure and the distance values
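
A minimal Python sketch of one iteration, with tagged values so the reducer can tell the node structure apart from distance proposals (the tagging scheme is an assumption of this sketch):

    from collections import defaultdict

    INF = float("inf")

    def mapper(node_id, node):
        d, adj = node                       # (current distance, adjacency list)
        yield node_id, ("graph", node)      # pass the node structure along
        if d < INF:
            for m in adj:
                yield m, ("dist", d + 1)    # reach each neighbor at d + 1

    def reducer(node_id, values):
        adj, best = [], INF
        for tag, v in values:
            if tag == "graph":
                d, adj = v
                best = min(best, d)         # the node's current distance
            else:
                best = min(best, v)         # a proposed distance
        yield node_id, (best, adj)

    # One iteration on a toy graph, source "s" at distance 0:
    graph = {"s": (0, ["a", "b"]), "a": (INF, ["b"]), "b": (INF, [])}
    groups = defaultdict(list)
    for nid, node in graph.items():
        for k, v in mapper(nid, node):
            groups[k].append(v)
    print({k: out for k in sorted(groups) for _, out in reducer(k, groups[k])})
    # {'a': (1, ['b']), 'b': (1, []), 's': (0, ['a', 'b'])}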

Parallel Breadth-First Search

MapReduce iterations
- The first time we run the algorithm, we "discover" all nodes connected to the source
- In the second iteration, we discover all nodes connected to those
- → Each iteration expands the "search frontier" by one hop
- How many iterations before convergence?

This approach is suitable for small-world graphs
- The diameter of the network is small
- See [5] for advanced topics on the subject

Parallel Breadth-First Search

Checking the termination of the algorithm
- Requires a "driver" program which submits a job, checks the termination condition, and eventually iterates
- In practice:
  - Hadoop counters
  - Side-data passed to the job configuration

Extensions
- Storing the actual shortest path
- Weighted edges (as opposed to unit distance)

The story so far

The graph structure is stored in adjacency lists
- This data structure can be augmented with additional information

The MapReduce framework
- Maps over the node data structures, involving only the node's internal state and its local graph structure
- Map results are "passed" along outgoing edges
- The graph itself is passed from the mapper to the reducer
  - This is a very costly operation for large graphs!
- Reducers aggregate over "same destination" nodes

Graph algorithms are generally iterative
- Require a driver program to check for termination

Introduction

What is PageRank
- It's a measure of the relevance of a Web page, based on the structure of the hyperlink graph
- Based on the concept of a random Web surfer

Formally we have:

  P(n) = α (1/|G|) + (1 − α) Σ_{m ∈ L(n)} P(m)/C(m)

- |G| is the number of nodes in the graph
- α is a random jump factor
- L(n) is the set of pages that link to n
- C(m) is the out-degree of node m

PageRank in Details

PageRank is defined recursively, hence we need an iterative algorithm
- A node receives "contributions" from all pages that link to it

Consider the set of nodes L(n)
- A random surfer at m arrives at n with probability 1/C(m)
- Since the PageRank value of m is the probability that the random surfer is at m, the probability of arriving at n from m is P(m)/C(m)

To compute the PageRank of n we need to:
- Sum the contributions from all pages that link to n
- Take into account the random jump, which is uniform over all nodes in the graph

PageRank in MapReduce

Sketch of the MapReduce algorithm
- The algorithm maps over the nodes
- For each node, it computes the PageRank mass that needs to be distributed to its neighbors
- Each fraction of the PageRank mass is emitted as the value, keyed by the node ids of the neighbors
- In the shuffle and sort, values are grouped by node id
  - Also, we pass the graph structure from mappers to reducers (for subsequent iterations to take place over the updated graph)
- The reducer updates the value of the PageRank of every single node
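
The pseudo-code figures on the preceding slides did not survive extraction; here is a minimal Python sketch of one simplified iteration (no dangling-node handling, which the slides delegate to a second job):

    from collections import defaultdict

    def mapper(node_id, node):
        p, adj = node                      # (PageRank, adjacency list)
        yield node_id, ("graph", adj)      # pass the structure along
        if adj:
            share = p / len(adj)           # mass sent to each neighbor
            for m in adj:
                yield m, ("mass", share)

    def reducer(node_id, values, alpha, n):
        adj, mass = [], 0.0
        for tag, v in values:
            if tag == "graph":
                adj = v
            else:
                mass += v
        # random jump plus damped incoming mass
        yield node_id, (alpha / n + (1 - alpha) * mass, adj)

    alpha = 0.15
    graph = {"x": (1/3, ["y"]), "y": (1/3, ["x", "z"]), "z": (1/3, ["x"])}
    for _ in range(20):                    # a driver iterates until stable
        groups = defaultdict(list)
        for nid, node in graph.items():
            for k, v in mapper(nid, node):
                groups[k].append(v)
        graph = {k: out for k in groups
                 for _, out in reducer(k, groups[k], alpha, len(graph))}
    print({k: round(p, 3) for k, (p, _) in sorted(graph.items())})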

PageRank in MapReduce

Implementation details
- Loss of PageRank mass for sink nodes
- Auxiliary state information
- One iteration of the algorithm
  - Two MapReduce jobs: one to distribute the PageRank mass, the other for dangling nodes and random jumps
- Checking for convergence
  - Requires a driver program
  - When updates of PageRank are "stable", the algorithm stops

Further reading on convergence and attacks
- Convergence: [7, 3]

References


[1] Michele Banko and Eric Brill. Scaling to very very large corpora for natural language disambiguation. In Proc. of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), 2001.

[2] Luiz Andre Barroso and Urs Holzle. The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Morgan & Claypool Publishers, 2009.

[3] Monica Bianchini, Marco Gori, and Franco Scarselli. Inside PageRank. ACM Transactions on Internet Technology, 2005.


[4] James Hamilton. Cooperative expendable micro-slice servers (CEMS): Low cost, low power servers for internet-scale services. In Proc. of the 4th Biennial Conference on Innovative Data Systems Research (CIDR), 2009.

[5] Silvio Lattanzi, Benjamin Moseley, Siddharth Suri, and Sergei Vassilvitskii. Filtering: a method for solving graph problems in MapReduce. In Proc. of SPAA, 2011.

[6] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proc. of SIGKDD, 2005.


[7] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the Web. Stanford Digital Library Working Paper, 1999.