Apache Hadoop Talk at QCon


TRANSCRIPT

Apache Hadoop, Big Data, and You

Philip Zeyliger, [email protected], @philz42 @cloudera. November 18, 2009

Hi there!

Software Engineer

Worked at

I work on stuff...

Outline

Why should you care? (Intro)

Challenging yesteryear’s assumptions

The MapReduce Model

HDFS, Hadoop Map/Reduce

The Hadoop Ecosystem

Questions

Data is everywhere.

Data is important.

“I keep saying that the sexy job in the next 10 years will be statisticians, and I’m not kidding.”

Hal Varian (Google’s chief economist)

So, what’s Hadoop?

The Little Prince, Antoine de Saint-Exupéry, Irene Testot-Ferry

Apache Hadoop is an open-source system (written in Java!) to store and process gobs of data across many commodity computers.

The Little Prince, Antoine de Saint-Exupéry, Irene Testot-Ferry

Two Big Components

HDFS: self-healing, high-bandwidth clustered storage.

Map/Reduce: fault-tolerant distributed computing.

Challenging some of yesteryear’s assumptions...

Assumption 1: Machines can be reliable...

Image: MadMan the Mighty CC BY-NC-SA

Hadoop Goal:

Separate distributed system fault-tolerance code from application logic.

Systems Programmers Statisticians

Assumption 2: Machines have identities...

Image: Laughing Squid CC BY-NC-SA

Hadoop Goal:

Users should interact with clusters, not machines.

Assumption 3: A data set fits on one machine...

Image: Matthew J. Stinson CC-BY-NC

Hadoop Goal:

System should scale linearly (or better) with data size.

The M/R Programming Model

You specify map() and reduce() functions.

The framework does the rest.

map()

map: (K₁, V₁) → list(K₂, V₂)

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
  /**
   * Called once for each key/value pair in the input split. Most applications
   * should override this, but the default is the identity function.
   */
  protected void map(KEYIN key, VALUEIN value, Context context)
      throws IOException, InterruptedException {
    // context.write() can be called many times
    // this is the default “identity mapper” implementation
    context.write((KEYOUT) key, (VALUEOUT) value);
  }
}
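The identity mapper above just passes its input through. As a concrete sketch of a non-identity mapper (a word-count-style tokenizer against the same org.apache.hadoop.mapreduce API; the class name TokenCountMapper is illustrative, not from the talk):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (word, 1) for every token in each input line.
public class TokenCountMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {

  private static final LongWritable ONE = new LongWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokens = new StringTokenizer(line.toString());
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      context.write(word, ONE);   // one (K₂, V₂) pair per token
    }
  }
}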

(the shuffle)

map output is assigned to a “reducer” (a sketch of the default assignment follows below)

map output is sorted by key
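Which reducer a given map output key goes to is decided by a partitioner; the default behavior is to hash the key and take it modulo the number of reducers. A minimal sketch that mirrors that behavior (the class name here is illustrative):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Equivalent in spirit to the default hash partitioner:
// the same key always lands on the same reducer.
public class HashLikePartitioner extends Partitioner<Text, Text> {
  @Override
  public int getPartition(Text key, Text value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}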

reduce()

reduce: (K₂, iter(V₂)) → list(K₃, V₃)

public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
  /**
   * This method is called once for each key. Most applications will define
   * their reduce class by overriding this method. The default implementation
   * is an identity function.
   */
  @SuppressWarnings("unchecked")
  protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context)
      throws IOException, InterruptedException {
    for (VALUEIN value : values) {
      context.write((KEYOUT) key, (VALUEOUT) value);
    }
  }
}
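To pair with the mapper sketch earlier, a summing reducer completes a word count. Again a sketch against the same API, with an illustrative class name:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts emitted by the mapper sketch: (word, [1, 1, ...]) -> (word, total).
public class TokenCountReducer
    extends Reducer<Text, LongWritable, Text, LongWritable> {

  @Override
  protected void reduce(Text word, Iterable<LongWritable> counts, Context context)
      throws IOException, InterruptedException {
    long total = 0;
    for (LongWritable count : counts) {
      total += count.get();
    }
    context.write(word, new LongWritable(total));
  }
}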

Putting it together...

(Diagrams: the logical flow and the physical flow of a MapReduce job.)

Some samples...

Build an inverted index (a mapper sketch follows after this list).

Summarize data grouped by a key.

Build map tiles from geographic data.

OCRing many images.

Learning ML models. (e.g., Naive Bayes for text classification)

Augment traditional BI/DW technologies (by archiving raw data).
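For the first item above, a minimal inverted-index mapper could emit (term, documentId) pairs and let the reducer collect the document list per term. A sketch, assuming each input value is a tab-separated "documentId<TAB>document text" line (the class name and input layout are illustrative):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (term, documentId) for every term in a document.
public class InvertedIndexMapper extends Mapper<Object, Text, Text, Text> {
  private final Text term = new Text();
  private final Text docId = new Text();

  @Override
  protected void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] parts = value.toString().split("\t", 2);
    if (parts.length < 2) {
      return; // skip malformed lines
    }
    docId.set(parts[0]);
    StringTokenizer tokens = new StringTokenizer(parts[1]);
    while (tokens.hasMoreTokens()) {
      term.set(tokens.nextToken());
      context.write(term, docId);
    }
  }
}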

There’s more than the Java API

Streaming: perl, python, ruby, whatever; your code talks to the framework via stdin/stdout/stderr.

Pig: higher-level dataflow language for easy ad-hoc analysis. Developed at Yahoo!

Hive: SQL interface. Great for analysts. Developed at Facebook.

Friday, @10:10

A typical look...

Commodity servers (8-core, 8-16 GB RAM, 4-12 TB disk, 2x 1 GbE NICs)

2-level network architecture

20-40 nodes per rack

The cast...

NameNode (metadata server and database)

SecondaryNameNode (assistant to NameNode)

JobTracker (scheduler)

DataNodes (block storage)

TaskTrackers (task execution)

Thanks to Zak Stone for earmuff image!

Starring...

The Chorus…

HDFS

Namenode

Datanodes

(Diagram: datanodes across two racks, “One Rack” and “A Different Rack”, showing block placement: a 3x64MB file with 3 replicas, a 4x64MB file with 3 replicas, and a small file with 7 replicas.)

HDFS Write Path

...file in the filesystem’s namespace, with no blocks associated with it. (Step 2.) The namenode performs various checks to make sure the file doesn’t already exist, and that the client has the right permissions to create the file. If these checks pass, the namenode makes a record of the new file; otherwise file creation fails and the client is thrown an IOException. The DistributedFileSystem returns an FSDataOutputStream for the client to start writing data to. Just as in the read case, FSDataOutputStream wraps a DFSOutputStream, which handles communication with the datanodes and namenode.

As the client writes data (Step 3.), DFSOutputStream splits it into packets, which it writes to an internal queue, called the data queue. The data queue is consumed by the DataStreamer, whose responsibility it is to ask the namenode to allocate new blocks by picking a list of suitable datanodes to store the replicas. The list of datanodes forms a pipeline; we’ll assume the replication level is three, so there are three nodes in the pipeline. The DataStreamer streams the packets to the first datanode in the pipeline, which stores the packet and forwards it to the second datanode in the pipeline. Similarly, the second datanode stores the packet and forwards it to the third (and last) datanode in the pipeline. (Step 4.)

DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, called the ack queue. A packet is only removed from the ack queue when it has been acknowledged by all the datanodes in the pipeline. (Step 5.)

If a datanode fails while data is being written to it, then the following actions are taken, which are transparent to the client writing the data. First the pipeline is closed, and any packets in the ack queue are added to the front of the data queue so that datanodes that are downstream from the failed node will not miss any packets. The current block on the good datanodes is given a new identity, which is communicated to the namenode, so that the partial block on the failed datanode will be deleted if the failed datanode recovers later on. The failed datanode is removed from the pipeline and the remainder of the block’s data is written to the two good datanodes in the pipeline. The namenode notices that the block is under-replicated, and it arranges for a further replica to be created on another node. Subsequent blocks are then treated as normal.

It’s possible, but unlikely, that multiple datanodes fail while a block is being written. As long as dfs.replication.min replicas (default one) are written, the write will succeed, and the block will be asynchronously replicated across the cluster until its target replication factor is reached (dfs.replication, which defaults to three).

Figure 3-3. A client writing data to HDFS
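From the client’s side, all of that pipelining is hidden behind an ordinary output stream. A minimal sketch of writing a file through the Java FileSystem API (the path and contents here are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml, etc.
    FileSystem fs = FileSystem.get(conf);       // DistributedFileSystem when configured for HDFS
    // Steps 1-2: the namenode records the new file.
    FSDataOutputStream out = fs.create(new Path("/tmp/hello.txt"));
    try {
      // Steps 3-5: packets flow down the datanode pipeline behind this call.
      out.writeBytes("hello, hdfs\n");
    } finally {
      out.close();
    }
  }
}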

HDFS Failures?

Datanode crash?

Clients read another copy

Background rebalance

Namenode crash?

uh-oh

M/R

Tasktrackers on the same machines as datanodes

(Diagram: tasktrackers across two racks, “One Rack” and “A Different Rack”; legend: job on stars, different job, idle.)

M/R

CHAPTER 6

How MapReduce Works

In this chapter we’ll look at how MapReduce in Hadoop works in detail. This knowledge provides a good foundation for writing more advanced MapReduce programs, which we will cover in the following two chapters.

Anatomy of a MapReduce Job Run

You can run a MapReduce job with a single line of code: JobClient.runJob(conf). It’s very short, but it conceals a great deal of processing behind the scenes. This section uncovers the steps Hadoop takes to run a job.

The whole process is illustrated in Figure 6-1. At the highest level there are four independent entities:

• The client, which submits the MapReduce job.

• The jobtracker, which coordinates the job run. The jobtracker is a Java application whose main class is JobTracker.

• The tasktrackers, which run the tasks that the job has been split into. Tasktrackers are Java applications whose main class is TaskTracker.

• The distributed filesystem (normally HDFS, covered in Chapter 3), which is used for sharing job files between the other entities.

Figure 6-1. How Hadoop runs a MapReduce job
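As a minimal sketch of what that “single line of code” looks like in context, using the old org.apache.hadoop.mapred API the excerpt refers to (the class name is a placeholder; with no mapper or reducer set, the identity implementations run):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MinimalDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MinimalDriver.class);
    conf.setJobName("minimal-example");

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Default TextInputFormat produces (LongWritable offset, Text line);
    // the identity map and reduce pass those pairs straight through.
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);

    JobClient.runJob(conf);  // submits the job and waits for completion
  }
}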

M/R Failures

Task fails

Try again?

Try again somewhere else?

Report failure

Retries possible because of idempotence

Hadoop in the Wild

Yahoo! Hadoop Clusters: > 82PB, >25k machines (Eric14, HadoopWorld NYC ’09)

Google: 40 GB/s GFS read/write load (Jeff Dean, LADIS ’09) [~3,500 TB/day]

Facebook: 4TB new data per day; DW: 4800 cores, 5.5 PB (Dhruba Borthakur, HadoopWorld)

The Hadoop Ecosystem

HDFS (Hadoop Distributed File System)

HBase (Key-Value store)

MapReduce (Job Scheduling/Execution System)

Pig (Data Flow) Hive (SQL)

BI Reporting, ETL Tools

Avro (Serialization)

ZooKeeper (Coordination)

Sqoop

RDBMS

Ok, fine, what next?

Get Hadoop!

http://hadoop.apache.org/

Cloudera Distribution for Hadoop

Try it out! (Locally, or on EC2)

Just one slide...

Software: Cloudera Distribution for Hadoop, Cloudera Desktop, more…

Training and certification…

Free on-line training materials (including video)

Support & Professional Services

@cloudera, blog, etc.

Questions?

[email protected]

Backup Slides

Important APIs

M/R Flow:
  Input Format    data → K₁,V₁
  Mapper          K₁,V₁ → K₂,V₂
  Combiner        K₂, iter(V₂) → K₂,V₂
  Partitioner     K₂,V₂ → int
  Reducer         K₂, iter(V₂) → K₃,V₃
  Out. Format     K₃,V₃ → data

(→ is 1:many)

Other: Writable, JobClient, *Context, Filesystem (a Writable sketch follows below)
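As promised above, a custom Writable only needs to serialize and deserialize its own fields. A minimal sketch (the type itself is made up for illustration; a key type would additionally need WritableComparable):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// A hypothetical value type: a count plus a timestamp, usable as a map/reduce value.
public class CountAndTimestamp implements Writable {
  private long count;
  private long timestampMillis;

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(count);
    out.writeLong(timestampMillis);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    count = in.readLong();
    timestampMillis = in.readLong();
  }
}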

$ cat input.txt
adams dunster kirkland dunster
kirland dudley dunster
adams dunster winthrop

$ bin/hadoop jar hadoop-0.18.3-examples.jar grep input.txt output1 'dunster|adams'

$ cat output1/part-00000
4       dunster
2       adams

public int run(String[] args) throws Exception {
  if (args.length < 3) {
    System.out.println("Grep <inDir> <outDir> <regex> [<group>]");
    ToolRunner.printGenericCommandUsage(System.out);
    return -1;
  }

  Path tempDir = new Path("grep-temp-" +
      Integer.toString(new Random().nextInt(Integer.MAX_VALUE)));

  JobConf grepJob = new JobConf(getConf(), Grep.class);

  try {
    grepJob.setJobName("grep-search");

    FileInputFormat.setInputPaths(grepJob, args[0]);

    grepJob.setMapperClass(RegexMapper.class);
    grepJob.set("mapred.mapper.regex", args[2]);
    if (args.length == 4)
      grepJob.set("mapred.mapper.regex.group", args[3]);

    grepJob.setCombinerClass(LongSumReducer.class);
    grepJob.setReducerClass(LongSumReducer.class);

    FileOutputFormat.setOutputPath(grepJob, tempDir);
    grepJob.setOutputFormat(SequenceFileOutputFormat.class);
    grepJob.setOutputKeyClass(Text.class);
    grepJob.setOutputValueClass(LongWritable.class);

    JobClient.runJob(grepJob);

    JobConf sortJob = new JobConf(Grep.class);
    sortJob.setJobName("grep-sort");

    FileInputFormat.setInputPaths(sortJob, tempDir);
    sortJob.setInputFormat(SequenceFileInputFormat.class);

    sortJob.setMapperClass(InverseMapper.class);

    // write a single file
    sortJob.setNumReduceTasks(1);

    FileOutputFormat.setOutputPath(sortJob, new Path(args[1]));
    // sort by decreasing freq
    sortJob.setOutputKeyComparatorClass(LongWritable.DecreasingComparator.class);

    JobClient.runJob(sortJob);
  } finally {
    FileSystem.get(grepJob).delete(tempDir, true);
  }
  return 0;
}

the “grep” example

JobConf grepJob = new JobConf(getConf(), Grep.class);
try {
  grepJob.setJobName("grep-search");

  FileInputFormat.setInputPaths(grepJob, args[0]);

  grepJob.setMapperClass(RegexMapper.class);
  grepJob.set("mapred.mapper.regex", args[2]);
  if (args.length == 4)
    grepJob.set("mapred.mapper.regex.group", args[3]);

  grepJob.setCombinerClass(LongSumReducer.class);
  grepJob.setReducerClass(LongSumReducer.class);

  FileOutputFormat.setOutputPath(grepJob, tempDir);
  grepJob.setOutputFormat(SequenceFileOutputFormat.class);
  grepJob.setOutputKeyClass(Text.class);
  grepJob.setOutputValueClass(LongWritable.class);

  JobClient.runJob(grepJob);
} ...

Job 1 of 2

  JobConf sortJob = new JobConf(Grep.class);
  sortJob.setJobName("grep-sort");

  FileInputFormat.setInputPaths(sortJob, tempDir);
  sortJob.setInputFormat(SequenceFileInputFormat.class);

  sortJob.setMapperClass(InverseMapper.class);

  // write a single file
  sortJob.setNumReduceTasks(1);

  FileOutputFormat.setOutputPath(sortJob, new Path(args[1]));
  // sort by decreasing freq
  sortJob.setOutputKeyComparatorClass(LongWritable.DecreasingComparator.class);

  JobClient.runJob(sortJob);
} finally {
  FileSystem.get(grepJob).delete(tempDir, true);
}
return 0;
}

Job 2 of 2

(implicit identity reducer)

The types there...

?, Text
Text, Long
Long, Text
Text, list(Long)
Text, Long

A Simple Join

People:
  Id  Last        First
  1   Washington  George
  2   Lincoln     Abraham

Entry Log (join key: Id):
  Location  Id  Time
  Dunster   1   11:00am
  Dunster   2   11:02am
  Kirkland  2   11:08am

You want to track individuals throughout the day. How would you do this in M/R, if you had to? (A reduce-side join sketch follows below.)
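One common answer is a reduce-side join: tag each record with its source table, key the map output by the person Id, and stitch the sides together in the reducer. A rough sketch, assuming tab-separated input lines (class names and the tagging scheme are illustrative, not from the talk):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map side: emit (id, tagged record) for both inputs.
// Assumes people lines look like "id<TAB>last<TAB>first" and
// entry-log lines look like "location<TAB>id<TAB>time".
public class JoinMapper extends Mapper<Object, Text, Text, Text> {
  @Override
  protected void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split("\t");
    if (fields.length != 3) {
      return; // skip malformed lines
    }
    if (fields[0].matches("\\d+")) {
      // People record: key by id, tag with "P"
      context.write(new Text(fields[0]), new Text("P\t" + fields[1] + " " + fields[2]));
    } else {
      // Entry-log record: key by id, tag with "L"
      context.write(new Text(fields[1]), new Text("L\t" + fields[0] + " " + fields[2]));
    }
  }
}

// Reduce side: for each id, pair the person with every log entry.
class JoinReducer extends Reducer<Text, Text, Text, Text> {
  @Override
  protected void reduce(Text id, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    String person = null;
    List<String> entries = new ArrayList<String>();
    for (Text value : values) {
      String[] parts = value.toString().split("\t", 2);
      if ("P".equals(parts[0])) {
        person = parts[1];
      } else {
        entries.add(parts[1]);
      }
    }
    if (person != null) {
      for (String entry : entries) {
        context.write(new Text(person), new Text(entry));
      }
    }
  }
}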
