Hadoop architecture (Delhi Hadoop User Group Meetup, 10 Sep 2011)


DESCRIPTION

These slides cover the very basics of Hadoop architecture, in particular HDFS. This was my presentation at the first Delhi Hadoop User Group (DHUG) meetup, held at Gurgaon on 10th September 2011. Loved the positive feedback. I'll also upload a more elaborate version covering the Hadoop MapReduce architecture soon. Most of the material in these slides can also be found in Tom White's book (see the last slide).

TRANSCRIPT

Page 1: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Hadoop architecture: An overview

Hari Shankar Sreekumar, Software Engineer @Clickable

Page 2: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Ideas

• Store and process large amounts of data (petabytes)

• Scale horizontally

• Failure is normal

• Distributed computing (MapReduce)

• Moving computation is cheaper than moving data

Page 3: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

What is Hadoop?

HDFS, Hadoop Common, MapReduce, Pig, Hive, HBase, ZooKeeper, Avro, Cassandra, Mahout, ...


Page 5: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Hadoop Distributed File System

A distributed filesystem designed for storing very large files with streaming data access, running on clusters of commodity hardware.

HDFS was designed with MapReduce in mind.

Consists of a cluster of machines, each machine performing one or more of the following roles:

Namenode (only one per cluster)
Secondary namenode / checkpoint node (only one per cluster)
Datanodes (many per cluster)

Page 6: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

HDFS Blocks

• Blocks in disks: the minimum amount of data that can be read or written (~512 bytes).

• Filesystem blocks: an abstraction over disk blocks (~a few kilobytes).

• HDFS block: an abstraction over filesystem blocks, to facilitate distribution over the network and other requirements of Hadoop. Usually 64 MB or 128 MB.

• The block abstraction keeps the design simple, e.g. replication is at the block level rather than the file level.

• A file is split into blocks for storage in HDFS. Blocks of the same file can reside on multiple machines in the cluster (see the sketch below).

• Each block is stored as a file in the local FS of the DataNode.

• Block size does not refer to size on disk: a 1 MB file will not take up 64 MB on disk.
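
To make the arithmetic concrete, here is a minimal plain-Java sketch of how a hypothetical 200 MB file maps onto 64 MB blocks. This is illustration only, not Hadoop code.

    // Sketch: how one file splits into HDFS blocks (sizes are illustrative).
    public class BlockMath {
        public static void main(String[] args) {
            long blockSize = 64L * 1024 * 1024;   // typical HDFS block: 64 MB
            long fileSize  = 200L * 1024 * 1024;  // a hypothetical 200 MB file

            long fullBlocks = fileSize / blockSize;   // 3 full 64 MB blocks
            long remainder  = fileSize % blockSize;   // plus one 8 MB block

            // The last block occupies only 8 MB on disk, not the full 64 MB.
            System.out.println("Blocks: " + (fullBlocks + (remainder > 0 ? 1 : 0)));
            System.out.println("Last block bytes: " + (remainder > 0 ? remainder : blockSize));
        }
    }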

Page 7: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Namenode and Datanodes

• The "master" node

• Maintains the HDFS namespace: the filesystem tree and metadata.

• Maintains the mapping from each file to the list of blockIDs holding the file's data.

• The metadata mapping is maintained in memory as well as persisted on disk.

• Maintains in memory the locations of each block (block-to-datanode mapping).

• Memory requirement: ~150 bytes/file, so 10 million files need on the order of 1.5 GB of namenode heap.

• Issues instructions to datanodes to create/replicate/delete blocks.

• Single point of failure.

Page 8: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Datanodes

• The "slaves"

• Serve as storage for data blocks

• Hold no filesystem metadata

• Report all their blocks to the namenode at startup (BlockReport)

• Send a periodic "heartbeat" to the namenode (see the sketch below)

• Serve read and write requests; perform block creation, deletion and replication upon instruction from the namenode.

• User data never flows through the NameNode.
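
The heartbeat and block-report frequencies are ordinary configuration knobs. A minimal sketch using Hadoop 1.x-era property names; the values shown are illustrative and worth checking against hdfs-default.xml (normally these live in hdfs-site.xml rather than being set in code):

    import org.apache.hadoop.conf.Configuration;

    public class DatanodeIntervals {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Heartbeat interval in seconds (3 is the usual default).
            conf.set("dfs.heartbeat.interval", "3");
            // Full block reports are far less frequent; interval is in ms.
            conf.set("dfs.blockreport.intervalMsec", "3600000");
        }
    }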

Page 9: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Secondary namenode/Checkpoint node

• Exists to reduce the risk of data loss if the namenode fails.

• The namenode's persistent data is stored in two files: the FsImage and the Edit log.

• Changes to file metadata go into the Edit log.

• The secondary namenode periodically merges the Edit log into the FsImage.

• Data loss will still happen if the namenode fails.

• Configure Hadoop to write the Edit log to a remote NFS mount as well (see the sketch below). In case of failure, copy the metadata files from NFS to the secondary namenode and run it.

• The NFS idea has a (very low) performance impact.

• Failover is NOT automatic.
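
A sketch of that configuration, with hypothetical paths. dfs.name.dir (the Hadoop 1.x-era property name) takes a comma-separated list of directories, and the namenode writes its FsImage and Edit log to every one of them; in practice this would be set in hdfs-site.xml.

    import org.apache.hadoop.conf.Configuration;

    public class NameDirConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Local disk plus a remote NFS mount, so the metadata
            // survives the loss of the namenode machine itself.
            conf.set("dfs.name.dir", "/data/hdfs/name,/mnt/remote-nfs/hdfs/name");
        }
    }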

Page 10: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Image: Hadoop: The Definitive Guide (Tom White)

Page 11: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Replication and rack-awareness

• Replication in Hadoop is at the block level.

• Replication is "rack-aware".

• Three levels of replica-placement preference: same machine > same rack > different rack.

• Replication can be configured per file. It can also be set from the application (see the sketch below).

• Selection of the blocks to process in a MapReduce job takes advantage of rack-awareness.

• Reading and writing on HDFS also make use of rack-awareness.

• Rack-awareness is NOT automatic and needs to be configured. By default, all nodes are assumed to be in the same rack.
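
A minimal sketch of setting replication from an application, via the public FileSystem API; the path and replication factor are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Raise the replication factor of one file to 5; other files
            // keep the cluster default (dfs.replication, usually 3).
            fs.setReplication(new Path("/user/hari/important.log"), (short) 5);
            fs.close();
        }
    }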

Page 12: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Reading from HDFS

Image: Hadoop: The Definitive Guide (Tom White)

On failure, the client moves on to the next "closest" node that holds the block. Data flows over a direct connection between the client and the datanode (see the sketch below).
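
A minimal read sketch against the FileSystem API; the path is hypothetical.

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsRead {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // open() asks the namenode for block locations; the bytes then
            // stream directly from the datanodes, never via the namenode.
            InputStream in = fs.open(new Path("/user/hari/sample.txt"));
            try {
                IOUtils.copyBytes(in, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(in);
            }
        }
    }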

Page 13: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Writing to HDFS

Minimum replication for a successful write: dfs.replication.min.

Files in HDFS are write-once and have strictly one writer at any time (see the sketch below).

Image: Hadoop: The Definitive Guide (Tom White)
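
And a matching minimal write sketch, again with a hypothetical path.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // create() opens the single writer for the new file; data is
            // pipelined through the replica datanodes as it is written.
            FSDataOutputStream out = fs.create(new Path("/user/hari/output.txt"));
            out.writeUTF("hello hdfs");
            out.close();  // completes the write once enough replicas ack
            fs.close();
        }
    }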

Page 14: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Hadoop Common

File system abstraction: The File System (FS) shell includes various shell-like commands that directly interact with the Hadoop Distributed File System (HDFS) as well as other file systems that Hadoop supports, such as Local FS, HFTP FS, S3 FS, and others. The same abstraction is exposed programmatically through the FileSystem API (see the sketch below).
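
A minimal sketch of the programmatic side of that abstraction; the URIs are hypothetical. The same FileSystem API fronts HDFS, the local FS and the rest, with the URI scheme selecting the implementation.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsAbstraction {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // hdfs:// picks the HDFS client; file:// picks the local FS.
            FileSystem hdfs  = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
            FileSystem local = FileSystem.get(URI.create("file:///"), conf);
            System.out.println(hdfs.getUri() + " vs " + local.getUri());
        }
    }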

Service-level authorization: Service-level authorization is the initial authorization mechanism, ensuring that clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access the given service. For example, a MapReduce cluster can use this mechanism to allow only a configured list of users/groups to submit jobs.

Page 15: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Data Integrity

• A separate 32-bit checksum is created for every io.bytes.per.checksum bytes (default is 512 bytes, i.e. 4 checksum bytes per 512 data bytes, an overhead under 1 %).

• Checksums are stored with each data block.

• Verified after each operation that might result in data corruption. Also checked periodically.

• Can be used in non-HDFS filesystems also.
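
A plain-Java illustration of the per-chunk checksum idea. Hadoop uses CRC-32 checksums, but this sketch is not Hadoop's implementation; it just shows one 32-bit checksum per 512-byte chunk.

    import java.util.zip.CRC32;

    public class ChunkChecksums {
        public static void main(String[] args) {
            byte[] data = new byte[2048];   // stand-in for file contents
            int chunk = 512;                // io.bytes.per.checksum
            CRC32 crc = new CRC32();
            for (int off = 0; off < data.length; off += chunk) {
                crc.reset();
                crc.update(data, off, Math.min(chunk, data.length - off));
                // 4 checksum bytes per 512 data bytes => < 1 % overhead
                System.out.printf("chunk@%d crc=%08x%n", off, crc.getValue());
            }
        }
    }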

Page 16: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Compression utilities

• Reduces space usage

• Reduces bandwidth usage

Ref: Hadoop: The Definitive Guide (Tom White)

Splittable LZO is available separately and is a good trade-off between compression speed and compressed size.
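
A minimal decompression sketch using Hadoop's codec factory; the file path is hypothetical, and the factory infers the codec (here GzipCodec) from the .gz extension.

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class DecompressFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/user/hari/logs.gz");
            // Returns null for an unrecognized extension; assumed valid here.
            CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(path);
            InputStream in = codec.createInputStream(fs.open(path));
            IOUtils.copyBytes(in, System.out, 4096, true);  // true closes streams
        }
    }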

Page 17: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Serialization utilities

• Extremely important for Hadoop. A good serialization format is compact, fast, extensible and interoperable.

• Java serialization is too cumbersome and heavyweight for Hadoop, so Hadoop uses its own serialization, based on the Writable interface (see the sketch below).

• Other frameworks such as Avro, Thrift and Protocol Buffers are also used.
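
A minimal custom Writable, as a sketch of what the interface asks for; the type and its fields are hypothetical.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    // Two fields serialized in a fixed order: compact (8 bytes total,
    // no class metadata), unlike default Java serialization.
    public class PointWritable implements Writable {
        private int x;
        private int y;

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            x = in.readInt();  // must read in exactly the order written
            y = in.readInt();
        }
    }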

Page 18: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

MapReduce Framework

• The Jobtracker receives MapReduce job execution requests from the client (see the driver sketch below).

• Does sanity checks to see whether the job is configured properly.

• Computes the input splits.

• Loads the resources required for the job into HDFS.

• Assigns splits to tasktrackers for the map and reduce phases.

• Map split assignment is data-locality-aware.

• Single point of failure.

• The Tasktracker creates a new process for each task and executes it.

• Sends periodic heartbeats to the Jobtracker, along with other information about the task.
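
A minimal driver sketch in the 0.20-era "new" MapReduce API. The mapper and reducer classes are hypothetical placeholders, left commented out so the snippet compiles as-is; submission goes to the jobtracker configured for the cluster.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = new Job();
            job.setJarByClass(WordCountDriver.class);
            // Hypothetical user classes:
            // job.setMapperClass(WordCountMapper.class);
            // job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // Submits the job and polls progress until it finishes.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }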

Page 19: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

Image: Hadoop: The Definitive Guide (Tom White)

Page 20: Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

References

http://hadoop.apache.org/common/docs/current/hdfs_design.html

Hadoop: The Definitive Guide, by Tom White. Copyright 2009 Tom White, ISBN 978-0-596-52197-4.