
Page 1: Handling Big Data

Handling Big Data

Howles
Credits to Sources on Final Slide

Page 2: Handling Big Data

Handling Large Amounts of Data

• Current technologies are to:
– Parallelize – use multiple processors or threads. Can be a single machine, or a machine with multiple processors
– Distribute – use a network to partition work across many computers

Page 3: Handling Big Data

Parallelized Operations

• This is relatively easy if the task itself can easily be split into units. Still presents some problems, including:
– How is the work assigned?
– What happens if we have more work units than threads or processors?
– How do we know when all work units have completed?
– How do we aggregate results in the end?
– How do we handle it if the work can’t be cleanly divided?
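A minimal sketch in Python of how a thread pool answers several of these questions (work assignment, more units than threads, completion, aggregation), assuming the work splits cleanly into independent units; the names and sizes here are illustrative:

    from concurrent.futures import ThreadPoolExecutor

    def process(unit):
        # stand-in for real per-unit work
        return sum(unit)

    # 10 work units but only 4 threads: the pool assigns units as threads free up
    work_units = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(process, work_units))  # returns when all units complete
    print(sum(partials))  # aggregate the partial results at the end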

Page 4: Handling Big Data

Parallelized Operations

• To solve this problem, we need communication mechanisms

• Need synchronization mechanism for communication (timing/notification of events), and to control sharing (mutex)

Page 5: Handling Big Data

Why is it needed?

• Data consistency
• Orderly execution of instructions or activities
• Timing – control race conditions

Page 6: Handling Big Data

Examples

• Two people want to buy the same seat on a flight

• Readers and writers
• P1 needs a resource but it’s being held by P2
• Two threads updating a single counter
• Bounded Buffer
• Producer/Consumer
• …

Page 7: Handling Big Data

Synchronization Primitives

• Review:
• A special shared variable used to guarantee atomic operations
• Hardware support
– Processor may lock down the memory bus while other reads/writes occur
• Semaphores, monitors, and conditions are examples of language-level synchronization mechanisms

Page 8: Handling Big Data

Needed when:

• Resources need to be shared
– Access data
– Send messages or data
• Timing needs to be coordinated
– Potential race conditions
• Difficult to predict
• Result in inconsistent, corrupt, or destroyed info
• Tricky to find; difficult to recreate
• Activities need to be synchronized

Page 9: Handling Big Data

Producer/Consumer

• Producer:
    while count == MAX
        NOP
    Put in buffer
    count++

• Consumer:
    while count == 0
        NOP
    Remove from buffer
    count--
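A runnable Python rendering of this bounded buffer; a sketch that replaces the busy-wait NOP loops with blocking semaphores (MAX and the item counts are arbitrary):

    import threading
    from collections import deque

    MAX = 8
    buffer = deque()
    empty = threading.Semaphore(MAX)   # counts free slots; blocks producer when full
    full = threading.Semaphore(0)      # counts filled slots; blocks consumer when empty
    mutex = threading.Lock()           # protects the buffer itself

    def producer():
        for item in range(20):
            empty.acquire()            # wait(): blocks instead of "while count == MAX: NOP"
            with mutex:
                buffer.append(item)    # put in buffer
            full.release()             # signal(): one more item available

    def consumer():
        for _ in range(20):
            full.acquire()             # wait(): blocks instead of "while count == 0: NOP"
            with mutex:
                item = buffer.popleft()  # remove from buffer
            empty.release()            # signal(): one more free slot
            print(item)

    threading.Thread(target=producer).start()
    threading.Thread(target=consumer).start()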

Page 10: Handling Big Data

Race Conditions

• … can result in an incorrect solution
• An issue with any shared resource (including devices)
– Printer
– Writers to a disk

Page 11: Handling Big Data

Critical Section

• Also called the critical region
• Segment of code (or device) for which a process must have exclusive use

Page 12: Handling Big Data

Examples of Critical Sections

• Updating/reading a shared counter
• Controlling access to a device or other resource
• Two users want write access to a file
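A minimal demonstration of the first case, assuming Python threads; the with-lock block is the critical section, and removing it can lose updates because counter += 1 is a read-modify-write:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:        # critical section: exclusive use of the shared counter
                counter += 1  # read-modify-write; unsafe without the lock

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; without it, updates can be lost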

Page 13: Handling Big Data

Rules for solutions

• Must enforce mutex
• Must not postpone a process if not warranted (don’t exclude a process from the CR if no other process is in the CR)
• Bounded waiting (to enter the CR)
• No execution time guarantees

Page 14: Handling Big Data

Atomic Operation

• An operation guaranteed to execute without interruption

• How do we enforce atomic operations?

Page 15: Handling Big Data

Semaphores

• Dijkstra, circa 1965
• Two standard operations: wait() and signal()
• Older books may still use P() and V(), respectively (or Down() and Up()). You should be familiar with any notation

Page 16: Handling Big Data

Semaphores

• A semaphore consists of an integer counter and a waiting list of blocked processes
• Initialize the counter (depends on the application)
• wait() decrements the counter and determines if the process must block
• signal() increments the counter and determines if a blocked process can unblock
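A sketch of those internals in Python: the integer counter plus a waiting list, with an internal lock standing in for the atomicity a kernel would provide (the class and attribute names are illustrative, not a real implementation):

    import threading
    from collections import deque

    class Semaphore:
        def __init__(self, count):
            self.count = count            # the integer counter
            self.waiting = deque()        # waiting list of blocked "processes"
            self.lock = threading.Lock()  # makes counter updates atomic

        def wait(self):                   # P(), Down()
            with self.lock:
                self.count -= 1
                ev = None
                if self.count < 0:        # no resource available: must block
                    ev = threading.Event()
                    self.waiting.append(ev)
            if ev:
                ev.wait()                 # block outside the internal lock

        def signal(self):                 # V(), Up()
            with self.lock:
                self.count += 1
                if self.waiting:          # unblock one waiting process
                    self.waiting.popleft().set()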

Page 17: Handling Big Data

Semaphores

• wait() and signal() are atomic operations
• What is the other advantage of a semaphore over the previous solutions?

Page 18: Handling Big Data

Binary Semaphore

• Initialized to one
• Allows only one process access at a time

Page 19: Handling Big Data

Semaphores

• wait() and signal() are usually system calls. Within the kernel, interrupts are disabled to make the counter operations atomic.

Page 20: Handling Big Data

Problems with Semaphores

Assume both semaphores are initialized to 1.

Process 0:
    wait (s);    // 1st
    wait (q);    // 3rd
    …
    signal (s);
    signal (q);

Process 1:
    wait (q);    // 2nd
    wait (s);    // 4th
    …
    signal (q);
    signal (s);
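A runnable illustration of this interleaving, assuming Python locks stand in for the binary semaphores; the sleeps force the 1st/2nd/3rd/4th ordering, and the timeouts exist only so the demo terminates instead of hanging:

    import threading
    import time

    s = threading.Lock()  # stands in for semaphore s, initialized to 1
    q = threading.Lock()  # stands in for semaphore q, initialized to 1

    def process0():
        s.acquire()                  # 1st: wait(s)
        time.sleep(0.1)              # give process1 time to take q
        if q.acquire(timeout=1):     # 3rd: wait(q) -- would block forever
            q.release()
        else:
            print("process0 stuck waiting for q")
        s.release()                  # signal(s)

    def process1():
        q.acquire()                  # 2nd: wait(q)
        time.sleep(0.1)
        if s.acquire(timeout=1):     # 4th: wait(s) -- would block forever
            s.release()
        else:
            print("process1 stuck waiting for s")
        q.release()                  # signal(q)

    t0 = threading.Thread(target=process0)
    t1 = threading.Thread(target=process1)
    t0.start(); t1.start()
    t0.join(); t1.join()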

Page 21: Handling Big Data

Other problems

• Incorrect order
• Forgetting to signal()
• Incorrect initial value

Page 22: Handling Big Data

Monitors

• Encapsulates the synchronization with the code

• Only one process may be active in the monitor at a time

• Waiting processes are blocked (no busy waiting)

Page 23: Handling Big Data

Monitors

• Condition variables control access to the monitor

• Two operations: wait() and signal() (easy to confuse with semaphores, so be careful!)

• enter() and leave() or other named functions may be used

Page 24: Handling Big Data

Monitors

if (some condition)
    call wait() on the monitor

<<mutex>>

call signal() on the monitor
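A sketch of that pattern using Python's threading.Condition, which plays the role of the monitor's condition variable (the while-loop recheck guards against waking when the condition still doesn't hold):

    import threading

    cond = threading.Condition()    # the monitor's lock plus a condition variable
    ready = False

    def waiter():
        with cond:                  # enter the monitor
            while not ready:        # "if (some condition)" -- rechecked in a loop
                cond.wait()         # call wait() on the monitor; releases the lock
            print("proceeding")     # <<mutex>> -- we hold the lock again here

    def signaler():
        global ready
        with cond:                  # enter the monitor
            ready = True
            cond.notify()           # call signal() on the monitor

    t = threading.Thread(target=waiter)
    t.start()
    threading.Thread(target=signaler).start()
    t.join()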

Page 25: Handling Big Data

States in the Monitor

• Active (running)
• Waiting (blocked, waiting on a condition)

Page 26: Handling Big Data

Examples

Page 27: Handling Big Data

Signals in the Monitor

• When an ACTIVE process issues a signal(), it must allow a blocked process to become active

• This would allow two ACTIVE processes at once, which cannot be permitted in a CR.

• So – the first process that wants to execute the signal() must be active in order to issue the signal(); the signal() will make a waiting process become active.

Page 28: Handling Big Data

Signals

• Two solutions:
• Delay the signal
• Delay the waiting process from becoming active

Page 29: Handling Big Data

Gladiator monitor (Cavers & Brown, 1978)

• Delay the signaled process, signaling process continues

• Create a new state (URGENT) to hold the process that has just been signaled. This signals the process but delays execution of the process just signaled.

• When the signal-er leaves the monitor (or wait()s again), the process in URGENT is allowed to run.

Page 30: Handling Big Data

Mediator (Cavers & Brown adapted from Hoare, 1974)

• Delay the signaling process
• When the process signal()s, it is blocked so the signaled process becomes active right away.
• With this monitor it may be more difficult to get the interaction correct. Be warned, especially if you have loops in your CR.

Page 31: Handling Big Data

Tips for Using Monitors

• Remember that excess signal() instructions don’t matter, so don’t test for them or try to count them.
• Don’t intermix with semaphores.
• Be sure everything shared is declared inside the monitor
• Carefully think about the process ordering (which monitor you wish to use)

Page 32: Handling Big Data

Deadlocks

• Deadlock occurs whenever a transaction T1 holds a lock on an item A and is requesting a lock on an item B, while a transaction T2 holds a lock on item B and is requesting a lock on item A.

• Are T3 and T4 deadlocked here?

    T3                T4
    Lock-X(B)
    Read(B)
    B = B - 50
    Write(B)
                      Lock-S(A)
                      Read(A)
                      Lock-S(B)
    Lock-X(A)

Page 33: Handling Big Data

Deadlock:

• T1 is waiting for T2 to release the lock on X
• T2 is waiting for T1 to release the lock on Y
• Deadlock: a cycle in the wait-for graph

Page 34: Handling Big Data

Two strategies:

• Pessimistic: deadlock will happen and therefore should use “preventive” measures: Deadlock prevention

• Optimistic: deadlock will rarely occur and therefore wait until it happens and then try to fix it. Therefore, need to have a mechanism to “detect” a deadlock: Deadlock detection.

Page 35: Handling Big Data

Deadlock Prevention

• Locks:
– Lock all items before the transaction begins execution
– Either all are locked in one step or none are locked
– Disadvantages:
• Hard to predict what data items need to be locked
• Data-item utilization may be very low

Page 36: Handling Big Data

Detection

• Circular Wait
– Graph the resources. If there is a cycle, you are deadlocked
• No (or only reduced) throughput, because the deadlock may not involve all users
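A minimal detector, assuming the wait-for relation is available as a graph; a depth-first search finds a cycle (the edge data here matches the T3/T4 example a few slides back):

    # Wait-for graph: T3 waits for T4, and T4 waits for T3
    wait_for = {"T3": ["T4"], "T4": ["T3"]}

    def has_cycle(graph):
        visited, on_path = set(), set()
        def dfs(node):
            if node in on_path:        # back edge: a cycle, so deadlock
                return True
            if node in visited:
                return False
            visited.add(node)
            on_path.add(node)
            if any(dfs(n) for n in graph.get(node, [])):
                return True
            on_path.discard(node)
            return False
        return any(dfs(n) for n in graph)

    print(has_cycle(wait_for))         # True: T3 and T4 are deadlocked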

Page 37: Handling Big Data

Deadlock Recovery

• Pick a victim and roll back
– Select a transaction, roll it back, and restart
• What criteria would you use to determine a victim?

Page 38: Handling Big Data

Synchronization is Tricky

• Forgetting to signal or release a semaphore
• Blocking while holding a lock
• Synchronizing on the wrong synchronization mechanism
• Deadlock
• Must use locks consistently, and minimize the amount of shared resources

Page 39: Handling Big Data

Java

• The synchronized keyword
• wait(), notify(), and notifyAll()
• Code examples

Page 40: Handling Big Data

Java Threads

• P1 is in the monitor (synchronized block of code)
• P2 wants to enter the monitor
• P2 must wait until P1 exits
• While P2 is waiting, think of it as “waiting at the gate”
• When P1 finishes, the monitor allows one process waiting at the gate to become active.
• Leaving the gate is not initiated by P2 – it is a side effect of P1 leaving the monitor

Page 41: Handling Big Data

Big Data

Page 42: Handling Big Data

What does “Big Data” mean?

• Most everyone thinks “volume”
• Laney [3] expanded this to include velocity and variety

Page 43: Handling Big Data

Defining “Big Data”

• It’s more than just big – meaning a lot of data
• Can be viewed as 3 issues:
– Volume
• Size
– Velocity
• How quickly it arrives vs. how quickly it can be consumed; response time
– Variety
• Diverse sources, formats, quality, structures

Page 44: Handling Big Data

Specific Problems with Big Data

• I/O Bottlenecks
• The cost of failure
• Resource limitations

Page 45: Handling Big Data

I/O Bottlenecks

• Moore’s Law: Gordon Moore, the co-founder of Intel
• Stated that processor capability roughly doubles every 2 years (often quoted at 18 months)
– Regardless …
• The issue is that I/O, network, and memory speeds have not kept up with processor speeds
• This creates a huge bottleneck

Page 46: Handling Big Data

Other Issues

• What are the restart operations if a thread/processor fails?
– If dealing with “Big Data”, parallelized solutions may not be sufficient because of the high cost of failure
• Distributed systems involve network communication, which brings an entirely different and complex set of problems

Page 47: Handling Big Data

Cost of Failure

• The failure of many jobs is a problem
– Can’t just restart because data has been modified
– Need to roll back and restart
– May require human intervention
– Resource costly (time, lost processor cycles, delayed results)
• This is especially problematic if a process has been running a very long time

Page 48: Handling Big Data

Using a DBMS for Big Data

• Due to the volume of data:
– It may overwhelm a traditional DBMS system
– The data may lack the structure needed to integrate easily into a DBMS system
– The time or cost to clean/prepare the data for use in a traditional DBMS may be prohibitive
– Time may be critical: we need to look at today’s online transactions to know how to run the business tomorrow

Page 49: Handling Big Data

Memory & Network Resources

– Might be too much data to use existing storage or software mechanisms
• Too much data for memory
• Files too large to realistically distribute over a network
– Because of the volume, new approaches are needed

Page 50: Handling Big Data

Would this work?

• Reduce the data
– Dimensionality reduction
– Sampling
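For the sampling idea, one common technique is reservoir sampling, which keeps a fixed-size uniform sample of a stream too large for memory; a sketch (the stream and sample size are illustrative):

    import random

    def reservoir_sample(stream, k):
        sample = []
        for i, item in enumerate(stream):
            if i < k:
                sample.append(item)       # fill the reservoir first
            else:
                j = random.randint(0, i)  # item survives with probability k/(i+1)
                if j < k:
                    sample[j] = item
        return sample

    print(reservoir_sample(range(1_000_000), 10))  # uniform sample of 10 items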

Page 51: Handling Big Data

Weaknesses in Current Architectures

• Monolithic servers scale up
– Large server farms
– Buy more equipment as the load increases
• Distributed systems scale out
– Duplicate data across more than one machine or server
– Remaining efficiency problem: I/O is still the bottleneck because of large file sizes
• What are other issues with these architectures?

Page 52: Handling Big Data

Needed: New Tools and Approaches

• Need tools and architectures that are:
– Able to handle very large amounts of data
– Available and accessible
– Robust
– Simple to use and easy to learn
– Cost effective

Page 53: Handling Big Data

A New Generation of Tools and Technologies

Page 54: Handling Big Data

Hadoop

Page 55: Handling Big Data

Advantages of Hadoop

• Can support very large datasets (multi-terabytes)

• Runs as a cluster using commodity hardware

Page 56: Handling Big Data

Hadoop

• Open source
• Derived from Google’s MapReduce and Google File System (GFS) papers
• We have a Hadoop cluster under development

Page 57: Handling Big Data

File Systems

• Uses a distributed file system for persistent data storage

• More efficient than trying to store one file in one location

• Provides options for recovery when failures occur

Page 58: Handling Big Data

GFS

• Proprietary
• Uses commodity machines, not specialized hardware
• Scalable – easy to increase capacity when needed
• Fault-tolerant

Page 59: Handling Big Data

DFS

• The file is chunked
• A typical chunk is 16-64 MB
• Chunks are replicated across multiple hosts in case of failure
• When distributing chunks, it tries to move copies to different racks (physical locations)

Page 60: Handling Big Data

HDFS

• Hadoop Distributed File System
• DataNodes – communicate with each other for pipelined file reads/writes
• Files on the DataNodes are chunked into blocks
• Copies of blocks appear across several DataNodes (the default is 3)
• The NameNode tracks the DataNodes and the file blocks assigned to each

Page 61: Handling Big Data

How does this differ?

• Suppose you are to count all words and the number of times each occurs
– The file may fit in memory
– The file may be too large for memory, but your data structures to store the word counts may fit in memory
– Neither may fit in memory
• We need some type of parallelized solution – but we previously looked at some associated problems

Page 62: Handling Big Data

Map Reduce

Page 63: Handling Big Data

Map Reduce

• Distributes the work over a large set of computers – divide and conquer
• Has built-in fault tolerance; if one node fails, it detects the failure and sends the work to another node

Page 64: Handling Big Data

Map/Reduce View for Programmers

• Map: Maps input to a key, emitting a temporary (k,v) pair
• Reduce: Receives data arranged by key; you apply whatever you need done to the (k, value-list)
• Example: Counting words. Map each key (word) to a count – e.g. if you are reading a file, tokenize and emit (theWord, 1)
• Reduce: You receive a (key, list-of-counts) – process the list and emit the (key, aggregatedData)

Page 65: Handling Big Data

[Figure – credit: aws.typepad.com]

Page 66: Handling Big Data

Under the Hood

• Reader processes split the file and send the assigned blocks to the worker machines
• A Combiner process may take the Map results and aggregate them to optimize performance
– Example: may aggregate emitted results before sending them out over the network
• A Shuffle and Sort process sits between Map and Reduce to:
– Determine which Reducer should receive each interim result
– Ensure the keys sent to a Reducer are sorted

Page 67: Handling Big Data

Under the Hood [2]

• TaskTrackers run the tasks that perform the MapReduce work
• A JobTracker controls and schedules the TaskTrackers

Page 68: Handling Big Data

Map/Reduce

• You need to program a Map and a Reduce function – plus any other functions you may need for your specific application or problem
– Map: Input is a <k,v> pair; emit intermediate values consisting of 0 or more <k,v> pairs
– Reduce: Input is a <k, list-of-values>
• No data model – data is stored in files

Page 69: Handling Big Data

Example: Word Count

• Suppose we have a very large file of text data
• In the Map phase, we will expect lines of the file
• Remember, any one Map function will not see all the data, only the portion of the file assigned to that node
• How would you construct the Map function?

Page 70: Handling Big Data

Example: Word Count

• How would you construct the Reduce function?
– Remember that Reduce will receive a <key, list-of-values> from all the Mappers
• What will be contained in the list-of-values?
• What should be emitted?
• Think back to the Map phase: how could this be made more efficient?

Page 71: Handling Big Data

In Pseudocode

• Map (key, value)
– For each word w in value (a sentence, paragraph, document – whatever)
• Emit (w, 1)
• Reduce (key, list-of-values)
– For each item in the list, add it to a running total
• Emit (result)

Page 72: Handling Big Data

Real M-R Example in Python

• Expects a JSON-format file where [0] is a book title and [1] is the text
• Map breaks the text up into tokens
• Reduce counts the occurrences in the (key, list) that is provided to the function

Page 73: Handling Big Data

Sample Code

Map:

    def mapper(record):              # assumes the mr framework calls this once per record
        value = record[1]            # the text ([0] is the title – ignored)
        words = value.split()
        for w in words:
            mr.emit_intermediate(w, 1)

Reduce:

    def reducer(key, list_of_values):
        # list_of_values is the argument provided to the function –
        # the counts for each word (a list of ones in this case)
        total = 0
        for v in list_of_values:
            total += v
        mr.emit((key, total))

Page 74: Handling Big Data

Under the Hood (simplified)

• Prior to the Map function, the data was split (chunked) and sent to each of the worker nodes (the user does not see this happen)

• The output of each Map function is grouped by keys (the user does not see this happen either). This grouping is sent to the Reducer as the (key, list of values)

• Another function (not seen by the user) aggregates the results from the Reducer functions and returns to the main program
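Those hidden steps can be mimicked in a few lines of Python; this is a sketch of the data flow (split, map, group by key, reduce), not of how Hadoop actually implements it:

    from collections import defaultdict

    def run_mapreduce(records, mapper, reducer):
        intermediate = defaultdict(list)
        for record in records:                   # "split": one record per mapper call
            for key, value in mapper(record):    # Map phase
                intermediate[key].append(value)  # hidden grouping by key
        # Reduce phase: each key arrives with its full list of values
        return [reducer(key, values) for key, values in intermediate.items()]

    # word count expressed with this harness
    mapper = lambda line: [(w, 1) for w in line.split()]
    reducer = lambda key, values: (key, sum(values))
    print(run_mapreduce(["the dog chased another dog", "the cat"], mapper, reducer))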

Page 75: Handling Big Data

Detecting Failures

• A Master node pings Workers to determine if a Worker node has failed/crashed

• The Master waits for all Worker nodes to complete. If it detects that a Worker has failed or is too slow (bottleneck) it reassigns the work to another Worker node

• It checks for completion and ignores a second Worker reporting completion of the same work (e.g. a Worker was slow and the work was reassigned to another Worker)

Page 76: Handling Big Data

Optimizing

• You can optimize MR jobs by:
– Preprocessing some of the data
– Aggregating results in the Map function
• Example: Map (key, sentence)
• If counting words, emit one count for all occurrences of a word in a sentence instead of emitting for each word
• “The dog chased another dog”
– The file system does support writes – typically done in the Reduce phase
– I have also seen examples where users indicate a single Reducer – all work goes to this node, where post-processing can be done
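A sketch of that Map-side aggregation for the sentence example; collections.Counter does the per-sentence combining, so “dog” is emitted once with a count of 2 (the function name is illustrative):

    from collections import Counter

    def mapper(sentence):
        # combine counts locally before sending anything over the network:
        # "The dog chased another dog" emits ("dog", 2) once, not ("dog", 1) twice
        return list(Counter(sentence.lower().split()).items())

    print(mapper("The dog chased another dog"))
    # [('the', 1), ('dog', 2), ('chased', 1), ('another', 1)]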

Page 77: Handling Big Data

Counting Cars["Mustang", "Joe"]["Firebird", "Sally"]["Mustang", "Fred"]["Cutlass", "Jim"]["Firebird", "Doug"]["Firebird", "Brian"]["Saturn", "Joe"]["Saturn", "Sally"]

Map: car = record[0] owner = record[1] mr.emit_intermediate (car, owner)

Reduce: ownerList = [] for v in list_of_values: ownerList.append (v) mr.emit((key, ownerList))

What is the result?
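Working it through: each car model becomes a key, and Reduce collects its owners, so the output would be one (car, owner-list) pair per model – ("Mustang", ["Joe", "Fred"]), ("Firebird", ["Sally", "Doug", "Brian"]), ("Cutlass", ["Jim"]), and ("Saturn", ["Joe", "Sally"]) – assuming values reach each Reducer in input order.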

Page 78: Handling Big Data

NoSQL

Page 79: Handling Big Data

Traditional DBMS

• Are efficient, reliable, convenient
– Well-understood schemas
– Query language support
– Transaction guarantees
• Enforce security so multiple users can access the data
• Can store massive amounts of persistent data

Page 80: Handling Big Data

Traditional DBMS

• Usually designed from the bottom up
• Generally maintained by a highly skilled DBA
• Changes to the schema must be carefully managed
• As the volume grows, more hardware (scale up) is often needed for support
• Data is accessed through a schema, using a structured query language

Page 81: Handling Big Data

Shortcomings of DBMS

• Not all problems fit into a DBMS data model
• A DBMS may provide more than what we need

Page 82: Handling Big Data

NoSQL

• NoSQL doesn’t quite mean what the name may imply
– It means that a traditional SQL database structure may not work for the current problem or data
• May not have a specific schema
• May have few restrictions on the data model
• Most data is represented as <k,v> pairs

Page 83: Handling Big Data

Why NoSQL

• Data may be “unruly”, unstructured
– May need a flexible schema
• Time
– May need something quick and cheap to set up
• May need to be updated or purged frequently
• May have too much data
– Massive scalability

Page 84: Handling Big Data

Example

• Analyzing web logs
– We want to find all entries for a given user or URL, or within a time window
• Cleaning this data, designing a schema, and loading it into a DBMS is time consuming
– The data may be obsolete tomorrow
• If we don’t use a DBMS, we can parallelize the solution
• If concerns about consistency are relaxed, “close enough” may be “good enough”

Page 85: Handling Big Data

NoSQL: Love it or hate it

• Hate it:
– It lacks structure
– The lack of a query language leaves the data difficult to use unless users have knowledge of other tools
– Not well understood – it has not been around very long
– Takes significant skill to install and maintain (the same is true for a DBMS, but that technology has been around a long time)
– All NoSQL developers are in a “learning mode”
• Love it:
– All open source (which could also be a reason to hate it)
– Don’t need a DBA (cost)
– Flexible
– Quick and cheap to set up

Page 86: Handling Big Data

Non-DBMS Solutions

• Map Reduce is an example of a NoSQL solution
– NoSQL: a SQL-only solution may not fit all problems
• But a NoSQL solution (exclusively, without a DBMS) may not work either
– May miss the structure of a schema and the ability to perform SQL-like queries

Page 87: Handling Big Data

Other non-DBMS (NoSQL) Approaches

• Found that the lack of a schema was limiting for some problems
• Some problems are made more difficult because there is no schema
• Pig and Hive

Page 88: Handling Big Data

Hive

• Supports a schema
• A SQL-like query language
• Compiles to a workload of Hadoop (M/R) jobs

Page 89: Handling Big Data

Pig

• Supports relational operators
• Also compiles to a workload of Hadoop (M/R) jobs

Page 90: Handling Big Data

Limitations of Hadoop

• Not everything is fault-tolerant
– Prior to the second version, Hadoop used a single-master model
– No redundancy if the master fails
• Security
– Most services are not protected
– Malicious users can subvert the system and assume identities
– Any user can kill another user’s jobs

Page 91: Handling Big Data

References & Credits

[1] Chuck Lam, Hadoop in Action. Manning Publications, 2010.
[2] Aaron Kimball, video lectures at Google and the University of Washington.
[3] Doug Laney, “3D Data Management: Controlling Data Volume, Velocity, and Variety.”
[4] Alex Holmes, Hadoop in Practice. Manning Publications.
[5] Jennifer Widom, Stanford University.