PIG Interview Questions

Uploaded by spsoftspsoft, 03-Jun-2018

  • 8/11/2019 PIG Interview Questions

    PIG INTERVIEW QUESTIONS

    1.

    Can you give some examples of how Hadoop is used in a real-time environment?

    Let us assume that we have an exam consisting of 10 Multiple-choice questions and 20 students appear

    for that exam. Every student will attempt each question. For each question and each answer option, a

    key will be generated. So we have a set of key-value pairs for all the questions and all the answer

    options for every student. Based on the options that the students have selected, you have to analyze

    and find out how many students have answered correctly. This isn't an easy task. This is where Hadoop

    comes into the picture. Hadoop helps you solve these problems quickly and without much effort. You may

    also take the case of how many students have wrongly attempted a particular question.

    2.

    What is BloomMapFile used for?

    The BloomMapFile is a class that extends MapFile. So its functionality is similar to MapFile.

    BloomMapFile uses dynamic Bloom filters to provide a quick membership test for the keys. It is used in

    the HBase table format.

    3.

    What is PIG?

    PIG is a platform for analyzing large data sets. It consists of a high-level language for expressing data

    analysis programs, coupled with infrastructure for evaluating these programs. Pig's infrastructure

    layer consists of a compiler that produces sequences of MapReduce programs. There are four stages in a

    Pig query (Pig Latin):

    Load

    Transform (Group/Filter)

    Dump

    Store

    A sample Pig query looks like this.

    -- loading persons data

    persons = LOAD 'PersonData' USING PigStorage(' ') AS (personId:int,personName:chararray,city:chararray);

    -- loading orders data

    orders = LOAD 'OrderData' USING PigStorage(' ') AS (orderId:int,orderNumber:int,personId:int);

    -- join the relations persons and orders on the personId column

    result = JOIN persons BY personId FULL OUTER, orders BY personId;

    DUMP result;

    4.

    What is the difference between logical and physical plans?

    Pig undergoes some steps when a Pig Latin script is converted into MapReduce jobs. After performing the basic parsing and semantic checking, it produces a logical plan. The logical plan describes the

    logical operators that have to be executed by Pig during execution. After this, Pig produces a physical

    plan. The physical plan describes the physical operators that are needed to execute the script.

    #-----------------------------------------------

    # New Logical Plan:


    #-----------------------------------------------

    orders: (Name: LOStore Schema: orderid#250:int,orderqnt#251:int,personId#252:int)

    |

    |---orders: (Name: LOForEach Schema: orderid#250:int,orderqnt#251:int,personId#252:int)

    | |

    | (Name: LOGenerate[false,false,false] Schema: orderid#250:int,orderqnt#251:int,personId#252:int)

    | ColumnPrune:InputUids=[252, 250, 251] ColumnPrune:OutputUids=[252, 250, 251]

    | | |

    | | (Name: Cast Type: int Uid: 250)

    | | |

    | | |---orderid:(Name: Project Type: bytearray Uid: 250 Input: 0 Column: (*))

    | | |

    | | (Name: Cast Type: int Uid: 251)

    | | |

    | | |---orderqnt:(Name: Project Type: bytearray Uid: 251 Input: 1 Column: (*))

    | | |

    | | (Name: Cast Type: int Uid: 252)

    | | |

    | | |---personId:(Name: Project Type: bytearray Uid: 252 Input: 2 Column: (*))

    | |

    | |---(Name: LOInnerLoad[0] Schema: orderid#250:bytearray)

    | |

    | |---(Name: LOInnerLoad[1] Schema: orderqnt#251:bytearray)

    | |

    | |---(Name: LOInnerLoad[2] Schema: personId#252:bytearray)

    |

    |---orders: (Name: LOLoad Schema: orderid#250:bytearray,orderqnt#251:bytearray,personId#252:bytearray) RequiredFields:null

    # Physical Plan:

    #-----------------------------------------------

    orders: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-312

    |

    |---orders: New For Each(false,false,false)[bag] - scope-311

    | |

    | Cast[int] - scope-303

    | |

    | |---Project[bytearray][0] - scope-302

    | |

    | Cast[int] - scope-306

    | |

    | |---Project[bytearray][1] - scope-305

    | |


    | Cast[int] - scope-309

    | |

    | |---Project[bytearray][2] - scope-308

    |

    |---orders: Load(hdfs://localhost:54310/user/training/PigScript1/OrderData:PigStorage(' ')) - scope-301

    5.

    Does ILLUSTRATE run MR job?

    No, ILLUSTRATE doesn't run a MapReduce job. It just shows how the data would look when the

    MapReduce job is run. ILLUSTRATE will not launch any MR job; it works on the internal sample data. On the

    console, it simply shows the output of each stage, not the final output.

    6. Is the keyword DEFINE like a function name?

    Yes, the keyword DEFINE is like a function name. Once you have registered a jar, you have to define it.

    Whatever logic you have written in your Java program, you have an exported jar and also a jar registered by

    you. Now the compiler will check for the function in the exported jar. When the function is not present in the

    library, it looks into your jar.

    7.

    Is the keyword FUNCTIONAL a User Defined Function (UDF)?

    No, the keyword FUNCTIONAL is not a User Defined Function (UDF). While using UDF, we have

    to override some functions. Certainly you have to do your job with the help of these functions only.

    But the keyword FUNCTIONAL is a built-in function, i.e., a pre-defined function; therefore it does not

    work as a UDF.

    8.

    Why do we need MapReduce during Pig programming?

    Pig is a high-level platform that makes many Hadoop data analysis issues easier to execute. The

    language we use on this platform is Pig Latin. A program written in Pig Latin is like a query written

    in SQL, where we need an execution engine to execute the query. So, when a program is written in Pig

    Latin, Pig compiler will convert the program into MapReduce jobs. Here, MapReduce acts as the

    execution engine.

    9.

    Are there any problems which can only be solved by MapReduce and cannot be solved by PIG?

    In which kind of scenarios MR jobs will be more useful than PIG?

    Let us take a scenario where we want to count the population in two cities. I have a data set and sensor

    list of different cities. I want to count the population by using one MapReduce job for two cities. Let us

    assume that one is Bangalore and the other is Noida. So I need to consider the key of Bangalore city as

    similar to Noida, through which I can bring the population data of these two cities to one reducer. The

    idea behind this is that somehow I have to instruct the MapReduce program: whenever you find a city with

    the name Bangalore and a city with the name Noida, create an alias name which will be the

    common name for these two cities, so that you create a common key for both cities and it gets

    passed to the same reducer. For this, we have to write a custom partitioner.


    In MapReduce, when you create a key for a city, you have to consider the city as the key. So, whenever

    the framework comes across a different city, it considers it as a different key. Hence, we need to use a

    customized partitioner. There is a provision in MapReduce only, where you can write your custom

    partitioner and specify: if city = Bangalore or Noida, then pass a similar hashcode. However, we cannot

    create a custom partitioner in Pig. As Pig is not a framework, we cannot direct the execution engine to

    customize the partitioner. In such scenarios, MapReduce works better than Pig.

    10.

    Does Pig give any warning when there is a type mismatch or missing field?

    No, Pig will not show any warning if there is a missing field or a type mismatch. Even if Pig

    gave such a warning, it would be difficult to find in the log file. If any mismatch is found, Pig assumes a

    null value.

    11. What does co-group do in Pig?

    Co-group joins the data sets by grouping each data set individually. It groups the elements by their

    common field and then returns a set of records containing two separate bags. The first bag consists of

    the records of the first data set with the common field and the second bag consists of the records of

    the second data set with the common field.
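    As a minimal sketch (the file names and fields here are assumed for illustration), a co-group of two relations on a common field might look like:

    owners = LOAD 'owners.txt' USING PigStorage('\t') AS (owner:chararray, animal:chararray);
    pets = LOAD 'pets.txt' USING PigStorage('\t') AS (animal:chararray, petname:chararray);
    -- each result tuple: (animal, {bag of matching owners tuples}, {bag of matching pets tuples})
    grouped = COGROUP owners BY animal, pets BY animal;
    DUMP grouped;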

    12.

    Can we say cogroup is a group of more than 1 data set?

    Cogroup can group a single data set. But in the case of more than one data set, cogroup will group all

    the data sets and join them based on the common field. Hence, we can say that cogroup is a group of

    more than one data set and a join of those data sets as well.

    13.

    What does FOREACH do?

    FOREACH is used to apply transformations to the data and to generate new data items. The name

    itself is indicating that for each element of a data bag, the respective action will be performed.

    Syntax: FOREACH bagname GENERATE expression1, expression2, ...;

    The meaning of this statement is that the expressions mentioned after GENERATE will be applied to

    the current record of the data bag.
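    For instance, assuming a relation loaded from a hypothetical tab-delimited file 'employees.txt', a FOREACH might look like:

    employees = LOAD 'employees.txt' USING PigStorage('\t') AS (name:chararray, salary:int);
    -- project the name and derive a new field from each record
    with_bonus = FOREACH employees GENERATE name, salary * 0.1 AS bonus;
    DUMP with_bonus;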

    14.

    What is bag?

    A bag is one of the data models present in Pig. It is an unordered collection of tuples with possible

    duplicates. Bags are used to store collections while grouping. The size of a bag is bounded by the size of

    the local disk, which means the size of a bag is limited. When a bag is full, Pig will spill the bag to

    local disk and keep only some parts of the bag in memory. There is no necessity that the complete bag

    should fit into memory. We represent bags with {}.
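    A bag is what GROUP produces for each key; a sketch (file and field names assumed):

    students = LOAD 'students.txt' USING PigStorage('\t') AS (name:chararray, city:chararray);
    by_city = GROUP students BY city;
    -- each tuple has the form (city, {bag of matching student tuples})
    DUMP by_city;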

    15.

    Why Pig ?

    i)Ease of programming


    ii)Optimization opportunities.

    iii)Extensibility

    i) Ease of programming :-

    It is trivial to achieve parallel execution of simple, embarrassingly parallel data analysis tasks.

    Complex tasks comprised of multiple interrelated data transformations are explicitly encoded as dataflow sequences, making them easy to write, understand, and maintain.

    ii) Optimization opportunities :-

    The way in which tasks are encoded permits the system to optimize their execution automatically,

    allowing the user to focus on semantics rather than efficiency.

    iii) Extensibility :-

    Users can create their own functions to do special-purpose processing.

    16.

    Advantages of Using Pig ?

    i) Pig can be treated as a higher level language

    a) Increases Programming Productivity

    b) Decreases duplication of Effort

    c) Opens the M/R programming system to more users

    ii) Pig Insulates against hadoop complexity

    a) Hadoop version Upgrades

    b) Job configuration tuning

    17.

    Pig Features ?

    i) Data Flow Language

    User Specifies a Sequence of Steps where each step specifies only a single high-level data

    transformation.

    ii) User Defined Functions (UDF)

    iii)Debugging Environment

    iv) Nested data Model

    18.Explain Pig Structure In Brief ?


    19. Explain Logical Plan, Physical Plan, MapReduce Plan?

    20.

    Difference Between Pig and SQL ?

    Pig is procedural; SQL is declarative.

    Pig has a nested relational data model; SQL has a flat relational model.

    In Pig, schema is optional; in SQL, a schema is required.

    Pig works for OLAP; SQL supports both OLAP and OLTP workloads.

    Pig has limited query optimization; SQL has significant opportunity for query optimization.


    21.

    What Is Difference Between Mapreduce and Pig ?

    In MR, you need to write the entire logic for operations like join, group, filter, sum, etc.

    In Pig, built-in functions are available.

    In MR, the number of lines of code required is too high even for simple functionality.

    In Pig, 10 lines of Pig Latin are roughly equal to 200 lines of Java.

    In MR, the time and effort spent on coding is high.

    In Pig, what took 4 hours to write in Java takes about 15 minutes in Pig Latin (approx.).

    In MR: less productivity. In Pig: high productivity.

    22.

    Data Types In Pig ?

    Normal Data Types      Pig Data Types

    int                    int

    String                 chararray

    float                  float

    long                   long

    double                 double

    boolean                boolean

    23.

    Custom Data types in Pig ?

    Atom, Tuple, Bag (also called Relation or Outer Bag), and Inner Bag

    24.

    What is Atom ?

    The smallest individual unit in Apache Pig (each and every character)

    Ex :- H A D O O P

    25.

    What is Tuple ?

    Collection of all the ordered fields

    Ex :- 100 rajesh syntel

    26.

    What is Bag or Relation or Outer Bag ?

    Collection of all the tuples is a bag

    Ex:- 100 rajesh 30000

    101 rakesh 40000

    27.

    What is Inner Bag ?

    A subset of the outerbag

    Ex :- 100 rajesh

    101 rakesh


    28.

    Atomic Data Type Values In Pig ?

    29.

    What Are Pig Data Type Implementation Classes?

    30.

    Some Built In Commands In Pig.


    31.

    Explain About Eval Pig Built In Functions?

    32.

    Explain About Expression Functions In Pig ?


    33.

    Pig Data Type Vs Java Data Type ?

    34.

    What does LOAD do ?

    Loads data from the file system

    35.What does FOREACH do ?

    Generates data transformations based on columns of data.

    36.

    What does DUMP do ?

    Displays the output data temporarily on the screen. If we close the terminal, the output data will be lost.

    37.

    What does STORE do ?

    Stores or saves results to the file system permanently.

    38.What does FILTER do ?

    Selects tuples from a relation based on some condition.
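    A minimal sketch (file and field names assumed):

    employees = LOAD 'employees.txt' USING PigStorage('\t') AS (name:chararray, salary:int);
    -- keep only the tuples whose salary exceeds 30000
    well_paid = FILTER employees BY salary > 30000;
    DUMP well_paid;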

    39.

    What does LIMIT do ?

    Limits the number of output tuples.

    40.

    What does ORDER BY do ?

    Sorts a relation based on one or more fields.
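    ORDER BY is often combined with LIMIT for a top-N query; a sketch (file and field names assumed):

    employees = LOAD 'employees.txt' USING PigStorage('\t') AS (name:chararray, salary:int);
    sorted = ORDER employees BY salary DESC;
    -- the three highest-salary tuples
    top3 = LIMIT sorted 3;
    DUMP top3;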

    41.

    What does GROUP do ?

    Groups the data in one or more relations.


    42.

    What does SPLIT do ?

    Partitions a relation into two or more relations.
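    A minimal sketch (file, fields, and threshold assumed):

    employees = LOAD 'employees.txt' USING PigStorage('\t') AS (name:chararray, salary:int);
    SPLIT employees INTO low IF salary < 30000, high IF salary >= 30000;
    DUMP high;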

    43.What does GENERATE do ?

    Generates output values from the given columns.

    44.

    What does DISTINCT do ?

    Removes duplicate tuples in a relation.

    45.

    What does COGROUP do ?

    COGROUP has the same functionality as GROUP, but compared to GROUP, COGROUP is faster.

    46.

    What does JOIN (inner) do ?

    Performs an inner join of two or more relations based on common field values.
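    A minimal inner-join sketch (file and field names assumed):

    persons = LOAD 'PersonData' USING PigStorage(' ') AS (personId:int, personName:chararray);
    orders = LOAD 'OrderData' USING PigStorage(' ') AS (orderId:int, personId:int);
    -- only personId values present in both relations survive an inner join
    joined = JOIN persons BY personId, orders BY personId;
    DUMP joined;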


    50.

    What does JOIN (outer) do ?

    Performs an outer join of two or more relations based on common field values.

    51.

    What does Union do ?

    Computes the union of two or more relations.

    52.

    What does AVG do ?

    Computes the average of the numeric values in a single-column bag.

    53.

    What does SUM do ?

    Use the SUM function to compute the sum of a set of numeric values in a single-column bag.
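    These aggregate functions are applied to a bag, typically after a GROUP; a sketch (file and field names assumed):

    employees = LOAD 'employees.txt' USING PigStorage('\t') AS (name:chararray, salary:int);
    -- GROUP ... ALL yields a single group containing every tuple
    all_emps = GROUP employees ALL;
    stats = FOREACH all_emps GENERATE AVG(employees.salary), SUM(employees.salary), COUNT(employees);
    DUMP stats;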


    54.

    What does MAX do ?

    Use the MAX function to compute the maximum of the numeric values or chararrays in a single-

    column bag.

    55.What does MIN do?

    Use the MIN function to compute the minimum of a set of numeric values or chararrays in a single-

    column bag.


    57.

    What does COUNT do?

    Computes the number of elements in a bag.

    58.What does CONCAT do?

    Concatenates two expressions of identical type.

    59.

    What does DESCRIBE do?

    Returns the schema of a relation.

    60.

    What does TOKENIZE do?

    Splits a string and outputs a bag of words.
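    TOKENIZE is commonly paired with FLATTEN for word counting; a sketch (the input file name is assumed):

    lines = LOAD 'input.txt' AS (line:chararray);
    -- TOKENIZE turns each line into a bag of words; FLATTEN un-nests that bag into rows
    words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    by_word = GROUP words BY word;
    counts = FOREACH by_word GENERATE group, COUNT(words);
    DUMP counts;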

    61.What does EXPLAIN do?

    Displays execution plans.

    62.

    What does ILLUSTRATE do?

    Displays a step-by-step execution of a sequence of statements.

    63.

    What does Flatten do ?

    Flatten un-nests tuples as well as bags.

    64.What does CROSS do?

    Computes the cross product of two or more relations.
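    A minimal sketch (file names assumed):

    a = LOAD 'a.txt' AS (x:int);
    b = LOAD 'b.txt' AS (y:int);
    -- every tuple of a is paired with every tuple of b
    ab = CROSS a, b;
    DUMP ab;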

    65.

    What does SPLIT do?


    Partitions a relation into two or more relations.

    66.

    What does PigStorage do?

    Loads and stores data as structured text files.

    67.Pig Execution Types?

    There are three flavors of Pig execution:

    i)Grunt Shell Mode

    ii)Script Mode

    iii)Embedded Mode

    Grunt Shell Mode:

    i) Grunt Shell is the default mode of Pig Execution.

    ii) In this mode we have to write the script at the Grunt prompt only.

    iii) We can't save the script in this mode, but we can save the output of the script.

    Example :-

    grunt> A = LOAD 'file.txt' USING PigStorage(' ');

    grunt> B = FOREACH A GENERATE $0 + $1;

    grunt> STORE B INTO 'output';

    Script Mode:

    In this mode we write the script in a separate file in .pig format. We write the whole script

    in one file. The script file name should end with the .pig extension.

    Example :-

    pig -x local filename.pig


    Embedded Mode:

    Some built-in commands are available in the Pig language. If we cannot achieve

    the desired functionality using these built-in operations, we can go ahead with Apache Pig

    User Defined Functions (UDFs). This type of functionality is possible only through embedded mode.

    68.Different Modes of Execution In Pig ?

    You can run Pig (execute Pig Latin statements and Pig commands) in various modes.

    There are two types of modes:

    i) Interactive Mode

    ii) Batch Mode

    Interactive Mode is nothing but the Grunt shell mode, started in either of:

    i) pig -x local

    ii) pig

    Batch Mode is nothing but the script mode, started in either of:

    i) pig -x local filename.pig

    ii) pig filename.pig

    Local Mode

    In local mode of execution, Pig will take the input data from local paths only (not from HDFS paths),

    and after processing the data, the output is also stored in local paths.

    Syntax :-


    pig -x local filename.pig (script mode)

    pig -x local (grunt mode)

    HDFS Mode/MR Mode/Clustered Mode

    In this mode Pig will take all input from HDFS paths only (not from local paths), and

    Pig will store the output data on HDFS paths only.

    Syntax :-

    i) Script Mode :-

    pig file.pig (or) pig -x mapred file.pig (or) pig -x mapreduce file.pig

    ii) Grunt Mode :-

    pig (or) pig -x mapred (or) pig -x mapreduce