CCA Spark and Hadoop certification on 2nd February 2016 with 9 out of 10 questions



    I cleared the CCA Spark and Hadoop certification on 2nd February 2016 with 9 out of 10 questions.

    If you are expecting an exam dump, please close this page.

    1. Answering the questions is easy, but doing it the right way is the trick.

    2. Before answering any question, read it thoroughly and repeatedly.

    3. Do not be in a hurry to answer the questions.

    4. Prepare according to the skills mentioned on the Cloudera site; this will be enough to answer the questions.

    Cloudera

    Data Ingest

    The skills to transfer data between external systems and your cluster. This includes the following:

    Import data from a MySQL database into HDFS using Sqoop
    Export data to a MySQL database from HDFS using Sqoop
    Change the delimiter and file format of data during import using Sqoop
    Ingest real-time and near-real-time (NRT) streaming data into HDFS using Flume
    Load data into and out of HDFS using the Hadoop File System (FS) commands

    Transform, Stage, Store
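    A minimal command sketch of the ingest skills above. It assumes a cluster is available; the hostname `quickstart`, database `retail_db`, tables, paths, and credentials are placeholders, not values from the exam.

```shell
# Import a MySQL table into HDFS with Sqoop, changing the delimiter
# and file format at import time (host, db, and credentials are placeholders)
sqoop import \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_user --password-file /user/cloudera/.mysql.pwd \
  --table orders \
  --target-dir /user/cloudera/orders \
  --fields-terminated-by '\t' \
  --as-avrodatafile

# Export HDFS data back into a MySQL table with Sqoop
sqoop export \
  --connect jdbc:mysql://quickstart:3306/retail_db \
  --username retail_user --password-file /user/cloudera/.mysql.pwd \
  --table order_totals \
  --export-dir /user/cloudera/order_totals \
  --input-fields-terminated-by '\t'

# Load data into and out of HDFS with the FS commands
hdfs dfs -put local_orders.csv /user/cloudera/staging/
hdfs dfs -get /user/cloudera/order_totals/part-r-00000 ./results.txt

# Start a Flume agent for NRT ingest (agent name and config file are placeholders)
flume-ng agent --name a1 --conf ./conf --conf-file nrt-ingest.conf
```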

    Convert a set of data values in a given format stored in HDFS into new data values and/or a new data format and write them into HDFS. This includes writing Spark applications in both Scala and Python:

    Load data from HDFS and store results back to HDFS using Spark
    Join disparate datasets together using Spark
    Calculate aggregate statistics (e.g., average or sum) using Spark
    Filter data into a smaller dataset using Spark
    Write a query that produces ranked or sorted data using Spark

    Data Analysis
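    The Spark skills above can be sketched in one short PySpark job. This is an illustrative sketch, not an exam task: it assumes a Spark installation and tab-delimited HDFS files with hypothetical columns orders(order_id, status) and order_items(order_id, subtotal).

```python
from pyspark import SparkContext

sc = SparkContext(appName="cca-practice")

# Load data from HDFS (paths and columns are assumptions)
orders = sc.textFile("/user/cloudera/orders") \
           .map(lambda line: line.split("\t")) \
           .map(lambda f: (int(f[0]), f[1]))         # (order_id, status)
items = sc.textFile("/user/cloudera/order_items") \
          .map(lambda line: line.split("\t")) \
          .map(lambda f: (int(f[0]), float(f[1])))   # (order_id, subtotal)

# Filter into a smaller dataset
complete = orders.filter(lambda kv: kv[1] == "COMPLETE")

# Join the disparate datasets on order_id
joined = complete.join(items)            # (order_id, (status, subtotal))

# Aggregate statistics: total revenue per order
totals = joined.mapValues(lambda v: v[1]).reduceByKey(lambda a, b: a + b)

# Ranked/sorted output: orders by revenue, highest first
ranked = totals.sortBy(lambda kv: kv[1], ascending=False)

# Store results back to HDFS
ranked.map(lambda kv: "%d\t%.2f" % kv) \
      .saveAsTextFile("/user/cloudera/order_totals")
```

    The same pipeline can be written in Scala with the identical RDD operations (`filter`, `join`, `reduceByKey`, `sortBy`); the exam at the time allowed either language.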

    Use Data Definition Language (DDL) to create tables in the Hive metastore for use by Hive and Impala.

    Read and/or create a table in the Hive metastore in a given schema
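    Creating a table in the Hive metastore for a given schema is plain DDL; a hedged HiveQL sketch (database, table, and column names are hypothetical) might look like:

```sql
-- Managed table in a given schema (names are placeholders)
CREATE DATABASE IF NOT EXISTS retail;

CREATE TABLE IF NOT EXISTS retail.orders (
  order_id     INT,
  order_date   STRING,
  order_status STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- External table pointing at data already staged in HDFS,
-- readable by both Hive and Impala once the metastore is updated
CREATE EXTERNAL TABLE IF NOT EXISTS retail.orders_raw (
  order_id     INT,
  order_status STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/staging/orders';
```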

    Extract an Avro schema from a set of data files using avro-tools
    Create a table in the Hive metastore using the Avro file format and an external schema file
    Improve query performance by creating partitioned tables in the Hive metastore
    Evolve an Avro schema by changing JSON files.
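    Avro schema evolution from the list above is just editing the schema JSON (`.avsc`). A sketch in plain Python: the starting schema is hypothetical, as if extracted with `avro-tools getschema <file>.avro > orders.avsc`, and a new field is added with a default so older data files stay readable under Avro's schema-resolution rules.

```python
import json

# Hypothetical schema, as avro-tools getschema would emit it
schema = {
    "type": "record",
    "name": "orders",
    "fields": [
        {"name": "order_id", "type": "int"},
        {"name": "order_status", "type": "string"},
    ],
}

# Evolve the schema: a new field must carry a default value,
# otherwise records written with the old schema cannot be read
schema["fields"].append(
    {"name": "order_channel", "type": ["null", "string"], "default": None}
)

evolved = json.dumps(schema, indent=2)
print(evolved)  # write this back to the external .avsc file in HDFS
```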