
Page 1: Efficient Data Storage for Analytics with Parquet 2.0 - Hadoop Summit 2014

Efficient Data Storage for Analytics with Apache Parquet 2.0

Julien Le Dem @J_

Processing tools tech lead, Data Platform at Twitter

Nong Li [email protected]

Software engineer, Cloudera Impala

@ApacheParquet

Page 2

Outline

- Why we need efficiency
- Properties of efficient algorithms
- Enabling efficiency
- Efficiency in Apache Parquet

Page 3

Why we need efficiency

Page 4

Producing a lot of data is easy

Producing a lot of derived data is even easier. Solution: compress all the things!

Page 5

Scanning a lot of data is easy

[Progress bar: 1% completed]

… but not necessarily fast. Waiting is not productive. We want faster turnaround. Compression, but not at the cost of reading speed.

Page 6

Trying new tools is easy

[Diagram: storage at the center of many workloads: ETL, log collection, ad-hoc queries, automated dashboards, machine learning, graph processing, external data sources and schema definition, …]

We need a storage format that is interoperable with all the tools we use and keeps our options open for the next big thing.

Page 7

Enter Apache Parquet

Page 8

Parquet design goals

- Interoperability

- Space efficiency

- Query efficiency

Page 9

Parquet timeline

- Fall 2012: Twitter & Cloudera merge efforts to develop columnar formats

- March 2013: OSS announcement; Criteo signs on for Hive integration

- July 2013: 1.0 release. 18 contributors from more than 5 organizations.

- May 2014: Apache Incubator. 40+ contributors, 18 with 1000+ LOC. 26 incremental releases.

- Parquet 2.0 coming as an Apache release

Page 10

Interoperability

Page 11

Interoperable

[Diagram: Parquet is model agnostic and language agnostic. Object models (Avro, Thrift, Protocol Buffers, Pig Tuple, Hive SerDe) are mapped to the Parquet file format by converters (parquet-avro, parquet-thrift, parquet-proto, parquet-pig, parquet-hive) through assembly/striping; column encoding belongs to the format itself; query engines such as Impala (C++) execute directly against the format. Implementations exist in Java and C++.]

Page 12

Frameworks and libraries integrated with Parquet

Query engines: Hive, Impala, HAWQ, IBM Big SQL, Drill, Tajo, Pig, Presto

Frameworks: Spark, MapReduce, Cascading, Crunch, Scalding, Kite

Data models: Avro, Thrift, Protocol Buffers, POJOs

Page 13

Enabling efficiency

Page 14

Columnar storage

[Diagram: a logical table with columns a, b, c and rows 1 to 5 (possibly obtained by flattening a nested schema). The row layout stores a1 b1 c1 a2 b2 c2 … a5 b5 c5; the column layout stores a1 a2 a3 a4 a5, then b1 … b5, then c1 … c5, and each column is then encoded into chunks.]
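To make the two layouts concrete, here is a minimal Java sketch (illustrative types, not from the slides): grouping a column's values together is what makes the encodings shown later (delta, dictionary, RLE) effective.

    // Row layout: the values of one row are adjacent (a1 b1 c1 a2 b2 c2 ...).
    class RowLayout {
        long a; long b; long c;        // one object per row
    }

    // Column layout: the values of one column are adjacent (a1 a2 a3 a4 a5 ...),
    // so scanning column a touches contiguous, similar data.
    class ColumnLayout {
        long[] a = new long[5];
        long[] b = new long[5];
        long[] c = new long[5];
    }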

Page 15

Parquet nested representation

[Schema tree: Document → DocId, Links (Backward, Forward), Name (Language (Code, Country), Url)]

Columns: docid links.backward links.forward name.language.code name.language.country name.url

Schema:

  message Document {
    required int64 DocId;
    optional group Links {
      repeated int64 Backward;
      repeated int64 Forward;
    }
    repeated group Name {
      repeated group Language {
        required string Code;
        optional string Country;
      }
      optional string Url;
    }
  }

Borrowed from the Google Dremel paper

https://blog.twitter.com/2013/dremel-made-simple-with-parquet

Page 16

Statistics for filter and query optimization

Vertical partitioning (projection push down) + horizontal partitioning (predicate push down) = read only the data you need!

[Diagram: the same table shown three times: projection selects a subset of columns, the predicate selects a subset of rows, and combined they select only the needed cells.]
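As a hedged sketch of how the statistics enable this (illustrative names, not Parquet's API): each column chunk carries min/max statistics in its metadata, and a chunk whose range cannot satisfy the predicate is skipped without being read.

    // Per-chunk statistics, as used for predicate push down (sketch).
    final class ChunkStats {
        final long min, max;           // stored in the chunk metadata

        ChunkStats(long min, long max) { this.min = min; this.max = max; }

        // If the wanted value falls outside [min, max], the whole chunk
        // can be skipped without decoding any of its pages.
        boolean mightContain(long wanted) {
            return wanted >= min && wanted <= max;
        }
    }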

Page 17

Properties of efficient algorithms

Page 18

CPU Pipeline

[Diagram: instructions flowing through a four-stage pipeline (stages a, b, c, d) over clock cycles. Ideal case: the stages of consecutive instructions overlap and one instruction completes per cycle. A mis-predicted branch flushes the pipeline, leaving a "bubble" of idle cycles before it refills.]

Page 19

Optimize for the processor pipeline

"Bubbles" can be caused by:

- ifs
- loops
- virtual calls
- data dependencies

A bubble costs ~ 12 cycles.
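A classic illustration (not from the slides): the branch below is nearly free on sorted input, where the predictor guesses right, and pays a pipeline bubble on random input, where it mispredicts about half the time.

    static long sumPositives(int[] data) {
        long sum = 0;
        for (int v : data) {
            if (v >= 0) {              // predictable on sorted data,
                sum += v;              // ~50% mispredicted on shuffled data
            }
        }
        return sum;
    }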

Page 20

Minimize CPU cache misses

A cache miss costs 10 to 100s of cycles, depending on the level.

[Diagram: CPU cache, bus, RAM]

Page 21

Encodings in Apache Parquet 2.0

Page 22

The right encoding for the right job

- Delta encodings: for sorted datasets, or signals where the variation is small compared to the absolute value (timestamps, auto-generated ids, metrics, …). Focuses on avoiding branching.

- Prefix coding (a delta encoding for strings): when dictionary encoding does not work.

- Dictionary encoding: for a small (60k) set of distinct values (server IP, experiment id, …).

- Run Length Encoding: for repetitive data (see the sketch below).
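For illustration, a minimal run-length encoder in Java (conceptual only; Parquet's actual RLE format is a run-length / bit-packing hybrid):

    import java.util.ArrayList;
    import java.util.List;

    // Collapse repeated values into (count, value) pairs.
    static List<long[]> rleEncode(long[] values) {
        List<long[]> runs = new ArrayList<>();
        int i = 0;
        while (i < values.length) {
            int start = i;
            while (i < values.length && values[i] == values[start]) i++;
            runs.add(new long[] { i - start, values[start] });  // (count, value)
        }
        return runs;
    }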

Page 23

Delta encoding

values: a reference value followed by two blocks of 8 × 64-bit values (64 bytes per block):

  reference: 100
  block 1: 101 101 102 101 99 99 100 99
  block 2: 100 105 107 114 116 119 120 121

deltas: each value is replaced by its difference from the previous value; only the reference is stored as-is:

  block 1 deltas: 1 0 1 -1 -2 0 1 -1
  block 2 deltas: 1 5 2 7 2 3 1 1

Page 24

Delta encoding

make the deltas ≥ 0 by subtracting each block's minimum delta:

  block 1: min delta = -2 → adjusted deltas: 3 2 3 1 0 2 3 1
  block 2: min delta = 1 → adjusted deltas: 0 4 1 6 1 2 0 0

Page 25

Delta encoding

bit-pack each block of adjusted deltas at that block's maximum bit width:

  block 1: maxbits = 2 → 11 10 11 01 00 10 11 01 → 1110110110110100 (8 × 2 bits = 2 bytes)
  block 2: maxbits = 3 → 000 100 001 110 001 010 000 000 → 000100001110001010000000 (8 × 3 bits = 3 bytes)

result: 100 | -2, 2, 1110110110110100 | 1, 3, 000100001110001010000000
(the reference, then per block: min delta, bit width, packed deltas)

Page 26

Delta encoding

(Recap of the previous three slides: values → deltas → subtract each block's min delta → bit-pack. Two blocks of 8 × 64-bit values, 2 × 64 bytes, shrink to the reference plus two small block headers and 2 + 3 bytes of packed deltas.)
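Putting the three steps together, a minimal Java sketch of encoding one block (illustrative only, not Parquet's actual on-disk layout; it writes a raw 9-byte header and assumes a bit width ≤ 56 to keep the packing loop simple):

    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;

    static void encodeBlock(long previous, long[] block, ByteArrayOutputStream out) {
        // 1. deltas from the previous value, tracking the minimum
        long[] deltas = new long[block.length];
        long min = Long.MAX_VALUE;
        for (int i = 0; i < block.length; i++) {
            deltas[i] = block[i] - (i == 0 ? previous : block[i - 1]);
            min = Math.min(min, deltas[i]);
        }
        // 2. max bit width of the adjusted deltas (branch-free OR trick, next slide)
        long or = 0;
        for (long d : deltas) or |= (d - min);
        int width = 64 - Long.numberOfLeadingZeros(or | 1);
        // block header: min delta and bit width (raw 8 + 1 bytes in this sketch)
        ByteBuffer header = ByteBuffer.allocate(9).putLong(min).put((byte) width);
        out.write(header.array(), 0, 9);
        // 3. bit-pack (delta - min delta) at `width` bits per value, MSB first
        long buffer = 0;
        int filled = 0;
        for (long d : deltas) {
            buffer = (buffer << width) | (d - min);
            filled += width;
            while (filled >= 8) {                   // flush whole bytes
                out.write((int) (buffer >>> (filled - 8)));
                filled -= 8;
            }
        }
        if (filled > 0) out.write((int) (buffer << (8 - filled)));
    }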

Page 27

Binary packing designed for CPU efficiency

naive maxbit:

  max = 0
  for (int i = 0; i < values.length; ++i) {
    current = maxbit(values[i])
    if (current > max) max = current
  }

→ unpredictable branch!

better:

  orvalues = 0
  for (int i = 0; i < values.length; ++i) {
    orvalues |= values[i]
  }
  max = maxbit(orvalues)

→ loop => very predictable branch

even better:

  orvalues = 0
  orvalues |= values[0]
  …
  orvalues |= values[31]
  max = maxbit(orvalues)

→ no branching at all!

see paper: "Decoding billions of integers per second through vectorization" by Daniel Lemire and Leonid Boytsov
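The same idea as a runnable Java method (a sketch; the name and the 32-value block size are illustrative):

    static int maxBits(long[] values, int offset) {
        long or = 0;
        for (int i = 0; i < 32; i++) {     // fixed trip count: perfectly predictable
            or |= values[offset + i];
        }
        return 64 - Long.numberOfLeadingZeros(or | 1);  // bits needed for the largest value
    }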

Page 28

Binary unpacking designed for CPU efficiency

  int j = 0
  for (int i = 0; i < output.length; i += 32) {
    maxbit = input[j]
    unpack_32_values(input, j + 1, output, i, maxbit)
    j += 1 + maxbit
  }

(32 values at maxbit bits each fill exactly maxbit 32-bit words, so the input cursor advances by 1 + maxbit.)
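In real implementations, unpack_32_values is one specialized, branch-free routine per bit width; a generic (and therefore slower) Java sketch of its contract:

    // Unpacks 32 values of `bits` bits each from 32-bit words in `input`,
    // starting at word `inPos`, into `output` at `outPos`. Consumes exactly
    // `bits` words, which is why the caller advances by 1 + maxbit.
    static void unpack_32_values(int[] input, int inPos, long[] output, int outPos, int bits) {
        long buffer = 0;
        int available = 0, word = inPos;
        long mask = (1L << bits) - 1;               // assumes 1 <= bits <= 32
        for (int i = 0; i < 32; i++) {
            if (available < bits) {                 // refill from the next word
                buffer = (buffer << 32) | (input[word++] & 0xFFFFFFFFL);
                available += 32;
            }
            output[outPos + i] = (buffer >>> (available - bits)) & mask;
            available -= bits;
        }
    }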

Page 29

Compression comparison

TPC-H: compression of two 64-bit id columns with delta encoding

[Bar chart, Primary key: space saved (0 to 100%) for plain vs delta encoding, each with no compression and with Snappy.]

Page 30

Compression comparison

TPC-H: compression of two 64-bit id columns with delta encoding

[Bar charts, Primary key and Foreign key: space saved (0 to 100%) for plain vs delta encoding, each with no compression and with Snappy.]

Page 31

Decoding time vs Compression

[Scatter plot: decoding speed (million values / second, 0 to 1400) against compression (percent saved, 0 to 100%) for Plain, Plain + Snappy, and Delta.]

Page 32

Performance

Page 33

Size comparison

TPC-DS, 100 GB scale factor (+ Snappy unless otherwise specified)

[Bubble chart for the Store sales and Lineitem tables; the area of each circle is proportional to the file size. Formats compared: Text uncompressed, Text + LZO, Seq, Avro, RC, Parquet 1, Parquet 2.]

Page 34

Impala query performance

[Bar chart: seconds (0 to 300) for the Interactive, Reporting, and Deep Analytics query categories, comparing Text, Seq, RC, Parquet 1.0, and Parquet 2.0. TPC-DS geometric mean per query category.]

Setup: 10 machines (8 cores, 48 GB of RAM, 12 disks each); OS buffer cache flushed between every query.

Page 35

Roadmap 2.x

Page 36

Roadmap 2.x

- C++ library: implementation of the encodings
- Predicate push down: use statistics to implement filters at the metadata level
- Decimal and Timestamp logical types

Page 37

Community

Page 38

Thank you to our contributors

[Chart: contributions over time, annotated with the open source announcement and the 1.0 release.]

Page 39

Get involved

Mailing lists: [email protected]

Parquet sync ups: regular meetings on Google Hangout

Page 40

Questions

Questions.foreach( answer(_) )

@ApacheParquet