TRANSCRIPT
How Data Volume Affects Spark Based Data Analytics on a Scale-up Server
Ahsan Javed Awan, EMJD-DC (KTH-UPC) (https://www.kth.se/profile/ajawan/), Mats Brorsson (KTH), Vladimir Vlassov (KTH) and Eduard Ayguade (UPC and BSC)
Motivation
Why should we care about architecture support?
*Source: SGI
Data Growing Faster Than Technology
Motivation
Cont..
*Taken from Babak Falsafi slides
Motivation
Cont...
Our Focus: Improve the node-level performance through architecture support
*Source: http://navcode.info/2012/12/24/cloud-scaling-schemes/
Phoenix++, Metis, Ostrich, etc.; Hadoop, Spark, Flink, etc.
Motivation
Cont...
A mismatch between the characteristics of emerging workloads and the underlying hardware:
M. Ferdman et al., "Clearing the clouds: A study of emerging scale-out workloads on modern hardware," in ASPLOS 2012.
Z. Jia et al., "Characterizing data analysis workloads in data centers," in IISWC 2013.
Z. Jia et al., "Characterizing and subsetting big data workloads," in IISWC 2014.
A. Yasin et al., "Deep-dive analysis of the data analytics workload in CloudSuite," in IISWC 2014.
T. Jiang et al., "Understanding the behavior of in-memory computing workloads," in IISWC 2014.
Existing studies lack a quantitative analysis of the bottlenecks of scale-out frameworks on a single node.
Progress Meeting 12-12-14
Which Scale-out Framework?
[Picture Courtesy: Amir H. Payberah]
Our Approach
Performance characterization of in-memory data analytics on a modern cloud server, in the 5th IEEE International Conference on Big Data and Cloud Computing, 2015 (Best Paper Award).
How Data Volume Affects Spark Based Data Analytics on a Scale-up Server
What are the major bottlenecks??
Focus of this talk
Our Approach
Do Spark based data analytics benefit from using scale-up servers?
How severe is the impact of garbage collection on performance of Spark based data analytics?
Is file I/O detrimental to Spark based data analytics performance?
How does data size affect the micro-architecture performance of Spark based data analytics?
What are the remaining questions??
Our Approach
We evaluate the impact of data volume on the performance of Spark based data analytics running on a scale-up server.
We quantify the limitations of using Spark on a scale-up server with large volumes of data.
We quantify the variations in micro-architectural performance of applications across different data volumes.
What are the contributions??
Our Approach
Use a subset of benchmarks from BigDataBench.
Use the Big Data Generator Suite (BDGS) to generate synthetic datasets of 6 GB, 12 GB and 24 GB.
Configure Spark in local mode and tune its internal parameters.
Rely on GC logs to collect garbage collection times.
Use Spark logs to gather execution times of benchmarks.
Use Concurrency Analysis in Intel VTune to collect wait time and CPU time of executor pool threads.
Use General Micro-architectural Exploration in Intel VTune to analyze the impact of data volume on micro-architecture characteristics.
Methodology
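The local-mode setup above can be sketched as a single spark-submit invocation; this is a minimal sketch in which the thread count, heap size, GC flags and benchmark names are illustrative assumptions, not the study's exact settings:

```shell
# Run one benchmark in Spark local mode on a single scale-up node.
# local[12] caps the executor pool at 12 threads; in local mode the
# executor runs inside the driver JVM, so GC flags go on the driver.
# Heap size, class and jar names below are placeholders.
spark-submit \
  --master local[12] \
  --driver-memory 48g \
  --driver-java-options "-XX:+UseParallelGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log" \
  --class <benchmark-main-class> <benchmark.jar> <input-path>
```

With this setup, gc.log feeds the garbage collection time measurements while Spark's own logs provide the per-benchmark execution times.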
Our Approach
What are the characteristics of benchmarks?
Our Hardware Configuration
System Details
Our Hardware Configuration
Machine Details
Hyper-Threading and Turbo Boost are disabled.
Our Approach
Software Parameters
Motivation
Do Spark based data analytics benefit from using larger scale-up servers?
Spark applications do not benefit significantly from executors with more than 12 cores.
Motivation
Is GC detrimental to scalability of Spark applications?
The proportion of GC time increases with the number of cores
Motivation
Does performance remain consistent as we enlarge the data size?
The decrease in data processed per second (DPS) ranges from 11% to 93% (Parallel Scavenge).
Motivation
Does the choice of Garbage Collector impact the data processing capability of the system ??
The improvement in DPS ranges from 1.4x to 3.7x on average with Parallel Scavenge as compared to G1.
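Switching collectors for such a comparison is a one-flag change to the JVM that hosts the executor pool; a sketch using the HotSpot flag spellings of the JDK 7/8 era (the "..." stands for the rest of the spark-submit command line):

```shell
# Exactly one collector flag per run (local mode: flags go on the driver JVM).
spark-submit --driver-java-options "-XX:+UseParallelGC"      ...   # Parallel Scavenge
spark-submit --driver-java-options "-XX:+UseConcMarkSweepGC" ...   # Concurrent Mark Sweep
spark-submit --driver-java-options "-XX:+UseG1GC"            ...   # G1
```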
Motivation
How does GC affect the data processing capability of the system?
GC time does not scale linearly with data size.
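The GC times behind this comparison come from the GC logs named in the methodology; summing the pause times out of such a log can be sketched as below, assuming the classic JDK 7/8 `-XX:+PrintGCDetails` line format (the two sample log lines are fabricated for illustration):

```shell
# Two fabricated ParallelGC log lines in the -XX:+PrintGCDetails format.
cat > gc_sample.log <<'EOF'
[GC (Allocation Failure) [PSYoungGen: 65536K->10748K(76288K)] 65536K->10756K(251392K), 0.0123456 secs]
[GC (Allocation Failure) [PSYoungGen: 76284K->10732K(76288K)] 76292K->18340K(251392K), 0.0234567 secs]
EOF

# Total GC time = sum of the per-pause durations at the end of each line.
awk -F', ' '/secs\]/ { sub(/ secs\]/, "", $NF); total += $NF }
            END { printf "%.7f\n", total }' gc_sample.log
# → 0.0358023
```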
Motivation
How does CPU utilization scale with data volume?
CPU utilization decreases as input data size increases.
Motivation
Is file I/O detrimental to performance?
The fraction of file I/O increases by 6x, 18x and 25x for Word Count, Naive Bayes and Sort, respectively, when the input data is increased by 4x.
Motivation
How does data size affect micro-architectural performance?
Instruction retirement improves by 5% to 10% as we enlarge the data size.
Motivation
Cont..
Execution units inside the core exhibit improved utilization at larger data sets
Motivation
Cont..
An increase in L1-bound stalls implies better utilization of the L1 caches.
Motivation
Cont..
Spark benchmarks exhibit reduced memory bandwidth utilization
Key Findings
Spark workloads do not benefit significantly from executors with more than 12 cores.
The performance of Spark workloads degrades with large volumes of data due to substantial increase in garbage collection and file I/O time.
Without any tuning, the Parallel Scavenge garbage collection scheme outperforms the Concurrent Mark Sweep and G1 garbage collectors for Spark workloads.
Spark workloads exhibit improved instruction retirement due to lower L1 cache misses and better utilization of functional units inside cores at large volumes of data.
Memory bandwidth utilization of Spark benchmarks decreases with large volumes of data and is 3x lower than the available off-chip bandwidth on our test machine.
Future Directions
NUMA-Aware Task Scheduling
Cache-Aware Transformations
Exploiting Processing-In-Memory Architectures
HW/SW Data Prefetching
Rethinking Memory Architectures