
WattDB – Energy-Proportionality on a Cluster Scale

Daniel Schall, Volker Höfner, Prof. Dr. Theo Härder
TU Kaiserslautern

Outline

- Energy efficiency in database systems
- Multi-Core vs. Cluster
- WattDB: recent, current, and future work

Motivation

- More and more data → bigger servers
- In-memory technology
- Electricity cost

Power Breakdown

- Load between 0–50%, yet energy consumption already at 50–90%!

"Analyzing the Energy Efficiency of a Database Server", D. Tsirogiannis, S. Harizopoulos, and M. A. Shah, SIGMOD 2010

"Distributed Computing at Multi-dimensional Scale", Alfred Z. Spector, keynote at MIDDLEWARE 2008

Growth of Main Memory makes it worse

[Figure: power (W) vs. system utilization (%), contrasting the measured power@utilization curve with ideal energy-proportional behavior]

In-memory data management assumes continuous peak loads! The energy consumption of memory grows linearly with its size and dominates all other components at all levels of system utilization.

Mission: Energy Efficiency! ("Green IT")

- Energy cost > HW and SW cost
- Energy Efficiency = Work / Energy Consumption (a small worked example follows below)
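As a quick illustration of this metric, here is a minimal Python sketch; the query count and wattage below are made-up numbers for illustration, not WattDB measurements.

```python
# Minimal sketch of the slide's metric: Energy Efficiency = Work / Energy Consumption.
# The workload figures are illustrative, not measurements.

def energy_efficiency(work_units: float, energy_joules: float) -> float:
    """Work completed per joule consumed (e.g., queries per joule)."""
    return work_units / energy_joules

# Example: a server finishing 1,200 queries in 60 s at an average draw of 300 W
# consumes 300 W * 60 s = 18,000 J.
queries = 1_200
energy = 300 * 60  # watts * seconds = joules

print(f"{energy_efficiency(queries, energy):.4f} queries/J")  # -> 0.0667 queries/J
```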


Average Server Utilization

- Google servers: load at about 30%
- SPH AG: load between 5% and 30%

Energy Efficiency – Related Work

Software:
- Delaying queries
- Optimizing external storage access patterns
- Forcing sleep states
- "Intelligent" data placement
→ Narrow approaches, only small improvements

Hardware:
- Sleep states
- Optimizing energy consumption when idle
- Selecting energy-efficient hardware
- Dynamic Voltage Scaling

Goal: Energy-Proportionality

[Figure: power (W) vs. system utilization (%), contrasting the power@utilization curve with ideal energy-proportional behavior; (1) marks idle power, (2) the disproportional region]

1) Reduce idle power consumption
2) Eliminate disproportional energy consumption

(Both effects are modeled in the sketch below.)
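A minimal Python model of both effects, assuming a hypothetical server with a 50% idle power floor; the curve shape is an assumption for illustration, not measured data.

```python
# Contrast a typical (non-proportional) power curve with ideal
# energy-proportional behavior, as in the figure above.

PEAK_POWER = 100.0  # watts at 100% utilization (hypothetical server)

def typical_power(util: float, idle_fraction: float = 0.5) -> float:
    """Non-proportional server: a large idle floor plus a load-dependent part."""
    return PEAK_POWER * (idle_fraction + (1 - idle_fraction) * util)

def proportional_power(util: float) -> float:
    """Ideal energy-proportional server: power scales linearly from zero."""
    return PEAK_POWER * util

for util in (0.0, 0.2, 0.5, 1.0):
    t, p = typical_power(util), proportional_power(util)
    print(f"util {util:4.0%}: typical {t:5.1f} W, ideal {p:5.1f} W, excess {t - p:5.1f} W")
```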

From Multi-Core to Multi-Node

[Diagram: a single multi-core server (cores with per-core L1 caches, shared L2 and L3 caches, and main memory) contrasted with a multi-node cluster (nodes, each with its own CPU, cache, and main memory, connected by a 1 Gb Ethernet switch)]

A dynamic cluster of wimpy nodes → an energy-proportional DBMS

[Figures: power (W) vs. system utilization (%) with the power@utilization curve; cluster load varying over time]

Cluster Overview

Lightweight nodes, low-power hardware. Each node:
- Intel Atom D510 CPU
- 2 GB DRAM
- 80plus Gold power supply
- 1 Gbit Ethernet interconnect
- 23 W (idle) – 26 W (100% CPU), 41 W (100% CPU + disks)

Considered Amdahl-balanced: scale down the CPUs to match the disks and network! (A back-of-the-envelope power model follows below.)
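A back-of-the-envelope cluster power model in Python using these per-node figures; the cluster size and the mix of node states are hypothetical.

```python
# Per-node wattages from the slide (Intel Atom D510 nodes).
W_IDLE, W_CPU, W_CPU_DISKS = 23, 26, 41

def cluster_power(busy_with_disks: int, busy_cpu_only: int, idle: int) -> int:
    """Total wattage for a given mix of node states."""
    return busy_with_disks * W_CPU_DISKS + busy_cpu_only * W_CPU + idle * W_IDLE

# A hypothetical 10-node cluster: 4 nodes on I/O-heavy work, 2 compute-only, 4 idle.
print(cluster_power(4, 2, 4), "W")  # 4*41 + 2*26 + 4*23 = 308 W
```

Powering the four idle nodes down would save 92 W here, which is exactly the lever a dynamic cluster can exploit.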


Shared Disk AND Shared Nothing

Physical hardware layout: shared disk
- Every node can access every page
- Local vs. remote latency

Logical implementation: shared nothing
- Data is mapped to nodes n:1
- Exclusive access
- Transfer of control

Combine the benefits of both worlds! (A small sketch of this mapping follows below.)
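A minimal sketch of the n:1 mapping and transfer of control, assuming hypothetical partition and node names; this is not WattDB's actual interface.

```python
# Shared nothing on shared disk: every node can physically reach every page,
# but each partition is logically owned by exactly one node (n:1 mapping),
# so access stays exclusive and ownership can move without copying data.

partition_owner = {"part_a": "node1", "part_b": "node2", "part_c": "node3"}

def route(partition: str) -> str:
    """Shared nothing: every request for a partition goes to its single owner."""
    return partition_owner[partition]

def transfer_control(partition: str, new_owner: str) -> None:
    """Shared disk makes this cheap: only ownership moves, not the pages."""
    partition_owner[partition] = new_owner

transfer_control("part_c", "node2")  # e.g., before powering node3 down
print(route("part_c"))               # -> node2
```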


Recent Work

- SIGMOD 2010 Programming Contest: first prototype of the distributed DBMS
- BTW 2011 Demo Track: master node powering the cluster up/down according to load (sketched below)
- SIGMOD 2011 Demo Track: energy-proportional query processing
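A hedged sketch of such a master-node controller; the thresholds and the one-node-per-step policy are assumptions for illustration, not WattDB's actual algorithm.

```python
# Threshold-based scaling: wake a node under high load, shut one down under low load.
UP_THRESHOLD, DOWN_THRESHOLD = 0.8, 0.3

def adjust_cluster(active_nodes: int, utilization: float,
                   min_nodes: int = 1, max_nodes: int = 10) -> int:
    """Return the new number of active nodes for the observed utilization."""
    if utilization > UP_THRESHOLD and active_nodes < max_nodes:
        return active_nodes + 1  # overload: power another node up
    if utilization < DOWN_THRESHOLD and active_nodes > min_nodes:
        return active_nodes - 1  # underload: power one node down
    return active_nodes

print(adjust_cluster(4, 0.9))  # -> 5
print(adjust_cluster(4, 0.1))  # -> 3
```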


Current Work

- Incorporate GPU operators: improved energy efficiency? More tuples per watt?
- Monitoring & load forecasting: for management decisions; act instead of react (see the forecasting sketch below)
- Energy-proportional storage: storage needs vs. processing needs
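One way to "act instead of react" is to forecast the next load sample and power nodes up before demand arrives; a minimal sketch using exponential smoothing (the smoothing factor and the load trace are illustrative assumptions):

```python
def forecast(history: list[float], alpha: float = 0.5) -> float:
    """Exponentially smoothed estimate of the next load sample."""
    estimate = history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

load_trace = [0.20, 0.25, 0.35, 0.50, 0.70]  # rising load
print(f"predicted next load: {forecast(load_trace):.2f}")  # -> 0.50: start waking nodes now
```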

Future Work

- Policies for powering nodes up/down
- Load distribution and balancing among nodes
- Which use cases fit the proposed architecture, and which don't?
- Alternative hardware configurations: heterogeneous HW environments (SSDs, other CPUs)
- Energy-efficient self-tuning


Current Work

[Diagram: a table split into partitions, each statically assigned to one node (Node1, Node2, Node3)]

Future Work

[Diagram: the same partitioned table, with a partition reassigned from Node3 to Node2 as nodes are powered up or down]
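A minimal sketch of such a reassignment when the active node set changes; the round-robin placement is an assumption, since the slides leave the policy open.

```python
from itertools import cycle

def rebalance(partitions: list[str], nodes: list[str]) -> dict[str, str]:
    """Map each partition to exactly one node (n:1), spreading them evenly."""
    owners = cycle(nodes)
    return {part: next(owners) for part in partitions}

parts = ["part_a", "part_b", "part_c"]
print(rebalance(parts, ["node1", "node2", "node3"]))  # one partition per node
print(rebalance(parts, ["node1", "node2"]))           # node3 powered down
```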

Conclusion

- Energy consumption matters!
- Current hardware is not energy-proportional
- Systems run most of the time at 20–50% utilization
- WattDB is a prototype of an energy-proportional DBMS
- Several challenges ahead


Thank You!
