
Energy in Cloud Computing and Renewable Energy

Reza Farivar

Energy in Cloud Computing

• The Datacenter as a Computer – e-book from Google by Luiz André Barroso and Urs Hölzle
• Cloud Computing – The IT Solution for the 21st Century – Carbon Disclosure Project study, 2011

Introduction

• Energy consumption of data centers is considerable
– 3% of total annual energy consumption in the US
– Double the 2006 figure (1.5%)
– Still growing
– Estimated by the Environmental Protection Agency (EPA)

Cloud Computing Adoption in IT

Carbon Emission Model

Energy Savings due to Clouds

• CO2 savings of cloud computing adoption compared to no cloud adoption, 2011–2020

Energy Savings due to Clouds

• Net Energy Savings due to Cloud Computing adoption, 2011-2020

Energy infrastructure of a data center

Warehouse Scale Computers

• A WSC (warehouse-scale computer) is a type of datacenter
• WSCs host hardware and software for multiple organizational units, or even different companies, on shared resources

Storage in WSC

Energy usage in WSC

• Peak power usage of one generation of WSCs deployed at Google in 2007

• CPU is NOT the sole energy consumer

• No one subsystem dominates

Data Center Cooling

• CRAC unit: Computer Room Air Conditioning
• Datacenter raised floor with hot–cold aisle setup

Data Center Energy Efficiency

• DCPE (Data Center Performance Efficiency) – defined by The Green Grid

Data Center Energy Efficiency

• PUE (Power Usage Effectiveness): relates to the facility
– The ratio of total building power to IT power, i.e. the power consumed by the actual computing equipment (servers, network equipment, etc.)
• In 2006, an estimated 85% of datacenters had a PUE greater than 3.0
– i.e. the building’s mechanical and electrical systems consume twice as much power as the actual computing load
• Only 5% have a PUE of 2.0
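As a quick sanity check on these figures, PUE is a one-line calculation (the 1,000 kW IT load below is a hypothetical number, not from the slides):

```python
def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
    """Power Usage Effectiveness: total building power / IT power."""
    return total_facility_power_kw / it_power_kw

# A PUE of 3.0 means the mechanical and electrical overheads consume
# twice as much power as the computing equipment itself:
it_load = 1000.0        # kW drawn by servers, network gear, etc. (hypothetical)
overhead = 2 * it_load  # cooling, power distribution, lighting, ...
print(pue(it_load + overhead, it_load))  # → 3.0
```

A lower PUE means less of the building's power budget is lost before reaching the computing equipment; 1.0 would mean zero overhead.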

PUE of 24 Data Centers (2007)

Improving PUE (Google, 2008)

• Careful air flow handling: The hot air exhausted by servers is not allowed to mix with cold air, and the path to the cooling coil is very short, so little energy is spent moving cold or hot air long distances.

• Elevated cold aisle temperatures: The cold aisle of the containers is kept at about 27°C rather than 18–20°C.

• Use of free cooling: Several cooling towers dissipate heat by evaporating water, greatly reducing the need to run chillers. In most moderate climates, cooling towers can eliminate the majority of chiller runtime. Google’s datacenter in Belgium even eliminates chillers altogether, running on “free” cooling 100% of the time.

• Per-server 12-V DC UPS: Each server contains a mini-UPS, essentially a battery that floats on the DC side of the server’s power supply and is 99.99% efficient. These per-server UPSs eliminate the need for a facility-wide UPS, increasing the efficiency of the overall power infrastructure from around 90% to near 99%.

Data Center Energy Efficiency

• SPUE (Server Power Usage Effectiveness): a server’s energy conversion – the ratio of total server input power to its useful power
– Useful power: power consumed by the electronic components directly involved in the computation (motherboard, disks, CPUs, DRAM, I/O cards), excluding all losses in power supplies, VRMs, and fans

• SPUE ratios of 1.6–1.8 are common in today’s servers; many power supplies are less than 80% efficient, and many motherboards use voltage regulator modules (VRMs) that are similarly inefficient, losing more than 30% of input power in electrical conversion losses.

• SPUE should be less than 1.2
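If conversion losses compound multiplicatively through the power-delivery chain, SPUE follows directly from the stage efficiencies. The 80%/70% figures below are hypothetical, chosen to land in the "common" range quoted above:

```python
def spue(psu_efficiency: float, vrm_efficiency: float) -> float:
    """Server PUE: total server input power / useful power.

    Assumes losses compound multiplicatively across conversion stages;
    fan power is ignored for simplicity (an assumption, not from the slides).
    """
    return 1.0 / (psu_efficiency * vrm_efficiency)

# An ~80%-efficient PSU feeding ~70%-efficient VRMs (hypothetical figures)
# lands in the common 1.6-1.8 range:
print(round(spue(0.80, 0.70), 2))  # → 1.79
```

Note that facility-level and server-level overheads multiply: a facility with PUE 2.0 feeding servers with SPUE 1.8 delivers only 1/(2.0 × 1.8) ≈ 28% of utility power as useful power.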

Data Center Energy Efficiency

• Efficiency of computation – hardest to measure

• Use of benchmarks
– In HPC, LINPACK is used to rank supercomputers (Green500)
– No similarly broad initiative exists for internet services
– JouleSort
• Measures the total system energy to perform an out-of-core sort
– SPECpower_ssj2008
• For server-class systems, finds the performance-to-power ratio of a typical business application on an enterprise Java platform

SPEC Power results (2008)

• Performance-to-power ratio drops as the target load decreases
– System power decreases much more slowly than performance does

• E.g. energy efficiency at 30% load is less than half the efficiency at 100%

• At idle, the server still consumes 175 W – over half of its peak power consumption

• Quad-core Xeon, 4 GB RAM, 7,200 RPM hard drive
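The effect above can be reproduced with a simple linear power model. The 175 W idle figure is from the slide; treating peak as roughly twice idle (350 W) is an assumption for illustration:

```python
def power_w(utilization: float, idle_w: float = 175.0, peak_w: float = 350.0) -> float:
    """Linear server power model: power rises from idle to peak with load."""
    return idle_w + (peak_w - idle_w) * utilization

def efficiency(utilization: float) -> float:
    """Performance per watt, assuming performance is proportional to load."""
    return utilization / power_w(utilization)

# Efficiency at 30% load relative to efficiency at full load:
ratio = efficiency(0.30) / efficiency(1.00)
print(round(ratio, 2))  # → 0.46, i.e. less than half
```

The high idle floor dominates: at 30% load the server delivers 30% of its peak performance while still drawing 65% of its peak power.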

Load vs Efficiency

• Relative efficiency of the top 10 entries in the SPEC power benchmark

• Efficiency running at 30% load vs. the efficiency at 100%
• Ratio of idle vs. peak power

Activity Profile

• Average CPU utilization of 5,000 Google servers during a 6-month period

• Servers spend relatively little aggregate time at high load levels

• Most of the time is spent within the 10–50% CPU utilization range
– A perfect mismatch with the energy-efficiency profile of modern servers: they spend most of their time in their least efficient operating region

Absence of Idle Servers

• A consequence of high-performance, robust distributed-systems software design

• Efficient load distribution means that when overall load is lighter, many servers each run at lower load rather than some going fully idle

• Migrating workloads and their corresponding state to fewer machines during periods of low activity
– Can be easy with simple replication models, when servers are mostly stateless (i.e., serving data that resides on a shared NAS or SAN storage system)
– Comes at a cost in software complexity and energy for more complex data-distribution models, or for those with significant state and aggressive exploitation of data locality

• Need for resilient distributed storage: GFS and HDFS achieve higher resilience by distributing data chunk replicas across the entire cluster
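The consolidation idea above — migrating work onto fewer machines at low load — can be sketched as a first-fit-decreasing packing pass. This is an illustrative sketch only: a real system must also migrate state, respect failure domains, and meet latency SLAs:

```python
def consolidate(loads: list[float], capacity: float = 1.0) -> list[list[float]]:
    """Pack per-workload loads onto as few servers as possible
    (first-fit decreasing). Each server has a normalized capacity of 1.0.
    """
    servers: list[list[float]] = []
    for load in sorted(loads, reverse=True):
        for srv in servers:
            if sum(srv) + load <= capacity:
                srv.append(load)  # fits on an already-active server
                break
        else:
            servers.append([load])  # no room anywhere: power on another server
    return servers

# Ten workloads each at 30% load fit on four machines,
# letting the other six be powered down or repurposed:
print(len(consolidate([0.3] * 10)))  # → 4
```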

Poor Energy Proportionality in SPUE

• The CPU contribution to system power is 50% at peak but drops to less than 30% at low activity levels – the most energy-proportional subsystem
– Dynamic range of 3.5X

• Dynamic range of memory systems, disk drives, and networking equipment is lower: 2X for memory, 1.3X for disks, and less than 1.2X for networking switches
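These dynamic ranges imply a high system-wide idle floor. The ranges below are from the slide, but the peak-power shares are hypothetical round numbers chosen for illustration:

```python
# Per-subsystem (share of peak system power, dynamic range = peak/idle power).
# Dynamic ranges from the slide; peak-power shares are hypothetical.
subsystems = {
    "cpu":     (0.50, 3.5),
    "memory":  (0.30, 2.0),
    "disk":    (0.10, 1.3),
    "network": (0.10, 1.2),
}

# Idle power of each subsystem, as a fraction of peak system power:
idle = {name: share / dyn for name, (share, dyn) in subsystems.items()}
total_idle = sum(idle.values())

print(f"system idle power: {total_idle:.0%} of peak")
print(f"CPU share of power at idle: {idle['cpu'] / total_idle:.0%}")
```

Because the CPU throttles down much further than memory, disks, or switches, its share of system power shrinks at idle while the less proportional subsystems dominate — which is why whole-system power stays high even when the machine does little work.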

Poor Energy Proportionality in Data Center

• Efficiency curves derived from tests of several server power supplies

• Lack of proportionality at less than 30% of the peak-rated power

• Power supplies have significantly greater peak capacity than their corresponding computing equipment, so they usually operate well below peak

Oversubscribing

• Can increase the overall utilization and efficiency of the datacenter

• Opportunities at Rack, container (PDU) and data center levels

• Especially at the container and cluster levels
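The payoff of power oversubscription can be sketched as follows: since a whole cluster rarely hits the sum of its per-server nameplate ratings, a fixed power budget can host more machines. All the concrete numbers below are hypothetical:

```python
def extra_servers(budget_kw: float, nameplate_kw: float,
                  simultaneous_peak_fraction: float) -> int:
    """How many more servers fit under a fixed power budget when we
    provision for the observed simultaneous cluster peak rather than
    the sum of per-server nameplate ratings.
    """
    naive = int(budget_kw / nameplate_kw)  # one nameplate rating per server
    effective_per_server = nameplate_kw * simultaneous_peak_fraction
    oversubscribed = int(budget_kw / effective_per_server)
    return oversubscribed - naive

# If aggregate draw never exceeds ~80% of summed nameplate ratings
# (hypothetical), a 1 MW budget hosts 25% more 500 W servers:
print(extra_servers(1000.0, 0.5, 0.80))  # → 500 extra servers
```

The larger the group over which peaks are aggregated (container or cluster rather than rack), the less correlated the per-server peaks, and the more headroom oversubscription recovers — consistent with the slide's point about container and cluster levels.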

Energy from Mobile Aspect

• Cloud Computing can save energy for mobile clients

• E.g. a game of chess
– The state of the game fits in a few bytes
– Computing a move requires much more computation
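The chess example reduces to a standard offloading inequality: shipping the tiny game state costs radio energy, while computing the move locally costs CPU energy. A sketch with hypothetical energy figures:

```python
def offload_saves_energy(local_compute_j: float,
                         bytes_to_send: int,
                         radio_j_per_byte: float) -> bool:
    """Offloading pays off when transmitting the input state costs less
    energy than computing the result locally. All figures hypothetical.
    """
    return bytes_to_send * radio_j_per_byte < local_compute_j

# A chess position fits in well under 100 bytes, while searching the
# game tree locally could cost whole joules of CPU energy:
print(offload_saves_energy(local_compute_j=5.0,
                           bytes_to_send=100,
                           radio_j_per_byte=1e-5))  # → True
```

The ratio of computation to state size is what makes chess a favorable case; a workload whose input is large relative to its compute cost (e.g. local video filtering) could tip the inequality the other way.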
