
Open Data Center Alliance℠ Usage: STANDARD UNITS OF MEASURE FOR IAAS

© 2011 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.


Legal Notice

© 2011 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.

This “Open Data Center Alliance℠ Usage: Standard Units of Measure for IaaS” is proprietary to the Open Data Center Alliance, Inc.

NOTICE TO USERS WHO ARE NOT OPEN DATA CENTER ALLIANCE PARTICIPANTS: Non-Open Data Center Alliance Participants only have the right to review, and make reference or cite, this document. Any such references or citations to this document must give the Open Data Center Alliance, Inc. full attribution and must acknowledge the Open Data Center Alliance, Inc.’s copyright in this document. Such users are not permitted to revise, alter, modify, make any derivatives of, or otherwise amend this document in any way.

NOTICE TO USERS WHO ARE OPEN DATA CENTER ALLIANCE PARTICIPANTS: Use of this document by Open Data Center Alliance Participants is subject to the Open Data Center Alliance’s bylaws and its other policies and procedures.

OPEN DATA CENTER ALLIANCE℠, ODCA℠, and the OPEN DATA CENTER ALLIANCE logo℠ are service marks owned by Open Data Center Alliance, Inc. and all rights are reserved therein. Unauthorized use is strictly prohibited.

This document and its contents are provided “AS IS” and are to be used subject to all of the limitations set forth herein.

Users of this document should not reference any initial or recommended methodology, metric, requirements, or other criteria that may be contained in this document or in any other document distributed by the Alliance (“Initial Models”) in any way that implies the user and/or its products or services are in compliance with, or have undergone any testing or certification to demonstrate compliance with, any of these Initial Models.

Any proposals or recommendations contained in this document including, without limitation, the scope and content of any proposed methodology, metric, requirements, or other criteria does not mean the Alliance will necessarily be required in the future to develop any certification or compliance or testing programs to verify any future implementation or compliance with such proposals or recommendations.

This document does not grant any user of this document any rights to use any of the Alliance’s trademarks.

All other service marks, trademarks and trade names referenced herein are those of their respective owners.


Executive Summary

The worldwide market for cloud services—encompassing both dedicated services available via private clouds and shared services via public clouds—could top $148.8 billion in 2014, up from $68.3 billion in 2010. The sheer, and growing, volume of available services makes it challenging for any organization to assess its options, measure what services are being delivered, and understand what attributes exist for each service.

Organizations need a way to compare services from competing Cloud-Providers, as well as with their own internal capabilities. Such comparisons need to be quantitative on a like-for-like or “apples-to-apples” basis (e.g., quantity of consumption, period of usage) and qualitative on a set of service assurance attributes (e.g., degree of elasticity, degree of service level). However, there is no standard, vendor-independent unit of measure, equivalent to the million instructions per second (MIPS) and other measures used for mainframes, that would allow such comparison. This is partly because there are no common measurements for the technical units of capacity being sold, and no common way to describe the qualitative attributes of cloud services. Consequently, organizations either try to fit an individual Cloud-Provider’s model to their business problems, or embark on costly and lengthy request for proposal (RFP) processes in an attempt to force the providers into a set of parameters that can be compared.

The Open Data Center Alliance℠ recognizes the need for development of Standard Units of Measure (SUoM) to describe the quantitative and qualitative attributes of services to enable easier, more precise comparison and discovery of the marketplace. This Usage Model is designed to provide subscribers of cloud services with a framework and associated attributes used to describe and measure the capacity, performance and quality of a cloud service.


Purpose

This document describes the creation and use of SUoM for quantitative and qualitative measures which, between them, describe the capacity, performance and quality of the service components.

In the Usage Model detailed below, we restrict ourselves to consideration of the Infrastructure as a Service (IaaS) level, but the principles are extensible as needed to Platform as a Service (PaaS) and Software as a Service (SaaS). This methodology can thus be used at a micro level, to define the characteristics of individual service components, and extended to the macro level, to allow the prediction of the performance of a complex, composite application landscape.

The SUoM will be usable in a number of ways:

• Within a Service Catalog prior to service delivery

• As a definition of the expected service capabilities while services are in use

• As a billing reference after consumption
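To make these three uses concrete, the following minimal sketch (ours, not part of the usage model) shows how a single catalog record carrying quantitative and qualitative SUoM attributes could serve selection, in-use assurance, and billing. All field names and figures are illustrative assumptions.

```python
# Illustrative sketch only; field names and figures are assumptions, not an
# ODCA-defined schema. One record backs all three uses of the SUoM.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    service_name: str
    compute_specvirt: float       # quantitative: benchmark-calibrated compute capacity
    memory_gb: float              # quantitative: memory supplied with that compute
    storage_tb: float             # quantitative: storage capacity
    storage_iops_per_tb: int      # quantitative: storage performance
    network_gbps: float           # quantitative: network bandwidth
    availability_level: str       # qualitative: "Bronze" | "Silver" | "Gold" | "Platinum"
    recoverability_level: str     # qualitative service level
    price_per_hour: float         # consumption-based billing reference

entry = CatalogEntry("web-tier-vm", 200.0, 16.0, 0.5, 2000, 1.0, "Silver", "Bronze", 0.12)

quoted_iops = entry.storage_iops_per_tb        # 1) selection: what the catalog promises
assured_level = entry.availability_level       # 2) in use: the level the provider is held to
monthly_bill = 30 * 24 * entry.price_per_hour  # 3) post use: billing reference (~$86.40)
```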

Taxonomy

• Cloud-Subscriber – A person or organization that has been authenticated to a cloud and maintains a business relationship with a cloud.

• Cloud-Provider – An organization providing network services and charging Cloud-Subscribers. A (public) Cloud-Provider provides services over the Internet.

Quantitative Measures

Quantitative units within the SUoM can be described and/or calibrated in terms of linear capability (e.g., 500 GB of disk capacity), throughput (e.g., 2,000 I/O operations per second), or consumption (e.g., $0.01 per million I/O operations).
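As a worked example (ours, not from the usage model), the throughput and consumption figures quoted above can be combined to put a consumption-based price on a sustained workload:

```python
# Combining a throughput unit with a consumption-based unit, using the
# figures quoted in the text above. Purely illustrative arithmetic.
iops = 2000                      # throughput: 2,000 I/O operations per second
price_per_million_io = 0.01      # consumption: $0.01 per million I/O operations

ios_per_hour = iops * 3600                                        # 7,200,000 operations
cost_per_hour = ios_per_hour / 1_000_000 * price_per_million_io   # $0.072 per hour
print(f"{ios_per_hour:,} IOs/hour -> ${cost_per_hour:.3f}/hour")
```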

For IaaS, we begin with quantitative units for the three major components which the Cloud-Provider needs to describe:

• Compute (incorporating CPU and memory)

• Storage

• Network

For compute, there must be a consistent benchmark that is useful for comparison across a wide range of Cloud-Subscriber needs. We propose SPECvirt_sc2010 from http://www.SPEC.org. This benchmark covers three principal performance areas meaningful to many Cloud-Subscribers: Web Hosting (user interface intensive), Java Hosting (compute intensive) and Mail (database/transaction intensive). To represent memory needs, we suggest use of a default gigabytes per SPECvirt ratio, and descriptions of double and quadruple memory density above this level.
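A minimal sketch of how such a compute description might be expressed follows; the baseline gigabytes-per-SPECvirt ratio, the function name, and the figures are illustrative assumptions, not values defined by the Alliance or by SPEC.

```python
# Hypothetical sketch: a compute unit described by its SPECvirt_sc2010 score
# plus a default gigabytes-per-SPECvirt memory ratio, with "double" and
# "quadruple" density options. The baseline ratio of 1.0 GB is an assumption.
DEFAULT_GB_PER_SPECVIRT = 1.0
MEMORY_DENSITY = {"standard": 1, "double": 2, "quadruple": 4}

def memory_gb(specvirt_score: float, density: str = "standard") -> float:
    """Memory implied by a SPECvirt-calibrated compute unit at a given density."""
    return specvirt_score * DEFAULT_GB_PER_SPECVIRT * MEMORY_DENSITY[density]

print(memory_gb(400))              # 400.0 GB at the default ratio
print(memory_gb(400, "double"))    # 800.0 GB at double memory density
```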


For storage, measurement units must allow comparison of capacity, performance and quality. Capacity can be measured in terabytes (TB). Performance can be provided in Input/Output Operations Per Second (IOPS) per TB. Quality would be rated by level and will be defined in a later iteration of this Usage Model.

For networks, measurement units must allow comparison of bandwidth, performance and quality. Bandwidth can be represented in gigabits per second (Gb/s). Performance can be quantified in latency, jitter and throughput per minute. Quality in networks, as in storage, would be rated by level: Bronze, Silver, Gold and Platinum.
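For illustration, a hedged sketch of comparing two hypothetical storage offers on this basis; the provider names and figures are invented.

```python
# Invented figures; a like-for-like comparison using the units above
# (TB for capacity, IOPS per TB for performance, a level for quality).
offers = {
    "provider_a": {"capacity_tb": 10, "iops_per_tb": 1500, "quality": "Gold"},
    "provider_b": {"capacity_tb": 10, "iops_per_tb": 2500, "quality": "Silver"},
}

for name, o in offers.items():
    total_iops = o["capacity_tb"] * o["iops_per_tb"]
    print(f"{name}: {o['capacity_tb']} TB, {total_iops:,} total IOPS, quality {o['quality']}")
```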

In addition, standard units are needed for aspects such as:

• Time-to-Deployment (when does deployment start and end?)

• Duration-of-Use (when does billing start and end?)

• Block Scale Unit (the multiplier used for “wholesale” consumption of capacity — e.g., for providing resource in the thousands of cores for calculation farms)
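A minimal sketch (ours) of how Duration-of-Use and a Block Scale Unit could be applied once the start and end points are commonly defined; the block size, timestamps and rounding rule are illustrative assumptions.

```python
# Illustrative only: billing runs between agreed start and end events, and
# wholesale capacity is requested in whole multiples of a block. The block
# size of 1,000 cores and the example timestamps are assumptions.
from datetime import datetime

BLOCK_SCALE_UNIT_CORES = 1000

def billed_hours(start: datetime, end: datetime) -> float:
    """Duration-of-Use: elapsed hours between the agreed start and end points."""
    return (end - start).total_seconds() / 3600

cores_requested = 3500
blocks_billed = -(-cores_requested // BLOCK_SCALE_UNIT_CORES)   # ceiling division -> 4 blocks

hours = billed_hours(datetime(2011, 6, 1, 9, 0), datetime(2011, 6, 2, 9, 0))  # 24.0 hours
```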

Qualitative Measures of Service

Qualitative units can be described in very specific terms (e.g., precise details of the encryption standard), but for this Usage Model, we will restrict the discussion to differentiating each qualitative service into levels of delivery, with further specificity to be added in later revisions.

We propose standardizing on four levels of service differentiation:

• Bronze

• Silver

• Gold

• Platinum

It is the outcomes which are important to consumers of a service, and there are at least two ways in which such levels of service can be assured: enforcement by penalty or assurance by provision.

1. Enforcement by Penalty – Establishing some form of penalty if service levels are not met (assuming this construction is accepted by the Cloud-Provider), leaving the onus with the provider as to how the levels are achieved.

2. Assurance by Provision – Describing how the service levels will be achieved. In highly structured and automated services such as cloud, the required effects are most likely to be achieved by having provisions pre-built into the environment (e.g., redundant components for high levels of recovery), rather than relying simply on supplier responsiveness.

It is expected that, in practice, a combination of both will be available.


Some outline of the attributes of the intended service levels is given in the following table. It is expected that the highest level will represent a target value that is only invoked in exceptional circumstances.

• Bronze – Basic
• Silver – Enterprise equivalent
• Gold – Critical market or business sector equivalent
• Platinum – Military or safety-critical equivalent

Outline
• Bronze – Representing the lower-end corporate requirement, possibly equating to a reasonably high level for a small to medium business customer
• Silver – Representing a trade-off more towards cost than service level within the SLA range
• Gold – Representing a preference for more cost to deliver a higher quality of service within the SLA range
• Platinum – Representing the maximum contemplated corporate requirement, stretching towards the lower end of military or safety-critical needs

Price levels
• Bronze – € (lowest, commodity)
• Silver – €€
• Gold – €€€
• Platinum – €€€€ (premium)

Measures likely to be taken
• Bronze – Standard out-of-the-box components
• Silver – Standby or re-assignable elements
• Gold/Platinum – Full duplication with load-balancing or failover, no single points of failure (SPoFs)

Performance assurances
• Bronze – Component inputs
• Silver – Component outputs
• Gold – Degrees of contention experienced
• Platinum – User applications experience

Scope of assurances
• Bronze – Components
• Silver – Sub-systems
• Gold – Full systems
• Platinum – End-to-end, including all dependent elements

Security in-built
• Bronze – Basic
• Silver – Enterprise
• Gold – Financial
• Platinum – Military

Commitment measurement periods
• Bronze – Averaged over weeks or months
• Silver – Daily
• Gold – Hourly
• Platinum – Real time, continuous

For Service Assurance, we have identified the following attributes:

• Security – The extent of the solution's protection (e.g., encryption, tripwires, virtual local area network or VLAN, port filters, etc.)

• Availability – The degree of uptime for the solution (e.g., taking into account contention probabilities)

• Performance – The extent to which the solution is assured to deliver a level of output

• Elasticity – The configurability and expandability of the solution

• Manageability – The degree of automation and control available for managing the solution

• Recoverability – The solution’s recovery point and recovery time objectives

• Client Service Level Agreement (SLA) Priority – The service contention design for handling peak demand
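A minimal sketch (ours) of how a Cloud-Subscriber might express a required assurance profile over these attributes and test an offer against it; the enumeration and example levels are illustrative, not mandated values.

```python
# Illustrative sketch: an assurance profile maps each attribute above to one of
# the four standard levels; an offer meets a requirement if every attribute is
# at or above the required level. The example levels are arbitrary.
from enum import Enum

class Level(Enum):
    BRONZE = 1
    SILVER = 2
    GOLD = 3
    PLATINUM = 4

required = {"Security": Level.SILVER, "Availability": Level.GOLD,
            "Performance": Level.SILVER, "Elasticity": Level.BRONZE,
            "Manageability": Level.SILVER, "Recoverability": Level.GOLD,
            "Client SLA Priority": Level.SILVER}

offered = dict(required, Availability=Level.PLATINUM, Recoverability=Level.SILVER)

def meets(offered_profile: dict, required_profile: dict) -> bool:
    return all(offered_profile[a].value >= required_profile[a].value for a in required_profile)

print(meets(offered, required))   # False: Recoverability falls short of the required Gold
```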

While not intended to be exhaustive, the following table indicates how these assurance attributes can be defined. The figures given for aspects such as availability percentages and recovery times are illustrative, and may need to be further defined for particular types of solution or component.


ATTRIBUTE SLA LEVEL DESCRIPTION

Security

Bronze As described in the Security Provider Assurance Usage Model.

Silver As described in the Security Provider Assurance Usage Model.

Gold As described in the Security Provider Assurance Usage Model.

Platinum As described in the Security Provider Assurance Usage Model.

Availability

Bronze Reasonable efforts to attain 99% availability for the IaaS (up to but not including the Cloud-Subscriber's components). Note that the Cloud-Provider cannot be penalized for any failure of the OS or application in the guest VM, except where the failure is clearly the fault of the hypervisor or underlying hardware solution.

Silver Provisions made to attain 99.9% availability, including increased focus on preventing impact from contention risks.

Gold Specifically demonstrable additional measures needed to achieve and sustain 99.9% availability and demonstrating resilience to reasonably anticipated fault conditions. Service penalties should apply at this level.

Platinum Highest possible focus on uptime to achieve 99.99% availability, with the expectation of significantly increased service penalties (beyond Gold tier) if not achieved.

Performance

Bronze Systems are installed and configured in a correct manner, such that they should deliver the levels of throughput specified by their manufacturer.

Silver Provisions made to tune the components to the environment in which they run. Performance is monitored and significant deviations are remedied.

Gold Specifically demonstrable additional measures applied to achieve and sustain acceptable throughput. Performance is monitored. Service penalties should apply at this level.

Platinum Highest possible focus on performance to achieve acceptable end-user experience, which may itself be monitored. Expectation of significantly increased service penalties (beyond Gold tier) if not achieved.

Elasticity

Bronze Reasonable efforts to provide ability to grow by 10% above current usage within 24 hours, or 25% within a month.

Silver Provisions made to provide ability to grow by 10% within 2 hours, 25% within 24 hours, 100% within a month.

Gold Significant additional demonstrable steps taken to be able to respond very quickly to increases or decreases in need: 25% within 2 hours, 50% within 24 hours, 300% within a month. Penalties to be applied if this capacity is not available at these scale points when requested.

Platinum Highest capability possible to flex up and down, by 100% within 2 hours, 1000% within a month, with major penalties if not available at any time as needed.


Manageability

Bronze Simple manual user interface for orchestration, monitoring, billing.

Silver Web service interface for all functions. Integration of workflows for incident, problem, change, orchestration, billing.

Gold Real-time interface to a full range of information on the service, including performance, configuration, and transaction rates. Availability goal of 99.99% MTBF on the management interface.

Platinum Real-time, highly granular management interface capable of the fullest range of service interaction, from policy down to millisecond-level probes. 99.99% MTBF goal on the management interface, with penalties for breach.

Recoverability

Bronze Reasonable efforts to recover the IaaS service (e.g., access to boot volumes and ability to reboot Cloud-Subscriber's virtual environment again) with up to 24 hours of data loss (e.g., loss of boot disk updates due to no intra-day backup) and up to 24 hours of recovery time. No site disaster recovery (DR). Note that the focus is on recoverability of the underlying service, after which Cloud-Subscriber still has their own recovery to complete.

Silver Provisions made to recover within 4 hours, with up to 24 hours of data loss. (No DR for full site disaster.)

Gold Enhanced recovery capability to recover within 2 hours for hardware failure, 24 hours for site failure, and no more than 4 hours of data loss.

Platinum Highest recovery focus to provide as close to continuous nonstop availability as possible, aiming for < 1 hour recovery and < 15 minutes data loss even in the event of a full site failure.

Client SLA Priority

Bronze Reasonable efforts to respond to client needs, but lowest priority.

Silver Provisions made to provide a good service and be responsive to client, attentive to problems.

Gold Enhanced service level, providing priority attention for all incidents, service interruptions, phone calls. Service penalties for failure to deliver priority service (details to be specified).

Platinum Highest possible service level, with immediate access to the highest level of service response from named (pre-agreed with Cloud-Subscriber) service contacts. First priority access to any resources (expectation of zero contention with anyone else) and dedicated fault teams on any incident. Dedicated service assurance team with named representatives working every month with the Cloud-Provider to assure the highest level of service reporting and interfacing. Major service penalties for breach of these service expectations.
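As a worked illustration (ours, not from the usage model), the availability percentages used in the table above translate into monthly downtime budgets roughly as follows, assuming a 30-day month.

```python
# The illustrative availability percentages expressed as downtime budgets.
MINUTES_PER_MONTH = 30 * 24 * 60          # 43,200 minutes in a 30-day month

for tier, availability in [("Bronze", 0.99), ("Silver", 0.999),
                           ("Gold", 0.999), ("Platinum", 0.9999)]:
    allowed_downtime = MINUTES_PER_MONTH * (1 - availability)
    print(f"{tier}: {availability:.2%} -> up to {allowed_downtime:.0f} minutes of downtime per month")
# Bronze 99.00% -> ~432 min; Silver and Gold 99.90% -> ~43 min; Platinum 99.99% -> ~4 min
```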

It is not the Open Data Center Alliance’s desire to mandate a long list of measures. There is a point at which the number of measures begins to make it more complex and difficult, rather than easier, to compare the cloud services of a number of Cloud-Providers.

It should be understood by all parties that, while a Cloud-Provider can predict and quantify how their infrastructure will behave, they cannot predict how the Cloud-Subscriber’s workload will perform in that environment. That performance will be dependent on a number of factors, of which even the Cloud-Subscriber may not be aware. Rather, the SUoM gives both sides a common currency which they can use to work towards such predictions.


Function Map

The function map shows the importance of the SUoM at all three stages of the Cloud-Subscriber/Cloud-Provider relationship:

• Prior to Use (Selection) – SUoM used to provide a realistic indication of the capacity and services offered, as described in the Cloud-Provider’s Service Catalog.

• During Use – SUoM used for assurance or benchmark that the capacity delivered is the capacity promised. The SUoM can also be used to determine whether current service will suffice in the event that major changes (e.g., a new OS) are planned.

• Post Use – SUoM used to report actual capacities used over a period of time to aid billing or future planning.

A big advantage of using the SUoM through all these stages is that it creates a closed loop that in practice can lead iteratively to better predictions and results.


Methodology

SUoM can be considered at three conceptual levels:

• At the lowest infrastructure level to define components

• As benchmarks to predict the outputs or throughputs of an infrastructure

• At the application level to predict or measure performance

The lower levels are simpler and more deterministic, while the higher levels are more useful to the Cloud-Subscribers.


The Cloud-Provider should be able to indicate service or infrastructure measures within a finely predictable bandwidth, certainly well within ±10%. From this, the Cloud-Subscriber should ultimately be able to predict the performance of their systems with increasing accuracy via an iterative approach. There is scope for the development of more accurate, and thus more useful, benchmarking and performance prediction capabilities.
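A hedged sketch of such a check follows; the function and figures are illustrative, not part of the usage model.

```python
# Checking a measured capacity against the catalog figure within the ±10%
# bandwidth suggested above. Values are invented for illustration.
def within_tolerance(indicated: float, measured: float, tolerance: float = 0.10) -> bool:
    """True if the measured value is within ±tolerance of the indicated value."""
    return abs(measured - indicated) <= tolerance * indicated

print(within_tolerance(indicated=2000, measured=1850))   # True  (7.5% below the catalog figure)
print(within_tolerance(indicated=2000, measured=1750))   # False (12.5% below the catalog figure)
```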

Amazon, as a major Cloud-Provider, is treated by many within the cloud services business as a standard for comparison in terms of delivered capacities and prices, even though it does not provide a benchmark as such. That does not, however, mean that its proprietary definitions should be adopted as a standard for all players, but rather that a structure is needed which can usefully be applied across all vendors.

Regarding system performance and transactional throughput, there are a number of potential sources for pre-existing measures:

• TPC (http://www.tpc.org/information/benchmarks.asp)

• SPEC (http://www.spec.org/)

• RPE2 (http://www.ideasinternational.com/IT-Buyers/Server-Performance)

• SAPS (www.sap.com/solutions/benchmark/measuring/index.epx)

All these benchmarks should at least be considered as a starting point, and their relative strengths and weaknesses determined, in order to learn what works. There are also organizations (e.g., CloudHarmony.com¹) which measure the actual performance of various Cloud-Providers’ environments to provide an indication of their performance in terms of speed and reliability. However, one should bear in mind that such measures are not an accurate indication of how a particular Cloud-Subscriber’s systems will behave and perform in the tested environment. For those purposes, it may be best for a Cloud-Subscriber to develop its own benchmark, using the application in question.

It can be expected that this is a new field where initial progress can best be made by “starting simple.” Yet, over time, further complexity and sophistication is bound to be needed. Our proposed courses of action for the immediate future are:

• Supply Side: Develop a structure and set of units whereby all suppliers can indicate the capacities of their infrastructure in terms that are: a) consistent within their own environment, and b) at least acceptably comparable to other vendors’ environments.

• Demand Side: Develop methods for being able to predict beforehand and analyze afterwards the performance of any given system within such an environment, using the above measures.

Further developments that will be needed in the future include measures to indicate the time-to-deployment of a new environment in terms of its scale (using SUoM), qualitative requirements, and common definitions of the starting and ending points.

¹ See Cloud Harmony at http://cloudharmony.com/


Use Case

Goals:

1. To ensure that Cloud-Subscribers have the ability to predict performance, as well as track and record actual usage.

2. To ensure that Cloud-Providers have the technical capability to give a deterministic indication of infrastructure capacities and track such capacities in an auditable manner.

Considerations:

Assumes the Service Catalog, as documented elsewhere by the Open Data Center Alliance.

Sample Usage:

Cloud-Subscriber wishes to run a newly developed system in a cloud environment. The commissioning of a suitable environment requires estimates of necessary processing, storage and I/O capacities. The system under development is run for a period in a quality assurance/acceptance environment with a known number of users. The environment used is quantified, using SUoM, and the performance acceptability gauged. For the first production deployment, the environment under consideration is also quantified using the same SUoM, factored by the number of users expected. Once deployed, the environment’s performance and the actual number of users are monitored and adjusted accordingly, again using SUoM.
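A minimal sketch (ours) of the sizing arithmetic implied by this sample usage; the resource names, QA figures, user counts and headroom factor are all illustrative assumptions.

```python
# Measure resource use per user in the QA/acceptance environment, then scale
# by the expected production user count, plus an assumed planning margin.
qa_users = 50
qa_measured = {"compute_specvirt": 120.0, "storage_tb": 0.4, "network_gbps": 0.2}

expected_users = 800
headroom = 1.2        # assumed planning margin, not an ODCA figure

estimate = {unit: value / qa_users * expected_users * headroom
            for unit, value in qa_measured.items()}
# e.g. compute_specvirt: 120 / 50 * 800 * 1.2 = 2304
```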

Success Scenario 1 (pre-usage):

1. Cloud-Provider shall be able to define infrastructure capacities to allow accurate estimates of potential performance associated with all offered services.

2. Cloud-Provider is able to ensure that Cloud-Subscriber's requirements for capacities are met over the mid/long term.

3. Cloud-Subscriber is notified, as part of the Service Catalog, of the levels of capacity to be provided.

Failure Conditions 1:

Cloud-Provider is unable to identify a capacity and deliver a figure for each use of the infrastructure.

Success Scenario 2 (actual, instrumented):

Cloud-Provider is able to monitor actual capacities used. Cloud-Subscriber is notified after the fact, within acceptable bounds, of actual units used.

Failure Conditions 2:

Cloud-Provider is unable to identify the volume and rate of usage arising from each use of the infrastructure.

Failure Conditions 3:

Cloud-Provider reports the volume and rate of usage arising from each use of the infrastructure, but it is significantly outside the previously indicated levels.

Failure Handling:

For all failure conditions, Cloud-Provider should notify Cloud-Subscriber of the inability to provide benchmark figures and/or of the actual figures produced.

Requirements:

Existence and use of a Service Catalog.


Benchmark Suitability Discussion

Of the SPEC standards, SPECvirt_sc2010 is perhaps the closest we have to a cloud metric. SPECvirt_sc2010 combines modified versions of three previous standards: SPECweb2005, SPECjAppServer2004, and SPECmail2008. The client-side SPECvirt_sc2010 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines (VMs), called "tiles," until overall throughput reaches a peak (wording taken from measurements of a physical machine²). All VMs must continue to meet required quality of service (QoS) criteria.

In general, vendors who have submitted benchmarks for these tests have used well-documented open source stacks, generally based around Red Hat Linux and the Apache stack; HP's test platform for the ProLiant DL380 G7 server is a good example, although vendors are allowed to use the stack of their choice.

A key differentiator for this metric is that the test harness fires requests from a process external to the stack processing the requests. Compare this with many benchmarks where the stack and inputs and outputs are all contained within one VM. While the test harness could be bundled with the stack in the cloud VM to be tested, keeping it external allows "real world" tests to be conducted across the cloud data center or from client sites to a cloud VM. This brings some degree of user experience testing into the overall picture, such as results for network and data center latency.

RPE2 is another composite benchmark. It consists of several industry standards and can be used to provide a standard, objective measure of compute power across all hardware, irrespective of chip architecture. It fully supports virtualized environments. The six benchmarks it includes are listed below. (More information on them and RPE2 is available in this white paper: http://www.ideasinternational.com/PDFs/Server-Relative-Performance-Estimates-RPE2-Whitepa.aspx).

TPC-C – The Transaction Processing Performance Council's online transaction processing (OLTP) benchmark. This simulates the transactions of a complete order entry environment where a population of terminal operators executes transactions against a database for simulated order fulfillment from a set of warehouses. See Overview of TPC Benchmark C: The Order Entry Benchmark for full details.

TPC-H – The Transaction Processing Performance Council's ad hoc decision support benchmark. See TPC-H for details.

SAP SD 2-Tier – This is an SAP two-tier sales and distribution order processing benchmark. See Benchmark Sales and Distribution on SAP's web site for details.

SPECjbb – The Standard Performance Evaluation Corporation (SPEC) benchmark for Java server performance. SPECjbb2005 evaluates the performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The benchmark exercises the implementations of the Java Virtual Machine (JVM), Just-In-Time (JIT) compiler, garbage collection, threads and some aspects of the operating system. It also measures the performance of CPUs, caches, the memory hierarchy and the scalability of shared memory processors (SMPs). See SPECjbb2005 for full details.

CINT2006 and CFP2006 – The integer and floating point components of SPEC CPU2006, the SPEC benchmark for CPU performance. Each of these tests has over 10 sub-tests in a range of programming languages. See SPEC CPU2006, CINT2006 and CFP2006 for full details.

RPE2 is a comprehensive benchmark. However, it is also a complex benchmark to implement with many tests requiring hours of runtime to provide meaningful comparable results. Some debate is needed to decide if this is the appropriate benchmark for comparing Cloud-Providers.

² See http://www.spec.org/virt_sc2010


Summary of Industry Actions Required

In the interest of giving guidance on how to create and deploy solutions that are open, multi-vendor and interoperable, we have identified specific areas where the Alliance believes there should be open specifications, formal or de facto standards, or common IP-free implementations. Where the Alliance has a specific recommendation on the specification, standard or open implementation, it is called out in this usage model. In other cases, we will be working with the industry to evaluate and recommend specifications in future releases of this document.

The following are industry actions required to refine this usage model:

1. Create a list of SUoM to be defined and calibrated (Alliance).

2. Align to proposed Service Catalog descriptions and units (Alliance).

3. Identify applicable units per component configuration (hardware vendor and/or Cloud-Provider).

4. Integrate within published catalogs/APIs (Cloud-Provider).

5. Incorporate into methods for reporting current usage and billing (Cloud-Provider).

6. Benchmark to calibrate units (Cloud-Subscriber and/or third parties).

7. Develop more sophisticated benchmark and application performance prediction capabilities (industry).

8. Provide feedback to Alliance on usage experiences (Cloud-Subscriber and Cloud-Provider).

9. Continually review usage and experiences in practice (Cloud-Subscriber via Alliance).
