EANTC's Independent Test of Cisco's CloudVerse Architecture

Part 1: Cloud Data Center Infrastructure Including Business Applications

Photos taken by Kim Ringeisen (Flying Monkey Photography)


INTRODUCTION FROM LIGHT READING

In December, Cisco Systems Inc. (Nasdaq: CSCO) introduced CloudVerse, its approach to building and managing cloud-based networks. In his story about the announcement, Light Reading's Craig Matsumoto listed the key components of CloudVerse:

• Cisco Intelligent Automation for the Cloud (CIAC), tools for the provisioning of services. CIAC is part of a wider framework called Cisco Unified Management, which includes automation and orchestration software, including technology from two Cisco acquisitions: newScale (service catalogs and service-provisioning portals) and Tidal Software (tools for monitoring application performance to detect problems ahead of time).

• Cisco Network Services Manager, which handles virtualization of the data center's networking elements (routers, switches, load balancers, firewalls, etc.). It can set up provisioning and policy universally, so that all these pieces don't have to be configured individually.

• Cloud-to-Cloud Connect, a way of letting data centers connect to the cloud more dynamically. A key ingredient here is Cisco's Network Positioning System (NPS) being added to the ASR 9000 and 1000 routers. NPS, originally introduced on the CRS-3 core router, searches the network/cloud for alternative resources when capacity limits get reached.

Pre-tested applications: Cisco has about 50 of them ready to add to CloudVerse, the company said at the time. These are meant to help kick-start a carrier or enterprise's cloud offerings. The examples Cisco intends to emphasize on its webcast deal with enterprise collaboration. (See video links in the online version.)

With the scene and the context suitably set, we'll now include some terms we'll be using in this report below for your edification, and then we'll hand the baton to our testing partner, European Advanced Networking Test Center AG (EANTC), to explain its tests of Cisco's CloudVerse.

TABLE 1. Acronyms Used in This Report

ACE: Application Control Engine
BMC CLM: BMC Cloud Lifecycle Management
CGSE: Carrier-Grade Services Engine
FC: Fibre Channel
FCoE: Fibre Channel over Ethernet
HCS: Hosted Collaboration Solution
Multi-tenancy: Multiple users accessing virtual servers
SAN: Storage Area Network
SLA: Service Level Agreement
SLB: Server Load Balancer
SDU: Systems Development Unit
UCS: Unified Computing System
UF: Unified Fabric
VM: Virtual Machine
VNMC: Virtual Network Management Center
VMDC: Virtualized Multi-Tenant Data Center (Cisco's cloud architecture)
VM-FEX: Virtual Machine Fabric Extender
VPC: Virtual Port Channel
VSG: Virtualized Security Gateway
VSM: Cisco Nexus 1000V Virtual Supervisor Module

TABLE OF CONTENTS

Introduction from Light Reading
Cisco's Unified Data Center
Test Methodology
BMC Cloud Lifecycle Management Integration
Multi-Tenancy Isolation
FabricPath
Tiered Cloud Services
Unified Fabric (UF) – UCS Manager
Virtual Machine Fabric Extender Performance
Virtual Security Gateway
Locator/ID Separation Protocol (LISP)
Provisioning: Cisco Network Services Manager
Enterprise Applications: Siebel CRM
Cisco's Hosted Collaboration Solution
Conclusion: Unified Data Center Test

CISCO'S UNIFIED DATA CENTER

When Light Reading asked us to conduct a suite of tests for Cisco's converged data center we were far from surprised -- these days clouds are everywhere, and Cisco has all the components one would need to offer cloud services.

Cisco sees itself as a one-stop shop for every service provider interested in rolling out cloud services or upgrading such services. The company offers both networking infrastructure elements and the actual components of the data center.

In addition to the components, we knew from our work with service providers that managing cloud services and data center components can come at a high cost. Cisco's answer is a comprehensive system. The company has worked hard to merge the various server and network elements that typically exist in a data center in order to present a unified system to the market.


One result of this effort is Cisco's Unified Computing System (UCS). This article documents our attempt to quantify and verify some of the latest UCS features and capabilities. Of course we cannot forget about the supporting network infrastructure, which has undergone changes of its own; we look at data center infrastructure in later articles. Some believe that the process of unification Cisco describes means Fibre Channel over Ethernet, but to Cisco this is a very small piece of the story, so much so that we didn't even test much in that area.

Intriguing, but we had questions: What do we feel is important? For over six months we brainstormed internally and discussed with Cisco before spending yet another month pre-staging and conducting the testing in Morrisville, N.C. We set out to answer the following questions: How does the data center infrastructure scale? Which services and applications are cloud ready? Is the infrastructure ready to be migrated to IPv6? Were all the key pieces -- security, virtualization, scale, multi-tenancy, prioritization, performance, server, software and network components -- there? As we broke these questions down into more specific tests of particular components, we also built an understanding of how Cisco answers these questions and how we could put them to the test. We hope you find the answers you're looking for as well.

About EANTC

The European Advanced Networking Test Center (EANTC) is an independent test lab founded in 1991 and based in Berlin, Germany, that conducts vendor-neutral proof-of-concept and acceptance tests for service providers, governments and large enterprises. EANTC has been testing MPLS routers since the early 2000s for online publications, interoperability events and service providers.

EANTC's role in this program was to define the test topics in detail, communicate with Cisco, coordinate with the test equipment vendor (Ixia), and conduct the tests at the vendors' locations. EANTC engineers then extensively documented the results. Cisco submitted its products to a rigorous test in a contractually defined, controlled environment. For this independent test, EANTC reported exclusively to Light Reading. Cisco did not review the individual reports before their release. Cisco had a right to veto publication of the test results as a whole, but not to veto individual test cases.

-- Carsten Rossenhövel is Managing Director of the European Advanced Networking Test Center AG (EANTC), an independent test lab in Berlin. EANTC offers vendor-neutral network test services for manufacturers, service providers, governments and large enterprises. Carsten heads EANTC's manufacturer testing and certification group and interoperability test events. He has over 20 years of experience in data networks and testing. Jonathan Morin, EANTC, managed the project, worked with vendors and co-authored the article.

TEST METHODOLOGY

Testing cloud solutions is complex for several reasons. Cloud testing includes infrastructure tests that can be evaluated using standard network test equipment. But cloud testing also includes the virtual server space, which standard testing tools cannot easily explore, not to mention the added complexity of shared memory, CPU, network and storage resources. Luckily, Ixia (Nasdaq: XXIA) was able to support both types of tests, offering hardware to test the infrastructure, virtual Ixia tools to test within the virtual space, and configuration assistance to put it all together. Altogether we used two XM12 chassis and one XM2 with the modules listed below:

• Xcellon-Ultra NP Application Traffic Testing Load Module: Used for Layer 4-7 testing with 10-Gigabit Ethernet interfaces, such as for our Videoscape or PCRF tests (to be seen in an upcoming article)

• Xcellon-Flex Accelerated Performance and Full Emulation Load Modules: Used for Layer 2 to 3 testing with 10-Gigabit Ethernet interfaces, such as for our FabricPath test

• LSM1000XMVDC16-01: Used for Layer 2 to 3 testing that required Gigabit Ethernet interfaces

• Xcellon-Ultra XT/XTS Application Traffic Testing Appliances

We made use of both IxNetwork software, for tests that required Ethernet/IP (Layer 2/3) traffic, and IxLoad, when stateful traffic like HTTP, streaming video or emulated mobile traffic was required. Here is a photo of the XM12s:

[Photo: Ixia XM12 chassis]


[Figure: Physical test bed topology -- two cloud data centers and an IP/MPLS core built from Nexus 7009/7010/7018, Nexus 5548, Nexus 2248TP-E and Nexus 1000v switches, UCS 6140 fabric interconnects and UCS 5100 blade chassis, MDS 9506/9513 SAN directors with EMC Symmetrix VMAX storage, ASR 9010, ASR 1006 and ASR 5000 routers, CRS-1 (with CGSE) and CRS-3 core routers, Catalyst 6500 switches and ASA 5585-X firewalls, connected over Gigabit Ethernet, 10-Gigabit Ethernet and Fibre Channel links to Ixia test hardware, Ixia IxNetwork/IxLoad VMs, Ixia ImpairNet and emulated servers, users, video clients and mobile core elements]


For the virtual tools, we made use of both Ixia IxLoad VM and Ixia IxNetwork VM, for stateful and stateless traffic, respectively. Finally, you will also see that in some cases we looked at Cisco's applications. In these cases we had to evaluate Cisco applications that did not allow us to use an Ixia tool, and thus we had to come up with new methodologies.

BMC CLOUD LIFECYCLE MANAGEMENT INTEGRATION

EXECUTIVE SUMMARY: BMC Cloud Lifecycle Management successfully provisioned new tenants and their network container services throughout the data center, as well as their respective virtual machines (VMs), all from a single user interface.

Deliberations about new investments, for network operators and especially for cloud providers, typically focus more on operational cost than capital expenditure. The questions that come up include: How will a new system be operated, and will this ultimately mean an increase or decrease in operational efforts? Will administration of the systems become a complex task once resources are shared within a virtual space?

Given the scalability and cost effectiveness that service providers expect from cloud service platforms, automated service creation (provisioning) is a key aspect. Provisioning is literally the first activity of a network operator when a service has been commissioned. Thus it was at the top of our list of things to evaluate.

Cisco explained that it partners with BMC Software Inc. (NYSE: BMC) to provide provisioning software, with the goal of easing the pain for the administrator and reducing the ramp-up time for new cloud tenants and services.

BMC's Cloud Lifecycle Management (CLM) promises to provision tenants' (cloud customers') data center services and virtual machines across the data center, all through a single interface. As an umbrella system, CLM makes use of other tools to manage individual elements -- BMC BladeLogic Network Automation (data center network provisioning) and BMC BladeLogic Server Automation (VM and application provisioning, which interfaces directly with VMware's vCenter). Our interest was to put this solution to use in the lab, and to take a peek under the hood to see how we could use it to provision services, providing readers with an idea of the administrator's experience.

In a cloud services data center, even management applications such as BMC CLM can run on virtual machines. We sat down with Cisco's technical marketing engineers and went through the two tasks at hand -- tenant provisioning and VM provisioning. We started by provisioning a single tenant/VM pair, then moved to bulk amounts. To provision a new tenant, CLM in fact opened telnet sessions to configure each component via command-line interface (CLI), just as an administrator would.

The Cisco routers, switches and firewalls we configured were two ASR 9010s, four Nexus 7000s, two Catalyst 6500s, an ACE 30 and Firewall Services Module (FWSM), the UCS system and the Nexus 1000v virtual switch. BMC CLM added all the necessary VLANs and IP subnets, all taken from a pool that had to be configured initially. The Cloud Lifecycle Manager automatically logged in to the Cisco equipment via the telnet protocol, using the command-line interfaces (CLIs). Once this was completed, we compared the configurations before and after, and sent some quick pings to see that they were active.

Then we moved on to provision a virtual machine for this tenant. BMC CLM created a VM from its predefined template, connected it to the appropriate tenant, and turned it on, all from our one user interaction. We were able to open a remote desktop session through the network -- it worked. Finally, we repeated the whole process but enabled bulk jobs. It took some time to get started after we had some failed jobs due to other (human) administrators accidentally stepping on CLM's toes -- an inherent problem with multi-user configuration management.

Figure 2: BMC Cloud LifeCycle Management Tenant View


Cisco confirmed that it would be possible to configure how CLI sessions are maintained or interrupted, but the company spared the effort for this test. Once this was cleared up, we were able to successfully provision five tenants and then 10 VMs for each tenant. The single tenant took less than 25 minutes to create -- all through a single job request from the administrator. The bulk tenant job for five tenants took just over an hour, and the bulk VM job just under an hour -- the exact time taken of course depends on management hardware variables like compute power. The BMC CLM tool saved us, as admins, from opening CLI sessions to the many devices in the infrastructure, having to copy each and every VM, and coordinating the whole process.
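To make the workflow above concrete, here is a minimal, hypothetical sketch of pool-based provisioning of the kind BMC CLM performs over telnet/CLI sessions. The VLAN and subnet pools, the device names and the CLI lines are illustrative assumptions, not the exact values or commands used in the test bed.

```python
from ipaddress import ip_network

# Illustrative resource pools an orchestrator draws from for each new tenant.
VLAN_POOL = iter(range(100, 200))
SUBNET_POOL = ip_network("10.10.0.0/16").subnets(new_prefix=24)

def tenant_network_config(tenant: str) -> dict:
    """Allocate a VLAN and an IP subnet, then render the CLI lines a provisioning
    tool would push to each device over its command-line interface."""
    vlan = next(VLAN_POOL)
    subnet = next(SUBNET_POOL)
    gateway = next(subnet.hosts())
    return {
        "aggregation-switch": [f"vlan {vlan}", f"  name {tenant}"],
        "data-center-edge":   [f"interface BVI{vlan}",
                               f"  ipv4 address {gateway} {subnet.netmask}"],
    }

print(tenant_network_config("tenant-01"))
```

An actual run would open a CLI session per device, push lines like these, and then verify the result -- much like the before/after configuration comparison and quick ping checks described above.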

The tool did the job we expected it to do. In addition, we witnessed the "tenant portal" -- the part of BMC CLM the customer would see. We provisioned some VMs, and even shut some down administratively while they were being used by other users in the lab. The "others" didn't appreciate that, but it worked.

MULTI-TENANCY ISOLATION

EXECUTIVE SUMMARY: All tenants were completely isolated from each other on virtual servers, throughout the data center network, to core network VPNs.

One of cloud service customers' main concerns is security in terms of isolation: If I can access this application, virtual machine or service from my network, who else can?

Firewalls are certainly a big part of the security story in the cloud, but at least for enterprise users, they are not sufficient. The service must be completely isolated from other services -- much like a VPN. In fact, VPNs, among other technologies such as VLANs and virtual switching instances, are used in Cisco's Virtualized Multi-Tenant Data Center (VMDC) reference architecture, which was leveraged for this test program. Since there are several ways to design the network, with countless combinations of how the various systems are configured, the architecture helped both to provide a reference for the test program and to define conventions -- for example, how Gold tenants get firewall services, and how Gold and Silver tenants get load-balancing services.

To verify tenant isolation we pulled out a legacy test methodology for MPLS VPNs. Using Ixia (Nasdaq: XXIA) virtual tools, we deployed 54 Ixia IxNetwork VMs -- one in each of the fifty-four tenants -- and attempted to transmit traffic in a full mesh. We expected 100 percent loss. In parallel, we sent traffic between each tenant and the outside core network, which we expected to work, resembling acceptable use. For this traffic, we defined a Cloud Traffic Profile with Layer 3 traffic resembling a series of realistically emulated applications using Ixia hardware. In addition, we set up 24 more Ixia IxNetwork VMs to emulate normal traffic within the data center (so-called "east-west" traffic), sent in a full mesh pattern at 500 Mbit/s per VM.

[Figure 3: BMC CLM Logical Test Setup -- BMC CLM provisioning Gold, Silver and Bronze tenants and their VMs across two cloud data centers connected over the IP/MPLS core to a customer site]

[Figure 4: Logical Tenant Isolation Test Setup -- an Ixia IxNetwork VM in Tenant 1 sending traffic toward Tenants 2-55 and toward the customer site across the IP/MPLS core; inter-tenant traffic is discarded, traffic to the customer site is forwarded]


[Figure 7: FabricPath Setup -- Ixia-emulated servers and users attached to Nexus 5548 switches and a Nexus 2248TP-E fabric extender, connected over aggregated 10-Gigabit Ethernet links into a FabricPath domain of Nexus 7010 switches; the traffic generator offered 283.1 Gbit/s of server traffic and 9.7 Gbit/s of user traffic]

The 24 VMs were distributed across three UCS chassis -- each chassis configured as a single ESX cluster, eight blades per cluster, one Ixia VM per blade. The pie charts below show the traffic distribution toward the fifty-four tenants' users, which was also used for our QoS test. (See Tiered Cloud Services.)

After running all traffic configurations simultaneously for 209 seconds (each Ixia configuration was running in a separate system, and some ran longer), we correctly observed 100 percent loss on traffic between tenants and zero loss for traffic toward the customer. There was a very minor amount of loss -- 0.00014 percent -- on the traffic from the 24 Ixia IxNetwork VMs within the single tenant that we had expected to pass. The team explained that it is possible to achieve a lossless virtual environment with software switching, but with all the services we were running for this test, and the services for other tests still running in parallel, this was a very low amount of loss. Additionally, we pinged the different Gold firewalls from different tenants' VMs, and correctly received responses only for those to which we expected access.
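The pass/fail logic for this kind of isolation test is simple enough to express in a few lines. The sketch below assumes a hypothetical results table mapping (source, destination) flows to observed loss; the tenant names and loss values are invented for illustration.

```python
def isolation_violations(loss_percent: dict) -> list:
    """Flows between different tenants must show 100% loss; flows from a tenant to
    the core/customer side must show (close to) zero loss. Return any violations."""
    violations = []
    for (src, dst), loss in loss_percent.items():
        if dst == "core":
            if loss > 0.0:
                violations.append((src, dst, loss))
        elif src != dst and loss < 100.0:
            violations.append((src, dst, loss))
    return violations

# Hypothetical excerpt of a results table for two of the tenants.
results = {
    ("tenant-1", "tenant-2"): 100.0,
    ("tenant-2", "tenant-1"): 100.0,
    ("tenant-1", "core"): 0.0,
    ("tenant-2", "core"): 0.0,
}
assert isolation_violations(results) == []
```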

FABRICPATH

EXECUTIVE SUMMARY: FabricPath, using 16 x 10-Gigabit Ethernet links throughout the topology, forwarded 292.8 Gbit/s of net traffic in the data center while also providing resiliency, with sub-200-millisecond outage times, and allowing for bursty traffic.

We're seeing more and more services looking to be hosted in the cloud, while more and more tools are enabling those services to do so. The New York Times recently reported that, while the economy continues to be slow, data centers are booming and companies are reporting cloud-related growth each quarter. (See Cisco Sees 12-Fold Cloud Growth.) How will the infrastructure support all this growth? Virtualization is only one part of the story. What about the network? Will standard bridging, link aggregation and Spanning Tree do the trick?

Not really. Cisco and other interested parties are contributing to new standardized protocols like TRILL -- Transparent Interconnection of Lots of Links. In our test bed, we calculated massive traffic requirements to and from the virtual machines. Cisco configured its solution, FabricPath, which incorporates TRILL and other Cisco technology in order to scale the number of paths in the network, scale the bandwidth in the data center and lower out-of-service times in the case of a failure. Furthermore, Cisco wanted to quantify the buffering power of its latest "fabric extender," which goes hand in hand with its FabricPath architecture. We looked at each solution one at a time.

Given the scale of the test, and the fact that there was no UCS included in the setup, we used hardware-based Ixia tools running IxNetwork to emulate all hosts. The Ixia test equipment was directly connected to Nexus 5548 switches. Each of these switches had sixteen physical connections to each upstream end-of-row switch -- the Nexus 7010.

Figure 5: Emulated Downstream Traffic per Tenant

Figure 6: Total Downstream Traffic


Most traffic was transmitted from within the data center, as would normally be the case. We ultimately emulated 14,848 hosts spread across 256 VLANs behind the four Nexus 5548 switches, transmitting a total of 273.9 Gbit/s of traffic to each other in pairs, while also sending 9.2 Gbit/s of traffic toward the emulated users located outside the data center (283.1 Gbit/s in total), who in return sent back 9.7 Gbit/s to emulate the requests and uploads. This added up to a total of 292.8 Gbit/s traversing the FabricPath setup, for ten minutes, without a single lost frame.

The total FabricPath capacity per direction was 320 Gbit/s, given the number of links that were hashed across. Our traffic was unable to fill the 320 Gbit/s completely, but it was still a hefty amount of traffic. Below we have graphed the latency as well as the load distribution within the network (as reported via Cisco's CLI) to show how evenly the hashing algorithm distributed the load.

Now that we had measured the performance, what happened upon link failure? Cisco claimed we should see shorter outages compared to those experienced during failures in spanning tree networks. Measuring the out-of-service time from a failure in our scenario was much less than straightforward. The major testing problem was FabricPath's strength -- traffic distribution by hashing -- which is rather unpredictable when viewed from the outside. We created an additional traffic flow of minimal load -- one user -- at 10,000 frames per second, and tracked its associated physical path in the FabricPath domain. Once we found the link, we physically pulled it out and plugged it back in, while running traffic, three times. The link failure results are shown below. When we replaced the link, in all three cases, zero frames were lost.
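With a constant-rate probe flow, out-of-service time is simply frames lost divided by the frame rate. The sketch below uses the 10,000 frames-per-second rate from this test; the loss counts are hypothetical, chosen only to show the arithmetic.

```python
PROBE_RATE_FPS = 10_000  # constant-rate tracked flow used to detect the outage

def outage_ms(frames_lost: int, rate_fps: int = PROBE_RATE_FPS) -> float:
    """Convert frames lost on a constant-rate probe flow into out-of-service time (ms)."""
    return frames_lost / rate_fps * 1000.0

# Hypothetical loss counts for three link pulls.
print([outage_ms(lost) for lost in (1_450, 1_800, 1_120)])  # [145.0, 180.0, 112.0]
```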

Figure 8: FabricPath Latency Results

FabricPath Load Balancing from Interface Statistics (Part 1)

FabricPath Load Balancing from Interface Statistics (Part 2)


Finally, we wanted to validate one of Cisco's claims regarding its Fabric Extender, or FEX, a standard part of an installation when data centers are keeping legacy Gigabit Ethernet links. Cisco explained that the FEX is an interface card that is not located in the Nexus 5548 chassis itself, but rather in its own chassis -- in this case the Nexus 2248. This allows the card to be placed at the top of a data center rack, for example, as an extension of the end-of-row switch. This way, when more ports are needed across a long distance, operators need not invest in a new top-of-rack switch, just a new card, thus extending other top-of-rack or end-of-row switches. Although this card is designed for Gigabit Ethernet-based servers, it is likely to be used in data centers that also have 10-Gigabit Ethernet. Thus, Cisco explained, it was important to design the card with large buffers, accommodating bursty traffic coming from a 10-Gigabit Ethernet port toward a Gigabit Ethernet port.

How bursty could that traffic be in reality? We connected the appropriate Ixia test equipment as shown in the diagram. Different burst sizes were configured on the Ixia equipment until we found the largest burst size that just passed the FEX without loss. We set the inter-burst gap to a large value -- 300 milliseconds -- so we could send constant bursts without the individual bursts affecting each other. We repeated this procedure twice, once using IMIX (7:70, 4:512, 1:1500) frames, and once using 1,500-byte frames. The largest burst size that observed no loss was 28.4 MB with IMIX and 13.4 MB with 1,500-byte frames. Both tests ran for three minutes without any loss. Latency was as high as expected, since we expected the buffers to be used: from 3.0 microseconds to 98.8 milliseconds for the IMIX bursts, and from 3.2 microseconds to 204.5 milliseconds for the bursts of 1,500-byte frames.
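The search for the largest lossless burst can be automated as a simple sweep. In the sketch below, run_burst() is a hypothetical hook standing in for a tester run that fires one burst of the given size and reports frames lost; the byte values are placeholders, not the sizes used on the Ixia gear.

```python
def largest_lossless_burst(run_burst, low_bytes: int, high_bytes: int,
                           step_bytes: int = 1 << 20) -> int:
    """Sweep burst sizes and return the largest one that passed without loss.
    run_burst(size_bytes) -> frames lost for a single burst of that size."""
    best = 0
    size = low_bytes
    while size <= high_bytes:
        if run_burst(size) == 0:
            best = size
        size += step_bytes
    return best

# Example with a toy model of a buffer that absorbs up to 28 MB per burst.
fake_fex = lambda size: 0 if size <= 28 * 2**20 else size // 1500
print(largest_lossless_burst(fake_fex, 1 << 20, 64 << 20) / 2**20, "MB")  # 28.0 MB
```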

TIERED CLOUD SERVICES

EXECUTIVE SUMMARY: Application-specific quality-of-service prioritization across the network is crucial for cloud services. Cisco's ASR 9010 managed quality under congestion well, maintaining low latency for high-priority cloud service patterns while also not starving low-priority cloud customer traffic.

Delivering quality in the cloud is a task that involves many building blocks. When considering the packet network alone, quality of service (QoS) is already a broad term that raises a number of design questions. What algorithms and policies are used to prioritize traffic? What traffic characteristics need to be expected? Is latency or loss more critical? In the cloud, one would expect three tiers (Gold, Silver and Bronze) of application traffic at minimum. In order to come up with realistic traffic patterns, we made a few assumptions.

In almost all cases -- whether business, mobile or residential services -- the traffic patterns are typically asymmetric. As compute platforms, cloud services are likely to generate more traffic than they receive. In effect, there will be little congestion on the switches connected toward the servers. The other direction -- from the virtual machines to the Internet -- will surely be used more heavily. Generally, a healthy traffic mix is about one-third Gold traffic and 10 to 30 percent Silver traffic, with the remaining traffic filled by the emulators. In the short term, some cloud operators might have a large share of configured Gold (a.k.a. high-priority enterprise) traffic. Typically Gold services are underbooked and in turn over-provisioned, so this should not be a concern. It is more of an issue if there are too many lower-paying customers, who sign on easily and can become overpopulated.

Reviewing the standard network topology, we figured the congestion point would often be the ASR 9010, and the cloud traffic profile we designed -- with an overload of Bronze user traffic -- would fit just right for our use case. We only had to remove links between our data center and our core network in order to emulate the typical situation of having more bandwidth available in the data center than upstream toward the network.

Figure 9: FabricPath Failover Times

Figure 10: Traffic for Runs 4 and 5

[Figure: Tiered cloud services (QoS) test setup -- Ixia-emulated servers in the cloud data center sending Gold, Silver and Bronze traffic through the ASR 9010 toward the CRS-3 and on across the IP/MPLS core to Ixia-emulated clients at the customer site; the 10 GE link between the ASR 9010 and the CRS-3 is the congestion point, with traffic toward the customer site dropped there during runs 4 and 5 (13 Gbit/s and 34 Gbit/s marked)]


In addition to prioritizing Gold tenants over Silver and Silver over Bronze, Cisco configured a guaranteed percentage of bandwidth for each class: 70 percent for Gold, 20 percent for Silver and 9 percent for Bronze, so that no single class of customers could be completely blocked even if higher-priority traffic tried to monopolize a link.
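As a quick worked example of what those guarantees mean on a single congested uplink, the sketch below converts the configured percentages into minimum bandwidths, assuming one 10-Gigabit Ethernet link as in the final test steps.

```python
GUARANTEES = {"gold": 0.70, "silver": 0.20, "bronze": 0.09}  # configured shares

def guaranteed_gbps(link_capacity_gbps: float) -> dict:
    """Minimum bandwidth each class keeps when the link is fully congested."""
    return {tier: round(link_capacity_gbps * share, 2)
            for tier, share in GUARANTEES.items()}

print(guaranteed_gbps(10.0))  # {'gold': 7.0, 'silver': 2.0, 'bronze': 0.9}
```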

In order to evaluate the QoS prioritization efficiency, we undertook the following test steps.

Step 1: Transmit all north-bound and south-bound bidirectional traffic between the users and the data center from our data center traffic profile and verify that there is no loss, as a baseline. Check.

Step 2: Reduce the bandwidth available between the data center and the core network by removing all links between one of the two ASR 9010s and the core, and removing one link from the second ASR 9010, to create a minimal amount of congestion. Only Bronze customers were affected. Check.

Step 3: Remove another link from the remaining ASR 9010, leaving two links, or 20 Gbit/s upstream and downstream. This still left enough bandwidth for Gold and Silver traffic, and only affected Bronze customers, as expected. Check.

Step 4: Remove yet another link between the ASR 9010 and its upstream CRS-3 peer, leaving only a single 10-Gigabit Ethernet link remaining. At this point the Cisco team told us the router was experiencing fabric congestion due to the heavy overload, thus not necessarily guaranteeing the dedicated percentages explained above, which are applied after the fabric. Nevertheless, Gold and Silver were forwarded within their dedicated percentages. We observed loss only on Bronze traffic.

Step 5: We then decreased Bronze traffic to avoid fabric congestion, and increased Silver traffic to see if Bronze tenants would still get their dedicated 1 percent of traffic. They did.

We also checked to see if latency increased, particularly for Gold tenants, during the various congestion scenarios. The Gold latency remained consistent, as shown in the graph below:

In summary, the ASR 9010 passed the three-tier prioritization tests.

UNIFIED FABRIC (UF) – UCS MANAGER

EXECUTIVE SUMMARY: The UCS Manager successfully brought up new cards and replaced failed cards automatically through the unified fabric.

Earlier we underlined the importance of reducing operational costs for cloud operators. Due to the number of components in the data center, its management has historically been quite complex. In recent years several advances, such as the adoption of Fibre Channel over Ethernet (FCoE) and the virtualization of servers, have helped simplify data center management. Cisco contributed to the simplification of data center operations with the introduction of its Unified Fabric solution -- a system that aggregates server connections and reduces the amount of cabling and resources needed. Unified Fabric is one of the cornerstones of Cisco's Unified Computing System (UCS), which also includes server blades, fabrics and the focus of this test: UCS Manager.

Figure 11: Traffic Loss per Test Run

Figure 12: Latency per Service Tier Throughout the QoS Testing


Simplifying data center operations and reducing the duration of tasks are two ways to control and decrease the operational costs of running a data center. In our test we looked at both aspects. Specifically, we looked at Cisco's UCS service profiles -- a saved set of attributes associated with a blade. The service profile, or template, is configured once with definitions for VLAN pools, World Wide Names (WWNs) and MAC addresses, as well as pointers to the appropriate boot disk image sitting in the Storage Area Network (SAN). Once this profile is applied to a blade, the operator can expect the services to be brought up automatically. In the case that a blade fails, the profile can be automatically moved to a new physical blade, speeding up failure recovery. Through the UCS Manager the operator can see the status of the various blades and create the configuration.
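To illustrate what such a profile bundles together, here is a minimal, hypothetical data structure. The field names and example values are assumptions for illustration, not the actual UCS Manager object model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceProfile:
    """Illustrative subset of what a UCS service profile binds to a blade slot."""
    name: str
    vlan_ids: List[int]        # VLANs drawn from a configured pool
    mac_addresses: List[str]   # vNIC MAC addresses
    wwns: List[str]            # World Wide Names for SAN access
    boot_lun: str              # pointer to the boot image in the SAN
    slot: int                  # chassis slot the profile is associated with

profile = ServiceProfile(
    name="web-blade-01",
    vlan_ids=[110, 120],
    mac_addresses=["00:25:b5:00:00:1a"],
    wwns=["20:00:00:25:b5:00:00:1a"],
    boot_lun="san-lun-7",
    slot=3,
)
# Because the profile is tied to the slot, a replacement blade inserted into
# slot 3 inherits the same identity and boots from the same SAN image.
```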

We started this test by configuring service profiles. The service profiles are typically stored in the Cisco UCS 6100 Fabric Interconnect, which runs the embedded UCS Manager. We then set up two test scenarios. In the first test, we associated a service profile with a slot within the Cisco UCS 5000 Blade Server Chassis. Of the eight blades installed in the chassis, six were being used by other applications, one was active and running the profile we had just created, and one blade was completely shut down. We then went down to the lab and pulled our active blade out of the chassis. Before we went down, though, we initiated ping messages, both to the blade's IP and to the IP of a VM running on that blade. Our expectation was that the UCS Manager would load the same profile to the new blade without our involvement, since the profile was associated with the slot.

We came back from the lab and checked our ping messages. The replacement blade took 595 seconds to boot and respond. The UCS Manager had applied the same profile to the new blade and activated it. The ping to the VM, however, started getting responses after 101 seconds (we sent one ping per second). This was achieved thanks to a VM-level failover recovery mechanism that moved the VM to another blade altogether.


In the second test we wanted to see if we could bring up a full chassis of UCS blades through the automatic service profiles. We started by emulating the full scenario as much as possible, putting ourselves in the administrator's shoes. We went to the lab and made sure that there were no blades in our chassis, as if we were waiting for them to be shipped. We then created service profiles for each blade, requesting some randomization on the fly to ensure that they were new profiles. After we associated the service profiles with the empty blade slots, we went back down to the lab, pushed the blades in, and waited. Before pushing the blades in, we again initiated pings to the first and last of the eight blades. Since the blades' IPs were DHCP based, we had to configure the DHCP server to bind IP addresses to the service profile MAC addresses, which in turn served as another verification point that the service profiles were used.

Indeed, as we saw the blades come up through the management tool, the ping messages started receiving responses. The first of the chosen blades responded to pings after 647 seconds, and the last blade after 704 seconds. The VM we started on the eighth blade was responding to pings when just over thirteen minutes had passed since we began inserting blades. Had the UCS Manager service profiles not been available to us, we would have had to boot the cards; look through our system to see which MAC addresses, VLANs, etc., should be used; connect to each blade physically and configure these variables; ensure that they could reach the SAN and boot from it; and of course debug any issues that came up along the way. The UCS Manager reduced these steps to a simple "verify on the GUI that all blades were up and working," and worked quite smoothly from the get-go.
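The boot and recovery times above were measured simply by timing how long it took before pings were answered. A small sketch of that measurement, assuming a Linux host with the standard iputils ping command, could look like this; the host addresses are placeholders.

```python
import subprocess
import time

def seconds_until_reachable(host: str, timeout_s: int = 900) -> float:
    """Send one ICMP echo per second and return the elapsed time until the first reply."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        # Linux iputils ping: one echo request, one-second reply timeout.
        result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                                stdout=subprocess.DEVNULL)
        if result.returncode == 0:
            return time.monotonic() - start
        time.sleep(1)
    raise TimeoutError(f"{host} did not answer within {timeout_s} seconds")

# Example (placeholder addresses): time the first and the last blade coming up.
# print(seconds_until_reachable("192.0.2.11"), seconds_until_reachable("192.0.2.18"))
```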

VIRTUAL MACHINE FABRIC EXTENDER PERFORMANCE

EXECUTIVE SUMMARY: The Cisco UCS's Virtual Machine Fabric Extender (VM-FEX) offers consistently better network performance than virtual distributed switch installations.

In a standard Local Area Network, various hosts, laptops and PCs typically connect to a Layer 2 switch that aggregates the physical stations before handing them off to a router.

Figure 13: UCS Manager GUI

Figure 14: Time Taken to Respond in Minutes


Communication between two hosts on the same LAN can be done directly without reaching the router. Similarly, a virtual switching instance passes traffic either to a VM sitting in the same hardware, or pushes it out the physical port. Virtual switch operations (in products such as the Cisco Nexus 1000v or VMware's vNetwork Distributed Switch) are done in software and therefore take resources away from the virtual machines hosted on the blade, reducing the amount of resources available to the VMs.

Cisco's Nexus 1000v has a rich set of capabilities such as VLAN aggregation, forwarding policies and security. Cisco, however, found that not all VM installations require these features, and in such cases it makes sense to save the resources taken by the virtual switch and appropriate them to the customer's needs.

Cisco claimed that its Virtual Machine Fabric Extender (VM-FEX) in VMDirect mode replaces the switch and shows a significant increase in CPU performance for network-intensive applications. VM-FEX, installed on VMware ESX 5.0, enables all VM traffic to be automatically sent out on the UCS's Virtual Interface Card (VIC). This means more traffic on the physical blade network interface, but reduced CPU usage, which is typically the VM bottleneck. To verify that VM-FEX really frees up CPU resources, we ran a series of tests comparing a VM-FEX-enabled UCS blade to a Nexus 1000v virtual switch setup. Both UCS blade installations were identical in all aspects apart from the use of VM-FEX in one and the Nexus 1000v in the other.

We started comparing the performance between the two setups using Ixia's virtual tools. We installed four Ixia IxNetwork VMs on each of the two UCS blades and sent 3,333 Mbit/s of traffic from each of the first three VMs toward the fourth for 120 seconds, using 1,500-byte frames. In the VM-FEX case we recorded 2.186 percent frame loss, while in the distributed switch environment we recorded 16.19 percent frame loss. We expected loss in both cases, given the almost 10 Gbit/s load we were transmitting in the virtual space. The load was required in order to really keep the CPU busy. We deduced from this initial test result that in the VM-FEX environment fewer resources were used, which is why the frame loss we recorded was smaller than the loss recorded in the virtual distributed switch setup.

For the next test setup we installed one IxLoad VM on each of the two blades. We configured both IxLoad VMs as HTTP clients that requested traffic from a Web server Cisco configured. The IxLoad emulated clients were configured to try to use as much bandwidth as possible by requesting 10 different objects from 10 URLs repeatedly. The VM-FEX setup reached 9.87 Gbit/s while the distributed switch reached 7.78 Gbit/s. The CPU usage was also significantly higher in the virtual distributed switch setup when compared to the VM-FEX setup.
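The frame-loss percentages quoted here follow directly from the tester's transmit and receive counters. A minimal sketch of that calculation is below; the counter values are hypothetical and chosen only to reproduce the arithmetic.

```python
def frame_loss_percent(frames_sent: int, frames_received: int) -> float:
    """Frame loss expressed as a percentage of offered frames."""
    return (frames_sent - frames_received) / frames_sent * 100.0

# Hypothetical counters read from the traffic generator for one run.
sent, received = 1_000_000, 978_140
print(round(frame_loss_percent(sent, received), 3))  # 2.186
```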

Using the Ixia test tools we recorded the performance difference we expected. Cisco recommended that we perform a test that relies more heavily on the Storage Area Network (SAN). For this test, Cisco helped us to set up 10 VMs on each of the two setups and install IOmeter on each virtual machine. IOmeter was configured to read blocks from an iSCSI-based SAN as fast as it possibly could. We manually started each of the twenty IOmeter instances, and after 10 minutes we manually stopped each of them. At the end, we looked at three statistics -- Input/Output Operations per Second, Data Rate, and Average Response Time -- all three averaged across the 10 VMs in each setup. The VM-FEX performance was indeed higher for all three metrics. The data is shown in the graph below:

[Figure 15: VM-FEX Setup -- two UCS 5100 blade installations in the cloud data center, one using the UCS VIC with VM-FEX and one using a virtual distributed switch, each hosting Ixia IxNetwork VMs, Ixia IxLoad VMs, IOmeter and a video encoder VM, connected through the UCS 6100 and virtual links to an EMC Symmetrix VMAX SAN]

Figure 16: Performance Comparison Using Ixia Tools (higher values indicate better performance)

Figure 17: SAN Read Performance


We were still curious what the difference would be when someone runs a common task on a single VM. We wrote a script that uses the open source program mplayer to encode a DVD image file stored in the SAN into MPEG (for private use, of course). We wrote two versions of the script -- one performed an additional round of encoding. The results of this test run actually showed that the act of fetching blocks off the network-attached DVD was not too resource intensive, as the VM-FEX setup required only marginally less time to perform the encoding than the virtual distributed switch setup.

Perhaps the most interesting metric was not the performance, but rather the CPU utilization. How much of the CPU was used for the operation, and how much was left over for other operations and other users? As shown below, the VM-FEX setup used far less of the CPU resources in all cases. This was expected, since the CPU was skipping an entire layer of virtual switching, and this was, after all, exactly what Cisco wanted to demonstrate.

VIRTUAL SECURITY GATEWAY

EXECUTIVE SUMMARY: Cisco's Virtual Security Gateway (VSG) successfully applies policies between virtual machines, and continues to do so as VMs are migrated from hardware to hardware.

Earlier, in the Multi-Tenancy Isolation test, we discussed the undeniable need for a comprehensive solution to the security concerns associated with cloud services. The isolation of tenants answers the question of security between private customers, but what about public or shared cloud services? Typically data centers use firewalls to block all traffic except for the specific types of traffic that are allowed, for the specific servers that need them. And if those servers are virtual? Well, then you need a virtual firewall, of course.

Cisco's Virtual Security Gateway (VSG) integrates with the Nexus 1000v via a module called vPath, which is embedded into the distributed virtual switch. As Cisco explains it, most of the intelligence of the policies is off-loaded to the VSG component, which tells the vPath component how to treat traffic, thus reducing the complexity of the forwarding decision. Our interest was to a) verify that standard realistic policies would work in a realistic environment, and b) verify that the appropriate policies remain associated with the appropriate VMs even when those VMs are moved around.

We set out to emulate a typical three-tier Web server scenario. For those less familiar, websites are delivered by splitting the main functions into three parts: the "presentation" or "Web" tier that users access directly; the "application" or "logic" tier, which runs the intelligence for the service that website is delivering; and the "database" tier to store information. We started by creating three Ixia IxLoad VMs, each one emulating one of the three Web server tiers. Using Ixia hardware to emulate the outside users, we verified these policies with the appropriate traffic:

• Users could exchange HTTP traffic with the presentation tier VM, but other types of traffic would not work. We used FTP to verify that other traffic types were discarded.

• Users only had FTP access to the application tier. HTTP access was expected to fail.

• Users had no access to the database tier -- verified with both HTTP and FTP traffic.

All traffic was passed or blocked as expected. We set up an additional three IxLoad VMs to verify that the policies among VMs would work in parallel to the policies to the outside. Between the servers:

• The presentation tier VM could exchange HTTP traffic with the application tier, but not with the database tier.

• The application tier could exchange both FTP and SQL traffic with the database tier. Ixia helped us to write a script to run the SQL traffic in parallel to the IxLoad-generated traffic.

Figure 18: Video Encoding Performance (lower values indicate better performance)

Figure 19: CPU Utilization (lower values indicate better performance)

[Figure 20: Virtual Security Gateway Test Setup -- Ixia IxLoad VMs emulating presentation, application and database tiers inside both a Gold and a Silver tenant on UCS 5100 blades in the cloud data center, with HTTP, FTP and SQL traffic from an emulated customer site; traffic not matching the policies is discarded by the VSG]


Again, all traffic was blocked and forwarded appropriately, and the behavior we had observed with outside customers remained the same, also in parallel. In fact, this entire setup was duplicated -- one setup running within a Silver tenant, and one within a Gold tenant. With all these traffic flows running to all twelve emulated servers in parallel, we were pretty convinced. Cisco also showed us that there are different ways to configure the policies down to the VM -- one exemplified by the Silver tenant setup, and one by the Gold tenant setup. The Silver tenant used VM attributes (in fact the name of the VM) to match the policy to the VMs, while the Gold tenant used IP-based mapping. Yet one feature remained to be verified -- what happens when a VM is migrated? That is, what happens when a VM is moved by an administrator to different hardware? Cisco promised that the behavior would remain the same as VMs were migrated. We tested this by leaving the traffic running and performing a "vMotion" (migration) on different Ixia IxLoad VMs while they were still responding to clients (still sending traffic). Indeed, the same behavior was witnessed as described above. HTTP was blocked where expected, and forwarded where expected, as was FTP. We randomly chose to move the outside-user-facing presentation tier IxLoad VM to a new blade, and in addition we also moved the outside-user-facing application tier IxLoad VM. We were left feeling pretty secure.
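The policy set verified above is compact enough to express as a small allow-list. The sketch below is a hypothetical encoding used only to restate the rules; it is not how VSG policies are actually written.

```python
# (source zone, destination zone) -> protocols the policy lets through
ALLOWED = {
    ("outside", "presentation"): {"http"},
    ("outside", "application"): {"ftp"},
    ("outside", "database"): set(),
    ("presentation", "application"): {"http"},
    ("presentation", "database"): set(),
    ("application", "database"): {"ftp", "sql"},
}

def permitted(src: str, dst: str, protocol: str) -> bool:
    """True if the policy matrix allows this flow; anything unlisted is denied."""
    return protocol in ALLOWED.get((src, dst), set())

assert permitted("outside", "presentation", "http")
assert not permitted("outside", "presentation", "ftp")
assert not permitted("outside", "database", "http")
assert permitted("application", "database", "sql")
```

Because the rules follow the VM (by name in the Silver tenant, by IP in the Gold tenant), the same matrix keeps applying after a vMotion, which is exactly what the migration test confirmed.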

LOCATOR/ID SEPARATION PROTOCOL (LISP)

EXECUTIVE SUMMARY: Using the Nexus 7010 and ASR 1002, virtual machines were successfully and seamlessly migrated from one data center to another without the need for IP reconfiguration.

The folks at Cisco have supported the development of the Locator/ID Separation Protocol (LISP) in the IETF for some time now. Sometimes referred to as a protocol, and sometimes an architecture, LISP is a mechanism to optimize network flows by managing address families. LISP was originally devised to abstract network areas in order to "divide and conquer" scale, but ended up having the consequence of enabling mobility. The latter turns out to be a pretty useful tool for data centers -- this is what we tested.

In a nutshell, using an additional level of mapping, LISP abstracts exceptions in IP routes to avoid long routing tables and reconfiguration of routers. Thus, if a virtual machine moves from one data center to another, it does not have to change its IP address, and the routers in the new data center do not have to be configured for the VM's IP subnet. The LISP-aware nodes dynamically learn about this new VM with its misfit IP address. Users who cached that VM's IP address do not have to relearn a new IP; their services are transparent to the location change, at least from an IP addressing perspective. Without LISP, the administrator would have to change the VM's IP address, affecting customers much more.

We started by establishing a simple website, hosted on a virtual machine in our "Data Center 1." We connected to that website via a laptop host connected to an ASR 1002, which was positioned to serve as that host's Customer Edge (CE) -- LISP was enabled here. LISP was also enabled on the Nexus 7010s within each of the data centers. These systems were both configured to serve the LISP mapping database. We ensured that caching did not affect our test.

As the next step, we migrated the VM. There are different tools on the market to administer such an operation, but this was not the focus of this test; we used vCloud Director, which Cisco had installed. Once the move operation was completed, we first pinged the server from the ASR 1002, which sends five echo requests by default. The first did not receive a response, as it triggered LISP to poll and update its database. As a note, this process was perfected during the pre-staging through a software patch, thus the Nexus 7010s were effectively running engineering code. All further echo requests from that same ping operation were responded to -- just as expected. We refreshed our demo website, and it returned the contents to the client immediately. In summary, the ASR 1002 and Nexus 7010 automatically detected the new location of the route, and the user could continue to access the website from the new data center -- all without any reconfiguration.

PROVISIONING: CISCO NETWORK SERVICES MANAGER

EXECUTIVE SUMMARY: Cisco Network Services Manager automatically configured multiple tenants throughout the data center.

After we completed the BMC CLM tests, Cisco asked us to test another provisioning tool: Cisco Network Services Manager, formerly known as OverDrive Network Hypervisor.

[Figure 21: LISP Setup -- a VM hosting the test website is migrated from Data Center 1 to Data Center 2; LISP runs on the Nexus 7010s in each data center and on the ASR 1002 serving the customer-site LISP client, and HTTP traffic follows the VM to its new location]


Cisco explained that this tool, which came with the 2010 acquisition of LineSider Technologies, is focused more on the network side of the data center, whereas BMC CLM also provisions virtual machines.

Cisco walked us through the Network Services Manager Graphical User Interface (GUI) as we created our first tenant, but at the same time the company explained that the plan is for the administrator to never see this GUI in the future; the tool is meant to integrate with higher-layer orchestration tools.

From the GUI, we chose a so-called "Metamodel" with which we could associate our yet-to-be-built tenant with policies and resources, such as which IP/VLAN pools to use, and we chose which data center to create it in. The only manual interaction requiring specific information from the administrator was when the tool prompted us for the tenant's public IP address -- understandably, some may not want to have this value set dynamically. Once we had hit "Go," we compared configurations throughout the data center before and after the action. We were able to see the appropriate configuration changes on the ASR 1000s, Nexus 7000s, UCS 6100s and the Nexus 1000v.

Following that, a Cisco team member showed us a script he had written in Ruby to simulate the higher-level tool plugging in to Cisco Network Services Manager. We used the script to provision ten tenants. After less than 20 minutes the script was complete, and we again captured and observed the network configuration changes.

ENTERPRISE APPLICATIONS: SIEBEL CRM

EXECUTIVE SUMMARY: Cisco's Siebel-based call center application was successfully installed and run by multiple users within the UCS-based cloud infrastructure.

Cloud services are much more than throwing data to some off-site storage. The real value added by cloud services comes from applications. In the next weeks we'll be releasing the next sections of this report, one of which focuses on Cisco cloud applications. To give a preview of what's coming, we picked two such applications to highlight today. Cloud operators focusing on enterprise markets will maximize their revenues as soon as they can convince their customers to focus on their core business and leave the corporate IT to the cloud.

One essential application used by every modern enterprise is Customer Relationship Management (CRM) -- the software enterprises use to maintain sales, clients and customer contacts. One common CRM software package is Oracle Corp. (Nasdaq: ORCL)'s Siebel CRM.

For the test, Cisco provided us with access to the Cisco corporate Siebel CRM. Cisco programmed its own customized version of Siebel for its call center, which it calls the Sales and Marketing Call Center (SMCC) Application. Like any CRM software should, SMCC helps manage customers, the appropriate contact people and their contact information, the current state of a sales lead and other customer- and sales-related information. Cisco's flavor is particularly designed for call centers. Call center employees can bring up a contact and then open a script associated with that contact, leading them through the conversation with the customer.

Such CRM applications have typically been located within an enterprise's own IT infrastructure, but since the advent of the cloud they have begun to move off-premises. Our goal was to see how Cisco's SMCC worked when set up in a cloud environment. We verified that the presentation and application tiers of the service were running on VMs in the test bed. For the database tier, rather than copying an immense amount of data, we took advantage of a Siebel database at a remote site -- one of Cisco's main national corporate data centers in a completely different U.S. state (the tests ran on a database copy). Since we were investigating an application, there was not much to learn by pumping traffic into an infrastructure. With the help of a team of programmers proficient in HP's LoadRunner VuGen software, we created a list of automated actions that a call center employee would typically execute. These actions included:

• Searching for a customer

• Adding a customer

• Associating a customer to a sales program

Figure 22: Cisco Network Services Manager

Figure 23: SMCC Screen Shot


• Displaying and stepping through the pages of a script (normally to be read out loud by a call center employee)

• Entering a predetermined set of customer answers

• Passing the lead to a sales team

1,200 emulated call center employees were configured to repeatedly execute the list of actions for a total of 250 repetitions, five at a time. The process of stepping through the list of actions took four hours, 16 minutes and 13 seconds. At the end of the run, 200 of the transactions were evaluated as successful. The rest of the runs were mostly successful -- in the last step, in which the operator was to pass the lead to the sales team, a database request took longer than 60 seconds to complete, a delay that we had defined as too long, which is why the software considered these runs failures. Since we defined 60 seconds as a fail condition, we could not really hold this against Cisco.

While the action list was being executed, we monitored the interface statistics between our data center test bed and the interface toward Cisco's corporate site to verify that the setup as we understood it was indeed the one being used. We found no hat tricks here and were able to verify that the test bed was really used for this test.
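The pass/fail criterion was purely a response-time threshold on each step. A small sketch of that classification, with entirely hypothetical per-run step timings, looks like this:

```python
TIMEOUT_S = 60.0  # a step taking longer than this counted the whole run as failed

def classify_runs(runs: list) -> dict:
    """Each run is a list of step durations in seconds; fail if any step exceeds the timeout."""
    passed = sum(1 for steps in runs if max(steps) <= TIMEOUT_S)
    return {"passed": passed, "failed": len(runs) - passed}

# Hypothetical timings: the second run stalls on the final lead hand-off step.
example_runs = [
    [1.2, 3.4, 2.2, 4.8, 5.0],
    [1.1, 2.9, 2.4, 4.5, 61.5],
]
print(classify_runs(example_runs))  # {'passed': 1, 'failed': 1}
```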

In summary, aside from a few delays in certain actions, the Siebel installation ran smoothly on the UCS system.

CISCO’S HOSTED COLLABORATION SOLUTION

EXECUTIVE SUMMARY: Cisco's Hosted Collaboration Solution application successfully enabled phone calls and instant messages between customers while running from the cloud.

In the previous section, Enterprise Applications: Siebel CRM, we reminded ourselves that running enterprise applications effectively is the key to winning and serving enterprise cloud customers. To drive the point home we investigated another cloud-based application: Cisco's Hosted Collaboration Solution (HCS).

What does that actually mean? Hosted -- again, the idea is to run the application on Cisco's UCS platform.

Collaboration -- the key word, indicating that HCS is a solution for communications: managing phone numbers, instant messaging users and voice mail, with additional features planned. At first we were looking for a tangible, single graphical user interface, but we quickly learned that HCS is in fact a suite of applications. In our discussions we used the parallel of Microsoft Office -- not a program itself, but a suite of programs. So what could we do with it?

To explain what HCS is, Cisco started by walking us through the Cisco Unified Communications Domain Manager (CUCDM), a simplified version of the familiar Cisco Unified Communications Manager (CUCM). For the reader not initiated into the world of Cisco acronyms, we can simplify the story a little: the demonstration we received used a piece of software to set up VoIP phones and then associate an instant messenger ID with a phone number. This demonstration also used a different setup from the data center test bed used for most of the other tests we report here. We went to the HCS team's lab to inspect the setup and found a very similar configuration there -- UCS 5100, UCS 6100 and Nexus 7000. We also removed a cable during one of the test calls described below, to prove that this new setup was really being used to make the calls.

We used the software to add fifty customers to the system, each with between 500 and 10,000 users, for a total of 91,048 users according to the system's user interface. To see HCS in action, we connected some phones to our setup and stepped through registering a new caller, which took approximately five minutes. Once the user/caller was entered, we associated the user with one of the phones we had set up. Before the association we picked up the phone and verified that there was no dial tone; after the association, we repeated the exercise and verified that a dial tone was present. We then tried making and receiving calls to and from different customers. By default the user did not have voice mail service, so using the CUCDM GUI we enabled voice mail and left ourselves messages.

Figure 24: Logical Siebel Setup (diagram: SMCC users reach the SMCC Web and application tiers, running as virtual machines on the Unified Computing System in the cloud data center -- Nexus 7010, Nexus 5548 and UCS 6100 -- while database transactions cross a physical link through the Cisco corporate network to the remote Cisco corporate data center)

Figure 25: HCS Call Flow (diagram: Customer A and Customer B phones at customer sites reach the cloud data center across an IP/MPLS network via the Cisco Unified Border Element (CUBE) SP; inside the data center, Nexus 7000-1/7000-2 and Cisco 6120-1/6120-2 front the Unified Computing System, with call signaling and call media paths shown for both customers)


Finally, using the same CUCDM GUI, we could associate that user with a Jabber instant messaging ID and test it out by logging in with that ID and having a quick chat with the team.

We've attached a screen shot to give a feel for the GUI. Overall the demonstration worked smoothly without issues, and we learned a bit about how Cisco call centers work in the process.
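To summarize the order of operations we stepped through in the CUCDM GUI, here is a minimal sketch that assumes a hypothetical provisioning client. None of the method names below belong to a real Cisco API -- they simply mirror the sequence: bulk-create customers and users, register a caller, associate a phone, enable voice mail and finally bind a Jabber instant messaging ID.

```python
# Illustrative sketch of the HCS provisioning sequence described above.
# The client and all of its methods are hypothetical placeholders.
import random


class FakeProvisioningClient:
    """Records provisioning calls so the sketch can run without a real HCS back end."""

    def __getattr__(self, name):
        def call(*args, **kwargs):
            print(name, args, kwargs)
            return {"op": name, "args": args, "kwargs": kwargs}
        return call


def provision_demo(client):
    # 1. Bulk-load fifty customers, each with 500-10,000 users,
    #    matching the scale we configured through the CUCDM GUI.
    for i in range(50):
        customer = client.add_customer(name=f"customer-{i:02d}")
        client.add_users(customer, count=random.randint(500, 10_000))

    # 2. Register a caller and associate them with a physical phone;
    #    the phone has no dial tone until this association is made.
    user = client.register_caller(customer="customer-00", name="test.caller")
    client.associate_phone(user, phone_id="PHONE-PLACEHOLDER")

    # 3. Voice mail is off by default, so enable it explicitly.
    client.enable_voicemail(user)

    # 4. Finally, bind a Jabber instant messaging ID to the same user.
    client.associate_im_id(user, jabber_id="test.caller@example.com")


if __name__ == "__main__":
    provision_demo(FakeProvisioningClient())
```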

CONCLUSION: UNIFIED DATA CENTER TEST

There you have it -- Cisco's data center. You'll notice that we focused on Cisco's newest bells and whistles and made the assumption that the basics -- disks, virtual machines and data center switches -- work. We focused on the essential stops along the end-to-end road: the comprehensive infrastructure that covers all corners of the data center; the unified management solution; and the security and agility of the solution. All elements were not only demonstrated, but passed our detailed inspection.

We were relieved to find that Cisco's Virtual Security Gateway (VSG), while differing quite a bit from standard firewalls in its implementation, still had the familiar configuration and user interface, and enforced its policies in a realistic virtual machine setup.

We found that VM-FEX is certainly not for every deployment, but for those willing to trade some network functionality for performance, it is right up their alley. By sending traffic through both the virtual and physical space of the data center, we effectively verified that no tenant's traffic can be seen by another. (See Virtual Machine Fabric Extender Performance.)

Cisco's tiered cloud services architecture of Gold/Silver/Bronze classes prioritized traffic appropriately without starving anyone out, as long as the load was not overwhelming. FabricPath showed us that we could theoretically have had a lot more servers in that data center, forwarding 292.8Gbit/s through a fully meshed topology. Cisco also found a clever use for its Locator/ID Separation Protocol (LISP) implementation in the cloud, enabling mobility by bending the rules of IP subnetting through abstraction.


Furthermore, operators have been pretty loud about the need for streamlined management tools, so we believe they will be relieved to see Cisco prioritizing operations with tools like UCS Manager and its partnership with BMC. (See BMC Cloud Lifecycle Management Integration.)

Data centers these days also open up a series of new variables, and it is not always clear how a testing methodology should manage them while keeping experimental control. Take the VM-FEX test, for example: it was important to ensure that both the hardware configuration and the virtual machine activity were identical on both blades, so that we could compare their virtual network interface configurations.

When we tested Cisco's Virtual Security Gateway (VSG), it was crucial that we run all tiers of the test in parallel -- multiple tenants, multiple Web tiers and multiple traffic directions were all flowing at the same time. This puts to rest the common security concern that shared resources lower the ability to control and enforce policies. Traffic flows in the data center are also important to understand, and they are much different from those in LANs or WANs. In our FabricPath, tenant isolation and VSG tests, we knew it was key to have so-called "north-south" traffic (between the data center and the customer) and "east-west" traffic (within the data center) in the appropriate proportions and flow models, which were mostly pairs and partial meshes.

We also had firsthand experience of the level of commitment that Cisco has to the data center and cloud.

Figure 26: Adding Voicemail

TABLE 2. Cisco Devices Tested

Cisco ACE30
Cisco ASA 5585-X60
Cisco ASR 1002
Cisco ASR 1006
Cisco ASR 5000
Cisco ASR 9010
Cisco Catalyst 6500
Cisco CDE 205
Cisco CDE 220
Cisco CDE 250-S6
Cisco CRS-1
Cisco CRS-3
Cisco MDS 9506
Cisco MDS 9513
Cisco Nexus 1000v
Cisco Nexus 2248 TP-E
Cisco Nexus 5548
Cisco Nexus 7009
Cisco Nexus 7010
Cisco Nexus 7018
Cisco UCS 6140


In past tests of massive scale, Cisco built the test bed from scratch specifically for the project. In this test, we took advantage of Cisco's Virtualized Multi-Tenant Data Center (VMDC) lab, where Cisco engineers spend their days (and often nights) designing, building and testing Cisco's data center solutions. The same team that is responsible for verifying that the different chapters of the data center story will work for customers also supported our tests. Along with the lab came the equipment and the knowledge, and the team was open to our persistent questions and calls for retests. That is not to say it was not a very long three and a half weeks -- it was.

As we move on to finishing the next report, on Cisco's network for the cloud, we hope those who were looking to become familiar with Cisco's cloud data center solution have done so. For those wondering about IPv6, stay tuned for the next report.


EANTC AG
European Advanced Networking Test Center

Light Reading
A Division of United Business Media TechWeb

Salzufer 14
10587 Berlin, Germany
Tel: +49 30 [email protected]
http://www.eantc.com

240 West 35th Street, 8th Floor
New York, NY 10001, USA

http://www.lightreading.com

This report is copyright © 2012 United Business Media and EANTC AG. While every reasonable effort has been made to ensure the accuracy and completeness of this publication, the authors assume no responsibility for the use of any information contained herein. All brand names and logos mentioned here are registered trademarks of their respective companies in the United States and other countries.