Transforming the data center
DESCRIPTION
Slides from the Dell series on data center transformation and cloud computing
TRANSCRIPT
Transforming the data center: The impact of clouds on enterprise IT
Thursday, February 24, 2011
Good morning. Today, weʼre going to talk about a huge shift in IT, and how it will change the enterprise data center.
Some background (@acroll)
I write, organize, and analyze emerging IT trends at Bitcurrent, and try to share some of these thoughts with enterprises and startups.
The arrival of utility computing: Overnight change, 20 years in the making
Iʼm going to start out talking about cloud computing, because thatʼs whatʼs prompting a major shift in enterprise IT. But most of this content applies to you whether youʼre running your own data center or entirely outsourced and whether youʼre a bare-metal shop or completely virtualized.
I need to spend some time explaining things, because clouds are confusing.
http://img.dailymail.co.uk/i/pix/2008/04_01/tornadoDM3030a_800x533.jpg
So here’s a simple, practical way to think about utility computing.
http://www.flickr.com/photos/mynameisharsha/4092086880/
The step-function nature of dedicated machines doesn’t distribute workload very efficiently.
http://www.flickr.com/photos/h4ck/2413562108/
Virtualization lets us put many workloads on a single machine
[Diagram: one virtual machine spanning many physical machines ("one on many"), or many virtual machines sharing one physical machine ("many on one").]
Virtualization divorces the app from the machine.
Okay, so these things mean we have applications that run “virtually” – that is, they’re divorced from the underlying hardware. One machine can do ten things; ten machines can do one thing.
http://www.flickr.com/photos/stawarz/3538910787/
Once workloads are virtualized, several things happen. First, they’re portable
http://www.flickr.com/photos/swimparallel/3391592144/
Second, they’re ephemeral. That is, they’re short-lived: Once people realize that they don’t have to hoard machines, they spin them up and down a lot more.
http://www.flickr.com/photos/genewolf/147722350
Which inevitably leads to automation and scripting: We need to spin up and down machines, and move them from place to place. This is hard, error-prone work for humans, but perfect for automation now that rack-and-stack has been replaced by point-and-click.
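That automation can be sketched in a few lines. Everything here is hypothetical: `CloudProvider` is a toy, in-memory stand-in for a real provisioning API, not any vendor's SDK, but the shape of the script, reconciling the number of running instances with the number you want, is the spin-up-and-down idea described above.

```python
import uuid

class CloudProvider:
    """Toy stand-in for an IaaS provisioning API (hypothetical, in-memory)."""
    def __init__(self):
        self.instances = {}

    def launch(self, image):
        """Create a new machine instance from an image; return its id."""
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = {"image": image, "state": "running"}
        return instance_id

    def terminate(self, instance_id):
        self.instances[instance_id]["state"] = "terminated"

def scale_to(provider, image, count):
    """Spin instances up or down until `count` are running."""
    running = [i for i, m in provider.instances.items()
               if m["state"] == "running"]
    for _ in range(count - len(running)):   # too few: launch more
        provider.launch(image)
    for instance_id in running[count:]:     # too many: terminate extras
        provider.terminate(instance_id)

provider = CloudProvider()
scale_to(provider, "web-v1", 3)   # morning traffic: three web servers
scale_to(provider, "web-v1", 1)   # demand drops: scale back down to one
```

A script like this can run every few minutes; the human decides policy, the machine does the rack-and-stack.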
http://www.flickr.com/photos/pinkmoose/3278324276/
Automation, once in place, can have a front end put on it. That leads to self service.
“Cloudy” tech.
Virtualization
Automation
Self-service
Elasticity
Usage tracking & billing
Service-centric design
These are the foundations on which new IT is being built. Taken together, they’re a big part of the movement towards cloud computing, whether that’s in house or on-demand.
Two main models: A field guide to IaaS and PaaS
There is, in fact, a good definition of clouds from NIST. But what you need to know, for the purpose of todayʼs content, is two cloud models: Infrastructure- and platform-as-a-service.
Infrastructure as a Service: Amazon EC2, Rackspace Cloud, Terremark, Gogrid, Joyent (and nearly every private cloud built on Xen, KVM, HyperV, or VMWare.)
The first is called Infrastructure as a Service, because you’re renting pieces of (virtual) infrastructure.
[Diagram: a web server running in a machine instance, created from a machine image.]
In an IaaS model, you’re getting computers as a utility. The unit of the transaction is a virtual machine. It’s still up to you to install an operating system, and software, or at least to choose it from a list. You don’t really have a machine -- you have an image of one, and when you stop the machine, it vanishes.
[Diagram: web, app, and database servers, each a machine instance created from a machine image; the database server also uses a cloud storage service.]
Most applications consist of several machines -- web, app, and database, for example. Each is created from an image, and some, like databases, may use other services from the cloud to store and retrieve data from a disk.
[Diagram: the same three-tier application, with the database server upgraded to a bigger machine instance.]
If you run out of capacity, you can upgrade to a bigger machine (which is called “scaling vertically.”)
[Diagram: two machine instances at each tier, with a load balancer instance in front sharing traffic between them.]
Or you can create several machines at each tier, and use a load balancer to share traffic between them. These kinds of scalable, redundant architectures are common -- nay, recommended -- in a cloud computing world where everything is uncertain.
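A minimal sketch of the load-balancing idea, using nothing beyond the Python standard library; the `LoadBalancer` class and backend names are invented for illustration, not a real product's API.

```python
from itertools import cycle

class LoadBalancer:
    """Round-robin balancer: share requests across identical instances."""
    def __init__(self, backends):
        self.pool = cycle(backends)   # endless rotation over the pool

    def route(self, request):
        backend = next(self.pool)
        return f"{backend} handled {request}"

lb = LoadBalancer(["web-1", "web-2"])
results = [lb.route(f"req-{n}") for n in range(4)]
# requests alternate: web-1, web-2, web-1, web-2
```

Real balancers also health-check their backends and drop dead ones, which is what makes the redundancy part of the story work.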
Platform as a Service: Google App Engine, Salesforce Force.com, Heroku, Springsource (and nearly every enterprise mainframe.)
The second kind of cloud is called Platform as a Service. In this model, you don’t think about the individual machines—instead, you just copy your code to a cloud, and run it. You never see the machines. In a PaaS cloud, things are very different.
[Diagram: your code and others' code running on a shared processing platform, calling shared components through APIs: a data API backed by storage, an auth API backed by a user database, an image API backed by image functions, a blob API backed by big objects, and so on. A governor, scheduler, and console oversee the platform.]
- You write your code; often it needs some customization.
- That code runs on a shared processing platform,
- along with other people's code.
- The code calls certain functions to do things like authenticate a user, handle a payment, store an object, or move something to a CDN.
- To keep everything running smoothly (and bill you), the platform has a scheduler (figuring out what to do next) and a governor (ensuring one program doesn't use up all the resources), as well as a console.
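The governor is the easiest of these pieces to miss, so here is a minimal sketch of the idea: meter each tenant's usage and refuse work past a quota. The class, quota numbers, and tenant names are illustrative, not any platform's actual API.

```python
class QuotaExceeded(Exception):
    """Raised when a tenant tries to use more than its share."""

class Governor:
    """Meters per-tenant resource use on a shared platform."""
    def __init__(self, quota_cpu_seconds):
        self.quota = quota_cpu_seconds
        self.used = {}   # tenant -> CPU-seconds consumed so far

    def charge(self, tenant, cpu_seconds):
        total = self.used.get(tenant, 0.0) + cpu_seconds
        if total > self.quota:
            raise QuotaExceeded(f"{tenant} would exceed {self.quota} CPU-seconds")
        self.used[tenant] = total

governor = Governor(quota_cpu_seconds=10.0)
governor.charge("your-code", 6.0)
governor.charge("others-code", 3.0)    # tenants are metered separately
try:
    governor.charge("your-code", 5.0)  # would take your-code to 11.0
    throttled = False
except QuotaExceeded:
    throttled = True                   # the platform says no
```

The same per-tenant meter is what drives the usage-based bill.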
http://www.computerhok.nl/JSPWiki/attach/GoogleAppEngine/GAEQuota.png
It’s a true, pure utility because you pay for what you use. Remember this picture; we’ll come back to it.
http://www.flickr.com/photos/olitaillon/3354855989/
PaaS is a very different model from IaaS. On the one hand, it’s more liberating, because you don’t have to worry about managing the machines. On the other hand, it’s more restrictive, because you can only do what the PaaS lets you.
http://wiki.developerforce.com/index.php/Apex_Code:_The_World%27s_First_On-Demand_Programming_Language
In the case of Salesforce’s Force.com, you have to use an entirely new programming language, called Apex.
PaaS isn’t common today, but it will catch on fast. Consider a recent hackathon we ran: 55 coders, 18 apps, 12 hours. Several are live now. I’m betting there are already a ton of rogue PaaS apps running on Force.com, being built for the front office without IT’s involvement.
IaaS and PaaS differences

IaaS:
- Any operating system you want
- Limited by capacity of virtual machine
- Scale by adding more machines
- Many storage options (file system, object, key-value, RDBMS)

PaaS:
- Use only selected languages and built-in APIs
- Limited by governors to avoid overloading
- Scaling is automatic
- Use built-in storage (Bigtable, etc.)
To summarize: two kinds of cloud platforms.
Another side to clouds: Clouds as a business model
Now let’s talk about the other definition -- the populist, popular one that has everyone believing clouds will magically fix IT.
All of the things we've seen about cloud technology make it possible to deliver computing as a utility -- computing on tap. The virtualization provides a blood-brain barrier between the application the user is running and the machines on which it runs. Which means it can be a utility.
The utility promise is compelling. It means you can focus on the thing your business does that makes you special.
And stop worrying about many of the tasks you really didn’t want to do anyway.
Cloud technology makes a wide range of business relationships possible
In other words, all of these cloud technologies, because they separate the computing from the computers, make new business relationships—such as outsourcing—possible.
http://www.flickr.com/photos/hugo90/4154329689/
Consider, for a minute, the number of business models available to a car user.
http://en.wikipedia.org/wiki/File:Hyundai_car_assembly_line.jpg
At one extreme, you could be a car manufacturer. Youʼd have complete control over every aspect of your car, even though the cost of doing so would be very high. But you could still build cars from parts, and get them road-certified. It wouldnʼt scale very well as demand increased, so this is the domain of hobbyists (who need high customization) or large manufacturers (who need economies of scale).
http://www.flickr.com/photos/stevoarnold/2789464563/
For most of us, the answer to transportation is to own a car. Youʼre not responsible for design – though you have some choice of models and features – but you are liable for everything. You have to finance it, maintain it, and so on.
http://www.flickr.com/photos/mpk/50046296/
If youʼre a traveller, then you rent. This is a different model, with different responsibilities. Youʼre still at fault if you scratch or hit something, and still need to know directions, but someone else finances the deal and handles storage, cleaning, and other things. And youʼre paying for what you use, not for the entire asset.
http://www.flickr.com/photos/uggboy/4594493429/
With a car hire service you abdicate even more control – you can still decide where to go and how to get there, pickup and dropoff times, and so on, but everything else is the driverʼs responsibility. You have only marginal control over the car model.
http://www.flickr.com/photos/xjrlokix/4379281690/
A taxicab takes this to the ultimate extreme: pay-as-you-drive economics, and nothingʼs your fault provided youʼre well behaved in the back seat. You have almost no control over the platform.
http://www.flickr.com/photos/abulic_monkey/130899453/
The abdication of authority (and responsibility.)
These are all degrees of abdication and abstraction. Sometimes a taxi makes sense – for example, when weʼre going from place to place in a city. Other times, building our own makes sense – for example, if weʼre landing on the moon.
This challenges a decades-long monopoly on IT
Models like these are now rushing into enterprise IT, challenging what has long been a monopoly within organizations.
http://www.flickr.com/photos/harshlight/3235469361
For decades, IT-as-a-monopoly was a good thing.
Two reasons.
There were a couple of reasons IT was a monopoly for so long.
http://www.flickr.com/photos/brewbooks/3319730327/ (16MB)
First, the machines were expensive. That meant they were a scarce resource, and someone had to control what we could do with them.
http://www.flickr.com/photos/argonne/4563394851/
Second, they were complicated. It took a very strange sect of experts to understand them. AVIDAC, Argonne's first digital computer, began operation in January 1953. It was built by the Physics Division for $250,000. Pictured is pioneer Argonne computer scientist Jean F. Hall. AVIDAC stands for "Argonne Version of the Institute's Digital Automatic Computer" and was based on the IAS architecture developed by John von Neumann.
http://www.flickr.com/photos/ebeam/3586287989/
This was also a result of scarcity. When computers and humans interact, they need to meet each other halfway. But it takes a lot of computing power to make something that’s easy to use;
http://www.flickr.com/photos/ecastro/3053916892/
in the early days of computing, humans were cheap and machines weren’t
http://www.flickr.com/photos/binaryape/458758810/
So we used punched cards,
http://50ans.imag.fr/images/galerie/Source/IBM-1130-1.jpg
and switches,
http://honeynet.onofri.org/scans/scan22/sol/submission/reverse.jpg
and esoteric programming languages like assembler.
http://www.flickr.com/photos/flem007_uk/4211743886/
Think about what a monopoly means.
http://www.flickr.com/photos/caveman_92223/3531128799/
Monopolies were once awarded for big projects that were beyond the scope of any one organization, but needed for the public good.
http://www.flickr.com/photos/athomeinscottsdale/2850893998/
Sometimes, nobody wants the monopoly—like building the roads.
http://www.flickr.com/photos/leokoivulehto/2257818167/
(IT’s been handed many of these thankless tasks over the years, and the business has never complained.)
http://www.flickr.com/photos/crobj/4148482980/
The only time we can charge back for roads is when the resource is specific and billable: a toll highway, a bridge.
http://en.wikipedia.org/wiki/File:Bell_System_hires_1900_logo.PNG
Sometimes, we form a company with a monopoly, or allow one to operate, in order to build something or allow an inventor to recoup investment. This is how we got the telephone system, or railways.
For much of its history, AT&T and its Bell System functioned as a legally sanctioned, regulated monopoly.
The US accepted this principle, initially in a 1913 agreement known as the Kingsbury Commitment.
An antitrust suit filed in 1949 led in 1956 to a consent decree whereby AT&T agreed to restrict its activities to the regulated business of the national telephone system and government work.
Changes in telecommunications led to a U.S. government antitrust suit in 1974.
In 1982, AT&T agreed to divest itself of the wholly owned Bell operating companies that provided local exchange service.
In 1984 Bell was dead. In its place was a new AT&T and seven regional Bell operating companies (collectively, the RBOCs.)
http://www.corp.att.com/history/history3.html
When monopolies are created with a specific purpose, that’s good. But when they start to stagnate and restrict competition, we break them apart.
http://www.flickr.com/photos/ktylerconk/4096965228/
In fact, there’s a lot of antitrust regulation that prevents companies from controlling too much of something because they can stifle innovation and charge whatever they want. That’s one of the things the DOJ does.
First: Monopoly good.
In other words, early on monopolies are good because they let us undertake hugely beneficial, but largely unbillable, tasks.
Then: Monopoly bad.
Later, however, they’re bad because they reduce the level of creativity and experimentation.
Data center upheaval: What utility computing changes for enterprise IT
So we live in a world where internal IT monopolies are increasingly seen as bad—inefficient, costly, unable to adapt to change, and so on.
So now IT is competing with public providers.
That means enterprise IT professionals have to compete with external providers. To do so, they need to catch up.
Cycle time from years to days
Developers, not accountants, decide when the infrastructure needs to change.
Once, IT used to buy a machine and run it for three years, because thatʼs how long accountants told us it took to depreciate. Today, machines live for as long as fickle developers need them—and their requirements change constantly, because of iterative development approaches like Agile and rapid-fire front-office initiatives.
Extreme horizontal scaling
Loads are tied to variable demand from a connected market; developers code in parallel.
Today, we donʼt buy one big machine; we have many small ones, able to adapt to demand as it changes, and resilient. Think RAID, but for entire application stacks.
Portability matters
Workloads move between public & private platforms according to price, governance.
Today, a workload that runs in-house for cost, capacity or compliance reasons may run elsewhere when those change.
Service levels shift radically
A shared resource means competition for capacity; utility models mean you can pay for faster.
Today, we can no longer determine how much traffic an app handles or how fast it will respond. It depends on the resources available—and those resources are elastic. You can handle a ton of users; but itʼll cost you. Old SLAs donʼt make sense.
The end of perimeters
Topology thinking about security won’t last when workloads move around.
We used to think things on one side of a firewall were safe. We even had terms like the “Demilitarized zone.” No more; when apps move, they have to take their permissions and controls with them.
From machines to services
Seeing the sausage being made doesn’t benefit anyone; VMs are a nice metaphor but a damned nuisance.
While we still think in terms of virtual machines, thatʼs just a convenient unit of measure for computing. Managing those underlying components has less and less value, and giving users too much control limits the operatorʼs ability to optimize things.
Getting there from here: Practical migration strategies
So how do we get there? Here are some practical strategies.
First of all, recognize that itʼs not a big switch. While that might be a good book title, itʼs not a sudden change from one thing to another.
A spectrum of architectures
- Bare metal
- Virtual machines
- Private cloud
- Virtual private cloud
- IaaS
- Cloud services (i.e. storage)
- PaaS
Ultimately, cloud computing is about a significant broadening of the available architectures -- there’s no “big switch”, just a series of new options.
What makes a workload suitable to move?

- Componentable: Can be broken into component parts (storage, network, billing) separated by SOA-like, RESTful interfaces.
- Encapsulatable: Easily encapsulated into virtual machine format.
- Performance tolerant: Won’t suffer from performance issues if WAN latency increases.
- Architecturally agnostic: Doesn’t have an “architectural opinion”—in other words, it’s network and hardware agnostic.
- Compliant: Is free of legislative or compliance problems that restrict how and where it’s deployed.
In the coming year, youʼre going to have to decide which workloads are suitable to move into an on-demand environment, whether thatʼs a private or public cloud. First, you need to look at applications and see which ones can move.
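One way to make that assessment repeatable is to turn the suitability criteria into a checklist and score each application against it. This is only a sketch: the criteria names come from the slide, but the flat scoring (every criterion weighted equally) and the example workloads are assumptions.

```python
SUITABILITY_CRITERIA = [
    "componentable",             # SOA-like, RESTful component boundaries
    "encapsulatable",            # fits a virtual machine format
    "performance_tolerant",      # tolerates added WAN latency
    "architecturally_agnostic",  # no network/hardware "opinion"
    "compliant",                 # no legislative/compliance restrictions
]

def suitability_score(workload):
    """Fraction of criteria met, from 0.0 (keep in place) to 1.0 (easy to move)."""
    met = sum(1 for c in SUITABILITY_CRITERIA if workload.get(c, False))
    return met / len(SUITABILITY_CRITERIA)

# Illustrative workloads; the attribute values are made up.
batch_analytics = {c: True for c in SUITABILITY_CRITERIA}
payroll = {"encapsulatable": True, "compliant": False,
           "performance_tolerant": False}
```

Even a crude score like this forces the conversation application by application, rather than "move everything" or "move nothing."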
What makes a workload beneficial to move?

- Cost: Varies in demand (because of seasonality, usage spikes, and so on).
- Time: Can be divided into chunks and performed in parallel (such as data analysis).
- Risk: Requires high levels of redundancy that aren’t economically feasible to deliver on dedicated equipment.
- Experimentation: Has an experimentation benefit because of trial-and-error development or a continuous deployment process.
- Agility: The line of business can service itself, rather than relying on central IT and human involvement.
Then, you have to decide which workloads are beneficial to the business. These benefits come from a number of places.
[2x2 matrix: business case for migration (low to high) against technical suitability for migration (low to high).
- Low business case, low technical suitability: Don’t move. Optimize bare metal, acceleration, virtualization.
- Low business case, high technical suitability: Virtualize, ensure portability. Monitor cost and pricing.
- High business case, low technical suitability: Hybridize, make portable, seek vertical “community” clouds.
- High business case, high technical suitability: Move first. Use to showcase cloud benefits and ROI.]
Put these together and you have a good model for deciding what to do with each application.
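That model can be stated as a tiny decision function. A sketch with stated assumptions: the two dimensions are reduced to 0-to-1 scores, the 0.5 cut-off is arbitrary, and the quadrant-to-advice mapping follows the logic of the talk (high on both: move first; technically easy but weak business case: virtualize and wait; strong business case but technically hard: hybridize; neither: don't move).

```python
def migration_advice(technical_suitability, business_case, threshold=0.5):
    """Map two 0..1 scores onto the four quadrants of the migration model."""
    tech_high = technical_suitability >= threshold
    biz_high = business_case >= threshold
    if tech_high and biz_high:
        return "Move first; use to showcase cloud benefits and ROI."
    if tech_high:
        return "Virtualize, ensure portability; monitor cost and pricing."
    if biz_high:
        return "Hybridize, make portable; seek vertical 'community' clouds."
    return "Don't move; optimize bare metal, acceleration, virtualization."

advice = migration_advice(technical_suitability=0.9, business_case=0.8)
```

In practice the scores would come from the suitability and benefit checklists, not from gut feel.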
How to think about migration

- SaaS: Turnkey software functionality. You move content and business processes. Best for commodity tools (mail, collaboration, word processing) and simple forms (order entry, CRM).
- PaaS: A platform that runs your code, with APIs. You move your source code. Best for new, relatively simple applications where you don’t need control over network topology, OS, or data location.
- IaaS: Virtual infrastructure rented by the hour. You move your OS or VM. Best for variable workloads, testing and QA, massively parallel tasks.
Once you know whatʼs moving, figure out whether itʼs moving to Infrastructure, Platform, or third-party SaaS environments.
People, processes, technology (Only three tiny things to worry about.)
People: The changing role of enterprise IT professionals
Let’s face it: tomorrow’s IT team will look a lot different from today’s.
Less of some things, more of others

Less of:
- Fire and forget
- Business case first
- Configuration
- Procurement
- Fishing

More of:
- Adapt and adjust
- Ongoing analytics
- Adaptive policies
- Terms & relationships
- Teaching people to fish
You’ll spend your time doing a lot less of some things, and a lot more of others.
Not everyone will survive
And not everyone will make it.
Poor Ada.
Let me tell you a story about Ada. This was an early object-oriented language, named for Ada Lovelace, the first real programmer and muse to early computer inventors.
OO promised so much
Object oriented (OOD) techniques and Ada (1985-95)
Increased NASA code reuse by 300 percent
Reduced all systems costs by 40 percent
Shortened development cycle time by 25 percent
Slashed error rates by 62 percent
Remember Object-Oriented Programming?
But fell so short
Only 15-20% of FDD software written in Ada
Naysayers resisted the language change
Wanted to stay with what they knew (FORTRAN)
Had reusable components maintained by others
Evangelists didn’t help
Promised too much too soon
Avoided root issue: Lack of environment
Only a certain percentage of NASAʼs coders could make that jump. With sharded, shared-nothing, distributed data, that may happen again.
Processes: Retooling the way you work (two simple tests)
Okay, what about processes? How will they change? I can give you two simple tests.
The first test
What if you had to do it a thousand times?
First, it’s all about large numbers. You’ll be measured on operating efficiency—things like the ratio of people to servers. Metrics like cost per visitor-second. So everything you do, ask yourself, how would you do it if it had to be done a thousand times?
The second test
Can you throw a random thing from a tenth story window?
Second, itʼs all about architectures. We donʼt buy one big machine we hope wonʼt break—we buy a thousand we know will break, just not all at once, and design for failure.
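"Design for failure" in miniature: instead of one big machine you hope won't break, try each replica of a service until one answers. The function and replica names below are invented for illustration; the pattern, not the API, is the point.

```python
class ReplicaDown(Exception):
    """Signals that one replica of a redundant service is unavailable."""

def call_with_failover(replicas, request):
    """Try replicas in order; return the first successful answer."""
    failures = 0
    for replica in replicas:
        try:
            return replica(request)
        except ReplicaDown:
            failures += 1   # note the failure, move on to the next replica
    raise RuntimeError(f"all {failures} replicas failed")

# Two dead replicas and one healthy one: the request still succeeds.
def broken(request):
    raise ReplicaDown("replica offline")

def healthy(request):
    return f"ok: {request}"

result = call_with_failover([broken, broken, healthy], "GET /status")
```

The architecture assumes individual machines will fail, and makes that failure boring.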
Technology: A return to centralization, with services and portability
And the technology will change too.
Once upon a time: mainframes
Mainframes: centralized, IT-controlled. Computers expensive, humans cheap.
In the early days of IT, computers were complicated and expensive. Not a lot of people knew how to use them, and they were precious. So humans bent to their will: we wrote in languages they understood, like assembler. We shared time, waiting until late at night to run batch jobs.
Client-server shares the load
Client-server: distributed, IT-controlled. Computers cheaper, distance expensive, user tasks varied, UI changes.
As computers became more affordable, we decided that some computing could happen at the edge of the network, in the client-server model.
The web puts developers on top
Web stack (LAMP): centralized, developer-controlled. Computers cheap, IT democratized, complexity expensive, WAN slow.
Then the web – and with it an explosion of creativity – made it easy for developers to build atop software stacks like LAMP. Developers were in charge, and while browsers were everywhere, this was a return to centralization: huge farms of web, app, and database servers in data centers handled the load.
Rich clients spread out again
Rich clients (AJAX, Silverlight, tablet apps, Flash, Java): distributed, developer-controlled. Users smarter, demanding better experiences, mobile, disconnected uses.
The rich client explosion – first in the browser, and now on tablets and mobile devices – is a second wave of distributed computing, this time, with the consumer and developer in charge.
Now: virtual architectures
Virtualization & clouds (adaptive infrastructure): centralized, workload-controlled. Separation of compute and storage costly; retooling the platforms.
Now weʼre seeing the pendulum swing back to centralization, for several reasons.
Hairy, smoking golf balls.
http://www.flickr.com/photos/onigiri_chang/4791909127/
The extraordinary Jim Gray of Microsoft described the CPU of tomorrow as a “smoking, hairy golf ball” – a tiny computer bristling with wires and generating a lot of heat. He also said that, compared to the cost of moving bytes around, everything else is basically free.
This means a return to centralized machines—but adaptive ones that can be re-tooled to handle different workloads, and that are able to move applications from place to place according to cost, compliance, and capacity policies.
What can IT do to prepare? Some practical steps.
Yikes. So what can you do to prepare?
Cycle time from years to days
Developers, not accountants, decide when the infrastructure needs to change.
Figure out how to retool the data center on the fly with virtualization, centralized storage, automation.
First, get ready for this adaptive, always-being-redesigned data center.
Extreme horizontal scaling
Loads are tied to variable demand from a connected market; developers code in parallel.
Resilient, elastic architectures and fast backplanes replace large vertical boxes.
Second, focus on the kinds of architectures that let you pass the two tests we alluded to.
Portability matters
Workloads move between public & private platforms according to price, governance.
Look at portability & compatibility; check out private cloud stacks.
Third, pay a lot of attention to cloud stacks like Cloud.com, Openstack, Redhat Makara, Xen, VMWare, Eucalyptus, and so on. You need to know that your workloads can move between them, which means you need standard virtual machine formats and standard APIs and controls to manage them.
Service levels shift radically
A shared resource means competition for capacity; utility models mean you can pay for faster.
Performance matters a lot; learn to define and negotiate service contracts with good monitoring.
Reconsider what performance and cost mean. Thereʼs a huge change coming here.
The end of perimeters
Topology thinking about security won’t last when workloads move around.
Focus on application-centric security and policies that survive relocation.
Get ready to throw out your firewalls too.
From machines to services
Seeing the sausage being made doesn’t benefit anyone; VMs are a nice metaphor but a damned nuisance.
Get ready for PaaS and a set of services.
And while you need to deliver comfortable, familiar models like virtual machines today, figure out how PaaS offerings will get deployed. Are you running a private storage service for large objects? A key-value store? Plenty of tools, public and private—Cassandra, Hadoop, Ceph, CouchDB, MongoDB, Hypertable, and more—are ready for you to play with.
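To make the "services, not machines" idea concrete, here is a toy in-memory key-value store. The class is invented for illustration; the tools named above (Cassandra, MongoDB, and friends) offer durable, distributed versions of essentially this get/put contract.

```python
class KeyValueStore:
    """Toy in-memory key-value store: the minimal service contract."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("user:42:name", "Ada")
name = store.get("user:42:name")
missing = store.get("user:99:name", default="unknown")
```

Users of the service see keys and values, never the machines behind them -- which is exactly the abstraction the slide argues for.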
http://www.computerhok.nl/JSPWiki/attach/GoogleAppEngine/GAEQuota.png
Remember this screen? Assume that in two years, this is what your business users will expect from you. And they won’t want any more confusing details. They won’t care which machines they ran stuff on—just how many CPU-hours they consumed. Don’t believe me? How many of you have a mobile phone? How many know which cell towers and routers they used in the last month?
The lesson of the answering machine: Making Steve Wozniak really angry
Iʼm going to finish with a story about monopolies and innovation, but with a different point this time.
“This was 1972 and it was illegal in the U.S. to use your own telephone. It was illegal in the U.S. to use your own answering machine. Hence it was also virtually impossible to buy or own such devices.”
$700/month
The genie is out of the bottle: Stop looking for a cork; start deciding what to wish for.
If I have to leave you with one idea, itʼs this.
Thanks!@[email protected]
Thursday, February 24, 2011