

TABLE OF CONTENTS

The Geography of Internet Infrastructure
Why Google Needs its Own Power Plant
Supercomputers, Hadoop, MapReduce and the Return to a Few Big Computers
Defogging Cloud Computing: A Taxonomy
Web 2.0, Please Meet Your Host, the Internet
When Is the Right Time to Launch Your Own Cloud?
Why Cloud Computing Needs Security
10 Reasons Enterprises Aren't Ready to Trust the Cloud
The Craft: Automation and Scaling Infrastructure
Is Infrastructure the New Marketing Medium?
Achieving Equality is Critical to the Future of the Internet
Does the Internet Need More Roads or Better Traffic Signals?
About GigaOM

Welcome to Refresh the Net

We are still using the Internet infrastructure we created in the last boom. Our 21st-century business data is running on 20th-century technology. It is time we refreshed our ideas and our platforms to give us the new building blocks of innovation. New technologies and ideas, such as cloud computing, virtualization, SaaS and infrastructure on demand, hold promise for entrepreneurs and herald a new era of business. Refresh the Net, brought to you by PEER 1, examines the future of Internet infrastructure and the ideas that will be discussed at GigaOM's Structure 08 conference.

Om Malik
Founder and Editor-in-Chief, GigaOM


The Geography of Internet Infrastructure (cont)

...led many large enterprise companies to consider new markets for data centers, especially since the 9/11 terrorist attacks underscored the need for back-up data centers outside of New York and Washington. A study of data center costs by The Boyd Group has highlighted the affordability of markets such as Sioux Falls, S.D., and Tulsa, Okla.

Among the biggest winners in the battle for enterprise data centers have been Austin and San Antonio. Austin won a $450-million Citigroup data center and two large HP data centers. In San Antonio, Microsoft's announcement of a $550-million data center has been followed by new data-center projects by the NSA, Stream Realty, Christus Health Systems and Power Loft.

In all cases we see that energy costs, environmental impact, and social and economic supply affect the location of data centers powering both the enterprise and the cloud. As cloud computing gains mind share and market share, it will continue to remake the geography of Internet infrastructure. The massive scalability requirements of cloud platforms will drive construction of ever-larger data centers, offering a physical symbol that, while the Internet is everywhere, it lives in a data center, perhaps one near you.

Rich Miller is the editor of Data Center Knowledge, which provides daily news and analysis about the data center industry.


Why Google Needs its Own Power Plant

By Surj Patel

Indexing the world's information and making it accessible takes a lot of people, a lot of machines and a lot of energy.

I was talking to a good friend recently and reported some hearsay about how a server now costs more in its useful life than it costs to buy. I found that amazing, but his response was even more astounding. "Well, we should put them in poor people's houses to give them heat," he quipped.

It sounds dumb at first, but really, it's pure genius. If that much energy is being used, and half of that energy is used for cooling, we could put those servers to work as electric heaters. The host families could also get some broadband access, and institutions would save on data center build-outs. It's a shame that our culture and the technical practicalities of distributed computing make the idea impractical.

But it got me thinking. How much energy really is burned in those big data centers? What follows is guesstimation and inference based on popular opinion and, er, Google search returns. (I may appear to pick on Google, but it's just because it happens to be a convenient example.)

Google is rumored to have anywhere between half a million and 1 million machines in data centers around the world. I am assuming it is the largest single-purpose commercial installation that we know about. (Let's not think about the government's data demands for now.) Each machine consumes about 500 watts of energy, including cooling systems. Energy overhead for networking and other support structures is nominal, so I'll ignore it for my guesstimate.

So, let's take the worst case here: 1,000,000 machines drawing 500 watts each comes to half a gigawatt. Wow. That's a lot. In Google's own words, that's about half what a city the size of San Francisco needs every hour.
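As a quick back-of-envelope check on that figure, here is a minimal Python sketch. The machine count and per-machine wattage are the author's guesses above, and the $0.10/kWh electricity price is a round number added purely for illustration.

```python
# Back-of-envelope estimate of the data center power draw described above:
# 1,000,000 machines at ~500 W each (including cooling). These are the
# article's assumptions, not measured figures.

machines = 1_000_000
watts_per_machine = 500                 # includes cooling overhead

total_watts = machines * watts_per_machine
print(f"Continuous draw: {total_watts / 1e9:.1f} GW")        # 0.5 GW

# Energy over a year, and a rough utility bill at an assumed $0.10 per kWh.
hours_per_year = 24 * 365
kwh_per_year = total_watts / 1000 * hours_per_year
print(f"Energy per year: {kwh_per_year:,.0f} kWh")
print(f"Cost per year at $0.10/kWh: ${kwh_per_year * 0.10:,.0f}")
```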

That poses a worrying thought: Information on the web is increasing in an exponential manner, and Google will increase its...


Supercomputers, Hadoop, MapReduce and the Return to a Few Big Computers

By Alistair Croll

Yahoo announced yesterday it would collaborate with CRL to make supercomputing resources available to researchers in India. The announcement comes on the heels of Yahoo's Feb. 19 claim to have the world's largest Hadoop-based application now that it has moved the search webmap to the Hadoop framework.

There are a number of Big Computing problems today. In addition to Internet search, cryptography, genomics, meteorology and financial modeling all require huge computing resources. In contrast to purpose-built mainframes like IBM's Blue Gene, many of today's biggest computers layer a framework atop commodity machines.

Google has MapReduce and the Google File System. Yahoo now uses Apache Hadoop. The SETI@Home screensaver was a sort of supercomputer. And hacker botnets, such as Storm, may have millions of innocent nodes ready to work on large tasks. Big Computing is still big; it's just built from lots of cheap pieces.

But supercomputing is heating up, driven by two related trends: On-demand computing makes it easy to build a supercomputer, if only for a short while; and Software-as-a-Service means fewer instances of applications serving millions of users from a few machines. What happens next is simple economics.

Frameworks like Hadoop scale extremely well. But they still need computers. With services like Amazon's EC2 and S3, however, those computers can be rented by the minute for large tasks. Derek Gottfrid of the New York Times used Hadoop and Amazon to create 11 million PDF documents. Combine on-demand computing with a framework to scale applications and you get true utility computing. With Sun, IBM, Savvis and others introducing on-demand offerings, we'll soon see everyone from enterprises to startups to individual hackers buying computing instead of computers.

At the same time, Software-as-a-Service models are thriving. Companies like Salesforce.com, RightNow and Taleo replaced enterprise applications with web-based alternatives and took away deployment and management headaches in the process. To stay alive, traditional software companies (think Oracle and Microsoft) need to change their licensing models from per-processor to per-seat or per-task. Once they do this, simple economies of scale dictate that they'll run these applications in the cloud, on behalf of their clients. And when you've got that many users for an application, it's time to run it as a supercomputing cluster.

Maybe we'll only need a few big computers, after all. And, of course, billions of portable devices to connect to them.
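To make the "framework atop commodity machines" idea concrete, here is a toy, single-process sketch of the MapReduce pattern in Python. It is not Hadoop's actual API; it only shows the map, shuffle and reduce shape that frameworks like Hadoop and Google's MapReduce distribute across thousands of cheap nodes.

```python
# Toy word count expressed as map, shuffle and reduce steps. A real
# framework runs the same shape of computation across many machines;
# here everything runs in one process just to show the structure.

from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Emit (word, 1) pairs for every word in one input record."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Combine all values for one key into a final result."""
    return key, sum(values)

documents = [
    "big computing is still big",
    "big computing is built from cheap pieces",
]

intermediate = chain.from_iterable(map_phase(d) for d in documents)
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results)   # {'big': 3, 'computing': 2, ...}
```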


Defogging Cloud Computing: A Taxonomy

By Michael Crandell

We're heading quickly into the next big chapter of the Internet revolution, with tremendous buzz and excitement around the development of cloud computing. As with all major disruptive changes in technology, cloud computing has generated a flurry of definitions in the press and blogosphere, with acronyms and metaphors flying. At the same time, it's important to remember that companies that are now deploying in the cloud have common problems they're trying to solve, albeit in different ways using different approaches. We've found it helpful to create a mapping of these approaches (a taxonomy of the cloud, if you will) to make it simpler to understand product offerings and what benefits they provide for customers.

The term cloud computing has become a catch-all for any information technology solution that does not use in-house data center or traditional managed hosting resources. Self-defined cloud offerings range from Amazon Web Services and Google Apps and App Engine to Salesforce and even Apple's new MobileMe service for the iPhone 3G.

Among all these offerings is a common thread and key differentiator from the past: the notion of providing easily accessible compute and storage resources on a pay-as-you-go, on-demand basis, from a virtually infinite infrastructure managed by someone else. As a customer, you don't know where the resources are, and for the most part, you don't care. What's really important is the capability to access your application anywhere, move it freely and easily, and inexpensively add resources for instant scalability. When customers have the power to turn on and off 10, 100, 1,000 or even 10,000 servers as needed, whether because a hot social Web application takes off or a batch processing job starts to really crunch, that is the core reason cloud computing is growing so fast. It represents a true democratization of Web computing, and it's changing the way IT infrastructure is being delivered and consumed.
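To illustrate that pay-as-you-go model, here is a minimal sketch using the present-day boto3 library to start and stop a batch of EC2 instances programmatically. The AMI ID, region and instance type are placeholders and error handling is omitted; the point is simply that "turn on a batch of servers, then turn them off" becomes a couple of API calls.

```python
# Minimal sketch: renting and releasing compute on demand with boto3.
# The AMI ID, region and instance type are placeholders; real values
# depend on your account. Assumes AWS credentials are configured.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Turn on" a batch of servers for a burst of work.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=10,
    MaxCount=10,
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print(f"Started {len(instance_ids)} instances")

# ... run the batch job or absorb the traffic spike ...

# "Turn them off" again; billing stops once they terminate.
ec2.terminate_instances(InstanceIds=instance_ids)
print("Terminated", instance_ids)
```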


Web 2.0, Please Meet Your Host, the Internet

By Allan Leinwand

I have a major problem with many of the Web 2.0 companies that I meet in my job as a venture capitalist: They lack even the most basic understanding of Internet operations.

I realize that the Web 2.0 community generally views Internet operations and network engineering as router-hugging relics of the past century, desperately clutching their cryptic, SSH-enabled command line interfaces, but I have recently been reminded by some of my friends working on Web 2.0 applications that Internet operations can actually have a major impact on this century's application performance and operating costs.

So all you agile programmers working on Ruby on Rails, Python and AJAX, pay attention: If you want more people to think your application loads faster than Google and do not want to pay more to those ancient phone companies providing your connectivity, learn about your host. It's called the Internet.

As my first case in point, I was recently contacted by a friend working at a Web 2.0 company that had just launched its application. They were getting pretty good traction and adoption, adding around a thousand unique users per day, but just as the buzz was starting to build, the distributed denial-of-service (DDoS) attack arrived. The DDoS attack was deliberate, malicious and completely crushed their site. This was not an extortion type of DDoS attack (where the attacker contacts the site and extorts money in exchange for not taking the site offline); it was an extraordinarily harmful site performance attack that rendered the site virtually unusable, taking a non-Google-esque time of about three minutes to load.

No one at my friend's company had a clue as to how to stop the DDoS attack. The basics of securing the Web 2.0 application against security issues on the host system, the Internet, were completely lacking. With the help of some other friends, ones that combat DDoS attacks on a daily basis, we were able to configure the routers and firewalls at the company to turn off inbound ICMP echo requests, block inbound high-port-number UDP packets and enable SYN cookies. We also contacted the upstream ISP and enabled some IP address blocking. These steps, along with a few more tricks, were enough to thwart the DDoS attack until my friend's company could find an Internet operations consultant to come on board and configure their systems with the latest DDoS prevention software and configurations.
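For readers curious what those steps look like in practice, below is a rough, hypothetical sketch of similar host-level mitigations on a Linux box using iptables and sysctl, driven from Python. It is not the configuration used in the story, it assumes root privileges, and it is deliberately blunt; production defenses would rate-limit rather than drop outright and would lean on upstream filtering.

```python
# Rough sketch of host-level DDoS hardening along the lines described
# above: drop inbound ICMP echo requests, drop inbound high-port UDP and
# enable SYN cookies. Assumes a Linux box, root privileges and iptables;
# illustrative only, not the configuration from the article.

import subprocess

COMMANDS = [
    # Stop answering pings from the attack traffic.
    ["iptables", "-A", "INPUT", "-p", "icmp",
     "--icmp-type", "echo-request", "-j", "DROP"],
    # Drop unsolicited UDP aimed at high ports (blunt, but effective here).
    ["iptables", "-A", "INPUT", "-p", "udp",
     "--dport", "1024:65535", "-j", "DROP"],
    # Enable SYN cookies so SYN floods cannot exhaust the backlog queue.
    ["sysctl", "-w", "net.ipv4.tcp_syncookies=1"],
]

for cmd in COMMANDS:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```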

Unfortunately, the poor site performance was not missed by the blogosphere. The application has suffered from a stream of bad publicity; it also missed a major window of opportunity for user adoption, which has sloped significantly downward since the DDoS attack and shows no sign of recovering.



So if all that talk of ICMP, UDP and SYN cookies read like alphabet soup to everyone at your Web 2.0 company, it's high time you start looking for a router-hugger, or soon your site will be loading as slowly as AOL over a 19.2 Kbps modem.

Another friend of mine was helping to run Internet operations for a Web 2.0 company with a sizable amount of traffic, about half a gigabit per second. They were running this traffic over a single gigabit Ethernet link to an upstream ISP run by an ancient phone company providing them connectivity to their host, the Internet. As their traffic steadily increased, they consulted the ISP and ordered a second gigabit Ethernet connection.

Traffic increased steadily and almost linearly until it reached about 800 megabits per second, at which point it peaked, refusing to rise above a gigabit. The Web 2.0 company began to worry that either their application was limited in its performance or that users were suddenly using it differently.

On a hunch, my friend called me up and asked that I take a look at their Internet operations and configurations. Without going into a wealth of detail, the problem was that while my friend's company had two routers, each with a gigabit Ethernet link to their ISP, the BGP routing configuration was done horribly wrong and resulted in all traffic using a single gigabit Ethernet link, never both at the same time. (For those interested, both gigabit Ethernet links went to the same upstream eBGP router at the ISP, which meant that the exact same AS-path lengths, MEDs and local preferences were being sent to my friend's routers for all prefixes. So BGP picked the eBGP peer with the lowest IP address for all prefixes and traffic.) Fortunately, a temporary solution was relatively easy (I configured each router to only take half of the prefixes from each upstream eBGP peer), and I worked with the ISP to give my friend some real routing diversity.
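To see why both links collapsed onto one peer, here is a simplified sketch of BGP best-path tie-breaking over just the attributes mentioned (local preference, AS-path length, MED, then lowest peer address as the final tie-break). Real BGP has more steps, but identical advertisements from a single upstream router tie all the way down to that last rule.

```python
# Simplified BGP best-path selection over the attributes named in the
# article. Real BGP has more tie-breakers (origin, eBGP vs iBGP, IGP cost,
# route age, router ID); this is just enough to show why identical
# advertisements always collapse onto one peer.

from dataclasses import dataclass
from ipaddress import ip_address

@dataclass
class Route:
    peer_ip: str        # eBGP neighbor the route was learned from
    local_pref: int
    as_path_len: int
    med: int

def best_path(routes):
    # Higher local preference wins, then shorter AS path, then lower MED,
    # then (final tie-break here) the peer with the lowest IP address.
    return min(
        routes,
        key=lambda r: (-r.local_pref, r.as_path_len, r.med,
                       int(ip_address(r.peer_ip))),
    )

# Both links terminate on the same upstream router, so every prefix is
# advertised with identical attributes over both sessions.
link_a = Route(peer_ip="192.0.2.1", local_pref=100, as_path_len=3, med=0)
link_b = Route(peer_ip="192.0.2.2", local_pref=100, as_path_len=3, med=0)

print(best_path([link_a, link_b]).peer_ip)   # always 192.0.2.1
```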

The traffic to my friend's Web 2.0 company is back on a linear climb; in fact, it jumped to over a gigabit as soon as I was done configuring the routers. While the company has its redundancy and connectivity worked out, it did pay its ancient phone company ISP for over four months for a second link that was essentially worthless. I will leave that negotiation up to them, but I'm fairly sure the response from the ISP will be something like, "We installed the link and provided connectivity, sorry if you could not use it properly. Please go pound sand and thank you for your business." Only by using some cryptic command line interface was I able to enable their Internet operations to scale with their application and get the company some value for the money they were spending on connectivity.

Web 2.0 companies need to get a better understanding of the host entity that runs their business, the Internet. If not, they need to find someone who does, preferably someone they bring in at inception. Failing to do so will inevitably cost these companies users, performance and money.

Allan Leinwand is a Partner at Panorama Capital, where he focuses on technology investments. Allan is a frequent contributor on GigaOM. He co-authored Cisco Router Configuration and Network Management: A Practical Perspective and has been granted a patent in the field of data routing.



When Is the Right Time to Launch Your Own Cloud?

By Alistair Croll

New York-based cloud computing startup 10gen launched today with backing from CEO Kevin Ryan's startup network, Alleycorp. It makes sense, since with several ventures already under his belt, Ryan probably has enough customers to both justify the buildout and break even right away. And the founders know scaling, having built out ad network DoubleClick.

But is it always a good idea to build your own cloud when you get big enough to do so?

Yesterday, for example, I had a great chat with Lana Holmes, a Bay Area startup maven, about product management and how to focus on doing the one thing that matters to your company. "The example I use is Amazon," she said. "They just focused on selling books. And look at them now."

At their root, Amazon's EC2 and S3 offerings are the result of excess capacity from sales. The offerings have paved the way for an online world in which compute power is a commodity. The company has subsequently built, on top of those offerings, a layer of billing, services and support for them.

The motivation behind the creation of 10gen is similar: If you successfully launch a number of web firms, at a certain point the economies of scale of others' clouds start to fall away and you may as well run your own.

It's easier than ever to launch your own cloud. You've got grid deployment tools from folks like 3Tera and Enomaly. Virtualization management can be had from the likes of Fortisphere, Cirba and ManageIQ, to name just a few. And license management (built into cluster deployment from companies like Elastra) is knocking down some of the final barriers to building a cloud that you can offer to third parties as well.

But imagine a world in which there are hundreds of clouds to choose from. Moving a virtual machine is supposed to be as easy as dragging and dropping, and cloud operators will hate that. They'll resist, putting in proprietary APIs and function calls. Applications and data won't be portable. You'll be locked in to a cloud provider, who will then be free to charge for every service. Sound familiar?


My guess is that as the cloud computing market grows and matures, one (or more) of three things will happen:

1. Standardization and portability, in which consortia of cloud vendors agree to a standard set of APIs and coding constraints that guarantee interoperability. This isn't just about the virtual machines; they're fairly standard already. It's about the data storage systems and the control APIs that let cloud users manage their applications. This is the mobile phone model, where number portability is guaranteed and there are well-known services like voice mail and call forwarding.

2. Shared grid computing, in which smaller clouds sell their excess capacity to bigger clouds. This would let the big cloud dominate while paying the smaller cloud just enough to stop it from launching an offering of its own. Think of this as the electric company model, selling computing between clouds the way a solar-powered household can pump excess electricity into the power grid.

3. Specialization, where clouds are good at certain things. You'll get OS-specific clouds (Heroku is already providing optimized Rails deployment atop EC2). It's only a matter of time before we see clouds tailored for specific industries or the services they offer, anything from media to microtransactions. Sort of like the cable channel model, with specialized programming that allows niche channels to survive.

Whatever happens, it's clear that good old-fashioned branding, plus a healthy dose of experience, will be key to winning as a cloud provider.

During a panel at Interop last week that I sat on with folks from Amazon, OpSource, Napera, Syntenic and Kaazing, I asked the audience how many of them would entrust Microsoft to run a cloud with Microsoft applications, and how many would prefer to see Amazon running a Microsoft kernel on EC2. Roughly 75 percent said they'd trust Amazon to run Microsoft's own apps rather than Microsoft.

So when's the right time to launch a cloud computing offering of your own? Unless you have the branding and reputation to support that launch, or you can re-sell excess capacity to partners, or specialize, maybe never.

Alistair Croll is a senior analyst at research firm Bitcurrent, covering emerging web technologies, networking, and online applications, and is a frequent contributor on GigaOM. Prior to Bitcurrent, Alistair co-founded Coradiant, a leader in online user monitoring, as well as research firm Networkshop.


Why Cloud Computing Needs Security

By Alistair Croll

Bribery, extortion and other con games have found new life online. Today, botnets threaten to take vendors down; scammers seduce the unsuspecting on dating sites; and new viruses encrypt your hard drive's contents, then demand money in return for the keys.

Startups, unable to bear the brunt of criminal activity, might look to the clouds for salvation: After all, big cloud computing providers have the capacity and infrastructure to survive an attack. But the clouds need to step it up; otherwise, their single points of failure simply provide more appealing targets for the bad guys, letting them take out hundreds of sites at once.

Last Friday, Amazon's U.S. site went off the air, and later some of its other properties were unavailable. Lots of folks who wouldn't let me quote them, but should know, said that this was a denial-of-service attack aimed at the company's load-balancing infrastructure. Amazon is designed to weather huge amounts of traffic, but it was no match for the onslaught.

When it comes to online crime, the hackers have the advantage. A simple Flash vulnerability nets them thousands of additional zombies, meaning attacks can come from anywhere. During Amazon's attack, legitimate visitors were greeted with a message saying they were abusing Amazon's terms of service, which could mean that those visitors were either using PCs that were part of the attack or were on the same networks as infected attackers. The botnets are widespread, and you can't block them without blocking your customers as well.

Other rackets give the attacker an unfair edge, too: It takes an army of machines to crack the 1024-bit encryption on a ransom virus, but only one developer to write it.

A brand like Amazon can weather a storm, because people will return once the storm has passed. But just look at the Twitter exodus to see how downtime from high traffic loads can tarnish a fledgling brand. SlideShare survived such an attack in April, and while many other sites admit to being threatened, they won't go on the record as saying so.

Up-and-coming web sites are often great targets, as they often lack the firewalls, load-balancers and other infrastructure needed to fight back. And it's not just criminals: In some cases the attacker is a competitor; in others, it's someone who just doesn't like what you're doing.


Fighting off hackers is expensive. Auren Hoffman calls this the Black Hat Tax, and points out that many top-tier Internet companies spend a quarter of their resources on security. No brick-and-mortar company devotes this much attention to battling fraud.

Wanting to survive an attack is yet another reason for startups to deploy atop cloud computing offerings from the likes of Amazon, Google, Joyent, XCalibre, Bungee, Enki and Heroku. But consolidation of the entire Internet onto only a few clouds may be its Achilles' heel: Take down the cloud, and you take down all its sites. That's one reason carriers like AT&T and CDNs like Akamai are betting that a distributed cloud will win out in the end.

Cloud operators need to find economies of scale in their security models that rival the efficiencies of hackers. Call it building a moat for the villagers to protect them from the barbarians at the gate. Otherwise, this will remain a one-sided battle that just gives hackers more appealing targets.


10 Reasons Enterprises Aren't Ready to Trust the Cloud

By Stacey Higginbotham

Many entrepreneurs today have their heads in the clouds. They're either outsourcing most of their network infrastructure to a provider such as Amazon Web Services or are building out such infrastructures to capitalize on the incredible momentum around cloud computing. I have no doubt that this is The Next Big Thing in computing, but sometimes I get a little tired of the noise. Cloud computing could become as ubiquitous as personal computing, networked campuses or other big innovations in the way we work, but it's not there yet.

Because as important as cloud computing is for startups and random one-off projects at big companies, it still has a long way to go before it can prove its chops. So let's turn down the noise level and add a dose of reality. Here are 10 reasons enterprises aren't ready to trust the cloud. Startups and SMBs should pay attention to this as well.

1. It's not secure. We live in an age in which 41 percent of companies employ someone to read their workers' email. Certain companies and industries have to maintain strict watch on their data at all times, either because they're regulated by laws such as HIPAA or the Gramm-Leach-Bliley Act, or because they're super paranoid, which means sending that data outside company firewalls isn't going to happen.

2. It can't be logged. Tied closely to fears of security are fears that putting certain data in the cloud makes it hard to log for compliance purposes. While there are currently some technical ways around this, and undoubtedly startups out there waiting to launch their own products that make it possible to log conversations between virtualized servers sitting in the cloud, it's still early days.

3. It's not platform agnostic. Most clouds force participants to rely on a single platform or host only one type of product. Amazon Web Services is built on the LAMP stack, Google App Engine locks users into proprietary formats, and Windows lovers out there have GoGrid, the cloud offering from the ServePath guys. If you need to support multiple platforms, as most enterprises do, then you're looking at multiple clouds. That can be a nightmare to manage.

4. Reliability is still an issue. Earlier this year Amazon's S3 service went down, and while entire systems may not crash, Mosso experiences rolling brownouts of some services that can affect users. Even inside an enterprise, data centers or servers go down, but generally the communication around such outages is better and, in many cases, fail-over options exist. Amazon is taking steps toward providing (pricey) information and support, but it's far more comforting to have a company-paid IT guy on which to rely.


5. Portability isn't seamless. As all-encompassing as it may seem, the so-called cloud is in fact made up of several clouds, and getting your data from one to another isn't as easy as IT managers would like. This ties to platform issues, which can leave data in a format that few or no other clouds accept, and also reflects the bandwidth costs associated with moving data from one cloud to another.

6. It's not environmentally sustainable. As a recent article in The Economist pointed out, the emergence of cloud computing isn't as ethereal as it might seem. The computers are still sucking down megawatts of power at an ever-increasing rate, and not all clouds are built to the best energy-efficiency standards. Moving data center operations to the cloud and off corporate balance sheets is kind of like chucking your garbage into a landfill rather than your yard. The problem is still there; you just no longer have to look at it. A company still pays for the poor energy efficiency, but if we assume that corporations are going to try to be more accountable with regard to their environmental impact, controlling IT's energy efficiency is important.

7. Cloud computing still has to exist on physical servers. As nebulous as cloud computing seems, the data still resides on servers around the world, and the physical location of those servers is important under many nations' laws. For example, Canada is concerned about its public sector projects being hosted on U.S.-based servers because, under the U.S. Patriot Act, that data could be accessed by the U.S. government.

8. The need for speed still reigns at some firms. Putting data in the cloud means accepting the latency inherent in transmitting data across the country and the wait as corporate users ping the cloud and wait for a response. Ways around this problem exist with offline syncing, such as what Microsoft Live Mesh offers, but it's still a roadblock to wider adoption.

9. Large companies already have an internal cloud. Many big firms have internal IT shops that act as a cloud to the multiple divisions under the corporate umbrella. Not only do these internal shops have the benefit of being within company firewalls, but they generally work hard from a cost perspective to stay competitive with outside cloud resources, making the case for sending computing to the cloud weak.

10. Bureaucracy will cause the transition to take longer than building replacement housing in New Orleans. Big companies are conservative, and transitions in computing can take years to implement. A good example is the challenge HP faced when trying to consolidate its data center operations. Employees were using over 6,000 applications, and many resisted streamlining of any sort. Plus, internal IT managers may fight the outsourcing of their livelihoods to the cloud, using the reasons listed above.

Cloud computing will be big, both in and outside of the enterprise, but being aware of the challenges will help technology providers think of ways around the problems, and let cloud providers know what they're up against.

Stacey Higginbotham has over ten years of experience reporting on business and technology for publications such as The Deal, the Austin Business Journal, The Bond Buyer and Business Week. She is currently the lead writer for GigaOM, where she covers both the infrastructure that allows companies to deliver services via the web and the services themselves.


The Craft: Automation and Scaling Infrastructure

By Andrew Shafer

"Progress is made by lazy men looking for easier ways to do things."
Robert A. Heinlein

Until the late 18th century, craftsmen were a primary source of production. With specialized skills, a craftsman's economic contribution was a function of personal quantity and quality, and a skilled artisan often found it undesirable, if not impossible, to duplicate previous work with accuracy. Plus, there is a limit to how much a skilled craftsman can do in one day. Scaling up the quantity of crafted goods to meet increased demand was a question of working more or adding more bodies, both of which potentially sacrificed quality and consistency.

Today, Internet applications and infrastructure are often the creations of skilled modern craftsmen. The raw materials are files, users, groups, packages, services, mount points and network interfaces: details most people never have to think or care about. These systems often stand as a testament to the skill and vision of a small group or even an individual. But what happens when you need to scale a hand-crafted application that many people, and potentially the life of a company, depend on? The drag of minor inefficiencies multiplies as internal and external pressures create the need for more: more features, more users, more servers and a small army of craftsmen to keep it all together.

These people are often bright and skilled, with their own notions and ideas, but this often leads to inconsistencies in the solutions applied across an organization. To combat inconsistency, most organizations resort to complicated bureaucratic change control policies that are often capriciously enforced, if not totally disregarded, particularly when critical systems are down and the people who must sign off have little understanding of the details. The end result is an organization that purposely curtails its own ability to innovate and adapt.

Computers are extremely effective at doing the same repetitive task with precision. There must be some way to take the knowledge of the expert craftsmen and transform it into some kind of a program that is able to do the same tasks, right? The answer is yes, and in fact, most system administrators have a tool belt full of scripts to automate some aspects of their systems. For both traditional craftsmen and system administrators, better tools can increase the quantity and quality of the work performed.
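As a small illustration of turning craft knowledge into a repeatable program, here is a sketch of an idempotent configuration check of the kind such scripts perform; the file path and setting are made-up examples. Configuration management tools generalize exactly this pattern: declare the desired state, then converge the system toward it on every run.

```python
# A sketch of the kind of script the article describes: instead of a
# craftsman editing a file by hand, a program declares the desired state
# and converges the system toward it. The path and setting below are
# made-up examples for illustration.

from pathlib import Path

DESIRED_LINE = "MaxClients 256"                   # hypothetical setting
CONFIG_FILE = Path("/tmp/example-httpd.conf")     # placeholder path

def ensure_line(path: Path, line: str) -> bool:
    """Make sure `line` is present in `path`. Returns True if changed.

    Running it twice changes nothing the second time (idempotence),
    which is what makes automation safe to repeat across many servers.
    """
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False
    path.write_text("\n".join(existing + [line]) + "\n")
    return True

if __name__ == "__main__":
    changed = ensure_line(CONFIG_FILE, DESIRED_LINE)
    print("changed" if changed else "already correct")
```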


Is Infrastructure the New Marketing Medium?

By Steven Woods

In the first decade of major commercial adoption of the Internet, marketers quickly seized upon the new media types it provided, such as email, banner ads, search placements, and now the broad variety of new media options that have appeared in recent years. Marketers, however, adopted these media types in very much the same way that television, radio and print media had been used in prior decades.

Today, progressive marketers are realizing that these new media types can provide a greater depth of information on prospects' interest areas and objections, a service that is often more valuable than the marketing communication itself. A desire to better communicate with prospects based on an understanding of their true interests is driving a shift toward coordinating all communications into a common technology platform. Using that common platform to provide deep insight into prospect interests will drive innovation within marketing for the next decade.

As the information required by prospects is increasingly found online, the nuances of what someone looked at, what caught someone's eye, and what someone reacted negatively to become as important to a marketer as the nuances of body language are to the salesperson communicating face-to-face with a prospect. For marketers to succeed in today's world, they need to become proficient at reading this digital body language.

The value of tracking a prospect's behavior is directly related to the number of marketing touchpoints that can be aggregated: Web, email, direct mail, search, downloads, webinars and whitepapers all tell a piece of the story. Together, they provide direct, actionable insight into the prospect's propensity to buy. To repeatably and reliably provide this type of insight, marketers need an infrastructure that relieves them from the technical details of both launching campaigns across multiple media and tracking the results of individual components through to a web site. Without today's infrastructure, marketing won't be able to innovate at the level of today's expectations.

With this infrastructure in place, new campaigns that coordinate messages, promotions and communication across media types, in real time, based on a prospect's actual interest area are possible. When a prospective condominium buyer spends time looking at two-bedroom units with a lake view, a direct mail offer might be sent highlighting one such unit. When a qualified prospective buyer of network equipment spends significant time digging into the technical specifications of a new router, they might be invited to a detailed technical webinar with the lead engineers of that router.

As marketers explore the prospect insights that the new marketing infrastructure provides, while leveraging the time that is freed up by having a platform that takes care of the mundane details of campaign execution, such innovations will accelerate. We will see the media types that the Internet created, and many media types that existed prior to the Internet, used in novel ways for innovative campaigns that could never have been considered before.
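As a toy illustration of reading digital body language, here is a sketch that aggregates a prospect's touchpoints, scores interest by topic and triggers a follow-up when a threshold is crossed. The event types, weights and threshold are invented for the example; a real marketing platform would supply and tune them.

```python
# Toy sketch: aggregate a prospect's tracked touchpoints, score interest
# per topic and trigger a follow-up when a threshold is crossed.
# Event types, weights and the threshold are invented for illustration.

from collections import defaultdict

EVENT_WEIGHTS = {            # how strongly each touchpoint signals interest
    "web_visit": 1,
    "whitepaper_download": 3,
    "webinar_attended": 5,
    "email_click": 2,
}

FOLLOW_UP_THRESHOLD = 6

def score_by_topic(events):
    """events: iterable of (event_type, topic) tuples from all channels."""
    scores = defaultdict(int)
    for event_type, topic in events:
        scores[topic] += EVENT_WEIGHTS.get(event_type, 0)
    return scores

prospect_events = [
    ("web_visit", "routers"),
    ("whitepaper_download", "routers"),
    ("email_click", "routers"),
    ("web_visit", "switches"),
]

for topic, score in score_by_topic(prospect_events).items():
    if score >= FOLLOW_UP_THRESHOLD:
        print(f"Invite prospect to a technical webinar on {topic} (score {score})")
```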

Steven Woods, co-founder and CTO of Eloqua, leads the company's product strategy and technology vision while working with hundreds of today's leading marketers. Mr. Woods has gained a reputation as a leading thinker on the transition of marketing as a discipline. Most recently, he was named to Inside CRM's Top 25 CRM Influencers of 2007.


Achieving Equality is Critical to the Future of the Internet

By Dr. Lawrence G. Roberts

Inequality, or unfairness in how network capacity is allocated between different homes or computers, is causing major reductions in the actual realized speed of Internet service for almost every user. The magnitude of the problem is well beyond what most people understand, with realized access speed often reduced to as little as a tenth of its potential. For the Internet to truly support all of our imagined uses, video, voice, gaming, social networking and the like, we must eliminate the basic inequality inherent in TCP/IP. To put it simply: Each user must receive equal capacity for equal payment.

Let's consider the residential ISP market. The real goal should be to provide equal capacity to all homes that have paid the same amount, and on some scale, more to those that paid more.

In the current situation, pricing is flat, and any user, via a greedy program like P2P, can capitalize on TCP's preference for multi-flow traffic and drag down the average capacity of all other users. So far, the most common approach to addressing inequality problems is Deep Packet Inspection (DPI), which literally inspects packet contents to find P2P applications and then slows them down or kills them.

However, this inspect-and-destroy approach has led to a new kind of arms race: P2P applications add encryption and rapidly changing signatures, and DPI constantly races to catch up. In a typical network, DPI finds roughly 70 percent of the P2P traffic, and things will only get more difficult as encryption becomes the norm and signatures change even faster. Even at 70 percent detection, the remaining P2P still slows down all the normal users to a third of potential speed.

The problem affects residential users, but it can be even more serious in a school or corporate environment. It is clear that DPI is doomed as a solution for containing P2P. However, a totally different solution is possible.

Each cable or DSL concentrator has a maximum capacity which must be shared at any moment. If all the traffic from each home was rate-controlled to share the total capacity equally, a P2P user with 10 flows would get 10 percent of the capacity per flow when compared to a neighbor downloading a new application with one flow. Both homes would get the same number of bytes delivered in the same amount of time. A third neighbor doing something simple, such as browsing the web or checking his email, would get much faster service than before, since his short-duration flow would not experience any delay or loss. That is, unless he extended his session long enough that the total use neared that of the file transfer users. In that case, he would be treated the same as the others who are consuming the same amount of capacity for the same price.

Since this automatic rate equalization does not require inspection of every packet, it operates at full 10 Gbps trunk rates quite inexpensively compared to using many DPI systems, and the result is complete network usage equality for all users paying for the same service.

Once inequality is eliminated in the network, application vendors can stop devising techniques that unfortunately harm other users and start discovering techniques that deliver improved service. Easing traffic snarls will also bring down the cost of reliable, high-speed Internet service substantially. Without solving the TCP/IP inequality problem, providing affordable Internet service will become extremely difficult, if not impossible.
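To make the arithmetic of that example explicit, here is a small sketch comparing per-flow sharing (roughly what TCP's behavior produces) with the per-home equalization the author proposes, for three hypothetical homes on a 100 Mbps concentrator. The capacity and flow counts are invented for illustration.

```python
# Compare per-flow sharing (roughly what TCP gives you) with per-home
# equalization on a shared concentrator. Capacity and flow counts are
# invented for illustration.

CAPACITY_MBPS = 100.0
homes = {                # home -> number of simultaneous flows
    "p2p_user": 10,
    "app_downloader": 1,
    "web_browser": 1,
}

# Per-flow sharing: every flow gets an equal slice, so the home with the
# most flows takes most of the pipe.
total_flows = sum(homes.values())
per_flow = {h: n * CAPACITY_MBPS / total_flows for h, n in homes.items()}

# Per-home equalization: each home gets an equal share, split among
# however many flows it chooses to open.
per_home = {h: CAPACITY_MBPS / len(homes) for h in homes}

for home in homes:
    print(f"{home:15s}  per-flow sharing: {per_flow[home]:5.1f} Mbps"
          f"   per-home equalization: {per_home[home]:5.1f} Mbps")

# Per-flow sharing gives the 10-flow P2P home ~83 Mbps and the others
# ~8 Mbps each; per-home equalization gives every home ~33 Mbps.
```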

Dr. Lawrence Roberts led the ARPANET team that birthed the Internet as we know it today. The ARPANET cluster included distinguished individuals such as Vint Cerf, who created the core protocol TCP/IP that underlies the infrastructure of our modern IP-based communication systems. We are proud to have Dr. Roberts speak at our Structure 08 conference and proud to present his thoughts here on the problems that P2P and inequality in network capacity cause for consumers.


Does the Internet Need More Roads or Better Traffic Signals? (cont)

The other way to reduce traffic involves each P2P company making tweaks to their software. In October of 2007, BitTorrent launched a function called BitTorrent DNA that recognizes when a network point is too congested and shunts the traffic flow through different areas. Jay Monahan, general counsel for Vuze, says his P2P company started paying more attention to congestion within the last few months as well.

At some point new roads will have to be built. But in the meantime, there are ways to prevent network congestion that don't involve kicking certain cars off the road.


About GigaOM

Find More Success with the Web

The fastest-growing category of today's workforce is the knowledge worker, a trend that's predicted to continue unabated for the next 30 years as more economies become information-driven. To fuel that, a generation of professional web-based workers has emerged; WebWorkerDaily helps them become more efficient, productive, successful and satisfied. The site provides a saber to hack through the ever-growing mountain of information and schedule distractions that conspire to clog up web workers' time; it also provides hands-on reviews and practical analysis of the tools found on the new and emerging web. WebWorkerDaily's team of writers have built successful careers in non-traditional settings; each day they share their practical, resourceful and inspiring secrets with readers.

AUDIENCE
Mobile workers • Distributed project teams • Independent consultants • Developers • Small business owners • IT managers

Calling All Ecopreneurs

The threat of global warming has inspired a new wave of entrepreneurs and innovators to develop technology that could ultimately save our planet. Earth2Tech.com is a news-based web site that chronicles these cutting-edge clean technology startups and their innovations, be they based on solar, biofuels, wind, energy efficiency, green IT, water or other materials. Earth2Tech keeps all members of the eco-ecosystem, from entrepreneurs and investors to students, researchers and policymakers, informed.

AUDIENCE
Investors • Policymakers • Entrepreneurs • Scientists and researchers • Cleantech startup executives • Green-leaning consumers • Tech companies with eco-initiatives • Cleantech lawyers, media representatives, analysts and journalists


Open Source: Find. Evaluate. Collaborate.

There are hundreds of thousands of great open-source, proprietary and web-based applications to choose from today; finding the right one is hard. So in March 2008, Giga Omni Media launched OStatic, a site that delivers a comprehensive repository of open-source applications and a set of tools that allow users to find them, evaluate them and collaborate on them more effectively. OStatic combines Giga Omni Media's insightful and in-depth reporting with cutting-edge community tools to bring better information, case studies and context to users interested in open-source software solutions.

AUDIENCE
Tech-savvy individuals • IT executives • Hackers • C-level executives • Developers • Startups • System administrators • Aspiring founders • Business managers

    CONTACT GIGA OMNI MEDIA:

    Sponsor [email protected]

    Events and PR [email protected]

Editorial Inquiries [email protected]