
Oppenheimer & Co. Inc. does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the firm may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in making their investment decision. See "Important Disclosures and Certifications" section at the end of this report for important disclosures, including potential conflicts of interest. See "Price Target Calculation" and "Key Risks to Price Target" sections at the end of this report, where applicable.

March 30, 2012

TECHNOLOGY/SEMICONDUCTORS & COMPONENTS

Rick Schafer 720-554-1119 [email protected]

Shawn Simmons 212-667-8387 [email protected]

Jason Rechel 312-360-5685 [email protected]

Cloudy With A Chance Of ARM

What the Microserver Market Means for Semiconductor Vendors

SUMMARY

The world is going mobile as an expanding base of both consumer and enterprise users connect from an increasing number of devices. Remotely accessing localized files is no longer an acceptable solution and next-generation data centers are being tasked with supporting the migration to the cloud. Further fueling this migration is an ongoing shift from pure compute to data access—moving from heavy computational workloads to millions of relatively smaller workloads. Servers must adapt. x86-based processors have long held a server monopoly, but this changing workload dynamic, compounded by the need for greater efficiencies, is opening the door for alternative processor architectures like ARM. While any material shake-up in the server CPU landscape remains unlikely before 2014, investors should be prepared for change. In this paper, we seek to examine what's behind microserver demand, define its advantages and growth prospects while identifying which semiconductor vendors are poised to benefit.

KEY POINTS

■ A server workload identifies incoming work based on a set of user-defined connection attributes. Where the workload has historically sought to maximize how quickly data might be computed, it now seeks to maximize how quickly data can be accessed. It is a transition from power to speed.

■ The new workload dynamic is best exemplified by Web 2.0 companies where small, high-volume transactions drive business. These companies are beginning to design and build internal data centers and must quickly and efficiently scale capacity. We forecast Web 2.0 data center spending to grow at a 23% CAGR from 2011 to 2016, increasing from $6.3B to $17.8B.

■ A microserver is inherently less powerful relative to a traditional server and seeks to maximize operational and space efficiency. Today CPUs account for 1/3 of server/system BoM, but 2/3 of power usage. Microservers are able to handle "new" workloads with a less powerful CPU. More tantalizing to operators, our analysis demonstrates a 60-70% reduction in the cost of ownership.

■ We believe the microserver market, driven by the move to the cloud, will grow from <1% of the x86 server market today to 21% in 2016. Microserver CPU TAM will grow at an impressive 95% CAGR, reaching $4.5B over the same period. Microserver growth will also expand overall server CPU TAM by $1.5B in 2016 as CPU BoM jumps from 33% today to roughly 50% in 2016.

■ As the current market share leader, INTC stands the most to lose from the growth of microservers, though Atom and low-power Xeon will undoubtedly capture share. ARM vendors AMCC, NVDA and Calxeda, Tilera with its own architecture and AMD/SeaMicro are today's new breed and sit poised to benefit.

EQUITY RESEARCH

INDUSTRY UPDATE

Oppenheimer & Co. Inc. 85 Broad Street, New York, NY 10004 Tel: 800-221-5588 Fax: 212-667-8229


Cloudy With a Chance of ARM

The world is going mobile; a rapidly expanding base of both consumer and enterprise

users is increasingly accessing data and applications from a growing number of devices

across an expanding number of access points. Remotely accessing localized files or

applications is no longer an acceptable solution and next-generation data centers are

being tasked with supporting the transformational migration to the cloud. Compounding

the migration is an ongoing shift from single-thread, heavy workloads to millions of

relatively smaller computational workloads. Servers must adapt, and where x86 has long

had a foothold on the semiconductor server market, the changing workload dynamic and

rising importance of operational and space efficiency have begun to pave the way for

alternative processor and system architecture. We believe a fundamental shift in the

server market is yet in the top of the first inning. In this paper, we seek to examine the

growth and drivers of the microserver market, the advantages of more efficient

architecture and which semiconductor vendors may be poised to benefit.

The Intersection of the Cloud and the Web

It is widely known that the exponential growth of mobile devices, applications and data is

straining today’s network infrastructure. You may be reading on your desktop, notebook,

tablet, smartphone, or even on your connected-TV. Said devices may be plugged in via a

traditional home or business ethernet connection, connected to a public or private WiFi

network, MiFi network, mobile hotspot or to a 3G/4G wireless network. That these varying

devices and connections can access the same data and the same applications presents a

growing problem of complexity. And an expanding number of mobile devices (access

points) and a booming number of applications/data only compound the problem of

complexity that cloud computing attempts to ease.

Exhibit 1 – Growing Complexity of Connected Devices

Source: Cisco and Oppenheimer & Co.

Exhibit 1 portends a five-fold increase in the total possible combinations (complexity) of

access points and applications between 2010 and 2015. Consider that, according to The

Cisco Global Cloud Index, the percentage of global internet users using more than five

network connected devices will grow from 36% in 2010 to 69% by 2015; and those using

more than ten devices will quadruple from 6% in 2010 to 24% in 2015. Separately, the

Cisco Visual Networking Index forecasts that mobile cloud traffic will grow 28-fold from

2011-2016 and that cloud applications will account for 71% of global mobile data traffic in

2016. Simply put, the data and applications that we have all come to depend upon can no

longer be stored on a localized server and remotely accessed; the number and variety of

connected devices will demand that it all happens in the cloud.

The problem of complexity, however, expands further than simply moving enterprise

applications onto the cloud. The movement to the cloud will likely drive the demand for

traditional data centers. Where the web, and its millions of apps, maps, downloads and

streams per day intersects with the cloud is where the workload changes.


The Workload Has Changed…

As data, services and applications move toward the cloud, and as the world goes mobile,

Web 2.0 companies are increasingly driving traffic. A server workload identifies incoming

work based on a set of user-defined-connection attributes. Attributes are changing and the

type of server workload is transforming with the relative growth of these applications

because they inherently ask the server to perform different tasks. Large, single-thread

computational-heavy enterprise workloads of the past, while still important and still the

vast majority of workloads, have given way to millions of tiny, fractional workloads. A

Google search, a social media status update or a high-frequency trade is each a

dramatically different task for the data center than a traditional enterprise-class workload.

Where the workload previously sought to maximize how fast a series of data could be

computed, the workload now seeks to maximize how quickly data can be accessed. It’s a

move from power to speed, and with always-on-always connected computing, the demand

for access to data is growing at the rate of the cloud.

…And What It Means for the Data Center

In plain English, the computational horsepower needed for a traditional enterprise

workload is akin to the necessity for an 18-wheeler to transport heavy military artillery from

New York to Los Angeles. The process will be time-consuming and expensive in the form

of energy transportation cost. If one only needs to transport a case of 24 pint-sized

individual beers to the neighborhood barbeque, this same 18-wheeler is stodgy, difficult to

drive and highly energy inefficient. Why drive the 18-wheeler when the hybrid electric

sedan would suffice? Better yet, the fully electric sedan.

This is not a question of, or a problem that can be easily solved by virtualization.

Virtualization addresses chronic server underutilization by consolidating workloads onto

individual virtual machines that are hosted on a single physical server. Virtualization

reduces the size, complexity and administrative costs of the data center, but doesn’t

fundamentally address how the CPU handles workloads. Even in a virtualized and fully

utilized environment, the shoe still doesn’t fit for higher performing CPUs. For heavy

workloads, virtualized servers need high horsepower. But for hundreds or millions of

fractional workloads even in a fully utilized data center, a lower power CPU can and

should handle the task. As the most rudimentary element of the data center, the CPU

needs to become more efficient. Where it could not stray from the highest performing form

with workload constraints of the past, web and cloud workloads of today now permit a

move down the power train.

As personal applications and services hosted in the cloud converge with Web 2.0 data

centers, fractional workloads are becoming an increasingly large percentage of data

center demand. Because of the rapid growth of the mobile computing era, the data center

must adapt. There must be varying types of servers to meet the demand for varying types

of workloads, and more efficient servers must be able to scale-out capacity in addition to

today’s infrastructure. It’s not a question of upping compute power, but of scaling out

compute power. And as the market begins to demand low-horsepower, ultra-low power

servers that can accomplish exactly that, the door is opening for alternative architectures

and designs to challenge the traditional high-horsepower x86 incumbents.

If validation of the changing workload is required, look no further than AMD’s late-

February acquisition of SeaMicro. SeaMicro was to be termed the “architecturally

ambiguous wild card” in this paper and is, in our view, a landscape-changing acquisition

by AMD. As the first mover at the system level toward microservers, SeaMicro had

developed a proprietary architecture which the company claims reduces total system

power by 75% and can be processor agnostic (e.g., x86, ARM, etc.). By acquiring the

leading low-power system vendor that can reduce total power comparable to what a single

ARM SoC could achieve, AMD/SeaMicro has positioned itself well against both Atom and


ARM-based competitors. Let’s now look at how and why the data center will change to

meet the evolving workload demands and what this means for silicon vendors.

Defining the Term: Microserver

Intel officially coined the term microserver in 2009 before such a market really existed, and

though the market is still nascent today, Intel’s definition still stands. As we dissect this

market, it’s important to understand what exactly a microserver is. By Intel’s definition (and

by ours) a microserver is:

- Single socket;

- Lightweight, low power and low cost; and

- Part of a shared infrastructure environment whereby many small servers are

packaged into a larger ecosystem.

We will use the terms microserver, “many-core,” and “ultra-low power server”

interchangeably throughout this report, though there is a nuanced difference between

“many-core” and a microserver. The concept is the same and has always hinged upon

maximizing space and operational efficiency while minimizing capital intensity. However,

within the microserver market, not all CPUs are created equally and not all will have the

ability to put many cores onto one chip in a cache-coherent manner. Many-core servers

(loosely defined as a processor with 64 or more cores) can serve to effectively reduce

software porting costs, something that is likely to be of primary concern when first

evaluating a microserver based on alternative architecture.

What has kept the market nascent to this point, namely software, architectural inhibitors

and a lack of motivation to innovate from the incumbents, has begun to change. We

believe the microserver market is yet in the top of the first inning of growth and is poised to

become a meaningful portion of the market over the next 3-5 years.

The Economics: Money and Sense

As the workload changes, the demand pull is happening today. But throughout the supply

chain, one message is clear: systems vendors and end users will not make the switch to

an alternative architecture unless the switch presents a value that is magnitudes better.

Porting software on top of existing infrastructure is no small task and the inherent value of

a more efficient CPU must be worth the switching costs, man-hours, and gamble on a new

architecture—we think the message is clear. The supply side of the equation has not

made the economics work…until now.

The analysis in Exhibit 2 examines the Total Cost of Ownership (TCO) of a traditional

server today and a microserver. We have simplified things into round numbers and used a

standard server today and a custom-built microserver based on today’s announced specs.

It is important to note that one many-core processor can replace a standard x86 cluster

with many smaller and more efficient cores, thereby maximizing efficiency at the CPU

level. Or that several highly efficient single socket servers can replace one single multi-

socket server.

We have done our best to build a microserver that would have enough performance to

scale-out a typical Web 2.0 workload. It’s not a question of whether the horsepower of a

microserver(s) matches the horsepower of a traditional server. It doesn’t, and it won’t, and

that’s the point; it doesn’t need to be as powerful. Only just powerful enough to scale out

simple workloads. And the TCO analysis is meant to compare the merits of a microserver

relative to a traditional server, not any one particular architecture against another. We

used four years as our useful life. Because we’ve held acquisition costs at a constant


level, the argument therefore shouldn’t be made that depreciation would alter the cash

flows of our analysis.

Exhibit 2 – Cost of Ownership Analysis

Source: Tilera, Oppenheimer & Co.

One factor that we did not include in our initial analysis: software switching costs. Software

porting costs can take substantial man-hours and we believe the initial costs can be

similar to the total amount spent on new servers. We will later see that Web 2.0 data

centers are where microservers will play. Where traditional enterprises may be hesitant to

begin incrementally adopting microservers because of very high software porting costs,

this is not the case with Web 2.0 vendors. These companies, in many cases, use either

internally developed software or public open-source software, meaning that there is a

much smaller barrier to entry and it is much easier for new architectures to make dramatic

savings on cost. The lower the software costs, the greater the opportunity for new

architectures to gain initial traction. All this said, software porting costs are highly variable,

and may range from zero in some cases to millions of dollars in others. Software will differ

on a per-case basis, but we have standardized an average cost for this analysis.

As the volume (or in this case total value purchased) of microservers grows, these porting

costs (large or small) will not increase linearly and we believe the initial total cost of a

similarly-equipped microserver would decline from ~2x a standard server to closer to

~1.5x over time. But as Exhibit 3 below demonstrates, even including substantial software

switching costs, the microserver TCO is 63% less than a traditional server.

Exhibit 3 – Cost of Ownership Analysis (Including Software Porting Costs)

Source: Oppenheimer & Co.
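
To make the structure of the comparison concrete, the minimal sketch below mirrors the shape of Exhibits 2 and 3 above (acquisition cost, power, cooling and a one-time software-porting charge over a four-year useful life). Every dollar figure in it is a hypothetical placeholder chosen only to land in the savings range discussed above; none are the actual inputs behind the exhibits.

```python
# Illustrative four-year TCO comparison mirroring the shape of Exhibits 2 and 3.
# All dollar figures are hypothetical placeholders, not the report's inputs.

YEARS = 4  # useful life assumed in the analysis

def tco(acquisition, annual_power, annual_cooling, porting=0):
    """Acquisition plus operating costs (and any one-time porting charge)
    over the useful life."""
    return acquisition + YEARS * (annual_power + annual_cooling) + porting

# Traditional x86 server vs. a similarly priced block of microserver capacity.
traditional = tco(acquisition=5_000, annual_power=4_000, annual_cooling=1_500)
micro = tco(acquisition=5_000, annual_power=500, annual_cooling=125,
            porting=2_500)  # one-time software-porting estimate

print(f"traditional TCO: ${traditional:,}")       # $27,000
print(f"microserver TCO: ${micro:,}")             # $10,000
print(f"savings: {1 - micro / traditional:.0%}")  # ~63%
```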


Our key economic takeaways:

1) Acquisition costs for a traditional server and a microserver are held the

same. The cost per core is substantially different and there could, in theory,

be acquisition cost savings at the microserver level. But we believe end

demand is such that capital expense budgets will be maximized to include

the maximum computing power at the lowest total cost of ownership. A Web

2.0 company, for example, will buy as many microservers as it has dollars in

the capital spending budget. Though software costs can be highly variable,

ultimately, the equation boils down to Performance / Watt (W) / $ (see the sketch following this list).

2) The cost savings of microservers are most apparent at the operating line -

both from power and cooling costs. Efficiency and improved utilization are

the primary drivers of reduced power costs.

3) More difficult to account for is the savings from physical location.

Microservers are inherently smaller and take up much less floor space,

though it is likely that some of these savings may be reflected in the lower

total cooling cost.

4) The performance/core in a microserver is substantially less than a traditional

server. As workloads change to become more about the speed in which a

server can access data rather than the speed in which it can process said

data, the performance/core no longer needs to be maximized. And though

microservers won’t be a solution for all data centers, there is clear value

where workload requirements permit.
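
The sketch referenced in point 1: a minimal illustration of the Performance / Watt / $ framing that, in our view, ultimately decides the purchase. The throughput, wattage and price figures are invented for illustration; only the metric itself comes from the text.

```python
# Minimal sketch of the Performance / Watt / $ metric from point 1 above.
# The throughput, wattage and price figures below are invented for illustration.

def perf_per_watt_per_dollar(requests_per_sec, watts, price_usd):
    """Useful work delivered per unit of power per unit of capex; higher is better."""
    return requests_per_sec / watts / price_usd

# Hypothetical brawny (traditional) node vs. hypothetical wimpy (microserver) node.
brawny = perf_per_watt_per_dollar(requests_per_sec=50_000, watts=400, price_usd=5_000)
wimpy = perf_per_watt_per_dollar(requests_per_sec=12_000, watts=60, price_usd=1_200)

print(f"brawny node: {brawny:.3f} req/s per W per $")  # 0.025
print(f"wimpy node:  {wimpy:.3f} req/s per W per $")   # 0.167
# For light, access-bound workloads the wimpy node wins on this metric even
# though its absolute throughput is far lower; this is the report's core argument.
```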

With 60%-70% average savings in the TCO, the majority of which accrue in operating

expenses, it is clear that microservers can and will have value in the market. Acquisition

costs are notably the same, performance per core is notably lagging in the microserver,

and there are tremendous operating expense savings. And because of internally

developed or open-sourced software, the area with the lowest software barriers to entry

for microservers will be in Web 2.0 data centers.

Sizing the Market

We believe the market for microservers is likely to grow rapidly over the next 10 years as

the value proposition grows. Rather than project a growth of public vs. private cloud, we

will run with Web 2.0 because it represents a blend of both; and because the demands of

Web 2.0 best represent the dynamic of the changing server workload. Hosting

environments are comparable and also representative of this workload dynamic. Web 2.0

represents a blend of public/private cloud because applications can often be freely

transferred across both. The workload, and these companies’ inherent business models,

is all about volume. To forecast the growth of Web 2.0 data center spending, we’ve taken

a bucket of companies that we believe best exemplify the growth and characteristics of

this market. Our list includes GOOG, YHOO, BIDU, ZNGA, LNKD, GRPN, P, EQIX, and

“Other.” We’ve chosen EQIX as it serves as the data center host for many web-based

service and software companies, and “other” can best be defined as social media.

The basket of Web 2.0 companies, which noticeably directly excludes AMZN, CRM,

EBAY, MSFT, NFLX and even iCloud provider AAPL, has grown data center capital

spending at a 95% CAGR since 2009. Where the trend was once to lease data center

capacity (and thereby expense it through the P&L), Web 2.0 companies have quickly

begun to migrate towards internally designed, developed and hosted data centers – a

transition to capital expenses. This move to internally designed data centers has coincided

with the move to open-sourced software, lowering the software barriers to entry for

alternative architecture and making Web 2.0 the most natural fit for microservers. Take


internet radio company Pandora (a relatively small company by any measure) as an

example: In 2010 the company spent less than $2M on its data center. In 2012, Pandora

will have capital expenditures upwards of $3M per quarter (~6x annual increase), which

we are led to believe will be spent predominantly on the data center. Historical spending is

depicted in Exhibit 4.

Exhibit 4 – Estimated Web 2.0 Data Center Infrastructure Spending (2009-11, $M)

Source: SEC Filings, Oppenheimer & Co. Inc.

* Social Media Provider

**Cloud Services Provider

The move from operating expense to capital expense is an important development. As we’ve

seen in our economic analysis, the opex savings of a microserver are substantial and the

value proposition is undeniable. Why this is most important for Web 2.0 companies stems

most notably from their definition: as rapidly growing companies with finite resources.

These companies must invest in headcount and other operating resources, thereby

limiting opex freedom. The transition to capital spend frees up opex, and the substantial

opex savings from microservers relative to traditional servers allows for incremental cash

flow that can be reinvested in additional server capacity. With such rapidly growing

demand, Web 2.0 companies must save to invest, and savings spur incremental

investment in servers. Look no further than the following SEC filings:

Google: In order to manage expected increases in internet traffic, advertising

transactions, and new products and services, and to support our overall global business

expansion, we expect to make significant investments in our systems, data centers,

corporate facilities, information technology infrastructure, and employees in 2012 and

thereafter.

Zynga: We intend to invest in our network infrastructure, with the goal of reducing our

reliance on third-party web hosting services and moving towards the use of self-operated

data centers.

Groupon: We have spent and expect to continue to spend substantial amounts on data

centers and equipment and related network infrastructure to handle the traffic on our

websites and applications. The operation of these systems is expensive and complex.


The overarching message is clear: Web 2.0 companies must continue to aggressively

invest in data center infrastructure to support tremendous traffic growth. That opex

savings spur incremental capital investments will likely only accelerate the investment

cycle over the next 2-3 years, and these are exactly the buyers who have fractional

workloads that are most efficiently processed on microservers. As we will later see,

microservers cannot fully scale to support the entirety of these ecosystems, but they will

represent the most efficient and most compelling way to scale out capacity. Even Google

has publicly stated that microservers, as built today on wimpy cores, aren’t a viable

solution for warehouse style data centers; however, we think the introduction of competing

and lower power 64-bit ARM architecture alters the wimpy core argument.

The OPCO bucket of Web 2.0 companies had aggregate data center capital spending of

~$6.3B in 2011, a 64% Y/Y increase. We believe this pace of data center investment is

unsustainable long term, and project the data center investment growth rate to slow

markedly by 2016. The primary driver of slowing growth is not because of slowing demand

for capacity, but the result of more efficiently scalable servers. Even at a slower pace of

annual growth, Web 2.0 data center spending would likely approach $18B by 2016. By

making the opex-to-capex transition, we estimate this $18B spend is a 10.6x increase

from 2009.

Exhibit 5 – Estimated Web 2.0 Data Center Spending (2011-16, $M)

Source: Oppenheimer & Co. Inc.
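
As a back-of-envelope check, the growth figures quoted in this section hang together; the short sketch below recomputes them using only numbers stated in the text ($6.3B in 2011, ~$17.8B by 2016, and the 10.6x multiple versus 2009).

```python
# Back-of-envelope check on the Web 2.0 spending figures quoted in the text.

spend_2011 = 6.3    # $B, aggregate Web 2.0 data center capex in 2011
spend_2016 = 17.8   # $B, forecast

cagr_2011_2016 = (spend_2016 / spend_2011) ** (1 / 5) - 1
print(f"implied 2011-16 CAGR: {cagr_2011_2016:.0%}")   # ~23%, matching the forecast

implied_2009 = spend_2016 / 10.6                       # from the stated 10.6x multiple
cagr_2009_2011 = (spend_2011 / implied_2009) ** (1 / 2) - 1
print(f"implied 2009 base:    ${implied_2009:.1f}B")   # ~$1.7B
print(f"implied 2009-11 CAGR: {cagr_2009_2011:.0%}")   # ~94%, close to the 95% cited above
```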

Rather than classifying spending as some form of public cloud or private cloud, we believe that

a bucket of “Web 2.0” data center spending is far more appropriate. As we have

discussed, the workload of mobile Web 2.0 companies varies from the traditional cloud

but, at the same time, cannot be truly excluded from any measure of “the cloud.” While

this may not be perfect, it does show us who is driving the changing workload dynamic.

The anticipated 5x increase in mobile device connections and 28x increase in mobile

cloud traffic through 2016 demonstrates how quickly those companies that facilitate the

data behind this traffic must build out their infrastructure to support the rapidly growing

mobile ecosystems.

Our ten Web 2.0 companies primarily care about volume transactions, and driving these

volumes as high as possible. Just as LinkedIn benefits from a growing user base,

Groupon users benefit from larger scale and Google benefits from a rising number of

mobile searches on any mobile OS. The higher the transaction volume, the better and the

faster these businesses grow. For a server to be optimally efficient when processing these

light transactions, it doesn’t need high compute intensity—just the ability to process many

at once. A professional weightlifter is best suited to bench press 400 lbs; a professional


juggler is best suited to juggle 12 tennis balls. These types of smaller, lighter transactions

are the exact workloads that will ultimately be best supported by a lower-power

architecture in the data center.

According to IDC, there were 8.05M x86 servers shipped in 2011. At a blended average

price of $4,837, the x86 server TAM was $39B. This is the server market and not the CPU

market, and we have ignored all but x86 servers. We do not believe ultra-low-power

architectures can compete in the “other” server market, nor would they want to. We in fact

believe that standard x86-based servers, which represented 64% of all server spending in

4Q11, will expand over time to encapsulate a larger percentage of the remaining server

market. For our purposes, “total” server market represents only what is today the x86

server market. We believe our bucket of Web 2.0 spending thus represented 16% of total

x86 server spending in 2011. Splitting the total IDC server forecast into Web 2.0 spending

and “All Other,” we forecast that Web 2.0 spending will exceed 40% of total x86 spend by

2016.

Exhibit 6 – Total Server Spending (2011-16, $M)

Source: IDC, Oppenheimer & Co. Inc.
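
The market-sizing arithmetic above can be reproduced directly from the quoted IDC unit and ASP figures; a quick check:

```python
# Quick check on the market-sizing figures quoted above (IDC units and blended ASP).

units_2011 = 8.05e6   # x86 servers shipped in 2011
blended_asp = 4_837   # $ per server

x86_tam_2011 = units_2011 * blended_asp
print(f"2011 x86 server TAM: ${x86_tam_2011 / 1e9:.1f}B")  # ~$38.9B, i.e. roughly $39B

web20_spend_2011 = 6.3e9  # Web 2.0 basket spending from the prior section
print(f"Web 2.0 share of x86 spend: {web20_spend_2011 / x86_tam_2011:.0%}")  # ~16%
```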

While Web 2.0 is not the only area where microservers will play, it is the easiest to

quantify. Today, microservers only participate in the standalone, maximum efficiency

market (and account for less than 1% of total volume). There are multiple reasons why

micro/ultra-low-power servers have not yet expanded beyond a tiny niche of the market.

But as efficiency becomes more important in the grand scheme of the server market over

time, as Web 2.0 companies substantially outgrow the rest of the market and as low-

power architecture reaches critical mass, we see the available market expanding to

encapsulate a larger piece of the total pie. Servers today account for 2.5% of all electricity

usage in the United States—double that of just six years ago. This sends a clear message

that the server market on whole must begin to focus on efficiency. We don’t believe that

microservers can or will ever play in the High Performance Computing (HPC) market. But

they can, and are likely to, play in all other areas of the market— not as the primary

source of server capacity, but as a cheap, quick and efficient method of scaling capacity

on top of existing infrastructure.

Today, we believe microservers represent just 5% of total Web 2.0 spending and <1% of

the total server market. Intel Xeon E5 (Romley) can be low(er) power, but can it build a

microserver? No. Intel Atom in a SeaMicro system? Now we have a microserver. To date,

HP’s “Project Moonshot,” based on Calxeda silicon, and Quanta Computer’s S2Q, based

on Tilera silicon, are the only true microservers outside of SeaMicro, and only


Quanta/Tilera is in production. Dell recently introduced PowerEdge and we believe

Huawei is working on ultra-low power servers through its Project Borg. The move began

from the ground up with innovation from SeaMicro, Tilera and Calxeda. The message is

now being heard loud and clear at the OEM level. We believe that Tilera has begun

production with multiple top-20 Web 2.0 companies and that design wins are beginning to

mount for Calxeda. This recognition is trickling to the ODM level, where Web 2.0

companies can build their own internally developed systems.

Exhibit 7 – Microserver Market Penetration

Source: Calxeda, Oppenheimer & Co. Inc.

With insatiable demand for higher capacity and substantially lower power, we believe

microservers will incrementally penetrate the total server market and become an

increasingly large percentage of total Web 2.0 spending over the next two years. Early

envelope-pushers Calxeda/Tilera and SeaMicro will likely see incremental traction. Atom

will likely have success as well. However, as ARM 64-bit server processors enter in 2014,

we believe the market will reach critical mass and see hyper-growth through 2016.

Exhibit 8 – Microserver Penetration and Growth Rates (2011-16E, $M except ASP)

Source: IDC, Oppenheimer & Co. Inc. estimates


The Silicon Opportunity

We see microservers accounting for 21% of the total server market (remember, our “total”

is only today’s x86 market) by 2016, driven primarily by 50% penetration into Web 2.0

data center capital spending budgets. Intel has publicly stated that it believes microservers

will be 10% of the market in 2015. The 5% delta in our analysis stems from our use of only

the applicable x86 server market (3%) and Intel’s under-estimation of the ARM catalyst in

2014 (2%). Over time, we have no reason to believe that ultra-low power servers cannot

account for two-thirds of Web 2.0 incremental server spending. Exhibit 9 below shows

the total Web 2.0 spending relative to the total spent on microservers and ultra-low power

servers. Rising penetration in a rapidly growing market points to the inflection point in

2014 as ARM 64-bit silicon hits volume production. And though alternative vendors appear

poised to benefit more than the incumbents, it’s far from a zero-sum game.

Exhibit 9 – Microserver Penetration and Growth Rates (2011-16E, $M)

Source: Oppenheimer & Co. Inc. estimates

As seen above, we forecast the Microserver/ULP market to grow from <1% of total (x86)

servers today to 21% in 2016. Important to note here is that IDC only captures servers

built at OEMs, not ODMs. As many Web 2.0 companies move to internally designed

systems, notably social media companies which have initial builds with Quanta

Computer, they will likely shift at least some production to the ODM level—a shift not likely

to be captured by IDC. In any regard, we do not expect an inflection point before 2014,

and here’s why:

1) System vendor SeaMicro dominates the direction of the microserver market

today and startups Tilera and Calxeda, while each are uniquely positioned,

currently lack the scale necessary to drive the market. Supply-side economics do

not yet work. Meanwhile, traditional x86 powerhouse INTC lacks the motivation

to push ultra-low-power architecture when its livelihood depends upon high-

horsepower CPUs. Just-introduced Romley is a meaningful step for the

mainstream server market, but won’t compete in microservers and, more than

anything, validates the need for lower-power server solutions across the board.

2) Tilera is the pure-play on true “many-core” architecture but as a startup it lacks

the scale and breadth to drive the market. We believe many-core architecture will

capture the low-hanging fruit through 2014, but will first need to reach critical

mass to become a meaningful piece of the pie. Many-core Tilera will ride the sea


of change, created when the microserver market as a whole reaches critical

mass. We’ve already seen notable social media companies as early adopters of

many-core processors in internal data centers.

3) Points 1 and 2 take us to the key of microserver adoption: ARM. Calxeda is

gaining solid traction today with 32-bit ARM architecture, but something is

missing: more powerful 64-bit. From both a hardware and a software perspective,

64-bit ARM is the technical secret sauce that makes the value proposition work.

The feelers in 32-bit ARM have to date been prepping the market for the arrival

of 64-bit, which is unlikely to happen in volume before 2014.

4) AMCC and NVDA are among the first to have announced 64-bit ARM licenses.

AMCC was the first to unveil its server plans, and will have a 64-bit ARM SoC in

silicon by early 2013. AMCC is well positioned here, but the real momentum is

likely to build when NVDA/MRVL (and/or QCOM and others) announce ARM

server-on-chips. The entrance of multiple, established companies with 64-bit

ARM SoCs will create the splash that the marketplace needs to give ultra-low

power servers critical mass.

5) Lastly, this critical mass is so important because of software. OEM qualification

and sourcing aside, software is the biggest inhibitor to migrating to any

architecture outside of x86. SeaMicro can succeed in the interim because it

competes with ARM SoCs on power/efficiency specs while still using x86 silicon.

The port from 64-bit x86 to 32-bit ARM makes little economic sense, and to

make the leap to different architecture, we believe these solutions must present a

magnitude-more compelling value. As we hit critical mass of ARM 64-bit and the

efficiency savings begin to outweigh software porting costs by multiple times

over, the ultra-low power server market can reach its inflection point.

There are several questions we must answer on the above five points before we can be

led to believe alternative architecture can successfully power a microserver. The question

that is perhaps most pressing in more markets than just microservers is the ARM vs. x86

debate: Is ARM architecturally more efficient than x86, and is it powerful enough to power

mass-market servers? The short answers to these questions are “yes” and “it will be.”

First, x86 has been the dominant force in the server market for 20-plus years. This means

an x86 chip must support over 20 years’ worth of software, legacy applications and must

be backwards compatible with all prior systems. ARM, which has never had a presence in

the server market, has dropped unneeded instruction sets and software, and the ARMv8

cores have optimized performance to make it wholly more efficient. Substantial integration

further contributes. In 20 years, will ARM be forced to support its legacy applications? You

bet, but it will still be two decades ahead of x86. Further, because of backwards

compatibility and a wholly larger number of instruction sets, an x86 CPU must be run on a

much larger die. ARM can deliver similar performance on a much smaller die size,

creating an inherent efficiency advantage.

Then there is the question of performance. There is no question, in our view, that if a

server wants or needs higher horsepower, it will gravitate to x86. We don’t believe the

server market will shake out like the mobile baseband market and x86, in our view, will

maintain dominant share for some time. If heavy, multi-thread workloads are demanded,

x86 is clearly the solution. However, as workloads continue to change and Web/Cloud

workloads take up an increased amount of the total, Web 2.0 servers won’t need the

highest horsepower servers. Not for the entirety of their data center needs, anyway.


The rest of the market may even gravitate towards incrementally lower power architecture

over time. 32-bit ARM isn’t viewed as a major threat today—and even 64-bit ARM won’t

come close to catching x86 on raw horsepower—so the full solution to the ARM/x86

debate is many-core. Semiconductor vendors must compensate for ARM’s performance

disadvantage by bringing more CPU cores into the equation. This can be cost effective at

the CPU hardware level, but is it economically efficient at the software level? Or scalable?

Slower and more energy-efficient cores have been deemed “wimpy” cores relative to their

higher-performing “brawny” core counterparts. The debate of brawny vs. wimpy cores and

the theoretical applicability of wimpy cores were framed by Google Fellow Urs Hölzle

in his 2010 paper Brawny Cores Still Beat Wimpy Cores, Most of the Time. Hölzle

argues that wimpy cores must jump three primary hurdles before they can make

business sense for warehouse-style servers. First, wimpy core CPU performance cannot

lag current-generation commodity processors by more than a factor of two. If they do,

software development costs to optimize code back to the prior latency would outweigh the

savings on the hardware side. Second, non-CPU costs of basic infrastructure must be

shared across multiple servers and scaled accordingly. Finally, in a question of utilization

and efficiency, wimpy cores must be able to handle large enough applications to avoid

taking longer, partitioning the application into yet smaller pieces and decreasing utilization

while increasing overhead. This final point demonstrates the difficulty and diminishing

returns of scaling an application across many cores and argues that wimpy cores simply

cannot be useful in the entirety of the data center. We agree, and believe that there are

two primary solutions to the wimpy core problem: 1) wimpy cores will become an easy and

cost-effective way to scale capacity on top of existing or new brawny cores; and 2) many-

core, where you avoid the wimpy core argument altogether.

In short, it has never made business sense to run with wimpy cores on a large scale. And,

in our view, it won’t make business sense to use wimpy cores (or many-core) in the

entirety of a data center, even a data center tasked with only fractional workloads. This is

why we have assumed only 50% penetration of ultra-low power servers within Web 2.0

spending. In our view, the line is blurred between an extremely-low power server built on,

say, Xeon, and an attempt at a brawny-core microserver based on this same Xeon CPU.

The difference is hard to measure, define and quantify, and we see it becoming more

blurred, not less, over time. Another point worth mentioning here: for a Web 2.0

company where transactions and volume are the business model, having a server “down”

for any period of time is detrimental to business. Having multiple (wimpy core

microservers) scale on top of existing servers helps to spread out workloads and mitigate

this risk—and this theme of “scaling” at the processor level is perfectly consistent with the

need to “scale” capacity at the server level. More data simply needs to be accessed more

quickly.

Semiconductor vendors today are getting close to solving the wimpy core problem and

making the economics of the microserver work. SeaMicro has largely evaded the wimpy core

argument by removing power-hungry components, making the remaining components

more efficient, consolidating them onto a super-efficient motherboard and linking multiple

motherboards with a highly efficient fabric. This is, as we’ve said, the definition of a

microserver and could be built on any architecture. Calxeda has built an SoC around an

ARM 32-bit core—most argue that 32-bit ARM isn’t competitive enough and yet the

company is STILL gaining traction with customers. Tilera is powerful enough, scalable and

highly efficient, but lacks the software standards compatibility necessary to drive a

powerful change. Where all of these wimpy core requirements converge, in our view, is

64-bit ARM cores. As established companies with large balance sheets (AMCC, MRVL,

NVDA) intersect with a CPU powerful enough to make operational efficiency savings

outweigh software switching costs, the market for microservers can finally inflect.


As we hit this inflection point where mass-market microservers can effectively scale the

requisite workload demands, supply side economics will for the first time be aligned with

the demand pull coming from the data center. The demand is there today, but the

business economics for large-scale deployments don’t add up until 64-bit ARM enters the

equation. We think new players and new architecture are very well positioned to capitalize

on this inflection point in the growth of the silicon TAM. Exhibit 10 displays our forecast for

the silicon microserver TAM. We forecast that the microserver CPU TAM will grow at a

95% CAGR from $158M in 2011 to $4.5B in 2016.

Exhibit 10 – Total Available Microserver Silicon (2011-16E, $M)

Source: Oppenheimer & Co. Inc. estimates

Importantly for all CPU players, but most importantly for incumbents INTC and AMD, we

don’t believe that the growth of this microserver market is a zero-sum game. We believe

that alternative architectures (including Intel Atom) are poised to capture a substantial

share of the microserver TAM, and at the same time microservers will cannibalize the

traditional x86 server market. However, microservers are additive to the server TAM for

CPU vendors and will in fact cannibalize other semiconductor content within the server.

We won’t look at anything beyond the CPU here, but let’s do a (most basic) BOM (bill of

materials) analysis.

Exhibit 11 – Relative BOM Analysis – Traditional Server vs. Microserver

Source: Oppenheimer & Co. Inc.

CPU represents approximately one-third of the total system cost for today’s traditional x86

server. Through the focus on efficiency and because of the trend toward many-core to

meet requisite performance demands, CPU density is increasing in microservers. Our

~one-half assumption is likely highly conservative and will increase with ARM SoCs. This


TAM expansion as CPU density grows from one-third to one-half the server BOM will be

cross-architectural and all CPU vendors will benefit. However, while the microserver is

additive to the CPU TAM, we think it is a zero-sum game overall—other silicon vendors

(ethernet, power management, switch etc.) will likely see decreasing content as these

functions become integrated into the SoC or the fabric.

While we expect microservers to become 21% of the total server market by 2016, we

expect microserver CPU spend to be 28% of total server CPU spend by 2016—this delta

is higher CPU/total dollar spend and is incremental to today’s CPU spending. Just as Web

2.0 companies will trade opex for capex and additional capacity, so too will they continue

to spend the same amount of money while yet increasing total CPU spend. Because of

the transition to many-core within the microserver market and because of the unbridled

demand for compute power, we believe the microserver CPU is additive to the total server

market. At the most basic of levels, servers are increasing in CPU density. Exhibit 12

below shows this phenomenon.

Exhibit 12 – Server CPU Tam and Incremental Microserver CPU ($M)

Source: Oppenheimer & Co. Inc. estimates

While we believe that the microserver CPU TAM will grow to nearly $4.5B by 2016, we

also believe that ~$1.5B of this is additive to the total CPU market. It’s a rising tide that we

believe will float all low-powered boats. The lighter (more efficient) boats will float the best.
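
The ~$1.5B "additive" figure can be reconstructed from numbers already given in the text (a ~$4.5B 2016 microserver CPU TAM, CPU at roughly one-half of a microserver's BOM versus roughly one-third of a traditional server's); a short sketch:

```python
# Rough reconstruction of the ~$1.5B additive CPU TAM, using only figures from the text.

micro_cpu_tam_2016 = 4.5       # $B, forecast microserver CPU TAM in 2016
cpu_share_micro = 0.50         # CPU as a share of microserver system cost (~one-half)
cpu_share_traditional = 1 / 3  # CPU as a share of traditional server cost (~one-third)

# Implied total microserver system spend, and the CPU slice of that spend had it
# stayed at the traditional one-third ratio.
micro_system_spend = micro_cpu_tam_2016 / cpu_share_micro      # ~$9.0B
cpu_at_old_ratio = micro_system_spend * cpu_share_traditional  # ~$3.0B

incremental = micro_cpu_tam_2016 - cpu_at_old_ratio
print(f"incremental CPU TAM in 2016: ${incremental:.1f}B")     # ~$1.5B
```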

Exhibit 13 displays a share analysis as we see design wins shaking out today. SeaMicro

today uses Atom/Xeon and, given its acquisition by AMD, we suspect that it will over time

phase out INTC in favor of AMD. SeaMicro and AMD are now forever linked in the

microserver market. Because of its total-system power reduction, we see SeaMicro as a

future competitor to semiconductor players and will therefore analyze its market share as

such. And while we won’t forecast expected market share, we will analyze the relative

advantages and capabilities of each player.


Exhibit 13 – Current Design Win Share – Microserver Silicon (Revenue Dollars)

Source: Oppenheimer & Co. Inc.

*System Vendor uses INTC Atom/Xeon today – low power fabric makes it a direct competitor with semiconductor players. Likely to

phase out INTC and phase in AMD over time. Market share based on estimated silicon spend.

Examining the Players

The Incumbents

Semis

Intel – As the 800 lb. gorilla in the server market, at first glance INTC stands the most to

lose from the penetration of microservers. The company’s challenges against ARM in the

mobile market, where power is of utmost importance, have been well documented and

have highlighted ARM’s relative efficiency advantages. Intel cannot compete head-on in

terms of power efficiency with alternative architecture in the microserver market, but

software compatibility and inherent performance advantages are each large economic

swing factors toward INTC’s advantage. The company’s large and growing process

technology advantage should also help to increase relative efficiency metrics. We had

thought that a strategic relationship with SeaMicro was the key to Intel’s broad-based

success in the microserver market, and following the acquisition by AMD, we now see

Atom as the primary key to its long term competitiveness. We see Atom sustaining a

meaningful percentage of the microserver market due to software porting complexity, and

we also see Xeon as becoming (i.e., remaining) the “brawny core” of choice. Today, we

believe that INTC is working with multiple OEM/ODMs on microservers, including, among

others, Dell, Super Micro, NEC, Hitachi, Wistron and Tyan. The relative mix of wimpy

Atom cores and brawny Xeon cores longer term is yet unknown, but we know two things:

1) Both Atom/Xeon will likely benefit from the expanding CPU silicon TAM; and 2) total

INTC share in the microserver market will become increasingly at risk as multiple

established ARM vendors bring 64-bit solutions to market.

AMD – AMD has long played second fiddle to Intel in the server market and to date had

no publicly-disclosed presence in the microserver market. As far as the microserver

market is concerned, the acquisition of SeaMicro is a game changer. SeaMicro had been

termed the “architecturally ambiguous wild card” for this paper (its own brief is below) and

has immediately improved AMD’s position in the microserver market. Not only did AMD


buy an important customer/partner away from INTC, but it can also now either package

AMD silicon into a SeaMicro server and go straight to market, or bundle AMD silicon with

SeaMicro IP and take a system to its existing customers. Though product has yet to be

announced, we believe the company will announce a SeaMicro system based on AMD’s

Opteron processor in 2H12 and a combined silicon product in the next 12-18 months. We

believe that a SeaMicro system based on AMD (x86) silicon would be competitive with to-

be-announced ARM SoCs. This mitigates AMD’s need to adopt ARM, in our view. With

multiple opportunities to succeed in the ultra-low power market and as an x86 incumbent

with a key piece of fabric IP from SeaMicro, we see AMD establishing a presence in the

market and gradually expanding its share of the server silicon market over time.

System Vendors

We believe that systems vendor incumbents Dell and Hewlett-Packard stand the most to

lose from the penetration of microservers. Beginning in late 2011, IDC began tracking a

new class of servers which it terms “Density Optimized Systems” and which serve “the

unique needs of datacenters with streamlined system designs that focus on performance,

energy efficiency, and density.” While density servers aren’t exactly microservers, they are

a stepping stone from traditional x86 servers to microservers. And of the $458M market in

4Q11, according to IDC, Dell held 45% market share while HP held 15%. As SeaMicro

gains traction and, beginning in 2014, ARM SoC vendors begin to take incremental share,

we suspect Dell/HP stand the most to lose. Further, many Web 2.0 companies that

internally source servers are already beginning to go straight to the ODM level (Quanta

S2Q being a good example)—also a negative event for the traditional x86 system leaders.

Exhibit 14 – Performance/Efficiency Microserver Competitiveness

Source: Oppenheimer & Co. Inc.

Green: Highly Competitive, Yellow: Moderately Competitive, Red: Not Competitive


New Kids on the Block

Semis

Calxeda – Calxeda was founded in 2008 and two years later received a hefty investment

from both ARM and a mix of VC firms. With a vested interest from ARM, Calxeda has thus

far been the pioneer in the ARM-based server market and, we believe, will enter volume

production with multiple OEMs in 2H12. HP’s “Project Moonshot” is just the beginning, in

our opinion, and we see the company remaining successful as it makes the transition from

32-bit to 64-bit ARM. While the market is somewhat resistant to adopting 32-bit ARM

solutions today, as the first mover in the ARM camp, we believe that Calxeda will be

successful leveraging 32-bit into a 64-bit CPU.

Source: Calxeda

The Calxeda SoC has been termed EnergyCore, designed to dramatically cut space and

power requirements in hyper-scale computing environments. EnergyCore includes the

processor complex, with multiple quad-core ARM processors, L2 cache and integrated

memory and I/O controllers. The on-chip fabric switch and management engine are each

optimized for many-core server clusters. That the company currently has success with 32-

bit is a testament to its design and integration capabilities, and a testament to the market’s

desire for alternative architecture.

EnergyCore is capable of cutting total system power and space by 90% compared with

today’s systems and can scale to thousands of cores. The ECX-1000 Series is capable of

driving total system power below 5W. This is not an apples-to-apples comparison with the

sub-10W Atom chip, which doesn’t enter the same ballpark once total system

power is taken into account. We believe Calxeda will continue to mount design wins and will be a

niche player in the microserver market long term. The company has publicly stated its

intent and desire to continue to succeed as a standalone entity.

Tilera – Founded in 2004, Tilera is a small private company specializing in multicore

embedded processors. Tilera is benefiting from the explosion in data traffic, which is

creating bottlenecks in carrier networks. The company made its first inroads into the

security and networking market, in addition to the multimedia market. In 1Q11, Tilera

started shipping its TILEPro64 into Quanta’s S2Q server, which we believe has been

adopted by multiple top 20 Web 2.0 companies. With its first mover advantage to 64-bit

processors outside of the x86 incumbents, we believe the company remains well

positioned to continue to take share in the cloud computing server market in the medium

term with its latest generation of multicore processors (TILE-GX). The key differentiator

and value proposition of Tilera is its proprietary architecture and own iMesh on-chip

network interconnect.

Tilera unveiled its first product in 2007, the TILE64. At a total power dissipation of 20W,

the 64-core processor is able to target an array of high-speed embedded applications. A

year later in 2008, the company released its second generation of multicore processors,

the TILEPro64. Within this generation of processors, the company released both 36- and


64-core versions. Management claimed this line of processors achieves a 1.5x to 2.5x performance boost over the previous generation, primarily through more efficient caching.

In October 2009, Tilera revealed its third generation of multicore processors, the TILE-GX.

The new product line includes a range of multicore processors, spanning from 16 to 100

cores. The line is manufactured on TSMC's 40nm process, and management claims power consumption will range from 10W to 55W, depending on the number of cores. Performance for this line of processors achieves roughly a 1.5x to 2.1x boost over the previous generation. In addition to its core networking and security end-markets, we believe the new line of multicore processors expands Tilera's addressable market into cloud computing servers. We believe Tilera has tallied 20+ design wins and is engaged with over 80 system vendors for the GX. The company is expected to ramp into pre-production revenues with its 16- and 36-core versions this quarter, primarily in networking and multimedia applications.
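For perspective on how lightweight these cores are at the power level, the short calculation below divides the claimed 10W and 55W envelopes by the corresponding 16- and 100-core counts. The wattages and core counts are the management claims cited above; the per-core framing is our own simplification.

/* Per-core power implied by the TILE-GX claims cited above
 * (10W at 16 cores, 55W at 100 cores). */
#include <stdio.h>

int main(void) {
    printf("16-core TILE-GX:  %.2f W/core\n", 10.0 / 16.0);  /* ~0.63 W */
    printf("100-core TILE-GX: %.2f W/core\n", 55.0 / 100.0); /* ~0.55 W */
    return 0;
}

On these figures, per-core power holds roughly constant at about 0.6W across the range.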


We anticipate server-based revenues to come in 2H12. The company’s primary

advantage over ARM-based competitors is its many-core technology and the ability to

scale many cores onto a single chip with cache coherency. This takes the "wimpy core" debate largely out of the equation. We see Tilera playing primarily in existing data centers running open-source software, where the company can sidestep software compatibility issues. The trend toward ODM-built servers (the first of which is based on Tilera silicon) will also positively impact its market opportunity.

System Vendors

SeaMicro (now a part of AMD) – SeaMicro was founded on the premise that one size no longer fits all in the server market. At the system level, SeaMicro set out to become the

pioneer of microservers—and succeeded. In late February, the company agreed to be

purchased by AMD, a move we have applauded.


SeaMicro has built four generations of servers around 32-bit Atom and two around 64-bit Atom, and it has a just-announced Xeon partnership (along with Samsung). SeaMicro reduces


total system power by 75% by eliminating unneeded components and consolidating the rest into a custom ASIC. This custom ASIC is then placed on a credit-card-sized motherboard with the CPU and DRAM and linked to hundreds of other motherboards by an ultra-efficient fabric. Since the CPU is generally two-thirds of the total power draw within a system, a 75% reduction in total system power is greater than the entire CPU draw; the savings come from eliminating the unnecessary components and making those that remain inherently more efficient (see the arithmetic sketch below). Optimizing utilization is key to this part of the equation.
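The arithmetic behind the "greater than the CPU draw" point is worth spelling out. The sketch below uses an arbitrary 100W baseline purely for illustration; the two-thirds CPU share and the 75% reduction are the figures cited above.

/* Sanity check: with the CPU at ~2/3 of system power, removing the CPU
 * outright saves only ~67%, so a 75% total reduction must also shrink
 * the non-CPU components. The 100W baseline is an arbitrary normalization. */
#include <stdio.h>

int main(void) {
    double baseline  = 100.0;                   /* illustrative conventional node */
    double cpu_watts = baseline * (2.0 / 3.0);  /* CPU ~= 2/3 of system power     */
    double other     = baseline - cpu_watts;
    double target    = baseline * (1.0 - 0.75); /* after the claimed reduction    */

    printf("Illustrative baseline: %.1f W (CPU %.1f W, everything else %.1f W)\n",
           baseline, cpu_watts, other);
    printf("After a 75%% reduction: %.1f W total\n", target);
    printf("Even a zero-power CPU would leave %.1f W of other components,\n", other);
    printf("so the non-CPU parts themselves must fall below %.1f W.\n", target);
    return 0;
}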

By being architecturally agnostic, SeaMicro could effectively have succeeded in the microserver market without tying itself to a single CPU vendor. And by reducing total system power by 75% even with x86 cores, the company is highly competitive on the power and efficiency front with the expected specs of ARM SoCs. Further, and perhaps its greatest advantage, SeaMicro servers are "plug-and-play," meaning they require no changes to operating systems or applications. Now that it is part of AMD, we believe AMD/SeaMicro will continue to take share of the microserver system market in the near term.

Huawei – We believe Huawei is ramping an engineering design team and working on a microserver via its very secretive "Project Borg." A traditional Chinese

networking giant, Huawei has begun to expand its footprint, notably into smartphones.

Without any further knowledge of the company’s server initiatives, we believe Huawei

could immediately and successfully leverage its existing customer/channel relationships

and balance sheet to carve out a chunk of the system-level microserver market.

The Coming ARM Challengers

There are many licensees of 32-bit ARM today, but Calxeda is the only vendor that plays in the server market. The advantages of 64-bit arrive at both the hardware and software level. 64-bit is the next step in performance, one that can bring ARM vendors within the requisite performance range to make real economic sense. The economics improve substantially because it is far easier to port software from 64-bit x86 to 64-bit ARM than from 64-bit x86 to 32-bit ARM, as the sketch below illustrates. As we've said, ARM can begin to establish a meaningful presence in the server market only with the arrival of 64-bit CPUs.
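A concrete way to see the porting asymmetry: on Linux, 64-bit x86 and 64-bit ARM both use the LP64 data model (8-byte long and pointers), while 32-bit ARM is ILP32 (4-byte long and pointers). Server code that quietly assumes a 64-bit long therefore recompiles cleanly for an ARMv8 part but misbehaves on a 32-bit one. The snippet below is a hypothetical illustration of ours, not drawn from any vendor's codebase.

/* Why porting 64-bit x86 software to 64-bit ARM is the easier move:
 * both are LP64 on Linux, so width assumptions like this one carry over,
 * while they silently truncate on ILP32 (32-bit ARM). Hypothetical example. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t file_offset = 6ULL * 1024 * 1024 * 1024;   /* a 6 GiB offset      */
    unsigned long cached = (unsigned long)file_offset;  /* assumes 64-bit long */

    printf("sizeof(long)    = %zu bytes\n", sizeof(long));
    printf("original offset = %llu\n", (unsigned long long)file_offset);
    printf("stored in long  = %lu (%s)\n", cached,
           sizeof(long) == 8 ? "intact on LP64" : "truncated on ILP32");
    return 0;
}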

At the date of publication, there are just three officially announced licensees of ARM 64-bit technology: AppliedMicro, Microsoft and NVIDIA. We believe that AAPL and QCOM have also licensed the technology and that several additional ARM vendors could announce 64-bit licenses in the second half of 2012 with the intention of developing a CPU for the server market. These prospective licensees most notably include Marvell and Samsung.

AppliedMicro (AMCC) – AMCC is the only company to date that has demonstrated a 64-

bit ARM CPU (as an FPGA). The company started working with ARM in 2009, and as a result of the collaboration, ARM announced ARMv8, the first 64-bit ARM architecture, in October 2011. In conjunction with the announcement, AMCC launched (and

demonstrated) its 64-bit ARM CPU, codenamed X-Gene. X-Gene is a 3GHz CPU

designed to be scalable up to 128 cores. Utilizing its past expertise in high-speed

connectivity, AMCC is integrating PCIe, 10/40/100G I/O, and storage on board the SoC.

The company started sampling an FPGA version in 1H12 with several customers and is

expected to have first standard CMOS silicon (40nm at TSMC) by early 2013.

We believe AMCC is already working on a move to 28nm. AMCC has spent roughly $50M

developing X-Gene to date and could spend an incremental $100M before first revenue

from X-Gene in early 2014. Assuming a timely release, we expect AMCC to be the first

company to ramp into production with a 64-bit ARM server CPU starting in early 2014.

With an announced 6-12 month lead over other would-be 64-bit ARM competitors, we

believe AMCC could capture the lion’s share of initial design wins. How this share trends


over time depends upon the yet-to-be-announced products from larger competitors that

remain the wild cards of the ARM camp.

Marvell Technology (MRVL) – Marvell's industry-leading ARMADA family of application processors and cellular SoCs has given it an established presence in the wireless communications market. As the market share leader in storage controllers, MRVL is well positioned to leverage its world-class design expertise into a 64-bit CPU for the microserver market. And with the balance sheet of a much larger company, MRVL is not a hard sell in the OEM qualification process. We expect MRVL to announce its 64-bit

license from ARM later this year and to unveil plans for its server CPU within the next 12-

18 months.

NVIDIA (NVDA) – Nvidia’s Tegra family of application processors has built an established

presence in the mobile computing market, including smartphones and tablets. Alongside

TXN and QCOM, we believe NVDA will play in the first Windows-on-ARM notebooks later

this year. As one of the few companies with an announced 64-bit ARM license, we believe

NVDA could look to utilize its market-leading graphics capability to develop a CPU

targeted at the microserver and HPC markets.

NVDA announced Project Denver at CES in 2011. Project Denver is an initiative by NVDA

to fully integrate an ARM-based CPU and GPU on the same chip. While the company has

not given a timeline on product launches, we believe NVDA could announce products

within the next 12 months. By leveraging its pristine balance sheet, design expertise and

existing server OEM relationships, we believe NVDA could emerge as the initial leader in

the 64-bit ARM camp alongside AMCC. Longer term, we believe NVDA could pose one of the larger threats to INTC's dominant market share in the server space.

Qualcomm (QCOM) – Qualcomm is the dominant global supplier of mobile chipsets

featured in today’s smartphones and tablets. Alongside NVDA and TXN, QCOM will play

in the first Windows-on-ARM notebooks later this year. With a 64-bit ARM license (which

we believe it unofficially has today), QCOM would seek to widen its advantage in the

mobile market and further penetrate traditional notebooks. Further, we believe the

company could develop a CPU for the microserver market and seek to enable end-to-end

solutions to capitalize on the mobile computing revolution. By leveraging its balance sheet,

current relationship with ARM, design expertise and dominant presence in the mobile

handset market, we believe that QCOM could emerge as a threat in the low-power data

center market.

Samsung – Tech bellwether Samsung is in no sense of the word a "new kid on the block," but it would be a new entrant to the server CPU market. We believe Samsung is

developing a new, ultra-low power CPU that would play in microservers and is on a short

list for an ARM 64-bit license. As an 800 lb. gorilla in countless markets across the

electronics food chain, we believe Samsung could immediately flex its muscle in the

microserver market. Above all else, Samsung has one very clear advantage: DRAM. The

SeaMicro system, for example, consolidates all server components into CPU, internal

ASIC and DRAM. An ARM SoC would also sit directly alongside DRAM within a server. As

the overwhelming market share leader in the DRAM market, we believe Samsung could

muscle its way into the CPU market by even further easing memory bandwidth

constraints.


Stock prices of other companies mentioned in this report (as of 3/28/12):

ARM Holdings Plc (ARMH-NASDAQ, $28.42, Not Rated)

Dell Inc. (DELL-NASDAQ, $16.52, Not Rated)

LinkedIn Corp. (LNKD-NASDAQ, $102.08, Not Rated)

Groupon (GRPN-NASDAQ, $17.80, Not Rated)

Hewlett Packard Co. (HPQ-NYSE, $23.58, Not Rated)

Hitachi (HIT-NYSE, $64.67, Not Rated)

Pandora Media Inc. (P-NYSE, $10.17, Not Rated)

Super Micro Computer (SMCI-NASDAQ, $17.30, Not Rated)

Quanta Computer (2382.TW, 72.30 TWD, Not Rated)

Wistron (3231.TW, 44.50 TWD, Not Rated)

Samsung Electronics Co. (005930.KS, 1,280,000.00 KRW, Not Rated)

Zynga Inc. (ZNGA-NASDAQ, $12.66, Not Rated)


Important Disclosures and Certifications

Analyst Certification - The author certifies that this research report accurately states his/her personal views about the subject securities, which are reflected in the ratings as well as in the substance of this report. The author certifies that no part of his/her compensation was, is, or will be directly or indirectly related to the specific recommendations or views contained in this research report.

Potential Conflicts of Interest:

Equity research analysts employed by Oppenheimer & Co. Inc. are compensated from revenues generated by the firm

including the Oppenheimer & Co. Inc. Investment Banking Department. Research analysts do not receive compensation

based upon revenues from specific investment banking transactions. Oppenheimer & Co. Inc. generally prohibits any

research analyst and any member of his or her household from executing trades in the securities of a company that such

research analyst covers. Additionally, Oppenheimer & Co. Inc. generally prohibits any research analyst from serving as an

officer, director or advisory board member of a company that such analyst covers. In addition to 1% ownership positions in

covered companies that are required to be specifically disclosed in this report, Oppenheimer & Co. Inc. may have a long

position of less than 1% or a short position or deal as principal in the securities discussed herein, related securities or in

options, futures or other derivative instruments based thereon. Recipients of this report are advised that any or all of the

foregoing arrangements, as well as more specific disclosures set forth below, may at times give rise to potential conflicts of

interest.

Important Disclosure Footnotes for Companies Mentioned in this Report that Are Covered by Oppenheimer & Co. Inc.:

Stock Prices as of March 30, 2012

AppliedMicro (AMCC - Nasdaq, 7.04, PERFORM)
Advanced Micro Devices (AMD - NYSE, 8.12, PERFORM)
Intel Corp. (INTC - Nasdaq, 28.16, PERFORM)
Marvell Technology Group (MRVL - Nasdaq, 15.74, PERFORM)
NVIDIA Corp. (NVDA - Nasdaq, 15.23, OUTPERFORM)
QUALCOMM Incorporated (QCOM - Nasdaq, 67.93, OUTPERFORM)
Baidu.com, Inc. (BIDU - Nasdaq, 146.41, PERFORM)
Google, Inc. (GOOG - Nasdaq, 648.41, OUTPERFORM)
Yahoo! Inc. (YHOO - Nasdaq, 15.30, PERFORM)
Microsoft Corporation (MSFT - Nasdaq, 32.12, OUTPERFORM)
Netflix, Inc. (NFLX - Nasdaq, 115.05, OUTPERFORM)
Amazon.Com, Inc. (AMZN - Nasdaq, 204.61, OUTPERFORM)
Salesforce.com (CRM - NYSE, 156.41, PERFORM)
Apple Inc. (AAPL - Nasdaq, 609.86, OUTPERFORM)

All price targets displayed in the chart above are for a 12- to 18-month period. Prior to March 30, 2004, Oppenheimer &

Co. Inc. used 6-, 12-, 12- to 18-, and 12- to 24-month price targets and ranges. For more information about target price

histories, please write to Oppenheimer & Co. Inc., 85 Broad Street, New York, NY 10004, Attention: Equity Research

Department, Business Manager.

Oppenheimer & Co. Inc. Rating System as of January 14th, 2008:

Outperform(O) - Stock expected to outperform the S&P 500 within the next 12-18 months.

Perform (P) - Stock expected to perform in line with the S&P 500 within the next 12-18 months.


Underperform (U) - Stock expected to underperform the S&P 500 within the next 12-18 months.

Not Rated (NR) - Oppenheimer & Co. Inc. does not maintain coverage of the stock or is restricted from doing so due to a potential

conflict of interest.

Oppenheimer & Co. Inc. Rating System prior to January 14th, 2008:

Buy - anticipates appreciation of 10% or more within the next 12 months, and/or a total return of 10% including dividend payments,

and/or the ability of the shares to perform better than the leading stock market averages or stocks within its particular industry sector.

Neutral - anticipates that the shares will trade at or near their current price and generally in line with the leading market averages due to

a perceived absence of strong dynamics that would cause volatility either to the upside or downside, and/or will perform less well than

higher rated companies within its peer group. Our readers should be aware that when a rating change occurs to Neutral from Buy,

aggressive trading accounts might decide to liquidate their positions to employ the funds elsewhere.

Sell - anticipates that the shares will depreciate 10% or more in price within the next 12 months, due to fundamental weakness

perceived in the company or for valuation reasons, or are expected to perform significantly worse than equities within the peer group.

Distribution of Ratings/IB Services Firmwide

                         Ratings Distribution    IB Serv/Past 12 Mos.
Rating                   Count      Percent      Count      Percent
OUTPERFORM [O]             332        55.89        144        43.37
PERFORM [P]                255        42.93         86        33.73
UNDERPERFORM [U]             7         1.18          3        42.86

Although the investment recommendations within the three-tiered, relative stock rating system utilized by Oppenheimer & Co. Inc. do not

correlate to buy, hold and sell recommendations, for the purposes of complying with FINRA rules, Oppenheimer & Co. Inc. has assigned

buy ratings to securities rated Outperform, hold ratings to securities rated Perform, and sell ratings to securities rated Underperform.

Company Specific Disclosures
Oppenheimer & Co. Inc. expects to receive or intends to seek compensation for investment banking services in the next 3

months from AMCC.

Oppenheimer & Co. Inc. makes a market in the securities of AMCC, INTC, MRVL, NVDA, QCOM, BIDU, GOOG, YHOO,

MSFT, NFLX, AMZN, AAPL, EBAY, DELL, and ARMH.

Additional Information Available

Please log on to http://www.opco.com or write to Oppenheimer & Co. Inc., 85 Broad Street, New York, NY 10004, Attention:

Equity Research Department, Business Manager.


Other Disclosures
This report is issued and approved for distribution by Oppenheimer & Co. Inc. Oppenheimer & Co. Inc. transacts business on all principal exchanges and is a member of SIPC. This report is provided, for informational purposes only, to institutional and retail investor clients of

Oppenheimer & Co. Inc. and does not constitute an offer or solicitation to buy or sell any securities discussed herein in any jurisdiction

where such offer or solicitation would be prohibited. The securities mentioned in this report may not be suitable for all types of investors.

This report does not take into account the investment objectives, financial situation or specific needs of any particular client of

Oppenheimer & Co. Inc. Recipients should consider this report as only a single factor in making an investment decision and should not

rely solely on investment recommendations contained herein, if any, as a substitution for the exercise of independent judgment of the

merits and risks of investments. The analyst writing the report is not a person or company with actual, implied or apparent authority to

act on behalf of any issuer mentioned in the report. Before making an investment decision with respect to any security recommended in

this report, the recipient should consider whether such recommendation is appropriate given the recipient's particular investment needs,

objectives and financial circumstances. We recommend that investors independently evaluate particular investments and strategies, and

encourage investors to seek the advice of a financial advisor. Oppenheimer & Co. Inc. will not treat non-client recipients as its clients

solely by virtue of their receiving this report. Past performance is not a guarantee of future results, and no representation or warranty,

express or implied, is made regarding future performance of any security mentioned in this report. The price of the securities mentioned

in this report and the income they produce may fluctuate and/or be adversely affected by exchange rates, and investors may realize

losses on investments in such securities, including the loss of investment principal. Oppenheimer & Co. Inc. accepts no liability for any

loss arising from the use of information contained in this report, except to the extent that liability may arise under specific statutes or

regulations applicable to Oppenheimer & Co. Inc. All information, opinions and statistical data contained in this report were obtained or

derived from public sources believed to be reliable, but Oppenheimer & Co. Inc. does not represent that any such information, opinion or

statistical data is accurate or complete (with the exception of information contained in the Important Disclosures section of this report

provided by Oppenheimer & Co. Inc. or individual research analysts), and they should not be relied upon as such. All estimates, opinions

and recommendations expressed herein constitute judgments as of the date of this report and are subject to change without

notice. Nothing in this report constitutes legal, accounting or tax advice. Since the levels and bases of taxation can change, any reference

in this report to the impact of taxation should not be construed as offering tax advice on the tax consequences of investments. As with

any investment having potential tax implications, clients should consult with their own independent tax adviser. This report may provide

addresses of, or contain hyperlinks to, Internet web sites. Oppenheimer & Co. Inc. has not reviewed the linked Internet web site of any

third party and takes no responsibility for the contents thereof. Each such address or hyperlink is provided solely for the recipient's

convenience and information, and the content of linked third party web sites is not in any way incorporated into this document.

Recipients who choose to access such third-party web sites or follow such hyperlinks do so at their own risk.

This report or any portion hereof may not be reprinted, sold, or redistributed without the written consent of Oppenheimer & Co. Inc.

Copyright © Oppenheimer & Co. Inc. 2012.
