Managing Enterprise Risk


Foreword

Risk Management in the Energy Sector

Business leaders must manage risk in all of its forms, from risks associated with competitive issues to those connected to regulatory shifts or natural disasters. Unfortunately, however, experience shows that businesses have often responded to risk only as it arises.

In recent years, faced with challenging environmental and regulatory conditions, managers have begun to recognize the need for a more strategic, proactive approach to quantifying and managing risk. This new focus has resulted in substantial advances in risk management techniques, notably within the banking industry, where an accurate measure of risk is especially crucial as financial institutions grow in both size and complexity.

Although all industries have their own unique risks, techniques that are used to quantify risks can have broad application across industries. Certainly all firms, regardless of size or complexity, can benefit from the collection of data that allows the measurement and management of risk.

While this book specifically considers the experience of understanding and managing risk in the electric utility industry, all industries can readily draw from its lessons. A contribution of this book is to enlarge the application of the KUU framework – the Known, Unknown, and Unknowable. Recent writers and thinkers in the risk management arena urge company leaders to do more to manage a wider range of risk – to delve more aggressively into the Unknown and to expand glimpses into the heretofore Unknowable.

Recent evidence indicates that leaders of business want to be more proactive – more prepared to identify emerging risks, those perhaps previously unknown or even unknowable, and to identify appropriate strategies for dealing with them; to be proactive in establishing and executing their company’s level of risk tolerance; and to manage risk as an integral part of the fabric of the company. Few want to be, and fewer still can afford to be, reactive in responding to risk or to relegate the responsibility to lower levels.

The challenge will continue for future generations. This book is a contribution to meeting that challenge.

Thomas M. Hoenig
President, Kansas City Federal Reserve Bank


Prelims-I044949.qxd 6/28/06 3:07 PM Page vii


Introduction

David L. Bodde

International Center for Automotive Research
Clemson University
Clemson, SC, USA

Karyl B. Leggio

Henry W. Bloch School of Business and Public Administration
University of Missouri at Kansas City
Kansas City, MO, USA

Knowledge, Imagination, and the Domain of Habitual Thinking

Mark Twain once observed that a cat that has once sat on a hot stove lid will never again sit on a hot stove lid. But neither will it sit on a cold one. In like manner, the nation caught unaware at Pearl Harbor learned its lessons well and made significant effort throughout the long struggle with the Soviet Union to prevent that kind of surprise from occurring again. From the distant early warning (DEW) line of the 1950s to the satellite-based missile defenses proposed for the present, the lessons from the Pearl Harbor experience guided (and limited) the application of science and engineering to the public purpose. But the nation’s ingenuity proved irrelevant to the homicidal attacks of 9/11. This historical record demonstrates the tension we find between habitual thinking, reinforced by what we “know” to be true, and our imaginations, which can identify a new set of perils containing both realities and phantoms, probably in equal measure. Both habitual thinking and our imagination (or imaginative creative thinking) can inform our actions, albeit differently, and both can mislead, also differently.

Yet nothing our nation has experienced since Pearl Harbor is new to the human experience. To be sure, the atrocity of 9/11 has sharpened our perception of risk. More recently a host of natural influences – tsunami, hurricanes, drug-resistant microorganisms among them – combined with terrorism have demonstrated the ongoing perils to modern life. To this list we must add concerns with the consequences of human inventiveness – greenhouse gas releases that threaten the global climate, genetically modified organisms, and so forth. And finally, the work of our own hands seems increasingly endangered. Globalization allows no shelter from a hyper-competitive business environment; rapidly advancing technology quickly makes what we know obsolete, irrelevant, or both; and


an accelerated business cycle affords little time for learning and none for recovery from mistakes.

And yet for all this, we mortals must still earn our daily bread through commerce. Those entrusted with the leadership of business enterprises must make choices that have consequences far beyond their capacity to forecast, or perhaps even to imagine. Yet choices must be made and the risks managed. This book seeks to advance the art of understanding and managing the dangerous phenomena that we loosely call “risk.”

This is a practitioners’ book. It draws upon the experience of the electric industry to inform professionals in diverse fields of the best current thinking on business risk – how it can be understood, how it can be managed, and how it can be communicated to diverse constituencies. We choose the electric sector as our initial focus,1 a capital-intensive infrastructure that underlies the digital society and now faces extraordinary political, environmental, regulatory, and technological uncertainties. Thus the electric sector industries serve as a mine canary for managers in all fields, and we emphasize the wide-scale implications of that experience throughout.

Managing Enterprise Risk

The strategic leaders of large, publicly traded corporations – boards of directors and senior executives – have always been accountable to investors for delivering bottom-line economic performance. But the increasing complexity of emerging business models and a growing societal concern with the integrity of financial reporting now lead to three new emphases in accountability:

● senior managers must improve their understanding of the risks facing their businesses;

● they must manage those risks effectively;

● they must communicate that improved understanding fairly and clearly to investors.

The chapters that follow bring a special dimension to this three-fold challenge. They apply a new concept to the management of risk, a concept that has proven useful in the physical sciences – the partitioning of risk thinking into the categories of “known, unknown, and unknowable.” This triage can offer managers and investors fresh insights into the phenomenon of risk and thereby improve their ability to place intelligent economic bets. For industries such as those focused on electric energy, where investments in long-lived economic assets carry with them consequences well beyond the ability of forecasting models to illuminate, understanding the ramifications of an unknowable future becomes essential to managing the exposure of investors and managers alike.

This book is divided into four parts: The Nature of Risk in the Electric Sector; Current Approaches to Managing Risk: Their Powers and Limitations; Emerging Risk Management Techniques; and An Integrated Approach to Understanding, Managing, and Communicating Risk. The content of each section is outlined below. The book begins by focusing on risks unique to the energy industry, broadens to include current risk management techniques

1Within the term “electric sector,” we include the traditional electric utilities, the electric energy trading companies, independent power producers, and the federal and state regulators that set out the rules of competition.


in use today and new risk management techniques applicable across industries, and then concludes with a cross-disciplinary approach to deal holistically with risk.

Part I The nature of risk in the electric sector

● William Riggins writes A Perspective on Regulatory Risk in the Electric Industry. Unlike other deregulating industries (e.g., airlines, telecommunications, and trucking), the energy industry has not chosen initial full-scale deregulation; rather, the industry is partially deregulated. Much debate centers on this strategy and its impact upon competition in the industry.

● In Electricity Risk Management for the Beginner: What It Is, Why It Matters and How to Deal With It, Andrew Hyman and Leonard Hyman contend that the restructuring of the electric supply industry has not only shifted risks but also opened all participants, including consumers, to new risks. Multi-billion dollar failures and political crises testify to the severity of the consequences of inadequate assessment and management of the risks. This chapter examines some of those risks, and some of the existing tools that executives can use to mitigate them.

● In Surprised by Choice: The Implications of Technology for Strategic Risk, Michael Chesser and David L. Bodde discuss how new technologies have led to the overcapacity of physical assets in some geographic areas. They ask critical questions about how current technological developments are going to drive change in this industry in the next 20 years.

● And finally, in Why the Transmission Network as Presently Planned Will Not Provide the Efficient, Economic and Reliable Service that the Public Wants, Leonard Hyman describes how policy-makers have devised an organizational structure for transmission that does not align the interests of operator with consumer, discourages investment and entrepreneurship, lacks incentives for efficient operation, and diffuses responsibility for outcomes. Without a change of direction, the transmission sector will evolve into a command-and-control entity that will undermine the use of market mechanisms in the electric supply industry. This chapter will explore why the transmission network as presently planned will not provide the efficient, economic, and reliable service that the public wants.

Part II Current approaches to managing risk: their power and limitations

● The DCF Approach to Capital Budgeting Decision-Making, by Diane Lander and Karyl B. Leggio, notes that net present value (NPV) is touted as the means to analyze investment decisions. This chapter will outline the advantages and shortcomings of NPV and suggest ways to improve its use as a decision-making tool.
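As a minimal, hypothetical sketch of the NPV rule that chapter examines (the figures below are ours for illustration, not drawn from the book), each cash flow is discounted back to time zero at the required rate of return and the results are summed:

```python
def npv(rate, cash_flows):
    """Net present value: discount each cash flow back to time 0.

    cash_flows[0] is the initial outlay (typically negative);
    cash_flows[t] arrives at the end of year t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative project: a $1,000 outlay returning $500 a year for 3 years,
# discounted at a 10% required rate of return.
project = [-1000, 500, 500, 500]
print(round(npv(0.10, project), 2))  # prints 243.43; NPV > 0 favors acceptance
```

The decision rule is simply to accept projects with positive NPV; the chapter's critique concerns what this single-number answer leaves out.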

● Real Options and Monte Carlo Simulation Versus Traditional DCF Valuation in Layman’s Terms, by Jonathan Mun, covers the new analytical models such as real options analysis and Monte Carlo simulation. These models do not replace traditional decision-making tools. Rather, they complement and build upon the traditional approaches. The advantages and pitfalls of using these methodologies are explored.
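The Monte Carlo side of that comparison can likewise be sketched, under assumptions of our own rather than the chapter's (normally distributed annual cash flows; all figures hypothetical): instead of one point estimate, repeated random draws yield a distribution of NPVs.

```python
import random

def simulate_npv(rate, outlay, mean_cf, sd_cf, years, trials=10_000, seed=1):
    """Monte Carlo NPV: each trial draws an independent, normally
    distributed cash flow for every year and discounts it at `rate`."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(trials):
        npv = -outlay
        for t in range(1, years + 1):
            npv += rng.gauss(mean_cf, sd_cf) / (1 + rate) ** t
        npvs.append(npv)
    return npvs

npvs = simulate_npv(rate=0.10, outlay=1000, mean_cf=500, sd_cf=150, years=3)
mean_npv = sum(npvs) / len(npvs)
prob_loss = sum(v < 0 for v in npvs) / len(npvs)
print(f"mean NPV ~ {mean_npv:.0f}; P(NPV < 0) ~ {prob_loss:.0%}")
```

A single-point DCF would report only the positive mean; the simulation also exposes the probability that the project destroys value, which is the kind of information a risk manager needs.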


● Enterprise Risk Management in 2005: Moving Beyond Market and Credit Risk, by Jana Utter, answers the questions: Enterprise Risk Management: What is it? How does it work? The latest entrée into the stable of risk management tools requires a thorough understanding and identification of the firm’s risk exposure and the correlation between these risks. The process of developing an enterprise risk management system is outlined.

Part III Emerging risk management techniques

● Overview of Operational Risk Management at Financial Institutions, by Linda Barriga and Eric Rosengren, ties energy risk management to its origins in banking. Many of the tools of risk management used in the energy industry began as tools in banking. This chapter looks at Basle I and its impact upon the banking industry and further explores the expected changes associated with the introduction of Basle II.

● The Application of Banking Models to the Electric Power Industry: Understanding Business Risk in Today’s Environment, by Karyl B. Leggio, David L. Bodde, and Marilyn L. Taylor, looks at the implementation of banking techniques in energy risk management. Further, it discusses expected changes in energy risk management in response to changing risk management practices in the banking industry.

● Investors believe they understand the risk inherent in their investment decisions. What investors would recommend that risk managers and decision-makers do to coordinate their efforts, and how risk taking and risk management efforts can mesh, is explored in What Risk Managers Can Learn From Investors Who Already Know and Consider the Risks and Who Wish That Professional Risk Managers and Decision-Making Executives Would Coordinate Their Efforts and Figure Out How Risk Taking and Risk Management Efforts Can Mesh, by Leonard Hyman.

Part IV An integrated approach to understanding, managing, and communicating risk

● Executive Decision-Making under KUU Conditions: Lessons from Scenario Planning, Enterprise Risk Management, Real Options Analysis, Scenario Building, and Scenario Analysis, by Marilyn L. Taylor, Karyl B. Leggio, Lee Van Horn, and David L. Bodde, draws upon the KUU framework as a new risk classification to demonstrate the commonalities and differences among the various approaches to risk management and calls for the synergistic usage of these approaches. Risk management methods considered include scenario planning, enterprise risk management, real options analysis, scenario building, and scenario analysis.

● Assessing Capital Adequacy, by Robert Anderson and the Committee of Chief Risk Officers, explores ways for the CEO and upper-level management to lead the company toward a plan under which it looks to create value while maintaining sound financial health. The process of measuring and communicating capital adequacy provides transparency for the external agencies and financial markets.


● Full-Spectrum Portfolio and Diversity Analysis of Energy Technologies, by Shimon Awerbuch, Andrew Stirling, and Jaap Jansen, combines two methodologies to produce an efficient frontier showing the optimal generating portfolio mix for firms in the energy industry.
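The portfolio logic behind such an efficient frontier can be illustrated in miniature with a plain two-technology mean-variance sketch (our invented numbers and simplification, not the authors' full-spectrum methodology): each generating technology is treated like an asset with an expected "return" and a risk, and the frontier is traced by sweeping the mix.

```python
import math

def portfolio_stats(w, r1, r2, s1, s2, rho):
    """Expected return and risk (standard deviation) of a two-technology
    generating mix; w is the share of technology 1, 1 - w of technology 2."""
    ret = w * r1 + (1 - w) * r2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * s1 * s2 * rho
    return ret, math.sqrt(var)

# Hypothetical inputs: technology 1 offers a higher expected "return"
# (e.g., lower unit cost) but carries higher fuel-price risk than technology 2.
r1, s1 = 0.08, 0.20
r2, s2 = 0.05, 0.05
rho = 0.1  # low correlation between the two technologies' cost streams

# Sweep the mix from 0% to 100% of technology 1 to trace the frontier,
# then pick out the minimum-variance portfolio.
frontier = [portfolio_stats(w / 100, r1, r2, s1, s2, rho) for w in range(101)]
min_risk = min(frontier, key=lambda p: p[1])
print(f"minimum-risk mix: return {min_risk[0]:.3f}, risk {min_risk[1]:.3f}")
```

With these numbers the minimum-risk mix carries slightly less risk than holding the safer technology alone; that diversification effect is what makes portfolio analysis attractive for generation planning.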

A word about origins

The book grew out of a workshop funded by the Alfred P. Sloan Foundation of New York in cooperation with the Federal Reserve Bank of Kansas City and organized by the authors. The workshop drew together leading practitioners and thinkers in the field of business risk for an exchange of views regarding the frontiers of application. Their discussions emphasized action: what those charged with understanding, managing, and communicating risk can do now to improve their enterprise. The papers and conversations that emerged from the workshop form the substantive basis for the book.

In addition to the encouragement from the Sloan Foundation, we have also benefited immensely from our partnership with the Federal Reserve Bank, and especially Dr. Thomas Hoenig, President of the Kansas City Federal Reserve Bank, and Esther George, Senior Vice President at the KC Fed. Many other colleagues have encouraged us, including the executives and representatives from the regulatory agencies, and leading thinkers who joined us for the two-day forum hosted at the Kansas City Federal Reserve Bank. We are grateful for the thoughtful contributions of all of the authors whose wisdom is contained in the chapters that follow. These leaders in their fields thoughtfully considered their individual expertise and how it integrated with the expertise of the other authors. This collaborative endeavor served to strengthen this work. We are also grateful to Jonathan Agbenyega, our Elsevier editor, whose work on our behalf has smoothed the editing process. And finally, and most importantly, we thank our families for their patience and assistance as they shouldered the homefront responsibilities so we could immerse ourselves in understanding the issues surrounding management of risk. We are grateful to all of these colleagues, friends, and family for the richness of the learnings that have been our privilege. Our intent in this book was to share those learnings with those on the forefront of application. For its limitations we accept full responsibility.


CHAPTER 1

A Perspective on Regulatory Risk in the Electric Industry

William G. Riggins1,2

Vice President and General Counsel, Kansas City Power and Light

Introduction

“Past experience, if not forgotten, is a guide to the future.”3 The past experiences of the electric industry provide a guide to predicting, and thus better managing, risks created by changing regulation. The industry’s history is represented by continuing cycles of phenomena that create socio-political dynamics that drive regulatory initiatives. These regulatory initiatives, and in some cases their unintended consequences, can have a dramatic impact on the ability of electric utilities to meet their public service obligations while, at the same time, maintaining their financial commitments to investors.

This chapter begins by presenting and summarizing some of these historical patterns and themes. It then assesses current issues and provides some suggestions for thinking about, and planning for, an uncertain future.

Historical Scenarios

Why an entrepreneurial industry became subject to economic and quality of service regulation4

Entrepreneurs were responsible for the creation and expansion of the electric utility industry in the late 19th and early 20th centuries. The cost of constructing a central


1William G. Riggins is General Counsel, Great Plains Energy.
2The author is indebted to Jerry Pfeffer, Gerald Reynolds, and Robert Zabors for their contributions to this chapter.
3A Chinese proverb.
4Sources for this subsection include Welch, Francis X., Cases and Text on Public Utility Regulation, 1st ed. (Washington: Public Utilities Reports, Inc., 1961), pp. 543–544, 547–548, 550–552; Vennard, Edwin, The Electric

Ch01-I044949.qxd 5/23/06 11:36 AM Page 3


generating station and distribution facilities was high. Initially, therefore, electricity was considered a luxury. Its use was limited to “public service” applications such as street lighting and to more affluent customers. However, demand grew as new, efficiency-enhancing devices that used electricity, such as the electric motor, were created. These devices came to be considered necessities, not luxuries, and so did electricity. When problems began to emerge with the supply and pricing of this essential service, regulation of rates and service emerged as the public policy response.

Initially, electric companies were formed and operated wherever the investors thought they could profitably sell electricity. Companies competed vigorously with each other for customers. They operated at various voltages with different kinds of equipment. There was significant duplication of facilities. In some cases, multiple sets of wires were strung haphazardly throughout city landscapes. Companies engaged in price wars to increase market share for industrial and commercial customers who were able to shop among suppliers. They offset any revenue losses by charging extremely high rates to “captive” customers who had no other supply option. As a result of these tightening margins and competitive pressures, some electric companies began to fail.

These dynamics of inferior service, widely varying rates, and inadequate or no returns to investors were occurring at the same time that demand for electricity, and the capital requirements to supply that demand, were growing rapidly. Economies of scale emerged as an industry driver. Larger systems had numerous inherent benefits such as:

1. facilitating the replacement of small, obsolete, or inefficient units,

2. facilitating standardization of equipment and facilities,

3. improving load and diversity factors,

4. centralized purchasing,

5. more efficient use of a specialty labor force.

These benefits resulted in operating economies and more uniform and dependable service. The bigger and more efficient the generating plant and distribution network, the lower the price per unit of electricity produced and delivered to end-users.

Ironically, regulation emerged as a result of two apparently contradictory dynamics. On the one hand, major suppliers such as Samuel Insull, who had aggregated a number of small suppliers into a large, national holding company system, actively lobbied for regulation. Their goal in doing so was to insulate their local franchises from competitive entry and thereby protect their profits. On the other hand, policy-makers began to doubt that the new industry was conducive to competition or that market forces alone were likely to be effective in bringing about adequate service at reasonable rates. Early regulatory efforts in the 1900s–1920s included state legislatures and municipal governing bodies directly

Power Business, 2nd ed. (New York: McGraw-Hill Book Company, 1970), pp. 9, 69; Bradley, Robert, “The Origins of Political Electricity: Market Failure or Political Opportunism?” Energy Law Journal, 17(1) (1996), 59–102.


regulating public utilities by statute and ordinance. When it became obvious that regulation was a full-time job requiring specialized skills and expertise, states began to form specially organized boards with powers over utility service and rates.

Federal regulation followed in the 1920s and 1930s, as electric service expanded from downtown areas throughout urban regions. To enhance the economy and reliability of power supply, the first higher-voltage transmission lines began to be constructed to interconnect local and regional systems spanning several states. Electric power assumed even more of the characteristics of an essential service to interstate commerce. This created a perceived need for federal legislation to facilitate industry growth and to avoid conflicting and parochial state regulatory mandates. The first significant federal legislation came with enactment of the Federal Power Act of 1935, which expanded the role of the Federal Power Commission, previously limited to the licensing of hydroelectric projects. The act expanded the commission’s jurisdiction to include the interconnection and coordination of electric facilities; mergers, security issuances, and asset sales; wholesale rates; adequacy of service; asset valuation; and accounting practices.

Financial and structural regulation5

Regulation of public utility securities and corporate organization developed after rate regulation. In the early days of regulation, it was not considered necessary to regulate the financial and corporate structure of the industry. During a period of relatively simple corporate organizations and capital structures that were not excessively leveraged, the prevailing view was that the utility’s rates and service were sufficiently controlled on the basis of property values, regardless of the company’s capitalization. Therefore, it did not make much difference, from the customers’ perspective, how the company organized its capital structure or acquired the capital needed for expansion. Congress and state legislatures responded with financial regulation only when the holding company excesses in the late 1920s and early 1930s demonstrated that a company’s organization and capitalization could negatively affect its ability to serve and unnecessarily increase its rates.

During the 1920s, the electric utility industry, faced with increasing demand, needed to build plants and to raise a great deal of new capital. At the same time, holding companies were emerging as the predominant corporate structure. Holding companies facilitated the consolidation occurring in the industry and provided greater financial flexibility to the entrepreneurs that still controlled most utility companies, many of which were owned by non-utility holding companies that were interstate in character. By 1932, 49% of the investor-owned electric utility industry was controlled by three holding companies. Another 35% was controlled by the next 12 largest holding company systems.

Over time, however, the scale economies that encouraged consolidation into complex holding company structures became secondary to a wide range of financial abuses. In fact,


5Sources for this subsection include Welch, op. cit., pp. 615, 617, 641–644; Hawes, Douglas W., Utility HoldingCompanies, 4th ed. (New York: Clark Boardman Company, Ltd., 1987), pp. 2–5, 2–12, 2–13, 2–15.


the complex corporate structures made the abuses difficult to detect. Eventually, however, the abuses were exposed, and public concern began to grow about the evolving industry structure. These abuses included pyramiding of corporate organizations (which was used to magnify control and/or profits), excessive leveraging of capital structures, and abusive affiliate dealings within the holding companies. In addition, some holding companies collapsed because of faulty acquisition and diversification strategies or because of accounting inadequacies. In 1928, Congress directed the Federal Trade Commission to investigate holding companies and abuses in the electric and gas utility industries. This investigation, which lasted a number of years, documented numerous financial abuses and, in turn, led to enactment of the Public Utility Holding Company Act of 1935 (PUHCA). The act was primarily directed toward simplifying holding company structures, eliminating businesses unrelated to the utility industry from holding company structures, and regulating service contracts between affiliated companies. As a result, the number of registered holding companies dropped from more than 200 in the 1930s to approximately 30 in 2005.6

Regulation driven by environmental, national security, and safety issues7

Between the 1940s and 1960s, electricity usage doubled every decade. The post-war economic boom increased demand in all customer segments. The cost of electricity declined significantly as scale economies combined with technological innovation to reduce the unit costs of production. This reduction in unit costs accelerated as increasingly larger coal- and oil-fired base-load generating plants were introduced. This phenomenon resulted in greater electrification of the economy. In the late 1960s and 1970s, however, a number of factors converged which threatened the financial viability of the industry in a manner unseen since the industry’s financial problems of the 1930s.

It was during the 1960s and 1970s that evolving public concern about the environmental effects of energy production technology resulted in the enactment of new laws that effectively became a major form of indirect economic regulation of the utility industry. New legislation such as the National Environmental Policy Act, the Clean Water Act, and the Clean Air Act, and the accompanying regulatory initiatives, became as important to the industry as traditional regulation of rates and services. It also drove utility management decision-making concerning the fuel choice for new generation facilities. Although large supplies of coal were available domestically, and the technology for converting coal to electricity was mature, many utilities began to look for alternatives because of the expense and uncertainty associated with constantly changing environmental regulations.

On a national level, natural gas generally was not considered a cost-effective alternative to oil and coal for power generation. A bifurcated gas market resulting from federal price regulation limited the availability of gas at reasonable prices in regions that did not produce natural gas. Therefore, the use of natural gas for new generation was limited to the


6There are currently 31 “top tier” registered holding companies. The total number of registered holding companies exceeds 50 due to the fact that some registered holding companies own other registered holding companies.
7Sources for this subsection include Congressional Research Service, Electricity: A New Regulatory Order? (Washington: U.S. Government Printing Office, 1991), pp. 158–164, 205–206, 212–216.


gas-producing regions of the Gulf Coast and Southwestern states. Over time, however, changes in the nation’s energy supply and geopolitical considerations (e.g., the oil embargoes and natural gas shortages of the 1970s) resulted in the passage of the Powerplant and Industrial Fuel Use Act that prohibited new facilities from burning natural gas or petroleum products to generate electricity.8 At the same time, the energy crisis also brought about the demise of controlled pricing for oil and natural gas, and prices for those commodities (which still could be used as boiler fuels for existing facilities) equalized across the country, albeit at substantially higher prices.

One of the few options available to the industry was to place a greater reliance on nuclear energy. The first commercial nuclear generating plants were developed in the 1960s with considerable support from the federal government. They were able to exploit plentiful supplies of uranium, technical skills developed in the naval reactor program, and an enrichment capability developed for military purposes to produce electricity at costs that initially were projected to be “too cheap to meter.” More importantly, from environmental and fuel supply reliability perspectives, nuclear plants did not raise air or water pollution concerns and reduced U.S. dependence on imported fuels. However, at the same time as the industry sought to increase its reliance on nuclear energy, a growing awareness of the safety risks associated with nuclear generation and a slowing demand for electricity combined to create a “perfect storm” for utilities involved in nuclear generation.

Increasing public opposition to nuclear plant development delayed the completion of new plants that were planned or under construction. Double-digit inflation increased the capital carrying charges associated with plants whose costs could not be recovered until they commenced commercial operation. Complying with constantly changing federal safety regulations proved to be an expensive and time-consuming process. The industry’s nuclear-related problems accelerated dramatically after the accident at Three Mile Island triggered a de facto moratorium on new plant orders and intensified the regulatory burdens and delays for plants already under construction.

By the late 1970s and early 1980s, these problems caused some companies that were building or planning nuclear plants to begin rethinking their position. The impact of allowing utilities to capitalize the carrying charges of plants under construction was customer rates that spiraled upward. The combination of huge cost overruns and uncertain cost recovery made it prohibitive for some utilities even to complete their plants. Those utilities chose to minimize their losses by simply abandoning and writing off their partially completed plants. In other cases, the plants were completed, and utilities were forced to seek huge rate increases to recover the costs. These requests, along with the omnipresent safety debate, created a political and, ultimately, a regulatory dynamic that had severe adverse effects on the entire industry. Regulatory commissions slashed rate requests, and only allowed the utilities to recover costs and earn the minimal returns necessary for survival. The quality of utility earnings also deteriorated as the non-cash component (reflecting capitalized interest costs) increased, and some companies resorted to borrowing to cover their dividends. The notion of utility stocks as “debt equivalents” vanished as dividends were cut or eliminated, and earnings volatility increased dramatically. It took years for some of these

A perspective on regulatory risk in the electric industry 7

8. The act was repealed in 1987.

Ch01-I044949.qxd 5/23/06 11:36 AM Page 7


utilities to return to a financially sound position. Some never did and were acquired in the wave of industry consolidation that began in the late 1980s and is still ongoing.

Economic and financial deregulation9

In 1978, partially in response to the energy crisis and to major power outages, Congress passed the Public Utility Regulatory Policies Act (PURPA). PURPA provided a starting point for deregulation and competitive entry into the generating segment of the market by providing incentives for non-utility cogeneration and small power production. Initially, a surge of new cogeneration, small hydroelectric, and biomass projects developed. Utilities were mandated by PURPA to purchase the output of these facilities at the utilities’ “avoided cost.”10 This mandate enabled these facilities to be financed with project financing secured by long-term contracts with utilities. PURPA’s incentives also were extended, however, to alternative fuel technologies that were immature and/or uneconomic. For that reason, these technologies never achieved the hoped-for levels of market penetration. In addition, other continuing regulatory barriers discouraged many non-utility entities from developing more traditional generating facilities that did not qualify for PURPA benefits, because doing so would have subjected those entities to regulation as public utilities.

Enticed by the price reductions and increased innovation that had occurred in other deregulated industries, Congress tried again with the Energy Policy Act of 1992, which did accelerate the development of regional, competitive wholesale power markets. Among other things, and unlike PURPA, the act gave the Federal Energy Regulatory Commission (FERC)11 authority to require access to the transmission grid for wholesale generators. During the next few years, the FERC required utilities owning transmission to provide open access to their transmission systems under standard terms, conditions, and rates. It also promoted regional entities that would independently operate transmission systems. These developments provided the incentive for non-utility independent power producers (IPPs) to construct gas-fired generation wherever gas and transmission lines were in reasonable proximity.12 The legislation also created a new designation, “exempt wholesale generators,” that protected new wholesale market entrants from regulation under PUHCA.

In state-regulated retail markets, price disparities were a major driver of efforts to implement retail competition. States where prices were high, like California, embraced the concept. By 1997, some form of retail competition had been authorized or was under consideration in nearly half the states, which included nearly two-thirds of the country’s population. By 1998, one survey suggested that 80% of electric industry executives believed that all retail electric customers would be able to choose their supplier by 2005.


9. Sources for this subsection include Washington International Energy Group, The 1998 Electric Industry Outlook (Washington: Washington International Energy Group, Ltd., 1998), pp. 6–8, 30.
10. In the context of PURPA, avoided cost generally was considered to be the marginal cost of utility production.
11. The successor to the Federal Power Commission.
12. Gas-fired generation (which had been prohibited for several years in the 1970s and 1980s) was preferred because the plants are relatively inexpensive and can be built relatively quickly.


There was wide variation among the retail competition plans at the state level. In some cases, this led to dysfunctional outcomes. In California, for example, although the wholesale market was deregulated, retail prices were capped. The state’s three investor-owned utilities were encouraged to sell their generation as part of a deal related to stranded cost recovery.13 At the same time, however, they were prohibited from entering into long-term contracts with suppliers to hedge their risk and were forced to buy all of the power needed to serve their residual retail load in volatile spot markets. Unlike other regional energy markets, such as those established in the Northeast, the forward energy market and the network reliability functions were organizationally separated.

With this inherently flawed market system in place, California began to experience dramatically higher natural gas and emission allowance prices.14 There was an unforeseen surge in electricity demand because of the state’s economic growth and unusual weather patterns. Due to siting and environmental rules limiting new power plant construction, no plants had been built in the state in more than 10 years, and the state’s aging portfolio of plants experienced high outage rates. Similar rules had made it difficult to construct new transmission lines, and this limited the amount of power that could be imported. Droughts in the Northwest reduced the amount of hydropower that was available for import.

During certain volatile periods, California’s utilities sometimes were forced to pay thousands of dollars per megawatt hour (MWH) for wholesale purchases in the spot market to maintain reliability. However, because of the price cap, they were able to charge their retail customers only $60–70 per MWH. Within 6 months, this price squeeze resulted in utility debt of $10–12 billion, rolling blackouts, and, for one of the state’s major utilities, insolvency. The problem was exacerbated by significant abuses of a flawed market design by several of the major energy trading companies, which moved quickly to exploit the opportunities created by newly deregulated wholesale markets. Utility customers in California, meanwhile, were paying only a fraction of the cost of power purchased on their behalf and had little incentive to conserve or to shop for alternative suppliers.
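The scale of the squeeze follows from simple arithmetic. The sketch below uses invented figures, chosen only to be consistent in magnitude with the prices and timeline described above:

```python
def squeeze_shortfall(load_mwh_per_hour, wholesale_price, retail_price, hours):
    """Cash shortfall when a utility buys at spot prices but sells at a capped retail rate."""
    return load_mwh_per_hour * (wholesale_price - retail_price) * hours

# Hypothetical volatile month: 10,000 MWH/hour of residual retail load bought
# at an average of $1,000/MWH in the spot market, resold at a capped $65/MWH.
loss = squeeze_shortfall(10_000, 1_000, 65, hours=30 * 24)
print(f"One month's shortfall: ${loss / 1e9:.1f} billion")
```

At these assumed magnitudes, a single volatile month produces a multi-billion-dollar gap, which suggests how $10–12 billion of utility debt could accumulate within 6 months.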

“The California Experience” was the primary contributor to a shift in political momentum in the national deregulation movement. Beginning in the late 1990s, many states slowed or reversed their deregulation efforts. At the federal level, the FERC began investigations into alleged market abuses in California, thereby creating a threat of widespread contract abrogation and refund obligations for the newly deregulated merchant sector of the industry. At the same time, the industry’s cash flow and its access to capital markets on reasonable terms deteriorated significantly. The concern about deregulation was exacerbated by the collapse of Enron and several other trading companies implicated in the California crisis and its aftermath. Newer plants constructed by IPPs to take advantage of newly deregulated markets became liabilities instead of assets and were placed on the market at


13. Stranded costs were costs incurred by utilities under their obligation to serve all customers that would not be recoverable in a competitive market.
14. Emission allowances, created by the Clean Air Act, authorize a generating unit to emit 1 ton or 1 pound of specified pollutants during or following a given year. The number of allowances allocated to each unit is established by federal or state environmental regulatory agencies. Emission allowances are a marketable commodity that can be saved, transferred, sold, or purchased.


prices less than book value. Broader concerns about the reliability of audited financial statements (across all publicly traded companies) worsened the situation for utilities in the “accounting crisis” of 2001–2003 that led to, among other things, the Sarbanes-Oxley Act and a new layer of accounting, reporting, and governance regulation.

The problems in the generating and energy supply sector also had an indirect effect on the wires sector of the industry. Investment in transmission and distribution, which had lagged generation growth in the 1980s and 1990s, slowed to a trickle as uncertainty over future industry structure and the federal interest in transmission divestiture made utilities reluctant to commit new capital. At the same time, the bubble in energy trading increased the demand for interregional energy transfers and further strained an inadequate transmission network. Several massive regional outages have highlighted the need for tens of billions of dollars of investment in new infrastructure at a time when the industry is ill prepared to commit such capital.

Themes

The foregoing discussion illustrates how certain industry themes drove the perceived need for economic regulation of the power supply industry during various phases of its growth and maturation. The critical factor that defines this interaction is that, once electricity emerged as an essential service, a temptation to regulate was inevitable. For example, in the 1900s and 1920s, the public became unhappy with high or disparate prices and inferior service. The companies desired protection against encroachment by new market entrants and local patchwork regulatory efforts; the result was comprehensive price and service regulation. This new regulatory order facilitated industry consolidation. Industry consolidation ultimately created opportunities for “gaming the system,” and financial abuses and company failures resulted. This led to financial regulation and restriction of utilities’ ability to diversify.

In the 1960s and 1970s, for various reasons, regulation restricted, and in some cases foreclosed, the ability of utility management to finance needed growth in infrastructure solely on the basis of traditional “free market” forces. When combined with external forces such as high inflation, these regulatory prohibitions restricted market-based responses to changing supply and demand conditions. This, in turn, created inefficiencies, supply shortages, and higher prices. When frequent price spikes became a political issue, two relevant regulatory responses occurred. First, aggressive regulation restricted price increases but created financially vulnerable companies. This, in turn, stimulated industry consolidation once again. Second, Congress took a small step toward re-introducing competition in the generating segment of the industry through passage of PURPA.

In the 1990s, price disparities and public expectations of the same price reductions that had occurred in other deregulated industries drove the electric utility industry toward wholesale and retail deregulation. This encouraged diversification as utilities looked for ways to replace the revenue from customers that would be lost to competitors and to transform their business models in the hope of achieving the higher growth expectations necessary to compete with the higher multiples being assigned to new market entrants.


High prices, inferior service, company failures, and poorly designed markets that facilitated financial abuses brought the movement toward competition to a halt. Customers paid much of the cost associated with bringing insolvent utilities out of bankruptcy, and political leaders called for re-regulation. A great deal of capital was spent to construct plants that ultimately proved to be uneconomic.

Current Issues

Today’s electric utility industry is desperately in need of greater regulatory predictability. Political momentum, however, wavers between continuing down the road toward competition and re-imposing some form of cost-based regulation. This uncertainty and indecision have resulted in a temporary, uneasy equilibrium.

In many respects, this mixed competitive/regulated environment is the worst of both worlds and only exacerbates market risk. The attempted regulatory responses to date often have been incomplete and/or inconsistent. Perhaps this is because they address issues using the faulty premise that a single utility market structure still exists. In fact, regional differences in market structures are pronounced and range from traditional regulated models to liberalized, unbundled ones.

In the absence of a national energy strategy or policy, and given the difficulties of passing comprehensive energy legislation, Congress tends to address energy matters on an ad hoc basis through spending bills that neither coherently nor comprehensively address energy issues in a strategic or integrated way. Even when consensus develops to pass “comprehensive energy legislation,” the many components of the legislation vary widely in their effectiveness and value.

A recent example is the Energy Policy Act of 2005, the first comprehensive energy legislation enacted by Congress in 13 years. Various factors served to create consensus on the need for such legislation. These included:

1. global geopolitical developments that, once again, highlighted U.S. vulnerability to oil supply disruptions,

2. increased public awareness of global warming,

3. renewed interest in non-fossil technologies such as wind, solar, and nuclear,

4. a recent major power blackout in the Eastern United States that underscored the need for massive new investment in transmission and for federal reliability standards.

Even though the legislation contained provisions that had widespread support (such as federal reliability standards for the power industry), it was nearly doomed by parochial issues, such as insulating petrochemical companies from liability associated with certain gasoline additives. In addition, to gain the votes needed for passage, the legislation included a number of non-market-based incentives for the development of selected renewable energy, “clean coal” technology, and “advanced” nuclear reactors. It also included provisions that essentially repeal PUHCA and modify PURPA. Given the lessons of history, one might


confidently predict, as PURPA demonstrated in the 1980s, that non-market-based incentives for otherwise uneconomic technologies are unlikely to contribute significantly to the development of a sustained, widespread competitive market or otherwise increase the efficiency of power supply. It also is likely that the repeal of PUHCA, which was enacted to discourage consolidation and diversification, will stimulate further industry consolidation and ultimately encourage diversification. It may also attract a genre of owners that are primarily or solely interested in financial performance.

The change in administrations and the political mix in Congress also have been reflected in a more pragmatic approach to regulation of the industry. During the past several years, the Environmental Protection Agency (EPA) has attempted to bring some regulatory certainty to owners of coal-fired generation. It took definitive positions in new proposed regulations and was less aggressive in pursuing some of the litigation it inherited from the prior administration. The EPA promulgated a series of regulations designed to reduce nitrogen oxide, sulfur dioxide, and mercury emissions. It attempted to clarify the situations in which changes to existing plants would trigger compliance with more stringent emission requirements and require investment in emission reduction infrastructure. Once again, however, regionalism became evident as numerous Northeastern states, concerned both with air quality and economic competitiveness, cooperated with environmentalists to challenge EPA action in litigation. Thus, uncertainty regarding the operational and financial risks associated with coal-fired generation is likely to continue as long as litigation is pending and until a clear national environmental policy emerges.

The FERC has continued its focus on encouraging new transmission development and assuring access to transmission for new market entrants. It has, however, backed off of its efforts to mandate a national standard market design. This change in position was prompted by strong opposition from industry and political leadership in Southeastern and Northwestern states with low-cost power. The federal agency continues to push utilities to relinquish control over the operation of, if not ownership of, their transmission systems to independent organizations. Its rationale is that open-access transmission tariffs alone will not totally eliminate the ability of transmission providers to favor their generation affiliates, which, in turn, will discourage the development of a healthy and stable wholesale energy market. Transmission-owning companies, however, are understandably reluctant to invest significant amounts of money in assets they do not control. As noted, the FERC’s prior efforts to unbundle generation and transmission assets have led to conflict with state regulators who are concerned about numerous issues. These issues include the priority of retail customers for transmission capacity and the jurisdiction of state agencies to site new transmission lines. From the state regulators’ perspective, both of these issues have potential impacts on cost and quality of service. State regulators in low-cost states also are concerned that FERC efforts to promote large, regional wholesale power markets will divert their low-cost power supplies to higher cost regions.

As a result of the 2003 widespread blackout and pervasive transmission constraints, the FERC also has continued its focus on mandatory reliability rules. Congress established a system of voluntary compliance with transmission operating standards after the Northeast Blackout of 1965. This system was deliberately structured to rely on industry “self-regulation” and to keep governmental involvement to a minimum. However, in a restructured


market characterized by many non-utility generators moving power over long distances, and with the cost pressures on utilities, compliance with voluntary standards has eroded. The current challenge is for the FERC to develop meaningful enforcement mechanisms as it implements the reliability provisions of the Energy Policy Act of 2005.

The foregoing issues are being debated and deliberated in the context of new geopolitical realities shaping the industry’s future risk profile. Nuclear power plants, utility information systems, transmission grids, and substations all have been identified as terrorist targets. As referenced earlier in this chapter, regulatory uncertainty has contributed to a need for infrastructure investment, particularly in transmission. This need is exacerbated by forecasts for economic recovery and growth, with the attendant increase in demand for electricity.

Assessing Risk in Terms of What Is Unknown and What Is Unknowable15

It is sometimes said that events in the electric utility industry unfold in slow motion. This truism, considered in isolation, would lead one to believe that utility management should be well positioned to deal with unanticipated regulatory changes. The obvious problem is that, although potential regulatory changes can be identified in advance, the lead times for response in a highly capital-intensive industry with enormous infrastructure needs are measured over a much longer planning horizon.

In the past, regulatory changes were prompted by high prices, bad service, consolidation, financial abuses, and industry failures. In response, policy-makers have either extended regulation, attempted to stimulate competition, or both. Utilities have sometimes supported these governmental initiatives. The industry’s primary strategic response to risk, however, has been to diversify its risk exposure. Often, the preference has been to achieve that diversification quickly through acquisitions, which results in industry consolidation. Typical diversification strategies have included:

1. geographic diversification to reduce the impact of a single regulatory commission on a utility’s financial structure,

2. fuel diversification to minimize the impact of regulation on a specific fuel source,

3. electric/gas convergence to smooth cash flows and reduce costs,

4. diversification into non-utility businesses, which in turn facilitates removing some functions of the utility business from regulatory purview.


15. Sources for this section include an article entitled “Consequential Heresies: How ‘Thinking the Unthinkable’ Changed Royal Dutch/Shell,” written by Art Kleiner in 1989 for Doubleday as a prototype for a magazine called Currency; and The Application of Banking Models to the Electric Power Industry: Understanding Business Risk in Today’s Environment, Karyl Leggio, David Bodde, and Marilyn Taylor, March 2003. The latter paper defines unknown risks as “risks that are knowable with new technologies, additional research, or a shift in resources. . . .” Unknowable risks are those that cannot become known regardless of the “amount of research or resources deployed. . . .”


As noted, however, some of these risk diversification strategies have not been successful, especially when companies desperate to attract new capital have entered “higher growth” markets that are far removed from their core competencies.

In recent years, utilities also have undertaken numerous initiatives that, although smaller in scope, have a significant cumulative impact when successful. These include seeking legislative or regulatory authority for power cost adjustment mechanisms, environmental compliance surcharges, pre-approval of significant capital projects, incentive rates, and tariff redesign. These efforts are aimed at reconciling regulation and market risk, and at reducing regulatory uncertainty. Underlying all of these utility efforts, both large and small, are strong political and community relationships. States, counties, and municipalities are recognizing that utility investment in supply and reliability is a powerful tool for economic development.

Utilities are fortunate when compared to companies in other industries that provide non-essential services in a highly competitive market. Since regulatory change develops slowly and is prompted by crisis or controversy, much that is unknowable becomes merely unknown. The emergence of traditional indicators of regulatory change, such as increasing prices, environmental accidents, company failures, deteriorating service quality, or accelerating industry consolidation, should alert utility management that currently unknown risks are beginning to build. Traditional governmental responses to these indicators provide a historical reference point, but the challenge is how to identify, categorize, and develop action plans to address these unknown risks. Companies that can develop and ingrain such a dynamic and adaptive process into their corporate cultures will establish themselves as a true exception: a proactive company in a historically reactive and cyclic industry. The potential rewards from occupying that position are self-evident.

Precedent and analytic tools exist for companies willing to invest the efforts of their best strategic thinkers and their operational experts. Royal Dutch/Shell used scenario planning to foresee the 1973 energy crisis 2 years before it happened. At the time, the price of oil had remained steady for 25 years. Only 2 of Shell’s 40 top managers thought that the price of oil would rise above $2 a barrel. Nevertheless, Shell imagined prices at an outlandish amount, $10 a barrel, and developed contingency plans that included converting refineries so they could quickly switch to refining oil from different countries. Within a year and a half, oil prices were $13 a barrel. The energy crisis was the beginning of a turnaround for Shell. At the time the crisis began, Shell was considered the least profitable of the major oil companies. By the late 1980s, Shell was the most profitable oil company in the world. Shell prospered in a crisis because it was prepared for it.

Scenario planning, which considers adaptive behavior under alternative futures, is uniquely suited for identifying and categorizing unknown utility risks. It focuses on both facts and perceptions and considers internally consistent combinations of variables. It transforms information into fresh perceptions. It forces decision-makers to question “the rules” governing their industry and ultimately to change their perception of the reality of their business environment. Uncertainty and risk management are institutionalized as a part of the decision-making process. Value is created from the scenarios themselves because they lead to better planning and more options. The process also identifies triggers for re-evaluating strategy, thus leading management to become more flexible and creative.


The Future16

As mentioned numerous times in this chapter, the defining characteristic of electricity is that it is essential to our quality of life, the operation of our economy, and our security. The attributes of the industry, in terms of its capital intensity, long lead times, regional infrastructure, and inability to stockpile its essential output, are unique and not readily subject to change. Investment in transmission and distribution (a significant percentage of Gross Domestic Product) is most efficient through a single provider. Given those facts, it is reasonable to expect that the transmission and distribution of electricity will remain regulated to some extent. As for generation, it is certainly possible that governmental policy, for the foreseeable future, will continue to encourage niche competition and aggressively regulate certain types of generation, such as coal and nuclear.

The cycle of events and regulatory reactions in the industry is well established. As discussed in the previous section, scenario planning in this context should enable the development of strategic initiatives that are sufficiently robust to proactively meet unknown risks as they become known. Scenario planning offers even more value, however, when it is extended further by developing contingency plans for the unknowable. This is especially true given the long lead times necessary to develop utility infrastructure.

For example, the unknowable could be a new technology (e.g., wireless power transmission) that completely changes the nature of the electric utility business and the dynamics of industry structure and market forces. Dramatic technological innovations that would minimize the importance of the current electric infrastructure, and consequently of regulation, have been predicted for years. Indeed, one argument in support of deregulation of the industry in the 1990s was that competition would provide the incentive for the development of new technologies that could vastly reduce costs and enhance the quality of our environment. Clearly, some of those innovations are likely to happen, but the time frame is unknowable. The unknowable also could be external forces of enormous consequence. Is the industry in denial about the possibility of terrorism, or a widespread nuclear shutdown (as occurred in Japan in 2001 and 2002), or end-use technology such as liquid crystal displays replacing lights, or fuel cell breakthroughs, or interest rate volatility, or dramatically rising coal prices, or electric vehicles?

In retrospect, major transformations have taken place in the electric industry in the past 100 years. Utilities should focus on the tools that will position them to prosper in the next 100 years. By planning for both unknown and unknowable risks, utilities can identify common causal relationships between apparently unrelated factors that are interwoven throughout various scenarios. Developing strategic plans based on these causal relationships not only minimizes risk, it provides a competitive advantage during times of industry turmoil and better positions those companies to thrive under a wide range of future conditions.


16. Sources for this section include Leggio, Bodde, and Taylor, op. cit.


CHAPTER 2

Electricity Risk Management for the Beginner: What It Is, Why It Matters and How To Deal With It

Leonard S. Hyman
Senior Associate Consultant
R.J. Rudden Associates

Andrew S. Hyman
Marketing Director
Fiske Walter Capital Management, Ltd.

The restructuring of the electric supply industry has not only shifted risks but also opened all participants, including consumers, to new risks. Multi-billion dollar failures and political crises testify to the severity of the consequences of inadequate assessment and management of those risks. This chapter examines some of those risks and some of the existing tools that can mitigate them.

Introduction

The dictionary defines “risk” as:1

a chance . . . of danger, loss . . . or other adverse consequences

and “manage” as:2

organize . . . be in control of . . .

Risk managers do not necessarily eliminate risk. Rather, they control its impact on the business or on the consumer, at a cost that the potential beneficiary must pay. Risk management plays an essential role in the operation of businesses and households. Firms will undertake complicated financial transactions to protect themselves from the effects of swings in the prices of products that they consume or sell. They even protect themselves from changes in interest rates. Households buy insurance policies to protect themselves from


the risk of flood, fire, medical expenses, or the untimely death of the breadwinner. Protection, of course, costs money, so the person or firm buying the protection has to balance the cost of the protection against the ability to withstand a loss, and may decide to insure against only part of the risk. Electricity suppliers and consumers now have to engage in that same process.

Old Versus New Electricity Model

In the old, fully regulated era, the electric utility could ignore risks, to a great extent, because the utility passed all costs on to the consumer (except when the regulator deemed the costs imprudent), at least in theory. Cost of service, or rate of return, regulation was really cost-plus regulation. In effect, the utility did not take out an insurance policy because the regulator might object to the extra cost of the policy that consumers would have to pay in the times when the untoward event did not occur. The consumer took the risks, often unknowingly, and suffered the consequences. The regulator decided the level of risk for the consumer.

If operating expenses rose due to a decision of the company, consumers paid more. If capital expenditures rose due to poor construction management, the consumer paid. If the company mismanaged its money raising, the consumer footed the bill.

The shareholders, however, took the risk of delay between cost increases and regulatory action to correct prices. They also received the benefits when costs fell before regulators could correct prices, but, by and large, the companies suffered from the lag. They generally did not earn the return allowed by regulators.3 The shareholders also bore the risk of ex post regulatory findings of imprudent expenditures, which regulators would not allow utilities to pass on to customers for recovery. Those findings of imprudence destroyed billions of dollars of industry capital. Regulators, however, often feared to punish utilities severely, because they wanted to maintain the utilities in a financial condition that permitted them to meet the demands of consumers. Thus, utility creditors rarely suffered the ultimate consequence of poor investment decision-making: loss of principal through bankruptcy.

Perhaps service outages best typify the risk-sharing arrangement. Beyond some limited payments from the utility for small damages, the customer bears the burden. Utility managers will explain that putting lines underground or other protective measures are too expensive, meaning, rationally speaking, they do not expect the regulators to approve the price increase needed to pay for the service protection improvement. Neither they nor the regulators, however, balance the cost of improvement against the cost of poor service borne by the consumers. Yet the cost of unreliability to the economy may approach an astounding $100 billion annually, according to an estimate from the Electric Power Research Institute (EPRI).4

Restructuring the electric industry into its present semi-competitive state has shifted some burdens of risk, and made others explicit. Utilities that engage in competitive generating activities face an array of risks that they have to manage, because they cannot pass them on to customers in an automatic fashion, plus they face the danger that a governmentally approved compliance officer will undo transactions months after the fact. Much of the regulated part of the business will operate as before, except for the provider of last resort (POLR) service. In that service, the utility will continue to procure electricity for the bulk of its customers, those who decide not to choose a competitive supplier, at terms set by the state. It may have to take the risks of participating in a volatile market without proper (or any) compensation, one of those risks being that politicians will second-guess its activities.

Ch02-I044949.qxd 5/23/06 11:37 AM Page 17

Customers, of course, now take the explicit price risks and face price variability, but they took price risk before, in the regulated days, because they had to pay for whatever the utility did. The difference, though, is that in the old days, the regulators smoothed price changes, and consumers could plan ahead easily due to the length of the process, and possibly control it through legal action. Today, price variability is greater on a short-term basis (day-to-day or hour-to-hour). Price is less predictable, and less controllable through legal or political pressures. Greater variability means greater risk.5
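
The claim that greater short-term variability means greater risk can be made concrete with a small numerical sketch. The hourly prices below are invented for illustration, not real market data; the comparison simply contrasts the dispersion of unsmoothed hourly prices with a regulator-style flat tariff that recovers the same average cost:

```python
import statistics

# Hypothetical hourly wholesale prices ($/MWh) over one day in a
# volatile market -- invented figures for illustration only.
hourly = [28, 25, 24, 26, 30, 45, 80, 120, 95, 60, 48, 40,
          38, 42, 55, 90, 150, 110, 70, 50, 40, 34, 30, 28]

# A regulator-smoothed flat tariff recovering the same average cost.
flat = [statistics.mean(hourly)] * len(hourly)

print(f"average price:        {statistics.mean(hourly):6.2f}")
print(f"hourly std deviation: {statistics.stdev(hourly):6.2f}")
print(f"flat-tariff std dev:  {statistics.stdev(flat):6.2f}")
```

Both price paths recover the same revenue, but the customer exposed to the hourly series bears far more dispersion, which is exactly the extra risk described above.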

The more open market increases the chance that the utility will face technological innovation that could make the utility's operations obsolete, or at least leave the utility unable to raise prices enough to earn the allowed return. Regulators may not recognize that risk when setting prices that supposedly incorporate cost of capital.

And, of course, regulatory risk continues, as before, but it takes on new meaning in the supposedly competitive sector, because of the government's willingness to change the rules, reopen deals and interfere in the conduct of market participants. The participants must not only guard against the usual risks of producing a commodity but also the possibility that regulators will view their actions, at a later date, as exercises of market power that require revision of terms of the transactions. The Federal Energy Regulatory Commission (FERC) never deregulated the wholesale energy markets, after all. It, simply, works on the presumption that competitive markets produce the equivalent of the “just and reasonable” price required by law. If FERC does not consider the market competitive, it can impose its own price on the market.

The new structure of the market creates another risk of doing business. Previously, the utility controlled the entire chain of supply from production to delivery and billing. Presumably, all links in the supply chain worked together to produce a final product satisfactory to the customer. In the new structure, nobody controls the entire supply chain, and a number of the links in the chain have no contact with the customer and no incentive to work with the other components to assure a satisfactory final product. Thus, every firm in the supply chain takes the risk that other suppliers may take actions that would hurt its investment or sales.6

In the end, the additional risks will affect the cost of capital of those in the supply chain whose risks have increased, while reducing the cost of capital of those suppliers in the chain that have less to worry about. The efficiency savings that restructuring should bring in operations, then, must exceed the increase in the cost of capital, or consumers, in the long run, will not benefit from the restructuring. The same comment applies to consumers. Increased volatility of prices increases consumer risk. The benefits must exceed the value of the increased risks. Simply lowering prices while increasing risks may not improve consumer welfare any more than raising profits while also raising risk improves investors' wealth. That is a point missed by the big energy players of the nineties and one totally ignored by policy makers.

Regulatory Risk

Regulators expect utilities to invest capital into facilities before knowing what return regulators will allow the investment to earn. Furthermore, the regulators do not distinguish between the risk levels and required cost of capital for discrete projects, but rather choose an overall return based on past experience. In order to reduce the risks of the process, rational utility managers should select projects with below-average risk, whether or not those projects lead to the lowest costs for consumers. Conceivably, rational business – as opposed to utility – managers might refuse to invest altogether. As a solution, regulators could set a return, in advance, for a specific investment. For the water industry, some regulators permit automatic price adjustments to cover the costs of ongoing small investments. As a compromise step, regulators could allow the utility to earn a specified return on capital being used during the construction process (construction work in progress in rate base).

Regulation works slowly. A rate case could take a year or more from filing to decision, with its results based on calculations for a test period completed before the filing of the case. Thus, the utility may end up setting future prices based on stale assumptions of costs and market conditions. Furthermore, the regulator rarely compensates the utility for events not contemplated in the rate case. Admittedly, though, regulators do, at times, permit timely adjustment of prices to cover changes in specific costs, such as fuel or local taxes. And, in the past, when sales growth was strong and costs were declining due to economies of scale, regulatory delay may have worked to the benefit of the utility. In a period of slow growth and uncertain economies of operation, though, the delay probably increases the risk to the utility shareholder. (Some utilities, however, have embraced multi-year price freezes as part of regulatory deals. So far some of those deals have produced high returns for shareholders. Recall, however, that a similar deal created a disaster for utilities in California. The high returns now being earned may be required to compensate for high risk. Investors may not know the success of the price freeze strategy until the freezes expire).

In the past, regulatory agencies disallowed from rate base billions of dollars of investment, which, effectively, rendered the investment worthless. While few utilities, nowadays, engage in the massive generation project spending that led to such disallowances, they face similar potential for disallowance of expenditures made to purchase power through contracts. In recent years, regulators (notably in California) have discouraged or prohibited utilities from hedging those contracts or from signing long-term contracts for power, both of which would have lent greater certainty to the outcome of the transactions and would have lowered the risk to the utility. Presumably, the regulators objected to the cost of hedging or feared that the utility would sign a contract to purchase power at too high a price. One regulatory agency pushed a local utility to the brink of bankruptcy by refusing to allow the utility to recover the cost of power purchases made during a chaotic period in the market. Given the variable nature of the wholesale power markets, the lack of certainty engendered by such policies creates risks to investors similar, in magnitude, to the disallowance policies of the past, and may subject consumers to high risks, as well.

FERC's ability to revisit the wholesale market, to declare firms dominant and, therefore, ineligible for participation in the competitive market, to change the rules of engagement when it chooses, demonstrates that the wholesale power market remains regulated in every respect, where deviance from fixed regulated prices depends on FERC's forbearance. FERC and its agents, the regional transmission organizations (RTOs), can cap prices, and even calculate justifiable market prices based on costs. (Shades of Aristotle, St. Thomas Aquinas and St. Augustine!) Investors in supposedly unregulated generation must operate with one certainty: that market overseers will put a ceiling on prices but not a floor. This asymmetry adds to the risk of investment, especially for facilities that depend on peak prices during short periods of time to provide a profit for the entire year. (What would happen to toy stores if the government required the store owners to refund supposedly excess profits earned during the holiday season? What would pay the rent for the rest of the year?)

Electricity risk management for the beginner 19

In the old days, the utilities could meet social obligations and cross-subsidize deserving customer groups with ease. After all, the subsidizing customer had to pay, having no place else to go for service. That policy led to difficulties, eventually, when large customers determined that they did have alternatives. Since the unbundling of service (allowing consumers to choose a separate supplier of electricity while still using the lines of the utility for delivery), large customers, in the main, have sought out alternative suppliers, but small customers have not. Utilities have, as a result, ended up as suppliers to customers that either chose not to choose alternative suppliers or whom alternative suppliers chose not to serve. That is, they have taken on the function of suppliers of last resort, without the cushion formerly provided by other customers who subsidized the social service function. Some utilities provide this service with little or no compensation, bearing risk with no return. In addition, pricing for electricity tends to subsidize users at times of peak at the expense of those who do not exacerbate peak load and the high costs involved in serving peak customers. Any change in that policy could affect demand patterns, and the value of equipment put in place to meet peak loads. In other words, rational pricing creates risk, too. In sum, to the extent that regulators or legislators wish to impose or fund social obligations, they will continue to do so through the local utility, the only entity that they still control, and the utility cannot avoid the possibility that some of those policies might damage the firm's profitability.

The market changes, but the utility still requires permission to change with it. That need to get permission may limit the ability of the regulated electric utility to respond to the actions of competitors. In some instances, regulators may require the utility to take steps that favor their competitors, or, better, keep them in business, because of what Alfred E. Kahn described as the “temptation to produce some competitors, even competitors less efficient than the incumbents, by extending them special preferences or protections and restraining efficient competitive responses by the incumbents.”7

Unbundling of service adds risk to long-range planning. Customers can leave. Therefore, the utility and its suppliers may fear to sign long-term arrangements predicated on the continuity of the relationship between customer and utility or supplier. That inability to count on a customer base increases the risk of generation and transmission projects whose financing depends, to some extent, on long-term commitments from potential customers whose own customers may leave. For that matter, as Paul Grey notes, “The disconnection between retail and wholesale operations translates into business risk. How can the wholesale unit forecast accurately without accurate knowledge of historical customer load, or without fully understanding customer consumption patterns? How can they judge their forecasting success if what they buy is not reconciled with what is billed?”8

Restructuring has created a new and generally unexplored risk for every component of the now fragmented supply chain. None of the participants receives its compensation based on the satisfaction of the customer with service, product or price. The generators sell into a pool, without considering whether the price set there will, in the long run, attract or drive away customers or damage the competitive position of the product. The transmission owner collects a return whatever the adequacy of service. The RTO, which manages the transmission network, has no obligation to control the congestion and the ancillary service costs that make up a significant part of the bill. The local distributor makes its profit on investment in fixed assets, without regard for the costs of what it carries through its lines. The supply company that actually sells the electricity probably does care, but has little control over the costs of the components of the package that it sells. It has to use the services of the various monopolists in the delivery chain. Oddly enough, the RTO, with no customer contact and no economic motivation (it is a non-profit entity), may play the key role in the supply chain. It regulates the wholesale market, determines whether transactions take place or not, and has the power, effectively, to include or exclude distributed resources and demand-side pricing and management from the market place.

The fact that a number of entities produce parts of the product is not the issue. Many businesses outsource functions, but they invariably design and control every aspect of the product, and the suppliers understand that their individual successes depend on their collective success in producing a product that sells. In the case of the electricity market, each entity in the supply chain (or its regulator) seemingly designs and produces a component that goes into the final product with little regard for whether that component, as designed and produced, makes the end product more or less successful in the market.

The risk in the electric market is that a supplier, acting in its own short-term interest, will affect the marketability of the product, to the detriment of other components of the supply chain. Since key entities in the supply chain exercise legal monopolies, the seller of the end product has to deal with them. It is as if the supplier of brakes to General Motors (GM) chose to produce an expensive product that worked best on Fords, and consumers disliked it so much that the brakes drove them to buy Toyotas, but GM could not specify the terms of purchase of the brakes and had to continue to do business with the offending brake manufacturer. The risk of doing business has risen, at GM, as a result of its inability to control its end product. And other suppliers' risk of doing business with GM has risen, too, because their sales volume is at the mercy of the uncontrollable brake manufacturer.

Finally, as the August 14, 2003 blackout demonstrated, the network does not function perfectly. Generally speaking, customers bear the costs of outages, in terms of inconvenience or lost income or, occasionally, lost life. The utilities, however, will bear the costs of lost sales and emergency repairs. The regulatory system, though, has no way to determine how the cost of building, reinforcing or operating the network in a way that improves reliability stacks up against the value of that reliability to consumers. Thus, the consumer runs the risk of having to pay more than the improvement is worth, or cannot pay the utility more to improve service because the regulator does not want to allow a price increase. In addition, the fractionated industry structure diffuses responsibility for reliability, so regulators will have difficulty enforcing reliability policies. The blackout illustrates the problem. Assuming that the actions of one company triggered the event, the regulators in the state in which the initial event took place may choose to order the offending utility to take remedial action. But that state cannot do much about the fundamental issue: why is the interstate network in the northeast constructed and operated in such a way that the offending actions of one company could bring down the electrical network in the entire northeast? Basically, industry restructuring has not removed the risks inherent in regulation but rather increased some and decreased others, and has shifted the likelihood that participating entities, rather than the consumer, will bear those risks.

Technology Risks

Historian of science and technology Thomas P. Hughes distinguished between “technical” and “technological.” He defined “technology . . . as a complex system of interrelated factors with a large technical component. Technical refers primarily to tools, machines, structures and other devices. Other factors embedded in technology, besides the technical, are the economic, political, scientific, sociological, psychological, and ideological.” He then looked at purpose: “Technology usually has the structure of a goal-seeking, problem-solving, open system. An example . . . is an electric light and power system . . .” As a key to why policy makers need to pay attention to the big picture, Hughes notes the seemingly obvious point that, “Factors or components in the electric power systems . . . interact . . .”9

In that sense of the term, the electricity supplier faces technological risk, some of which it cannot pass on to customers unless regulators recognize the risk in their calculations of cost of capital, and consumers face risks, some of which they cannot insure against. That happens in many markets, of course, but the key difference is that regulators control some of the pricing that should reflect the risk, and may prevent or discourage market participants from taking mitigating actions.

To begin, the regulated electricity supplier operates on the basis that its assets have long useful lives, for example 40–50 years. So far, the industry has miscalculated the economic lives of many generating plants, thereby necessitating write-offs and demands for recovery of “stranded costs” from customers, but the transmission and distribution plant has retained its value. As long as the utility retains its monopoly position as provider of service over the life of the investment, it can justify the long accounting and regulatory life of the asset. (Even if the asset has attained obsolescence, it retains its value as long as it remains in the rate base). One must consider, however, that the electric industry has delivered essentially the same product with the same delivery mechanism for over 100 years, and much of the system operates on principles developed decades ago. The local exchange telephone carriers and the judge who broke up the Bell System viewed the local exchange as an impregnable natural monopoly. They were wrong. They had disregarded cable and never dreamed of the growth of cellular services. Changes in patterns of demand or new generating techniques could affect the value of the electric wires investment, but that risk does not enter calculations of lives of assets.

Consider, too, possible breakthroughs in generation (such as fuel cells or photovoltaic cells) or in user efficiency (low-power-consumption flat panels instead of light bulbs or superconducting wires in electric motors) that might pull the rug out from demand projections. Consider how the industry expected an upsurge in demand from charging the batteries of electric cars. In the future, instead, it might have to contend with fuel cells in cars that can produce power to light houses. The gas industry's growth now comes from selling gas to electric generators, in part because new gas appliances use so much less than the old appliances. Considering that the electric industry spends almost nothing on research and development, one should not expect it to either shape or anticipate change.

The franchise limits utility flexibility, as well. It delineates the territory served and the products offered. It will prevent the franchisee from following a customer, physically, and may prevent the utility from fashioning or offering a more efficient means of meeting a customer's needs. The old city gas companies sold gaslight not illumination, the telegraph companies sent telegrams not communications, and the railroads furnished rail transport rather than transportation. They did not or could not shape their products to the changing needs of the market place. On the other hand, thrift institutions and mutual insurance companies did convert themselves into successful commercial entities, but they could not have done so without the equivalent of a rewrite of their franchises, which required a major effort. Competitors are likely to oppose removal of franchise restrictions. Thus, while the regulated franchise protects the franchisee from competition in a narrowly defined market, it may also limit the ability of the franchisee to compete for or retain customers in a more broadly defined market. The franchise, then, lowers risk in a stable market specified by regulators, but it may raise risks as the market evolves with changes in production techniques and customer needs.

Participants in the power industry have big footprints, to use the managerial jargon of the nineties: thousands of huge power stations, more miles of line than the interstate highway system has of roads, windmills on scenic mountains, major users of water, producers of air pollutants, contributors to global climate change, big taxpayers, and most importantly, tied to location. The power generator cannot move a giant facility once the deal with the locals unravels, as Enron found out from its Indian venture, and as other investors discovered in Brazil, the United Kingdom or California. The regulator may withhold permission for what the electric company wants in order to get the company to do something else, because the electric company cannot escape the jurisdiction. As for reducing environmental pollutants, regulators will concentrate their efforts on the electric producer because it produces vast volumes at a few locations, and it cannot move out if threatened with high mitigation costs. Cleaning up a large power producer is easier than attempting to track down many small or mobile pollution sources. In the old days, the utility did not suffer, anyway, because it passed on the costs to customers, often through fuel adjustment clauses that it tacked on to the bills. Now, however, power producers face real risks from environmental enforcement measures. Many regulated utilities no longer have fuel adjustment clauses that allow automatic pass-through of fuel-related costs or have signed price freeze agreements that prevent them from passing any new costs on to customers. The merchant power producers may have no way to pass on new environmental costs to consumers. Perhaps one of the greatest risks is that power industry leaders believe their own rhetoric about the uncertainty of global climate change, or believe that change will come slowly enough for them to adapt, and the result is wholesale unpreparedness.10

Finally, consider the odd combination of regulated and unregulated, and how it affects the competitive picture. When unregulated generators set up operations, they looked at the state of the regulated generators, many old clunkers in need of retirement, and concluded that the new power plants would take over the market. They did not count on the ability of regulated companies to run inefficient facilities and get away with it because the customers footed the bills. For that matter, many government-owned utilities run without the discipline of the market, too. Thus, those entering the market as competitors take the risk that others in the market will not play by the rules that govern normal competitive operations.

Financial Means to Mitigate Price and Volume Risk

Regulated electric companies passed along unexpected costs, especially fuel price increases, to customers. In competitive markets, unpleasant surprises can drive away customers – who can choose suppliers. Under some deregulation agreements utilities agreed to freeze rates in return for stranded cost settlements, but those utilities paid market prices for their power, because they no longer owned generation. In some cases, the utilities' frozen rates did not cover all the costs of supplying power. In some states, utilities still pass all their costs through to customers, but that situation has become less common. Ironically, with deregulation, customers now face market prices, because the utility purchases the energy and passes it along to the consumer, with no markup from the wholesale purchase price. However, deregulation need not mean disaster. Effective risk management can minimize the impacts of volatile costs and prices. Risk management tools can enable electricity generators and marketers to insulate profitability from price volatility. On the buyer side, electricity consumers purchasing power on the open market can hedge against unexpected price swings and the hazards of buying in the hourly and next-day power markets.

The utility operating environment creates risks for all participants – risks with significant financial consequences. The key risks to energy companies center around price and volume of sales and purchases. Companies need ways to minimize or neutralize the effects of adverse prices or adverse weather (which can create volumetric problems). Energy companies can engage in a transaction that will transfer the risk of adverse moves in price, or volume, to another party. A company that can be harmed by rising prices, such as a buyer of fuel or power, needs to use an instrument that will neutralize that price rise. A company that can be harmed by falling commodity prices (usually of a product it is trying to sell or already owns) needs to enter into a transaction for an instrument that will neutralize that price decrease.

Financial risk management tools allow those affected by energy price volatility to shift the risk to other parties using derivatives, which are financial products that derive their value from the price of another commodity or the value of an index number. Using them can help market participants manage financial risk from unpredictable energy prices. Derivative products are built from forward contracts, futures, options, and swaps. Forward and futures contracts enable buyers and sellers to fix a price in advance on an item they will either receive or deliver in the future. Knowing the price allows buyers and sellers to transfer the risk that prices will rise or fall in the future.

In a forward contract:

. . . a commercial buyer and seller agree upon delivery of a specified quality and quantity of goods at a specified future date. A price may be agreed upon in advance, or there may be agreement that the price will be determined at time of delivery.11

Forward contracts are privately negotiated and not standardized. Buyers and sellers customize contracts to fit their needs. Customization, however, makes the contracts harder to sell or transfer to others, if the contract no longer serves the needs of the buyer (long) or seller (short).
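
To make the price-fixing mechanism concrete, here is a minimal sketch, with invented figures, of a power buyer who hedges with a long forward. The forward payoff offsets movements in the spot price, so the net cost is locked at the forward price no matter where spot settles:

```python
forward_price = 45.0   # $/MWh agreed today (hypothetical)
volume = 10_000        # MWh covered by the customized contract

def hedged_cost(spot: float) -> float:
    # The buyer purchases power at spot, while the long forward
    # settles at (spot - forward_price) per MWh in the buyer's favor.
    purchase = spot * volume
    forward_payoff = (spot - forward_price) * volume
    return purchase - forward_payoff

for spot in (30.0, 45.0, 90.0):
    print(f"spot {spot:5.1f} -> net cost {hedged_cost(spot):>10,.0f}")
```

Whether spot settles at $30 or $90, the net cost is the same $450,000; the hedge converts an uncertain price into a certain one.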

A futures contract, similar to a forward contract, is:

. . . a legally binding agreement, made on the selling floor (or electronic equivalent) of a futures exchange, to buy or sell a commodity or financial instrument sometime in the future. Futures contracts are standardized according to the quality, quantity, and delivery time and location for each commodity. The only variable is price, which is discovered on an exchange trading floor [or electronic trading system].12

In the United States, futures contracts trade on markets, known as futures exchanges, which the Commodity Futures Trading Commission has licensed to trade futures and options.13 Although standardized, futures contracts often do have some built-in flexibility with relation to the quality of the product delivered. In some contracts, commodities of higher or lower quality than the specified grade may be delivered at a respective premium or discount to the contract grade.14

One advantage of futures contracts over forward contracts is that the buyer does not have to take delivery and the seller does not have to make delivery of the commodity. In fact, only a small percentage of traded contracts ever result in delivery. This low delivery rate occurs because of a procedure known as covering, or offset, which is a:

situation that occurs when, instead of taking delivery, the buyer of a futures contract reverses the position by selling the (same) contract (month) before the delivery date; a contract writer can reverse this short position by purchasing the [same] contract [month].15
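
As a sketch of the arithmetic (contract size and prices are invented, not an actual exchange specification), offsetting a long position simply banks the difference between the two trade prices:

```python
contract_size = 736    # MWh per contract -- an assumed figure

buy_price = 42.00      # $/MWh paid when the long position is opened
sell_price = 47.50     # $/MWh received when the same contract month
                       # is sold back before the delivery date

# Selling the same contract month cancels the delivery obligation;
# only the price difference settles through the clearinghouse.
pnl = (sell_price - buy_price) * contract_size
print(f"gain from offsetting the long position: ${pnl:,.2f}")
```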

In contrast, forward contracts cannot be easily liquidated through an offsetting trade, because their specialized nature restricts their usefulness for the overall marketplace. It is easy for buyers and sellers trading on exchanges to cover their positions, because they do not place their transactions with each other, but with the clearinghouse of the exchange. A clearinghouse is:

An agency or separate corporation of a futures exchange that is responsible for settling trading accounts, clearing trades, collecting and maintaining margin monies, regulating delivery, and reporting trading data. Clearinghouses act as third parties to all futures and options contracts – acting as a buyer to every clearing member seller and a seller to every clearing member buyer.16

This clearing/credit management function provides another advantage for futures contracts over privately negotiated derivatives, because the clearinghouse guarantees the contracts. Standardized contracts have still another advantage. Although they are less effective than forward contracts in fitting specific needs, they do trade in a liquid market, so the party who no longer wishes to be involved in the transaction can sell the contract to others.

Forward and futures contracts allow buyers and sellers to fix a price in advance on an item they will either receive or deliver at a future time. By knowing the price, the buyers and sellers can transfer the risk that prices will rise or fall in the future, assuring a certain price for a transaction.

Options convey the right, but not the obligation, to buy or sell a particular good (the underlying asset) at some time in the future, at a certain price, during a time period that ends on the option's expiration date. A call option gives the holder the right to buy the underlying asset at a certain price, for a limited time period. A put option gives the holder the right to sell the underlying asset at a given price for a limited time period. The strike price or exercise price is, “the price at which the futures contract underlying a call or put option can be purchased (if a call) or sold (if a put).”17 To exercise an option means, “to elect to buy or sell, taking advantage of the right (but not the obligation) conferred by an option contract.”18 An option that is profitable to exercise is known as in-the-money. A call option goes in-the-money when its “strike price is below the current price of the underlying futures contract.”19 A put option goes in-the-money when “its strike price is above the current price of the underlying futures contract.”20
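
The in-the-money tests quoted above reduce to a simple comparison of the strike against the current futures price. A small sketch (illustrative, not drawn from the chapter's cited sources):

```python
def moneyness(option_type: str, strike: float, futures_price: float) -> str:
    """Classify an option on a futures contract, per the definitions above:
    a call is in-the-money when the futures price exceeds the strike;
    a put, when the strike exceeds the futures price."""
    if option_type == "call":
        edge = futures_price - strike
    elif option_type == "put":
        edge = strike - futures_price
    else:
        raise ValueError("option_type must be 'call' or 'put'")
    if edge > 0:
        return "in-the-money"
    if edge < 0:
        return "out-of-the-money"
    return "at-the-money"

print(moneyness("call", strike=50.0, futures_price=55.0))  # in-the-money
print(moneyness("put", strike=50.0, futures_price=55.0))   # out-of-the-money
```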

For options buyers (holders of the options), options are less risky than forward or futures contracts, because risk is limited to the payment of the premium required to purchase the option,21 so the most that buyers can lose is the cost of the option (the premium). However, that low risk does not apply to the seller of the option (option writer or option grantor).22 The option writer has “the obligation (if assigned) to BUY (short put option), or SELL (short call option) a futures contract at a specific price on or before an expiration date.”23 The option writer assumes the risk of assignment in exchange for receipt of the buyer’s premium, in the same way that an insurer receives a premium to bear a policyholder’s risk for the duration of an insurance policy. Consequently, the option writer exposes himself to unlimited risk as long as he holds the written contract. He can cancel his obligation by an offsetting purchase (a long call to offset a short call, or a long put to offset a short put) of the same expiration date and strike price on the options market.

The value of an option depends on:

Price of the underlying asset relative to the option’s exercise price – Options gain in value when the underlying price nears the strike price, because those options have a better chance of going in-the-money.

Time to expiration – An option with a longer life has greater value because the underlying asset has more time to hit the strike price.

Volatility of the price of the underlying asset – Greater volatility increases the chance that the underlying asset will hit the strike price during the option’s life.

Interest rates – These affect the cost of financing options. Higher interest rates tend to lead to higher premiums.24
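The four value drivers listed above can be seen at work in the standard Black-Scholes formula for a European call on a spot asset. The model itself is not developed in this chapter (and options on futures use a close variant), so the sketch below is only an illustration, with made-up numbers, that the stated directions hold.

```python
import math

def _norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(s: float, k: float, t: float, sigma: float, r: float) -> float:
    """Black-Scholes European call: underlying s, strike k, time t (years),
    volatility sigma, risk-free rate r."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * _norm_cdf(d1) - k * math.exp(-r * t) * _norm_cdf(d2)

base = call_price(50, 55, 0.5, 0.30, 0.05)
assert call_price(52, 55, 0.5, 0.30, 0.05) > base  # underlying nearer the strike
assert call_price(50, 55, 1.0, 0.30, 0.05) > base  # longer time to expiration
assert call_price(50, 55, 0.5, 0.40, 0.05) > base  # greater volatility
assert call_price(50, 55, 0.5, 0.30, 0.08) > base  # higher interest rates
```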

Options buyers make a one-time payment on an option, so their liabilities do not fluctuate daily, as with futures contracts. The most they can lose is the premium – and they pay that when they purchase the option. Hence, there are no mark-to-market procedures for options buyers. Options writers, however, are subject to risk from price fluctuations. Consequently, exchanges require them to post margin because their positions are subject to losses in the same way as futures contracts. The accounts of options writers are marked-to-market on a daily basis. Margin calls are made when margin levels fall below the maintenance level.
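A hypothetical sketch of the daily mark-to-market the passage describes for an option writer's margin account. The dollar figures, the maintenance level, and the rule of restoring the balance to the initial margin after a call are illustrative assumptions, not any exchange's actual margining rules.

```python
# Illustrative daily mark-to-market for a short (written) option position.

def mark_to_market(margin_balance: float, premium_change: float,
                   maintenance_level: float, initial_level: float):
    """Debit the writer's margin for any rise in the option's price (a loss to
    the short side); return (new_balance, margin_call_amount). Assumes a call
    restores the account to the initial margin level."""
    balance = margin_balance - premium_change
    if balance < maintenance_level:
        return balance, initial_level - balance
    return balance, 0.0

# A $2,200 adverse move drops a $5,000 account below $3,000 maintenance,
# triggering a margin call back up to the $5,000 initial level:
balance, call = mark_to_market(5000.0, 2200.0, 3000.0, 5000.0)
assert (balance, call) == (2800.0, 2200.0)
```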

Swaps are privately negotiated derivatives in which two parties agree to exchange (swap) a price risk exposure for a given time period. Swaps can be customized to meet all sorts of risks. In commodities trading, swaps are used to protect against price fluctuations. Most swaps involve a periodic exchange of cash flows between two parties, with one paying a fixed cash flow and the other a variable amount relative to a given benchmark. By combining swaps with a position in the cash market, companies can lock in a price for commodities they wish to buy or sell.
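The fixed-for-floating exchange described above can be sketched in a few lines. The prices, volume, and monthly settlement convention below are hypothetical.

```python
# Illustrative fixed-for-floating commodity swap settlement.
# Each period only the net difference between the fixed price and the
# floating index changes hands.

def swap_settlements(fixed_price: float, floating_prices: list[float],
                     notional_volume: float) -> list[float]:
    """Cash flow to the fixed-price payer each period: positive when the
    floating index exceeds the fixed price."""
    return [(float_p - fixed_price) * notional_volume for float_p in floating_prices]

# A buyer paying fixed $6.00/MMBtu on 10,000 MMBtu/month against a spot index:
flows = swap_settlements(6.00, [5.50, 6.75, 7.00], 10_000)
assert flows == [-5000.0, 7500.0, 10000.0]
```

Combined with buying at spot in the cash market, the swap receipts and payments net the buyer's all-in cost each month back to the fixed price, which is the lock-in the text describes.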

Derivatives should be used to neutralize risk by profiting from “the self-same events that inflict losses in the commercial arena.”25 Proper use lessens risk. Derivatives should not be used to increase an organization’s risk exposure.

The recent travails of the merchant power sector provide ample testimony to the risks inherent in that business. Generators are subject both to volatility in the price of the fuel they buy and to volatility in the price at which they sell their power. They can be whipsawed by a combination of rising fuel prices and fixed or declining prices for the output they sell to customers. A generator needs to protect against upward volatility in the price of its generating fuel while at the same time protecting itself against a fall in the price of the power it produces. A generator could protect itself from fuel price increases through the purchase of a forward contract, a long futures contract, or a call option, or by engaging in a swap with a counterparty. For example, if the generator were concerned that natural gas prices might spike, it could purchase a futures contract on Henry Hub natural gas prices at the New York Mercantile Exchange. If the price of gas rose for the generating company, it would also rise on the futures contract. The generator could then sell the contract at a profit, and use the proceeds to offset the higher price of gas it buys from its gas supplier. The generator might also be concerned about the price at which it sells its power – and fearful of a drop. Since power contracts are not widely traded on the futures exchanges, if the generator wanted to protect against a price drop, it would need to make an over-the-counter transaction. It might be able to create a forward sale for a specified time period with a customer that would allow it to fix a price for future sales. Of course, the sustainability of this price is subject to the creditworthiness of the counterparty. In addition, a generator is subject to weather risk. For example, suppose a merchant generator has set up a natural gas-fired turbine and needs to sell 1000 MWh/month at 10 cents/kWh in each summer month to break even. What happens if the summer is very cool and the company sells only 100 MWh/month?
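The long-futures hedge in the gas example above can be verified with a few lines of arithmetic. The $6.00 futures price and the volume are made up, and basis risk and transaction costs are ignored.

```python
# Sketch of the long-futures fuel hedge: the generator buys a futures contract,
# so a gain on the futures offsets a higher spot price paid to the supplier.

def hedged_fuel_cost(spot_at_delivery: float, futures_entry: float,
                     volume_mmbtu: float) -> float:
    """Effective cost per MMBtu after closing out the long futures position."""
    futures_gain = (spot_at_delivery - futures_entry) * volume_mmbtu
    physical_cost = spot_at_delivery * volume_mmbtu
    return (physical_cost - futures_gain) / volume_mmbtu

# Whether gas spikes to $9 or falls to $5, the effective cost stays at the
# $6.00 futures entry price:
assert hedged_fuel_cost(9.00, 6.00, 10_000) == 6.00
assert hedged_fuel_cost(5.00, 6.00, 10_000) == 6.00
```

Note the symmetry: the hedge locks in the price in both directions, so the generator also gives up the benefit of a price fall, which is exactly the futures/option distinction drawn earlier in the chapter.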

Standard derivatives are good for controlling price risk. However, they do not protect utilities from non-monetary forces, such as weather, that drive sales volume. For example, a generating company could have locked in a high summer price for its power, but a very cool summer would not generate the volume of business it needs to meet its profit targets. Weather derivatives are tools to manage that volumetric risk based on heating and cooling degree-days.

Heating degree-days measure the coolness of the weather. A heating degree-day is the number of degrees Fahrenheit by which the average temperature on a day falls below 65°F (18°C). There is a positive relationship between the number of heating degree-days and demand for electric power and natural gas for heating purposes.

A cooling degree-day measures how hot the weather is. A cooling degree-day is the number of degrees Fahrenheit by which the average temperature on a day exceeds 65°F (18°C). There is an extremely strong relationship between the number of cooling degree-days and demand for electric power for air conditioning purposes, one of the heaviest power loads in summer.
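Both definitions reduce to one-line formulas. A minimal Python sketch, using made-up daily average temperatures:

```python
# Heating and cooling degree-days from daily average temperatures (°F),
# per the 65°F base defined above.

BASE_F = 65.0

def heating_degree_days(daily_avg_temps: list[float]) -> float:
    """Sum of degrees by which each day's average falls below 65°F."""
    return sum(max(0.0, BASE_F - t) for t in daily_avg_temps)

def cooling_degree_days(daily_avg_temps: list[float]) -> float:
    """Sum of degrees by which each day's average exceeds 65°F."""
    return sum(max(0.0, t - BASE_F) for t in daily_avg_temps)

week = [58.0, 62.0, 65.0, 71.0, 80.0, 66.0, 60.0]
assert heating_degree_days(week) == 15.0  # 7 + 3 + 5
assert cooling_degree_days(week) == 22.0  # 6 + 15 + 1
```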

Generating companies make money in the summer when there is high demand for their power. However, a cool summer could hurt their profits, if their profitability is tied to volume in their rate design, because few people will use their air conditioners and the capacity is underutilized. For example, Chicago has an average of 259 cooling degree-days in the summer. A generator in Chicago has determined that without at least 250 degree-days it will not meet its profit targets. The generating company could then purchase cooling degree-day puts, so if there are fewer than 250 degree-days in the summer, the puts will pay out the monetary equivalent of any missing degree-days. Swings in weather conditions can severely affect the profitability of both the energy supplier and the big user of energy.

Each component of the energy supply chain faces risks brought about by unexpected changes in price and volume. A power marketer (who buys and sells in the wholesale market) is in a position of being squeezed in all directions on both price and volume. Currently, the operator of a transmission line – whether gas pipeline or electric line – may be paid, largely, based on the volume of energy that it conveys. This makes it highly subject to the vagaries of the weather. Pipelines just do not go away – they have significant fixed costs that must be paid so that the pipeline is always available for customers. These fixed costs and a variable revenue stream that depends on weather could create significant cash flow problems for a pipeline that does not effectively manage its weather risk. For example, suppose a pipeline builds a spur to a new gas-fired electric generating plant expecting a lot of summer gas business. What happens if the summer is very cool? The pipeline operator may have less business than expected, which could affect the breakeven point for the spur. What if, because of the bad weather, the plant owner does not sell much power, cannot meet its lenders’ requirements, and goes bankrupt? The pipeline could be left holding the bag. (The same analysis applies to transmission spurs built to serve merchant power plants.) The pipeline could protect itself against low demand by engaging in a weather hedge based on temperature.

Another solution to this challenge could be a change in the tariff structure for pipelines, electric transmission, and even local distribution systems. Perhaps they should think more in terms of renting their space for a flat fee, rather than charging based on volume. Essentially, a transmission company is offering the exclusive use of its wires or space for a payment – which is really no different from renting out an apartment or office space. For the most part, the companies’ cost of keeping up the pipeline or transmission line does not vary based on volume. In a sense, a change in tariff emphasis could reduce risk in the same way as the use of financial instruments.

That leads, again, to the local distribution function. A utility whose profitability depends on volume – such as a gas distribution company – could be adversely affected by bad weather, notably a warm winter with low volume. Many gas companies charge their customers a price for gas based on the spot market along with distribution charges based on volume delivered. What if the volume falls? In the past, regulators allowed gas companies to pass that risk on to their customers through weather stabilization clauses. In a major ruling, the Massachusetts Department of Telecommunications and Energy ruled that weather risk should be borne by the utility’s shareholders – not the customers – because weather derivatives were available.26 Electric utilities do not normally have weather adjustment clauses in place, but regulators often make rate case determinations based on supposedly normal weather conditions. The utilities might want to use weather derivatives to assure that their earnings did not stray far from what they would have been under normal conditions.

In looking at the utility business, it is important to think about the party that was not previously considered – the customer. In the past, the utility customer was protected against variability through regulation – the only changes that came without the deliberations of a public utility commission were those of fuel adjustment clauses. The deregulated environment has now exposed the customer to risks that were previously borne by the utility – or did not exist (as in the days when natural gas prices were regulated by the FERC). Now many customers are facing variability in their energy prices, due to either price fluctuations or variable weather. In many areas, gas suppliers charge their customers the market price for gas plus a transportation or distribution fee. The customer is fully exposed to the vagaries of the market. A large customer, however, could purchase a natural gas futures contract or option as a way of protecting against a price spike that could hurt its business. The same could apply in the power markets, but unfortunately in that area the customer would need to trade an over-the-counter product, and would not be able to avail itself of the transparency of the futures or options markets.

In the power markets, large industrial customers are also exposed to the possibility that their electric supplier may interrupt their service, especially if their contract allows it. Although the industrial customer may think there is only a small chance of these events occurring, it is distinctly possible that in a period of very high summer temperatures and high demand, a power company may interrupt a factory’s power – shutting it down. A weather hedge for very high temperatures would help protect the customer from these weather problems.

Financial instruments could be developed in the future to cover other weather risks that may have a significant impact on electricity suppliers. Water conditions, for instance, obviously affect the volume of water power that dams can produce. But low water may also prevent barges from transporting fuel to power stations and prevent the stations from operating efficiently due to shortages of water for intake. Power stations use huge quantities of water. Solar storms can disturb the stability of the power grid. Wind conditions, obviously, determine the output of wind generators, but they also affect the carrying capacity of transmission lines. The industry should find ways to hedge itself against recurring events, rather than complaining about them afterwards.

Conceivably, electricity producers could find ways to hedge against economic conditions that could have an untoward impact on profitability. A utility with a disproportionate percentage of its sales going to an aluminum smelter, for instance, could hedge its risk by using financial instruments tied to the prosperity of the aluminum industry, or even to the particular producer. Managers of the utility might counter that they have written contracts with the producer that protect the utility, but that argument holds up only as long as the aluminum producer can meet its commercial obligations (that is, stay out of bankruptcy). The financial debacles of the late nineties, during which power market participants discovered that the creditworthiness of the other party in the contract meant a lot, should underline the need to understand and seek ways to mitigate business risk. In the old days, the utilities added the unpaid bills to a bad debt provision which they made all customers pay. They simply spread the loss over all the customers that paid their bills. They may have more difficulty doing so in the future.

Risks Left Uncovered

The electric industry, regulators, and consumers tend to think in terms of preventive rules, engineering standards, and absolutes, rather than in terms of probabilities and mitigation. As an example, the N-1 rule for planning states that the system must be constructed and operated so that it will continue to function even if its single largest component goes out of service. That rule does not take into account the probability that the largest component will fail, or the cost of N-1 as opposed to other means of protection or mitigation.

The rules and standards do not consider that different consumers have different appetites for risk, and different means of defining or dealing with it. Solving the problem – usually poorly defined – with rigid standards may only prevent a repeat of what happened previously, at a cost that might exceed the benefits to consumers. System operators, in an effort to deal with what others might deal with more efficiently, may deprive the consumer of cost benefits that would come with a liberalization of the decision-making procedures within a market.

The threat of terrorism, on a global scale, should necessitate a serious look at electricity restructuring. Should the nation, under that threat, continue to push for greater dependence on large, distant power stations, with power carried long distances over relatively unprotected transmission lines, with a central dispatch of power? Or should it put greater emphasis on moving reliability mechanisms to the local level, even making those mechanisms portable for the greatest flexibility and economy?27 Should it emphasize use of one fuel, which may depend, in the future, on transport of liquefied natural gas from politically unstable regions, or pay more attention to achieving a stable supply of indigenous or renewable resources? Industry planning and restructuring, both before and after 9/11, have moved in one direction: toward large markets more dependent on distant sources run through central markets that emphasize the cheapest power of the moment. Presumably someone else – the consumer or taxpayer – takes the risk of the consequences of terrorism to the power grid.

Conclusion

Restructuring of the energy sector makes risks more explicit and immediate, exacerbates some, and redistributes others. In the regulated era, on the other hand, the pricing system tended to smooth the price swings, spread them over time and over more customer groups, conceal them in a price that bundled everything together, and eventually dump most of them on the consumer.

The policy makers of restructuring seemed not to have considered risks carefully when they did their work. Now, when they have to, they tend to retreat to command-and-control mechanisms as solutions, rather than seeking market-based mechanisms to evaluate and allocate risks, and to design mitigation mechanisms.

The financial markets have developed many means to mitigate risk, some of which may cost less than command-and-control procedures. No doubt, the financial engineers could create even more tools if a market existed for them.

In sum, at this stage in the restructuring process, the electricity market can go in one of two directions. It can open the way for participants in the market (and their agents) to decide what risks they wish to take, and let them reap the benefits or suffer the consequences of those decisions. (Doing so does not, necessarily, open consumers to greater risk. The consumers can choose plans that reduce the risks that they take.) Or, it can mandate rules designed to maintain risks at levels chosen by the government (and its agents) at costs to consumers decided by the government.

Notes

1. Reader’s Digest-Oxford Complete Wordfinder (Pleasantville, NY: Reader’s Digest Association, 1996), p. 1303.

2. Reader’s Digest-Oxford Complete Wordfinder, op. cit., p. 909.

3. Hyman, Leonard S. “Investing in the ‘Plain Vanilla’ Utility,” Energy Law Journal, Vol. 24, No. 1, 2003, pp. 8–9.

4. EPRI. Electricity Sector Framework for the Future (Palo Alto: Electric Power Research Institute, August 6, 2003), Vol. I, p. 40.

5. Awerbuch, Shimon and Berger, Martin. Energy Diversification and Security in the EU: Mean-Variance Portfolio Analysis of the Electricity Generating Mix and Its Implications for Renewables (Paris: International Energy Agency, 9/05/02 Draft).

6. Hyman, Leonard S. “The Customer is Always Right,” R.J. Rudden Associates, Inc., 2003.

7. Kahn, Alfred E. Letting Go: Deregulating the Process of Deregulation (East Lansing, MI: The Institute of Public Utilities and Network Industries, The Eli Broad Graduate School of Management, Michigan State University, 1998), p. 16.

8. Grey, Paul. “The Missing Link: Integrated Customer &amp; Commodity Management,” Commodities Now, December 2003, p. 68.

9. Hughes, Thomas P. “Technological History and Technical Problems,” in Chauncey Starr and Philip C. Ritterbush, eds., Science, Technology and the Human Prospect (NY: Pergamon Press, 1980), p. 142.

10. Stipp, David. “The Pentagon’s Weather Nightmare,” Fortune, February 9, 2004, pp. 100–108.

11. Commodities Futures Trading Commission. The CFTC Glossary: A Layman’s Guide to the Language of the Futures Industry (Washington, DC: Commodities Futures Trading Commission, 2003), p. 29 (http://www.cftc.gov/files/opa/cftcglossary.pdf). Definition for Forward Contracting.

12. Chicago Board of Trade. Glossary of Futures and Options Terminology (Chicago: Board of Trade of the City of Chicago) (http://www.com/cbot/pub/page/o,3181, 1059,00.html). Definition for Futures Contract.

13. Commodities Futures Trading Commission, loc. cit. Definition for Contract Market.

14. Futures Industry Institute. Futures and Options Course (Washington, DC: Futures Industry Institute, 1995), p. 31.

15. Smith, Gary. Financial Assets, Markets, and Institutions (Lexington, MA: DC Heath and Co., 1993), p. A34.

16. Chicago Board of Trade, op. cit. Definition for Clearinghouse.

17. Chicago Board of Trade, op. cit. Definition for Strike Price.

18. Commodities Futures Trading Commission, op. cit., p. 26. Definition for Exercise.

19. Chicago Board of Trade, op. cit. Definition for In-the-money.

20. Ibid.

21. Nyhoff, John. Options for Beginners (Chicago, IL: Chicago Mercantile Exchange, Fall 2000), p. 4.

22. Nyhoff, John, op. cit., p. 9.

23. Nyhoff, John, op. cit., p. 8.

24. Powers, Mark J. Getting Started in Commodities Futures Trading (Cedar Rapids, IA: Investor Publications, 1983), p. 235.

25. McBride Johnson, Philip. Derivatives: A Manager’s Guide to the World’s Most Powerful Financial Instruments (NY: McGraw-Hill, 1999), p. 26.

26. Massachusetts Department of Telecommunications and Energy, DTE 03–40. “Petition of Boston Gas Company d/b/a KeySpan Energy Delivery New England, pursuant to General Laws Chapter 164, §94, and 220 C.M.R. §5.00 et seq., for a General Increase in Gas Rates. October 31, 2003.”

27. Northampton Energy Services, LLP. “The SCAMPS Advantage: Lowest Total Cost of Delivered Power,” Abstract for Published White Paper (http://www.northamptonenergy.com).

CHAPTER 3

Surprised by Choice: The Implications of New Technology for Strategic Risk

David L. Bodde

International Center for Automotive Research
Clemson University
Clemson, SC, USA

Michael J. Chesser

Chairman and Chief Executive Officer
Great Plains Energy Inc.
Kansas City, MO, USA

The handwriting on the wall may be a forgery.
– Ralph Hodgson, American poet, 1871–1962

Experience is often the worst teacher – it gives the examination first and the lessons afterward. Thus, past success can become an unreliable guide for the strategic leadership of companies, especially in businesses where technology holds promise for dramatic change in the business environment. And yet decisions must be made, even though every action taken (or not taken), every investment made (or not made), every capability gained (or lost) inevitably brings consequences that cannot be fully recognized in advance. This chapter concerns wise planning for an unknowable future, especially one in which past business success combines with a misunderstood technology to increase strategic risk for some – and strategic opportunity for others.

By “strategic risk” we do not mean the risk of failure to make the technology work, even though that should be a primary concern for operational management. Nor do we mean the failure to invent. Rather, we mean the risk that the business model1 of a successful company will be rendered obsolete by events quite outside the company’s habitual frame of reference. We use “strategic opportunity” to describe exactly the reverse – the prospect that new technology, skillfully applied, can render someone else’s business model obsolete.

1By “business model,” we mean the complete set of ideas concerning the way that value is created and a durable, structural competitive advantage sustained. See, for example: Bodde, David L. The Intentional Entrepreneur, M.E. Sharpe, Armonk, New York, 2004.

In this chapter, we show how past business success and technological change have combined in a blend that has proven lethal for strong incumbents, and we suggest a way to assess the danger of strategic surprise – and thereby avoid it. We commend adoption of this systematic and continual process to strategic-level leaders, but we also note that only an open, inquiring company culture, one that rewards initiative rather than inertia, can provide the foundation that this approach requires for success.

Technology and the Perils of Strategic Surprise

In many cases, the victims of strategic surprise were no strangers to technology. The best of them were large, sophisticated companies with a significant base of skills in the technology that eventually overthrew them. Instead, their failing sprang from two underappreciations:

1. of the growth potential of the new technology;

2. of mainstream marketplace needs that had lain dormant until revealed by the new technology.

The story of Western Union illustrates this well, and that theme continues to unfold for the telecom companies of today.

Western Union and voice-over wire2

The company was once all that its investors could hope for – experienced and successful in national and international markets, politically well connected, financially powerful, and technologically sophisticated. Indeed, by the 1870s, Western Union had grown with the young nation to provide a vital communications infrastructure, one that allowed commerce to be conducted at the speed of contemporary electronics, albeit diminished by the swiftness of the messengers delivering the telegrams.

But in 1879, Western Union made a fateful business decision, effectively handing over the future of telecommunications to a small, start-up company built around the inventions of Alexander Graham Bell. In a contract signed on November 10, 1879, Western Union withdrew from the telephone business in order to focus on and defend its highly profitable core, the telegraph. Bell’s company, National Bell Telephone, agreed to:3

2The Western Union case draws upon: Smith, George D. “The Bell-Western Union Patent Agreement of 1879: A Study in Corporate Imagination,” Readings in the Management of Innovation, Michael L. Tushman and William L. Moore, eds. (Ballinger, 1988).

● Offer only local telephone service, leaving long-distance communications to the telegraph for 17 years – an apparently easy provision, since the technology of the day limited effective telephony to distances under 40 miles.

● Restrict telephone conversations to personal and not business use, an essentially unenforceable provision.

● Transfer all its telegraph business to Western Union Lines.

● Pay Western Union a 20% royalty on the income from all rented telephones in service.

In turn, Western Union gave up all rights to the telephone, which included: the 84 patents that supported its formidable capability in the new technology; 56,000 telephones in 55 cities; and valuable assets in plant and equipment. The company agreed to remain out of the telephone business for the duration of the 17-year agreement.4

We cannot, of course, know the minds of those who made this fateful decision. Personal antipathy between the Western Union leadership and the financial backers of Bell no doubt played some part. But more fundamentally, economic historian George David Smith suggests that three strategic considerations, all fully supported by the logic of the day, could have led to that fateful choice.5 First, Western Union sought to create a strategic hedge for the life of the agreement by allowing the company to participate in whatever profits might be extracted from the telephone business without actually having to manage it. Second, all of Western Union’s highly successful experience demonstrated that long-distance, business-related communications – chiefly, commercial and financial correspondence and news – offered the most profitable markets for wire-based communications. And for this business market, the Western Union leadership remained fixed on the notion that wire communication was not about interactive conversations, but rather about bursts of terse data – much like e-mail would be today if it had to be written in a code comprehensible only to highly trained specialists. Thus, the agreement allowed Western Union to focus on this service concept provided for its best customers – the banks, financial houses, and news services. Third, nothing in the Western Union experience suggested that telephone technology could grow to offer voice-grade conversations over very long distances.

Thus, a sound and widely approved logic appeared to support the strategic choice made by Western Union. In the end, however, the most important elements of this experience-based logic actually misled the company – the strategic hedge precluded further hands-on learning about the technology and the market; interactive business conversation soon proved quite valuable; social conversation emerged as an important market, though not until the prosperity that followed World War I; and a series of incremental improvements allowed the telephone to compete effectively in long-distance markets. The Western Union experience speaks volumes about the power of technology to inflict strategic surprise. We see this power manifest today in the contemporary telephone industry.

3Smith, op. cit.
4Smith, op. cit.
5Smith, op. cit.

Déjà vu all over again: the telephone industry and voice-over-Internet-protocol

The technologies that enable voice signals to be sent over an Internet connection (termed voice-over-Internet-protocol, or VOIP) provide consumers the opportunity to place computer-to-computer phone calls, computer-to-telephone calls, or telephone-to-telephone calls, the latter when special devices link the conventional telephone to the Internet. This means that the cost structure for businesses offering VOIP services more closely resembles the cost structure of the Internet, with marginal costs approaching zero, rather than that of the conventional telephone company.

VOIP services do not require advanced technology. Rather, the disruptive power of VOIP resides with the business models that the technology enables, business models that take advantage of the zero marginal cost of the Internet. Companies using these new business models – Skype, recently acquired by eBay, for example – tend to stress very small charges on a very large number of calls. This essentially destroys the pricing models of the incumbent telecoms, which are based on increasingly irrelevant factors like distance between callers or the number of minutes consumed in the call. Further, the VOIP model offers better service, allowing customers to integrate voice, digital, and video services from any Internet-connected device, whether a computer, a mobile phone, or other such device, into one integrated communication service. The capital cost for expansion of such networks is low because the users bring their own hardware to the network.6

To be sure, the incumbent telephone companies recognize the threat, but their response to date has been to defend their existing business models. Many telecoms, especially in Europe, have sought to prevent their customers from using the Internet for voice communications and other digital services like file-swapping. Rather than seeing the revealed demand as a business opportunity, they have viewed it as an unwelcome distraction from their current business models, and have developed clever technologies to detect and thwart such indecorous customer behavior.7 Will a defensive strategy of entrenchment work for the telecoms that practice it any better than Western Union’s contractual prohibition of business traffic over the telephone? If history offers any guide, we doubt it.

Anticipating Strategic Surprise

Inerrant knowledge of the future would surely eliminate some (but, we suspect, not all) of the risks of strategic surprise. But the crystal ball approach to predicting the future provides only cloudy insights even for (perhaps especially for) heretofore successful decision-makers. Instead, we must rely on reasoned inquiry into the implications of what can be, as distinct from attempting to forecast what will be. For our purposes, the domain of “what can be” contains two dimensions:

1. the technology itself, especially its capacity for improvement and its potential either to reinforce or to attack the prevailing business models;

36 Managing Enterprise Risk

6 “The Meaning of Free Speech,” The Economist, 15 September 2005.
7 Grant, Peter and Jessica Drucker, “Phone, Cable Firms Rein in Customers’ Internet Use,” The Wall Street Journal, 21 October 2005, A1.


2. the societal and/or market pressures that could drive that technology rapidly into the marketplace.

For each of these two dimensions of technology-driven surprise, we suggest an approach that strategic managers might use to discern the long-term implications of decisions that must be made in the present.

The technology dimension of strategic surprise

A qualitative assessment of the relevant technologies can offer important clues to the likelihood that any of them could prove decisive in some future competitive environment. Consider the framework for analysis proposed in Figure 3.1, a matrix relating the growth potential of a technology to its effect on the business model. This framework provides the strategic leadership of a company with a systematic way to debate and understand the inherent capabilities of the new technologies most likely to influence competitive success. We illustrate how this can work with the example of an electric utility company operating in markets that are partially regulated, partially de-regulated, although the concept applies well to any company.

The vertical scale divides the universe of relevant technologies into those that reinforce the prevailing business model and those that could overthrow it. The horizontal scale divides these relevant technologies into those with high potential for performance growth and those with limited growth potential.8 Into each quadrant, we have sorted some examples

Surprised by choice: the implications of new technology for strategic risk 37

[Figure 3.1. Technology growth and the business model. A 2 × 2 matrix: the horizontal axis shows the potential for performance growth of the technology (Low to High); the vertical axis shows the effect on the prevailing business model (Attacked or Reinforced).
● Attacked / Low growth – “What … me worry?”: rooftop photovoltaics; independent power producers.
● Attacked / High growth – “Paradise Lost”: fuel cells; digital grid control; IGCC.
● Reinforced / Low growth – “All Quiet on the Western Front”: large-scale pulverized coal; light water reactor.
● Reinforced / High growth – “Paradise Gained”: gas-cooled nuclear reactor; sequestration of CO2; digital grid control; IGCC.]

8 Some examples of physical limits include: the efficiency of a power plant, limited by the second law of thermodynamics; or the turning radius of a high-speed fighter aircraft, limited by the g-forces sustainable by the pilot. Examples of market limitations include: the inability of the human ear to discern further improvements in stereophonic sound; or the saturation of the best locations for fast-food restaurants.


of technologies that illustrate the capacity of this framework to raise important strategic questions. The sorting process requires many judgments, and reasonable persons might disagree with the assessments in Figure 3.1. But the value of this (or any similar framework) resides less in the precision with which technologies can be sorted and more in the quality of the strategy debate that attends the sorting. Below we consider the implications of each of the four quadrants.

All quiet on the Western front

We begin here because this quadrant is closest to the current experience of most electric utility companies. Technologies judged to fall within this quadrant of Figure 3.1 generally sustain the current market relationships and hence the business model of the regulated electric utility. Consider, for example, a modern, state-of-the-art power plant fueled by pulverized coal. Though such power plants are technologically sophisticated, the potential for efficiency improvement is bounded by the ability of materials to withstand high temperatures and pressures and, hence, by the Second Law of Thermodynamics. Further, the recent experience with independently produced power suggests that the business risk attending such plants will be lower when owned and operated by an incumbent electric utility, hence the reinforcing nature of the technology. Nuclear power plants (but only the light water reactors) have similar characteristics.

Paradise Lost

Diagonally across the matrix, we find a set of technology possibilities currently outside the scope of most utility industry experience – those with the potential for high performance growth and that attack the dominant business model of the regulated electric utility. Consider the fuel cell as a source of distributed generating capacity. Large units in the 200 kW range have been on the market for over 15 years, but at $4500/kW they cost too much for all but special applications. However, laboratories around the world have begun a grand technology race to improve the fuel cell in response to two markets that promise opportunities of extraordinary potential:

● the market for very small-scale replacements for battery systems for mobile electronic equipment;

● the market for vehicular power systems.

Entrepreneurs and innovators in fields far removed from the electric utilities will pursue both of these opportunities. In doing so, they will advance the basic technologies required for all fuel cells, and the fuel cell vehicles will offer special challenges to the utility business model through their ready adaptability for distributed generation.

Each automotive fuel cell must deliver between 75 kW and 100 kW to compete with the internal combustion engine on performance; and it must cost under $100/kW for the vehicles to compete on price. But once these goals are achieved, possibly within 15 to 20 years,9


9 National Research Council and National Academy of Engineering, The Hydrogen Economy, National Academies Press, 2004.


significant numbers of fuel cell vehicles will enter the world marketplace. In the United States, about 235 million vehicles are registered for use. Even the lower figure of 75 kW per vehicle would eventually yield about 18 TW of vehicular generating capacity10 – capacity that spends about 88% of the time sitting around parking lots and garages waiting to go somewhere. If the parked fleet could plug into the electric grid and generate electricity at the marginal cost of the fuel cell, it could provide formidable competition for the conventional fleet of large-scale utility-owned power plants.
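The arithmetic behind that capacity figure is easy to verify. A quick sketch, using only the chapter’s own numbers (the variable names are ours):

```python
# Back-of-the-envelope check of the parked-fleet generating capacity.
vehicles = 235e6              # registered U.S. vehicles (chapter's figure)
kw_per_vehicle = 75           # low end of the 75-100 kW fuel cell range
capacity_tw = vehicles * kw_per_vehicle / 1e9   # 1 TW = 1e9 kW

print(round(capacity_tw, 1))  # 17.6, i.e. "about 18 TW"
```

Footnote 10 puts the stationary U.S. generating fleet at roughly 1 TW, so a fully fuel-cell vehicle fleet would represent nearly eighteen times the capacity of every utility power plant in the country combined.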

To understand how or when such competition could arise, we must turn to another concept, that of enabling technology. By an “enabling technology,” we mean one that provides a service essential to the disrupting technology, but quite independent of it. Consider the overthrow of the manufactured ice industry, for example. The disruptive technology that penetrated the home refrigeration market in the 1930s and 1940s included small, compact refrigeration cycles and small, efficient electric motors, packaged as the home refrigerator. But electric refrigerators would have penetrated nothing without the widespread availability of electric energy. And so electricity became the enabling technology that allowed home refrigerators to displace the ice industry in its chief market.

For massively distributed generation, perhaps through parked vehicular fuel cells, to succeed, the electric grid must become capable of absorbing and distributing the input energy. Thus, a grid that can operate as a network of a very large number of nodes, any of which might alternatively serve as a source of electric energy or as a sink for electric energy, would serve as an enabling technology for these generators. This will require digital grid control: switching devices, computers, and software that are not available right now. But if they were to develop, the effects on the business model of the incumbent electric utility could be profound. And that is why we include “digital grid control” in our illustration of high-potential technologies that attack the current business model in Figure 3.1. Indeed, this set of technologies could treat the business model of the integrated electric utility as roughly as VOIP treats the business model of the incumbent telecom.

Paradise Gained

If paradise can be lost, so too can it be gained. Some of the high-potential technologies that could reinforce the electric utility business model appear in the southeast quadrant of Figure 3.1. Note that digital grid control appears in this quadrant also. That is because in its earliest phases, the digital grid actually makes the dispatch of existing power plants much more efficient. And if automotive fuel cells never become practical, perhaps because breakthroughs in battery technology make the plug-in electric vehicle superior, then digital grid technologies might actually reinforce the electric utility business model.

Integrated gasification combined cycle (IGCC) technology also appears in both Paradise Gained and Paradise Lost, but for a different kind of reason. This technology essentially “cooks” the coal under high pressure to set in motion a series of chemical reactions that produce a synthetic gas, “syngas.” The syngas chiefly contains hydrogen, carbon monoxide, methane, and other gaseous constituents whose content varies depending upon the conditions


10 In contrast, the stationary generating fleet in the United States has a generating capacity of around 1 TW, according to the U.S. Energy Information Administration.


in the gasifier and the type of feedstock. The gas can be burned in a combined cycle power plant or processed to create a broad slate of other fuels and chemicals. Thus, in principle, the output can be tailored to the relative prices of electricity, synthetic fuels, or synthetic chemicals. Currently, the IGCC technology achieves efficiencies in the range of 45%, and future systems might reach the 60% range. And so great potential plainly exists for growth in IGCC technology; the real issue concerns the business model for the electric utility company and whether IGCC reinforces or attacks it.

In fact, a case can be made for both. On the one hand, utilities count the building of large-scale industrial facilities among their core skills. But on the other, the essential operating characteristics of an IGCC unit appear closer to those of an oil refinery or chemical processing plant than to those of a coal-burning power plant. Further, the slate of products offered by IGCC will require utilities to understand fuels and chemical markets if they are to take full value from the technology. Thus utility companies would need to adopt new skill sets to succeed with this technology. New skills, of course, are not without precedent. For example, utility companies had to reach well beyond their customary licensing, construction, and operating practices when nuclear power plants were introduced in the 1970s. Those that mastered the new skills did well; those that did not reaped costly mistakes, which illustrates the strong link between an effective corporate culture and strategic risk, a link that we shall revisit at the conclusion of this chapter.

What … me worry?

Finally, we reach the strategically unimportant quadrant where niche technologies attack the business model. Rooftop photovoltaics and merchant power producers illustrate this well. As long as strategic management can remain confident that these niches will remain so, then little more needs to be done here.

The market dimension of strategic surprise

When fighter pilots list the advantages that one combat aircraft holds over another, they do not speak of speed. Rather, they refer to the ability of one aircraft to “turn inside” another, to negate other aspects of performance with a tighter turning radius. (Figure 3.2, courtesy of Lockheed-Martin, shows the contrails of the newer F-16 turning inside the vintage F-4.) For many businesses, like the electric utility industry, that require large investment in long-lived capital equipment, market changes can “turn inside” the ability of the industry incumbents to respond. Left unanticipated, such market shifts can lead to strategic surprise just as surely as unanticipated growth in a technology.

Past success offers little help here – nothing in the experience of the Western Union leadership could have led them to expect that social conversation would emerge as a driver of the telephone business. Nothing in the current telecom experience suggests a business model that can succeed as the marginal cost of service approaches zero.

Indeed, successful experience often sets a trap, as the difficulties of the U.S. auto-makers illustrate. Focused on the profit margins achieved by their heavy conventional vehicles, the U.S. auto companies neglected the possibility that significantly higher oil prices might stimulate demand for greater fuel economy. In contrast, Toyota Motor Company secured an early lead in the fuel economy market with a light hybrid vehicle, the Prius. The threat


to the “Big Three”11 comes less from the small hybrids, however, than from the movement of hybrid vehicle technology up-market into the heavy vehicle classes. Drawing upon its experience with small-vehicle hybrids, Toyota has moved up-market toward the heavier luxury and sport-utility markets. There, the higher prices that these vehicles command more than recoup the expense of the hybrid electric equipment; indeed, such vehicles have become enormously popular, starting with the Hollywood “A-list.”

To be sure, the “Big Three” have recognized the threat and are now responding, albeit tardily, with their own hybrid vehicle lines.12 But the advantages enjoyed by the market leader pose a formidable challenge: not only does the market leader command the low-cost position due to learning curve effects (formally known as dynamic economies of scale), but it also enjoys technology leadership through a superior base of tacit knowledge, the unwritten know-how gained by its technologists through trial, error, and learning.

Could the “Big Three” have reasonably anticipated the market shift? To be sure, the Gulf Coast storms that caused fuel prices to spike in the United States were unpredictable. But the peaking of world oil production is a known eventuality – only its timing defies forecast. To be sure, unconventional fuels such as those manufactured from tar sands or biological


Figure 3.2. Turning inside.

11 We use the traditional term “Big Three” to refer to General Motors, Ford, and DaimlerChrysler.
12 See, for example: Sapsford, Jason et al., “Ford Plans to Speed Up Production of Hybrid Vehicles,” The Wall Street Journal, 22 September 2005, A1.


materials can mitigate the certain decline in extracted oil production, but only at a higher price than conventional oil. And high fuel prices provided the impetus that drove the hybrid vehicles into the market in the first place. Equally important, once their market position becomes established, the hybrid electric vehicles might actually provide better mobility services than their conventional counterparts. Electric drive can offer superior torque, and the more capacious onboard electric system can provide for the burgeoning electrical needs of the contemporary automobile. This implies that the market share for hybrid electric technology might well survive even a downturn in the price of petroleum. Thus, a wealth of evidence suggested the prudence of an early rather than late start.

But all this raises an important question: How are strategic managers to identify the technology component and the market component of strategic surprise? To that issue, we now turn.

The Power of Corporate Culture

Of the several planning tools that might call management attention to the potential for strategic surprise, we favor scenario planning, also the subject of a chapter in this book. However, the real issue runs much deeper than merely adopting certain management tools. More fundamentally, it concerns the ability of a company to learn from events that are not a part of its historical experience, to distill from the cacophony of signals that flood the business environment the few that must command attention. And that is less a matter of tools than of organizational culture.

By “culture” we mean the skein of unwritten customs, habits, rewards, and punishments that provide the social framework for any corporation. Some of these cultures serve a company well, providing a highly effective impedance match with its surrounding business environment. But others can isolate the organization’s critical decision-makers from the business realities seen by its front-line employees.

Consider Western Union once again. Did some unheeded technologist deep within the organization appreciate the long-term potential of telephone technology? Did that person attempt to convey that idea to strategic management? Was anybody listening? Of course, we have no way of knowing. But we can say with certainty that an open culture rewarding initiative, inquiry, and learning is more likely to benefit from such a person than a culture without these attributes. And so we conclude that whatever management tools a company employs to guide the bets it must place on an unknowable future, an effective culture provides the foundation against strategic surprise.


CHAPTER 4

Why the Transmission Network as Presently Planned Will Not Provide the Efficient, Economic and Reliable Service that the Public Wants

Leonard S. Hyman

Senior Associate Consultant, R.J. Rudden Associates

Policy-makers have devised an organizational structure for transmission that does not align the interests of operator with consumer, discourages investment and entrepreneurship, lacks incentives for efficient operation, and diffuses responsibility for outcomes. Without a change of direction, the transmission sector will evolve into a command-and-control entity that will undermine the use of market mechanisms in the electric supply industry.

Electric utilities in the USA, by and large, developed into vertically integrated monopolies. That is, they generated electricity, transmitted it from power station to load center, and then distributed it to consumers. British utilities, on the other hand, developed differently. The British built up a National Grid, a transmission network that bought electricity from most generators, transported it to the load centers, and sold the electricity to distributors that resold it to consumers. Some distributors owned generators, and some generators owned distributors as well. Both the British and American systems worked. The British folded National Grid into the generating agency when the government nationalized the electric industry in 1947, but resurrected National Grid as a separate company when the government privatized the electric industry in 1990. American regulators have begun a process of removing transmission from the control of the integrated utilities.

Why restructure a vital industry that had provided reliable service for so many decades?

Old Model and New Needs

The electric industry used to exhibit economies of scale, that is, large facilities operated at lower costs per unit of output than small facilities. One large firm, operating full out,


could serve the public at lower cost than many small, competing firms. The public, then, would benefit if the state limited the electric supply function to one large firm, so it could exploit economies of scale to the fullest extent possible. To make certain that the public – not the electricity producer – reaped the benefits of the economies of scale, the state regulated the price charged, setting it at cost of production plus a profit. Thus the term “cost of service” regulation. Since most American utilities provided the functions of generation, transmission, and distribution, they offered a bundled price for electricity – one that included the costs of all the functions. The utility had no reason to determine the costs of the separate functions. In the UK, the utility did know the costs of the separate functions, because it paid for them separately.

Despite the fact that cost of service varied by time of day and season of year, utilities set prices based on average costs. Originally, they did not have the means to meter consumption on a timely basis. Later, they desired to spread costs over many customers, rather than target the customers who caused the costs. Doing so created cross subsidies between customer groups. Average pricing, for instance, encouraged consumers to take more electricity during peak periods, when the utility’s cost of production may have exceeded the price that it charged. That cost included payment for plant and equipment built specifically to serve the peak load that remained idle for the rest of the year. In other words, utilities had to keep more plant in service than would have been the case if customers had paid the correct price for service. All customers paid those excess costs whether or not they contributed to the problem, meaning that some paid more and others paid less. That is cross subsidy.

The old model produced declining prices for decades. The utilities installed larger plant that reduced costs, which lowered prices, which encouraged more consumption, which enabled the utilities to install even larger units that lowered prices, and so the process continued. The regulatory system protected the utilities from competition (assuming that anyone could compete against such a price-reducing firm), afforded them steady profits, and allowed them to attract capital. Regulation did have one other peculiar aspect. Under cost of service regulation, the inefficient utility with high costs could charge more than the efficient, low-cost utility. Efficient management earned no reward. Regulators set the profit on the basis of a return on capital invested. Academics accused utilities of over-investing in order to earn greater profits.

Despite its problems, the regulatory system worked until conditions began to change in the 1960s. Generating plants reached physical limits to increased efficiency, so building bigger no longer reduced costs. Utilities did not choose the new, gas-fired generators derived from the jet engine as the next technology, but rather nuclear power, which raised rather than lowered costs and jeopardized the finances of utilities. The Northeast Blackout of 1965 demonstrated inadequacies in the transmission grid. In the 1970s, industrial customers banded together to undo the price allocations and cross subsidies that discriminated against them. In 1978, Congress passed the Public Utilities Regulatory Policies Act, which permitted independent power producers to generate electricity and then sell it to utilities. Within a few years, the independent producers, using small, gas-fired turbines, demonstrated that they could produce electricity reliably, cleanly, and more cheaply than the utilities. Industrial customers discovered that alternatives to the utility existed. Perhaps, if economies of scale no longer prevailed, the utility’s monopoly position had lost its justification.

Then, in 1990, the British government sold the state-owned electric industry to the private sector. It split the industry into unregulated generating, regulated transmission, regulated


Presently planned transmission networks and public needs 45

distribution, and unregulated supply (sale of electricity to the customer). The generators sold output to a central wholesale market, large customers could buy directly from that market, and generators and customers could make side deals to guarantee prices. The regulator set prices in a manner that encouraged the utilities to operate more efficiently. The world had a model for a competitive power market operating beside a utility industry regulated in a way that produced cost savings for consumers.

America Moves Slowly

Congress passed the Energy Policy Act in 1992, establishing a new class of power producers that could sell into wholesale power markets. Congress did not open up the retail market to competition. In order to assure that the incumbent utilities did not operate their transmission lines in a manner that hindered competitive generators (who did not own transmission), the Act provided that the utility had to give the new generator the same access to transmission that it gave to its own generation facilities. Congress did not legislate separation of transmission from the utility, as did the British.

The transmission sector plays a central role in the development of a competitive market. It can facilitate or obstruct delivery from a competitive generator, not to mention set rules for connection to the grid that thwart access. It can, as well, price service in such a way that prevents distant generators from competing in a market. The Federal Energy Regulatory Commission (FERC), which has jurisdiction over wholesale power transactions and interstate transmission, took 4 years to lay out the rules needed for the 1992 Act to function. In Order 888,1 as noted by Awerbuch, Hyman and Vesey, FERC ordered:

utilities to separate the transmission network operations from other aspects of the integrated electricity business and to ultimately divest themselves of their transmission operations by either moving their transmission systems into business units that operate separately from the rest of the utility, by selling off the transmission assets, or by putting them in control of an organization that could run that transmission network independent of interference from the utility. At the time … in 1996, most utilities had no plans to sell … As a consequence, utilities compromised … agreeing to retain transmission ownership but cede operating control … to an independent system operator (ISO). With the exception of one case, the ISO simply replaced the existing power pool that had already operated the transmission function for its members.2

FERC also told the system operators to treat the utility’s wholesale trading the same way as everyone else’s. (At that time, most utility-associated traders sold excess output of power stations to neighbors, or bought power when their own power stations could not meet demand.) FERC told the three power pools in the Northeast to convert to the new status of ISO. The pools already ran the power plants of associated companies as if they were part of one company, always trying to keep the lowest-cost units in service. So the ISOs took over the role of the power pool, essentially, but with important modifications. These non-profit organizations did not report to the owners of the transmission lines or to the generating owners or to the state. The ISOs had independent boards of directors, controlled the operations of the network, and defined the rules for attachment to the grid.


In 1999, responding to complaints that the transmission owners who also owned generation continued to discriminate against competing generators, FERC issued Order 2000,3

which ordered transmission owners to “voluntarily” join regional transmission organizations (RTOs) or explain why. The RTO concept embraced two alternatives, the ISO and the independent transmission company (ITC). Modeled after Britain’s National Grid, the ITC was a for-profit grid owner and operator with no ownership of generation, and therefore it had no reason to operate the network in a manner that favored one generator over another. Proponents of the ITC believed that a for-profit entity would have more incentive to operate the grid in the most efficient manner possible. The order told the companies to join an RTO that had to be in operation by December 15, 2001. FERC said it would consider innovative ratemaking procedures. It limited utility share ownership of the RTO. It said that the RTO had to operate facilities, take responsibility for reliability, and manage transmission congestion. FERC also said the RTO could evolve in a manner that would improve its efficiency.

The order reflected the ideological split within FERC. The majority wanted to mandate a transition to network control by ISOs, with conventional regulation of the underlying utility assets. The minority hoped for the creation of profit-making transmission entities that operated under a system that provided incentives for them to improve network efficiency and attract capital. Order 2000 seemed to leave the door open for all proposals, but most of the transmission mavens of the day seemed to believe that business structure did not matter, although some experts dissented vociferously:

The term ISO, as the FERC originally used it, meant a non-profit operator – but not owner – of transmission lines. A number of analysts subsequently concluded that the ISO lacked the structure or incentives required to respond to market conditions and thus had inherent … inefficiencies. In addition, they concluded that the transfer of transmission asset control – but not ownership – to an agency with little incentive to maximize return on those assets made little sense from a business perspective. This led to heated debates on the virtues of ISOs versus the alternative: independent transmission companies … that would both own and operate the assets. This debate seemingly has caused FERC to move beyond its singular support for the ISO model and to substitute a neutral term – Regional Transmission Organization … to encompass both the ISO and ITC models. Regulators now say that the actual business organization does not really matter so that either ISO or ITC will do. We emphatically disagree with this position. Structure does matter. ITCs will produce superior results and RTOs should not become ISOs by another name.4

Not much happened after Order 2000, other than endless rounds of meetings, so in 2002 FERC decided to order utilities to join RTOs, imposed a pricing framework, set up a market design for the entire country called standard market design (SMD), and shelved the concept of a for-profit independent grid owner/operator.5 As part of the SMD, the RTOs had to adopt locational marginal pricing (LMP), a specific formula to charge users of transmission more when they used specific lines during periods of congestion on those lines, akin to charging commuters more for rush hour trains. LMP had two objectives. It was supposed to discourage generators from putting their facilities at places where transmission capacity was short, because they would have to pay high congestion charges that would reduce their profits. More dubiously, it was supposed to provide incentives to entrepreneurs


to solve the congestion problem by erecting new facilities that would collect the congestion fees from users. FERC seems to have envisioned the rise of merchant transmission entrepreneurs akin to the merchant generators that built power plants without assurance of long-term return through contracts to sell the electricity that they produced.

The SMD proposal provoked opposition from state regulators, who argued, rightly, that FERC had asserted the benefits of SMD without any justification, who objected that FERC was intruding on the jurisdiction of the states, and who claimed that a one-size-fits-all model did not make sense for a country as varied as the USA. Congressional opposition to SMD developed, as well. Furthermore, industry observers continued to complain that FERC had not effectively addressed the need to attract capital. So, early in 2003, FERC issued a proposal designed to entice transmission owners to sell their assets to independent firms and to attract capital into new transmission plant.6 The agency said that it would add bonuses to allowed return on equity for joining an RTO, for independent ownership, and for investment in new plant, although it did not specify the base to which it would add the bonuses, and put in the proviso that the total equity return allowed (including all the bonuses) could not exceed the rate of return allowed for comparable local utilities. FERC also provided a return on money that the buyer had to pay the seller to cover certain tax payments triggered by a sale. All in all, no more than 10–15% of the proposed benefits would go to attracting capital expenditures that would expand the transmission infrastructure. FERC’s effort did not attract any comers. Then, in April, FERC issued the ultimate waffle, a white paper that transformed SMD into the wholesale power market platform (WPMP), backed down from its previous challenge to state authority, and agreed to regional approaches.7 After the August 14, 2003 blackout, FERC and the Department of Energy changed their emphasis from restructuring to reliability.

In the decade following the passage of the Energy Policy Act of 1992, the real price of electricity to consumers fell about 1.1% per year, after excluding from the calculation the impact of falling fuel prices.8 That drop in prices is in line with the calculations of the potential benefits from adopting best practices made by management consultant Mitchell Diamond.9

Since the proclamation of SMD, only one new ISO has gone into operation. Two utilities did sell off transmission assets, but they did so because state restructuring law required the transactions. One merchant transmission line has gone into service, but only because the Secretary of Energy forced its opening as an emergency measure after the August 2003 blackout.

Reliability has become a public issue thanks to the August 2003 blackout. But consumers have suffered from a series of major outages since 1992, and the grid’s reliability may cause economic problems, on a steady basis, for digital economy firms. The long record of anemic investment in the grid since the initiation of restructuring, despite greater demands being placed on it, may contribute to unreliability.10 Transmission serves another purpose beyond facilitating the expansion of wholesale markets. It should shore up the reliability of the electric supply, providing a safety net for consumers whose local utility, operating under stress, cannot assure delivery of service. That function the policy-makers seem to have ignored until the lights went out. Restructuring has not included a systematic examination of the reliability level that customers want, what they want to pay for, and how to assure that reliability product at an economical price.

So far, restructuring can claim limited success. The price of electricity has fallen, in real terms, but not much. Large commercial and industrial customers have taken advantage of competitive markets, but residential customers have ignored them, by and large. Competitive firms that expanded to meet the supposed shortage of power supply have gone bankrupt, required debt restructuring to avoid collapse, or taken huge losses. The transmission network has barely expanded, mired as it is in disputes about who pays for what. Half the country opposes restructuring. The other half, which did restructure, shows little enthusiasm for the process. The transmission sector, and those regulating it, will play a key role in either pulling the electricity market out of the morass or sinking it altogether.

Status Report

The Federal Government has developed a preferred model for the new electric industry, although not every region will accept all the parts. Power generators will sell output into a regional wholesale market. (They may also sell output through long-term contracts.) Supply and demand will set wholesale prices except when the RTO or the FERC determines that sellers exercise market power. Therefore, despite the bidding, the market remains regulated. Utilities owning transmission must sell the lines to independent parties or place them under the operating control of an RTO. Independent owners, despite having no reason to discriminate against any generators, must place their facilities under RTO control.

The RTO, a non-profit organization with an independent board of directors, will operate but not own lines, will run the local wholesale power market, and will decide which plants run. To reliably operate the grid, the RTO has to assure adequate generating capacity and availability of so-called ancillary services – the electrical output and characteristics needed to keep the grid up and running. The RTO will, in addition, price use of lines in a manner that penalizes users of congested lines. The designated pricing method, LMP, to simplify, charges users the difference in the price of electricity between two points for use of the line between those two points, assuming, probably incorrectly, that users of the line, alone, cause the congestion. Although the method appears market based, it relies on the RTO’s judgment of what the line can carry, and gives the RTO no incentive to find ways to carry more on the lines, that is, to reduce congestion. System users may buy rights to collect those congestion charges to protect themselves against unexpected fluctuations in costs. Originally, backers of LMP and transmission rights touted them as devices to incent investors to solve congestion problems, but that hope has dried up. RTOs, as a whole, have not yet developed coherent frameworks for capital investment. Several ITCs have formed, but without the scope of operation previously envisioned. They do little more than build and own assets, while taking orders from the RTO.
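
The simplified LMP arithmetic described above can be sketched in a few lines. The node prices and quantities here are hypothetical, chosen only to illustrate the mechanism of charging the price difference between two points:

```python
# Toy illustration of locational marginal pricing (LMP) as simplified in the
# text: the charge for using a line is the difference in the price of
# electricity between its two endpoints, times the volume shipped. All
# prices and volumes are hypothetical.

def congestion_charge(lmp_source: float, lmp_sink: float, mwh: float) -> float:
    """Charge for moving `mwh` across a line: ($/MWh sink - $/MWh source) * MWh."""
    return (lmp_sink - lmp_source) * mwh

# Uncongested line: prices equal at both ends, so no congestion charge.
print(congestion_charge(30.0, 30.0, 100.0))  # 0.0

# Congested line: power worth $45/MWh at the sink but $30/MWh at the source,
# so shipping 100 MWh incurs a $1,500 congestion charge.
print(congestion_charge(30.0, 45.0, 100.0))  # 1500.0
```

A holder of transmission rights on that line would collect the same $1,500, which is why the text describes the rights as a hedge against unexpected congestion costs.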

In theory, FERC regulates interstate commerce, but all transmission assets owned by the utility, while under the control of the RTO, remain in state regulated rate base. Therefore, FERC sets the revenue requirements for only a fraction of the transmission business. The RTO reports to FERC, and what it controls accounts for a significant and rapidly rising percentage of the electric bill, but it has little in the way of regulated assets, and nobody seems to have control over the expenses it generates and dumps on the customer.

The states retain control over the local distribution network and the ultimate customer. Studies show that bringing the customer into the market, by pricing the product in a way that reveals costs, would produce greater savings than all the other policies that consume regulatory time and effort.11

Although British experience shows that regulation designed to improve productivity can produce dramatic benefits, American regulators have eschewed such methods, assuming that they know about them.

Response to the August 2003 blackout followed narrow engineering lines. One company did not follow rules. One RTO did not know what was going on. The solution: obey rules, rewrite rules when required, and make sure that everyone follows them. That does not address the question of whether the grid should be designed and operated in such a way that an operating error in Ohio (if that was the cause) could bring down the entire Northeast. Nor does it ask whether non-grid solutions (such as distributed generation located near consumers or arrangements to cut off designated consumers, with compensation for inconvenience) could provide equal or better reliability at a lower cost.

So, by early 2004, the northeastern quadrant of the United States had organized into four RTOs, all committed to using LMP. An RTO in Texas operates outside FERC jurisdiction. The California ISO runs an essentially re-regulated market. The rest of the country has resisted restructuring or FERC attempts to impose structure.

As for infrastructure, transmission capital spending has declined in real terms. In the first decade of restructuring, transmission capacity per unit of demand fell almost 20%. Projections for 2003–2007 show transmission capacity rising less than 1% per year, or half the expected rate of growth in electricity demand.12 The Electric Power Research Institute (EPRI) estimated that modernization of the transmission and distribution networks over a 20-year period would cost roughly $12 billion per year, presumably largely in addition to the roughly $16 billion now being spent on a seemingly bare-bones attempt to keep the network in repair and able to serve demand.13 Thus, any grid-based attempt to bring the electric system into the twenty-first century would require the industry to raise massive sums of money.

By now, even the staunchest proponent of LMP seems ready to concede that LMP does not send the signals needed to attract capital for large transmission investments, but the only solution offered appears to be to mandate investment if the RTO designates that investment as needed to maintain system reliability.14 As FERC chairman Patrick Henry Wood III put it, “We need something on the books so there is a shareholder obligation to act and so it is everyone’s cost to bear equally.”15 Does that mean that market incentives apply only to congestion on the day of the transaction, and that the new market structure simply will provide an expensive and cumbersome camouflage for a reversion to command-and-control?

That outcome could drive out what entrepreneurship remains in the transmission sector, enshrine prescriptive operating rules that attempt to maintain reliability at levels set without regard to costs, and possibly deprive consumers of the benefits of increased productivity within the sector. Could the introduction of incentives produce a better outcome?

Incentives in Regulated Industry

Incentive payments pervade the economy. Salespeople work on commissions. Automobile manufacturers pay cash to buyers of slow-selling vehicles. Airlines give frequent flyer miles to loyal customers. The government pays farmers to produce some crops and not to produce others.

Oxford’s lexicographers define “incentive” as “a motive … to action” or “a payment or concession to stimulate greater output ….”16 They trace the word to the Latin incentivus, meaning “setting the tune.” They define “disincentive,” from an economics standpoint, as “a source of discouragement to productivity and progress.”17 Perhaps Henry of Navarre put it best when he said, “One catches more flies with a spoonful of honey than with twenty casks of vinegar.”

Providing an incentive is the same as paying a bonus. Do the job satisfactorily and you get paid. Do the job well and you earn something extra.

Regulators often argue against the use of incentives, asserting, “The regulated entity is duty-bound to do the right thing, and we pay it exactly what it needs to do the right thing. You should not have to pay something extra to get people to do what they are paid to do.” This argument misses several points:

Regulation, to benefit consumers, should focus on the price and quality of the product offered rather than on the profit earned. The incentive should encourage the utility to improve the product without raising prices, or to furnish the service for less than under standard regulation. The consumer should come out ahead, whether or not the utility makes a greater profit as a result of the incentive plan.

The incentive should encourage the utility to take the risks of innovating, which it could not prudently take under the standard regulatory regime, because regulators would penalize it for failure but would not reward it for success.

If properly fashioned, the incentive will encourage the utility to aim for a particular end result, without specifying the steps required to reach the destination, thereby encouraging innovation in an environment not known for innovation.

Incentives and disincentives already exist in standard regulation. Companies earn a return on rate base, and the bigger the rate base, the bigger the income. Some economists concluded that utilities tended to overinvest in rate base, which added to customer bills.18

As regulators set rates that covered prudently incurred costs, with little consideration of whether the utility could have operated with lower costs, critics said that typical cost-of-service (rate-of-return) regulation bred inefficiency.19 Almost all the countries that privatized their utilities in the 1980s and 1990s chose not to follow the American model of utility regulation because of the perverse incentives embedded in it.

Dr. John Chalice, in an 1858–1859 investigation of the British city gas industry, proposed a “sliding scale” for the manufactured gas sold by the local utilities. The government should set a maximum dividend return on capital (akin to a maximum return on investment, because the companies paid out most of earnings as dividends), with the proviso that the utility could raise the dividend by a prescribed amount over the maximum if it could lower gas prices by a prescribed amount below the price set by the government. That pricing procedure became standard in the British gas and electric industries, lasted well into the twentieth century, and was introduced into the USA, where one such plan stayed in force from 1925 to 1955.20 The British learned, soon after the introduction of price regulation, that they had to find ways to make the utility pay attention to costs, and that the best way to do so was to offer a reward for producing results that benefited consumers. Perhaps they remembered that famous line from Adam Smith, “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest.”21
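
The sliding scale described above can be sketched as a simple rule. The step sizes here (a quarter-point of extra dividend for each penny of price cut) are hypothetical illustrations, not the historical parameters:

```python
# A toy sketch of the "sliding scale": the allowed dividend rises by a fixed
# increment for each prescribed step the utility cuts its price below the
# government benchmark. All numbers are hypothetical.

def allowed_dividend(base_dividend_pct: float, benchmark_cents: int,
                     actual_cents: int, cents_per_step: int = 1,
                     bonus_per_step_pct: float = 0.25) -> float:
    """Allowed dividend (%) given the price charged versus the benchmark."""
    if actual_cents >= benchmark_cents:
        return base_dividend_pct                # no price cut, no bonus
    steps = (benchmark_cents - actual_cents) // cents_per_step
    return base_dividend_pct + steps * bonus_per_step_pct

# Charging the benchmark price earns only the base dividend.
print(allowed_dividend(10.0, 10, 10))  # 10.0

# Cutting the price 3 cents below a 10-cent benchmark earns 3 bonus steps.
print(allowed_dividend(10.0, 10, 7))   # 10.75
```

The point of the mechanism, as the text notes, is that the utility profits only by delivering a visible price cut to consumers first.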

In the latter decades of the twentieth century, American regulators instituted various forms of incentive plans, often called performance-based regulation (PBR), such as:

Performance index: Regulators grant bonus returns if the utility achieves or surpasses given operating indices. (The procedure has a drawback, however. It focuses the attention of the utility on beating indices chosen by the regulator, rather than on improving overall costs or providing the services desired by the customer.)22

Range of return: The regulator sets a range around the allowed return. The utility benefits from operating improvements that do not require a price increase up to the top of the range, but also has to wait until earnings hit the bottom of the range before it can seek additional rate relief. (The range allowed is usually limited, though, so the practice may have only minimal impact on customers and operations.)

Profit sharing: The regulator sets the price of the product and the allowed return. If the utility can lift its return, through operating economies or higher sales, over a set level, it must share the excess profits with customers through price reductions. On the downside, it will have to endure a reduction in return below the allowed level before it can seek to recoup lost income through price increases. (The range is, usually, wider than that allowed in range-of-return incentive regulation. Over a maximum return, all savings go to customers.)

Price moratorium: As part of restructuring deals, regulators instituted price freezes for multi-year periods. During this period of time, the utility can benefit from all operating savings and sales increases. Presumably the customer benefits from not having prices rise. (The price freeze, however, may simply encourage the utility to defer expenses until after the end of the freeze, when the utility could raise prices to cover the costs. Furthermore, during the freeze, the utility has no way to recover the cost of new capital invested, so the utility may defer capital spending, as well.23 Thus, the moratorium may change the timing of costs, rather than save consumers money.)

Regulatory lag: By delaying regulatory examination for extended periods of time, the regulator allows the utility to retain the benefits of productivity improvements and, thereby, encourages innovation and efficiency. By delaying regulatory action, though, the regulator may also discourage needed capital expenditures, because the utility will defer investments for which it remains uncompensated for extended periods of time. (The big problem is that regulatory lag depends on the regulators not doing their jobs efficiently or simply looking the other way. The next set of regulators may not be so inclined. It is difficult to plan for, invest in, or run a business on such an uncertain basis.)
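
Of the mechanisms listed above, profit sharing is the easiest to make concrete. The sketch below uses hypothetical thresholds (a 12% sharing threshold, a 14% cap, and a 50/50 split), none of which come from the text:

```python
# A toy sketch of profit-sharing regulation: earned returns up to a sharing
# threshold stay with the utility, the band between the threshold and a cap
# is split with customers, and everything above the cap is refunded. The
# thresholds and the 50/50 split are hypothetical.

def shareholder_return(earned_pct: float, share_start: float = 12.0,
                       cap: float = 14.0, utility_share: float = 0.5) -> float:
    """Percent return the utility keeps after sharing excess profits."""
    if earned_pct <= share_start:
        return earned_pct                       # below the threshold: keep it all
    shared_band = min(earned_pct, cap) - share_start
    return share_start + shared_band * utility_share   # above the cap: keep no more

print(shareholder_return(11.0))  # 11.0  (within the deadband)
print(shareholder_return(13.0))  # 12.5  (keeps half of the 1% above 12%)
print(shareholder_return(16.0))  # 13.0  (capped: all return above 14% refunded)
```

The asymmetry the text describes also applies on the downside: the utility absorbs shortfalls below the allowed return before it may seek rate relief.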

Most other countries, in the process of privatization or restructuring of their utilities, have instituted formal incentive regulatory programs. The British set the standard for regulation when they transformed a formula designed to set prices for condoms into an elegantly simple regulatory regime. The regulatory agency examines the utility’s cost structure and need for capital over the coming 5-year period. It then sets a beginning price for service that is automatically adjusted every year for the rate of inflation and also adjusted downward every year by a fixed productivity factor.

If the utility can find more productivity savings than those estimated by the regulator at the beginning of the period, it can keep the difference. If it is unable to attain the estimated productivity gains, it still must reduce prices by the formula. Companies with high capital spending needs have a capital expenditure factor added to the formula. At the end of the 5-year period, the regulator resets the formula, so utilities cannot retain the additional profits for an extended period of time.
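
The British price-cap formula described above is commonly written as “RPI minus X”: each year’s allowed price is the previous year’s adjusted up by inflation and down by the productivity factor. The inflation and productivity numbers below are illustrative assumptions:

```python
# A minimal sketch of the price-cap formula: P_t = P_{t-1} * (1 + RPI - X),
# where RPI is the inflation rate and X the fixed productivity factor. The
# 3% inflation and 5% productivity figures are hypothetical.

def price_cap_path(start_price: float, inflation: float,
                   productivity_x: float, years: int = 5) -> list:
    """Allowed price for each year of the control period, starting price included."""
    prices = [start_price]
    for _ in range(years):
        prices.append(prices[-1] * (1 + inflation - productivity_x))
    return prices

# With 3% inflation and X = 5%, the allowed price ratchets down 2% a year.
path = price_cap_path(100.0, 0.03, 0.05)
print(round(path[-1], 2))  # 90.39
```

Any cost savings beyond the built-in 2% a year accrue to the utility until the regulator resets the formula at the end of the 5-year period.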

The foreign regulatory formulas, by and large, focus on price rather than profitability. They do not presume to tell the utility how to accomplish the necessary productivity improvements. The utilities have an incentive to innovate, because they know that they will keep some of the benefits, which compensate them for taking the risks involved in the innovation. At the same time, they will assess the risks of the innovation, because they know that they cannot charge the consumer more to cover the costs if the innovation fails. The foreign regulatory regimes have been in operation for approximately 20 years, seemingly without hindering the utilities’ ability to raise capital or provide service.

Market Mechanisms in Place of Regulation

At least from the days of Adam Smith, economists have argued that government regulation distorts the market, meaning that it hinders the efficient production and distribution of products and services to consumers. From the early debates in Great Britain to Samuel Insull’s call for regulation in the USA, evidence points to the protection of the utility as one of the main goals of regulation. University of Chicago economists picked apart regulation over the decades and picked up Nobel Prizes for their work. Alfred E. Kahn observed, in 1971:

Regulated monopoly is a very imperfect instrument … It suffers from the evils of monopoly itself – inertia, the absence of … stimuli to aggressive, efficient and innovative performance. Regulation itself tends inherently to be protective of monopoly, passive, negative, and unimaginative …. Regulation is ill-equipped to treat the more important aspects of performance – efficiency, service innovation, risk taking and probing the elasticity of demand. Herein lies the great attraction of competition: it supplies the direct spur and the market test of performance.24

If that analysis is correct, consumers could benefit from the replacement of regulation by a competitive market. But, policy-makers say, the utility is a natural monopoly, competition is not feasible, and the regulatory agency acts as a substitute for the discipline of the marketplace. That argument, though, assumes, to some extent, that the market is static, with a uniform, never-changing product sold to consumers who always buy the same thing and have no other choices.

In dynamic, competitive markets, firms produce new products and services. In regulated industries, the regulator often designs the product, always approves the product line and its pricing, and gives careful consideration to the impact of the offering on different classes of customers and even on other market participants (competitors). Regulators not only believe that they can and should control the direction of the market, but they also think that they can predict the results of their policies. Treacy and Wiersema, in their study of effective innovation and customer care, however, argued that one of the keys to success in a competitive market is “Expecting the Unexpected.”25 Deregulation has produced the unexpected, including the hub-and-spoke networks of airlines, the proliferation of cellular telephony, a glut of natural gas, the demise of independent stock research, and a worldwide overabundance of power stations. Generally speaking, consumers come out ahead, because they do not pay for what they do not want, and they are offered services they did not have before.

In the regulated market, price falls out of a formula after everything else has been determined. The regulator first sets the profit that the utility should earn, adds on the expenses incurred to produce the determined volume of output, and then divides the total of profit plus expense by volume to determine the price per unit sold. In the ideal regulated market, customers cannot come out ahead by using the product more efficiently (and taking less) in response to the high price, because the regulator will raise prices even more to compensate for lower demand, and the regulated entity cannot benefit by lowering its costs, because the regulator will pass on those savings to customers as soon as it can.
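
The cost-of-service arithmetic in that paragraph reduces to one division, which also exposes the perverse feedback the text describes. The dollar figures below are made up for illustration:

```python
# The regulated pricing formula as described: the regulator fixes allowed
# profit and expenses (the "revenue requirement"), then divides by expected
# volume to get the unit price. All numbers are hypothetical.

def regulated_price(allowed_profit: float, expenses: float, volume: float) -> float:
    """Price per unit = (allowed profit + expenses) / units sold."""
    return (allowed_profit + expenses) / volume

# $20 allowed profit + $80 expenses spread over 1,000 units.
print(regulated_price(20.0, 80.0, 1000))  # 0.1

# If customers conserve and volume falls to 800 units, the same revenue
# requirement is spread over fewer units -- so the unit price rises,
# the perverse incentive noted in the text.
print(regulated_price(20.0, 80.0, 800))   # 0.125
```

Efficiency gains by customers thus feed back into higher unit prices, rather than into savings they can keep.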

In the competitive market, price encourages consumers to compare the price to the value of the product to them, to the costs of doing without or with less, and to the costs of alternatives. Consumers will use the product more efficiently, knowing that they will retain the benefits of efficient use. Price also signals existing and potential suppliers about whether to supply the product and how much, and it encourages them to operate efficiently, because the most efficient supplier will earn the highest profit.

Pricing could provide options to the consumer. Electricity is a complicated package. You want the energy to do work for you. But it has greater value in the dead of winter (to keep the furnace running) and in the heat of the summer (to operate the air conditioning). Some consumers, though, might put up with less reliable service in the winter (because they own wood stoves) and others with less reliable service in the summer (because they live near the beach) if they could pay less for the electricity. In a regulated market, the utility tends to charge the same price for a uniform bundle of product and services. As a result, some customers pay for a higher quality bundle than they need while others would pay more for a higher quality bundle than they get.

A regulatory framework should do more than cover the expenses of the service provider. It should encourage (and reward) those who use or provide the service efficiently. Pricing serves that purpose in a competitive market. It could do so, as well, in a regulated market.

Deregulation reallocates risk, which affects the decision-making process. Regulation shoves most risk onto the consumer (with the exception of risk from legally imprudent activities). The utility may receive a low return from the regulator, but that low return derives, in part, from the allocation of the risk to the consumer. The regulator, however, rarely if ever considers whether the low price paid by the consumer (due to the low return earned by the utility) compensates the consumer for the risk incurred. To overly simplify, the decision-maker does not take the risk of the decision. True deregulation shifts all costs and risks of product development and production to the decision-maker, the investor.

One might argue that consumers now have to take price risk when they buy the newly deregulated product, but they have no difficulty taking price risk for everything else that they buy. And they took price risk when they bought the regulated product, because they did not know what price the regulator would set, over time. At least, in a competitive market, no producer can overcharge for long, because competitors will enter the market and the increased supply will lower prices. (The bizarre prices exhibited in electricity markets in the past few years may indicate the rigged nature of the rules under which they operated, the lack of sufficient competition, and the incompetence of supervisory authorities, rather than a failure of the competitive model per se. In the end, however, high prices induced a glut of supply that brought prices down.)

Here is the essence of the difference between the regulated and unregulated models. Years ago, Ford Motor Company sank a fortune into the development and production of a new car, the Edsel. Nobody wanted to buy the Edsel. Ford did not add a surcharge to the price of Fords, Lincolns, and Mercurys in order to recoup the losses on the Edsel. Had Ford been a regulated business, it and the regulators would have found a way to make consumers who did not want Edsels pay for them.

Utilities, unlike many other businesses, have to attract large pools of capital for long-lived investment. The Hope Natural Gas Co.26 decision of 1944 specifically required that the regulatory agency set a return that will attract capital to the business. In a competitive market, potential investors must see the opportunity to earn a return commensurate with the risk taken before they invest. In the regulated utility business, in theory, the utility invests on the assumption that the regulator will grant an attractive return after completion of the investment and that it will earn that return. In reality, in the period since the end of World War II, the electric industry has consistently earned less than the return allowed, with one contributing factor being the disallowance (and subsequent write-off) of investment.27 Thus, to achieve the desired investment in the regulatory environment, the rules may have to include incentives that improve the likelihood that the investment will earn the allowed return. In other words, incentives should not only encourage market participants to find the most efficient means of providing customers what they want, but also make sure that the necessary capital enters the business, whether on a regulated or unregulated basis.

In a competitive market, each participant weighs the risks of a decision it makes against the potential returns it hopes to earn. The process places the cost of error on the decision-maker. Suppliers vie with each other to sell their products. Customers can choose the best product or the best price. But, one might object, operating and building an electrical network is not a competitive business. It requires regulation. Accept that argument. It still does not preclude the development of a regulatory system that makes regulation a better surrogate for the competitive market than it is now. Consumers might benefit if regulators encourage market participants to find the most efficient solutions rather than those chosen by the regulators. Everyone might gain from a system that makes decision-makers responsible for their decisions. Incentive-based regulation attempts to bring market forces into the regulatory process in order to achieve social goals more efficiently.

Incentives for Transmission and Reliability

The regulatory system can encourage or discourage timely planning and execution of capital expenditures required to facilitate commerce or maintain reliability. It can attract new investors or cause them to look elsewhere. It can induce existing firms to plow more into the business, or lead them to determine that more investment would reduce the value of the enterprise. It could promote the efficient operation of the existing network, or fossilize operating procedures. Finally, it could reward those who provide services desired by consumers and penalize those who demand that customers do only what is convenient for the network.

For years, observers decried the lack of investment in the transmission network. Utility executives cite a plethora of difficulties to account for the low level of spending. But those difficulties represent the risks of the business. Return should compensate for risk. Lack of investment may indicate inadequate return. Or lack of investment may come about because the transmission owner does not want to open the network to competing generation. If the reason is an inadequately calculated return or deliberate obstruction by the utility, then the regulator has failed at the job.

Could the existing grid run more efficiently and reliably? Vernon L. Smith and Lynne Kiesling made the case that it could, if regulators concentrated on bringing market pricing to the retail level and instituting demand-side response mechanisms.28 The EPRI put together a long list of actions that could improve grid efficiency and reliability.29 National Grid, the UK’s transmission owner and operator, has operated within an incentive-based regulatory framework for over a decade. It has reduced the real cost of delivery by 40% without adding new rights of way.30 Yet nothing in FERC’s major pronouncements since the current Commission took over in the fall of 2001 indicates that the agency ever considered the possibility of offering incentives to improve the operation of the existing network.

Finally, most businesses succeed by serving customers, providing them with what they want. It is unclear, at this stage, whether the transmission operators or owners have a clear notion of who the customer is, but it is clear that the network does not design its offerings around customer needs, but rather requires the customer to accommodate the requirements of the network. Note that network operators do not make more or less by providing better or worse service (as defined by the customer). This is an odd piece missing from a structure redesigned, supposedly, to bring the benefits of competitive choices to consumers.

Transmission and Reliability Then and Now

For years, studies of transmission said that the capacity of the network has not grown commensurately with the demands placed upon it.31 Real spending on transmission declined dramatically, possibly to levels that do not replace depreciated old plant.32 The number and severity of outages, and the need to halt transactions due to line congestion, increased.33

Conceivably, the increasing demands placed on an essentially unchanged network have affected the network’s ability to deal with unexpected events.

Order 2000, issued by FERC in 1999, sent mixed signals to investors. FERC intimated that it would consider innovative ratemaking proposals, if accompanied by cost–benefit analyses. FERC did not, however, put a generic proposal on the table that would move the process forward in a dramatic fashion. It would work on a case-by-case basis, which was what it always did.

The government has issued reports on reliability, periodically, at least since the 1965 blackout, often after still another outage or difficulty on the network. With the exception of the formation of the North American Electric Reliability Council (NERC) and EPRI, few major institutional changes resulted from the reports.

The dictionary defines “rely” as “depend on with confidence”34 and “reliable” as “of sound and consistent character or quality.”35 NERC defines “reliability” as:

The degree to which the performance of the elements of that system result in power being delivered to consumers within accepted standards and in the amount desired.36

In a real market, studies of reliability would also ask these questions:

Why is the network built in such a way that it cannot withstand periodic emergencies?

Would it be more efficient (meaning more beneficial to the ultimate customer), instead, to mitigate the effect of the emergency at the local level?

The network has suffered numerous reliability disturbances on a regular basis for decades. Evidence indicates that they have increased in frequency and seriousness, although one could argue endlessly about how many small events equal a big one. The industry usually boasts about a 99.9% reliability record, although that number may suffer from definitional issues. That 0.1%, however, may cost consumers between $7.8 and $19.5 billion per year, based on common estimates of the value of lost load.37 (That is roughly 2–5% of the electric bill.) EPRI surveyed firms in the “digital economy” and reported that they claimed their losses from power disturbances at $52 billion per year. From that survey and additional work, EPRI estimated the total cost of power disturbances to the economy at $100 billion per year.38 Whatever the number, three points stand out: the number is big, consumers bear the burden directly, and few people ask whether the grid could reduce those losses more economically than could consumers acting for themselves.
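The dollar range above can be reconstructed with simple arithmetic. The sketch below is illustrative only: the annual load figure (roughly 3.9 billion MWh) and the $100/MWh average retail rate are assumptions supplied here, not figures from the chapter; the $2,000–5,000/MWh value-of-lost-load range is the one cited in note 37.

```python
# Illustrative reconstruction of the lost-load estimate.
# ASSUMED (not from the source): annual deliveries of ~3.9 billion MWh
# and an average retail rate of $100/MWh.
ANNUAL_LOAD_MWH = 3.9e9              # assumed national deliveries, MWh/year
UNSERVED_SHARE = 0.001               # the 0.1% implied by "99.9% reliability"
VOLL_LOW, VOLL_HIGH = 2_000, 5_000   # $/MWh, value of lost load (note 37)
RETAIL_RATE = 100                    # assumed average retail rate, $/MWh

unserved_mwh = ANNUAL_LOAD_MWH * UNSERVED_SHARE
cost_low = unserved_mwh * VOLL_LOW     # -> $7.8 billion per year
cost_high = unserved_mwh * VOLL_HIGH   # -> $19.5 billion per year

annual_bill = ANNUAL_LOAD_MWH * RETAIL_RATE
share_low = cost_low / annual_bill     # -> 2% of the bill
share_high = cost_high / annual_bill   # -> 5% of the bill
print(cost_low / 1e9, cost_high / 1e9, share_low, share_high)
```

Put differently, the 2–5% figure is just the ratio of the value of lost load to the retail rate (20–50 times) multiplied by the 0.1% unserved share.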

One might expect a business to attempt to meet the service needs of its customers, when it could do so in an economical manner. Restructuring, however, has complicated that task, by severing the link between the infrastructure owner and the customer, and by fixing prices in a way that could turn providing better service into a losing proposition. For instance, many utilities operate under multi-year price freezes which provide no room to recover costs of improved customer service or reliability, even if the improvement reduces the customer’s expenses more than the extra amount the utility would have to charge to improve the service. Unfortunately, the industry structure now has the look of something put together in a way to provoke the minimum of dissatisfaction from stakeholders, rather than as an efficient product delivery mechanism designed to satisfy customers. Basically, nobody is in charge of delivering to customers, for their satisfaction and approval, a package of products and services, and nobody reaps the benefits of or suffers the consequences of customer satisfaction or dissatisfaction.

The fact that no firm now produces the complete package is irrelevant. In other businesses, virtual corporations design products, contract out all aspects of manufacture and delivery, but still retain responsibility for the success or failure of the product. Auto-makers depend on outside suppliers for much of their content, but they do not let the suppliers dictate the design of the product, the availability of parts, the manufacturing standards or the price of the purchased products. All suppliers understand that if the car will not sell they all may go out of business, so they have a stake in the success of the auto manufacturer, which means the success of the manufacturer in putting together a competitively priced product. In the electric business, in effect, the suppliers seem to call the shots.

Operationally, in the restructured electricity market, the RTO plays the central role, running day-to-day operations and planning for the future. The transmission owners (merchant or utility) execute the orders and participate in planning, although no clear framework for expansion of the networks seems to exist. The structure gives the RTO little incentive to substitute productivity improvements on its part for capital additions, other than the likelihood that a cumbersome and potentially litigious decision-making process could delay needed physical expansion, thereby forcing the RTO to find a way around inadequate plant capacity. So far, FERC has not spelled out a means to reward the RTO for improved productivity, possibly because the non-profit nature of the RTO makes it unresponsive to incentives, and it has not spelled out means to reward the transmission owners for increased productivity, possibly because it sees little reason to provide incentives to those who cannot respond because someone else makes the decisions for them.

Cost of Fixing the Problem

First, what is the problem? Is it making sure that all firms follow prescribed procedures, or writing the correct rules, or assuring aggressive tree trimming? Solving those problems would add relatively little to electric bills. NERC would either enforce existing rules or write new ones. Maintenance, information technology, and compliance expenses would rise, and everyone would be happy until the accident not anticipated by the rules, or until growth in demand bumped against fixed network capacity, at which point someone might discover that policy-makers fixed the wrong problem.

Perhaps the existence of major congestion points renders the network less able to meet demands put upon it. Assume a cost of $1 billion to correct each of 16 major congestion points39 (possibly an overestimate). The carrying costs of such an investment would add roughly 2% to the average electric bill before subtracting out the savings from lower congestion costs.


Perhaps dampening peak demand through price signals could reduce strains on the network. The system would have to install advanced metering and communications devices. Assume that price signals could affect about 15% of the load, that the utilities would have to install special meters for 15% of their customers, and that each connection would cost $1,000. (More likely, the utilities would install the equipment at a smaller number of large customers. The cost per meter used in this estimate probably exceeds the cost of a mass-produced product several times over.) That investment of less than $15 billion would add close to 2% to the average electric bill, before subtracting savings derived from the load management.

Numerous studies have argued that transmission and distribution (T&D) spending has to rise in order to bring the plant account up to the level needed to properly serve demand. Assume that the industry needs to add $8 billion per year to its T&D spending plan,40 an extraordinary increment. Financing that sum, annually, would add an incremental 1% to the electric bill each year, excluding any benefits accruing to customers from their ability to access more competitive markets.
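A common way to translate capital figures like those above into bill impacts is an annual carrying charge (return on capital plus depreciation and taxes) divided by total retail revenues. The function below sketches that method; the 15% carrying rate and the $250 billion national bill are assumed inputs, not the author’s, so the results only bracket the chapter’s 1–2% estimates rather than reproduce them exactly.

```python
def pct_of_bill(capex, carrying_rate, annual_bill):
    """Annual revenue requirement of an investment, as a share of the
    total electric bill. carrying_rate bundles return on capital,
    depreciation, and taxes into one annual percentage of capital spent."""
    return capex * carrying_rate / annual_bill

# Hypothetical inputs: 15% carrying charge, $250 billion national bill.
congestion = pct_of_bill(16e9, 0.15, 250e9)  # 16 bottlenecks at $1 billion each
metering = pct_of_bill(15e9, 0.15, 250e9)    # the advanced metering program
td_step = pct_of_bill(8e9, 0.15, 250e9)      # one year's $8 billion T&D increment
print(congestion, metering, td_step)
```

With these assumed inputs each program works out to roughly 0.5–1% of the bill; the exact figure is sensitive to the carrying charge and revenue base chosen, which the chapter does not specify.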

In short, the price paid by consumers to remedy network inadequacies, beyond the need to rewrite or enforce rules, could be offset by reductions in the expenses that consumers now bear as a result of those inadequacies.

Incentives and Fixes Within the Existing Framework

Regulators have placed their bets on LMP and congestion rights as the signals that will attract investment to the grid, but they hedged those bets by putting the RTO in charge of authorizing investments needed but somehow not signaled by LMP, without any indication of what the RTOs will offer to get anybody to invest. Regulators could put in place a number of simple measures – common in other regulatory jurisdictions – to encourage needed investment. The easiest and probably most effective measure would be to raise return on transmission equity to a level that actually attracted capital. The second would be to allow returns on plant under construction (construction work in progress in rate base) because of the extended duration of construction projects. For small additions to plant, regulators could authorize the utilities to raise prices automatically, within set limits, as the projects are completed, to quickly cover the costs of these additions without the expense of a rate case. (Several states use this procedure for ongoing water utility expenditures.) Finally, the regulators could accelerate the depreciation schedule on plant, to reduce the risk of asset stranding and to increase the cash flow on the investment.

Those incentives to attract investment to transmission would have a minimal impact on prices to the end use customer. Raising return on transmission equity investment by 500 basis points, for instance, would add less than 1% to the overall electric bill. Changing depreciation schedules, putting construction work in progress into rate base, and automatically raising prices to cover costs of small additions would have little or no impact on electric bills, other than to change the timing of payments.
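The order of magnitude of the 500-basis-point claim can be checked with rough inputs: a national transmission rate base of about $50 billion, half financed with equity, a pre-tax revenue gross-up of roughly 1.6, and total retail revenues of about $250 billion. None of these figures comes from the chapter; they are assumptions for scale only.

```python
# ASSUMED inputs for scale only - not figures from the source.
RATE_BASE = 50e9      # assumed US transmission rate base, $
EQUITY_SHARE = 0.5    # assumed equity fraction of the capital structure
ROE_INCREASE = 0.05   # the 500 basis points cited in the text
TAX_GROSSUP = 1.6     # rough pre-tax revenue requirement multiplier
ANNUAL_BILL = 250e9   # assumed total US retail electric revenues, $

extra_revenue = RATE_BASE * EQUITY_SHARE * ROE_INCREASE * TAX_GROSSUP
share = extra_revenue / ANNUAL_BILL
print(share)  # about 0.8% of the bill - under 1%, consistent with the text
```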

The endless debates, regulatory lags, and bickering may have diverted the industry away from financing an expansion when capital market conditions were favorable in a way not likely to be seen again for years. Investors soured by market collapse and disturbed by low returns available from money market accounts would have been attracted to the combination of return, risk, and current income that transmission investments might have provided. Conditions might remain favorable for some time into 2004, but a combination of economic upturn and rising interest rates could encourage investors to seek other places to put their money.

Incorporating demand side measures into reliability planning would serve two purposes. It would give grid operators an additional set of tools that do not require massive investment in projects with long lead times. And, it would move attention closer to solving the problem of unreliability as experienced by the customer rather than as seen by the grid operator, meaning that the measure of reliability is whether or not the customer has the energy needed. Demand side measures have to encompass more than turning out the lights of hapless customers whenever the grid chooses to do so. The grid operator should reach out to the demand side and pay whatever market rates are necessary to acquire demand side resources, when doing so is less expensive than using centralized resources.

Investors may not want to depend on congestion rights for their profits. They may want contracts, or some equivalent, to assure them of recovery of investment. Lack of certainty raises the cost of capital. But, in a restructured industry, few players can sign contracts because they cannot depend on the allegiance of their own customers. Perhaps electricity suppliers should offer contracts to customers, guaranteeing price and reliability, within bounds, with those contracts representing tradable property rights. During periods of short supply and high spot market prices, those customers who do not value the electricity highly could make money by selling the rights to the electricity. If the grid has to interrupt service, it would have to pay customers for the inability to deliver, as contracted. That requirement would force the grid operator to consider alternatives to cutting off service, such as installing distributed resources, reordering operations in such a way as to take preventive steps, or cutting off only those customers who have agreed to lower quality service in return for a lower price. Customers could decide the degree of price and operating reliability that they desire in the same way that they decide the protection and deductibles that they want in an automobile insurance policy.
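The tradable-contract idea can be made concrete with a toy payoff model. Everything here is illustrative: the function name and the numbers are assumptions, not the chapter’s. The point is only that such a contract behaves like an option, so the customer ends up with the better of consuming or reselling.

```python
def contract_payoff(contract_price, own_value, spot_price, mwh):
    """Customer holds a tradable right to buy mwh at contract_price.
    If the spot price exceeds the value the customer places on the power,
    it profits more by reselling the right; otherwise it consumes.
    Either way the payoff is max(own_value, spot_price) - contract_price."""
    if spot_price > own_value:
        return (spot_price - contract_price) * mwh  # resell into the market
    return (own_value - contract_price) * mwh       # consume as planned

# A low-value customer resells during a $300/MWh price spike...
low = contract_payoff(contract_price=50, own_value=80, spot_price=300, mwh=10)
# ...while a high-value customer keeps consuming.
high = contract_payoff(contract_price=50, own_value=400, spot_price=300, mwh=10)
print(low, high)  # 2500 3500
```

The same payoff logic, seen from the grid’s side, is what would push the operator toward distributed resources or selective curtailment instead of blanket cutoffs.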

If the grid has to commit itself, contractually, to provide a level of service, it will have to arrange the necessary infrastructure and operating services to do so. The grid, however, remains a monopoly. Distribution utilities and other suppliers have no choice but to deal with the grid. Therefore, the grid will remain regulated, for some time to come.

Regulation, though, should encourage the grid to respond to the demands of its customers rather than vice versa, and fix responsibility for performance on the grid, without the escape clauses that would allow it to dump the cost of errors on consumers. In order to promote innovative solutions, the regulations should emphasize the end product, reliability, rather than specific rules designed to produce reliability. In other words, the grid should earn rewards for producing results rather than for simply following rules. European regulatory frameworks may provide the best models. The regulator would order a realistic, multi-year assessment of needs, examine expectations of cost, and then set rates for the period, with the requirement that the grid must meet contractual obligations, but without specifying the means of meeting those obligations. Customers care about results. Managers – not regulators – get paid to figure out how to produce those results.


Of course, there is no such entity as the grid. No organization can take the responsibility for providing a package of transmission and reliability services on a long-term basis. Regulators chose not to create such an entity. But the government did put the RTO in the center of the action. Since the RTO does operate the network, it seems the most likely candidate to take charge, which means taking responsibility for its actions and for those of the firms that it supervises. It is the RTO that should guarantee reliability. The regulatory regime should impose a pricing formula on the RTO that makes it responsible for all costs within its control, rather than allowing it to pass on costs, because without such a provision, the RTO could pay extravagantly for the services it would need to assure reliability, thereby vitiating the discipline of a price-oriented regulatory policy. The price formula should allow adjustments only for inflation and to promote productivity, thereby encouraging the RTO to pay attention to cost of inputs.

The RTO, as presently constituted, probably could not accept such responsibilities. It lacks the capital required to take risks, an orientation to customer needs, or any fiscal responsibility to those served or to those under its operating control. A single, for-profit owner of assets and operator – a stand-alone for-profit transmission business – would have the capability to take full responsibility, but the regulatory establishment and a large part of the industry have made clear that they will not accept such an entity. Given that attitude, perhaps they would accept a compromise: transform the RTO into a for-profit company that would operate as the equivalent of a mutual insurance company, owned, in effect, by its policyholders, in this case transmission owners. The new RTO would offer contracts for its services, tailored to the needs of users, with premium payments for premium services. (This is no different, in concept, from transmission rights.) It would operate the network in a way that met the requirements of customers, but when reliability fell below the contracted level, it would have to pay customers as agreed in the contracts. The new RTO would accumulate reserves to cover future risks, and it would, undoubtedly, reinsure the risks with commercial insurance carriers. It could make payments to those who take steps to improve reliability. And it would pay dividends to policyholders, in the form of reduced premiums, if it succeeded in reducing the cost of providing reliable service. Since the new RTO would retain a monopoly, the regulator may have to control the rates charged, although strict regulation may not be required because the new RTO’s customers would recover overcharges by means of the dividend.

Conclusion

Regulators, for all practical purposes, determined industry structure in 1996, selecting a compromise that seemed to satisfy the loudest stakeholders. The structure will not go away. It fragments the management of the network. It diffuses responsibility. Its provisions read like the what-not-to-do page of a primer on management. It seems not to have improved reliability. It has a spotty record for attracting capital. Oddly enough, the costs of fixing the network appear small in comparison with the costs of unreliability borne by consumers. The inability to execute a fix is a tribute to the decision-making process put in place, not to the cost of the fix.

The grid needs an organization that evaluates risks, considers what customers want, and arranges investment where necessary to fill gaps left by imperfectly conceived market mechanisms. Incentives to operators, owners, and consumers could improve reliability. Providing consumers with the reliability that they want will require more than a rewrite of the operating rules. It will require a careful look at what works elsewhere. It will require a willingness to test market structures before attempting to launch them on a nationwide scale. It will require humility in high places. It might require the regulatory equivalent of an earthquake. Without all of the above, the transmission sector will continue to move in the direction of imposed mandates, inefficient operation, and command-and-control. Maybe it is there already.

Notes

1. Federal Energy Regulatory Commission, Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities and Transmitting Utilities, Final Rule, Docket Nos. RM 95-8-000 and RM 94-7-001, Order No. 888 (April 24, 1996).

2. Awerbuch, Shimon, Leonard S. Hyman and Andrew Vesey, Unlocking the Benefits of Restructuring: A Blueprint for Transmission (Vienna, VA: Public Utilities Reports, 1999), p. 4.

3. Federal Energy Regulatory Commission, Regional Transmission Organizations, Final Rule, Docket No. RM 99-2-000, Order 2000 (December 20, 1999).

4. Awerbuch, Hyman and Vesey, ibid.

5. Federal Energy Regulatory Commission, Remedying Undue Discrimination through Open Access Transmission Service and Standard Electricity Market Design, Notice of Proposed Rulemaking, RM 01-12 (July 31, 2002).

6. Federal Energy Regulatory Commission, Proposed Pricing Policy for Efficient Operation and Expansion of Transmission Grid, Docket No. PL 03-01-000 (January 15, 2003).

7. Federal Energy Regulatory Commission, White Paper Wholesale Power Market Platform (April 28, 2003).

8. Hyman, Leonard S., “A Financial Postmortem: Ten Years of Electricity Restructuring,” Public Utilities Fortnightly, November 15, 2003, 13–14.

9. Diamond, Mitchell, “Prometheus Unbound – Electricity in the Era of Competition,” Presented at Smith Barney Energy Conference, Miami, FL, February 7, 1997.

10. Amin, Massoud, “Security & Resilience of Energy Infrastructure,” Presented at Energy 2003 Exposition, Orlando, FL, August 17–20, 2003.

11. Rassenti, Stephen J., Vernon L. Smith and Bart J. Wilson, “Using Experiments to Inform the Privatization/Deregulation Movement in Electricity,” Presented at IFREE Academic Presentation, Tucson, AZ, May 5, 2001.

12. Fama, James, “Transmission Investment,” Presented at INFOCAST Transmission Summit, Washington, DC, January 28, 2004.

13. Gellings, Clark, “What Investment Will Be Needed to Enable a Fully Functional Power Delivery System,” Presented at INFOCAST Transmission Summit, Washington, DC, January 28, 2004.

14. Hogan, William W., “Electricity Transmission Investment: Theory and Practice,” Presented at INFOCAST Transmission Summit, Washington, DC, January 28, 2003.


15. “FERC Pushes Ahead Despite Stiff Resistance,” EEnergy Observer, January 2004, p. 7.

16. Reader’s Digest Oxford Complete Wordfinder (Pleasantville, NY: Reader’s Digest, 1996), p. 747.

17. Reader’s Digest Oxford, op. cit., p. 409.

18. Averch, Harvey and Leland Johnson, “Behavior of the Firm under Regulatory Constraint,” American Economic Review, December 1962, 52, 1053–1069.

19. More than 40 years ago, economist Clair Wilcox wrote that “… regulation has shown little interest … in the promotion of efficiency…. The hand of regulation has been lax. Operating expenses have not been tightly controlled…. Facilities have been overpriced.” And that was written before the skyrocketing costs in the subsequent two decades. Clair Wilcox, Public Policies Toward Business (Homewood, IL: Richard D. Irwin, 1960), p. 558. For a more recent discussion, see Awerbuch, Hyman and Vesey, op. cit., pp. 23–24.

20. See Philip Chantler, The British Gas Industry: An Economic Study (Manchester: Manchester University Press, 1938); Irvin Bussing, Public Utility Regulation and the So-Called Sliding Scale (NY: Columbia University Press, 1936).

21. Smith, Adam, An Inquiry into the Nature and Causes of the Wealth of Nations (NY: Modern Library, 1937), p. 14.

22. “Empirical evidence indicates that firms under partial PBR [performance index regulation] gain no more efficiency than firms under traditional ROR [rate of return regulation].” Awerbuch, Hyman and Vesey, op. cit., p. 149.

23. For a discussion, see Leonard S. Hyman, “The Next Big Crunch: T&D Capital Expenditures,” R.J. Rudden Associates, 2003.

24. Kahn, Alfred E., The Economics of Regulation: Principles and Institutions, Vol. II (Santa Barbara, CA: John Wiley & Sons, 1971), pp. 325–326.

25. Treacy, Michael and Fred Wiersema, The Discipline of Market Leaders (Reading, MA: Addison-Wesley, 1995), p. 5 (uncorrected page proof).

26. Federal Power Commission v. Hope Natural Gas Co., 320 US 591.

27. Hyman, Leonard S., “Investing in the ‘Plain Vanilla’ Utility,” Energy Law Journal, 24(1), 2003, 1–32.

27. Hyman, Leonard S., “A Financial Postmortem: Ten Years of Electricity Restructuring,” Public Utilities Fortnightly, November 15, 2003, 10–15.

28. Smith, Vernon L. and Lynne Kiesling, “Demand, Not Supply,” The Wall Street Journal, August 20, 2003, Opinion page.

29. EPRI, Electricity Sector Framework for the Future (Palo Alto: Electric Power Research Institute, August 6, 2003).

30. “National Grid believes independence is key to driving efficiency in transmission business,” Electric Transmission Week, March 10, 2003, 1–2.

31. Awerbuch, Hyman and Vesey, op. cit.; EPRI, op. cit.; Hyman, “The Next Big Crunch”; Amin, op. cit.

32. Hyman, “The Next Big Crunch.”

33. Amin, op. cit.

34. Reader’s Digest Oxford, op. cit., p. 1270.


35. Reader’s Digest Oxford, op. cit., p. 1269.

36. Terhune, Harry, “Transmission Planning and Reliability,” Presented at EEI Transmission Business School, Chicago, IL, June 23–26, 2003, p. 9.

37. Sally Hunt, “Value of lost load estimated at $2000–5000 per mWh,” Making Competition Work in Electricity (NY: John Wiley & Sons, 2002), p. 102.

38. EPRI, op. cit., Vol. I, p. 40.

39. Number of bottlenecks from Shmuel Oren, “Market Design,” Presented to EEI Transmission School, Chicago, IL, June 23–26, 2003.

40. For calculations, see Hyman, “The Next Big Crunch.”


CHAPTER 5

The DCF Approach to Capital Budgeting Decision-Making

Diane M. Lander

Department of Finance and Economics, Southern New Hampshire University, Manchester, New Hampshire, USA

Karyl B. Leggio

Henry W. Bloch School of Business and Public Administration, University of Missouri at Kansas City, Kansas City, MO, USA

Introduction

In the capital budgeting process, management must decide which long-term and, oftentimes, high dollar assets the firm is going to acquire. Such decisions are based both on the firm’s strategic plan and expectations and the resulting asset valuations and risk assessments. The assets management decides to acquire may be purchased intact from other firms for a price or they may be manufactured in-house for a building cost. Sometimes an asset is acquired by purchasing another firm in its entirety.

Finance academicians have long proposed that corporate managers use a discounted cash flow (DCF) approach for making capital budgeting decisions.1 This traditional valuation framework ties directly to finance theory, where the objective of corporate management decision-making is stated to be maximizing the value of the firm, and focuses on what finance considers to be the most, maybe even the only, important valuation factor – the present value (PV) of expected cash flows. According to finance, the value of any asset – real, as in a factory, or financial, as in a share of common stock – is determined by the magnitude, timing, and risk of the after-tax net cash flows the asset is expected to generate over its life.

1For a survey and discussion of managerial capital budgeting practices, see Farragher, Edward J., Robert T. Kleiman, and Anandi P. Sahu. Current Capital Investment Practices, Engineering Economist, Volume 44 Issue 2, 1999, 137–150.
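The valuation principle just stated – value determined by the magnitude, timing, and risk of expected cash flows – is conventionally written as follows (a standard textbook formula, supplied here for reference rather than reproduced from the source):

```latex
V_0 = \sum_{t=1}^{T} \frac{E[\mathit{FCF}_t]}{(1+r)^t}
```

where $E[\mathit{FCF}_t]$ is the expected free cash flow in period $t$, $r$ is a discount rate reflecting the riskiness of those flows, and $T$ is the asset’s life (with a terminal value added in period $T$ for going concerns).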

The DCF Approach

The DCF approach to capital budgeting is mathematically relatively straightforward, and results in determining the net value of a project, in today’s dollars, adjusted for risk. A six-step DCF valuation process is described below. Others may suggest, for example, a five-step process or a seven-step process, but, regardless of the number of steps delineated, the tasks to complete are the same.

1. The firm estimates, based on current expectations about future market conditions, the revenues and costs both relevant to the project and incremental to the firm. A capital budgeting analysis is not the place to bury revenues and costs relating to other firm activities. Sunk and allocated project costs, although needed for purposes of full project accounting, are not included because they are not cash flows incremental to the firm. These revenue and cost forecasts also should be adjusted for inflation.

2. From the forecasts, the firm creates pro forma accounting statements, which are then used to derive the per period free cash flows (FCFs) expected to occur over the forecast period.2 Given that FCFs are defined to be net after-tax cash flows available to pay all providers of capital, FCFs are cash flows (i.e., not earnings) that account for all operating and investing activity (short-term and long-term), but do not account for any financing activity (e.g., interest expense, debt repayment, dividends, new stock issued). The pro forma statements are also used for a financial analysis (common size statements, ratios, etc.) to verify the earnings based viability and financial performance of the project, or of the firm given the project.

3. The firm next determines a final, or terminal, value. If a project is finite lived, a net salvage value (NSV), representing the salvage value net of taxes, is computed. If a project is infinite lived, a horizon value (HV), representing the value of the expected FCFs from the end of the forecast period on, is computed.

4. The firm discounts to the present the per period FCFs and the terminal value, taking into account the time value of money and the riskiness of the FCFs. That is, each cash flow is discounted accounting for both when the cash flow will be received and how risky the cash flow is. When the project has the same risk level as the firm overall, the discount rate appropriate to use with FCFs is the firm’s weighted average cost of capital (WACC). The WACC represents the firm’s on average cost of capital, and accounts for all (after-tax) financing effects (i.e., returns to the debt holders, preferred stockholders, and common stock holders). Adding up the present values of the individual cash flows results in a sum, called the PV of the project, that represents the value of the project in today’s dollars adjusted for risk.

2There are different types of cash flows (e.g., free cash flows, flows to equity). The type of cash flow most often associated with capital budgeting and most commonly presented in financial management texts is free cash flows.


5. The fifth step is to subtract the initial expenditure, or purchase price, from the project’s PV. If expenditures occur over a period of time, the total PV of the expenditures is subtracted. The resulting value is the project’s net PV (NPV), and the decision rule is to accept all NPV ≥ 0 projects.

a. If the NPV is greater than zero, the PV of the project’s FCFs exceeds the PV of its costs, meaning that the project earns more than enough to pay all providers of capital their expected returns. Because the excess accrues to the common shareholders, NPV positive projects increase firm value and create shareholder wealth.

b. Technically, firms would be indifferent to NPV zero projects because the PV of the project’s FCFs only equals the PV of its costs. In other words, the project earns exactly enough to pay all providers of capital their expected returns, and firm value and shareholder wealth neither increase nor decrease. In practice, however, firms tend to look favorably on NPV zero projects. NPV positive projects are not so easy to find, and NPV zero projects do earn the shareholders their required rate of return.

c. The traditional decision rule says that firms should not accept NPV negative projects. For these projects, the PV of the project’s FCFs is less than the PV of its costs. Since, in this case, the shortfall accrues to the common shareholders, accepting NPV negative projects will decrease firm value and destroy shareholder wealth.

6. The sixth and last step of an NPV analysis is to perform a quantitative risk assessment of the project to determine its value drivers and to find its range of exposure. The long-established and commonly used risk assessment techniques are sensitivity analysis, scenario analysis, and simulation. In practice, sensitivity analysis is most commonly used.3

A sensitivity analysis determines the primary drivers of project value and the range for each that results in positive project NPVs. Then the available slack in the value drivers’ forecasts is compared to the firm’s ability to forecast accurately, giving an indication of confidence in the investment decision. If the available slack in the value drivers’ forecasts exceeds the firm’s ability to forecast accurately, that is good news. If, however, the slack in the value drivers’ forecasts is narrower than the firm’s ability to forecast accurately, the project’s NPV may well turn negative as the project is implemented.
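Steps 4–6 can be sketched in a few lines of code. The cash flows, the 10% WACC, and the terminal value below are hypothetical numbers chosen for illustration, not an example from the text; the loop at the end is a one-variable sensitivity of the kind step 6 describes.

```python
def npv(wacc, initial_outlay, fcfs, terminal_value=0.0):
    """Step 4: discount each per-period FCF (and a terminal value in the
    final period) at the WACC; step 5: subtract the initial expenditure."""
    pv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcfs, start=1))
    pv += terminal_value / (1 + wacc) ** len(fcfs)
    return pv - initial_outlay

FCFS = [300, 350, 400]  # hypothetical per-period free cash flows
base = npv(0.10, 1_000, FCFS, terminal_value=500)
print(round(base, 2))   # 238.17 -> NPV > 0, so accept

# Step 6 (sensitivity): how far can the discount rate drift before NPV < 0?
for rate in (0.10, 0.15, 0.20, 0.25):
    print(rate, round(npv(rate, 1_000, FCFS, 500), 2))
# NPV stays positive through a 20% rate and turns negative near 25%, so
# this hypothetical decision is robust to a large error in the estimated WACC.
```

The same function, re-run with shifted revenue or cost forecasts instead of a shifted rate, gives the value-driver ranges the paragraph above describes.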

Limitations of NPV analysis

The apparent straightforwardness of the DCF approach and NPV calculation may suggest that traditional capital budgeting decision-making is simply a matter of forecasting, discounting, and summing. That impression, in fact, is not an accurate one. In addition, all valuation frameworks have modeling constraints and underlying assumptions, both explicit and implicit, that impose limitations on the analyses and reduce the merit of the resulting valuations. A DCF analysis is no exception.

Below we discuss limitations of the DCF approach that are related to FCFs, discount rates, and option-like project characteristics. For completeness' sake, we should mention

The DCF approach to capital budgeting decision-making 69

3 Farragher, Edward J., Robert T. Kleiman, and Anandi P. Sahu. Current Capital Investment Practices, Engineering Economist, Volume 44, Issue 2, 1999, 143–144 and Table 4.

Ch05-I044949.qxd 5/23/06 11:59 AM Page 69

that there are three additional technical assumptions underlying the DCF approach to capital budgeting. The first is that capital markets are perfect and complete, and the second is that governments do not exist or are neutral. Most managers recognize that these assumptions do not altogether hold, and understand that, although they are needed for the theory, they are often violated in practice. The third technical assumption, that NPV positive projects exist only when firms can exploit temporary competitive advantages, is, for the most part, true. Unless an existing firm in an industry has some kind of market power, such as being a monopoly, new firms will enter a market where there are apparent profits, and eventually compete away any NPV positive opportunities.

Cash flow limitations

Forecasting is a difficult task under any circumstances, even when the best of information about the future is available, and a DCF valuation requires management to forecast a project's FCFs into an uncertain future. Furthermore, project value is not an absolute or constant. As the firm forms new expectations about the future, either because new information becomes available or because of a new way of looking at current information, the firm and other market participants will adjust a project's value.

Two other related cash flow assumptions are that the expected values of the individual cash flows are given and that these expected values are acceptable proxies for the cash flows' distributions. If the expected values are not given, the analysis requires that the relevant distributions for and related subjective probabilities of the individual and uncertain cash flows be known, both of which may be difficult to obtain or to estimate. Second, there are times when the expected values of the future uncertain cash flows may not best represent the cash flows' distributions. For example, if a cash flow's distribution is skewed, the expected value will be different from the mode, and this may have implications for project value. Moreover, when cash flow uncertainty is high, replacing future cash flow distributions with their expected values may lead to errors in the discount rate estimate as well.
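The skewness point can be made concrete with a lognormal cash flow, whose mean, median, and mode all have closed forms. The parameters below are illustrative only, but they show how far the expected value a DCF would plug in can sit from the most likely outcome:

```python
import math

# A skewed (lognormal) cash flow: mean, median, and mode in closed form.
# mu and sigma parameterize ln(CF); the values are illustrative only.
mu, sigma = math.log(100.0), 0.8

mean = math.exp(mu + sigma ** 2 / 2)   # the expected value a DCF would use
median = math.exp(mu)                  # half the outcomes fall below this
mode = math.exp(mu - sigma ** 2)       # the single most likely outcome
```

Here the mean (about 138) is more than twice the mode (about 53), so a DCF built on expected values alone quietly embeds an optimistic "typical" outcome.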

Discount rate limitations

Once the expected FCFs for each period in the life of the project have been calculated, the next step is to discount them. An implicit discount rate assumption of the DCF valuation framework is that the discount rate appropriately adjusts for the time value of money and all relevant risk. A second assumption is that the discount rate is known, constant, and a function of only project risk. In practice, however, determining the appropriate discount rate, and, thus, the discount factors, to apply to the project FCFs is one of the most difficult, and often controversial, aspects of any DCF analysis. The difficulty comes in determining just how much more expected return (i.e., risk premium) is appropriate for a given risky project. An equilibrium asset pricing model, such as the capital asset pricing model (CAPM), could be used, but rarely are there project market prices to use for determining the required inputs, such as the project's beta. Although a replicating, or twin, asset can be used for determining a project's discount rate, such an asset is often not available. Furthermore, project risk is not dependent on just one factor. For example, project risk may also depend on the remaining life of the project, current firm profitability, and the degree to which managers can modify the investment and operating strategy. In other words, the discount rate is more than likely uncertain, time varying, and state and investment and operating strategy dependent.
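For reference, the CAPM mentioned above prices risk as a linear function of beta. A minimal sketch follows, with illustrative inputs that are not from the chapter; the hard part in practice is not this arithmetic but obtaining a defensible project beta.

```python
# CAPM required return: r = rf + beta * (E[Rm] - rf).
def capm_rate(rf, beta, market_premium):
    return rf + beta * market_premium

# Illustrative inputs (not from the chapter): 4% risk-free rate,
# 6% market risk premium, project beta of 1.3.
r_project = capm_rate(rf=0.04, beta=1.3, market_premium=0.06)  # about 11.8%
```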

70 Managing Enterprise Risk

Because determining risk-adjusted discount rates is so problematic, firms often default to using their WACC as a proxy for the opportunity cost of all projects. This simplification typically introduces biases into the capital budgeting decision-making process, and these decision-making biases become systematic.4 For example, a project that is less risky than the firm will be undervalued and may be rejected. If the project is truly an NPV positive project when the correct project discount rate is used, by rejecting the project, the firm loses out on the opportunity to increase firm value and shareholder wealth. On the other hand, a project that is more risky than the firm will be overvalued and may be accepted. If the project is truly an NPV negative project when the correct project discount rate is used, by accepting the project, the firm destroys shareholder wealth.
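The WACC-as-proxy bias can be shown with a toy project. In the hypothetical numbers below, the project is genuinely NPV positive at its correct 8% risk-adjusted rate but appears NPV negative at a 12% firm-wide WACC, so the proxy rule would reject a value-creating project:

```python
# A project less risky than the firm, valued both at the firm's WACC and at
# its correct risk-adjusted rate. All numbers are hypothetical.
def npv(cfs, rate, invest):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs, start=1)) - invest

cfs = [100.0] * 5      # level expected FCFs
invest = 380.0
wacc = 0.12            # firm-wide WACC used as a proxy discount rate
true_rate = 0.08       # the project's actual opportunity cost of capital

npv_at_wacc = npv(cfs, wacc, invest)       # about -19.5: project looks bad
npv_correct = npv(cfs, true_rate, invest)  # about +19.3: project creates value
```

The mirror-image error, accepting an overly risky project that the WACC flatters, follows by symmetry.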

Option-like project characteristics

We now understand that, in certain circumstances, the DCF technique undervalues projects. Academicians first accounted for this by focusing on the possibility of downwardly biased expectations of future cash flows (putting too high a probability on low cash flows) or on the possibility of excessively high discount rates (overly adjusting for risk). But even if managers make errors in estimating project cash flows or determining discount rates, there is no reason to believe that such errors are always in the conservative direction.

The likely source of project undervaluation is the DCF technique itself – a DCF analysis is linear in nature and, at the time of the analysis, the future is taken to be static. The DCF approach assumes that once the decision to invest is made and the future uncertainties start to reveal themselves, management will not change the project's investment or operating strategy. In other words, managers are passive and the project cannot be expanded, contracted, re-directed, temporarily shut down, or abandoned. But this is just not how managers behave. In practice, projects are actively managed and changed as the investment and operating environments change.

Moreover, managers have always recognized that some projects appearing to have a zero, or even negative, NPV may still add value to the firm, and have justified investing in such projects by claiming that the projects have strategic, or hidden, value. That is, the negative NPV project carries with it some type of strategic option – production, growth, abandon, defer – that has value, but this option value is not included in the project's NPV.

Production options account for management's ability to alter the operating scale by expanding or contracting capacity as demand conditions change or to switch production inputs or outputs in response to price changes. Such actions are common and well understood, but not modeled in a traditional DCF analysis.

Growth options are value-creating actions that managers can take once they see how the future is unfolding, and are more than just expanding the current lines of business or production capacity. Look no further than the computer industry. The very first line of computers was probably not a positive NPV venture. If demand for these first computers were

4 Rubinstein, Mark E. A Mean-Variance Synthesis of Corporate Finance Theory, Journal of Finance, Volume 28, Issue 1, 1973, 167–181 (graph on page 172).

low, the firm would stop the project and move on to new and different projects. On the other hand, if these first computers were successful, then later versions and generations could be successful, and highly profitable too.

In fact, the mainframe computer being successful opened the opportunity to develop the minicomputer, and the minicomputer being successful opened the opportunity to develop the desktop computer, and the desktop computer being successful opened the opportunity to develop the laptop computer.

Similarly, if online book buying is successful for Amazon.com, then perhaps online CD buying will be too, and then online electronics buying, and then online tools and hardware buying, and so on.

The third type of strategic option is the abandonment option. Abandonment options allow managers to take actions that protect the firm from (additional) loss, and, similar to growth options, are more than just contracting the current lines of business or production capacity. An example is when management is able to prematurely terminate a project if sufficient market demand does not develop, or does not develop soon enough. The Edsel and New Coke did not have long market lives. A classic example is that of the Research & Development process where, for example, management can terminate the production of a new drug if the clinical trials show that the new drug is not effective or has severe side effects.

The final strategic option we discuss is the option to defer.5 A DCF analysis assumes that projects are totally reversible and now-or-never opportunities. The totally reversible assumption says that, if the firm goes ahead with the project, the firm can, at any time, stop the project and recoup everything it has invested to that point. That is, the firm can become whole as if it never invested in the project at all. This is typically not true for real world projects. There is usually some cost to undertaking a project that cannot be recouped, and, if there are sunk costs to investing, managers must be careful about when any investment is made. But the now-or-never assumption means that a DCF analysis typically considers only whether a project should be undertaken now or rejected – forever. It does not consider whether the firm has the opportunity of investing this year or waiting until next year to invest; it does not consider when to invest. A DCF analysis simply ignores any invest-later windows of opportunity.6

Nevertheless, managers do delay making decisions, and they do so because the future is uncertain. Delaying allows the uncertain future to resolve to some extent and the managers then to obtain more information. And if management can wait before deciding whether or not to invest, this ability to wait has value that must somehow be taken into account.7 So a DCF valuation that includes no consideration for this option to wait not only tends to undervalue a project, but also may direct the firm to invest in the project too

5 Dixit, A.K. and Pindyck, R.S. Investment Under Uncertainty. Princeton, NJ: Princeton University Press, 1994.
6 For a detailed example, see Feinstein, Steven P. and Diane M. Lander. A Better Understanding of Why NPV Undervalues Managerial Flexibility, The Engineering Economist, Volume 47, No. 3, 2002, 418–435.
7 Waiting to decide also can be costly. For a detailed example of the costs to waiting, see Diane M. Lander, Do Foregone Earnings Matter When Modeling and Valuing Real Options: A Black-Scholes Teaching Exercise, Financial Practice and Education, Volume 10, No. 2, 2000, 121–127.

early. Consider a project that has a negative NPV given that investment must take place now, but may have a positive NPV if investment in the project is deferred to the future. Renewable energy might be such an example: a renewable energy generation plant is more likely to be valuable in the future when traditional energy supplies are almost exhausted.
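A stylized two-state example (all numbers hypothetical) illustrates the deferral logic: investing today is NPV negative, but waiting a year and investing only in the good state has positive value. For simplicity, the sketch discounts the actual expected payoff at an assumed 10% rate rather than performing a proper risk-neutral valuation.

```python
# Two-state option-to-defer sketch (all numbers hypothetical).
invest = 110.0
value_now = 100.0
npv_now = value_now - invest   # -10: a now-or-never rule says reject

p_up, v_up, v_down = 0.5, 150.0, 60.0   # possible project values next year
r = 0.10                                 # assumed discount rate for the payoff

# With the right to wait, the firm invests next year only in the good state.
value_of_waiting = (p_up * max(v_up - invest, 0.0) +
                    (1 - p_up) * max(v_down - invest, 0.0)) / (1 + r)
# value_of_waiting is about 18.2 > 0 > npv_now: deferral rescues the project.
```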

Since firms are really just a collection of projects, a firm itself can be viewed as a project and valued using the DCF technique, which is how many market analysts decide which stocks to recommend and many Chief Financial Officers (CFOs) decide which firms to merge with or acquire. Yet, just as the DCF technique, in certain circumstances, undervalues projects, it similarly undervalues firms. That is, the market value of a firm most often exceeds its DCF value because of its production, growth, abandonment, and defer strategic options.

Considering only growth options suggests that the market value of a firm is composed of two parts. The first component is the DCF value of the firm's assets currently in place. That is, holding the firm's existing asset level constant, the DCF value of the firm is the PV of the future FCFs the firm is expected to generate from this level of assets. The second component of a firm's market value, then, is the value of its growth opportunities. That is, allowing the firm's asset base to expand in the future, the value of the firm's growth opportunities is the PV of the future FCFs the firm is expected to generate from additional assets. Carl Kester of Harvard University estimated that, for large publicly traded firms, growth opportunities not taken into account in a DCF firm valuation represent between 7% and 88% of a firm's market value, depending on the firm (industry) and the assumed earnings capitalization rate.8
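The two-component decomposition can be sketched directly. With a hypothetical market price and an assumed capitalization rate, assets in place are valued as a no-growth perpetuity and the remainder of the price is attributed to growth opportunities:

```python
# Splitting a hypothetical market price into assets-in-place value plus the
# present value of growth opportunities (PVGO). All inputs are illustrative.
eps = 5.0      # expected earnings from assets currently in place
r = 0.10       # assumed earnings capitalization rate
price = 80.0   # observed market price

value_in_place = eps / r         # no-growth perpetuity value: 50.0
pvgo = price - value_in_place    # implied value of growth options: 30.0
growth_share = pvgo / price      # 37.5% of price, inside Kester's 7-88% range
```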

Conclusion

The DCF approach to capital budgeting ties directly to finance theory, focuses on cash flows as the source of value, and, despite its limitations and shortcomings, continues to be taught in business schools and is widely used in practice.9 Since managers are rarely willing to spend time, effort, and resources on pointless activities, managers must see practical value in using DCF analyses. Yet, in practice, firms can determine NPVs, but whether firms do or do not use the resulting valuations in their actual decision-making is difficult to assess.10

The real question is: Does a DCF analysis yield a near-optimal decision when used for capital budgeting decision-making? Do firms that use a DCF analysis for valuing projects and follow the NPV decision rule tend to outperform firms that do not? That is, will firms using the DCF approach make better capital budgeting decisions? Will these firms tend to invest in value-adding projects and reject value-destroying projects more often than firms not using the DCF approach? Maybe not.

8 Kester, W.C. "Today's Options for Tomorrow's Growth." Harvard Business Review (March–April) 1984, 153–160.
9 Farragher, Edward J., Robert T. Kleiman, and Anandi P. Sahu. Current Capital Investment Practices, Engineering Economist, Volume 44, Issue 2, 1999, 137–150.
10 Alessandri, Todd. A Portfolio of Decision Processes. Unpublished dissertation. University of North Carolina, Chapel Hill, 2002.

A DCF analysis, like any other valuation framework, has modeling constraints and underlying assumptions that impose limitations on the analyses and reduce the merit of the resulting valuations. But it also is well accepted that the traditional DCF methodology is often not an adequate capital budgeting decision-making framework, systematically undervaluing projects (and firms) when there are future discretionary investment opportunities – when there is a wide degree of managerial flexibility.

We now know that the DCF approach to capital budgeting (1) yields nearly optimal decisions only in relatively certain environments; (2) incorrectly assumes investments are reversible or now-or-never opportunities; (3) incorrectly assumes firms hold real assets passively; and (4) cannot incorporate project strategic value. Brennan and Schwartz state, "… the classical approach may be likened to valuing a stock option contract while ignoring the right of the holder not to exercise when it is unprofitable."11

What is missing in the DCF approach is the ability to value managerial flexibility – to value real options. When dealing with real options, a project investment decision is to be made, but that decision can be deferred to a later date, and, since the future is uncertain, most of the time, more information can be gained by waiting. If the real options present in a project are sufficiently valuable, a positive NPV can become more positive, or a negative NPV can become positive, once the value of the real options is taken into account. Yet these real options are not accounted for, nor can they be accurately valued, using a DCF valuation framework.

The traditional DCF approach to capital budgeting assumes that the payoffs to a project are symmetrical and the project will absorb the effect of whichever outcome occurs. The payoff to an option, however, is not symmetric. For an option, the upside potential remains as is, but the ability to walk away if a negative outcome occurs truncates, and may even eliminate, any downside effect. This asymmetric, or kinked, payoff is the classic characteristic of all options, financial and real. By using option pricing techniques, strategic options embedded in projects can be modeled and valued, and projects, themselves, can be modeled and valued as real options as well. The real options approach to capital budgeting gives academicians, strategists, managers, and analysts a way to extend the traditional DCF analysis and account for project strategic value.
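The kinked payoff is easy to see numerically. Below, four equally likely (hypothetical) terminal values are run through a committed project's symmetric payoff and an option's truncated payoff; walking away in the bad states raises the expected payoff even though the underlying outcomes are identical.

```python
# Symmetric "commit now" payoff versus the truncated option payoff,
# over four equally likely hypothetical terminal values.
invest = 100.0
outcomes = [60.0, 90.0, 120.0, 150.0]

expected_commit = sum(v - invest for v in outcomes) / len(outcomes)            # 5.0
expected_option = sum(max(v - invest, 0.0) for v in outcomes) / len(outcomes)  # 17.5
```

The difference, 12.5 here, is the value of the right not to exercise, which is exactly what a plain DCF leaves out.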

11 Brennan, M.J. and Schwartz, E.S. A New Approach to Evaluating Natural Resource Investments. Midland Corporate Finance Journal, 3 (Spring), 1985, 37–47.

CHAPTER 6

Real Options and Monte Carlo Simulation versus Traditional DCF Valuation in Layman's Terms

Johnathan Mun, Ph.D.

This chapter is contributed by Dr. Mun, Founder and CEO of Real Options Valuation, Inc. (www.realoptionsvaluation.com), a software, training, and consulting firm specializing in risk analysis, simulation, forecasting, and real options analysis. He is also a finance and research professor, author of eight books on risk, options, and decision analysis, and is the creator of the Risk Simulator and Real Options SLS software briefly described in this chapter. He can be contacted at [email protected].

Introduction

This chapter provides a contrast between traditional valuation methods and the new generation of valuation analytics, namely real options analysis and Monte Carlo simulation. In addition, it briefly illuminates the advantages as well as the pitfalls of using each methodology. It should be clear to the reader that the new analytics described here do not completely replace the traditional approaches. Rather, the new analytics complement and build upon the traditional approaches – in fact, one of the precursors to performing real options and simulation is the development of a traditional model. Thus, do not ignore or discard what is tried and true – traditional approaches are not incorrect, they are simply incomplete when modeled under actual business conditions of uncertainty and risk – but complement them with more advanced analytics to obtain a much clearer view of the realities in business.

There are two major takeaways from this chapter. The first is the fact that real options analysis is not an equation or a set of equations. It is both an analytical process and a decision analysis thought process, which leads us to the second takeaway: 50% of the value of real options is simply thinking about it. Another 25% comes from generating the models and getting the right numbers, and the remaining 25% of the value of real options is explaining the results and insights to senior management, to the person beside you, or to yourself, so that the optimal decisions can and will be made when it is appropriate to do so. Therefore, this chapter will have served its purpose well if it gets the reader to start thinking of the various conditions and scenarios under which these new analytics can be applied.

Ch06-I044949.qxd 5/23/06 12:01 PM Page 75

This chapter starts by reviewing the traditional views of valuation and decision analysis, and moves on to an introduction of real options and Monte Carlo simulation. It is only intended as an introductory view into these new analytics, and is based upon the author's books, Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization (Wiley, 2006), Real Options Analysis: Tools and Techniques, 2nd Edition (Wiley, 2005), Real Options Analysis Course (Wiley, 2003), and Applied Risk Analysis: Moving Beyond Uncertainty (Wiley Finance, 2003). Please refer to these books for more detailed and technical information on performing Monte Carlo simulation, real options analysis, time-series forecasting, and stochastic optimization. These books also present step-by-step details on using the author's Risk Simulator software and Real Options Super Lattice Software (SLS) for running Monte Carlo simulations, time-series forecasting, real options analysis, and optimization.

The traditional views of valuation

Value is defined as the single time-value discounted number that is representative of all future net profitability. In contrast, the market price of an asset may or may not be identical to its value. The terms assets, projects, and strategies are used interchangeably throughout this chapter. For instance, when an asset is sold at a significant bargain, its price may be somewhat lower than its value, and one would surmise that the purchaser has obtained a significant amount of value. The idea of valuation in creating a fair market value is to determine the price that closely resembles the true value of an asset. This true value comes from the physical aspects of the asset as well as the non-physical, intrinsic, or intangible aspects of the asset. Both aspects have the capability of generating extrinsic monetary or intrinsic strategic value. Traditionally, there are three mainstream approaches to valuation, namely, the market approach, the income approach, and the cost approach.

Market Approach

The market approach looks at comparable assets in the marketplace and their corresponding prices, and assumes that market forces will tend to move the market price to an equilibrium level. It is further assumed that the market price is also the fair market value after adjusting for transaction costs and risk differentials. Sometimes a market-, industry-, or firm-specific adjustment is warranted to bring the comparables closer to the operating structure of the firm whose asset is being valued. Such adjustments could include common-sizing the comparable firms, such as performing quantitative screening using criteria that closely resemble the firm's industry, operations, size, revenues, functions, profitability levels, operational efficiency, competition, market, and risks.

Income Approach

The income approach looks at the future profit or free cash flow (FCF) generating potential of the asset and attempts to quantify, forecast, and discount these net FCFs to a present value. The cost of implementation, acquisition, and development of the asset is then deducted from this present value of cash flows to generate a net present value (NPV). Often, the cash flow stream is discounted at a firm-specified hurdle rate, at the weighted

average cost of capital (WACC), or at a risk-adjusted discount rate based on the perceived project-specific risk, historical firm risk, or overall business risk.

Cost Approach

The cost approach looks at the cost a firm would incur if it were to replace or reproduce the asset's future profitability potential, including the cost of its strategic intangibles, if the asset were to be created from the ground up. Although the financial theories underlying these approaches are sound in the more traditional deterministic view, they cannot be reasonably used in isolation when analyzing the true strategic flexibility value of a firm, project, or asset.

Other Traditional Approaches

Other approaches used in valuation, more appropriately applied to the valuation of intangibles, rely on quantifying the economic viability and economic gains the asset brings to the firm. There are several well-known methodologies for intangible-asset valuation, particularly in valuing trademarks and brand names. These methodologies apply the combination of the market, income, and cost approaches described above.

The first method compares pricing strategies and assumes that by having some dominant market position by virtue of a strong trademark or brand recognition – for instance, Coca-Cola – the firm can charge a premium price for its product. Hence, if we can find market comparables producing similar products, in similar markets, performing similar functions, facing similar market uncertainties and risks, the price differential would then pertain exclusively to the brand name. These comparables are generally adjusted to account for the different conditions under which the firms operate. This price premium per unit is then multiplied by the projected quantity of sales, and the outcome after performing a discounted cash flow (DCF) analysis will be the residual profits allocated to the intangible. A similar argument can be set forth using operating profit margin in lieu of price per unit. Operating profit before taxes is used instead of net profit after taxes because it avoids the problems of comparables having different capital structure policies or carry-forward net operating losses and other tax-shield implications.
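Mechanically, the price-premium method reduces to simple arithmetic: a per-unit premium over the comparable, applied to projected volumes, then discounted. All inputs in the sketch below are hypothetical.

```python
# Price-premium brand valuation sketch (all inputs hypothetical): the per-unit
# premium over a comparable unbranded product, times projected unit sales,
# discounted to present value.
premium_per_unit = 0.50                              # branded minus comparable price
units = [1_000_000 * 1.03 ** t for t in range(5)]    # 3% annual volume growth
rate = 0.10                                          # assumed discount rate

brand_value = sum(premium_per_unit * q / (1 + rate) ** (t + 1)
                  for t, q in enumerate(units))
# brand_value comes out at roughly $2.0 million under these assumptions
```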

Another method uses a common-size analysis of the profit and loss statements between the firm holding the asset and market comparables. This takes into account any advantage from economies of scale and economies of scope. The idea here is to convert the income statement items to a percentage of sales, and balance sheet items to a percentage of total assets. In addition, in order to increase comparability, the ratio of operating profit to sales of the comparable firm is then multiplied by the asset-holding firm's projected revenue structure, thus eliminating the potential problem of having to account for differences in economies of scale and scope. This approach uses a percentage of sales, return on investment, or return on asset ratio as the common-size variable.

Practical issues using traditional valuation methodologies

The traditional valuation methodology relying on a DCF series does not get at some of the intrinsic attributes of the asset or investment opportunity. Traditional methods assume that

Real options and Monte Carlo simulation versus DCF valuation 77

the investment is an all-or-nothing strategy and do not account for managerial flexibility, the concept that management can alter the course of an investment over time when certain aspects of the project's uncertainty become known. One of the value-added components of using real options is that it takes into account management's ability to create, execute, and abandon strategic and flexible options.

There are several potential problem areas in using a traditional DCF calculation on strategic optionalities. These problems include undervaluing an asset that currently produces little or no cash flow, the non-constant nature of the WACC discount rate through time, the estimation of an asset's economic life, forecast errors in creating the future cash flows, and insufficient tests for plausibility of the final results. Real options, when applied using an options theoretical framework, can mitigate some of these problematic areas. Otherwise, financial profit-level metrics like NPV or IRR (internal rate of return) will be skewed and will not provide a comprehensive view of the entire investment value. However, the DCF model does have its merits.

Advantages of using the DCF

● Clear, consistent decision criteria for all projects.

● Same results regardless of risk preferences of investors.

● Quantitative, decent level of precision, and economically rational.

● Not as vulnerable to accounting conventions (depreciation, inventory valuation).

● Factors in the time value of money and basic risk structures.

● Relatively simple, widely taught, widely accepted.

● Simple to explain to management: “If benefits outweigh the costs, do it!”

In reality, there are several issues that an analyst should be aware of prior to using DCF models, as shown in Table 6.1. The most important aspects include the business reality that risks and uncertainty abound when decisions have to be made and that management has the strategic flexibility to make and change decisions as these uncertainties become known over time. In such a stochastic world, using deterministic models like the DCF may potentially grossly underestimate the value of a particular project. A deterministic DCF model assumes at the outset that all future outcomes are fixed. If this is the case, then the DCF model is correctly specified, as there would be no fluctuations in business conditions that would change the value of a particular project. In essence, there would be no value in flexibility. However, the actual business environment is highly fluid, and if management has the flexibility to make appropriate changes when conditions differ, then there is indeed value in flexibility, a value that will be grossly underestimated using a DCF model.

Figure 6.1 shows a simple example of applying DCF analysis. Assume there exists a project that costs $1000 to implement at Year 0 that will bring in the following projected positive cash flows in the subsequent 5 years: $500, $600, $700, $800, and $900. These projected values are simply subjective best-guess forecasts on the part of the analyst. As can be seen in Figure 6.1, the timeline shows all the pertinent cash flows and their respective discounted present values. Assuming the analyst decides that the project should be discounted at a 20% risk-adjusted discount rate using a WACC, we calculate the NPV to be $985.92

and a corresponding IRR of 54.97%.1 Furthermore, the analyst assumes that the project will have an infinite economic life and assumes a long-term growth rate of cash flows of 5%. Using the Gordon constant growth model (GGM), the analyst calculates the terminal value of the project's cash flow at Year 5 to be $6300. Discounting this figure for 5 years at the risk-adjusted discount rate and adding it to the original NPV yields a total NPV with terminal value of $3517.75.

The calculations can all be seen in Figure 6.1, where we further define w as the weights, d for debt, ce for common equity, and ps for preferred stocks, FCF as the free cash flows, tax as the corporate tax rate, g as the long-term growth rate of cash flows, and rf as the risk-free rate.
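The figures quoted above can be checked directly. The sketch below recomputes the NPV, the IRR (by bisection rather than Excel's IRR function), and the Gordon-growth terminal value from the chapter's stated inputs; only the bisection helper is our own addition.

```python
# Recomputing the Figure 6.1 example: $1000 outlay at Year 0, five projected
# cash flows, a 20% risk-adjusted rate, and a 5% Gordon-growth terminal value.
cost = 1000.0
cfs = [500.0, 600.0, 700.0, 800.0, 900.0]
wacc, g = 0.20, 0.05

npv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cfs, start=1)) - cost
# npv is about 985.92, matching the chapter.

# GGM terminal value at Year 5, then discounted back five years.
terminal_value = cfs[-1] * (1 + g) / (wacc - g)        # 6300
npv_with_tv = npv + terminal_value / (1 + wacc) ** 5   # about 3517.75

def irr(cfs, cost, lo=0.0, hi=10.0, tol=1e-9):
    """Bisect for the discount rate that drives NPV to zero (about 54.97% here)."""
    f = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cfs, start=1)) - cost
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```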

Table 6.1. Disadvantages of DCF: assumptions versus realities.

DCF assumption: Decisions are made now, and cash flow streams are fixed for the future.
Reality: Uncertainty and variability in future outcomes. Not all decisions are made today, as some may be deferred to the future, when uncertainty becomes resolved.

DCF assumption: Projects are "mini firms," and they are interchangeable with whole firms.
Reality: With the inclusion of network effects, diversification, interdependencies, and synergy, firms are portfolios of projects and their resulting cash flows. Sometimes projects cannot be evaluated as stand-alone cash flows.

DCF assumption: Once launched, all projects are passively managed.
Reality: Projects are usually actively managed throughout the project life cycle, including checkpoints, decision options, budget constraints, etc.

DCF assumption: Future FCF streams are all highly predictable and deterministic.
Reality: It may be difficult to estimate future cash flows, as they are usually stochastic and risky in nature.

DCF assumption: The project discount rate used is the opportunity cost of capital, which is proportional to non-diversifiable risk.
Reality: There are multiple sources of business risks with different characteristics, and some are diversifiable across projects or time.

DCF assumption: All risks are completely accounted for by the discount rate.
Reality: Firm and project risk can change during the course of a project.

DCF assumption: All factors that could affect the outcome of the project and value to the investors are reflected in the DCF model through the NPV or IRR.
Reality: Due to project complexity and so-called externalities, it may be difficult or impossible to quantify all factors in terms of incremental cash flows. Distributed, unplanned outcomes (e.g., strategic vision and entrepreneurial activity) can be significant and strategically important.

DCF assumption: Unknown, intangible, or immeasurable factors are valued at zero.
Reality: Many of the important benefits are intangible assets or qualitative strategic positions.

1The NPV is simply the sum of the present values of future cash flows less the implementation cost. The IRR is the implicit discount rate that forces the NPV to be zero. Both calculations can be easily performed in Excel using its "NPV( )" and "IRR( )" functions.


Even with a simplistic DCF model like this, we can see many shortcomings worthy of mention. Figure 6.2 lists some of the more noteworthy issues. For instance, the NPV is calculated as the present value of future net FCF (benefits) less the present value of implementation costs (investment costs). However, in many instances, analysts tend to discount both benefits and investment costs at a single identical market risk-adjusted discount rate, a hurdle rate, or the WACC. This, of course, is flawed.

The benefits should be discounted at a market risk-adjusted discount rate or using the WACC, but the investment cost should be discounted at a reinvestment rate similar to the risk-free rate. Cash flows that carry market risks should be discounted at the market risk-adjusted rate, while cash flows that carry private risks should be discounted at the risk-free rate. This is because the market will only compensate the firm for taking on market risks, not private risks. It is usually assumed that the benefits are subject to market risks (because benefit FCF depends on market demand, market prices, and other exogenous market factors) while investment costs depend on internal private risks (such as the firm's ability to complete building a project in a timely fashion, or the costs and inefficiencies incurred beyond what is projected). On occasion, these implementation costs may also be discounted at a rate slightly higher than the risk-free rate, such as a money-market rate, or at the opportunity cost of being able to invest the sum in another project yielding a particular interest rate. Suffice it to say that benefits and investment costs should be discounted at different rates if they are subject to different risks. Otherwise, discounting the costs at a much higher market risk-adjusted rate shrinks the costs significantly, making the project look more valuable than it actually is.
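A small numerical sketch makes the distortion concrete. The project below is hypothetical (two years of $500 investment, then four years of $400 benefits; the 20% WACC and 5% risk-free rate are assumed purely for illustration):

```python
wacc, rf = 0.20, 0.05                      # assumed market and risk-free rates
costs = {0: 500, 1: 500}                   # investment schedule (private risk)
benefits = {t: 400 for t in range(2, 6)}   # benefit FCF (market risk)

pv_benefits = sum(cf / (1 + wacc) ** t for t, cf in benefits.items())
pv_costs_rf = sum(cf / (1 + rf) ** t for t, cf in costs.items())     # correct
pv_costs_mkt = sum(cf / (1 + wacc) ** t for t, cf in costs.items())  # flawed

npv_correct = pv_benefits - pv_costs_rf
npv_flawed = pv_benefits - pv_costs_mkt
# Discounting the costs at the higher market rate shrinks them, so the
# flawed NPV always looks better than the correct one:
assert npv_flawed > npv_correct
```

On these assumed figures, the single-rate shortcut flatters the project by roughly $60 of NPV.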


Figure 6.1. Applying DCF analysis.


The discount rate that is used is usually calculated from a model, such as the WACC, the capital asset-pricing model (CAPM), a multifactor asset-pricing model, or arbitrage pricing theory (APT); set by management as a requirement for the firm; or defined as a hurdle rate for specific projects. In most circumstances, if we were to build a simple DCF model, the most sensitive variable would usually be the discount rate. The discount rate is also the most difficult variable to quantify correctly. Hence, the discount rate is open to potential abuse and subjective manipulation: a target NPV value can be obtained simply by massaging the discount rate to a suitable level.

In addition, certain input assumptions required to calculate the discount rate are themselves open to question. For instance, in the WACC, the input for the cost of common equity is usually derived using some form of the CAPM. In the CAPM, the infamous beta (β) is extremely difficult to calculate. For financial assets, we can obtain beta through a simple calculation: the covariance between a firm's stock prices and the market portfolio, divided by the variance of the market portfolio. Beta is then a sensitivity factor measuring the co-movement of a firm's equity prices with respect to the market. The problem is that equity prices change every few minutes! Depending on the time frame used for the calculation, beta may fluctuate wildly. In addition, for non-traded physical assets, we cannot reasonably calculate beta this way. Using the beta of a firm's tradable financial assets as a proxy for the beta of a single project within a firm that has many other projects is ill-advised.
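The covariance-over-variance calculation itself is simple; the instability comes from the data window. A sketch with fabricated monthly returns:

```python
def beta(stock_returns, market_returns):
    """Beta = Cov(stock, market) / Var(market), from periodic returns."""
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / (n - 1)
    var = sum((m - mean_m) ** 2 for m in market_returns) / (n - 1)
    return cov / var

# Illustrative (made-up) monthly returns; shift the window and beta moves.
stock = [0.04, -0.02, 0.07, 0.01, -0.05, 0.06]
market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.03]
b = beta(stock, market)  # ~2.23 on this particular window
```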

Figure 6.2. The shortcomings of DCF analysis.


There are risk and return diversification effects among projects, as well as investor psychology and over-reaction in the market, that are not accounted for. There are also other, more robust asset-pricing models that can be used to estimate a project's discount rate, but they require great care. For instance, APT models build upon the CAPM and add further risk factors that may drive the value of the discount rate. These risk factors include maturity risk, default risk, inflation risk, country risk, size risk, non-marketability risk, control risk, minority shareholder risk, and others. Even the firm's CEO's golf score can be a risk hazard (e.g., rash decisions may be made after a bad game, or bad projects may be approved after a hole-in-one in the belief in a lucky streak). The issue arises when one has to decide which risks to include and which to exclude. This is definitely a difficult task, to say the least.2

One other method that is widely used is comparability analysis. By gathering publicly available data on the trading of financial assets of stripped-down entities with similar functions, markets, risks, and geographical locations, analysts can estimate the beta (a measure of systematic risk) or even a relevant discount rate from these comparable firms. For instance, an analyst trying to evaluate a research and development effort for a particular type of drug can conceivably gather market data on pharmaceutical firms performing only research and development on similar drugs, existing in the same market, and bearing the same risks. The median or average beta value can then be used as a market proxy for the project currently under evaluation. Obviously, there is no silver bullet, but a diligent analyst can obtain estimates from these different sources and combine them into a better estimate. Monte Carlo simulation is especially useful in situations like these: the analyst can define the relevant simulation inputs using the range obtained from the comparable firms and simulate the DCF model to obtain the distribution of the relevant variables (typically the NPV and IRR).
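As a sketch of that last step, suppose the comparables imply a discount rate somewhere between 15% and 25%, and the cash flows are believed to lie within ±20% of the point forecasts from the earlier example (both ranges assumed here for illustration). A Monte Carlo pass over the DCF model then yields a distribution of NPVs rather than a single number:

```python
import random

def npv(rate, cost, flows):
    return -cost + sum(cf / (1 + rate) ** t for t, cf in enumerate(flows, 1))

random.seed(42)  # reproducible illustration
trials = []
for _ in range(10_000):
    rate = random.uniform(0.15, 0.25)   # range implied by comparable firms
    flows = [cf * random.uniform(0.8, 1.2) for cf in (500, 600, 700, 800, 900)]
    trials.append(npv(rate, 1000, flows))

trials.sort()
mean_npv = sum(trials) / len(trials)
p5, p95 = trials[500], trials[9500]  # a 90% range of outcomes, not a point estimate
```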

Once the relevant discount rate is in hand, the FCF stream should be discounted appropriately. Herein lies another problem: forecasting the relevant FCF and deciding whether the flows should be discounted on a continuous or a discrete basis, and whether to use end-of-year or mid-year conventions. FCF should be net of taxes, with the relevant non-cash expenses added back. Since FCF is generally calculated starting with revenues and proceeding through direct cost of goods sold, operating expenses, depreciation expenses, interest payments, taxes, and so forth, there is certainly room for mistakes to compound over time.

Forecasting cash flows several years into the future is often very difficult and may require the use of fancy econometric regression modeling techniques, time-series analysis, management hunches, and experience. A recommended approach is not to create single-point estimates of cash flows at certain time periods, but to use Monte Carlo simulation and assess the relevant probabilities of cash flow events. In addition, because cash flows in the distant future are certainly riskier than those in the near future, the relevant discount rate should change to reflect this. Instead of using a single discount rate for all future cash flow events, the discount rate should incorporate the changing risk structure of cash flows over time. This can be done either by weighting the cash flow streams' probabilistic risks (the standard deviations of the forecast distributions) or by using a stepwise technique of adding the maturity risk premium inherent in U.S. Treasury securities at different maturity periods. This bootstrapping approach allows the analyst to incorporate what market experts predict the future market risk structure will look like.

2A multiple regression or principal component analysis can be performed, but probably with only limited success for physical assets as opposed to financial assets, because there are usually very little historical data available for such analyses.
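One reading of that stepwise technique: give each year's cash flow its own discount rate, built from a base rate plus the maturity premium observed for that horizon. All the rates below are assumed purely for illustration:

```python
base = 0.18  # assumed base risk-adjusted rate
# Assumed maturity premiums by horizon (illustrative, not market data):
premium = {1: 0.000, 2: 0.010, 3: 0.015, 4: 0.020, 5: 0.025}
flows = {1: 500, 2: 600, 3: 700, 4: 800, 5: 900}

pv_stepwise = sum(cf / (1 + base + premium[t]) ** t for t, cf in flows.items())
pv_flat = sum(cf / (1 + base) ** t for t, cf in flows.items())
# Distant cash flows bear the larger premium, so the stepwise PV is lower:
assert pv_stepwise < pv_flat
```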

Finally, the issue of terminal value is of major concern for anyone using a DCF model. Several methods of calculating terminal values exist, such as the GGM, the zero-growth perpetuity (consol) model, and the supernormal growth model. The GGM is the most widely used: at the end of a series of forecast cash flows, the GGM assumes that cash flow growth will be constant through perpetuity. The GGM terminal value is calculated as the FCF at the end of the forecast period, multiplied by one plus the long-term growth rate, divided by the discount rate less the long-term growth rate. As shown in Figure 6.2, the GGM breaks down when the long-term growth rate exceeds the discount rate. This growth rate is also assumed to be fixed, and the entire terminal value is highly sensitive to the growth rate assumption. In the end, the value calculated is highly suspect, because a small difference in growth rates produces a significant fluctuation in value. Perhaps a better method is to assume some type of growth curve in the FCF series. These growth curves can be obtained through basic time-series analysis as well as through more advanced stochastic modeling assumptions. Nonetheless, we see that even a well-known, generally accepted and applied DCF model has significant analytical restrictions and problems. These problems are rather significant and can compound over time, creating misleading results, so great care should be taken in performing such analyses. The new analytical methods address some of the issues discussed above. However, it should be stressed that these new analytics do not provide a silver bullet for valuation and decision-making. They provide value-added insights, and the magnitude of the insights and value obtained from these new methods depends on the type and characteristics of the project under evaluation.
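The sensitivity and the breakdown are easy to demonstrate with the earlier example's figures ($900 terminal-year FCF, 20% discount rate):

```python
def gordon_terminal(fcf, rate, growth):
    """GGM terminal value: FCF * (1 + g) / (r - g); undefined once g >= r."""
    if growth >= rate:
        raise ValueError("GGM breaks down when growth >= discount rate")
    return fcf * (1 + growth) / (rate - growth)

fcf, rate = 900, 0.20
tv_low = gordon_terminal(fcf, rate, 0.04)   # 5850
tv_mid = gordon_terminal(fcf, rate, 0.05)   # 6300, as in the example
tv_high = gordon_terminal(fcf, rate, 0.06)  # ~6814
# A single percentage point of assumed growth moves the terminal value by
# roughly 8%, and the formula blows up as growth approaches the discount rate.
```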

The New Analytics

The applicability of traditional analysis versus the new analytics across a time horizon is depicted in Figure 6.3. Over a shorter time period, holding everything else constant, the analyst's ability to predict the near future is greater than when the period extends beyond the historical and forecast periods. This is because the longer the horizon, the harder it is to fully predict all the unknowns, and hence, management can create value by being able to successfully initiate and execute strategic options.

The traditional and new analytics can also be viewed as a matrix of approaches, as seen in Figure 6.4, where the analytics are segregated by their analytical perspective and type. With regard to perspective, the analytical approach can be either top-down or bottom-up. A top-down approach implies a higher focus on macro variables than on micro variables. The levels of granularity from the macro to the micro run from the global perspective, through market or economic conditions and the impact on a specific industry, down to the firm's competitive options. At the firm level, the analyst may be concerned with a single project and the portfolio of projects from a risk management perspective. At the project level, the focus is on the variables impacting the value of the project.


Figure 6.3. Traditional versus new analytics.

Figure 6.4. An analytical perspective.


A paradigm shift

In the past, corporate investment decisions were cut-and-dried. Buy a new machine that is more efficient, make more products at a certain cost, and if the benefits outweigh the costs, execute the investment. Hire a larger pool of sales associates, expand the current geographical area, and if the marginal increase in forecast sales revenues exceeds the additional salary and implementation costs, start hiring. Need a new manufacturing plant? Show that the construction costs can be recouped quickly and easily by the increase in revenues it will generate through new and improved products, and the initiative is approved.

However, real-life business conditions are a lot more complicated. Your firm decides to pursue an e-commerce strategy, but multiple strategic paths exist. Which path do you choose? What options do you have? If you choose the wrong path, how do you get back on the right track? How do you value and prioritize the paths that exist? You are a venture capital firm with multiple business plans to consider. How do you value a start-up firm with no proven track record? How do you structure a mutually beneficial investment or licensing deal? What is the optimal timing for a second or third round of financing?

Real options are useful not only in valuing a firm through its strategic business options but also as a strategic business tool in capital investment decisions. For instance, should a firm invest millions in a new e-commerce initiative? How does a firm choose among several seemingly cashless, costly, and unprofitable information-technology (IT) infrastructure projects? Should a firm sink billions into a risky research and development initiative? The consequences of a wrong decision can be disastrous, or even terminal, for certain firms. In a traditional DCF model, these questions cannot be answered with any certainty. In fact, some of the answers generated through the use of the traditional DCF model are flawed, because the model assumes a static, one-time decision-making process, while the real options approach takes into consideration the strategic managerial options certain projects create under uncertainty and management's flexibility in exercising or abandoning these options at different points in time, as the level of uncertainty decreases or becomes known over time.

The real options approach incorporates a learning model, such that management makes better and more informed strategic decisions as some levels of uncertainty are resolved through the passage of time, actions, and events. DCF analysis, in contrast, assumes a static investment decision and assumes that strategic decisions are made initially with no recourse to choose other pathways or options in the future. For a good analogy of real options, visualize a strategic road map of long and winding roads with multiple perilous turns and branches along the way. Imagine the intrinsic and extrinsic value of having such a road map when navigating through unfamiliar territory, as well as having road signs at every turn to guide you in making the best and most informed driving decisions. This is the essence of real options.

The answer to evaluating such projects lies in real options analysis, which can be used in a variety of settings, including pharmaceutical drug development, oil and gas exploration and production, manufacturing, e-commerce and e-business, start-up valuation, venture capital investment, IT infrastructure, research and development, mergers and acquisitions, intellectual capital development, technology development, facility expansion, business project prioritization, enterprise-wide risk management, business unit capital budgeting, licenses, contracts, intangible asset valuation, and the like.


The real options solution

Simply defined, real options analysis is a systematic approach and integrated solution that draws on financial theory, economic analysis, management science, decision sciences, statistics, and econometric modeling to apply options theory to the valuation of real physical assets, as opposed to financial assets, in a dynamic and uncertain business environment where business decisions are flexible, in the context of strategic capital investment decision-making, the valuation of investment opportunities, and project capital expenditures.

Real options are crucial in:

● Identifying different corporate investment decision pathways or projects that management can navigate given the highly uncertain business conditions.

● Valuing each of the strategic decision pathways and what it represents in terms of financial viability and feasibility.

● Prioritizing these pathways or projects based on a series of qualitative and quantitative metrics.

● Optimizing the value of your strategic investment decisions by evaluating different decision paths under certain conditions, or determining how a different sequence of pathways can lead to the optimal strategy.

● Timing the effective execution of your investments and finding the optimal trigger values and cost or revenue drivers.

● Managing existing or developing new optionalities and strategic decision pathways for future opportunities.

Issues to consider

Strategic options do have significant intrinsic value, but this value is only realized when management decides to execute the strategies. Real options theory assumes that management is logical and competent and that it acts in the best interests of the company and its shareholders through the maximization of wealth and the minimization of the risk of losses. For example, suppose a firm owns the rights to a piece of land whose price fluctuates dramatically. An analyst calculates the volatility of prices and recommends that management retain ownership for a specified time period, within which there is a good chance that the price of the real estate will triple. Management therefore owns a call option: an option to wait and defer sale for a particular time period. The value of the real estate is therefore higher than the value based on today's sale price; the difference is simply this option to wait. However, the real estate will not command the higher value if prices do triple but management decides not to execute the option to sell. In that case, the price of the real estate falls back to its original level after the specified period, and only then does management finally relinquish its rights.

Was the analyst right or wrong? What was the true value of the piece of land? Should it have been valued at its explicit value in a deterministic case, where you know what the price of the land is right now and therefore that is its value? Or should the valuation include some type of optionality, given that there is a good probability that the price of the land could triple in value, so that the piece of land is truly worth more than its current price and should be valued accordingly? The latter is the real options view. The additional strategic optionality value can only be obtained if the option is executed; otherwise, all the options in the world are worthless. This idea of explicit versus implicit value becomes highly significant when management's compensation is tied directly to the actual performance of particular projects or strategies.

To further illustrate this point, suppose the market price of the land is currently $10 million. Further, suppose that the market is highly liquid and volatile, and that the firm can easily sell the land at a moment's notice within the next 5 years, the same period over which the firm owns the rights to the land. If there is a 50% chance that the price will increase to $15 million and a 50% chance it will decrease to $5 million within this time period, is the property worth the expected value of $10 million? If prices rise to $15 million, management should be competent and rational enough to execute the option and sell that piece of land immediately to capture the additional $5 million premium. However, if management acts inappropriately or decides to hold off selling in the hope that prices will rise even further, the property value may eventually drop back down to $5 million. Now, how much is this property really worth? What if there happens to be an abandonment option? Suppose there is a perfect counterparty to this transaction who, for a contractual fee, agrees to purchase the property for $10 million within the next 5 years, regardless of the market price and executable at the whim of the firm that owns the property. Effectively, a safety net has been created whereby the minimum floor value of the property is set at $10 million (less the fee paid). That is, there is a limited downside but an unlimited upside, as the firm can always sell the property at the market price if it exceeds the floor value. Hence, this strategic abandonment option has increased the value of the property significantly. Logically, with this abandonment option in place, the value of the land with the option is definitely worth more than $10 million.
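Ignoring discounting, the arithmetic of that safety net can be sketched as follows (the $0.5 million contractual fee is an assumption; the text does not price it):

```python
up, down, p_up = 15.0, 5.0, 0.5  # outcomes in $ millions, equally likely
fee = 0.5                        # assumed cost of the abandonment contract

value_no_option = p_up * up + (1 - p_up) * down                     # 10.0
# The counterparty guarantees a $10M floor, so the downside is truncated:
value_with_option = p_up * up + (1 - p_up) * max(down, 10.0) - fee  # 12.0
option_value = value_with_option - value_no_option                  # 2.0
```

On these assumptions the abandonment option adds $2 million; any fee below the $2.5 million expected value of the truncated downside leaves the firm better off.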

Industry leaders embracing real options

The industries using real options as a tool for strategic decision-making started with oil and gas as well as mining companies, later expanded into utilities, biotechnology, and pharmaceuticals, and now include telecommunications, high-tech, and firms across all industries. Following are some very brief examples of how real options have been, or should be, used in different industries.

Automobile and Manufacturing Industry

In the automobile and manufacturing industry, General Motors (GM) applies real options to create switching options in producing its new series of autos. This is essentially the option to use the cheaper resource over a given period of time. GM holds excess raw materials and has multiple global vendors for similar materials, with contractual obligations above what it projects as necessary. The excess contractual cost is outweighed by the significant savings from switching vendors when a certain raw material becomes too expensive in a particular region of the world. By spending the additional money to contract with vendors and meet their minimum purchase requirements, GM has essentially paid the premium on a switching option. This is especially important when the prices of raw materials fluctuate significantly in different regions of the world. Having the option provides the holder a hedging vehicle against pricing risks.

Computer Industry

In the computer industry, HP-Compaq used to forecast sales in foreign countries months in advance. It then configured, assembled, and shipped highly specific configurations of printers to those countries. However, given that demand changes rapidly and forecast figures are seldom correct, the pre-configured printers usually suffered high inventory holding costs or the cost of technological obsolescence. HP-Compaq can create an option to wait and defer making any decisions too early by building assembly plants in these foreign countries. Parts can then be shipped and assembled in specific configurations once demand is known, possibly weeks rather than months in advance. These parts can be shipped anywhere in the world and assembled in any configuration necessary, while excess parts are interchangeable across different countries. The premium paid on this option is the cost of building the assembly plants, and the upside potential is the savings from avoiding wrong demand forecasts.

Airline Industry

In the airline industry, Boeing spends billions of dollars and several years deciding whether a certain aircraft model should even be built. Should the wrong model be chosen in this elaborate strategy, Boeing's competitors may gain a competitive advantage relatively quickly. As there are so many technical, engineering, market, and financial uncertainties involved in the decision-making process, Boeing can conceivably create an option to choose by developing multiple plane designs in parallel, knowing very well the increased cost of developing multiple designs simultaneously with the sole purpose of eliminating all but one in the near future. The added cost is the premium paid on the option. However, Boeing will be able to decide which model to abandon or continue as these uncertainties and risks become known over time. Eventually, all the models will be eliminated save one. This way, the company hedges itself against making the wrong initial decision and benefits from the knowledge gained through parallel development initiatives.

Oil and Gas Industry

In the oil and gas industry, companies spend millions of dollars to refurbish their refineries and add new technology to create an option to switch their mix of outputs among heating oil, diesel, and other petrochemicals as the final product, using real options as a means of making capital and investment decisions. This option allows a refinery to switch its final output to whichever product is more profitable based on prevailing market prices, to capture the demand and price cyclicality in the market.


Telecommunications Industry

In the telecommunications industry, companies like Sprint and AT&T in the past installed more fiber-optic cable and other telecommunications infrastructure than any other company in order to create a growth option for the future by providing a secure and extensive network, and to create a high barrier to entry, providing a first-to-market advantage. Imagine having to justify to the Board of Directors the need to spend billions of dollars on infrastructure that will not be used for years to come. Without the use of real options, this would have been impossible to justify.

Utilities Industry

In the utilities industry, firms have created an option to execute and an option to switch by installing cheap-to-build but inefficient peaker plants, used only when electricity prices are high and shut down when prices are low. The price of electricity tends to remain constant until it hits a certain capacity-utilization trigger level, at which point prices shoot up significantly. Although this occurs infrequently, the possibility still exists, and by having a cheap standby plant, the firm has created the option to flip the switch whenever it becomes necessary, capturing this upside price fluctuation.

Real Estate Industry

In the real estate arena, leaving land undeveloped creates an option to develop at a later date at a more lucrative profit level. However, what is the optimal wait time, or the optimal trigger price, to maximize returns? In theory, one can wait for an infinite amount of time, and real options provide the solution for the optimal timing and price trigger value.

Pharmaceutical Research and Development Industry

In pharmaceutical or research and development initiatives, real options can be used to justify large investments that seem cashless and unprofitable under the DCF method but actually create compound expansion options in the future. Under the myopic lens of a traditional DCF analysis, a high initial investment of, say, a billion dollars in research and development may return a highly uncertain projected few million dollars over the next few years. Management will conclude under a net-present-value analysis that the project is not financially feasible. However, a cursory look at the industry indicates that research and development is performed everywhere; hence, management must see an intrinsic strategic value in it. How is this intrinsic strategic value quantified? A real options approach would optimally time and spread the billion-dollar initial investment into a multiple-stage investment structure. At each stage, management has an option to wait and see what happens, as well as the option to abandon or to expand into the subsequent stages. The ability to defer costs and proceed only if conditions are favorable creates value for the investment.


High-Tech and e-Business Industry

In e-business strategies, real options can be used to prioritize different e-commerce initiatives and to justify large initial investments with an uncertain future. Real options can be used in e-commerce to create incremental investment stages, compared to a large one-time investment (invest a little now, and wait and see before investing more), and to create options to abandon and other future growth options.

All these cases, in which the high cost of implementation with no apparent payback in the near future seems foolish and incomprehensible in the traditional DCF sense, are fully justified in the real options sense when taking into account the strategic options the practice creates for the future, the uncertainty of the future operating environment, and management's flexibility in making the right choices at the appropriate time.

Mergers and Acquisitions

In valuing a firm for acquisition, you should consider not only the revenues and cash flows generated from the firm's operations but also the strategic options that come with the firm. For instance, if the acquired firm does not operate up to expectations, an abandonment option can be executed: it can be sold for its intellectual property and other tangible assets. If the firm is highly successful, it can be spun off into other industries and verticals, or new products and services can eventually be developed through the execution of an expansion option. In fact, in mergers and acquisitions, several strategic options exist. For instance, a firm acquires other entities to enlarge its existing portfolio of products or geographic locations or to obtain new technology (an expansion option); to divide the acquisition into many smaller pieces and sell them off, as in the case of a corporate raider (an abandonment option); or to merge to form a larger organization due to certain synergies and immediately lay off many of its employees (a contraction option). If the seller does not value its real options, it may be leaving money on the negotiation table. If the buyer does not value these strategic options, it is undervaluing a potentially highly lucrative acquisition target.

The Fundamental Essence of Real Options

The use of traditional DCF alone is inappropriate in valuing certain strategic projects involving managerial flexibility. Two finance professors, Michael Brennan and Eduardo Schwartz, provided an example on valuing the rights to a gold mine. In their example, a mining company owns the rights to a local gold mine. The rights provide the firm the option, not the legal obligation, to mine the gold reserves supposedly abundant in the mine. Therefore, if the price of gold in the market is high, the firm might wish to start mining; in contrast, it can stop and wait for a later time to begin mining should the price of gold drop significantly. Suppose we set the cost of mining as X and the payoff on the mined gold as S, taking into consideration the time value of money. We then have the following payoff schedule:

Payoff = S − X   if and only if S > X
Payoff = 0       if and only if S ≤ X
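The payoff schedule above is exactly that of a call option, and can be sketched in a few lines (an illustrative sketch, not from the original text):

```python
def mining_payoff(gold_value_s: float, mining_cost_x: float) -> float:
    """Payoff on the rights to the mine: extract only if the mined gold (S)
    is worth more than the cost of mining it (X); otherwise let the rights
    lapse and receive nothing. This is exactly a call-option payoff."""
    return max(gold_value_s - mining_cost_x, 0.0)

payoff_if_gold_is_high = mining_payoff(120.0, 100.0)  # mine: payoff 20.0
payoff_if_gold_is_low = mining_payoff(80.0, 100.0)    # do not mine: payoff 0.0
```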

90 Managing Enterprise Risk

Ch06-I044949.qxd 5/23/06 12:01 PM Page 90


This payoff is identical to the payoff on a call option on the underlying asset, the value of the mined gold. If the cost exceeds the value of the underlying asset, the option is left to expire worthless, without execution; otherwise, the option will be exercised. That is, mine if and only if S exceeds X; otherwise, do not mine.

As an extension of the gold mine scenario, say we have a proprietary technology in development, or a patent, that currently and in the near future carries little or no cash flow but is nonetheless highly valuable given the potential strategic positioning it holds for the firm that owns it. A traditional DCF method will grossly underestimate the value of this asset. A real options approach is more suitable and provides better insights into the asset's actual value. The firm has the option either to develop the technology if the potential payoff exceeds the cost or to abandon its development should the opposite prove true.

For instance, assume a firm owns a patent on some technology with a 10-year economic life. To develop the project, the present value of the total research and development costs is $250 million, but the present value of the projected sum of all future net cash flows is only $200 million. In a traditional DCF sense, the NPV will be −$50 million, and the project should be abandoned. However, the proprietary technology is still valuable to the firm, given the probability that it will become more valuable in the future than projected, or that future projects can benefit from the technology developed. If we apply real options to valuing this simplified technology example, the results are significantly different. Assuming the nominal rate on a 10-year risk-free U.S. Treasury note is 6% and simulating the projected cash flows, we calculate the value of the research and development initiative to be $2 million. This implies that the value of flexibility is $52 million, or 26% of the $200 million static present value of the projected cash flows. By definition, a research and development initiative involves creating something new and unique or developing a more enhanced product. Most research and development initiatives are highly risky and involve a significant up-front investment, with highly variable potential future cash flows that are generally skewed toward the low end; most fail to meet expectations and produce lower incremental revenues than deemed profitable. Hence, in a traditional DCF sense, research and development initiatives are usually unattractive and provide little to no incentive. However, a cursory look at industry implies otherwise: research and development initiatives abound, implying that senior management sees significant intrinsic value in them. Hence the need to quantify such strategic values.
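The chapter arrives at its $2 million figure by simulating the projected cash flows. Purely to illustrate the mechanics, the patent can also be framed as a 10-year European call on the $200 million present value of cash flows with a $250 million strike. The sketch below uses the Black–Scholes formula with an assumed 25% volatility, an input the text does not supply, so the resulting value differs from the chapter's simulated $2 million:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(s: float, x: float, r: float, t: float, sigma: float) -> float:
    """Black-Scholes value of a European call: underlying s (PV of the
    project's cash flows), strike x (development cost), risk-free rate r,
    maturity t in years, volatility sigma (assumed here)."""
    d1 = (log(s / x) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = NormalDist().cdf
    return s * n(d1) - x * exp(-r * t) * n(d2)

static_npv = 200.0 - 250.0  # -$50M: reject under plain DCF
option_value = bs_call(200.0, 250.0, 0.06, 10.0, sigma=0.25)  # positive
```

Even with a negative static NPV, the option value is positive, and it grows with the assumed volatility. That is the central intuition: uncertainty adds value when the downside can be abandoned.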

The basics of real options

Real options, as the name implies, uses options theory to evaluate physical or real assets, as opposed to financial assets such as stocks and bonds. Historically, real options have been most useful in analyzing distressed firms and firms engaged in research and development, where there is significant managerial flexibility under significant uncertainty. Only in the past decade have real options started to receive broader corporate attention.

Why are real options important?

An important point is that the traditional DCF approach assumes a single decision pathway with fixed outcomes, and that all decisions are made at the beginning, without the ability to change and develop over time. The real options approach considers multiple decision pathways as a consequence of high uncertainty coupled with management's flexibility in choosing the optimal strategies or options along the way as new information becomes available. That is, management has the flexibility to make midcourse strategy corrections when there is uncertainty about the future. As information becomes available and uncertainty becomes resolved, management can choose the best strategies to implement. Traditional DCF assumes a single static decision, while real options assume a multidimensional, dynamic series of decisions in which management has the flexibility to adapt given changes in the business environment.

Real options and Monte Carlo simulation versus DCF valuation 91

Another way to view the problem is that there are two points to consider: one, the initial investment starting point, where strategic investment decisions have to be made; and two, the ultimate goal, the optimal decision that maximizes the firm's return on investment and shareholders' wealth. In the traditional DCF approach, a straight line joins these two points, whereas the real options approach looks like a map with multiple routes to the ultimate goal, each route conjoined with others. The former implies a one-time decision-making process, while the latter implies a dynamic process in which the investor learns over time and makes different, updated decisions as time passes and events unfold.

As outlined above, traditional approaches coupled with DCF analysis have their pitfalls. Real options provide additional insights beyond the traditional analyses. At the least, real options provide a sobriety test of the results obtained using DCF; at best, they provide a robust approach to valuation when coupled with the DCF methodology. The theory behind options is sound and reasonably applicable.

Some examples of real options using day-to-day terminology include:

● option to defer,

● option to wait and see,

● option to delay,

● option to expand,

● option to contract,

● option to choose,

● option to switch resources,

● option for phased and sequential investments.

Notice that the names used to describe the more common real options are rather self-explanatory, unlike actual model names such as a "Barone–Adesi–Whaley approximation model for an American option to expand." This matters because, when it comes to explaining the process and results to management, the easier they are to understand, the higher the chances of acceptance of the methodology and results.


Traditional approaches to valuing projects associated with the value of a firm, including any strategic options the firm possesses or flexible management decisions that are dynamic and can change over time, are flawed in several respects. Projects valued using the traditional DCF model often carry a value that grossly understates the true fair market value of the asset, because projects may provide low or zero cash flow in the near future and nonetheless be valuable to the firm. In addition, projects can be viewed in terms of owning the option to execute the rights, not the obligation per se, because the owner can execute the option or allow it to expire should the cost outweigh the benefits of execution. The recommended options approach takes this option to exercise into consideration and prices it accordingly. Compared to traditional approaches, real options add robustness to the analysis: the inputs to the option-pricing model can be constructed via multiple alternatives, providing a method of stress testing or sensitivity testing of the final results, and the corollary analysis resulting from real options provides sanity checks without having to redo the entire analysis from scratch under different assumptions.

The following example provides a simplified analogy to why optionality is important and should be considered in corporate capital investment strategies. Suppose you have an investment strategy that costs $100 to initiate, and you anticipate that on average the payoff will yield $120 in exactly 1 year. Assume a 15% WACC and a 5% risk-free rate, both annualized. As the profile below illustrates, the NPV of the strategy is $4.3, indicating good investment potential because the benefits outweigh the costs.

However, if we wait and see before investing, until uncertainty becomes resolved, we get the profile below, where the initial investment outlay occurs at time 1 and positive cash inflows occur only at time 2. Say your initial expectations were correct and the average or expected value came to $120, with good market demand providing a $140 cash flow and bad demand only $100. If we have the option to wait a year, we can better estimate the trend in demand, and the payoff profile bifurcates into two scenarios. Should the unfavorable scenario occur, we have the option to abandon the investment, because the cost equals the cash inflow (−$100 versus +$100), and we would rationally not pursue this avenue. Hence, we pursue the investment only if good market demand is observed, and our NPV for waiting an extra year is $10.6. This analysis indicates a truncated downside, a limited liability, because a rational investor would never knowingly enter a sure-loss investment strategy. The value of flexibility is therefore $6.3.

Invest now:

Time 0: −$100    Time 1: +$120

NPV = −100 + 120/(1.15) = $4.3


However, a more realistic payoff schedule should look like the profile below. By waiting a year and putting off the investment, you give up the potential for an immediate cash inflow, and the leakage or opportunity cost of not investing now is the $5 less you receive ($140 − $135). By putting off the investment, however, you also defer the cost, in that the outlay occurs a year later. The calculated NPV in this case is $6.8.
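The three payoff profiles discussed above reduce to three one-line NPV calculations (a sketch; as in the text, the certain $100 cost is discounted at the 5% risk-free rate and the risky inflows at the 15% WACC):

```python
# Invest now: -$100 at time 0, +$120 expected at time 1.
npv_now = -100 + 120 / 1.15                      # about $4.3

# Wait a year: pay the $100 at time 1 only if demand is good,
# and receive the $140 good-demand cash flow at time 2.
npv_wait = -100 / 1.05 + 140 / 1.15 ** 2         # about $10.6

# Wait a year with the $5 leakage: the good-demand inflow drops to $135.
npv_wait_leakage = -100 / 1.05 + 135 / 1.15 ** 2  # about $6.8

value_of_flexibility = npv_wait - npv_now        # about $6.3
```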

Comparing traditional approaches with real options

Figures 6.5–6.10 show a step-by-step analysis comparing a traditional analysis with a real options analysis, from the analyst's viewpoint. The analysis starts with a DCF model of future cash flows. The analyst then applies sensitivity and scenario analysis, which is usually the extent of traditional approaches. As the results are relatively negative, the analyst decides to add new analytics: Monte Carlo simulation is used, as well as real options analysis. The results from all these analytical steps are then compared and conclusions drawn. This is a good comparative analysis of the results and insights obtained by using the new analytics. In this example, the analyst adds significant value to the overall project by creating optionalities within the project, by actively pursuing and passively waiting for more information before making any decisions.

Wait and see (cost at time 1, payoff at time 2):

Time 1: cost −$100    Time 2: +$140 (good demand) or +$100 (bad demand); expected value +$120

NPV = −100/(1.05) + 140/(1.15)² = $10.6

Wait and see with the $5 leakage:

Time 1: cost −$100    Time 2: +$135 (good demand) or +$78 (bad demand); expected value +$106.5

NPV = −100/(1.05) + 135/(1.15)² = $6.8


A. Discounted cash flow

The extended example below shows the importance of waiting. That is, suppose a firm needs to make a rather large capital investment decision but at the same time has the option to wait and defer the decision until later. The firm may be involved in pharmaceutical research and development, in IT investment, or simply in marketing a new product as yet untested in the market.

Suppose the analyst charged with performing a financial analysis on the project estimates that the most probable level of net revenues generated through implementing the project, which has an economic life of 5 years, is as presented in the time line below. Further, she assumes that the implementation cost is $200M and that the project's risk-adjusted discount rate is 20%, which also happens to be the firm's weighted average cost of capital. The calculated NPV is found to be a loss of −$26.70M.
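The base-case NPV can be verified directly (a sketch of the standard NPV arithmetic, using the cash flows of this example):

```python
def npv(cash_flows, rate, initial_cost):
    """NPV of year-end cash flows received in years 1..n, less the
    investment cost paid at time 0."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_cost

revenues = [30, 36, 70, 80, 110]     # $M, years 1 through 5
base_npv = npv(revenues, 0.20, 200)  # about -$26.70M
```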

B. Sensitivity analysis on discounted cash flow

Even though the NPV is significantly negative, the analyst feels the investment decision can be improved through more rigor. Hence, she decides to perform a sensitivity analysis. Since this simplified example has only three kinds of variable (discount rate, cost, and the future net revenue cash flows), the analyst perturbs each of them by 10% in both directions to note the sensitivity of the calculated NPV.

The entire set of possible sensitivities is presented in the table below, arranged in descending order based on the range of potential outcomes (an indication of each variable's sensitivity). A Tornado Diagram is also created based on this sensitivity table.
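The same ±10% perturbations can be scripted; the NPV swings below reproduce the ranges in the sensitivity table, with cost by far the most sensitive input, followed by the discount rate:

```python
def npv(cash_flows, rate, cost):
    # NPV of year-end cash flows (years 1..n) less the time-0 cost.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1)) - cost

revs = [30, 36, 70, 80, 110]  # $M, years 1-5; base NPV is about -$26.70M

# NPV range produced by moving each input 10% down and 10% up.
ranges = {
    "Cost": abs(npv(revs, 0.20, 220) - npv(revs, 0.20, 180)),
    "Discount rate": abs(npv(revs, 0.18, 200) - npv(revs, 0.22, 200)),
}
for year in range(1, 6):
    down = [c * 0.9 if t == year else c for t, c in enumerate(revs, 1)]
    up = [c * 1.1 if t == year else c for t, c in enumerate(revs, 1)]
    ranges[f"Cash flow {year}"] = abs(npv(up, 0.20, 200) - npv(down, 0.20, 200))

most_sensitive = max(ranges, key=ranges.get)  # "Cost", with a $40M swing
```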

Comparing Traditional Approaches and Real Options with Simulation

Base case (discount rate 20%):
t = 0: −$200M; t = 1: $30M; t = 2: $36M; t = 3: $70M; t = 4: $80M; t = 5: $110M
Calculated NPV = −$26.70M

Increase cost by 10% (from −$200M to −$220M): calculated NPV goes from −$26.70M to −$46.70M
Increase discount rate to 22%: calculated NPV goes from −$26.70M to −$35.86M
Increase projected revenues by 10% ($33M, $40M, $77M, $88M, $121M): calculated NPV goes from −$26.70M to −$9.37M

Sensitivity table ($M; parentheses denote negative values):

Variable        Base case   Downside   Upside   NPV at downside   NPV at upside   Range
Cost            (200)       (220)      (180)    (46.70)           (6.70)          40.00
Discount rate   20%         18%        22%      (16.77)           (35.86)         19.09
Cash flow 5     110         99         121      (31.12)           (22.28)         8.84
Cash flow 3     70          63         77       (30.75)           (22.65)         8.10
Cash flow 4     80          72         88       (30.56)           (22.85)         7.72
Cash flow 1     30          27         33       (29.20)           (24.20)         5.00
Cash flow 2     36          32         40       (29.20)           (24.20)         5.00

Figure 6.5. DCF model and sensitivity analysis.


C. Scenario Analysis

Tornado diagram (range of net present values): variables ranked by impact are Cost, Discount rate, Cash flow 5, Cash flow 3, Cash flow 4, Cash flow 1, and Cash flow 2.

Next, scenarios were generated. The analyst creates three possible scenarios and provides a subjective estimate of the probability that each will occur. The worst-case scenario assumes 50% of the nominal scenario's projected revenues, while the best-case scenario assumes 150% of the nominal scenario's projected revenues.

Best case (30% probability of occurrence): −$200M at t = 0; $45M, $54M, $105M, $120M, $165M at t = 1..5; discount rate 20%; calculated NPV = $59.94M
Nominal case (50% probability of occurrence): −$200M at t = 0; $30M, $36M, $70M, $80M, $110M at t = 1..5; discount rate 20%; calculated NPV = −$26.70M
Worst case (20% probability of occurrence): −$200M at t = 0; $15M, $18M, $35M, $40M, $55M at t = 1..5; discount rate 20%; calculated NPV = −$113.25M

NPVs for each of the scenarios are calculated, and an expected NPV of −$18.04M is computed from the probability assumptions:

Expected NPV = 0.20(−$113.25M) + 0.50(−$26.70M) + 0.30($59.94M) = −$18.04M

The problem here is obvious: the range of possibilities is too large to make any inferences. Which figure should be believed, the −$18.04M or perhaps the nominal case of −$26.70M? In addition, the upside potential and downside risk differ significantly from the nominal or expected cases. What are the chances that any of these will actually come true? What odds, bets, or faith can one place in the results? The analyst then decides to perform some Monte Carlo simulations to answer these questions.

Figure 6.6. Tornado Diagram and scenario analysis.
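The scenario arithmetic can be reproduced in a few lines. One caveat: halving the nominal revenues actually gives a worst-case NPV of about −$113.35M rather than the −$113.25M printed in the figure, a small rounding discrepancy that does not change the expected NPV of −$18.04M (a sketch using the chapter's cash flows and probabilities):

```python
def npv(cash_flows, rate=0.20, cost=200):
    # NPV of year-end cash flows (years 1..n) less the time-0 cost, in $M.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1)) - cost

nominal = [30, 36, 70, 80, 110]       # $M, years 1-5
worst = [cf * 0.5 for cf in nominal]  # 50% of nominal revenues
best = [cf * 1.5 for cf in nominal]   # 150% of nominal revenues

# Probability-weighted expected NPV across the three scenarios.
expected_npv = (0.20 * npv(worst)
                + 0.50 * npv(nominal)
                + 0.30 * npv(best))   # about -$18.04M
```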


D. Simulation

There are two ways to perform a Monte Carlo simulation in this example. The first is to take the scenario analysis above and simulate around the calculated NPVs. This assumes that the analyst is highly confident of the future cash flow projections, that the worst-case scenario is indeed the absolute minimum the firm can attain, and that the best-case scenario sits exactly at the top of the range of possibilities. The second approach is to use the most likely (nominal) scenario and simulate its inputs based on management-defined ranges of possible cost and revenue structures.

(i) Simulating around scenarios

The analyst simulates around the three scenarios using a triangular distribution with the worst-case, nominal-case, and best-case scenarios as input parameters into the simulation model. The example below uses the Risk Simulator software developed by the author (see www.realoptionsvaluation.com for more details). The results are shown below.

Mean −27.06; standard deviation 35.31; range minimum −112.21; range maximum 57.43; range width 169.64.

We see that the range is fairly large because the scenarios were rather extreme. In addition, there is only a 23.89% chance that the project will break even (NPV > 0).

The 90% statistical confidence interval is between −$85.15M and $33.22M, which is also rather wide. Given such a huge swing in possibilities, we are much better off performing the simulation using the second method, that is, looking at the nominal case and simulating around that case's input parameters.

(ii) Simulating around the nominal scenario

In the scenario analysis, the analyst created two additional scenarios (worst case and best case) based on a 50% fluctuation in projected revenues from the base case; here we simply take the base case and, by simulation, generate 10,000 scenarios. Looking back at the Tornado Diagram, we noticed that the discount rate and cost were the two key determining factors in the analysis, so the second approach simulates these two factors. The analyst simulates around the nominal scenario assuming a normal distribution for the discount rate, with a mean of 20% and a standard deviation of 2% based on historical data on discount rates used in the firm, and a uniform distribution for the cost, ranging between −$220M and −$180M based on input by management. This cost range is based on management intuition and substantiated by similar projects in the past. The results of the simulation are shown below.

Mean −25.06; standard deviation 14.3; range minimum −69.54; range maximum 38.52; range width 108.06.

Here we see that the range is somewhat more manageable and we can make more meaningful inferences. Based on the simulation results, there is only a 3.48% chance that the project will break even.

[Frequency charts, 10,000 trials each: the simulation around all three scenarios ("all three conditions") shows a 23.89% certainty of NPV ≥ $0 and a 90% confidence interval of −$85.15M to $33.22M; the simulation around the nominal scenario ("expected NPV") shows a 3.48% certainty of NPV ≥ $0.]

Figure 6.7. Monte Carlo simulation.
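The second simulation approach (normal discount rate, uniform cost) can be sketched with NumPy in place of the Risk Simulator software named above. The seed, and therefore the exact mean, is an arbitrary assumption, but the summary statistics land close to those reported:

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility
trials = 10_000

revenues = np.array([30.0, 36.0, 70.0, 80.0, 110.0])  # $M, years 1-5
years = np.arange(1, 6)

rates = rng.normal(0.20, 0.02, trials)     # discount rate ~ N(20%, 2%)
costs = rng.uniform(180.0, 220.0, trials)  # cost magnitude ~ U($180M, $220M)

# One NPV per trial: discount the fixed revenue stream at the sampled
# rate, then subtract the sampled implementation cost.
pv = (revenues / (1.0 + rates[:, None]) ** years).sum(axis=1)
npvs = pv - costs

mean_npv = npvs.mean()           # near the -$25M reported in the text
p_breakeven = (npvs > 0).mean()  # only a few percent chance of NPV > 0
```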


The 90% statistical confidence interval is between −$32.55M and −$1.19M.

Most of the time the project is in negative-NPV territory, suggesting a rather grim outlook. However, the project is rather important to senior management, and they wish to know whether there is some way to add value to it or to make it financially justifiable. The answer lies in using real options.

E. Real options

We have the option to wait or defer investing until a later date; that is, wait until uncertainty becomes resolved and then decide on the next course of action. Invest in the project only if market conditions indicate a good scenario, and abandon the project if the market condition is akin to the nominal or worst-case scenarios, because both bear negative NPVs.

(i) Option to Wait I (passive wait-and-see strategy)

Say we decide to wait one year. Assuming we gather more valuable information within this time frame, we can then decide whether to execute the project at that time. Below is a decision tree indicating our decision path.

We see here that the NPV is positive because if, after waiting a year, the market demand is nominal or sluggish, management has the right to pull the plug on the project. Otherwise, if the market meets or exceeds the best-case scenario, management has the option to execute the project, thereby guaranteeing a positive NPV. The calculated NPV, based on the forecast revenue stream, is $49.95M.

(ii) Option to Wait II (active market research strategy)

Instead of waiting passively for the market to reveal itself over the 1-year period, management can pursue an active market research strategy. If the market research costs $5M to initiate and takes 6 months to obtain reliable information, the firm saves additional time by not waiting for the market to reveal itself. Here, if the market research indicates a highly favorable condition in which the best-case revenue stream is to be expected, the project will be executed after 6 months. The strategy path and time lines are shown below.

The calculated NPV here is $49.72M, relatively close to the passive waiting strategy. The downside is the $5M research cost, which represents the greatest possible loss, and which is also the premium paid to obtain the option to execute given the right market conditions.

Option to Wait I (passive): start at t = 0 and wait and see. If the best case materializes, invest −$200M at t = 1 and receive $45M, $54M, $105M, $120M, $165M at t = 2 through t = 6 (discount rate 20%); in the worst or nominal case, exit and abandon. Calculated NPV after waiting 1 year for new information = $49.95M.

Option to Wait II (active): start with market research at a cost of −$5M. If the best case is indicated, invest −$200M at t = 0.5 and receive $45M, $54M, $105M, $120M, $165M at t = 1.5 through t = 5.5 (discount rate 20%); otherwise exit and abandon. Calculated NPV after active market research = $49.72M (after accounting for the −$5M in market research costs).

Figure 6.8. Real options analysis.
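Both waiting strategies reduce to discounting the best-case NPV back from the execution date (a sketch using the best-case figures above):

```python
def npv(cash_flows, rate=0.20, cost=200):
    # NPV of year-end cash flows (years 1..n) less the execution-date cost, $M.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1)) - cost

best_case_npv = npv([45, 54, 105, 120, 165])  # about $59.94M if executed today

# (i) Passive wait and see: execution is pushed back a full year,
# so the best-case NPV is discounted one more year at 20%.
passive_wait = best_case_npv / 1.20           # about $49.95M

# (ii) Active market research: results arrive in 6 months at a cost of $5M.
active_research = best_case_npv / 1.20 ** 0.5 - 5  # about $49.72M

# Research price at which the two strategies are equally attractive.
max_research_cost = active_research + 5 - passive_wait  # about $4.77M
```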


In retrospect, management could find the maximum it is willing to pay for the market research in order to cut down the waiting time before making an informed decision. That is, at what market research price would the first option to wait have the same value as the second? Setting the difference between $49.95M and $49.72M as the required reduction in market research cost brings the initial $5M down to $4.77M. In other words, the maximum amount the firm should pay for the market research is $4.77M; beyond that, it is simply wiser to follow the passive strategy and wait the full year.

Mean 49.73; standard deviation 12.43; range minimum −0.25; range maximum 94.57; range width 94.82.

The resulting distribution range is less wide, providing a more meaningful inference. Based on the simulation results, the 90% confidence interval puts the NPV between $29.40M and $70.16M, and the NPV takes on a positive value almost 100% of the time.

F. Observations

We clearly see that using the three scenarios versus an expected-value approach yields rather similar results in terms of NPV, but through simulation the expected-value approach provides a much tighter distribution, and the results are more robust as well as easier to interpret. Once we add the real options approach, risk is significantly reduced and return dramatically increased. The overlay chart below compares the simulated distributions of the three approaches: the blue series is the scenario approach, incorporating all three scenarios and simulating around them; the green series is the expected-value approach, simulating around the nominal revenue projections; and the red series is the real options approach, where we execute only if the best condition is obtained.

The example here holds true in most cases when we compare the traditional DCF approach to real options. If we define risk as uncertain fluctuation in revenues and in the NPV level, all downside risks are mitigated under real options, because the project is not executed if the nominal or worst-case scenario occurs. Conversely, the upside is maximized, and returns increase, because the project is executed only when the best-case scenario occurs. This creates a win–win situation where risks are mitigated and returns enhanced, simply by having the right strategic optionalities available, acting appropriately, and valuing the project in terms of its "real" or intrinsic value, which includes the opportunity to make midcourse corrections as new information becomes available.

The 50% confidence interval puts the NPV between $41.32M and $58.19M, with a mean of $49.73M. We can interpret this as the expected-value range, because 50% of the time the realized NPV will fall within it.

[Overlay chart comparing the three simulated distributions: all three conditions (blue), expected NPV (green), and best condition only (red). Frequency charts for the best-condition-only forecast, 10,000 trials: 50% certainty between $41.32 and $58.19, and 90% certainty between $29.40 and $70.16, in net present value dollars.]

Figure 6.9. Combining real options analysis with Monte Carlo simulation.
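The downside truncation described in the observations can be illustrated with a small simulation. This is an illustrative sketch, not the chapter's Risk Simulator run: the triangular revenue multiplier spanning the worst, nominal, and best scenarios is an assumption, as is the seed:

```python
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed
base_pv = 173.30                # PV of the nominal revenue stream at 20%, $M

# Revenue multiplier spanning worst (0.5x), nominal (1.0x), best (1.5x).
m = rng.triangular(0.5, 1.0, 1.5, 10_000)

# Invest blindly today: fully exposed to the downside.
npv_invest_now = m * base_pv - 200.0

# Wait a year and invest only if the resolved NPV is positive; favorable
# outcomes are discounted one extra year at 20%, losses are walked away from.
npv_wait = np.maximum(m * base_pv - 200.0, 0.0) / 1.20

truncated_downside = npv_wait.min()      # exactly 0: losses are avoided
exposed_downside = npv_invest_now.min()  # deep in negative territory
```

The option's distribution is bounded below at zero while keeping the upside, which is precisely the asymmetry the red series in the overlay chart displays.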


Of course, several simplifying assumptions have to be made here, including the ability forthe firm to simply wait and execute a year from now without any market or competitiverepercussions. That is, the 1-year delay will not allow a competitor to gain a first-to-market advantage or capture additional market share, where the firm’s competitor may bewilling to take the risk and invest in a similar project and gain the advantage while thefirm is not willing to do so. In addition, the cost and cash flows are assumed to be the same

In addition, what seems on the outset as an unprofitable project yielding an NPV of �$26.70M can be justified and madeprofitable because the project has in reality an option to wait or defer until a later date. Once uncertainty becomes resolved and we have more available information, management can then decide whether to go forward based on market conditions. This call option could be bought through the use of active market research. By having this delay tactic, the firm has indeed truncated any downside risks but still protected its upside potential.

Next, if we look at the minimax approach, where we attempt to minimize the maximum regret of making a decision, themaximum level of regret for pursuing the project blindly using a DCF approach may yield the worst-case scenario of�$113.25M while using an option to wait but simultaneously pursuing an active marketing research strategy will yield amaximum regret of �$4.77M. This is because the levels of maximum regret occur under the worst possible scenario. If thisoccurs, investing in the project blindly will yield the worst case of �$113.25, but the maximum loss in the real options world isthe limited liability of the premium paid to run the market research, adding up to only �$4.77M because the firm would neverexecute the project when the market is highly unfavorable.

In addition, the value of perfect information can be calculated as the increase in value created through the option to wait as compared to the naïve expected NPV approach. That is, the Value of having perfect information is $68M. We obtain this level of perfect information through the initiation of a marketing research strategy which costs an additional $4.77M. This meansthat the strategic real options thinking and decision-making process has created a leverage of 14.25 times. This view isanalogous to a financial option where we can purchase a call option for, say, $5 with a specified exercise price for a specifiedtime of an underlying common equity with a current market price of $100. With $5, the call purchaser has leveraged his purchasing power into $100, or 20 times. In addition, if the equity price rises to $150 (50% increase akin to our exampleabove), the call holder will execute the option, purchase the stock at $100, turn around and sell it for $150, less the $5 costand yield a net $45. The option holder has, under this execution condition, leveraged the initial $5 into a $45 profit, or 9 times the original investment.

Finally and more importantly is that we see by adding in a strategic option, we have increased the value of the projectimmensely. It is therefore wise for management to consider an optionality framework in the decision-making process. That is, to find the strategic options that exist in different projects or to create strategic options in order to increase the project’s value.

[Figure 6.10. Comparing DCF and real options results. The figure contrasts the DCF outcome distribution (Average 1, Risk 1) with the real options distribution (Average 2, Risk 2), showing that real options provide total risk reduction, expected value enhancement, and limited downside losses. Key relationships: NPV = Benefits − Cost; Option = Benefits × F(d1) − Cost × F(d2); eNPV = NPV + Options value, where F is a probability distribution of outcomes. If there is 0% uncertainty, the probability of an outcome is 100% and the option value reverts to the NPV.]

Ch06-I044949.qxd 5/23/06 12:01 PM Page 100

Real options and Monte Carlo simulation versus DCF valuation 101

whether the project is initiated immediately or in the future. Obviously, these more complex assumptions can be added into the analysis, but for illustration purposes, we assume the basic assumptions hold, where costs and cash flows remain the same no matter the execution date, and that competition is negligible.

Critical steps in performing real options

Figure 6.11 shows the real options process up close. This framework comprises eight distinct phases of a successful real options implementation, going from a qualitative management screening process to creating clear and concise reports for management. The process was developed by the author based on previous successful implementations of real options, both in the consulting arena and in industry-specific problems. These phases can be performed either in isolation or together in sequence for a more robust real options analysis. The real options process can be segregated into the following eight simple steps:

● Qualitative management screening.

● Base case net-present-value analysis.

● Monte Carlo simulation.

Figure 6.11. Real options integrated process.

● Real options problem framing.

● Real options modeling and analysis.

● Portfolio and resource optimization.

● Reporting.

● Update analysis.

Qualitative Management Screening

Qualitative management screening is the first step in any real options analysis (Figure 6.11). Management has to decide which projects, assets, initiatives, or strategies are viable for further analysis, in accordance with the firm's mission, vision, goal, or overall business strategy. The firm's mission, vision, goal, or overall business strategy may include market penetration strategies, competitive advantage, technical, acquisition, growth, synergistic, or globalization issues. That is, the initial list of projects should be qualified in terms of meeting management's agenda. Often this is where the most valuable insight is created, as management frames the complete problem to be resolved.

Forecasting and Base Case Net-Present-Value Analysis

For each project that passes the initial qualitative screens, a DCF model is created (Figure 6.11). This serves as the base case analysis, where an NPV is calculated for each project. This also applies if only a single project is under evaluation. The NPV is calculated using the traditional approach of forecasting revenues and costs, and discounting the net of these revenues and costs at an appropriate risk-adjusted rate.

The use of time-series forecasting may be appropriate here if historical data exist and the future is assumed to be somewhat predictable based on past experience. Otherwise, management assumptions may have to be used.
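The base case calculation can be sketched in a few lines; the cash flows and the 10% risk-adjusted rate below are illustrative assumptions, not figures from the text:

```python
# Base case DCF sketch: discount forecast net cash flows at a
# risk-adjusted rate. cash_flows[0] occurs at time 0 (the outlay).
def npv(cash_flows, rate):
    """Net present value of a cash flow series."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative example: $1,000 outlay today, $400 net cash flow per year
# for 4 years, discounted at a 10% risk-adjusted rate.
base_case = npv([-1000.0, 400.0, 400.0, 400.0, 400.0], 0.10)
```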

Monte Carlo Simulation

Since the static DCF produces only a single-point estimate, there is often little confidence in its accuracy, given that the future events affecting forecast cash flows are highly uncertain. To better estimate the actual value of a particular project, Monte Carlo simulation may be employed (Figure 6.11).

Usually, a sensitivity analysis is first performed on the DCF model. That is, setting the NPV as the resulting variable, we can change each of its precedent variables and note the change in the resulting variable. Precedent variables include revenues, costs, tax rates, discount rates, capital expenditures, depreciation, and so forth, which ultimately flow through the model to affect the NPV figure. By tracing back all these precedent variables, we can change each one by a preset amount and see the effect on the resulting NPV. A graphical representation can then be created, which is often called a tornado diagram because of its

shape, where the most sensitive precedent variables are listed first, in descending order of magnitude. Armed with this information, the analyst can then decide which key variables are highly uncertain in the future and which are deterministic. The uncertain key variables that drive the NPV, and hence the decision, are called critical success drivers. These critical success drivers are prime candidates for Monte Carlo simulation. Since some of these critical success drivers may be correlated – for example, operating costs may increase in proportion to the quantity sold of a particular product, or prices may be inversely correlated to quantity sold – a correlated Monte Carlo simulation may be required. Typically, these correlations can be obtained from historical data. Running correlated simulations provides a much closer approximation to the variables' real-life behaviors.
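The one-at-a-time perturbation behind a tornado diagram can be sketched with a toy NPV model; the model and every number in it are hypothetical assumptions, chosen only to show the procedure:

```python
# Toy NPV model with four precedent variables (illustrative figures only).
def simple_npv(revenue, cost, tax_rate, discount_rate, years=5):
    after_tax = (revenue - cost) * (1.0 - tax_rate)
    return sum(after_tax / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

base = dict(revenue=500.0, cost=300.0, tax_rate=0.30, discount_rate=0.10)

def tornado(model, base, spread=0.10):
    """NPV swing when each input moves +/- spread, most sensitive first."""
    swings = {}
    for name, value in base.items():
        lo = model(**{**base, name: value * (1.0 - spread)})
        hi = model(**{**base, name: value * (1.0 + spread)})
        swings[name] = abs(hi - lo)
    # Descending order of magnitude, as the bars appear in a tornado chart.
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

ranking = tornado(simple_npv, base)
```

The largest swings identify the critical success drivers that would then be simulated.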

Real Options Problem Framing

Framing the problem within the context of a real options paradigm is the next critical step (Figure 6.11). Based on the overall problem identification occurring during the initial qualitative management screening process, certain strategic optionalities would have become apparent for each particular project. The strategic optionalities may include, among other things, the option to expand, contract, abandon, switch, or choose. Based on the identification of the strategic optionalities that exist for each project, or at each stage of the project, the analyst can then choose from a list of options to analyze in more detail.

Real Options Modeling and Analysis

Through the use of Monte Carlo simulation, the resulting stochastic DCF model will have a distribution of values. In real options, we assume that the underlying variable is the future profitability of the project, that is, the future cash flow series. An implied volatility of the future free cash flows (FCF), the underlying variable, can be calculated from the results of the Monte Carlo simulation previously performed. Usually, the volatility is measured as the standard deviation of the logarithmic returns on the FCF stream. In addition, the present value of future cash flows from the base case DCF model is used as the initial underlying asset value in real options modeling (Figure 6.11).
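The volatility measure described above can be sketched as follows; the FCF path is an illustrative stand-in for one simulated stream, not data from the text:

```python
import math

def log_return_volatility(fcf_stream):
    """Sample standard deviation of ln(FCF_t / FCF_{t-1})."""
    log_returns = [math.log(b / a) for a, b in zip(fcf_stream, fcf_stream[1:])]
    mean = sum(log_returns) / len(log_returns)
    variance = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
    return math.sqrt(variance)

# Hypothetical simulated FCF path (in $M) over five periods.
vol = log_return_volatility([100.0, 110.0, 105.0, 120.0, 118.0])
```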

Portfolio and Resource Optimization

Portfolio optimization is an optional step in the analysis (Figure 6.11). If the analysis is done on multiple projects, management should view the results as a portfolio of rolled-up projects. This is because the projects are, in most cases, correlated with one another, and viewing them individually will not present the true picture. As firms do not have only single projects, portfolio optimization is crucial. Given that certain projects are related to others, there are opportunities for hedging and diversifying risks through a portfolio. Since firms have limited budgets as well as time and resource constraints, while at the same time having requirements for certain overall levels of returns, risk tolerances, and so forth, portfolio optimization takes all of these into account to create an optimal portfolio mix. The analysis will provide the optimal allocation of investments across multiple projects.
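As a toy illustration of budget-constrained project selection (hypothetical projects, brute force instead of a real optimizer, and no modeling of the correlations or risk tolerances the text mentions):

```python
from itertools import combinations

# Hypothetical projects: name -> (cost, expected value), both in $M.
projects = {"A": (30.0, 12.0), "B": (50.0, 18.0), "C": (40.0, 20.0)}

def best_portfolio(projects, budget):
    """Highest-value subset of projects whose total cost fits the budget."""
    best, best_value = (), 0.0
    for r in range(1, len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(projects[p][0] for p in combo)
            value = sum(projects[p][1] for p in combo)
            if cost <= budget and value > best_value:
                best, best_value = combo, value
    return set(best), best_value

chosen, total_value = best_portfolio(projects, budget=80.0)
```

A production analysis would replace the brute-force search with a constrained optimizer and add correlation, return, and risk-tolerance constraints.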

Reporting

The analysis is not complete until reports can be generated (Figure 6.11). Not only are results presented, but the process should also be shown. Clear, concise, and precise explanations transform a difficult black-box set of analytics into transparent steps. Management will never accept results coming from black boxes if they do not understand where the assumptions or data originate and what types of mathematical or financial massaging take place.

Update Analysis

Real options analysis assumes that the future is uncertain and that management has the right to make midcourse corrections when these uncertainties become resolved or risks become known. The analysis is usually done ahead of time, and thus ahead of such uncertainties and risks. Therefore, when these risks become known, the analysis should be revisited to incorporate the decisions made or to revise any input assumptions. Sometimes, for long-horizon projects, several iterations of the real options analysis should be performed, with future iterations updated using the latest data and assumptions.

Conclusion

So, how do you get real options implemented in your organization? First of all, it is vital to understand that real options analysis is not simply a set of equations or models. It is an entire decision-making process that enhances and complements traditional decision analysis approaches. It takes tried-and-true financial analytics and evolves them to the next step by pushing the envelope of analytical techniques. Several issues should be considered when attempting to implement real options analysis:

● Tools: The correct tools are important. These tools must be more comprehensive than initially required, because analysts will grow into them over time. Do not be restrictive in choosing the relevant tools; always provide room for expansion. Advanced software tools will relieve the analyst of detailed model building and let him or her focus instead on 75% of the value – thinking about the problem and interpreting the results.

● Resources: The best tools in the world are useless without the relevant human resources to back them up. Tools do not eliminate the analyst, but enhance the analyst's ability to effectively and efficiently execute the analysis. The right people with the right tools will go a long way. Because there are only a few true real options experts in the world who truly understand the theoretical underpinnings of the models as well as the practical applications, care should be taken in choosing the correct team. A team of real options experts is vital to the success of the initiative. A company should consider building a team of in-house experts to implement real options analysis and to maintain the capability for continuity, training, and knowledge transfer over time. Knowledge and experience in the theories, implementation, training, and consulting are the core requirements of this team. Training plays a vital part in developing this in-house expertise. Nothing will kill a real options analysis

project faster than over-promising and under-delivering due to insufficient training. A typical in-house real options expert needs the following theoretical knowledge and applied expertise: econometrics, statistics, simulation, financial modeling, and experience in participating in at least two to three real options analysis projects.

● Senior Management Buy-in: The buy-in has to be top-down, with senior management driving the real options analysis initiative. A bottom-up approach, where a few inexperienced junior analysts try to impress the powers that be, will fail miserably.

The author’s Real Options Super Lattice Software and Risk Simulator software comprise several modules, including the Single Super Lattice Solver (SSLS), Multiple Super Lattice Solver (MSLS), Multinomial Lattice Solver (MNLS), SLS Excel Solution, and SLS Functions. These modules are highly powerful and customizable binomial and multinomial lattice solvers, and can be used to solve many types of options, including the three main families: real options, which deal with physical and intangible assets; financial options, which deal with financial assets and investments in such assets; and employee stock options, which deal with financial assets granted to employees within a corporation.

● The SSLS is used primarily for solving options with a single underlying asset using binomial lattices. Even highly complex options with a single underlying asset can be solved using the SSLS. Example options solved include options to abandon, choose, contract, defer, expand, wait, and so forth.

● The MSLS is used for solving options with multiple underlying assets and sequential compound options with multiple phases using binomial lattices. Highly complex options with multiple underlying assets and phases can be solved using the MSLS. Example options solved include sequential compound options, phased stage-gate options, switching options, and multiple-asset options.

● The MNLS uses multinomial lattices (trinomial, quadranomial, pentanomial) to solve specific options that cannot be solved using binomial lattices. Example options solved include rainbow options, jump-diffusion options, mean-reverting options, and so forth.

● The SLS Excel Solution implements the SSLS and MSLS computations within the Excel environment, allowing users to access the SSLS and MSLS functions directly in Excel. This facilitates model building, formula and value linking and embedding, and the running of simulations, and provides the user with sample templates for creating such models.

● The SLS Functions are additional real options and financial options models accessible directly through Excel. This facilitates model building, linking and embedding, and running simulations. These functions can replicate the results of all the other modules directly in Excel.

There are, however, several pitfalls in using real options. It is by no means a silver bullet or the end-all and be-all methodology that will solve all your problems. Some of the same problems found in DCF modeling are also present in real options (the accuracy of cash flow forecasting, for instance), and in the end, when uncertainty is zero, the options analysis results revert back to the DCF value. If care is taken in the modeling and the relevant

projects have been chosen, real options analysis will provide a wealth of information that cannot be obtained otherwise. The following criteria should be used in selecting the relevant projects for implementation: the project must face or operate under uncertainty; management must have the strategic and operational flexibility (i.e., options must exist) to make mid-course corrections when uncertainty becomes resolved over time; and management must be credible enough to execute the profit-maximizing behavior at the appropriate time, otherwise all the options in the world are useless.

The idea of this chapter is to demystify the black-box analytics in real options and to make its concepts and applications transparent. Rather than relying on stochastic Ito calculus, variance reduction, differential equations, or stochastic path-dependent simulations to solve real options problems, I have found that by relying heavily on binomial lattices (which I have shown time and again to be reliable and to produce results identical, at the limit, to the former approaches)3, complex concepts can be explained very easily to senior management. While it is extremely easy to modify binomial lattices depending on the real options involved, or to more accurately mirror the intricacies of actual business cases, it is extremely difficult to do so using the more advanced techniques. In the end, the more flexible and mathematically manageable approach becomes the pragmatic approach. The flexibility of the modeling approach flows well: if you can think it, you can solve it! Finally, my intention is to briefly reveal the applications of real options. A black box will remain a black box if no one can understand its concepts, despite its power and applicability. It is only when the black box becomes transparent – so that analysts can understand, apply, and convince others of its results and applicability – that the approach will gain widespread acceptance. So, buy yourself an option and learn more about the subject before attacking it head-on and biting off more than you can chew. Test the applications on a small-scale pilot project with significant visibility, attack problems that have clear optionalities, choose projects with cross-functional and interdepartmental implications, obtain management buy-in and sponsorship, and perform some back-casting (as opposed to forecasting, where you look to the future, back-casting uses data from a past project – you get the results instantly, rather than having to wait for years before the accuracy of the results can be verified).

3 For the technical details, please see my books on real options and simulation: Real Options Analysis: Tools and Techniques, 2nd Edition (Wiley, 2005); Real Options Analysis Course (Wiley, 2003); and Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization (Wiley, 2006).

CHAPTER 7

Enterprise Risk Management in 2005 – Moving Beyond Market and Credit Risk

Jana S. Utter

Enterprise Risk Manager, Great Plains Energy Inc., Kansas City, MO, USA

“What we anticipate seldom occurs; what we least expected generally happens.” – Benjamin Disraeli

The Birth of Enterprise Risk Management

Since the early 1990s, enterprise risk management (ERM) has become a common acronym in corporate vocabulary. Globalization has added to the complexity of business; information technology has enabled efficiencies in the gathering and dissemination of knowledge, leading to the ability to apply statistics across an almost endless array of data. Consequently, a proactive and comprehensive approach can be taken toward risks, resulting in the centralized management of risks, or ERM. Many people refer to the “old way” of managing risks as managing risks in “silos” and the new way as ERM. “Silo” risk management is still a necessary business practice: each business unit, department, or functional area within a company has the best expertise to manage the risks within its area of responsibility. The purpose of ERM is not to replace the risk efforts already occurring within the company, but to act as the central repository for those efforts and to serve as the single point of reference for knowledge about all risk management activity. In order to accomplish this purpose, the role of ERM is multi-faceted and involved.

The foundation of ERM is built on internal control processes and the monitoring and management of market risks. The banking industry laid the groundwork for the ERM function with the release of the Basel Capital Accord in 1988, setting minimum capital requirements for banks. Since 1988, the evolution of ERM within the banking industry has been furthered by the need to calculate and manage market risks, leading to the combined measurement and

monitoring of market, credit, and operational risks to ensure capital adequacy. The value-at-risk (VaR) methodology was introduced as a tool to measure the market risk of loan and commodity portfolios with open positions. Released in 1992, the Committee of Sponsoring Organizations (COSO) issuance of Internal Control – Integrated Framework provided companies with practices for increasing the robustness of their governance structures and related accountability. Also occurring from the late 1980s through the early to mid-1990s was the deregulation of energy markets. Energy deregulation opened the door for trading natural gas and electricity. The simultaneous emergence of ERM practices and of entities entering the portfolio markets for the first time with newly commoditized products led to interest in applying the latest business practices to the latest business trend. Thus, the newly formed energy conglomerates quickly adopted a facet of ERM from the financial institutions, and the position of Chief Risk Officer (CRO) was created.
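As a rough sketch of the VaR idea in the historical-simulation style (the P&L series and the simple empirical-quantile convention here are hypothetical assumptions, not from the text):

```python
# Historical-simulation VaR: the loss at a chosen quantile of the
# historical P&L distribution of an open portfolio position.
def historical_var(pnl, confidence=0.95):
    """Loss at the `confidence` quantile of historical P&L (crude index)."""
    losses = sorted(-x for x in pnl)            # losses as positive numbers
    index = int(confidence * len(losses))       # crude empirical quantile
    return losses[min(index, len(losses) - 1)]

# Hypothetical daily P&L for a small commodity position (in $M).
daily_pnl = [2.0, -1.0, 0.5, -3.0, 1.2, -0.4, 4.0, -2.5, 0.0, 1.1,
             -1.8, 0.7, -0.2, 2.3, -4.2, 0.9, 1.5, -0.8, 0.3, -1.1]
var_95 = historical_var(daily_pnl)
```

With only 20 observations the 95% quantile is simply the single worst loss; real implementations use far longer histories and more careful quantile estimators.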

In a typical risk management program, the greatest level of attention is often given to the area with the highest risk impact, measured by dollar magnitude and likelihood of occurrence. In the mid to late 1990s, for both banks and energy companies, commodity market risk and the related exposure to credit risk ranked high on the scale when assessing risks. Banks and energy companies transitioned from single-line-of-business entities to diversified holding companies with multiple lines of business. Consequently, the ERM role of the CRO was micro-focused – albeit still a monumental task – on the assessment, consolidation, and management of market and credit risk across the organization. Upheavals in the energy markets in the 2001–2002 timeframe heightened, then diminished, then transcended the role of the CRO, moving it from a concentration on market and credit risk to the literal sense of the role: assessing and managing the risks of the enterprise.

Heightened Importance of ERM

From the beginning of the use of the phrase “enterprise risk management”, the intent of the function has been to cohesively and strategically understand and assess all of the risks of the enterprise. The Chief Executive Officer (CEO) is responsible for the vision of the organization and therefore must know the goals and roles of all of the functional areas of the company. The CRO must also know the goals and roles of each functional area, in addition to the risks related to those goals, and must be able to interrelate the risks and objectively convey risk-related information to the CEO and other executive management. In essence, the CRO is responsible for discerning the risk-adjusted vision of the organization. The CEO relies on knowledge to effectively perform his role; the CRO relies on knowledge as well as the data comprising that knowledge, for the purpose of determining the likelihood and potential impacts of risk. The gathering and dissemination of such data is not an easy task and is a factor in achieving a true ERM function. Energy and financial firms, both at the forefront of implementing ERM, have a critical need for transaction management systems in order to keep track of positions and related exposure. Transaction management systems serve as a database for the multitude of aspects of a transaction contributing to market and credit risks. It is no surprise that initial ERM efforts were centered on market and credit risks; both types of risk are high impact with likely occurrence, and the data needed to assess exposure were readily available from transaction management systems.

Sophistication in data gathering and warehousing is constantly improving, easing the CRO's ability to build and store the information necessary for formulating probability distributions around

key risk drivers to earnings, cash flow, and return on capital. More importantly, other drivers for comprehensive ERM are legislative requirements, including Sarbanes-Oxley and FERC Order 2004, and private-sector initiatives, including COSO's ERM – Integrated Framework, released in September 2004. Finally, the financial community has homed in on risk management as a key indicator of the financial stability and creditworthiness of a company. Standard and Poor's assesses the overall business risk of a company and evaluates how well management addresses the risks of the enterprise. Investment analysts are drilling the Investor Relations Officer for risk metrics and information on corporate risk practices. In response, companies are continuing to include additional detail on exposure to risks and on the management and mitigation of risk in financial reports and earnings calls.

ERM is steadily transforming from an optional business tool to a required business function. Similar to the rise of the Chief Information Officer, the CRO is becoming a standard executive position. ERM's value proposition is three-fold, because it:

● aids in risk identification and reporting;

● centralizes risk information and analytics, providing senior management and the board with an enhanced understanding of risks, the effects of risk, and risk mitigation and response;

● provides increased transparency and stakeholder confidence.

The three-fold ERM value proposition makes ERM an essential business process supporting strategic planning and corporate development, by enabling business preparedness for reactive and proactive responses to changes in the business environment. Identifying risk drivers and determining their effect on earnings increases financial certainty. Key stakeholders, including investors, shareholders, regulatory bodies, financial institutions, and employees, are given greater transparency into the risks of the company and the potential effects of such risks, thereby boosting stakeholder confidence.

Building the ERM Framework

Simplistically and logistically speaking, initiating an ERM framework entails designating a senior-level position responsible for ERM and appointing a person to fill it. However, launching, creating, and developing a full ERM function takes time, patience, support, education, and awareness. To ease and quicken the ERM effort, the ERM position and the person filling it should unequivocally have the responsibility for ERM. In most companies, the ERM position should have sole responsibility for, and responsibility solely for, ERM. Ideally, it should be an officer-level position reporting to the CEO or the Chief Financial Officer (CFO), with a direct communication channel to the board or a board committee. An officer-level position with dedicated accountability for ERM projects the message, both internally and externally, that the board and senior management are fully engaged in and supportive of ERM and believe in the ERM value proposition.

Initially, the role of the CRO can be difficult for the existing risk management positions within the company. Naturally, a company, especially a diversified energy company, will have positions managing commodity portfolio risk, insurance risk, credit risk, regulatory risks,

and sometimes a long list of other types of risk. The CRO and his department do not replace those positions in the organization already responsible for risk, but instead create a central risk repository for the company. The CRO has a vantage point within the company enabling him to perform a role that no other position can. Much like the CEO, the CRO is privy to the intertwining of each business unit and functional area within the company, and understands how the pieces fit together to accomplish the corporate vision. In the way the CFO is responsible for managing the financial activity of the company in support of the corporate vision, the CRO, comparatively, is responsible for managing the risk information of the company. Primary responsibilities of the CRO include:

● corporate risk policy development;

● ensuring that corporate risk tolerances (e.g., credit exposure limits, return on invested capital (ROIC) targets, value-at-risk (VaR) limits) are conveyed to business units and adhered to;

● corporate risk assessments identifying key risk drivers, measured by financial impact and likelihood of occurrence;

● forming a corporate risk management committee and generally serving as its chair;

● general oversight of, but not necessarily direct responsibility for, the development of business unit and departmental risk policies and their maintenance for current application;

● direction to business units regarding the risk analysis and risk metrics required to facilitate corporate risk reporting;

● consolidated risk reporting;

● creating or assisting with the development of risk management process maps and ensuring that such maps are maintained;

● keeping records of risk policy non-compliance occurrences and risk policy waivers granted across the organization;

● staying abreast of ERM developments and trends within the industry and facilitating best practices and regulatory compliance for ERM within the corporation.

The above list is not exhaustive and serves as a general overview of the role of the CRO and the functions of the ERM process. Clearly, the CRO and the ERM business function are distinct roles within the organization. Business unit and departmental risk managers do not view or manage risk in the same way as the CRO; typically, a business unit or departmental risk manager micro-manages a particular area of risk. The CRO must communicate and coordinate efforts with several areas of the company in order to have a streamlined and effective ERM function. Other functional areas of the company that are critical to the ERM function include audit, legal, corporate governance and compliance, investor relations, treasury/finance, executive management (both corporate and business unit), and the board of directors. Each of these functional areas is either integral to the success of the ERM function or receives direct benefits from it. A brief description of the correlation between ERM and each of these business functions follows:

Audit – While the ERM function is proactive in nature, through the establishment of risk policies and risk metrics, the audit function serves as one form of checks and balances to

ensure that the ERM function adequately fulfills and sets forth the controls desired. The audit function is investigative and provides assurance, upon conducting an internal audit and producing the audit report, that the ERM function objectively evaluates risk and that the resulting metrics are statistically sound.

Legal – A delicate balance is required to comply with the intent of such rulings as Sarbanes-Oxley and FERC Order 2004. The risk of the enterprise is expected to be adequately monitored and managed, yet the bright-line separation of duties desired between positions overseeing both regulated and unregulated entities can be difficult to achieve. Legal counsel provides the expertise and advice the CRO needs to carry out the ERM function without crossing boundaries that raise questions about whether market knowledge of one entity might be used inappropriately to the jeopardy of a related entity. Legal counsel with expertise in regulatory compliance should review corporate risk policies and should be present at risk management committee meetings.

Corporate governance and compliance – In the early 1990s, the U.S. Sentencing Guidelines were set forth for the purpose of establishing minimum standards of due diligence to be conducted by the corporation to prevent violations of law and to determine whether violations occur. A sound risk policy should include appropriate consequences for policy violations or inappropriate actions associated with the risk policy. Such consequences should be in line with general corporate policies for code of conduct and business ethics. Treatment of employees who violate a risk policy should be consistent. Proper and adequate training should be conducted and required of each employee who is expected to comply with a risk policy. ERM works in conjunction with corporate governance and compliance to achieve consistency between risk policies and general corporate policies, ensure risk policies meet governance and compliance regulations, and provide employees with the training to understand the risk policies.

Investor relations – Enterprise risk metrics such as earnings at risk (EaR) and cash flow at risk give investor relations a useful tool to have in their back pocket. Although risk metrics are not generally publicly disclosed, the information they provide helps increase confidence around earnings projections. The components measured to derive risk metrics identify key risk drivers and the correlations between them, which again serve as useful information for the investor relations officer. Even though the risk metrics may not be directly conveyed outside the company, the knowledge gained from understanding them enables the company to give appropriate assurances and information to investors.

Treasury/finance – Consolidated corporate market and credit risks are of primary concern to the CFO. Business units engaged in commercial activities that create exposure to the market or to counterparty credit are responsible for measuring and reporting mark-to-market and credit exposure risk to ERM. ERM consolidates market and credit risk, as appropriate, to determine whether exposures are within corporate tolerances. Monitoring market and credit exposures at the corporate level identifies unintentional excessive exposure in a particular market or to a particular counterparty. Two individual business units may each be within their defined tolerance for market or credit exposure, but if both are similarly exposed, the corporation may find that it is more deeply exposed than anticipated.
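The point about two units being individually within tolerance yet jointly over-exposed can be illustrated numerically. The following is a minimal sketch, not a method the chapter prescribes: the function name and figures are hypothetical, and exposures are assumed to be summarized as volatilities so that a standard two-asset combination formula applies.

```python
import math

def aggregate_exposure(sigma_a, sigma_b, correlation):
    """Combine two units' exposure volatilities, accounting for correlation."""
    return math.sqrt(sigma_a ** 2 + sigma_b ** 2
                     + 2 * correlation * sigma_a * sigma_b)

# Two units, each within a hypothetical $10M standalone tolerance:
uncorrelated = aggregate_exposure(8.0, 8.0, 0.0)  # about 11.3, near tolerance
correlated = aggregate_exposure(8.0, 8.0, 0.9)    # about 15.6, well beyond it
```

With perfect correlation the two exposures are simply additive (16.0 here), which is exactly why monitoring only at the business unit level can understate the corporate position.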

Enterprise risk management in 2005 – moving beyond market and credit risk 111

Ch07-I044949.qxd 5/23/06 12:03 PM Page 111


Executive management – The risk reporting generated by the ERM function contributes crucial knowledge for leading the corporation: determining the vision and aligning the strategy to support that vision. ERM is advanced scenario planning. While scenario planning takes a “what if” approach, ERM adds statistical analysis, in the form of probability distributions and correlations, to forecasted outcomes. The result is a robust analysis that scrutinizes the inner strategies of the company. ERM aids business preparedness for unexpected or uncontrollable events by forcing detailed thought about the situational risks inherent to the business. The correlation matrix for key risk drivers serves as a crystal ball for determining the relationship between changing business environment and market conditions and their effects on the corporate strategy. All of the risk information gathered and disseminated among corporate and business unit executive management, as well as to employees, leads to a nimble organization that can proactively or quickly respond to opportunities and potential crises.

Board of directors – An obvious benefit of ERM to the board is the assurance that risks are proactively and adequately addressed across the corporation. As important as that assurance is the information derived from the risk metrics. Probability distributions of risk drivers can be applied to reviewing existing strategies and testing new strategies to determine risk-adjusted returns on capital. Net present value, internal rate of return, and other investment evaluation formulas either ignore risk or consider only static risk. The CRO should serve on the capital allocations committee to bring the risk perspective to capital allocation proposals.

The preceding discussion is not all-inclusive; ERM interacts with virtually every area of the company. Each of the functional areas highlighted has responsibility for serving the corporate needs of the enterprise, and because ERM falls into the category of corporate services, it is extensively involved with these corporate functions.

The time required to build a fully operational ERM function is not short; it depends on many factors, such as the degree of management support, the resources devoted to the effort, and existing data systems. As a guideline, developing a sound ERM function can take up to 3 years. Year 1 entails the initial start of the ERM department, whose near-term focus will be on writing or revamping corporate risk policies and conducting or updating a corporate risk assessment. Year 2 shifts to the development of risk metrics and risk reporting, including the use of risk measurements in corporate reports such as a key performance indicators report or balanced scorecard metric. Year 3 work centers on determining whether systems are adequate for ongoing risk metric calculation and fine-tuning the ERM function to make sure it is accomplishing its intent. For example, by Year 3 ERM should be socialized well enough within the organization that it is viewed as the central repository for risk management information within the company. ERM should also be well integrated with financial modeling, budgeting, and the capital allocations process.

The timeline for ERM development will vary greatly with the degree of information readily available for risk measurement calculations. For market and credit risk, transaction management systems are essential to gathering the data necessary to calculate mark-to-market exposure and related credit exposure and VaR. A data warehouse is very helpful for consolidating the risks of different business units for corporate risk reporting. Statistical analysis tools like @Risk and Crystal Ball meet the minimum requirements for building risk models. Depending upon the business operations, especially for an integrated energy company, sophisticated risk models may be necessary.

Benefits of ERM

Throughout this discussion, the benefits and uses of ERM have been touted. A synopsis of the benefits and uses of ERM includes:

● Central repository for risk information – The ERM function should serve as the “go to” source for risk information across the organization. Although the CRO and the ERM department are not directly responsible for micro-managing the risks of the entire organization, they are expected to macro-manage them. Thus ERM should be the keeper of risk reports produced by the business units, including business unit risk assessments, risk policies, and risk metrics. ERM should be familiar with the parts of the organizational structure responsible for risk management and should also have general knowledge about the systems used for risk management and reporting.

● Risk assessments – ERM is best suited to conduct and compile a corporate risk assessment. Business units and other functional areas may conduct risk assessments as well, but such assessments will be specific to the business unit or functional area. A corporate risk assessment sets out to determine the key risks to the enterprise by evaluating likelihood of occurrence and degree of impact. The corporate risk assessment is used as a tool to make sure corporate risk, or ERM, focuses on monitoring and measuring the right risks. It should be utilized in determining where naturally occurring risk mitigation exists and in identifying problematic risk concentration levels.
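The likelihood-and-impact evaluation described in the risk assessment bullet can be sketched very simply. The risk names and the 1-to-5 scales below are invented for illustration; real assessments would use the company's own register and scoring scheme.

```python
# Hypothetical corporate risk register: (risk, likelihood 1-5, impact 1-5)
register = [
    ("Commodity price spike", 4, 5),
    ("Plant outage", 2, 4),
    ("Counterparty default", 3, 4),
    ("Data-entry error", 5, 1),
]

def rank_risks(register):
    """Score each risk as likelihood x impact and sort, highest first."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in rank_risks(register):
    print(f"{name}: score {likelihood * impact}")
```

Ranking by a simple product is the crudest possible scheme, but it already surfaces the concentration question: the highest-scoring risks are the candidates for focused monitoring and measurement.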

● Risk metrics – A distinct function of ERM is the calculation of corporate risk metrics. ERM, in conjunction with executive management, can determine which risk metrics are desired and pertinent to the corporation for use as analytical tools. Common risk metrics are EaR, cash flow at risk (CfaR), risk-adjusted return on capital (RAROC), and economic capital. EaR and CfaR are useful for gaining confidence around projected earnings and cash flows. RAROC helps evaluate business strategies, both existing and proposed. Economic capital combines market risk, credit risk, and operational risk to determine whether the enterprise has the financial wherewithal, or capital adequacy, to support itself.
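A rough sketch of how an EaR-style number can be read off a simulated earnings distribution. The distribution parameters here are invented for illustration, and the shortfall-from-the-mean definition is one common convention, not a formula given in the text.

```python
import random
import statistics

def earnings_at_risk(simulated_earnings, confidence=0.95):
    """EaR: how far the (1 - confidence) percentile falls below expected earnings."""
    ordered = sorted(simulated_earnings)
    worst_case = ordered[int((1 - confidence) * len(ordered))]
    return statistics.mean(ordered) - worst_case

# Hypothetical: 10,000 simulated annual earnings outcomes ($M), mean 100, vol 20.
random.seed(42)
sims = [random.gauss(100, 20) for _ in range(10_000)]
print(round(earnings_at_risk(sims), 1))  # roughly 1.645 * 20, i.e. in the low 30s
```

In practice the simulated earnings would come from a model of the key risk drivers and their correlations rather than a single normal draw.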

● Corporate development and strategic planning – The tie between ERM and corporate development is sometimes overlooked. Adding statistical insight into risk to the capital allocation process shifts the investment decision from a point-in-time consideration to a consideration over time. The desire for near-term results often clouds the realistic long-term view. Risk metrics, used either alone or in conjunction with scenario planning, prepare the enterprise for the long-term results of strategy implementation. By understanding the key risk drivers of a strategy, and the correlations between them, up front, executive management and the board can proactively address issues as they arise. Too often, more capital is thrown at a struggling strategy but the strategy falters anyway. Understanding risks and their interrelationships can help determine whether a stop-loss strategy is best.


● Risk reporting – ERM should be responsible for integrating risk reporting with corporate reporting, through both internal and external channels. Internally, ERM reporting is intertwined with year-to-date results and forecasts of expected corporate performance, and forms part of the executive dashboard of key performance indicators. Externally, ERM reporting is implicitly and explicitly communicated through corporate reporting, investor relations, and corporate finance. Executive management must exercise discretion regarding the risk information used to support certainty around expected results in earnings calls and as additional disclosure in financial statements.

● Consistency in approach – ERM unifies and solidifies risk management across the organization by funneling best and desired practices back and forth between business units, departments, and the corporation. A well-entrenched ERM function sets the tone for how risk is thought about and handled throughout the organization. The presence of ERM conveys the message that managing risks is considered an essential and intentional business function.

Benchmarks for ERM

ERM implementation is an evolving process. Business schools, with support and prompting from the corporate arena, have begun to devote efforts to educating about ERM. Corporate organizations, such as COSO, have recently published frameworks for ERM. The consulting arms of public accounting firms are also publishing material on ERM and offering services to companies initiating, further developing, or fine-tuning their ERM function. In 2001, the Committee of Chief Risk Officers was formed to study and report on best practices and emerging best practices for risk management within the energy industry. Conferences specifically headlining ERM are prevalent, and books and case studies on ERM are also on the rise. Even insurance firms are addressing the management of insurable risks from an enterprise perspective. In summary, the resources for developing, enhancing, and benchmarking an ERM function are widely available; however, because ERM is in its infancy, no cookbook or textbook approach exists. A company implementing or reviewing its ERM function will need to conduct a significant amount of self-study and self-examination to determine the appropriate and optimum structure.

Conclusions

Inherent risks exist in conducting business, so ERM is here to stay. The natural progression of sophistication in the business world has led to the evolution of ERM. Expect to see continued discussion of ERM in all arenas, including regulatory, academic, and business practice. Like accounting, finance, and legal, ERM has been added to the list of standard business functions. Managing business forecasts without using the available tools for measuring and identifying the effects of risks is short-sighted. Not every company will need to designate a position solely responsible for ERM, nor will all companies need to incorporate highly sophisticated risk metrics into their financial models. All companies, however, need to evaluate the areas of risk within their business, assess the likelihood and impact of the risks identified, and review how those risks are currently managed. After the initial risk assessment and a review of the organizational structure for managing risks, ERM best practices can be tailored to the needs, and matched with the size, of the company.

Heading into 2005, remember that implementing an ERM function at any time produces benefits. Even the best ERM program and most advanced risk metrics cannot prevent a risk event. The bright side is that an ERM function, even in its initial stage, is proof that management is taking a proactive approach toward managing business risks. This proof serves as evidence to both external and internal stakeholders that the company operates like a well-tuned engine and is adequately prepared to manage risks proactively and to respond promptly to risk events as they occur. In light of the heightened scrutiny and responsibility placed on the board and executive management to have complete knowledge and understanding of the risks of the business, ERM is essential. The ERM function could be the lifesaver that averts the legal implications of inadequate corporate management.


CHAPTER 8

Overview of Operational Risk Management at Financial Institutions

Linda Barriga

Risk and Policy Unit, Banking Supervision and Regulation, Federal Reserve Bank of Richmond

Eric S. Rosengren

Senior Vice President, Supervision, Regulation, and Credit, Federal Reserve Bank of Boston

Introduction: A Brief Historical Overview of Bank Capital Regulation

Over the past decade, significant advances in measuring and managing risk have revolutionized the role of risk management. Increasingly, firms are using internal models to quantify risks and determine whether risk-adjusted returns are sufficient to justify the capital needed to support their activities. Some of the most significant advances have occurred in the banking industry, where the increasing complexity and size of financial institutions make it critical to accurately measure risk. Banks that span a variety of activities have increasingly used enterprise risk management to aid in setting managerial incentives and compensation, making investment decisions, and making internal evaluations of the performance of diverse business lines.

While the movement to quantify enterprise risk has grown rapidly, the response to incorporate these innovations in bank regulations has moved much more slowly. Since the early 1990s, banks in the United States have followed an international capital framework for maintaining minimum capital requirements, which was developed by the Basel Committee on Banking Supervision. At the time, the Basel I agreement was a major breakthrough, providing a more level playing field for financial institutions that were competing globally. While the Basel I capital requirements increased the capital cushion, particularly from the levels maintained by internationally active banks during the mid-1980s, they incorporated only very crude proxies for risk. In general, these requirements were intended as a rough proxy for the credit risk of a bank, incorporating differing capital requirements for different types of asset categories. For example, a bank's minimum capital requirement for commercial loans was 8%, but only 4% for home mortgages.
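The asset-class ratios in that example apply mechanically; a tiny sketch (the portfolio figures are invented for illustration):

```python
# Basel I-style minimum capital ratios by asset class, per the example above.
CAPITAL_RATIOS = {"commercial_loans": 0.08, "home_mortgages": 0.04}

def minimum_capital(portfolio):
    """Sum each exposure times its required capital ratio."""
    return sum(CAPITAL_RATIOS[asset] * amount
               for asset, amount in portfolio.items())

# $100M of commercial loans and $200M of home mortgages:
# 0.08 * 100M + 0.04 * 200M, i.e. $16M of required capital.
print(minimum_capital({"commercial_loans": 100e6, "home_mortgages": 200e6}))
```

The rigidity of this lookup table is the crudeness the chapter describes: two commercial loans of very different credit quality attract exactly the same charge.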

Although Basel I promoted improved risk management, banks' internal economic capital models began diverging from Basel I's static regulatory capital framework. As a result, many of banks' safest assets were moved off the balance sheet through asset securitizations, because the capital requirements tended to be too high for low-risk assets. In addition, these requirements were only very crude proxies for credit risk, and banks' own internal models were far superior measures of credit risk. Finally, most banks had expanded their enterprise risk management to capture other risks, particularly operational risk.

Under the new Basel Capital Accord, Basel II, internationally active banks would be expected to calculate capital requirements using many of the techniques currently employed by best-practices global banks. The revised capital requirements would promote greater risk sensitivity, more accurately reflect the risk of off-balance-sheet assets, and include a capital charge for operational risk. While the new regulations are expected to cause banks to hold capital more in line with their risks, they are also intended to promote best practices in risk management, since the possible systemic implications of a failure of a large international bank have grown with the globalization of banking markets.

While Basel II devotes significant attention to the credit risk posed by banks' on-balance-sheet and off-balance-sheet activities, this paper focuses on operational risk. Not only have the recent innovations in operational risk been particularly dramatic, but appropriately measuring operational risk is a challenge facing many firms and may be particularly important in the electric utility industry.

This paper discusses several areas of operational risk management and quantification. Section II describes how operational risk is defined by the new regulations and how these definitions are being employed by banks. The standardization in the nomenclature of operational risk has greatly advanced the design of databases that have facilitated peer analysis and the use of external data. This section also describes banks' internal operational loss databases and how they can be utilized to measure operational risk. In addition, a heuristic description of some of the statistical modeling techniques is given, leaving the more mathematically inclined to refer to the references. Section III discusses the challenges in relying solely on internal data to measure operational risk, and how banks are augmenting their internal data with external data, scenario analyses, qualitative risk adjustments, and risk-mitigation techniques. Section IV discusses areas not covered by Basel II and some of the challenges facing banks and their supervisors. The final section describes how other industries can also benefit from operational risk management and quantification methodologies.

Overview of the Current Proposal

Definition of operational risk

Prior to the Basel II proposal, one of the impediments to quantifying operational risk was the lack of a common definition. Not only did the definition of operational risk differ across different banks, but frequently it differed across business lines within the same bank, as operational risk was often left to the business lines to manage. The operational risk definition used in Basel II was produced through extensive consultation with the industry and is defined by the Basel Committee as “the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events.” This definition includes legal risk, but excludes strategic and reputational risk, where direct losses would be more difficult to ascertain.

Banks complying with Basel II will be expected to map their internal loss data to specific Basel-defined loss event types and business line classifications.1 Basel II characterizes operational risk losses by seven event factors, which include Internal Fraud and Employment Practices and Workplace Safety.2 In addition, banks' activities are divided into eight business lines, which include trading and sales and retail banking. These classifications give a sense of the scope of operational risk exposure facing the industry, as operational losses can occur in any activity, function, or unit of an institution (Table 8.1).

While large losses have occurred in all business lines and across all event types, there are distinct differences among them. For example, retail banking tends to experience high-frequency, low-severity losses created by check-kiting and credit card fraud. However, even in retail banking there have been high-severity losses, primarily stemming from class-action lawsuits. At the other extreme are losses in payment and settlement that happen infrequently but often result in severe losses, such as the failure of a major computer system.

Examples of large operational losses are widespread, and discussions of large operational loss events occur frequently. In fact, more than 100 instances of operational losses in excess of $100 million have occurred at financial institutions over the past decade. Table 8.2 provides some recent examples of major operational losses in the financial services industry. These examples highlight the magnitude as well as the scope of operational loss events.

Because of the large losses that have occurred as a result of operational risk, many internationally active banking organizations have been allocating internal economic capital for operational risk for some time. In a survey conducted by the Risk Management Group, a subcommittee of the Basel Committee, banks reported holding 15% of their capital for operational risk. In addition, some banks have begun to report the amount of capital held for operational risk in their financial reports. For example, Deutsche Bank reported holding 2.5 billion euros and JP Morgan Chase reported holding $6.8 billion for operational risk.

Elements of the advanced measurement approach

The current proposal in the United States requires only large, internationally active banking organizations to be subject to the advanced risk and capital measurement approaches, including a specific capital charge for operational risk. These institutions are identified as core banks and are those with total banking assets of $250 billion or more or total

Overview of operational risk management at financial institutions 121

1 While an institution would not be required to internally manage its operational risk according to the Basel-defined loss event types and business line classifications, it would be required to map its internal loss data to these categories. See Table 8.1 for a full list of categories.
2 For a complete list of Basel-defined event types and their definitions, refer to Table 8.1.



Table 8.1. Loss event type definitions.

Loss event types (columns): Internal Fraud; External Fraud; Employment Practices and Workplace Safety; Clients, Products and Business Practices; Damage to Physical Assets; Business Disruption and System Failures; Execution, Delivery and Process Management.

Business lines (rows): Corporate finance; Trading and sales; Retail banking; Payment and settlement; Agency services; Commercial banking; Asset management; Retail brokerage.

Loss event type definitions:
Internal Fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent regulation, the law, or company policy, which involve at least one internal party.
External Fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent the law by a third party.
Employment Practices and Workplace Safety: Losses arising from acts inconsistent with employment, health, or safety laws or agreements, from payment of personal injury claims, or from diversity/discrimination events.
Clients, Products and Business Practices: Losses arising from an unintentional or negligent failure to meet a professional obligation to specific clients, or from the nature or design of a product.
Damage to Physical Assets: Losses arising from loss of or damage to physical assets from natural disaster or other events.
Business Disruption and System Failures: Losses arising from disruption of business or system failures.
Execution, Delivery and Process Management: Losses from failed transaction processing or process management, or from relations with trade counterparties and vendors.


on-balance-sheet foreign exposure of $10 billion or more. Non-core banks can choose to voluntarily calculate capital under the Basel II requirements if they meet certain requirements, including the ability to calculate capital using sophisticated credit and operational risk models. Implementation of Basel II in the United States differs from many foreign regulators' approaches, which will require all banks to calculate capital under the Basel II Accord but will provide simpler approaches for smaller institutions or institutions unable to qualify for the more advanced approaches.3

Under the advanced measurement approach (AMA), banks will need to incorporate five major elements into their operational risk quantification methodology. Institutions must demonstrate that they have collected adequate internal loss data, integrated relevant external data, conducted scenario analyses, performed appropriate statistical modeling techniques, and included assessments of their business environment and internal control factors. In order to use the AMA framework, banks must demonstrate that they have captured all elements comprehensively, and while all factors will be required, there will be significant flexibility in how institutions choose to integrate them.

Banks will need to collect internal loss data to capture their historical operational loss experience. In addition, banks will need to establish thresholds above which all internal operational losses will be captured. While the threshold for collecting loss data differs


Table 8.2. Recent examples of major operational losses in the financial services industry.

• Internal Fraud: Allied Irish Bank, Barings, and Daiwa Bank Ltd – $691 million, $1 billion, and $1.4 billion, respectively – fraudulent trading.

• External Fraud: Republic New York Corporation – $611 million – fraud committed by custodial client.

• Employment Practices and Workplace Safety: Merrill Lynch – $250 million – legal settlement regarding gender discrimination.

• Clients, Products and Business Practices: Household International – $484 million – improper lending practices; Providian Financial Corporation – $405 million – improper sales and billing practices.

• Damage to Physical Assets: Bank of New York – $140 million – damage to facilities related to September 11, 2001.

• Business Disruption and System Failures: Salomon Brothers – $303 million – change in computer technology resulted in “unreconciled balances.”

• Execution, Delivery & Process Management: Bank of America and Wells Fargo Bank – $225 million and $150 million, respectively – systems integration failures/failed transaction processing.

3 In addition to the AMA, foreign regulators are providing two simpler approaches to operational risk: the basic indicator and the standardized approaches, which are targeted at banks with less significant operational risk exposures. Banks using the basic indicator approach will be expected to hold capital for operational risk equal to a fixed percentage of the bank's average annual gross income over the previous 3 years. The standardized approach is similar, but rather than calculating capital at the firm level, banks must calculate a capital requirement for each business line and then sum the capital charges across the business lines to arrive at the firm's total capital charge. The capital charge for each business line is determined by multiplying gross income by specific supervisory factors determined by the Basel Committee.
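The two simpler approaches described in footnote 3 reduce to straightforward arithmetic. In this sketch the 15% alpha is the commonly cited Basel II value (the footnote itself says only "a fixed percentage"), and the per-line beta values are placeholders, not the official supervisory factors.

```python
ALPHA = 0.15  # fixed percentage under the basic indicator approach (assumed value)

def basic_indicator_charge(annual_gross_income):
    """Alpha times average annual gross income over the previous years."""
    return ALPHA * sum(annual_gross_income) / len(annual_gross_income)

# Placeholder supervisory betas per business line (illustrative only):
BETA = {"retail_banking": 0.12, "trading_and_sales": 0.18,
        "commercial_banking": 0.15}

def standardized_charge(gross_income_by_line):
    """Sum of beta times gross income across business lines."""
    return sum(BETA[line] * gi for line, gi in gross_income_by_line.items())

print(basic_indicator_charge([100.0, 120.0, 140.0]))        # 0.15 * 120
print(standardized_charge({"retail_banking": 200.0,
                           "trading_and_sales": 100.0}))    # 0.12*200 + 0.18*100
```

Neither approach uses the bank's own loss experience; that sensitivity to actual losses is what the AMA adds.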


across banks, the most common threshold has been $10,000. Table 8.3 provides an example of the type of format frequently used for capturing loss data. While the data collection process might seem straightforward, it is in fact quite difficult and costly. First, most banks have found that the general ledger did not capture major loss types, with operational losses often subsumed in broader business line categories. Thus they have chosen to supplement their general ledger-based data collection systems with a web-based platform whereby business units can directly report the occurrence of an operational loss. Banks can then reconcile losses reported via the web-based system with those captured in the general ledger. In addition, many operational losses can be difficult to classify by business line or by loss type.
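A toy sketch of the capture step: filter events against the $10,000 threshold mentioned above and total net losses by business line. The event records and codes below are invented, loosely echoing the layout of Table 8.3.

```python
from collections import defaultdict

THRESHOLD = 10_000  # a common collection threshold, per the text

# Hypothetical loss events: (event_type, business_line, loss, recovery)
events = [
    ("IF", "RB", 19_057.25, 0.0),
    ("EF", "RB", 8_420.00, 0.0),      # below threshold: not captured
    ("SY", "CF", 52_831.68, 3_433.0),
]

def capture(events, threshold=THRESHOLD):
    """Keep losses at or above the threshold; total net loss per business line."""
    totals = defaultdict(float)
    for _event_type, line, loss, recovery in events:
        if loss >= threshold:
            totals[line] += loss - recovery
    return dict(totals)

print(capture(events))  # RB keeps one event; CF nets out the recovery
```

The hard part in practice is not this filtering but the classification itself: deciding which business line and event type a messy real-world loss belongs to.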

While loss data collection is the most costly requirement of the AMA, it also provides the greatest payoff. Banks that have comprehensive loss data have found that operational risks can be much better mitigated once there is a greater awareness of the pattern of historical losses. Realizing where large losses are generated can encourage greater use of risk-mitigation techniques and changes in controls. For example, reducing high-frequency, low-severity losses by eliminating fraud or automating a process where human error is common can frequently improve profitability significantly.

The second element of the AMA is concerned with utilizing relevant external loss data. External data are particularly useful in understanding the industry's experience, especially in areas where a bank's internal loss history is limited. Most banks have limited historical data, and therefore some business lines or event types may have very few entries. To the extent that this reflects the short time period for collecting data, external data can provide insight into the high-severity losses that may occur but have not yet occurred at the bank.

There are several sources for obtaining external operational loss data. Commercial vendors have created operational loss databases using publicly disclosed information such as Securities and Exchange Commission (SEC) filings and press reports. While this method of gathering external data can result in a reporting bias in terms of the types of losses that are publicly reported, it nonetheless provides a sobering account of how large losses can occur.4

Some insurance companies have also begun to sell loss data based on insurance claims. While these data also have reporting biases, based on the firm's insurance business and its incentives to file a claim, they capture losses that may not appear in other public sources.

The third element of the AMA deals with the use of scenario analyses to consider possible losses that have not occurred but could occur at the bank.5 An example might be to estimate the damages resulting from a hurricane for a bank in Miami or an earthquake for a bank in California, and derive reasoned assessments of the likelihood and impact of these operational loss events. These scenarios should provide losses that risk managers think are possible, but that occur too infrequently to appear in the internal data.


4 For an example of using external data to quantify operational risk, refer to de Fontnouvelle et al. (2003) at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=395083
5 Scenario analysis is a systematic process of obtaining expert opinions from business line managers and risk management experts concerning the likelihood of possible operational loss events occurring.

Ch08-I044949.qxd 5/23/06 12:04 PM Page 124

Page 124: Managing Enterprise Risk

Overview of operational risk management at financial institutions 125

Table 8.3. An example of the type of format frequently used for capturing loss data.

Event #  Event     Event     Date    Cost    Business      Loss  Recoveries  Insurance  Event
         Code (1)  Code (2)          Center  Line                                       Description
-------  --------  --------  ------  ------  --------  --------  ----------  ---------  -----------
      1  IF        12        960116  10003   RB        19057.25        0.00   19057.25
      2  EF        31        960116  20003   RB        40905.04        0.00   40905.04
      3  SY        22        960116  33890   CF        10194.55     3433.00   10194.55
      4  SY        11        960119  45359   CF        52831.68        0.00   52831.68
      5  PD        11        960120  11101   CB        36558.11        0.00   36558.11
      6  IF        32        960120  10003   PS       620537.37        0.00  620537.37
      7  IF        22        960122  20203   AS        10181.69        0.00   10181.69
      8  EF        31        960122  19767   AS        24783.17    13556.00   24783.17
      9  EE        17        960122  19332   TS        11963.49        0.00   11963.49
     10  EE        27        960122  18897   AS        20086.56        0.00   20086.56
    ...  ...       ...       ...     ...     ...       ...       ...          ...
   2701  UA         8        960146  10003   RB        14451.49        0.00   14451.49
   2702  UA         3        960148  10003   RB        11010.46        0.00   11010.46
   2703  WS        17        960150  33890   CF        24681.18        0.00   24681.18
   2704  SF        26        960152  23223   AM        17963.66    16963.66   17963.66


The fourth element of the AMA pertains to the use of statistical techniques to integrate the internal data, external data, and scenario analyses. The loss distribution approach is the most common approach and uses standard actuarial techniques borrowed from the insurance industry to model the behavior of a firm's operational losses. The loss distribution approach produces an objective estimate of a firm's expected and unexpected losses through frequency and severity estimation. This approach has three components, which are shown in Figure 8.1. First, a frequency distribution is estimated from the data that models how often losses occur. Second, a severity distribution is estimated that captures, conditional upon a loss occurring, how severe the loss is. Once the loss severity and loss frequency distributions have been modeled separately, they are combined via a Monte Carlo simulation or other statistical technique to form a total loss distribution for a 1-year time period.6

The loss distribution generated represents the full range of possible total operational lossesthat could be experienced in any given year. The distribution is then used to determine the


Figure 8.1. Loss distribution approach. [Three density plots: a frequency distribution of the number of loss events per year; a severity distribution of the dollar value of a loss event; and the resulting distribution of total operational loss over a 1-year time horizon, marking the expected loss (about $25 million in this example) and the unexpected loss at the 99.9th percentile (about $250 million).]

6In the case of a Monte Carlo simulation, the first step is to draw a random sample from the loss frequency distribution. For example, one selection may be a frequency of four events. This value is then used to determine the number of events to be randomly drawn from the corresponding severity distribution. For example, we might simulate 4 events of size 11,250, 14,500, 103,545, and 250,000. These severity samples are then summed together to generate one point on the total loss distribution. This process is repeated numerous times, and the observed total loss points are then fit to a curve that best describes the underlying pattern of total loss occurrences. This curve will allow extrapolation from the data points to determine the capital required at any given percentile.
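The simulation loop described in this footnote can be sketched in a few lines of Python. This is an illustrative sketch only: the Poisson frequency and lognormal severity parameters below are invented for the example, not drawn from any bank's data.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's method for a Poisson variate with mean lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_total_losses(n_trials, mean_events, sev_mu, sev_sigma, seed=0):
    """Monte Carlo version of the loss distribution approach: draw a
    frequency (number of events), then that many lognormal severities,
    and sum them to get one point on the total annual loss distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        n_events = poisson_draw(rng, mean_events)
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_events)))
    return totals

def capital_at(totals, q=0.999):
    # capital covering expected plus unexpected loss at the soundness standard
    ordered = sorted(totals)
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

totals = simulate_total_losses(20_000, mean_events=25, sev_mu=10.0, sev_sigma=2.0)
capital = capital_at(totals, 0.999)
```

Each element of `totals` is one simulated year; `capital` is the empirical 99.9th percentile of the simulated annual loss, the quantity the soundness standard discussed below refers to.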


level of capital required at a desired percentile, or soundness standard. If the soundness standard were 99.9% as shown in Figure 8.1, the capital that would capture expected and unexpected losses in the example would be $250 million. Note that the distributions tend to be skewed and are not symmetric. In particular, the loss distributions are heavy-tailed due to the large losses in the data. The larger the tail implied by the data, the larger the capital that the bank would be expected to hold.

As Figure 8.1 shows, operational losses tend to exhibit "fat tails"; that is, high-severity losses occur more frequently than one would expect if one assumed that losses were distributed normally. The fatter the tail, the more capital the bank would hold for infrequent but severe types of losses. The amount of capital held for operational losses is significantly impacted by potential high-severity losses, and therefore estimation of the tail of the distribution becomes very important. However, high-severity losses occur relatively infrequently in an individual bank's loss data, making the distributional assumptions, use of external data, and scenario analyses critical to obtaining good estimates of possible tail events.
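The sensitivity to the distributional assumption can be made concrete with a small comparison. The sketch below, using invented lognormal parameters, computes the probability of a single loss exceeding $10 million under a fat-tailed (lognormal) assumption and under a normal distribution matched to the same mean and variance.

```python
import math

def normal_tail(x, mean, sd):
    # P(X > x) for a normal distribution, via the complementary error function
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2.0)))

def lognormal_tail(x, mu, sigma):
    # P(X > x) for a lognormal distribution (log X is normal: mean mu, sd sigma)
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2.0)))

# Illustrative severity: lognormal with mu=10, sigma=2 (made-up parameters)
mu, sigma = 10.0, 2.0
mean = math.exp(mu + sigma ** 2 / 2)                        # matched mean
var = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)  # matched variance

# Probability of a single loss exceeding $10 million under each assumption
p_fat = lognormal_tail(10_000_000, mu, sigma)
p_normal = normal_tail(10_000_000, mean, math.sqrt(var))
```

With these parameters the fat-tailed model assigns the $10 million event a vastly higher probability than the matched normal does, which is exactly why the tail assumption drives the capital number.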

The final element of the AMA is to incorporate more qualitative factors into the operational risk management model. Qualitative factors incorporate a forward-looking element into an institution's operational risk profile and include audit scores, risk and control assessments, key risk indicators, and scorecards. These forward-looking measures can require more capital to be held where significant findings occur. For example, key risk indicators attempt to quantify the drivers of operational losses, such as employee turnover statistics or transaction volumes, which are not captured in historical operational loss data. Once these indicators are identified and tracked over time, management can analyze the data to determine where the major risks lie within the institution. Tying qualitative factors to an institution's internal loss experience ensures that operational risk is managed according to the institution's actual risk.

Insurance as a risk mitigant

For some time, institutions have been using a variety of insurance products to reduce the economic impact of unexpected losses due to operational risks. Insurance should be an ideal mitigant for operational risk because insurers have the ability to achieve greater diversification than individual firms. As part of the new accord, the Basel Committee will allow banks to recognize the risk-mitigating impact of insurance in the measure of operational risk used for calculating regulatory capital requirements.

Although insurance is a well-established risk management tool that has been used by the banking sector for years, insurance policies have a number of potential problems. First, insurance converts operational risk into credit risk, since an insurer may not be able to pay off a claim. Second, insurers may terminate or decline to renew policies if they encounter significant claims. Third, large claims often face legal challenges that affect the timeliness and certainty of the insurance being paid. As a result of these shortcomings, the Basel Committee will limit the reduction in operational risk exposure that banks can recognize from insurance to 20%. In addition, a bank's ability to take advantage of such risk mitigation will depend on compliance with a set of qualifying criteria for insurance policies.
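The 20% limit amounts to a simple floor on the capital charge. A minimal sketch (a hypothetical helper, not the regulatory calculation itself):

```python
def capital_after_insurance(gross_charge, recognized_insurance, cap=0.20):
    """Basel II-style limit on insurance recognition: insurance may reduce
    the operational risk charge, but by no more than `cap` (here 20%) of
    the gross charge. Illustrative sketch only."""
    max_reduction = cap * gross_charge
    return gross_charge - min(recognized_insurance, max_reduction)

# A $100M gross charge with $35M of recognized cover is still floored at $80M;
# with only $10M of cover, the full offset is allowed.
```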


Current Implementation Issues

Most large internationally active banks have made significant progress in creating operational risk loss databases. While costly, the implementation of internal loss databases often generates immediate benefits, as management is able to observe patterns of operational losses and begin to take corrective actions in managing losses more effectively. The most sophisticated banks have the ability to model their exposure to operational risk based on internal data and allocate operational capital to their business lines. These banks tend to be of sufficient size to have high-severity operational losses in their business. They are also using this allocated capital in making compensation and investment decisions. However, integrating scenario analyses, qualitative adjustments, and insurance adjustments into the models remains a work in progress even at the most sophisticated banks.

For medium-size banks, having limited internal data can pose problems for effectively using comprehensive modeling techniques. Many of these banks have very few high-severity losses, which implies that they cannot rely primarily on internal data when modeling many of the business lines and event types. Some banks have focused on using external data, assuming their own processes are not dramatically different from their competitors'. Other banks view their control systems as being sufficiently different and prefer utilizing scenario analyses that can be tailored to the business activity of their bank.

Having limited high-severity events makes statistical modeling more difficult. To deal with this issue, institutions have been experimenting with alternative techniques. Some institutions have been using fat-tailed distributions to quantify their operational risk exposure and generate their capital charge.7 However, with limited data it is difficult to reject alternative distributional assumptions, some of which imply a significant impact on capital.8

Other institutions have experimented with extreme value theory, an alternative to the loss distribution approach described earlier that focuses on estimating the tail of the distribution. Extreme value theory provides the basis for modeling extreme events that are rare but have significant consequences for institutions. Again, with limited data it is difficult to verify parameter estimates, and implausible estimates can sometimes be generated from small data sets. However, extreme value theory is designed to produce more precise estimates of low-frequency, high-severity events, in particular capturing losses over a certain high threshold. While the application of extreme value theory in operational risk modeling is still in its early stages, the initial work in this area seems very promising.
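One classical extreme value technique is the Hill estimator of the tail index, combined with a Weissman-style extrapolation of a quantile beyond the observed data. The sketch below is an illustration under the assumption of a Pareto-type tail, checked against synthetic Pareto data; it is not any particular bank's methodology.

```python
import math
import random

def hill_tail_index(losses, k):
    """Hill estimator of the tail index from the k largest losses.
    A smaller index means a fatter tail. Illustrative EVT sketch only."""
    ordered = sorted(losses, reverse=True)
    threshold = ordered[k]  # the (k+1)-th largest loss
    mean_log_excess = sum(math.log(ordered[i] / threshold)
                          for i in range(k)) / k
    return 1.0 / mean_log_excess

def tail_quantile(losses, k, q):
    """Weissman-style extrapolation of a high quantile beyond the sample,
    assuming a Pareto-type tail above the threshold."""
    n = len(losses)
    ordered = sorted(losses, reverse=True)
    alpha = hill_tail_index(losses, k)
    return ordered[k] * (k / (n * (1.0 - q))) ** (1.0 / alpha)

# Check on synthetic data: Pareto losses with true tail index 2,
# generated by inverse-transform sampling
rng = random.Random(0)
sample = [(1.0 - rng.random()) ** -0.5 for _ in range(5000)]
alpha_hat = hill_tail_index(sample, k=500)
```

The point of the extrapolation in `tail_quantile` is exactly the problem raised in the text: estimating, say, a 99.9th-percentile loss from a sample that contains few or no losses that extreme. The estimate is sensitive to the choice of `k`, which is one reason small data sets can produce implausible results.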

As a result of these data issues, most medium-size banks have not rolled out comprehensive capital measurement models. In addition, they have not integrated qualitative adjustments or insurance into their models. However, they have found the mining of internal data extremely useful in establishing patterns in operational losses that can be managed and mitigated.

Many of these banks have tended to focus on scorecard approaches that utilize loss distribution techniques to obtain the overall operational risk capital. This usually involves providing management with questions on how many losses they might anticipate over the next year and comparing these losses to the firm's historical data as well as the industry's experience. In addition, for the more severe outcomes management is asked to produce scenarios that could generate the high-severity losses.

7Fat-tailed distributions tend to have more observations in the tail and to be thinner in the mid-range than a normal distribution. Fat-tailed distributions include the lognormal, Pareto, and Weibull distributions.
8See de Fontnouvelle et al. (2006).

The new Basel II proposal anticipated the need to tailor operational risk capital models to each institution and provides significant flexibility. The proposal is not prescriptive and therefore gives banks the ability to choose the techniques that fit their specific institution. Thus, some institutions have capital models that are very analytical and primarily utilize internal data, while others use much more judgment-based models and are far more reliant on external data and scenario analyses. This flexibility for operational risk differs from the proposal's treatment of credit risk, where the distributional assumptions are embedded in the benchmark formulas, and substantial modeling details are built into the proposed regulations.

Challenges in Implementing Operational Risk Models

The flexibility of the operational risk proposal is appropriate given the diversity of approaches used by banks to manage risk. Nonetheless, this flexibility presents challenges to consistent supervisory implementation. While banks are focused on internally consistent models, consistent supervisory treatment will require a cross-industry perspective. Significant challenges to benchmarking banks will therefore need to be overcome.

One challenge facing supervisors is the inconsistent classification of operational losses, which complicates industry-wide analyses as well as cross-institution comparisons. Banks' internal operational loss data are collected based upon rules set up by corporate-wide risk managers. However, the classification of loss data can be quite difficult, and reasonable individuals may classify the same event in different business lines or event types. This inconsistency becomes clear when examining external loss data, where the same loss events are often classified differently by different vendors. In addition, the structure of the data collection may differ. This is particularly true for centralized functions like human resources and information technology. A system failure at one bank may be included in an administrative account and then allocated by number of system users, while another bank might assign losses from system failures to the business line where the majority of the loss occurred. Such differences complicate the process of making comparisons across institutions.

Differences in quantification techniques will also pose challenges for supervisors. Differences may occur because control environments and business activities vary across banks, or alternatively, may just reflect the problems of estimating from small samples. Until significant data have been gathered, statistical tests may have difficulty distinguishing between alternative distributional assumptions or different modeling choices.

Scaling data is another problem facing banks and supervisors. Banks have experienced a significant wave of mergers that makes merging historical data problematic. Reconciling loss data between entities is likely to be time consuming and expensive. In addition, as an institution changes, the appropriate way to scale historical data is uncertain. In some business activities, losses may rise little with additional business volume, while in other activities they may be proportional to the business volume.


As institutions currently have only limited internal operational loss data, and do not have historical data on key risk indicators or metrics for the control environment, most of the modeling to date has concentrated on statistical models that rely primarily on internal loss data; causal modeling is not yet possible. However, with improvements in data collection and the management of operational risk, it should be possible to improve the statistical modeling currently being done at most banks.

The process of integrating operational risk into enterprise risk models is likely to evolve. Currently the modeling of operational risk tends to be distinct from credit and market risk modeling. However, over time, institutions should develop models that better capture the interaction of these risks. In addition, many institutions are conducting preliminary studies on modeling reputational risk. Many reputational risks are generated by operational risks, yet this interaction is not captured in the capital requirements. Recent experiences from Arthur Andersen and Enron have focused management's attention on the need to consider reputational risk when thinking about the operational risk environment.

Finally, strategic risk should be a major risk captured by management, but it is not incorporated into the capital requirements. Changes in the competitive environment, in economic circumstances, or in customer behavior can significantly impact banks, but these are currently not captured in many enterprise risk management models.

Despite the many hurdles in developing a full economic capital model for operational risk, significant changes have occurred over the past several years. Most large banks are now systematically collecting and analyzing operational loss data. In addition, most banks have also introduced some quantitative modeling and integration with qualitative measures. A few banks have also rolled out comprehensive operational risk management programs that can be used to quantify operational risk, allocate capital by business lines so that it can be used for compensation and investment decisions, and calculate capital for operational risk along the requirements of the Basel II proposal. Given the resources being spent and the progress made to date, many large banks should be ready for Basel II once the proposal has been finalized.

Application to Other Industries

Discussion of a possible explicit capital charge for operational risk has provided a significant boost to the banking industry's efforts to quantify operational risk. While the largest banks were already trying to quantify operational risk for their internal economic capital models prior to the Basel proposal, the regulatory discussion has spurred the industry to develop programs more quickly, and to apply them to a broader set of banks than likely would have occurred in the absence of the Basel proposal.

While the regulatory impetus has caused banks to develop more quantifiable operational risk programs, operational risk quantification techniques are no less relevant in other industries. Many of the loss event types would apply to any industry, such as Damage to Physical Assets; Employment Practices and Workplace Safety; Clients, Products, and Business Practices; and Business Disruption and System Failures. Other categories, such as Execution, Delivery, and Process Management, may appear less frequently in non-transaction-oriented industries. Similarly, the frequency and severity of losses may differ across industries. For example, ice storms can be very disruptive for electric utilities, but are not of particular concern in the banking industry.

While the nature of losses may differ, most of the AMA consists of applying risk management techniques that are applicable to any industry. First, virtually any firm can benefit from collecting operational loss data, thereby enabling it to measure and manage operational risk. Without data it is very difficult to manage a risk, since it cannot be measured. Most banks that have created operational loss databases have been surprised by the size and distribution of these losses. Almost all banks have made adjustments to their management of operational risk once they have better understood their loss experience. Similar benefits are likely to occur in other industries.

Second, the governance of large diversified firms places a premium on identifying risk. A well-functioning operational risk management system should fit well with new regulations related to financial reporting, such as Sarbanes-Oxley. Having effective management information systems on operational risk will be crucial as senior management and boards of directors become more accountable for understanding and mitigating risks at their institutions.

Third, while many banks are focused on using statistical models, external data, and scenario analyses to measure operational risk capital, this capital is useful for purposes other than satisfying minimum regulatory capital requirements. The most effective risk management units use economic capital as an internal pricing mechanism for risk. Tying economic capital to business lines in a way that impacts investment decisions and compensation gets business lines actively engaged in thinking about the risk they pose to the larger organization.

Fourth, while most firms have qualitative operational risk management, often tied to key risk indicators, these indicators frequently have not been tested against loss experience. Management strategies that use risk indicators that are uncorrelated with loss experience can be counterproductive. Integrating qualitative adjustments into a broader operational risk framework ensures that risk indicators are tested against internal and external loss experience.

Finally, all firms and industries have experienced operational losses. Rarely does a week go by without the discovery of a major fraud or lawsuit that results in losses in excess of $100 million in some industry. The statistical regularities found in the banking industry's loss experience, and the major management innovations that have occurred to date, indicate that other industries may well be underinvested in thinking about operational risk.

Conclusion

Operational risk is a substantial and growing risk facing firms, due to the increased dependence on automated technology, the growth of e-commerce, and the increased prevalence of outsourcing. External data and internal data provided by banks have shown that


operational losses are extensive. This reality encouraged many banks to begin allocating capital for operational risk prior to the Basel II process. As banks and bank supervisors watched developments at the largest banks, it became clear that risk management could be improved with a more systematic approach towards operational risk.

The Basel II proposal provides a flexible regulatory environment for quantifying operational risk. This flexibility reflects the differences in operational loss experiences across business lines and the early stage of development in quantifying operational risk at many banks. Having a flexible regulatory environment provides banks with an opportunity to emphasize those quantification techniques most appropriate for the management of operational risk at their institution, given the nature of their activities, business environment, and internal controls.

While the flexibility of the AMA allows a competition of ideas to establish best practices in the management of operational risk, it also creates supervisory challenges. Since the proposed capital calculation is not solely designed for internal purposes, but also to meet minimum regulatory thresholds, consistency of application across institutions will be an important issue that needs to be addressed. In addition, supervisors will need to understand statistical modeling issues as well as the nature of operational risk in each bank's business lines. Similarly, having sufficient supervisory staff capable of understanding intricate risk management models will be a challenge, particularly as these skills will be in high demand in the private sector.

While the proposed capital regulation has encouraged banks and supervisors to better understand operational risk quantification, there is more to managing operational risk than quantification alone. Sound practices extend beyond numbers, and quantification is a tool to be integrated with a good internal control environment and a management structure that encourages risk management. A strong risk management culture that encourages a greater understanding of an institution's exposure to risk is the single most important element of any move to measure, manage, and mitigate operational risk at any institution.

Bibliography

Basel Committee on Banking Supervision, Working Paper on the Regulatory Treatment of Operational Risk, 2001.

Basel Committee on Banking Supervision, Sound Practices for the Management and Supervision of Operational Risk, 2003.

Basel Committee on Banking Supervision, The New Basel Capital Accord, 2003.

De Fontnouvelle, Patrick, John Jordan, and Eric Rosengren, Implications of Alternative Operational Risk Modeling Techniques, 2003. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=39508. Revised paper forthcoming in Journal of Money, Credit and Banking (2006).

De Fontnouvelle, Patrick, Virginia DeJesus-Rueff, John Jordan, and Eric Rosengren, Using Loss Data to Quantify Operational Risk, 2003. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=556823. Revised paper forthcoming as a chapter in Risks in Financial Institutions, NBER (2006).


Federal Deposit Insurance Corporation, Federal Reserve System, Office of the Comptroller of the Currency, and Office of Thrift Supervision, Advance Notice of Proposed Rulemaking – Risk-Based Capital Guidelines; Implementation of New Basel Capital Accord, 2003.

Federal Deposit Insurance Corporation, Federal Reserve System, Office of the Comptroller of the Currency, and Office of Thrift Supervision, Supervisory Guidance on Operational Risk Advanced Measurement Approaches for Regulatory Capital, 2003.


CHAPTER 9

The Application of Banking Models to the Electric Power Industry: Understanding Business Risk in Today's Environment

Karyl B. Leggio

Henry W. Bloch School of Business and Public Administration, University of Missouri at Kansas City, Kansas City, MO, USA

David L. Bodde

International Center for Automotive Research, Clemson University, Clemson, SC, USA

Marilyn L. Taylor

Department of Strategic Management, University of Missouri at Kansas City, Kansas City, MO, USA

Introduction

Investors, Boards of Directors, and strategic planners are responsible for oversight in a corporation and, consequently, need to understand more fully the extent and character of business risk. Yet the complexity of many industries makes this task difficult. A thorough understanding of the risk factors that cause a firm's earnings to vary will enhance the Board's, and management's, ability to anticipate competitive, environmental, regulatory, and legislative changes and their impact upon the firm. In an era when firms are being called upon to meet increasing financial expectations, managing risk, and thus stabilizing earnings, becomes critical.

Currently many firms consider the timing and riskiness of anticipated cash flows in their project approval decision processes. However, existing discounted cash flow (DCF) models do not go far enough in quantifying risk. Recently, firms began implementing enterprise risk management (ERM) systems to help manage business risk. However, many risks faced by a firm are difficult to quantify using an ERM system. Additionally, managing risk is more than protecting shareholders from downside risk; risk management can be a powerful tool for improving business performance, since risk arises from missed opportunities as well as from threats to earnings stability (Lam, 2000).

The goal of this narrative is to move toward an enhancement of ERM models by more thoroughly discussing what the risks are, what the sources of risk are, and how to improve our capabilities for identifying and responding to these risks. A better understanding of risk will stem from an enhanced understanding of what is known, unknown, and unknowable in a firm's operations. Specifically, we will develop a framework to identify more accurately the business risks faced by firms in the electric power industry by utilizing scenario analysis and contingency planning.

We will begin by looking at advancements in risk management in the banking industry. Much of the power industry's current thinking on risk management can be traced to risk management modeling in banking, most specifically stemming from the Basel Capital Accord. We will look at banking requirements for risk management and their applicability to electric power. We will then look at traditional DCF models and their shortcomings, and move to a discussion of real option analysis and ERM. We will discuss the application of ERM modeling in the electric power industry. The primary risks that businesses look to manage fall into the following broad categories: credit risk, such as counterparty exposure; operational risk, associated with human error or outright fraud; market risk, stemming from exposure to swings in interest rates, foreign exchange rates, and commodity prices; and business risk, arising from competitive factors that impact costs, sales, and pricing policies. We will look at the importance of considering the:

● known risk exposures in the industry, and how Boards of Directors and executive teams can best manage these risks;

● unknown risks that are knowable with new technologies, additional research, or a shift in resources to aid in making the unknown known; and finally,

● unknowable variables that impact a firm, which no amount of research or resources deployed will make known at this time.

We will conclude with a discussion of contingency planning and scenario analysis and discuss how these techniques can be used to illuminate, and possibly reduce, the unknown risks faced by businesses today.

The Banking Industry

Most ERM models can trace their roots to the banking industry; in fact, banks have been at the forefront of risk management for the past 25 years. These models of risk management came from the 1988 Basel Capital Accord, the product of the Basel Committee on Banking Supervision.

The application of banking models to the electric power industry 135



The 1988 Basel Capital Accord required international banks to hold capital equal to a predetermined percentage of the bank's assets (Darlington et al., 2001). A key outcome of this approach was the Value at Risk (VaR) metric for assessing a bank's risk and capital requirements.

VaR measures the likelihood, under normal market conditions, that the institution will experience a loss greater than $X. It is typically calculated on a daily basis and is usually based on a 95% or 99% confidence level. In other words, a bank is able to calculate, for example, that it is 99% confident that losses will not exceed $20 million on any given day. The advantage of VaR is that it calculates one number to quantify a firm's risk. Bank management can then decide whether they are comfortable with that level of risk exposure and whether their portfolio will generate adequate returns given this level of risk. The disadvantage also lies in VaR's simplicity. Typical banks are exposed to a multitude of risks from numerous sources. Many of these risks are difficult to quantify, so risk managers make approximations. These approximations can lead to inaccurate calculations of the bank's true risk exposure. To enhance the risk assessment of banks, the Basel Committee released a proposal in 1999 to replace the 1988 Accord.
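A daily VaR of this kind can be read directly off a history of daily profit and loss. The sketch below uses the historical-simulation method on invented P&L data; it is an illustration, not any bank's production calculation.

```python
import random

def historical_var(daily_pnl, confidence=0.99):
    """One-day Value at Risk from a history of daily profit-and-loss:
    the loss level that is not exceeded on `confidence` of days.
    A sketch of the historical-simulation method."""
    losses = sorted(-x for x in daily_pnl)  # positive values are losses
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# Illustrative: 1,000 simulated trading days (made-up P&L parameters)
rng = random.Random(7)
pnl = [rng.gauss(0.5, 10.0) for _ in range(1000)]
var_99 = historical_var(pnl, 0.99)
var_95 = historical_var(pnl, 0.95)
```

Management would read `var_99` as "we are 99% confident that losses will not exceed this amount on any given day," exactly the statement in the paragraph above.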

The original Accord applied the same risk metric standards to all banks. Over time, and separately, banks began developing increasingly sophisticated internal risk measurement metrics. The banking supervisors have come to realize that alternatives to VaR for measuring risk may be more appropriate depending on the nature of each bank's primary business focus. Therefore, the 1999 New Basel Capital Accord for banks' capital requirements allows for alternative risk and creditworthiness metrics.

The New Accord’s goal is to more closely align regulatory capital requirements withunderlying firm-specific risks while providing bank managers options for assessing capi-tal adequacy. The proposal is based upon three pillars to evaluate risk: minimum capitalrequirements, supervisory review, and market discipline.

According to William J. McDonough, Chairman of the Basel Committee and President and Chief Executive Officer of the Federal Reserve Bank of New York, “This framework will motivate banks to improve continuously their risk management capabilities so as to make use of the more risk-sensitive options and, thus, produce more accurate capital requirements” (Update on the New Basel Capital Accord, 2001). The first pillar of the New Accord allows banks to replace the VaR metric with alternative risk measurement metrics. Also, in addition to evaluating a bank’s credit and market risk exposure, the New Accord requires banks to account for, and reserve capital for, their operational risk.

The second pillar of the New Capital Accord requires supervisory oversight to validate the internal risk measurement processes at each bank and to assure that the reserve capital is adequate given the level of risk at each bank. Finally, the third pillar focuses on market disclosure. The goal of this pillar is to improve the transparency of each bank’s capital structure, risk exposures, and capital adequacy, with the objective of enhanced market discipline for banks.

Many firms in the electric power industry have added retail power businesses; some analysts claim these firms now look very similar to banks, with similar exposure to credit, market, and operational risk. As a result, an industry has grown up of firms developing and

The application of banking models to the electric power industry 137

implementing ERM systems designed especially for energy firms. And, as the banking industry discovered, VaR is not a sufficient metric to capture all the risks to which an energy firm is exposed. Alternative risk metrics such as risk-adjusted return on capital (RAROC) and Capital at Risk (CaR) are now common calculations in the energy industry. The goal of an ERM system is to consistently and accurately capture all of a firm’s risk exposures and determine what level of capital is required to maintain the firm’s credit rating. Advances in banking risk management will lead to the development of improved risk metrics in the energy industry. Thus, the energy industry will continue to monitor outcomes from the Basel Accords and other bank regulatory changes.
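RAROC, in its common textbook form, divides a risk-adjusted return by the economic capital an activity puts at risk; firms differ in the exact adjustments they apply. A sketch with invented figures:

```python
def raroc(revenue, costs, expected_loss, economic_capital, risk_free_rate=0.04):
    """Risk-Adjusted Return on Capital: risk-adjusted net income over the
    economic capital reserved against the activity (a common formulation;
    actual implementations vary in the adjustments applied)."""
    risk_adjusted_return = (revenue - costs - expected_loss
                            + risk_free_rate * economic_capital)
    return risk_adjusted_return / economic_capital

# A hypothetical energy trading desk:
desk_raroc = raroc(revenue=50e6, costs=20e6, expected_loss=5e6,
                   economic_capital=100e6)
```

The desk earns 29% on its economic capital under these assumptions; management would compare that figure against a hurdle rate reflecting the firm’s cost of capital.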

DCF Techniques

Firms consider the risk of new investments prior to undertaking a new project. The firm accounts for risk through the capital budgeting function. In capital budgeting decision-making, the goal is to identify those investment opportunities with a positive net value to the firm. DCF analysis is the traditional capital budgeting decision model. It involves discounting the expected, time-dependent cash flows to account for the time value of money and for the riskiness of the project via the calculation of a net present value (NPV). The NPV represents the expected change in the value of the firm if the project is accepted. The decision rule is straightforward: accept all positive-NPV projects and reject all negative-NPV projects. A firm is indifferent to a zero-NPV project, as no change in current wealth is expected.
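The NPV calculation and decision rule can be stated in a few lines (the cash flows and discount rate below are hypothetical):

```python
def npv(rate, cash_flows):
    """Net present value of time-indexed cash flows; cash_flows[0] occurs
    today (typically the negative initial outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $10M outlay, then $3M per year for five years,
# discounted at a 12% risk-adjusted rate.
project_npv = npv(0.12, [-10e6] + [3e6] * 5)
accept = project_npv > 0  # the DCF decision rule
```

Here the project adds roughly $0.8M of value and would be accepted; at a zero NPV the firm would be indifferent.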

Today, most academic researchers, financial practitioners, corporate managers, and strategists realize that, when market conditions are highly uncertain, expenditures are at least partially irreversible, and decision flexibility is present, the traditional DCF methodology alone fails to provide an adequate decision-making framework. Corporate investment practices have been characterized as myopic due, in large part, to their reliance on traditional stand-alone DCF analysis (Pinches, 1982; Porter, 1992). An alternative project valuation method is real options analysis (ROA).

Real options are options whose underlying asset is a real asset, not a financial asset. In general, a real option exists when management has the opportunity, but not the requirement, to alter an existing strategic investment decision. The most general or all-inclusive real option is the option to invest (Pindyck, 1991; Dixit and Pindyck, 1994). The analogy is to a financial call option: the firm has the right, but not the obligation, now or for some period of time, to undertake the investment opportunity by paying an upfront fee. For example, by purchasing an option on land, an energy firm acquires the option to invest in the design and development of a new power plant to be built on that land. As with financial options, the option to invest is valuable due to the uncertainty surrounding the underlying asset’s future value, where, in this case, the underlying asset is the power plant. The investment rule is to invest when the present value of the benefits of the investment opportunity is greater than the present value of its direct cost plus the value of keeping the option to invest “alive.” Begin building the power plant when the value of building the plant now exceeds the sum of the present value of the cost of building the power plant plus the value of keeping the option to build alive. The decision


to build will be based, in part, on the projected prices of energy and natural gas and the projected available supply of and demand for power in the region.
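A deliberately simple one-period model illustrates the "keep the option alive" comparison; every number below is invented. Building now captures the project's expected value, while waiting lets the firm build only if prices move favorably:

```python
def value_of_waiting(v_up, v_down, p_up, build_cost, rate):
    """Value of deferring an irreversible investment one period: the firm
    invests next year only in the state where doing so pays off."""
    payoff_up = max(v_up - build_cost, 0.0)
    payoff_down = max(v_down - build_cost, 0.0)
    return (p_up * payoff_up + (1 - p_up) * payoff_down) / (1 + rate)

# Hypothetical plant: worth $130M next year if power prices rise, $70M if
# they fall (50/50 odds), costing $90M to build; discounted at 8%.
build_now = (0.5 * 130e6 + 0.5 * 70e6) / 1.08 - 90e6   # commit today
wait = value_of_waiting(130e6, 70e6, 0.5, 90e6, 0.08)  # build only if prices rise
```

Committing today has a positive NPV of roughly $2.6M, yet under these assumptions the option to wait is worth about $18.5M because the down-state loss is avoided: the naive "accept all positive NPV projects" rule would destroy value here.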

Each investment opportunity may be modeled in “total” as an option to invest. However, the investment opportunity itself may contain various individual or embedded real options, such as the option to invest in a project in stages. Each decision point, or “go/no go” decision, is another real option to be valued.

The complexity of valuing these embedded options is one of the disadvantages of real options analysis; indeed, the primary value of an ROA may derive from the process management uses to identify the options in a project. In looking for optionality in a project, a company must evaluate the future and identify the set of possible scenarios that could come about if the firm pursues the project. This requires a reasonable amount of brainstorming and looking at the project or decision from various angles. Typically there are one or two real options that capture the bulk of the uncertainty value in a project, but generating a list of potential future outcomes creates a process of thinking beyond the obvious that benefits the company. We will discuss this process further in the “Scenario Analysis” section.

ERM

The problem with using a DCF method of analyzing risk is that it evaluates risk for the company one project at a time. Companies now realize that this silo effect of risk management does not accurately depict the risk facing a firm. In some cases, risk in one division counterbalances risk in another division, and overall firm risk is reduced. Alternatively, and more concerning, similar risks in different divisions may amplify a firm’s exposure. When corporations evaluate only project risk, or even divisional risk, they fail to accurately depict the firm’s exposures; this can have costly ramifications.
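The offsetting and amplifying effects are simply the correlation term in the portfolio-variance formula. A sketch with invented figures for two divisions:

```python
def combined_volatility(sigma_a, sigma_b, correlation):
    """Standard deviation of the sum of two divisions' earnings: negative
    correlation offsets risk, positive correlation amplifies it."""
    variance = sigma_a ** 2 + sigma_b ** 2 + 2 * correlation * sigma_a * sigma_b
    return variance ** 0.5

# Two divisions, each with $10M of earnings volatility:
silo_view = 10e6 + 10e6                             # risks naively added
offsetting = combined_volatility(10e6, 10e6, -0.6)  # e.g. generation vs. retail
amplifying = combined_volatility(10e6, 10e6, +0.6)  # both long the same gas price
```

The offsetting case (about $8.9M) shows how much the silo view can overstate firm-wide risk, while the amplifying case (about $17.9M) shows how same-direction exposures push combined risk back toward the naive sum.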

An ERM program, properly implemented, eliminates the problem of risk management by division. It requires a firm to identify firm-wide risks, quantify these risks, assess correlations, track changes in the organization that alter its risk exposure, and develop appropriate means of managing the risk. The goal of an effective ERM system is similar to the goal of DCF modeling: to improve the quality of decision-making by implementing a structure that identifies risk and analyzes the impact of the risk on firm performance. Whereas DCF analyzes risk on a project-by-project basis, ERM identifies and manages risk for the entire firm.

ERM is defined as “the process of systematically and comprehensively identifying critical risks, quantifying their impacts, and implementing integrated risk management strategies to maximize enterprise value” (Darlington et al., 2001). The key factors are that ERM is a process that must be continually monitored and adapted to the changing corporate environment; it is not a one-time activity. ERM is a comprehensive system of risk assessment, and it evaluates risk for the firm as a whole. This requires an understanding of the exposures of the various divisions. Finally, management must quantify the impact of risk on the firm. By understanding, mitigating, and managing risk, management is able to ascertain the capital needed to fund and grow the organization.


Creating an ERM for Power Companies

Competitive triage, deregulatory uncertainty, the creation of new risk management products – the electric utility industry has coped with major changes such as these since its inception. However, the 21st century has made such issues even more salient. For example, the recent industry turmoil from the fallout of the Enron scandal and the transmission-related blackout across a portion of the Eastern United States have been among the events adding to the challenges. The electric power industry has been called to evolve from a monopolistic “cost-plus” business model to a competitive marketplace, able to adapt to changing regulatory and legislative agendas. The industry grew as new competitors with alternative sources of power entered the market. Firms split along functional lines, leaving companies that specialize in the generation, transmission, or distribution of power, sometimes leading to instability in the basic power infrastructure.

As a result of the Enron scandal, energy firms must now consider the governance issues facing them in these turbulent times. What accounting and corporate oversight is needed to reassure analysts as to the stability of this industry? What can we learn about risk management from the banking industry? And what can we do to reduce the risk associated with producing a non-storable asset whose demand is contingent upon the uncertainty of weather? ERM models serve the dual function of providing transparency to the market and the Board and assisting management in assessing the true value and risk of the firm.

The business risk facing the power industry changed dramatically following deregulation. What the power industry and regulators know and understand is how to manage a monopolistic industry with no competitors and a rate of return set by state regulators; they have been doing this for decades. But this is the past in the electric utility industry. The new power industry is one of competition and deregulation. While deregulation began decades ago, it continues today. This process has brought great change to the industry; however, it is not finished. The Federal Energy Regulatory Commission continues to consider regulatory change while states gradually approve competition for power within their borders. At the same time, pending legislation has far-reaching ramifications for this industry. The risks and the competitive environment in power are evolving.

Competitive pressures are mounting in this industry. Record cold in the late 1990s sent power prices skyrocketing and led to the announcement of many additional power plants to be built throughout the U.S. to increase the supply of power and reduce the likelihood of exorbitant power prices in the future. However, a recession and several mild winters left many companies canceling plans to build these plants. What is the status of new power generation in the U.S.? And what international competitive pressures loom for U.S. firms? Questions such as these deserve careful consideration by power executives.

The pressure mounts for management and Boards of Directors of energy companies to provide active oversight of managerial actions. However, the complexity of the industry has made this task increasingly difficult. The use of derivative instruments can be a hedge or risk-reduction strategy; however, they can also be used to speculate and increase the riskiness of the firm. And recent spectacular failures due to the misuse of derivatives, such as Barings Bank and Orange County, tell us that a Board must adequately


understand and monitor managerial activity. An ERM system must be able to aid the Board in its oversight responsibilities.

Unlike other commodities, power is not storable. This trait makes the market for power far more volatile than the markets for other commodities. Concurrently, demand for power is highly dependent upon weather, an unpredictable variable. Additionally, the price of power depends upon the available supply. As firms cancel plans to build new generating facilities, the future stability of the industry is threatened. Firms must work to understand weather patterns and the latest weather forecasting techniques; must understand which derivative instruments can be used to reduce the firm’s exposure to extreme weather conditions; must work to manage the volatility and seasonality of the industry; and should build models to predict the future demand and supply for power in a given region. These are some of the challenges of developing an effective ERM system in the power industry. The goal of the ERM program is to develop a methodology to adequately identify and model the risk of a project in order to determine which activities will add value to the firm.

Efficiently allocating capital is critical to the future success of a power company. In addition to improving the firm’s ability to manage capital, a process for efficiently deploying capital will improve return on equity while ensuring solvency to the standard demanded by debtholders. An effective ERM process must be supported by business tools. ERM will help managers understand risk exposure and the diversifying effects of the business units, improve the firm’s competitive position, increase the company’s access to capital, determine the firm’s optimal capital allocation given a level of risk, and enhance its image in the market as an innovative company.

Fundamentally, ERM will aggregate risk on an enterprise-wide basis and produce a risk profile of the firm. This model will determine a current value distribution and will allow analysis, prior to spending capital, of the impact potential new capital projects will have on firm value and debt rating. In order to develop this model, a bottom-up approach is taken. A 1-year time horizon for analyzing risk is often chosen since it is consistent with financial market reporting models. The key challenge in developing an ERM system is understanding the risks of the entire organization. Managers representing every business unit must be interviewed to determine how each group manages risk, values market opportunities, and determines project performance. Furthermore, since the ERM model must be forward looking, an effort must be made to understand the business strategy that drives each group’s approach to the market. Based on the results of these interviews, risk drivers for each group and correlations between risk drivers need to be identified, and an approach to modeling each group’s value distribution is then developed.

In the power industry, market risk is heavily driven by two commodities: gas (an input to much of the power produced) and electricity. The general approach underlying ERM is to value an asset portfolio using a set of projections representing the price of each commodity affecting the business. The correlations between various groups’ exposures to commodity risk need to be measured; each business unit’s value distribution is then modeled using the relevant projections. These distributions are then aggregated to arrive at a single enterprise-wide value distribution.
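A stripped-down sketch of that aggregation, using two hypothetical business units driven by correlated gas and power price shocks (the exposures and correlation are invented; a real model would substitute each unit's own valuation logic):

```python
import random

def simulate_firm_values(n_trials=100_000, rho=0.7, seed=7):
    """Monte Carlo aggregation of two unit value distributions whose
    commodity drivers are correlated with coefficient rho."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_trials):
        gas = rng.gauss(0, 1)                                        # gas price shock
        power = rho * gas + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)  # correlated power shock
        generation = 500e6 + 80e6 * power - 50e6 * gas               # sells power, buys gas
        retail = 300e6 - 40e6 * power                                # short power to customers
        values.append(generation + retail)
    return sorted(values)

values = simulate_firm_values()
expected_value = sum(values) / len(values)
fifth_percentile = values[int(0.05 * len(values))]  # a Capital-at-Risk-style tail figure
```

The single distribution in `values` is what the project-by-project view cannot produce: here the retail unit's short power position partly hedges generation, so the firm-wide tail is narrower than the units' tails simply summed.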


ERM is also a tool for decision-making when evaluating capital projects. To date, most investments have been evaluated on a stand-alone basis, with traditional NPV or Internal Rate of Return (IRR) approaches. In these cases, the choice of hurdle rates has been essential in deciding whether or not to proceed with a project. No formalized consideration has been given to a project’s effect on the firm’s overall risk profile. ERM allows managers to determine a project’s marginal contribution to value, taking into consideration the effects of diversification with existing projects.

ERM is a vital tool to help managers determine a firm’s risk profile and to analyze the impact potential capital projects will have on the company’s overall risk. This tool helps managers deploy capital more efficiently throughout the company and will provide executive leadership with the ability to steer the company towards the return on equity and solvency standards demanded by the market. This ability to link project-by-project decision making with a firm’s overall strategic vision will improve access to capital and increase the value the firm brings to its shareholders.

Known Versus Unknown

“It’s not what we don’t know that causes trouble. It’s what we know that ain’t so.”

– Will Rogers

In quantifying risk, one of the difficulties is identifying risk. From our earliest years in school, we learn by reading textbooks. The assumption is that these books, with their vast resources and authoritative, confident tone, represent all there is to know about a given subject; we grow up thinking that more is known than actually is (Gomory, 1995). By learning and questioning, we stretch our understanding beyond the original text. Unanswered questions lead us to identify what is known and what is unknown. The unknown represents risk.

We need to differentiate between the unknown and the unknowable. Quite often, what is unknown today becomes known in the future. Scientific discovery expands the bounds of our thinking. But there is a limit: some phenomena are unknowable. Take, for example, the weather. In determining when to shut down a power plant for routine maintenance, a power company looks at historic weather trends and chooses a month – October, say – when temperatures are typically mild and demand for power is typically low. The power company cannot know with certainty that the current year’s weather will follow historic patterns. This could be the year with an unseasonably hot October, when demand for power to run air conditioners exceeds expectations. The actual weather next October is unknowable, and no amount of modeling will change that.

Additionally, the perspective of the provider of knowledge influences what we know. A power plant operator in West Virginia may believe he knows everything there is to know about running a power plant. However, if his knowledge base comes strictly from operating his coal-fired generation facility, that knowledge may be useless to a hydro-powered plant operator in Oregon. Our perspective also influences our understanding of risk. Some individuals are, by nature, more comfortable assuming risk. Acknowledging and considering alternative attitudes towards risk becomes an important component of risk management.


One method of accounting for differing knowledge, alternative outcomes, and varying attitudes towards risk is scenario analysis.

Scenario Analysis

Not all risks are known; nor are all risks quantifiable. Yet the goal of an ERM system is to capture all risk and factor it into the decision-making process for the firm. The system must be flexible enough to adapt as new developments become relevant to the decision-making process. Sensitivity and scenario analyses are means of encouraging management to look at an array of possible outcomes and their impact upon the firm’s earnings.

Sensitivity analysis and scenario analysis have long been used to supplement a traditional DCF analysis. Sensitivity, or “what if,” analysis is generally performed on the determinants of the cash flows. It is reasonably easy to perform, helps identify the principal threats to the project, and calculates the consequences of incorrectly estimating a variable (i.e., the effect on NPV). However, sensitivity analysis evaluates only one variable at a time.
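One-at-a-time sensitivity analysis can be sketched directly (all project figures below are invented):

```python
def power_project_npv(price=45.0, cost=30.0, volume=1_000_000,
                      rate=0.10, years=10, outlay=80e6):
    """NPV of a stylized power project; every default value is illustrative."""
    annual_cash = (price - cost) * volume
    return sum(annual_cash / (1 + rate) ** t
               for t in range(1, years + 1)) - outlay

# Shock one determinant at a time by +/-10% and record the NPV swing:
swings = {
    "price":  abs(power_project_npv(price=49.5) - power_project_npv(price=40.5)),
    "cost":   abs(power_project_npv(cost=33.0) - power_project_npv(cost=27.0)),
    "volume": abs(power_project_npv(volume=1_100_000) - power_project_npv(volume=900_000)),
}
biggest_threat = max(swings, key=swings.get)  # the variable to watch
```

Here the power price dominates: a 10% error in price moves NPV roughly three times as much as a 10% error in volume. Each shock is taken in isolation, however, which is exactly the limitation noted in the text.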

Scenario analysis, by contrast, considers internally consistent combinations of variables. It encourages challenge thinking: the goal is for teams to challenge traditional beliefs and push at the boundaries of knowledge, which may lead to innovative ideas and approaches. Scenarios allow management to factor uncertainty into their decision-making process. Management looks at all possible outcomes without assigning probabilities to the likelihood of any outcome occurring. In considering what could happen, management may uncover previously unheard-of opportunities as well as identify additional risk factors in the project.
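A scenario run differs from one-at-a-time sensitivity shocks in that each case moves several variables together in an internally consistent way (the scenarios and figures below are invented):

```python
def scenario_npv(price, cost, volume, rate=0.10, years=10, outlay=80e6):
    """NPV of a stylized power project under one scenario (figures invented)."""
    annual_cash = (price - cost) * volume
    return sum(annual_cash / (1 + rate) ** t
               for t in range(1, years + 1)) - outlay

# In the cold-winter case, power prices, demand, and fuel costs all rise
# together; in the recession case they all sag together.
scenarios = {
    "mild winter + recession": dict(price=38.0, cost=28.0, volume=850_000),
    "base case":               dict(price=45.0, cost=30.0, volume=1_000_000),
    "record cold + gas spike": dict(price=70.0, cost=48.0, volume=1_150_000),
}
outcomes = {name: scenario_npv(**inputs) for name, inputs in scenarios.items()}
# No probabilities are attached; the output is the range of outcomes
# management must be prepared for.
```

Under these assumptions the same project swings from a loss in the recession case to a large gain in the cold-winter case, which is the kind of spread a single base-case NPV conceals.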

“The one enduring competitive advantage of an organization is its ability to learn better and faster than its competition.”

– Arie de Geus

Scenario analysis and contingency planning allow organizations to adapt faster than the competition. Teams look at all possible developments in a project. This gives divisional strategy the flexibility to react to events that management has now considered. This process makes management more flexible; they spend time thinking beyond the obvious, predictable, and desired outcomes. Scenario analysis forces management to think creatively; contingency planning forces management to plan creatively. Scenarios are used to develop an action plan. Scenario analysis should lead to the conclusion that the original strategy is sound or that there are warning signals that need to be attended to. It forces an understanding of the causal relationships between seemingly unrelated factors. The goal of ERM is to understand risk, align the firm’s strategy with corporate objectives, and minimize the probability of unexpected outcomes. Risk management is, and should be, a strategic priority. Scenario analysis and contingency planning work together to minimize the unknown, which enables a firm to account for and manage risk.


References

Basel Committee on Banking Supervision at the Bank for International Settlements, “The New Basel Capital Accord,” CH-4002, Basel, Switzerland, 2001.

Darlington, Angela, Simon Grout and John Whitworth, “How Safe is Safe Enough? An Introduction to Risk Management,” The Staple Inn Actuarial Society, June 12, 2001.

Dixit, Avinash and Robert Pindyck, “Investment Under Uncertainty: Keeping One’s Options Open,” Journal of Economic Literature, Nashville, December 1994.

Gomory, Ralph E., “An Essay on the Known, the Unknown, and the Unknowable,” Scientific American, June 1995.

Lam, James, “Enterprise Risk Management and the Role of the Chief Risk Officer,” ERisk, March 25, 2000.

Pinches, George, “Myopia, Capital Budgeting and Decision-Making,” Financial Management, 11(3), 1982, 6–20.

Pindyck, Robert, “Irreversibility, Uncertainty, and Investment,” Journal of Economic Literature, Nashville, September 1991.

Porter, Michael, “Capital Disadvantage: America’s Failing Capital Investment System,” Harvard Business Review, Boston, September/October 1992.


CHAPTER 10

What Risk Managers Can Learn from Investors Who Already Know and Consider the Risks and Who Wish that Professional Risk Managers and Decision-Making Executives (Note the Differentiation) Would Coordinate Their Efforts and Figure Out How Risk Taking and Risk Management Efforts Can Mesh

Leonard S. Hyman
Senior Consultant
R.J. Rudden Associates

Adam Smith on Risk and Return

In all the different employments of stock, the ordinary rate of profit varies more or less with the certainty or uncertainty of the returns … The ordinary rate of profit always rises more or less with the risk. It does not, however, seem to rise in proportion to it, or so as to compensate it completely. Bankruptcies are most frequent in the most hazardous trades … The presumptuous hope of success seems to act here as upon all other occasions to entice so many adventurers into those hazardous trades that their competition reduces the profit below what is sufficient to compensate the risk. To compensate it completely, the common returns ought, over and above the ordinary profits of stock, not only to make up for all occasional losses, but to afford a surplus profit to the adventurers of the same nature with the profit of insurers. But if the common returns were sufficient for all this, bankruptcies would not be more frequent in these than in other trades.1

Peter Drucker on Future Events

We know only two things about the future:

1. It cannot be known.

2. It will be different from what exists now and from what we now expect …


Any attempt to base today’s actions and commitments on predictions of future events is futile. The best we can hope to do is to anticipate future effects of events which have already irrevocably happened …

Business … has accepted the need to work systematically on making the future. But long-range planning does not – and cannot – aim at the elimination of risks and uncertainties …

The deliberate commitment of present resources to an unknown and unknowable future is the specific function of the entrepreneur in the term’s original meaning. J.B. Say … who coined the word around the year 1800, used it to describe the man who attracts capital locked up in the unproductive past … and commits it to the risk of making a different future. English economists such as Adam Smith with their focus on the trader saw efficiency as the central economic function. Say, however, rightly stressed the creation of risk and the exploitation of the discontinuity between today and tomorrow as the wealth-producing economic activities.2

Introduction

I can’t tell you when alchemy became a respectable profession, but I do know that energy risk management reached that milestone in the late 1990s, with the restructuring of the electricity supply industry. Alchemy, of course, had a longer run as a respectable profession.

The spectacular failures within the energy sector raise the question of whether the risk management procedures worked, whether decision-making managements ignored the advice of risk managers, or whether they all focused on the wrong risks.

Any way you look at it, investors, the ultimate risk takers, have to contemplate the possibility that on-the-job training took place at their expense. Why do you think so many energy companies have retreated back to basics? They got religion. They read Isaiah, first the line “He is brought as a lamb to the slaughter,” and then “All we like sheep have gone astray.” Of course, the second verse comes first in Isaiah, but many energy firms escaped destruction with enough assets to survive, in a reduced and chastened state.

Reality Check

In the 12 years 1993–2004, since passage of the Energy Policy Act, the investor-owned electric utility industry and its affiliates:

earned an average return on equity of 9.1%, versus an average yield of 7.3% on composite utility bonds, while its regulated utility affiliates earned 10.4% in this period and its non-utility affiliates earned roughly a negative 14% on average equity investment, basically wiping out the entire investment.

I know that the numbers don’t look right, but I had difficulty trying to work with net losses as a percentage of negative equity.


In the same period, the total returns earned by investors in the stocks of these entities were:

6.9% annually for electric companies

versus:

10.7% for gas companies

11.0% for the market

7.9% for corporate bonds

Why bring this up? Because either the risk management procedures failed miserably, the risk managers failed to convey risk information to top managers and directors, the directors and top managements chose to ignore or fiddle with the risk management parameters, or neither top managers nor risk managers knew which risks to consider.

I know about the rigorous procedures, the mathematics, the Nobel Prize winners, and the high-priced consultants who speak in PowerPoint. The numbers, however, speak more eloquently than the cant. As an investor, I care about results. If it doesn’t work, why waste money on it? If it does work, why did so many risks escape management? Don’t tell me what you do, tell me what you accomplish. Tell some utility investor that you are an energy risk manager. You’d get a better reception as an axe murderer.

Keep in mind, too, that investors do not object to risk, per se. They want to earn a return commensurate with the level of risk. More to the point, they want to earn a return in excess of that required by the risk level. That adds value to their investment.

What went wrong?

Wrong Problem or Wrong Technique?

Thomas P. Hughes, the historian of science and technology, differentiated between the “technical,” meaning the tools, and “technology,” meaning the goal-seeking system within which the technical played a major role, but did so in conjunction with economic, political, and other factors. He argued that people often failed to solve problems through technical means because the problem was technological.

Look at risk management as a tool. It has to function, however, in a corporate environment in which managers and directors balance regulatory and political pressures, Wall Street demands, executive rivalries, and the needs of competing interests within the organization. The goal of the organization is to produce a return at least commensurate with the risks taken. If the risk manager concentrates on a few volatile areas, such as energy trading or short-term borrowing or investment, that manager will deal with only a fraction of risk decisions, and may not consider how decisions and conditions interact.

The risk manager not only needs to ferret out risks within the entire organization, because they affect each other, but also to lay out this information in a manner that affects decision making. The chief risk officer of a major financial institution recently explained to a risk


management group that the board of directors had a hard time comprehending his presentations. That organization needs a new Chief Risk Officer or a new board of directors or no Chief Risk Officer. Take your choice.

But what about another possibility, that mathematically based strategies built on past events or computer-driven formulas do not tell you enough about market risks? Other businesses engage in market research. The energy business does no research, to speak of. People don’t do what computer programs do, but a risk manager would rather run a computer program than talk to a customer. You all know why Sharpe and Scholes and Merton won Nobel Prizes. If you don’t know, equally well, why Vernon Smith won the Nobel Prize, then you do live in the past century, and I should remind you of John Maynard Keynes’ famous comment that “Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.” I realize that you don’t consider yourselves practical business people, and a number of the economists are not defunct, but I hope you get the point. You should do experiments. That’s what scientists and real businesses do. Vernon Smith demonstrated that you can experiment in economics. That’s like market research. Experiment and then implement. Don’t experiment by blowing up a multi-billion dollar corporation or a state’s economy.

I suspect that risk managers can narrow the range of possible outcomes for those risks they understand. They could evaluate how multiple risks interact. They could consider possibilities based on political, social, environmental, and technical trends. For instance, I would argue that corporations, 10 or 15 years ago, should have considered the potential of the cell phone but not the Internet, but what about predicting SARS or 9-11?
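The point about interacting risks can be made concrete with a small simulation (a sketch only: the Gaussian loss drivers, the 0.7 correlation, and the loss threshold are all invented for illustration, not taken from this chapter). Treating two correlated loss drivers as if they were independent understates the chance of an extreme combined loss.

```python
import random

random.seed(42)

def simulate_total_loss(n=100_000, rho=0.7, threshold=5.0):
    """Compare tail probabilities for two loss drivers (illustrative).

    Each driver alone is a standard Gaussian; `rho` controls how strongly
    they move together. The threshold marks an arbitrary "bad year" loss.
    """
    independent_tails = 0
    correlated_tails = 0

    for _ in range(n):
        z1 = random.gauss(0, 1)
        z2 = random.gauss(0, 1)
        # Build a second driver correlated with the first (Gaussian trick).
        z2_corr = rho * z1 + (1 - rho**2) ** 0.5 * z2

        if z1 + z2 > threshold:       # risks treated as independent silos
            independent_tails += 1
        if z1 + z2_corr > threshold:  # risks allowed to interact
            correlated_tails += 1

    return independent_tails / n, correlated_tails / n

p_indep, p_corr = simulate_total_loss()
print(f"P(extreme loss), independent view: {p_indep:.4f}")
print(f"P(extreme loss), correlated view:  {p_corr:.4f}")
```

With these made-up parameters the correlated view shows an extreme loss many times more often than the silo view, which is the speaker's point about decisions and conditions interacting.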

Known, Unknown and Unknowable

A while ago, Donald Rumsfeld muttered something about knowable and unknowable risks, and received an award for political obfuscation. Not so. Those were the words of an unsuccessful risk taker who subsequently found out about the discipline of risk management.

Investors whom I have dealt with work in a world of uncertainty. They weigh lists of risks against potential return in order to produce superior results, which they may or may not achieve. (Except for the quants who work so hard to achieve mediocre results. True risk managers they. Definitely not entrepreneurs.)

Management guru Peter Drucker urged entrepreneurs to focus on “the time lag between the appearance of a discontinuity in economy and society and its full impact,” which he called “a future that has already happened.” I’m not sure of the value of protecting against the unknowable, beyond maintaining a prudent financial policy that would help the firm survive a period of unfavorable conditions, but would not help if consumers decided that the firm no longer had a product they would buy. I believe that Drucker’s category of knowledge falls into the “knowable” box, and “known” really means “whatever already happened to us.”

I suspect that you already know about “known,” and spend your time fighting the last war. You’ll never know about “unknown,” and if your corporation obsesses on that category, it will never get anything done. Investors tend to view the past as past, since they can’t make

Risk managers and risk management efforts 147


148 Managing Enterprise Risk

money buying the past, and ignore the unknown because they can’t do anything about it, and figure that any portfolio will end up containing a few losers, so why worry too much? I’d rather draw some lessons from the known and then move on to the knowable.

Known

Most lessons from what happened predate the supposedly liberated energy market, such as:

● Finance in a manner appropriate to the risk taken;

● Know your customer;

● Know your supplier;

● Know your debtor;

● The law of supply and demand really works, especially in a commodity business.

In a regulated business – and the government never deregulated anything – the regulator will step in to prevent high prices and profits, but never to prevent low prices and profits.

Beyond those points, you enter the realms of corporate governance, market psychology and fraud – the technological not the technical issues.

Knowable

What about stuff like having learned from recent events to properly stress test the models and all that other gobbledygook that risk officers put in footnotes to annual reports? Sure, do that, but you’re locking the barn door after the horse was stolen. The market will change. You’ll work with the wrong assumptions. That’s guaranteed.
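The stress testing the speaker waves at can be sketched in a few lines (purely illustrative; the toy cash-flow model and the shocked assumptions are invented, not drawn from any actual disclosure): revalue the same position under deliberately "wrong" assumptions and look at the spread of outcomes.

```python
def present_value(cash_flow, growth, discount, years=10):
    """Discounted value of a growing annual cash flow (a toy model)."""
    return sum(
        cash_flow * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

# Base-case assumptions (invented for illustration).
base = present_value(cash_flow=100.0, growth=0.03, discount=0.08)

# Stress the assumptions the model takes for granted.
stresses = {
    "demand collapse": dict(cash_flow=100.0, growth=-0.05, discount=0.08),
    "rate spike": dict(cash_flow=100.0, growth=0.03, discount=0.15),
    "both at once": dict(cash_flow=100.0, growth=-0.05, discount=0.15),
}

for name, assumptions in stresses.items():
    stressed = present_value(**assumptions)
    print(f"{name:16s} value={stressed:7.1f} change={stressed / base - 1:+.1%}")
```

The mechanics are trivial; the speaker's complaint is that the shocked assumptions themselves come from the last war, which no amount of re-running the model fixes.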

So, let’s look at the future that already happened:

● Sophisticated metering and peak load pricing will diminish price volatility, reduce the need for traditional assets, reduce the market value of standard generation and possibly create stranded assets.

● Superconductors will reduce motor load, cut transmission congestion and make possible the bypass of much of the conventional grid. They will enable private networks.

● Global climate change will increase the number and magnitude of violent storms, thereby exacerbating outage costs and forcing more expenditure on distribution reliability, and it will affect precipitation and water usage, geographic demand for electricity, and it will eventually raise the cost of fossil fuels.

● National security considerations will require greater use of indigenous or non-fossil fuels, especially as the Chinese and Indian economies consume greater quantities of fossil fuels. This will diminish the value of existing generation and require reconfiguration of the grid.


● Small scale local generation will mitigate reliability problems and will reduce the need for conventional peaking facilities.

● New computers, flat wall lighting and home entertainment centers will significantly affect demand patterns.

● The dramatically increased competitiveness of Asian economies will reduce U.S. manufacturing and even the service economy, thereby affecting demand for power on a regional basis.

● The threat of terrorism will require a major reconfiguration of grid controls, markets, location of resources, and accessibility of plant, all of which will raise capital spending.

● Regulatory agencies, after a decade-long hiatus in big rate cases, will not hand down rate orders that allow the utilities to actually earn the returns granted, which will be low, anyway, due to the prevailing low interest rate environment.

● Attempts to hurriedly change the energy laws, wholesale, will lead to enactment of half-baked legislation, and a change in administration will lead to more confusion, especially given the industry’s political biases, which will leave it with little influence when the ins go out.

● Wind and solar power sources will reach competitive parity with standard generation for many uses in some regions of the country.

● Grid operations will improve in efficiency so much that the grid operator could reduce market price differentials without the need to put in major new assets.

● The low value of the dollar plus repeal of the Public Utility Holding Company Act will permit large foreign utilities to take over major American utilities.

● Disconnected market structure will reduce the ability of any grid-based supplier to fashion and furnish a distinct customer-tailored product. That will reduce the competitive edge of incumbents.

Please note, I am not offering the above as predictions of what will happen, but rather as plausible developments one could easily derive from existing information, some of it years old. Every one of those developments would affect the finances and operations of electricity suppliers, and the value of those enterprises in the marketplace.

I know, too, that you will object to the list on methodological grounds: “If we can’t measure it, we can’t manage it, and we haven’t seen any measures for that stuff. Where are the data to plug into the model?” My answer is, “I’m just pointing out obvious risks. You figure out how to measure and manage them. That’s your job. You can’t manage risk levels by ignoring risks.” In my entire career on Wall Street, the excuses I never got away with offering were “I didn’t consider that,” or “I never thought of that.”

Conclusion

Even if corporate managers, after due consultation with the bankers and consultants and industry trade groups that never wish to disturb the preconceived notions of the client,


decide that none of the above strikes them as plausible, they still should put in place strategies to mitigate the impact of the implausible. Their predecessors probably dismissed the possibility of what happened to them as implausible, assuming they even considered the possibility. Learn from history.

I realize that the risk managers may not view such activities as within their purview. They apply mathematical formulas to past events in order to limit future risks. If that’s what they do, they are technicians and they don’t have much to tell investors, and they need not learn anything from investors, either, because investors have a responsibility to consider any and all possibilities, and they take the responsibility for their actions. People who lose money do not want to hear that nobody could have expected the event that ruined the investment any more than they care that nobody could have expected the event that produced the profit.

Risk managers will fade into irrelevance if they don’t get that message. Think about Harry Lime’s speech in The Third Man: “In Italy for 30 years under the Borgias they had … murder, bloodshed, but they produced … the Renaissance. In Switzerland, they had … 500 years of … peace. And what did they produce? The cuckoo-clock.”

I guess that means that if you want to see a renaissance of risk management, risk managers will have to move beyond producing their equivalent of the cuckoo-clock.

Notes

1. Smith, Adam, An Inquiry into the Nature and Causes of the Wealth of Nations (NY: The Modern Library, 1937), pp. 110–111.

2. Drucker, Peter F., Managing for Results: Economic Tasks and Risk-taking Decisions (NY: Harper & Row, 1964), pp. 173–174.


CHAPTER 11

Executive Decision-Making under KUU Conditions: Lessons from Scenario Planning, Enterprise Risk Management, Real Options Analysis, Scenario Building, and Scenario Analysis

Marilyn L. Taylor

Department of Strategic Management
University of Missouri
KS, USA

Karyl B. Leggio

Henry W. Bloch School of Business and Public Administration
University of Missouri at Kansas City
Kansas City, MO, USA

Lee Van Horn

The Palomar Consultant Group
CA, USA

David L. Bodde

Department of International Center for Automotive Research
Clemson University
Clemson, SC, USA

Abstract

This chapter provides overviews and comparisons of major concepts and methodologies from the fields of finance and strategic management. This chapter draws on the field of finance for enterprise risk management (ERM), real options analysis (ROA), scenario


building (SB), and scenario analysis (SA). These techniques and processes are compared to scenario planning (SP), a concept and methodology from strategic management. SP is a strategic management methodology used extensively by senior executives since its application at Royal Dutch Shell in the late 1960s and early 1970s. ERM is a broad approach that has recently become more pervasive in use with the increasing emphasis on improving governance processes in companies. ROA has come into use since the late 1980s as a decision-support tool under conditions of higher uncertainty. SB is a decision-support tool for developing quantitative or qualitative descriptions of alternative outcomes. SA, a sub-set of SB used in finance and accounting, is a means of establishing internally consistent sets of quantitative parameters used as inputs for modeling investment alternatives. This chapter draws on the KUU (Known, Unknown, and Unknowable) framework to demonstrate the commonalities and differences among these approaches and calls for their synergistic use.

Introduction

Boards of Directors, executives, and strategic planners all have fiduciary responsibilities for oversight in a corporation. To carry out these responsibilities they must more fully understand how to identify, evaluate, and manage the risks and uncertainties facing the corporation. Yet the complexity of many industries makes this task difficult. Every firm has multiple risk factors that in the short term contribute to the variability in the firm’s earnings, and in the long term can determine the firm’s survivability. A thorough understanding of these risk factors will enhance the abilities of Board members and executives to anticipate competitive, environmental, regulatory, and legislative changes and their impact upon the firm. Firm executives and Board members are increasingly being called upon to be responsible for meeting financial expectations, managing risk to stabilize earnings, and increasing the firm’s potential survivability. In this era, managing the firm’s risk, and managing the firm under conditions of uncertainty, becomes critical.1

The fields of strategic management and finance have contributed much in this regard. Yet practitioners and theorists in these two fields have often been at odds with one another, each trumpeting the shortcomings of the other field – the traditional dichotomy being strategy’s emphasis on the long term and qualitative approaches, and finance’s emphasis on the short term and quantifiable approaches. This chapter uses the KUU framework to point out commonalities and differences. Commonalities include objectives and purposes. The commonalities and differences provide opportunities for synergistic use that both fields can draw upon.

This chapter argues that reciprocal appreciation and appropriate synergistic use of available decision-making and analysis tools will assist executives in coping with the inevitable acceleration of uncertainties in the twenty-first century. The first section draws a process from the field of strategic management – SP. SP is a process for dealing with total company long-term risk. The process has historical roots in military applications before its striking results in assisting Royal Dutch Shell executives in coping with the discontinuity of the

1See Alessandri et al. (2003).


1973–1974 oil crisis. This chapter turns in the next four sections to the field of finance for overviews of ERM, ROA, SB, and SA. The sixth section compares SP with these four finance tools and methodologies using the KUU framework. The chapter’s final section calls for increased synergistic use and integration of tools and methodologies drawn from strategic management on one hand and finance on the other.

I Driving for Strategic Flexibility Using SP

“… chance favours only the prepared mind.” Louis Pasteur2

Drawn from the field of strategic management, SP is a qualitative process for dealing with long-term uncertainty at the firm or business unit levels. SP is about envisaging potential futures and developing strategies to assist organizations in dealing with these potential futures. The process is thus about managing risk and uncertainty and exploiting opportunities. SP involves understanding both internal and external factors. It incorporates trends, national and global phenomena, the environment, and other factors that may not appear specifically relevant for today’s decisions, but could potentially impact significantly on future outcomes for the firm. Thus SP is a process tool for helping organizations to consider future possibilities, plan for uncertainties, and prepare contingent plans in order to be better prepared for whatever may transpire.

Through the SP process, executives can undertake “forward thinking” to consider future potentialities and how they might impact their organizations. However, detailed planning for the future is fraught with potential for errors. The further out in the future we envisage, the fuzzier it becomes. Traditional strategic planning (i.e., for the next 1 to 3 years) takes primarily a linear approach. That works well in a stable environment but does not prepare the organization for discontinuities (i.e., significant changes). The further out planners go in the future, the less useful the traditional techniques. Thus, what to do? One process that arose in the late 1960s from a group of strategic planners at Shell Oil’s London headquarters was SP.

SP’s origins and evolution

SP was originally developed for military applications. As used by the military during WWII, SP was based on Herman Kahn’s work at RAND and later at his own Hudson Institute. However, after WWII these war-planning scenario processes were adapted by companies as a business-planning tool. Application of SP in the corporate world dates to the 1960s and early 1970s when Royal Dutch Shell adopted the military technique to enhance the firm’s strategic flexibility under conditions of uncertainty. A strategic planning team led by Pierre Wack encouraged executives throughout the firm to utilize the process to generate alternative plausible scenarios regarding the longer-term future of the external

Executive decision-making under KUU conditions 155

2More, H., “Strategic Planning Scenario Planning or Does your Organisation Rain Dance?” New Zealand Management (Auckland), 50(4), May 2003, 32–34.


environment.3 By 1972 the approach had been implemented worldwide throughout Shell. Simplified, the process involved constructing plausible scenarios of the future environment and then designing alternative strategies that would be appropriate under those scenarios.

Wack was not alone in undertaking the adaptation of SP into corporate application. Ian Wilson at GE and Peter Schwartz at SRI International were both working at redefining the scenario approach. They redefined scenarios as alternative outcomes of trends and events by a target year regardless of the precise sequence of events. Their scenarios were descriptions of future conditions rather than accounts of how events might unfold. Thus scenarios offered firms a set of distinct alternative futures, emphasizing that the business environment was uncertain, events were potentially discontinuous, and the future could evolve in totally different ways. The scenarios provided a context for developing long-term corporate strategic plans as well as near-term contingency plans.4

Following WWII the planning process had evolved. For the 10 years following the War, Shell had concentrated on physical planning (i.e., scheduling of increases in long-term production capacity to meet the ever-increasing demand). From about 1955 to about 1965 the firm moved to an emphasis on financial planning, primarily on a project basis (i.e., capital budgeting of long-term assets such as tankers, depots, pipelines, and refineries). In 1965 the firm’s planning began focusing on coordinating details of the whole chain of activity of moving petroleum from the group to the retail outlets. Shell’s planning approaches certainly evolved over these two decades. However, like other companies during this time period, the planning emphasis in the fairly stable environment had been primarily on “more of the same.”

However, in the late 1960s the firm’s executives determined that even a 6-year time horizon was too short. Efforts focused on finding ways of exploring what the competitive environment for the firm might look like as much as 30 years hence (i.e., in the year 2000). Wack was familiar with Herman Kahn’s developments in SP for the military and helped work out an SP approach to meet Shell executives’ concerns.

The SP process proved itself when the Organization of the Petroleum Exporting Countries (OPEC) decreased wellhead production in 1973, creating a shortage of petroleum. Of all the major petroleum firms, Shell was best positioned at the time in terms of strategic flexibility, clearly a result of having embraced SP firm-wide. In short, Shell had already identified its strategic options for discontinuous conditions.5 The firm’s executive network was emotionally ready to tackle the difficult circumstances and its contingent plans were in place even as other firms groped for approaches.

Many major firms have adopted SP over the past three decades. Somewhat out of favor in the 1980s, SP began to be used more extensively in the 1990s. However, full-blown SP


3See Hamish More, “Strategic Planning Scenario Planning or Does your Organisation Rain Dance?” New Zealand Management (Auckland), 50(4), May 2003, 32–34 and Mintzberg, H., “The Rise and Fall of Strategic Planning”, Harvard Business Review, 72(1), 107–114.
4Millett, S.M., “The Future of Scenarios: Challenges and Opportunities,” Strategy and Leadership (Chicago), 31(2), 2003, 16. (See also by the same author: The Manager’s Guide to Technology Forecasting and Strategy Analysis Methods (Battelle Press, 1991), in which Millett aims to acquaint all levels of management with the various extant methodologies for considering the future. The scenarios approach is included.)
5Wack, P., “Scenarios: Uncharted Waters Ahead,” Harvard Business Review, 63(5), September to October 1985, 72–89.


approaches are expensive. Estimates of cost range upwards of half a million dollars or more. SP exercises require facilitation and in-depth, intuitive knowledge of the industry. These two aspects of adeptness at the process and content expertise are seldom found in one person. The consultants who have both kinds of expertise and who are available are expensive. In addition to the costs of consultants, SP must have the intensive involvement of senior executives – expensive time indeed. Senior executives are used to relying on their experience and intuition for making decisions. The SP process requires creativity and imagination and suspension of the tendency to assign probabilities to outcomes. It encourages a dialog among the executive team to provide opportunity for more systematic examination of the exogenous variables that impact these decisions.

What is SP?

Understanding SP requires first defining what a scenario is. A scenario is a description of a possible future outcome that can be used to guide current decision-making. In some sense it is a map of the future, with each map having internal consistency and integrity.6

Experts in the SP process suggest the creation of three to no more than five scenarios. The technique is used most appropriately at the corporate or strategic business unit levels. The process of establishing the scenarios generally involves the following phases:

● Identification of critical factors affecting the external environment for the firm (i.e., driving forces).7

● Selection of significant forces (or sets of forces).

● Utilization of the forces to establish scenarios.

● Writing of “stories” or “scripts” (i.e., outlining of characteristics of the resulting scenarios).

● Establishing signposts (i.e., leading indicators suggesting that the environment might indeed be going in the direction of a specific scenario).

SP is long term in its orientation and puts emphasis on identifying variables external to, but highly impactful on, the firm. The variables are qualitatively assessed8 in terms of their impact and their inter-relationships. Thus the most impactful external forces might be depicted in the four cells of a two-by-two matrix (i.e., four scenarios). If impactful and highly related, variables are grouped on a continuum to make up one side of the matrix, and another group makes up the other axis.

A pure SP exercise discourages any discussion of the probability of a particular scenario’s occurrence. Why? Because the purpose is to hold all scenarios plausible. The word plausible does not mean equally likely, but it does mean likely enough that it is worth decision-makers’ time to think through the implications of each scenario and consider what the


6See footnote 2.
7The SP process essentially creates a dialog regarding the opportunities and threats in the classic SWOT model.
8The raw input, however, might be extensive quantitative input.


firm’s contingent strategy should be. In some sense, accepting the scenarios may require suspension of disbelief in order to examine “wild card futures” (i.e., to consider factors that are most impactful and most uncertain). In this process establishing probabilities is tantamount to prioritizing the scenarios. Doing so risks politicizing the process; that is, the sets of executives most likely to benefit from a scenario are more likely to advocate the likelihood of it occurring. Using plausibility as the legitimizing criterion helps to mitigate the tendency to politicize the process.

An application of SP in the electric power industry

A set of scenarios for the electric power industry is depicted in Appendix A. In this exercise the participants were asked to:

● Establish a strategic question.

● Identify the critical dimensions that would impact upon the question (and then).

● Create “stories” or “scenarios” that were plausible and internally consistent.

The question chosen by this group was what the firm should do regarding nuclear power generation, given that the firm was part owner in a major facility. In the scenario matrix the dimensions were grouped as nuclear acceptability (i.e., combining social and political factors) and cost of alternatives (predicated among other factors on technology advances). The four identified scenarios were given the names “Fossil Heaven,” “Greenville,” “Diversification,” and “Nuckies Rule.” Once the scenarios were written, the executive group could then begin the process of creating strategies appropriate to the scenarios.9
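Mechanically, a scenario matrix like the one just described can be generated by crossing the two driving-force dimensions. The sketch below is a toy illustration only: the chapter does not say which cell each named scenario occupies (Appendix A is not reproduced here), so the cell assignments are guesses made for illustration.

```python
from itertools import product

def build_scenario_matrix(dim_a, dim_b, labels):
    """Cross two driving-force dimensions into four named scenarios.

    `labels` maps (level_a, level_b) pairs to scenario names; the
    assignments passed in below are hypothetical, not the chapter's.
    """
    matrix = {}
    for level_a, level_b in product(("low", "high"), repeat=2):
        name = labels[(level_a, level_b)]
        matrix[name] = {dim_a: level_a, dim_b: level_b}
    return matrix

# Hypothetical cell assignments for the nuclear-generation question.
scenarios = build_scenario_matrix(
    "nuclear acceptability",
    "cost of alternatives",
    {
        ("low", "low"): "Fossil Heaven",
        ("low", "high"): "Greenville",
        ("high", "low"): "Diversification",
        ("high", "high"): "Nuckies Rule",
    },
)

for name, drivers in scenarios.items():
    print(name, drivers)
```

The value of the exercise lies in the "stories" written for each cell, not in the mechanics; the matrix merely forces the executive group to treat all four combinations as plausible.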

SP benefits and drawbacks

SP has its advocates and detractors. Advocates argue SP is really about making decisions today, but point out that SP is used to systematically define long-term uncertainties with the ultimate purpose of assisting managers to design contingency plans and thus build flexibility into the strategic future of the firm. The SP process encourages contingent strategic thinking or, another way of putting it, SP encourages long-term firm-level flexibility. The process encourages managers to envision plausible future states of the environment in which their organizations operate and think long term about how their organizations might take advantage of opportunities and avoid potential threats.10

What are the benefits of using the SP process? Numerous benefits are cited by executives, consultants, and internal staff facilitators. The benefits tend to cluster around: (a) expanded


9Actual application of SP varies immensely in practice. What is described here is the classical approach. However, many consultants shorten the process, change the steps, or include techniques they have developed in house.
10Miller, K.D. and Waller, H.G., “Scenarios, Real Options and Integrated Risk Management,” Long Range Planning (London), 36(1), February 2003, 93.


mutual understanding of potential environmental discontinuities, (b) greater teammanship as a result of the process and developing a common language, and (c) increased nimbleness of the firm that has already articulated contingent plans.

In short, the SP process brings two major benefits germane to our discussion. First, it helps in identifying the long-term uncertainties and risks that impact on the firm as a whole. Second, it assists the executives in defining their alternatives (i.e., increasing their flexibility in responding to uncertainties). And, in so doing, SP contributes to the firm’s ability to survive under hostile conditions and to position itself to more proactively exploit munificent environments.

In other words, SP contributes to strategic understanding of long-term exogenous variables and the design of contingent strategies or options. SP offers organizations the opportunity to envision the future and prepare contingency plans to address the uncertainties. It helps to challenge existing “maps of future possibilities, to explore what might possibly happen, and to prepare contingent plans for whatever might transpire”. In short, it prepares organizations for possibilities.11 The process represents a very qualitative real options approach to long-term strategy design.

On the other hand, detractors point out the “blue sky” thinking that can result from SP – contingency plans with dubious application in a too-distant future. The firm’s shareholders clamor for bottom-line results in the form of dividends and market capitalization growth – those are the short-term realities compared with SP’s nebulous contributions to survivability.

II Moving to Firm-wide Awareness with Enterprise Risk Management (ERM)

“It’s not what we don’t know that causes trouble. It’s what we know that ain’t so.” Will Rogers

ERM programs have been increasingly implemented in companies as they confront twenty-first century risks and respond to the increasing demands for effective governance in the wake of the difficulties in such firms as WorldCom, Tyco, and Enron.12 There are strong arguments that the level of risk attendant on modern corporations will intensify in coming years, not diminish.13 These include risks in areas where forces are contributing to considerable change as well as increasing business environmental turbulence, that is, areas of high uncertainty (see Exhibit 11.1).

11See footnote 2.
12See Barton et al. (2002).
13See Mandel (1996). Mandel is the economics editor for Business Week. He argues that four factors – uncertainty of rewards, ease of entry, widespread availability of information, and rapid reaction to profit opportunities – are driving the intensifying risk (as reported in Barton).


Within companies risk management has heretofore been handled in “silos” or “compartments” (i.e., within specific areas or domains such as insurance, technology, financial loss, and environmental risk). Under the old approach various risks were considered in isolation. The new approach, enterprise (wide) risk management, represents the efforts by an increasing number of companies to undertake risk management across the enterprise on an integrated and coordinated basis. The approach is additive, aggregative, and holistic as it identifies the existence of, and appropriate responses to, the several and joint impacts of risks across the organization. In so doing, companies aim to create a culture of risk awareness throughout all levels of the organization. As one set of practitioner authors put it, “Farsighted companies across a wide cross section of industries are successfully implementing this effective new methodology.”14 They are doing so because stakeholders are demanding that companies identify and manage the risks that attend the firm’s chosen business model.

Risk management has been traditionally thought of as managing the company so as to avoid or mitigate events or actions that “will adversely affect an organization’s ability to achieve its business objectives and execute its strategies successfully.”15 As a DuPont executive put it, “Risk management is a strategic tool that can increase profitability and smooth earnings volatility.”16 Microsoft reportedly uses SA (see next sections) within its ERM to identify its significant business risks. This includes thinking of risks in broader terms. For example,


Exhibit 11.1. Forces contributing to increasing change, risk, and turbulence in the business environment

● Technology
● The Internet
● Worldwide competition
● Global blocs and trading agreements
● Deregulation
● Organizational structural and culture changes resulting from downsizing, reengineering, and mergers
● Changing demographics, especially aging populations
● Higher customer expectations for products and services
● Increases in purchasing power in heretofore lesser developed countries
● Movement to service economies

Source: Adapted from Thomas L. Barton, William G. Shenkir, and Paul L. Walker, Making Enterprise Risk Management Pay Off: How Leading Companies Implement Risk Management (Upper Saddle River, NJ: Prentice-Hall PTR, 2002).

14“… a new model – enterprise-wide risk management – in which management of risks is integrated and coordinated across the entire organization. A culture of risk awareness is created. (P) Farsighted companies across a wide cross section of industries are successfully implementing this effective new methodology” (Barton, T.L., Shenkir, W.G. and Walker, P.L., Making Enterprise Risk Management Pay Off: How Leading Companies Implement Risk Management (Upper Saddle River, NJ: Prentice-Hall PTR, 2002), pp. 1–2).
15Economist Intelligence Unit, written in cooperation with Arthur Andersen, Inc., Managing Business Risks – An Integrated Approach (New York: The Economist Intelligence Unit, 1995), p. 2, as reported in Barton, T.L., Shenkir, W.G. and Walker, P.L., Making Enterprise Risk Management Pay Off: How Leading Companies Implement Risk Management (Upper Saddle River, NJ: Prentice-Hall PTR, 2002), p. 2.

Ch11-I044949.qxd 6/29/06 9:10 AM Page 160

Page 158: Managing Enterprise Risk

considering risk in the narrow sense, a company that insures a building incorporates the potential loss of the building’s value. In an ERM approach the risk must be thought of more broadly (e.g., in terms of its potential interruption of the business). The risk management group initiates thinking about the possible scenarios and includes consideration of the experiences of other firms (i.e., a form of benchmarking). The process then considers not only the appropriate level of insurance given the market or replacement costs of the building, but also the broader effects and their ramifications. The process uses face-to-face contact between business units, through cross-functional task forces, team meetings, or brainstorming sessions, to identify the risks, consider the broader system effects, and identify appropriate alternative responses. The purpose is to establish a heightened awareness and a continuous process of dynamic self-assessment for both identifying and addressing risk.

Critical steps include identification of the risks, evaluation of their severity, and innovative approaches to managing the risks and their potential impacts. In some instances the impact of risks can be quantified. For example, many companies have extensive historical databases on credit risk, either internally generated or obtained from external parties’ assessments. However, companies readily acknowledge that some risks are not quantifiable and that their impacts can only be qualitatively identified and perhaps ranked. Operating risks, for example, may not be so easily quantifiable.

The first steps in ERM programs are the inventorying of risks throughout the company, the estimation of their effects on the unit and the company generally, and the documentation of the current approaches in place for dealing with them.17 ERM approaches place significant emphasis on more rigorously identifying and ranking these kinds of risks, considering their impact, and designing approaches to managing them. The initial stages are additive. However, by moving to more systematically quantify, rank, or qualitatively consider the effect on the total company, ERM programs establish a basis for an integrated approach to total enterprise risk management. This might include establishment of an ERM committee or task force at the corporate level with responsibility for reporting to the board on a regular basis.
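
The inventory-and-rank step described above can be sketched in code. This is a minimal illustration assuming a simple likelihood-times-impact scoring; the risk names, scores, and current responses are hypothetical, not drawn from any company discussed here.

```python
# Hypothetical first-pass ERM risk inventory: each risk is scored for
# likelihood and impact (on a 1-5 scale) and ranked by their product
# ("exposure"). Names, scores, and responses are illustrative only.
risks = [
    {"risk": "Credit default by a major customer", "likelihood": 3, "impact": 4,
     "response": "credit limits; historical default database"},
    {"risk": "Plant fire / business interruption", "likelihood": 2, "impact": 5,
     "response": "property insurance at replacement cost"},
    {"risk": "Regulatory shift in rate structure", "likelihood": 4, "impact": 2,
     "response": "monitoring only"},
]

# Score each risk.
for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Rank so the ERM committee reviews the largest exposures first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["exposure"]:>2}  {r["risk"]}  [{r["response"]}]')
```

Quantifiable risks (e.g., credit) could replace the ordinal scores with modeled loss figures; unquantifiable risks keep the qualitative ranking the text describes.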

How does an organization move to total organizational awareness of risk management issues? Certainly approaches vary. However, the following appear to characterize most: involvement of a broader array of decision-makers, adoption of common approaches and “languages” across the corporation, broader sharing of information, and continuous involvement of the organization’s chief risk management officer (i.e., the CEO). It is not that risks went unmanaged before the advent of ERM programs. Rather, in today’s environment risk management has become more salient. Thus, a broader array of company programs and policies are being monitored more systematically and seen in a more integrated manner as risk management approaches. The purpose is both to exploit opportunities and to manage the risks attendant on the pursuit of those opportunities. The process explicitly recognizes the fiduciary responsibility of a broader set of players throughout the enterprise. This shift in governance philosophy and organizational

Executive decision-making under KUU conditions 161

16Barton, T.L., Shenkir, W.G. and Walker, P.L., Making Enterprise Risk Management Pay Off: How Leading Companies Implement Risk Management (Upper Saddle River, NJ: Prentice-Hall PTR, 2002), p. 2.
17See, for example, Leggio, K., Taylor, M. and Young, S., Enterprise Risk Management at GPE (North American Case Research Association, November 2003).


culture is often accompanied by structural changes such as the appointment of a Chief Risk Officer (CRO) and the establishment of risk-management committees at the corporate and unit levels.18 An important issue is that the process is continuous, not periodic. These structural changes contribute to focusing the organization’s attention on managing its multiple risks on a continuous basis.

Traditional approaches to managing risk include accepting (e.g., self-insuring), transferring (e.g., buying insurance from a third party), or mitigating the risks (e.g., building in options; undertaking maintenance more frequently), as well as monitoring and control systems. Increasingly, risk management refers to the management of companies’ drive to create, protect, and enhance the short- and long-term value of the firm.

Managing risk is more than protecting shareholders from downside risk; risk management can be a powerful tool for improving business performance, since risk arises from missed opportunities as well as from threats to earnings stability.19 To do so, company executives must seek opportunities where the potential for profitable growth is greatest. However, these are generally the arenas where uncertainty and risk are most attendant. In short, to carry out their fiduciary responsibility in today’s environment, executives must seek out uncertain and high-risk situations to generate growth while simultaneously mitigating the downside results of that pursuit. The CEO must carry the responsibility as the chief risk management officer, but in actuality decision-makers at all levels in the organization must be attuned to seeking new opportunities while managing and overseeing the risks that attend the effort. In this sense ERM has close links with SP in purpose.

ERM is a total company process or program whose purpose is to identify the potential sources of system risks or aggregations of sub-system risks, to assess those risks (degree of importance and likelihood of occurrence), and to design the alternative action systems for responding to them.

III Quantifying Flexibility Using ROA

The increasing use of ROA as a decision-making process is undeniable. The enthusiasm of some practitioners is clear; as one set of authors argued, “In 10 years, real options will replace Net Present Value (NPV) as the central paradigm for investment decisions”20 (see Appendix B for a comparison of ROA and NPV). Finance practitioners and theorists have developed ROA as a means to value decisions made under conditions of uncertainty. In general, ROA can be applied to decisions where the investment can be staged so that the incremental investments are predicated on outcomes from the prior stage, where the initial investment can be small, where the firm is not locked into making the future investments,


18For example, Chase Manhattan has a Risk Policy Committee as a Board of Directors standing committee and five company-wide committees focusing on credit, market, capital, operating, and fiduciary risks (see Barton, p. 48).
19See Lam (2000) and Laarni (2001).
20See Copeland and Antikarov (2001).


and conditions are uncertain. The technique has increasingly been found valuable in contributing to more rigorous examination of strategic and operational decisions where outcomes are path dependent. The assumptions underlying the theory may significantly limit the application of ROA as a means of quantifying investment outcomes, but the richness of the discussion that is an integral part of the analysis remains undampened.

What is ROA?21

ROA is financial options theory applied to business decisions. Indeed, the Black–Scholes approach has migrated in recent years from application to stock options to applications in ROA (i.e., the valuation of strategic and operational investment alternatives). A great deal can be gained by developing an understanding of such decisions through the ROA perspective. The theory recognizes that there is value in companies making limited initial investments that permit them to retain flexibility to take future action and possibly incur a gain.22

One way to think about the connection between financial options theory and ROA is that a financial call option gives the right, but not the obligation, to buy an asset at a predetermined price on or before a given date. The same applies in strategic decisions that can be multi-staged. At each stage the decision-maker can make an investment in order to obtain the ability, but not the obligation, to make the decision to invest at the next stage.

Exhibit 11.2 below summarizes the analogous characteristics of stock options and strategic investments viewed from an ROA perspective. Exhibit 11.3 summarizes ROA alternatives and aspects of this approach.

Coined by Stewart Myers of MIT’s Sloan School, the term real options is based on the principle that there is value in waiting for more information when faced with a series of linked investments, and that this value is not reflected in the standard discounted cash flow models used for capital investment decisions, such as payback, net present value, or internal rate of return (IRR).23 The concept is an extension of the options theory that underlies the securities market transactions described above. However, in real options the underlying asset is not a security but rather an investment in an asset or a business opportunity (i.e., investment in a “real” asset). Many strategic investments have the characteristics of securities options decisions; that is, an investment made today gives the decision-maker the flexibility to make a future decision (e.g., make additional investment, reduce investment, or abandon the project). In many instances more information is acquired in the interim that gives the decision-makers a clearer picture for assessing whether the future outcomes have positive value.

A major link between options theory as applied in the securities market and strategic investment opportunities for companies is that investment decisions are often modular and can be


21This section is drawn heavily from Leggio, K., Bodde, D. and Taylor, M., “The Application of Banking Models to the Electric Power Industry: Understanding Business Risk in Today’s Environment,” Oil, Gas and Energy Quarterly, 52(2), December 2003, 20-1-14, and Taylor, M., Leggio, K., Coates, T.T. and Dowd, M.L., “Real Options Analysis in Strategic Decision-Making,” Presentation to ANZAM/IFSAM VI World Congress, Gold Coast, Queensland, Australia, 2002.
22See Kroll (1998).
23See Reary (2001).


deferred. In short, strategic investments often come in the form of embedded options, that is, a series of options within the same decision stream. When strategic decisions are modular, or can be treated with modularity, the decision-maker has flexibility, and this flexibility itself has value. Putting it another way, the decision-maker has the option to invest – she can exercise that option now or later – very much like financial options.

In many strategic decisions, several options are embedded. Maximizing the value of the opportunity involves making the decisions to invest at the right time. A strategic decision-maker has the flexibility to buy, sell, or exercise options now or at some time in the future. ROA can assist strategic decision-makers by providing analyses leading to decisions and, importantly, by shedding greater light on underlying factors in the opportunity itself as part of the process of undertaking the analysis. How do real options provide answers to managerial issues? As one writer in this field puts it, “First real options provide a strategic answer as they force the manager to set up an opportunity register (i.e., identify a set of investment alternatives). Second, the pricing of these options will help the manager to quantify the opportunities attached to each project. Third, the solving of the real options’ price will indicate to the manager the optimal investment timing for the project.”24


Exhibit 11.2. Summary of the analogous characteristics of stock options and strategic investment decisions

The financial option aspect on the left is analogous to the characteristic of business investment decisions on the right.

Financial Options                          Real Options
Call option on a stock                     Future investment decision
Current value of the stock                 Expected present value of future cash flows
Strike price                               Expected (future) investment cost
Time to maturity of the option             Time until the investment opportunity no longer exists
Volatility                                 Variability in the project’s returns
Dividend on a stock (i.e., the values      Cost of keeping the investment opportunity
foregone by the option holder in           alive (e.g., values paid by the option holder
avoiding exercising the option right)      to avoid making the full investment)
Risk-free interest rate                    Risk-free interest rate

Source: Adapted from Botteron (2001).
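
The substitutions in Exhibit 11.2 can be applied directly in the Black–Scholes call formula. The sketch below does so under hypothetical numbers: the $100M cash-flow and cost figures, the two-year window, and the 35% volatility are assumptions for illustration, not figures from the text.

```python
# Sketch: valuing a deferrable investment with Black-Scholes, using the
# real-option substitutions of Exhibit 11.2. All inputs are hypothetical.
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def real_call_value(pv_cash_flows, investment_cost, years, volatility, risk_free):
    """Black-Scholes call value with: S = expected PV of future cash flows,
    K = expected investment cost, T = time until the opportunity lapses,
    sigma = variability of the project's returns."""
    d1 = (log(pv_cash_flows / investment_cost)
          + (risk_free + 0.5 * volatility ** 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    return (pv_cash_flows * norm_cdf(d1)
            - investment_cost * exp(-risk_free * years) * norm_cdf(d2))

# A project whose static NPV is zero (PV = cost = $100M) still carries
# option value when the investment decision can be deferred two years.
value = real_call_value(100.0, 100.0, 2.0, 0.35, 0.05)
print(f"Option value of waiting: ${value:.1f}M")  # positive even though NPV = 0
```

The point of the sketch is the mapping, not the numbers: a zero-NPV project has positive value once the right, but not the obligation, to invest later is recognized.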

Exhibit 11.3. Real options alternatives and characteristics

ROA alternatives:
● Investment timing options
● Abandonment/shutdown options
● Growth/expansion options
● Flexibility options

ROA characteristics:
● Real options exist when managers can influence the size and riskiness of a project’s cash flows by taking different actions during the project’s life.
● Real option analysis incorporates typical NPV budgeting analysis with an analysis of opportunities resulting from managers’ decisions.


Rather than viewing a particular decision as a series of projected cash flows to be discounted, the project’s value can be viewed as a “portfolio of options.”25 This view is particularly valuable for strategic decisions where the investments will be made in multiple stages, for example, investments such as R&D, purchase of natural resources, entry into a new market, and diversification, including purchase or development of companies. Thus, Botteron argues, “The advantage of real options used in a multi-stage valuation is the ability to take into account future strategic decisions. These types of investments are analogous to compound options: each stage completed gives the firm an option to complete the next stage.”26

A multi-stage investment is usually structured so that after each investment stage, a company’s decision-makers can choose to increase the investment in the project, delay it, or even abandon the project. The future decisions for additional investments are contingent on the outcomes of the previous stages. For example, in the pharmaceutical industry the decision to abandon or further invest in an R&D project is often associated with the outcomes of the various drug-testing stages.
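
A stripped-down sketch of that staging logic follows, in expected-value terms rather than a full option-pricing model. The trial cost, success probability, and payoff are hypothetical, pharmaceutical-style numbers.

```python
# Sketch of a two-stage investment treated as a compound option: pay a small
# stage-1 cost to learn, then invest in stage 2 only on a "green light."
# All probabilities, costs, and payoffs are hypothetical ($ millions).

def staged_value(stage1_cost, p_success, stage2_cost, payoff_if_success):
    """Expected value when stage 2 is optional: the stage-2 option is
    exercised only after a successful stage 1, and only if worthwhile."""
    continuation = max(payoff_if_success - stage2_cost, 0.0)
    return -stage1_cost + p_success * continuation

def committed_value(stage1_cost, p_success, stage2_cost, payoff_if_success):
    """Expected value if the firm must commit to both stages up front."""
    return -stage1_cost - stage2_cost + p_success * payoff_if_success

# Drug-trial-style numbers: $10M trial, 30% success, $50M launch, $250M payoff.
flexible = staged_value(10, 0.30, 50, 250)
locked_in = committed_value(10, 0.30, 50, 250)
print(flexible, locked_in)  # staging avoids sinking stage-2 cost on failures
```

The gap between the two values is exactly the stage-2 cost avoided in the 70% of outcomes where the trial fails, which is the value of the embedded option to abandon.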

ROA is a relatively new, advanced technique that links strategy and finance. Compared to traditional tools such as NPV, ROA gives management an improved facility for dealing with uncertainty and thus helps managers solve complex investment problems (see Appendix B for a comparison of NPV and ROA). ROA draws strategic decision-makers into a process designed to enhance their insights into issues such as:

● timing of their investments;

● relative value of multiple-staged investments;

● identification of risk factors and ways of managing them;

● flexibility (i.e., what management can do to maximize the value of strategic investments).27

Future risks include factors such as changes in consumer tastes, regulations, government approvals (e.g., of New Drug Applications), currency exchange rates, or commodity prices, as well as technological breakthroughs – all these factors, and more, can make significant differences in strategic choices. Further, the outcomes of these uncertainties can be managed using an ROA approach.

The choice to invest in full-stream production of a controversial new product may be linked to the results of test marketing. However, test marketing involves investment in: (a) development of the concept, (b) prototypes, (c) sufficient inventory for test marketing, (d) marketing analysis, and so on. ROA encourages strategic decision-makers to more clearly map the stages in the investments that will need to be made, provided the results from each prior stage signal a “green light” for the next one, that is, a series of embedded


24Botteron, P., “On the Practical Application of the Real Options Theory,” Thunderbird International Business Review, 43(3), 2001, 472.
25Ibid., 476.
26Ibid.
27Adapted from ibid., 475.


options. Further, at each stage the strategic decision-makers can choose to invest, wait, or abandon the activity. The investment at each stage is an investment in another option. That is, at the end of each stage, the decision-maker can decide to exercise (or not exercise) the option to make the additional investment to continue with the strategic alternative.

Where has ROA been used?

Applications of ROA have appeared in industry, in the academic research literature, and in teaching.

Applications in industry: ROA applications capture more clearly the strategic aspects of the decisions that management confronts, and these advantages have led to increasing application of the approach. The example of the pharmaceutical industry was given above. For what kinds of decisions do experts use ROA? Both McKinsey and KPMG International have groups that specialize in the use of ROA. KPMG International, for example, has helped companies apply ROA to:

● R&D choices, especially in the early stages;

● mergers and acquisitions/alliances;

● management of patents, licenses, and brands.

In addition to the pharmaceutical firms mentioned above, to date the technique has been used to evaluate investments in strategic opportunities in a variety of firms, including mining, petroleum, electric power companies, television programming,28 and hi-tech ventures.

Options theory has been found to have value when managers are confronting a strategic investment opportunity that has a great deal of uncertainty.29 However, when using real options, managers must have the ability to react to the uncertainty and alter the planned activity.30 Under conditions of uncertainty, traditional NPV analysis undervalues the project. ROA allows managers to incorporate the value of flexibility to adapt to changing environments.


28McKinsey found that applying options theory to TV-programming decisions could improve the returns from programming investments. McKinsey argues that application of real options theory is effective because of the high uncertainty of outcomes and costs of a program series. This article suggests that TV executives informally exercise options whenever they cancel under-performing shows or modify schedules in other ways. The article argues that the executives in this industry must institutionalize the process of recognizing, evaluating, and exercising the options embedded in the TV-program decision. McKinsey’s underlying argument is similar to that raised in this article. Baghin, J., “Black-Scholes Meets Seinfeld,” McKinsey Quarterly, (2), 2000, 13–16.
29See Coy (1999). Coy observes that real options theory is a revolutionary concept emerging in corporate finance. He argues that real options theory’s basic premise is that when the future is highly uncertain, it pays to have a broad range of options open. Thus, the value of ROA accrues to executives who retain flexibility. Coy points out that real options theory is too complex for minor decisions and not useful for projects that require a full commitment now. Rather, the value of an option lies with management’s ability to incur a relatively small amount of cost now and retain the ability to decide later whether to make additional commitment. See also Alleman (1999). Note that Coy’s arguments underlie the commonalities between SP and ROA. However, we are arguing that SP is more useful with the Unknown and pushing the boundaries of the Unknowable because of its generation of qualitative scenarios by creatively employing executive knowledge, judgment, and intuition. ROA, in its strictest sense, uses SA (i.e., quantified scenarios or distributions of data).


Within companies where ROA is being used, it is finding broader and broader uses.31 Indeed, options models are becoming mainstream tools for financial practitioners around the world.32

The academic research literature: ROA concepts have been applied in a widening variety of situations noted in the literature, including:

● toeholds pursued by minority shareholders;

● small acquisitions made in order to enter new technology or business areas;

● capital investment decisions;

● valuation of R&D and technology;

● development and introduction of new products;

● understanding environment, ownership, and performance relationships;

● capital budgeting;

● real asset investments;

● natural resources investments;

● valuation of government subsidies;

● various kinds of investment alternatives in the electric utility industry, including mergers and acquisitions;

● new venture startups;

● pursuit of maximizing value derived from entrepreneurship activities;

● analysis of the value of decision-support systems.33

Options theory and, more recently, ROA have received attention in economics, finance, and accounting classrooms for some time. Indeed, finance specialty courses focusing on risk


30Indeed, a recent Economist article argues that managers do not like the capital asset pricing model (CAPM) because it “ignores the value of real life managers.” In contrast, the real options approach “places managers at its very core.” CAPM requires establishing projections that are close approximations of the ultimate cash flows and a correct discount rate. The model can use only known information, and the resulting uncertainty is reflected in excessive discount rates. CAPM ignores the capability of management to exercise managerial prerogatives to build flexibility into their decision-making process. For example, under conditions of uncertainty (e.g., drilling oil wells, searching for new pharmaceuticals) management usually keeps multiple alternatives active while continuing to invest. CAPM applied to any one of the alternatives would kill most of the individual projects because of the high discount rates required. (“Economics Focus: Keeping all Options Open,” Economist, 352(8132), August 14, 1999, 62.)
31See Herath and Park (1999).
32See Merton (1988).
33See Bulow et al. (1999); Laamanen (1999); Busby (1997); Angelis (2000); Boer (2000); Lounsbury (1993); Roberts and Weitzman (1981); Herath and Park (1999); Li, M. et al. (1998); Dastgir (1998; 1995); Pinches and Lander (1998); Trigeorgis (1996); Rao, R.K.S. et al. (1980); Brennan and Schwartz (1985); Mason and Baldwin (1988); Competition in Electricity… (1990); Leggio et al. (2001); Leggio, K. and Chan, H., “Valuation of Portlandia Ale Startup as a Portfolio of Growth Options” (Leggio, K./Hooilin IS Sum02/1002 Hooilin Presentation – Valuing a Startup), The University of Missouri at Kansas City, privately distributed; McGrath (1999); Kumar (1999).


management and ROA are increasing. However, only recently has the concept begun to be applied in strategic management classrooms. A review of leading U.S. strategic management texts yields at most a few paragraphs dealing with options and only cursory mention of ROA. The first two strategic management texts to devote more than a few sentences or one or two paragraphs to ROA appeared only recently.34 To date no strategic management cases have been known to use ROA as an analysis technique in the instructor’s manual (or teaching note), although a small body of cases in corporate finance has begun to emerge.35 Thus, there is considerable opportunity to extend the concepts into the strategic management area and perhaps into other disciplines such as strategic marketing and human resources management.

ROA caveats

The use of options pricing theory was revolutionary for financial markets; ROA is proving to be as much of a stretch for strategic decision-makers. Experts expect “the application of real options thinking to corporate strategy to be an active area of inquiry over the next few years.”36 As the above discussion suggests, the ROA approach offers significant advantages for decision-makers and researchers alike.

There are caveats, however. For example, measuring volatility can be a challenge with real options. How does one actually develop a measure of volatility? There are no easy answers. The critical step is to examine the primary sources of uncertainty. Indeed, this step is of critical value in developing a better understanding of the venture. Where there is prior experience, for example with the drilling of oil wells or the pricing of commodities, the investor may have defensible data. For many strategic decisions there is little prior experience and thus no reliable historical data to provide guidance. One approach is to apply simulation analysis to the present value of the underlying asset to estimate the cumulative effect of the many uncertain variables. Another solution used in practice is to estimate volatility on the basis of the performance of a selected portfolio of comparable stocks, under the assumption that the volatility of this portfolio reflects the volatility of the opportunity being explored. “Finally we could turn the question around as follows: How large would volatility need to be in order for the project to generate shareholder value? Sometimes it is easier to assure ourselves that our volatility is at least some threshold level than to estimate it precisely.”37
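
The simulation approach mentioned above can be sketched briefly. The uncertain drivers, their distributions, and the use of the coefficient of variation as a crude volatility proxy are all assumptions for illustration; a production analysis would calibrate these to the venture at hand.

```python
# Sketch: simulate the uncertain drivers of a project's present value and
# use the dispersion of the simulated values as a crude volatility estimate.
# All distributions and parameters are hypothetical.
import math
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_project_pv():
    """One draw of project PV from two hypothetical uncertain drivers."""
    price = random.lognormvariate(math.log(50), 0.25)  # output price, $/unit
    volume = random.gauss(1000, 150)                   # units sold
    fixed_cost = 30_000                                # known with certainty
    return price * volume - fixed_cost

draws = [simulate_project_pv() for _ in range(10_000)]

# Coefficient of variation of simulated PV: a simple dispersion measure
# that can seed the volatility input of an option model.
volatility_proxy = statistics.stdev(draws) / statistics.mean(draws)
print(f"Simulated volatility proxy: {volatility_proxy:.2f}")
```

The value of the exercise, as the text notes, lies as much in forcing an explicit list of the primary sources of uncertainty as in the number it produces.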

Although increasingly used in making decisions within companies, ROA has not yet accumulated a history. Management’s judgment as to whether better decisions have been made using ROA, as contrasted with those that would have been made using NPV or other decision techniques, lies in the future. Developing decision-makers’ understanding of the approach


34See Dess, G., Lumpkin, T. and Taylor, M., Strategic Management: Creating Competitive Advantage (Burr Ridge, IL: McGraw-Hill, Inc., 2004) and the Annual Update section in Hill, C. and Jones, G., Strategic Management: An Integrated Approach, 5th Edition (Boston: Houghton Mifflin Company, 2001), pp. 7–11.
35See, for example, the work of Robert Bruner at the Darden School at the University of Virginia (http://faculty.darden.edu/brunerb/), including the recently completed case “Enron: 1986–2001” by Samuel Bodily and Robert Bruner.
36See Amram and Kulatilaka (1999).
37See Hevert (2001), p. 3.


remains the major obstacle. Understanding and measuring volatility also remains an obstacle. NPV approaches simply do not appropriately value highly uncertain, actively managed projects. Strategic decision-makers have been in search of better approaches. Learning option valuation approaches takes significant organizational commitment, but ROA is well within the capabilities of motivated managers.

Comparing ROA, SP, and ERM

In the last decade, finance practitioners and researchers have developed ROA as a way to value investments under uncertainty. SP and ROA have complementary strengths and weaknesses as tools for managers making strategic investment decisions under uncertainty. Ideally the two approaches are combined in an integrated risk management (i.e., ERM) process involving scenario development, exposure identification, formulated risk management responses, and implementation steps. We advocate a corporate-level perspective on managing risk that takes into consideration the full range of exposures across a firm’s portfolio of businesses as well as its operations. Most of the ROA literature places a predominant emphasis on quantitative analysis. However, this chapter argues that there is significant value in the qualitative assessment of real options.

IV Analysis of Uncertainties through SB

Scenarios can be used, of course, for various purposes in organizations. We have already described the process of constructing or building qualitative scenarios in SP in the sense of firm-level long-range planning. Quantitative scenarios are valuable in capital budgeting projects and other internal, project-based situations. As used in the latter processes, SB is often referred to as SA (see next section).

The parameters of various operational and strategic decisions can be examined qualitatively, quantitatively, or in combination through the building of scenarios, a process we refer to as SB. In an SB process, critical risk factors or uncertainties, internal or external to the unit that the decision affects, are identified. These can be evaluated and used to construct the two axes of a matrix. Each of the four quadrants then represents a possible outcome for the decision. The process is similar to SP, but can be applied internally or externally and can be shorter term than SP’s emphasis on long-term total business strategic choices.
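
The mechanics of that matrix can be sketched in a few lines. The two uncertainties and their levels below are illustrative only, not drawn from the text.

```python
# Sketch of the SB mechanics: two critical uncertainties form the axes,
# and their combinations name the four quadrant scenarios. The axis names
# ("demand", "regulation") and levels are hypothetical.
from itertools import product

axis_x = ("demand", ["high", "low"])
axis_y = ("regulation", ["favorable", "restrictive"])

# Each (x, y) pair is one quadrant of the matrix.
quadrants = {
    (x, y): f"{axis_x[0]}={x} / {axis_y[0]}={y}"
    for x, y in product(axis_x[1], axis_y[1])
}

for label in quadrants.values():
    print(label)  # one scenario per quadrant for the decision team to flesh out
```

Each printed label corresponds to one quadrant; the qualitative work of describing and evaluating each scenario remains with the decision team.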

An SB example

As an example, we choose the decision by the National Ignition Facility (NIF)’s scientist decision-makers to consider using one vendor or two vendors for a critical component.38

The NIF was operated by the Livermore Labs, managed by the University of California under a Department of Energy contract. The critical external factor for this decision was the ability of each vendor to successfully provide the key component, a component that


38See Alessandri et al. (2003).


required a radical innovation in manufacturing technology. In considering two vendors there were four possible outcomes, as depicted in Exhibit 11.4:

● Both vendors could succeed (Quadrant A: “Bullseye”).

● Vendor A could succeed and Vendor B fail (Quadrant B: “X Scores”).

● Vendor B could succeed and Vendor A fail (Quadrant C: “Y Scores”).

● Neither vendor could succeed (Quadrant D: “Complete Flop”).

In this instance NIF’s decision-makers used their own backgrounds and general experience to make an assessment of the outcome for each vendor. They estimated there was a greater than 50% probability that each individual vendor would be able to satisfactorily undertake the development of the radical innovation in manufacturing process and deliver the critical component. Thus, overall, the decision-makers’ intuitive assessment was that Quadrant A was possible, that B or C was likely, and that D was highly unlikely.


Exhibit 11.4. SB for the NIF decision to fund one versus two glass vendors

The two axes are the outcomes for Company X and Company Y in developing the glass manufacturing application and the quality of their output (Successful or Unsuccessful for each). The four quadrants:

A. “Bullseye” (Company X and Company Y both successful): Possible, and provides NIF with future flexibility of choice of the lower-cost provider.
B. “X Scores” (Company X successful, Company Y unsuccessful): Likely, but leaves NIF dependent on one supplier.
C. “Y Scores” (Company Y successful, Company X unsuccessful): Likely, but leaves NIF dependent on the one supplier.
D. “Complete Flop” (both unsuccessful): Highly unlikely, but leads to Livermore Lab’s loss of the NIF project and possible loss of the University of California’s Department of Energy (DOE) contract to manage the national laboratory, and significant disadvantage to the nation in not being able to test its nuclear weapon supply or undertake nuclear research.


The NIF scientist decision-makers estimated that investment in one vendor's development costs would be $12 million and that two things could happen. The vendor could develop a quality production process successfully, on budget and on schedule. However, they felt there was a significant chance that the vendor could fail. They knew that vendor failure would seriously delay the overall project. There was also significant concern that the entire project would be cancelled. On the other hand, if they invested in two vendors, both of them could succeed. If both vendors succeeded, the project would be successful and NIF would have the flexibility of choosing between two vendors in the future. To proceed with two vendors doubled the cost (i.e., an incremental investment of $12 million). Given that the entire project's estimated cost was $1.5 billion, the incremental investment to preserve the entire project was warranted. The NIF decision-makers chose to invest in parallel development by two vendors.
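The insurance logic of the second $12 million can be sketched numerically. In the hypothetical sketch below, only the $12 million per-vendor cost and the $1.5 billion project value come from the text; the assumption of independent vendors and the 55% success probability are invented stand-ins.

```python
# Hypothetical sketch of the parallel-development logic described above.
# Assumes the two vendors succeed independently, each with probability p;
# p = 0.55 is an invented stand-in for the "greater than 50%" estimate.
# Only the $12M per-vendor cost and $1.5B project value come from the text.

def failure_risk(p_success: float, n_vendors: int) -> float:
    """Probability that no funded vendor delivers the critical component."""
    return (1 - p_success) ** n_vendors

p = 0.55
cost_per_vendor = 12e6   # $12 million development contract per vendor
project_value = 1.5e9    # $1.5 billion total project at stake

for n in (1, 2):
    risk = failure_risk(p, n)
    print(f"{n} vendor(s): outlay ${n * cost_per_vendor / 1e6:.0f}M, "
          f"P(no component) = {risk:.4f}, "
          f"expected project value at risk = ${risk * project_value / 1e6:.0f}M")
```

Under these assumptions, the second $12 million more than halves the probability that no vendor delivers, cutting the expected project value at risk by hundreds of millions of dollars, a large return on a small incremental outlay.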

The discussion above demonstrates that NIF's decision-makers valued flexibility. Indeed, in general, when two managers are looking at two different projects, they prefer the project that has greater flexibility. Management is often willing to spend additional funds to design in flexibility. A question that certainly arises is how managers can justify pursuing projects with higher costs but more flexibility. By appropriately designing the contract with each vendor and building in milestones and progress payments, NIF could build in the flexibility to abandon if development with a vendor was not progressing as expected. However, it was clearly more expensive to fund parallel development, and while future benefits could be qualitatively described, they were difficult to quantify. The benefits were, in short, highly uncertain.

Today, most executives and academics recognize that, when market conditions are highly uncertain and managers have decision flexibility, the traditional financial analysis tools for making strategic and operational decisions are not adequate decision-making tools. At a qualitative level, strategic decision-makers can use scenarios to define projects and to push the boundaries of what the possible outcomes of the project might be. However, from a financial analysis perspective, scenarios are very difficult to use to quantitatively value flexibility. Quantitative valuation is limited by identification of the values that the variables making up the scenarios can take and by the assignment of probabilities to the possible scenarios. Thus, the benefit from using scenarios derives far more from the process of developing the scenarios (i.e., SB).

V Driving for Quantification – SA

In the fields of finance and accounting, SA is the use of internally consistent sets of data for quantitatively evaluating alternative outcomes. For example, in establishing pro formas, analysts will generally include projections of revenue growth and other parameters deemed critical for the income statement and balance sheet to yield most likely, best possible, and worst case scenarios. The qualitative designator "most likely" generally designates the mean/median of the distributions of the underlying variables deemed critical by the analyst (with the accompanying necessary assumption that the variables are normally distributed, or nearly so), while the best possible and worst case are qualitatively established toward the tails of the distribution. SA is ubiquitous, and the term is used to denote alternative, internally consistent sets of quantitative assumptions that are expected to impact the outcomes.
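As a toy illustration of SA as just defined, the sketch below builds three internally consistent assumption sets feeding a simple pro forma projection. Every figure is invented for illustration.

```python
# A toy illustration of SA: three internally consistent assumption sets
# (worst case, most likely, best possible) driving a simple pro forma
# projection. All numbers here are invented for illustration.

scenarios = {
    "worst case":    {"revenue_growth": -0.02, "gross_margin": 0.30},
    "most likely":   {"revenue_growth":  0.05, "gross_margin": 0.35},
    "best possible": {"revenue_growth":  0.12, "gross_margin": 0.40},
}

def year_n_gross_profit(base_revenue: float, growth: float,
                        margin: float, years: int) -> float:
    """Gross profit in the final projection year under one assumption set."""
    return base_revenue * (1 + growth) ** years * margin

for name, a in scenarios.items():
    gp = year_n_gross_profit(100e6, a["revenue_growth"], a["gross_margin"], 5)
    print(f"{name}: year-5 gross profit = ${gp / 1e6:.1f}M")
```

The point of the exercise is the internal consistency of each set: a "worst case" pairs weak growth with a compressed margin, rather than mixing optimistic and pessimistic assumptions.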


SA has application in short-term decisions where there is risk, but the uncertainties are not high. In short, the underlying parameters are expected to be continuous and the data can be forecasted. Significant advances have been made in quantitative risk assessment processes. However, SA is not as applicable where the underlying time series of data are likely to be discontinuous (e.g., a radical change in technology that obsoletes manufacturing capabilities).

A major difficulty with SA is that it can be used to project data sets with risk, but is less useful under conditions of significant uncertainty. Indeed, one of the difficulties with SA is that quantification of variables yields outputs from the modeling options that are seemingly certain, but which may be fraught with serious errors.

SA is a sub-set of SB and can be used as a tool in ROA and ERM. Using SA indicates the firm is operating in an environment where variables are knowable and, relatively speaking, less uncertain. Exhibit 11.5 summarizes scenarios as used in SP, SB, and SA.

VI Applying KUU to SP, SB, ERM, ROA, and SA – A New Way of Thinking about Uncertain Variables and Risk

We turn our attention to a new notion – KUU – and consider the processes (SP, ERM, ROA) and tools (SB and SA) within its framework (see Exhibit 11.6).

What do we know – what is certain? Either the issue is short term (which by definition does not apply to SP) or it has been stable in its performance and there is no reason to expect it will change.


Exhibit 11.5. Comparison of scenarios as used in SP, SB, and SA

(a) SP
– Time frame: Long term
– Qualitative/quantitative: Qualitative
– Variables/level of analysis: Multiple/firm level
– Typical applications: Firm level. Qualitative identification of plausible future scenarios of highly uncertain external factors and design of appropriate firm-level strategies.

(b) SB
– Time frame: Moderate
– Qualitative/quantitative: Qualitative/quantitative
– Variables/level of analysis: Multiple/project
– Typical applications: Project level under conditions of uncertainty. Qualitative identification of factors external to the project and prediction of outcomes for the project. Quantitative estimates of the likelihood of the scenarios occurring.

(c) SA
– Time frame: Short
– Qualitative/quantitative: Quantitative
– Variables/level of analysis: Multiple/project
– Typical applications: Project level (e.g., sensitivity analysis in pro formas or discounted cash flow (DCF) analyses).


What is Unknown? Certainly the longer-term states of unstable factors are less predictable, and thus their future states are less certain.

Unknowable? What do we just not see as impacting on our system? The word system was chosen very carefully here. Often the factors that are unknowable are: (a) outside our system as defined, and thus they are unexpected when they impact on our system because they simply were not on the radar screen, or (b) so unpredictable that we simply do not know where the outcome might be – we can identify the factor, we just cannot identify the plausible states.

In using the various programs and processes for managing risk and uncertainty, executives are responsible for moving the Unknown to the Known and the Unknowable at least to the Unknown. Current mandates underscore even more sharply than before that we proactively address this set of issues.

How can we apply the KUU concept?

SP incorporates these notions implicitly and explicitly. SP helps us to understand the sources of uncertainty and think through the actions to reduce the risks of taking inappropriate action. SP is not as concerned with the Known – if a factor is known, it is unlikely to have a range of possibilities that we would not assume are equally probable (i.e., for the purposes of discussion).

The Unknown is where SP focuses – on factors that have a range of possibilities and, again, for the purposes of the process, assumed to be reasonably equally likely or probable. That is, if one thinks of outcomes as a continuum of some nature, the groups of executives have chosen four nodes on a distribution assumed to have an equal distribution of probability over the range being considered.

What about the Unknowable? Obviously, "simply" by choosing the factors on the axes, the executives are judging the factor/set of factors to be more highly uncertain and also critical (i.e., important to survival). SP, adeptly facilitated, can stretch the imagination as much as humanly possible to consider what other factors, perhaps those currently unknowable, might have significant effect.

What about ROA? It is much the same as SP in terms of understanding the outcomes of decisions taken at each stage of the analysis (i.e., the time-modularized decision). The process is less concerned with the known, and rather works explicitly with the unknown. The facilitated process pushes to include what might otherwise be unknowable into at least the unknown.

What about ERM? We come to the same conclusion. ERM is a process intended to capture quantitatively and qualitatively the KUUs in each of the critical functional areas of the firm and design contingency plans to minimize the negative effects of a given scenario (e.g., just in case the scenario entitled "The Pits" in Exhibit A-3 really does occur!). In this sense ERM is a process for bringing the Unknown into the Known, and perhaps the Unknowable into the Unknown.

Obviously, by choice of current business model, executives have implicitly made the assessment that a specific scenario is most likely (a "no–no" for the SP process as strictly defined,



Exhibit 11.6. Relating KUU and SP/SB/ERM/ROA/SA

● What does KUU have to do with SP/SB/ERM/ROA/SA?
● What do SP/SB/ERM/ROA/SA have to do with KUU?
● SP/SB/ERM/ROA/SA relative usefulness in dealing with KUU

The techniques/processes below are ordered from high to low risk/uncertainty; each is rated on how strongly it applies to the Known, the Unknown, and the Unknowable.

SP (Hi risk/uncertainty) – Known: X; Unknown: XXX; Unknowable: XX
SP focuses on the longer-term futures where there is greater uncertainty and therefore higher risk (e.g., regulatory change; world political conditions). While SP deals with the Knowns as a foundation, its emphasis is on the Unknowns. Through pushing the boundaries using dialog and consensus building, SP establishes plausible descriptions of the Unknowns and, potentially, the Unknowables.

ERM – Known: XX; Unknown: XXX; Unknowable: ?
A process of inventorying the risks throughout the firm. It deals primarily with the Knowns (e.g., databases of counterparty credit histories) and the Unknowns (e.g., the future projections of expected behaviors based on past credit histories), and could potentially deal with the Unknowables.

ROA – Known: XX; Unknown: XXX; Unknowable: ?
ROA assumes future flexibility exogenous to the decision-makers (see, e.g., Daniel A. Levinthal (Wharton), "What is not a Real Option: Considering Boundaries for the Application of Real Options to Business Strategy," Academy of Management Review, January 2004). Thus, it necessarily deals with the Knowns and the Unknowns (i.e., Monte Carlo simulation of expectations).


ROA (continued)
It would seem that ROA has more difficulty with the Unknowables since (by definition) unknowables may be subject to qualitative identification, but not quantification.

SB – Known: XX; Unknown: XX; Unknowable: X
SB is not the same process as SP, since SP is generally used to focus on the environment exogenous to the firm, with the scenarios thus generated serving as a basis for generating possible contingent long-term strategies. Thus SB is defined here as the use of scenarios which may include qualitative or quantitative parameters or both. The scenarios from SB may be internal to the firm, or external, and are generally short term and used for investment or capital budgeting decisions which may be of a strategic or operational nature. SB deals primarily with the Knowns and the Unknowns, and can also deal with the Unknowables.

SA (Lo risk/uncertainty) – Known: XX; Unknown: X; Unknowable: ?
SA is essentially a sub-set of SB in which the variables can be quantified. As defined in this chapter, SA consists of establishing internally consistent and selected sets of quantified predictions of the values that future variables will take. Thus it deals with the Knowns, can only partially deal with the Unknowns, and has at best limited ability to deal with the Unknowables.

Legend for application to KUU: X = applies somewhat; XX = applies moderately; XXX = applies strongly; ? = not sure.


but an effective approach to proactively minimize enterprise risk). And, from a systems approach, the examination of the approaches at the sub-unit level permits a higher-order body, for example, the team of senior executives (dominant coalition), to think through whether there is sub-optimization, and to assure the (presumably very) dominant coalition known as the Board of Directors that the firm as a whole is (at least defensively) managing "total ERM." Of course, the executive team wants to formulate a "robust strategy," that is, one that ensures the company's survival regardless of which scenario occurs.

The strategic management domain since its inception has focused on the executive as the "hero" assisted by analytic techniques. Some of these techniques, such as SP, are more qualitative in nature and rely on experience, judgment, and intuition. Others are more quantitative, such as ROA and SA, and most recently ERM. The quantitative techniques reinforce and support SP. In some sense SP establishes the long-term hypotheses about the future, against which executive observations of U(nknown) and U(nknowable) signposts/milestones and strategic and operational choices are tests of those hypotheses, while ROA, ERM, and SA are short-term hypotheses used to ascertain whether the Ks and Us continue to hold.

Thus, overall, we see that strategic management and finance are both about management of uncertainty, mitigating risk, and enhancing profitability in the short run and survivability in the long run. We can see the comparabilities in the techniques and, increasingly, in the processes. And we remain convinced that, by dialogs across the strategic management–finance boundaries, we can provide each "side" a more effective bundle of skills, tools, and processes.

VII Conclusions

Certainly new ideas do not arrive full-blown in organizations. Incorporation of the concepts presented in this chapter will require consideration by multiple levels in organizations, not just the dominant coalition. Steps include gaining advocacy of senior executives, keeping the language simple, and breaking a new concept or process into "Trojan mice".39

ROA was developed over the decade of the 1990s and provides complementary strengths and weaknesses as compared to SP as managers make strategic investment decisions under uncertainty. In an integrated risk management approach these two techniques can be combined. The process involves scenario development, exposure identification, formulating risk management responses, and implementation steps. The discussion in this chapter encourages a corporate-level perspective incorporating consideration of the range of exposure across a firm's portfolio of businesses. This chapter illustrates qualitative assessment of real options.40

How do these strategic and finance techniques, methodologies, processes, and programs relate to the KUU framework? The critical contingencies are the degrees of the uncertainties and the expectation as to whether, over time, current uncertainties will or can become

39Davenport, T.H. and Prusak, L., "The Practice of Ideas: Identifying, Advocating and Making It Happen," Babson Insight, 2003. (Note that this article is adapted from the authors' book, What's the Big Idea: Creating and Capitalizing on the Best Management Thinking (Boston, MA: Harvard Business School Press, 2003).)
40See footnote 10.


certainty. From a business perspective one might invoke "good old SWOTs" (strengths, weaknesses, opportunities, and threats) and consider that SP, SB, ERM, ROA, and SA are ways of examining and designing responses to OTs (i.e., those factors external to the firm).

The commonalities41 are worthy of consideration. They include commonality of objec-tives, overlaps, and interconnections.

They focus on the same basic objectives: All the approaches discussed in this chapter have the same end goal in mind – the future viability of the firm. SP is concerned with long-term future Unknowns, while ERM/ROA/SA are concerned with identification and quantification and emphasize profitability and firm value, generally a shorter-term perspective. Further, the ROA and ERM processes (which utilize SB and SA) clearly encourage broadened participation in dialog and discussion.42

There are tremendous overlaps among SP, ERM, and ROA: SP is essentially going through a process of examining the plausible states of critical OTs43 and then figuring out what might be the most appropriate action to take, that is, how to deploy Ss (or core competences) and mitigate Ws44 (i.e., what changes in strategy should take place). That is a very simplistic description of a process that has undergone 30 or more years of development. One of its most heralded stories, as noted earlier, is enabling Shell to respond more nimbly to the 1973–1974 Oil Crisis than its competitors.

SP, simplistically thinking, starts with an inventory of the Knowns and then moves on to the future outcomes, or plausible states, that the environment can take. SP is usually thought of in terms of a process for identifying longer-term plausible future states and is usually coupled with contingency planning (i.e., what should we do if that future state occurs, and what is our current best alternative?). Thus SP goes through a process, an outcome of which is generating strategic alternatives for addressing each outcome. Going through the process of dialog to gain agreement on the descriptions of those plausible long-term futures, followed by discussion of the appropriate action to meet the broadest challenge of those futures, is the most critical aspect of the process.

It is the discussion and the arrival at consensus that are critical – in short, the participants involved in the process come to a better common understanding of the range of the future situations that might develop. Depending on how the process is structured, the participants might also go the extra step of developing a better understanding of the appropriate

41Taylor et al. (2003) and Taylor, M., Leggio, K., Coates, T. and Dowd, M.L., "Real Options Analysis in Strategic Decision-Making," Presentation to ANZAM/IFSAM VI World Congress, Gold Coast, Queensland, Australia, July 10–13, 2002.
42Diane Lander makes the point that the primary benefits of ROA come from the process and not necessarily the end valuation.
43Note that the term scenario is often used to mean the decisions made and the outcomes associated with them. It is also used to mean the results of financial/accounting sensitivity analysis (i.e., best, worst, and most likely scenarios). In this section I am referring to exogenous uncertainties that impact on the firm. Strategic management often uses PEST as a set of categories to capture these (e.g., political, environmental (physical environment), social, and technological).
44Although our unit of analysis is the firm, SP can apply at a country level regarding changes in policy or at a department level regarding operations.


response to each scenario. In short, the process yields contingency plans. SP provides us with the nodes on the distribution of the outcomes … It is a categorical scale, to be sure.

SP is a process, a process which permits executives to work out a common understanding of the possible interactive effects of at least two factors or sets of factors in the environment. Once the players in the process have come to a common understanding of four "points" or "scenarios," then for each of those scenarios it is more likely that that same set of players can come to concurrence as to what to do should that scenario begin to happen. Note that we are dealing with possibilities, not with probabilities; with plausibility, not guesstimates of whether the scenario might happen. Stories are told about executives who have gone through a SP process together and then meet in the hall and say things like, "Well, it looks more likely that 'The Pits' is where things are going … have you been preparing …?" It looks simple, doesn't it? It's not – consensus is only achieved through a carefully facilitated process – indeed, most observe that the process is at least as valuable as the outcome.

What are the interconnections? The discussions within the SP, ROA, and ERM processes especially enable executives to establish a range of combinations of possible future states and possible actions. In short, the executives are going through a form of a total ERM process. At the very least they have identified the actions that would be inappropriate given that scenario, and they have agreed ipso facto on the alternative scenarios. As an extra stage, they can then concur on "leading indicators" that signal whether a scenario might possibly be occurring. The contingency plans are better understood as alternative actions. In short, the firm's executives, at least, have built a broad-based understanding of their environment and the way that their company should be positioned to appropriately address that environment to increase its chances of survival – a truly total ERM approach. Another way of thinking about it is that the executives going through the process have used a qualitative process of truncating their loss alternatives (i.e., an outcome at which ROA is aimed).

Thus we see that the fields of strategic management and finance have developed tools and processes that have commonalities. The tools discussed in this chapter are among those that assist executives in identifying and managing uncertainties, mitigating risks, and exploiting opportunities. The biggest problem in crossing the borders between the disciplines is that for a long time we have been trying to quantify strategy factors. That is appropriate. As finance and strategy come together, the finance perspective especially emphasizes quantifying inputs and outcomes. However, instead of trying to force quantification of strategic factors, we should be taking more of a qualitative approach with the financial tools and concepts. The intent thus is to extend the boundaries of K (i.e., what we know), U (i.e., what we don't know), and U (what we currently cannot know). As the well-known psychologist Erich Fromm put it, "The quest for certainty blocks the search for meaning. Uncertainty is the very condition to impel man to unfold his powers."45 In strategic management and finance we need to mitigate our concerns for our differences and try to pull things together to integrate the two fields in theory, in research, and in practice, so that we can expand our understanding and in the process become more successful at mitigating negative outcomes in the short run, help our organizations profitably pursue opportunities in the medium term, and seek appropriate survival alternatives in the long term.


45Fromm, E., Man for Himself: An Inquiry into the Psychology of Ethics (London: Routledge, 1999, ISBN 978-0-415-21020-1).


Appendix A

SP in the power industry – a look at nuclear

The four exhibits (A-1 to A-4) that make up this appendix summarize an exercise examining Medium Sized Power Company's (MSPC's) nuclear power alternatives and the long-term uncertain factors that impact on the decision regarding the firm's business model.

In this exercise the participants were asked to undertake three steps:

1. Establish a strategic question (see Exhibit A-1).

2. Identify the critical dimensions that would impact upon the question (see Exhibit A-2).

3. Create "stories" or "scenarios" that were plausible and internally consistent (see Exhibits A-3 and A-4).

This group chose the question of what the firm should do regarding nuclear power generation, given that the firm was part owner in a major facility. After identification, the critical dimensions were grouped in the scenario matrix as nuclear acceptability (i.e., combining social and political factors) and cost of alternatives (predicated, among other factors, on technology advances). The four identified scenarios were given the names "Fossil Heaven," "Greenville," "Diversification," and "Nuckies Rule". Once the scenarios were written, the executive group sketched their conceptual understanding of the application of scenarios (Exhibit A-4) and used the earlier steps as a basis for creating strategies appropriate to the scenarios.46

Exhibit A-1. Focal question for scenario exercise

Should MSPC expand its nuclear power capabilities?

YES:
(a) Purchase remaining stake in existing nuclear facility currently jointly owned (Creek)
(b) Merge nuclear facility with a similar nuclear capability owned by another firm
(c) Purchase existing or new unit

NO:
(a) Liquidate current holdings in nuclear power


46Actual application of SP varies immensely in practice. What is described here is the classical approach. However, many consultants shorten the process, change the steps, or include techniques they have developed in house.


Exhibit A-2. Uncertainties to consider for scenarios

● Social and political acceptability of nuclear power generation
– Environmental → global warming
– Federal and state policy → regulation versus deregulation
– Media and movies → social pressures
– Perceptions and viability of renewable power sources
– Safety and danger of nuclear plants (e.g., waste issues)
– Nuclear accidents (Three Mile Island)

● Technology changes
– Hydrogen
– Fuel cells

● Cost and volatility of other energy sources
– Coal
– Natural gas
– Wind
– Solar
– Oil
– Hydroelectric

Exhibit A-3. Scenario dimensions

The matrix crosses nuclear acceptability (acceptable versus not acceptable) with the cost of alternatives (low versus high):

"Fossil Heaven" (no nuclear acceptability, low cost of alternatives)
• Regulatory activity that increases cost for nuclear
• Accidents or recorded safety reports
• Reasonable regulation on CO2 and abundant fossil fuel sources

"Greenville" (no nuclear acceptability, high cost of alternatives)
• Renewable model grows in strength as both nuclear and other energy sources become unfavorable and costly
• Strong power base for environmentalists
• CO2 regulation and high costs of fossil fuels

"Diversification" (nuclear acceptability, low cost of alternatives)
• Low cost of alternative sources
• Washington in favor of nuclear capabilities
• Favorable public perception of nuclear

"Nuckies Rule" (nuclear acceptability, high cost of alternatives)
• Washington in favor of nuclear capabilities
• Improved treatment of nuclear-related capital investments
• Favorable public perception of nuclear power
• Restriction to fossil fuel sources
• Stiff regulations on emissions
• Creation of new nuclear capabilities


Exhibit A-4. Scenarios

[Figure: a diagram running from Today to the Future. Scenarios A, B, and C lie within the area of plausible futures; wildcards X and Y lie in areas of the future outside it.]

Appendix B

What advantages does ROA have compared to NPV?

Like NPV, the purpose of ROA is to quantify today's value of future opportunities. NPV and its counterpart IRR remain the most frequently used management valuation tools for major investment decisions. Even when not explicitly used, NPV usually underlies the basic thinking process behind strategic choices. However, NPV has been greatly criticized on several bases, including: (a) its arbitrary choice of a discount rate, (b) the risk averseness inherent in the choice of the discount rate, and (c) its non-recognition of management's prerogative to make decisions as the strategic investment evolves.

To apply NPV, managers need to know four elements:

1. Discount rate (adjusted to reflect the risk level).

2. Amount of investment or cash outflows (usually assumed as committed, even if expended at various time intervals).

3. Time period for completion.

4. The amount of the cash inflows.
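The four elements map directly onto the standard discounted cash flow calculation. A minimal sketch, with all figures invented for illustration:

```python
# A minimal sketch of the four NPV elements just listed: a risk-adjusted
# discount rate, committed outflows, a time period, and the cash inflows.
# All figures are invented for illustration.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value, where cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Commit 10 today (year 0); receive 3 at the end of each of years 1-4.
value = npv(0.10, [-10, 3, 3, 3, 3])
print(f"NPV = {value:.2f}")   # slightly negative at a 10% discount rate
```

Note how the entire outlay is treated as committed up front; that assumption is precisely what the ROA elements below relax.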

In ROA managers need to know five elements about the investment opportunity:

1. Discount rate (risk-free rate).

2. Exercise price – the amount of the investment that the investor can, but does not necessarily have to, make at the conclusion of the next phase.47

47In other words, the amount the investor purchases the right, but not the obligation, to make – provided theinvestor makes the first investment (i.e., buys the option).


3. Time to expiration (i.e., the time interval before the next investment decision mustbe made and the capital outlays undertaken).

4. Value of the underlying asset (i.e., the present value of the cash inflows).

5. The volatility, a measure of how uncertain the future value of the opportunity is.
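These five elements are exactly the inputs of a Black–Scholes call valuation, one common (though not the only) way of pricing such an option. The numbers below are invented for illustration; this is a sketch, not a recommended valuation procedure.

```python
# The five ROA elements above as Black-Scholes call inputs: risk-free rate
# r, exercise price K, time to expiration T, underlying asset value S, and
# volatility sigma. All numbers are invented for illustration.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, T: float, sigma: float) -> float:
    """Black-Scholes value of the option to invest K at time T in an asset
    whose present value of cash inflows is S."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Option to commit K = 120 in two years on inflows currently worth S = 100.
print(f"real option value = {bs_call(100, 120, 0.05, 2.0, 0.40):.2f}")
```

Note that committing today would have a negative conventional value (S − K = −20), yet the option to wait is worth a substantial positive amount, driven by the volatility and time-to-expiration inputs that NPV has no place for.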

For this discussion let us look at two examples: the mid-1990s example of the development of a D-Xerox machine, and a second example of a large pharmaceutical company acquiring a small pharmaceutical startup.

Example #1 – The D-Xerox machine: The D-Xerox machine is a piece of equipment that looks like a Xerox machine but has, rather, the ability to remove all vestiges of copier deposits from paper. The paper can be reused, but, more importantly, sensitive information is permanently erased, a more secure alternative than shredding sensitive documents. The invention is ready for beta testing. The inventor is well aware that the first 2 years of user acceptance will determine the success of the venture and that there is considerable uncertainty regarding the extent of user acceptance. The business plan calls for an investment outlay over the next 3 years that can be modularized; that is, the first year can consist of building a limited number of prototypes and seeking and managing beta sites, the second year can involve marketing activities and larger-scale assembly, while the third year calls for more intensive investment in marketing and manufacturing capabilities. In addition, the investment calls for phased inventory buildup of disposable supplies for the equipment. The business plan also outlines plans for other applications of the underlying technology.

Compare the D-Xerox situation to a call option in the securities market. Obviously, the option will be exercised only if the stock price is above the strike price on the option's expiration date. Otherwise, the holder of the option will allow the option to expire as worthless. The D-Xerox venture is really a series of call options (i.e., investment modules or phases). Phase One is the cash inflows and outflows associated with the investment in the prototypes and beta sites – clearly an expected negative outcome! Phase Two is essentially a call option on the future cash outflows and inflows from the scaling-up phase, probably also a negative outcome. Phase Three is the future cash outflows and inflows from the greater activity envisioned from scaled-up marketing and manufacturing. However, the expenditures for Phase Two will only be made if the experience in Phase One indicates that the outcomes from the future investment will be positive. Further into Phase One, the Phase Two (and perhaps Phase Three) outcomes are likely to be clearer to management than they are at the beginning of the investment in Phase One. Similarly, the investment in Phase Three is contingent upon the outcomes in Phase Two, and could be modified significantly if additional technology breakthroughs are forthcoming.

With its heavy weighting of earlier outcomes, standard NPV techniques would properly value Phase One, but the flexibility inherent in future phases is not well addressed with the NPV approach. It is more appropriate to evaluate Phases Two and Three using ROA. NPV, in essence, ignores the reality that the future Phase Two and Phase Three capital outlays are subject to managerial discretion. Thus, the NPV rule would undervalue the total value of this opportunity. In using NPV, entrepreneurs and managers may be systematically rejecting opportunities that really deserve further exploration.48 “Traditional valuation tools do not quantify strategic options embedded within an investment project and, therefore, may produce inadequate indications of the timing of an investment.”49 Traditional NPV analysis of such multi-staged strategic investment decisions generally leads to negative valuation of these situations and thus the decision not to undertake the initial investment.

Ch11-I044949.qxd 6/29/06 9:10 AM Page 182

Page 180: Managing Enterprise Risk
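One of the embedded options the quotation alludes to, the option on the timing of an investment, can be illustrated with a toy two-state model. All numbers below are hypothetical; the sketch only shows how waiting for uncertainty to resolve can be worth more than investing immediately, even when the invest-now NPV is positive.

```python
# Toy model of the option to defer (all numbers hypothetical): a project
# costs 1000 today; next year it is worth 1300 (good state) or 900 (bad
# state) with equal probability. Static NPV says invest now, but waiting
# lets management invest only in the good state.

cost = 1000.0
values = {"good": 1300.0, "bad": 900.0}
p_good = 0.5
r = 0.05  # annual discount rate

# Invest now: pay the cost today, receive the expected value in a year.
expected_value = p_good * values["good"] + (1 - p_good) * values["bad"]
npv_invest_now = expected_value / (1 + r) - cost

# Defer one year: invest only in states where value exceeds cost.
payoff = {s: max(v - cost, 0.0) for s, v in values.items()}
value_of_waiting = (p_good * payoff["good"]
                    + (1 - p_good) * payoff["bad"]) / (1 + r)

print(f"invest-now NPV:     {npv_invest_now:6.1f}")
print(f"value of deferring: {value_of_waiting:6.1f}")
```

Discounting expected payoffs at a single rate is a simplification; a full real options treatment would use risk-neutral probabilities, but the qualitative point, that flexibility about timing has value NPV does not capture, survives.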

Example #2 – Acquiring a pharmaceutical startup company: As a second example, take the possibility of a large pharmaceutical firm such as Merck confronted with the possibility of acquiring BioHope, a small company that has been developing a drug for Alzheimer’s. The drug appears to have promise and, if successful, could provide sales of several billions of dollars per year and profits of nearly $1 billion. Merck could purchase BioHope today for $600 million. BioHope still has to complete Phase One clinical trials in human patients for its Alzheimer’s drug and, if successful, then incur the larger and larger expenses for Phases Two and Three. In short, if Merck purchases BioHope for $600 million, over the next 10 years Merck would still have to expend another $500 million to take the drug through clinical trials and the Food and Drug Administration (FDA) approval process. Past history indicates there is about a 10% chance the drug can be brought to market.

Can this decision be modularized in such a manner that ROA can be applied? The situation has characteristics analogous to those of financial options: the upside potential is huge, and so are the uncertainties. Making an irreversible commitment to the venture through acquisition involves significant opportunity costs. How can Merck structure a series of sequential investments in such a manner that the company can participate in the potential upside while minimizing short-term commitments? The goal would be to gain more information, thereby reducing uncertainty, and making subsequent investments on the basis of the increasing body of knowledge.

Merck could offer BioHope an options contract. One structure for the contract would be for Merck to pay $10 million immediately and the remainder spread over 10 years, with each payment dependent on the successful completion of the next phase that the payment is funding. The milestone payments could be structured to get larger as BioHope’s drug got closer to market and uncertainty about the ultimate outcomes is reduced. However, Merck would reserve the right to terminate the contract if the BioHope drug did not successfully pass any of its milestones.

This situation is an options contract. Why? Because Merck would invest only a relatively small amount to have the opportunity to participate in the BioHope drug’s upside potential. Merck’s downside risk is limited to the milestone (option) payments made at any one point in time. In each “module,” Merck is able to wait for more information before committing additional investment. Merck thus maintains its strategic flexibility since the company is able to consider and pursue other ventures should appropriate opportunities arise.

It is true that there is a price. In the final analysis, Merck will have to support more than half of the development costs and incur the costs of manufacturing, sales, and marketing in order to get a royalty on the drug. However, the contract provides significant potential for Merck, and the deal is a good one for BioHope also.
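The milestone structure lends itself to a short arithmetic sketch. The $10 million initial payment and the roughly 10% overall success rate come from the text; the per-stage payments and conditional probabilities below are invented to make the arithmetic concrete.

```python
# Hypothetical milestone schedule for the BioHope deal. Only the $10M first
# payment and the ~10% overall success rate are from the text; the per-stage
# split is invented. Each payment is made only if the prior milestone
# succeeds, so the expected outlay is far below an upfront acquisition.

upfront_price = 600.0  # $ millions, outright acquisition

# (milestone payment in $M, probability this stage succeeds given the last)
stages = [
    (10.0, 0.50),   # initial option payment; Phase One trials
    (90.0, 0.50),   # Phase Two
    (200.0, 0.40),  # Phase Three
    (300.0, 1.00),  # FDA approval / launch payment
]

expected_outlay = 0.0
p_reached = 1.0  # probability the deal is still alive at this stage
for payment, p_stage in stages:
    expected_outlay += p_reached * payment  # pay only if still alive
    p_reached *= p_stage

print(f"overall success probability: {p_reached:.2f}")
print(f"expected milestone outlay:   ${expected_outlay:.0f}M "
      f"vs ${upfront_price:.0f}M upfront")
```

With these assumed stage probabilities the overall success rate multiplies out to the 10% the text cites, and the expected outlay is a fraction of the $600 million acquisition price, which is exactly the downside protection the options contract buys.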

Executive decision-making under KUU conditions 183

48 See Hevert (2001) p. 2.
49 See Botteron (2001) p. 472.

Traditional NPV would be appropriate if the investors (i.e., Merck) were reasonably confident of the cash flow projections in BioHope’s business plan. NPV would take into consideration the sum of the discounted value of all the cash flows expected for the venture over the foreseeable future. However, given the uncertainties, it is likely that Merck would find that the NPV was negative, since earlier net cash outflows are weighted more heavily. Thus, the expectation would be to reject the investment in BioHope. NPV, however, neglects a critical contingency: if the Merck executives make the initial investment, they retain the right, but not the obligation, to learn more about the technology, market, operations, and future applications. Such learning leads to an accumulation of acquired knowledge. Making the initial investment buys the option to make additional future expenditures to scale up the venture based on this additional knowledge. Further, NPV does not take into consideration management’s prerogative to modify the strategy, delay, or even terminate the venture should management’s enhanced understanding and new information suggest that a change in strategy is needed. As one NPV critic has noted, “Standard NPV analysis … treats all expected future cash flows as if they will occur, implicitly assuming a passive management strategy.”50 The passivity accusation is not, strictly speaking, correct. Better put, standard NPV assumes that the management strategy will continue as in the original plan and does not recognize management’s ability to modify its actions contingent on acquired information and understanding (i.e., management’s flexibility).

As it turns out, Merck was one of the first companies to apply a real options perspective to evaluating strategic decisions like the BioHope opportunity. Merck and other pharmaceutical companies have increasingly found ROA useful in tackling strategic investment decisions of this nature.

ROA offers a significant advantage over NPV analysis under conditions of uncertainty. Where expectations or predictions of future values are certain (i.e., there is no volatility in the underlying value of the asset), the NPV model will yield the same results as ROA and, given its greater simplicity, should be used. General observations, however, underscore that the world is becoming more complex – not simpler – and thus we can expect tomorrow’s uncertainties to be greater. The overall situation suggests that ROA will continue to demonstrate greater value for application in an increasingly volatile environment.

Strategic decision-makers need to take note of the volatility measure, the only piece of data required for valuing a real option that is significantly different from the elements required for an NPV analysis. Volatility is explicitly recognized as a key driver of value in ROA. Indeed, the greater the volatility, the greater the value of the option, a concept that is difficult to grasp for many strategic managers steeped in the NPV approach. In the NPV approach, high volatility is recognized through the use of higher discount rates. Higher discount rates lead to lower values for the investment. In ROA, higher volatility is linked to higher value. There are at least three reasons why:

● Greater volatility creates a wider range of possible future values for the opportunity.

● Strategic decision-makers can actively manage the investment, taking continuous cognizance of these future values, and add value on an ongoing basis, an aspect explicitly recognized in ROA but not in NPV, where the all-or-nothing decision is assumed to be made at one point in time.

50 See Hevert (2001) p. 2 (Emphasis added).

● Strategic decision-makers will only exercise their options to make the future investments if the value of the opportunity exceeds the exercise price. Greater uncertainty on the downside will not hurt them; they simply will not make the investment (i.e., exercise their option). However, greater uncertainty (i.e., a wider spread of values) on the upside produces a greater excess of opportunity value over the required investment (i.e., the exercise price). Thus, there is correspondingly greater option value under conditions of greater uncertainty.
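The volatility effect described above can be seen in a standard option-pricing formula. The sketch below uses Black-Scholes as a proxy for the option on a deferrable investment; the project value of 100, required investment of 110, and the other parameters are hypothetical. Note that at zero volatility the value collapses to the discounted intrinsic value, consistent with the earlier observation that NPV and ROA coincide when there is no volatility.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes value of a European call, read here as the option on a
    deferrable investment: S = PV of project cash flows, K = required outlay."""
    if sigma <= 0:
        # No volatility: value is just the discounted intrinsic value.
        return max(S - K * math.exp(-r * T), 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Same expected values throughout; only volatility rises, and so does value.
for sigma in (0.0, 0.2, 0.4, 0.6):
    print(f"sigma={sigma:.1f}  option value={bs_call(100, 110, 0.05, sigma, 2):7.2f}")
```

The monotone increase with sigma is the asymmetry in action: the wider upside raises the payoff while exercise discretion truncates the downside at zero.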

Another set of ROA scholar-practitioners has suggested that the major advantages of ROA are as follows:

First, option valuation is based on objective inputs and a precise list of which inputs are needed and which are not. Those inputs are used in a way that produces market values, and the real options approach guides the user on where to look and why. Experienced users of the real options approach see the patterns, the types of options, and the important sources of uncertainty.

Second, the real options approach provides a framework for revising expectations. In the real options approach, investments are managed over time. The optimal exercise of managerial options requires frequent scans of the environment and updates of important information. Although it is impossible to always avoid the sin of omission, the disciplined thinking about the consequences of uncertainty in the real options approach can help.51

In addition, significant value can accrue to a firm during this process in the form of organizational learning. NPV cannot include the variety of strategic possibilities – new information, changing market conditions, new technologies, and the simple fact that many uncertainties become less uncertain as time goes by. The traditional NPV analysis is appropriate for valuing strategic opportunities in which:

a. The decision to be made is once and for all (i.e., there are no future nodes at which the investment could or would be modified).

b. The future situation is expected to be stable.

c. The strategy and operational modus operandi will hold fairly constant.

However, these criteria characterize few strategic decisions. Rather, strategic decisions typically are:

a. Multi-staged (i.e., the decision can be modified, delayed, even reversed at a future point).

b. Increasingly made under conditions of uncertainty (i.e., the future is not expected to be stable).

c. Actively managed, since managers, given their fiduciary responsibility on behalf of owners of the investment, will modify their actions in order to maximize the value of the investment in the future, including ceasing the activity completely if necessary.

51 See Amram and Kulatilaka (1999) p. 45.

ROA recognizes these possibilities. NPV does not. In short, NPV “… does not capture the richness of the many ways in which a highly uncertain project might evolve, and the ways in which watchful managers will influence this evolution.”52

Bibliography

Alessandri, T., Ford, D., Leggio, K., Lander, D. and Taylor, M.L., “Uncertainty Management in Capital Projects: The Challenges of Implementation: Strategic Decision-Making Contributions from ERM, ROA, KUU, DTA, SM, SWOTs, SP, and CP: Examining Commonalities,” Presented as part of the panel Valuing Uncertainty in Capital Budgeting Projects: Alternatives (St. Louis: Midwest Finance Association, March 2003).

Alessandri, T., Ford, D., Lander, D., Leggio, K. and Taylor, M., “Managing Risk and Uncertainty in Complex Capital Projects,” Quarterly Review of Economics and Finance, 44(5), 2004, 75.

Alleman, J., “Real Options: Management Flexibility and Strategy in Resource Allocation,” Information Economics and Policy, 11(2), 1999, 229–235.

Amram, M. and Kulatilaka, N., Real Options: Managing Strategic Investment in an Uncertain World (Boston: Harvard Business School Press, 1999), p. 63.

Amram, M. and Kulatilaka, N., “Disciplined Decisions: Aligning Strategy with the Financial Markets,” Harvard Business Review, January to February, 1999.

Angelis, D., “Capturing the Option Value of R&D,” Research Technology Management, 43(4), 2000, 31–34.

Anonymous, “Economics Focus: Keeping all Options Open,” Economist, 352(8132), August 14, 1999, 62.

Barton, T.L., Shenkir, W.G. and Walker, P.L., Making Enterprise Risk Management Pay Off: How Leading Companies Implement Risk Management (Upper Saddle River, NJ: Prentice-Hall PTR, 2002).

Bernhard, R.H., “Real Options: Managerial Flexibility and Strategy in Resource Allocation,” Engineering Economist, 45(2), 2000, 182.

Bernstein, P.L., Against the Gods: The Remarkable Story of Risk (New York: John Wiley & Sons, Inc., 1996).

Boer, F.P., “Valuation of Technology using ‘Real Options’,” Research Technology Management, 43(4), 2000, 26–30.

Boulton, R.E.S., Libert, B.D. and Samek, S.M., Cracking the Value Code: How Successful Businesses are Creating Value in the New Economy (New York: Harper Business, 2000). Written by three Arthur Andersen partners, this work identifies five classes of assets: physical, financial, customer, employee and supplier, and organization. The work identifies dynamic and innovative ways in which companies utilize these asset categories to drive value for shareholders.

Botteron, P., “On the Practical Application of the Real Options Theory,” Thunderbird International Business Review, 43(3), 2001, 469–479.

Brealey, R.A. and Myers, S.C., Principles of Corporate Finance (New York: McGraw-Hill, 1996).

52 See Hevert (2001) p. 1.

Brennan, M. and Schwartz, E., “Evaluating Natural Resource Investments,” Journal of Business, 58(2), 1985, 135–157.

Bughin, J., “Black-Scholes meets Seinfeld,” McKinsey Quarterly, (2), 2000, 13–16.

Bulow, J. et al., “Toeholds and Takeovers,” Journal of Political Economy, 107(3), 1999, 427–454.

Busby, J.S. and Pitts, C.G.C., “Real Options and Capital Investment Decisions,” Management Accounting (London), 75(10), 1997, 38–39.

Chapman, C. and Ward, S., Project Risk Management (New York: Wiley and Sons, 1997).

Competition in Electricity: New Markets and New Structures (Arlington, VA; Palo Alto, CA: Public Utilities Reports, Inc., QED Research, Inc., 1990).

Copeland, T. and Antikarov, V., Real Options – A Practitioner’s Guide (New York: Texere LLC, 2001).

Copeland, T.E. and Keenan, P.T., “How Much is Flexibility Worth?” The McKinsey Quarterly, (2), McKinsey & Company, New York, 1998, 38–49.

Copeland, T.E. and Keenan, P.T., “Making Real Options Real,” The McKinsey Quarterly, (3), McKinsey & Company, New York, 1998.

Copeland, T.E., Koller, T. and Murrin, J., Valuation: Measuring and Managing the Value of Companies (New York: John Wiley & Sons, Inc., 1995).

Courtney, H., Kirkland, J. and Viguerie, P., “Strategy Under Uncertainty,” Harvard Business Review, 75(6), 1997, 66–79.

Coy, P., “Exploiting Uncertainty,” Business Week, (3632), New York, June 7, 1999, 118–124.

Coyne, K. and Subramanian, S., “Bringing Discipline to Strategy,” The McKinsey Quarterly, (4), McKinsey & Company, New York, 1996, 14–25.

Damodaran, A., Investment Valuation (New York: John Wiley & Sons, 1996).

Dastgir, M., Real Options in Capital Investment: Models, Strategies, and Applications (Westport, CT: Praeger, 1995).

Dastgir, M., Real Options and Capital Budgeting: An Empirical Study of United Kingdom Firms. Unpublished doctoral dissertation (UK: University of Essex, 1998).

Dixit, A. and Pindyck, R., Investment Under Uncertainty (Princeton, NJ: Princeton University Press, 1994).

Financial Executives Institute, “Survey: Audit Committees should Focus on Key Business Risks” (FEI Press Release, January 12, 2000).

Friedman, T., Longitudes and Attitudes: The World after September 11 (Lee recommended as demonstrating the important nature of leaders).

Garvin, D., Learning in Action: A Guide to Putting the Learning Organization to Work (Boston: Harvard Business School Press, 2000).

Herath, H.S.B. and Park, C.S., “Economic Analysis of R&D Projects: An Options Approach,” The Engineering Economist, 44(1), 1999, 1–35.

Hevert, K., “Real Options: Valuing Flexibility in Strategic Investments,” The Babson Insight (2001) (www.babsoninsight.com/contentmgr.showdetails.php?id=116).

Hoskin, R.E., Financial Accounting (New York: John Wiley & Sons, 1994).

Knight, F.H., Risk, Uncertainty and Profit (Washington, DC: Beard, 1921).

Kroll, K., “Keeping Options Open,” Industry Week, 247(4), February 16, 1998, 22.

Kumar, R.L., “Understanding DSS Value: An Options Perspective,” Omega, 27(3), 1999, 295–304.

Laamanen, T., “Option Nature of Company Acquisitions Motivated by Competence Acquisition,” Small Business Economics, 12(2), 1999, 149–168.

Lam, J., “Enterprise Risk Management and the Role of the Chief Risk Officer,” ERisk, March 25, 2000.

Laarni, T.B., “Real Options, Irreversible Investment and Firm Uncertainty: New Evidence from U.S. Firms,” Brandeis University – International Business School (December 5, 2001, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=293152).

Leggio, K., David, B. and Taylor, M., “The Application of Banking Models to the Electric Power Industry: Understanding Business Risk in Today’s Environment,” Global Conference on Business and Economics, Summer, 2003, London, England.

Leggio, K., Taylor, M., Bodde, D. and Coates, T., “Dating, Engagements, and Marriages among U.S. Electric Utilities: Potential Application of Options Theory,” Current Issues in Management, 1(1), 2001, 43–61.

Leslie, K.J. and Michaels, M.P., “The Real Power of Real Options,” The McKinsey Quarterly, (3), McKinsey & Company, New York, 1997, 4–22.

Li, M. et al., “The Moderating Effect of Environmental Dynamism on the Ownership and Performance Relationship,” Strategic Management Journal, 19(2), 1998, 169–179.

Lounsbury, H.B., Options Theory as a Framework for Decision-Making in R&D Investments (Ottawa: National Library of Canada, 1993).

Luehrman, T., “Investment Opportunities as Real Options: Getting Started on the Numbers,” Harvard Business Review, 76(4), 1998, 51–62.

Luehrman, T., “Strategy as a Portfolio of Real Options,” Harvard Business Review, 76(5), 1998, 89–99.

Mandel, M., The High Risk Society (New York: Random House, 1996).

Mason, S.P. and Baldwin, C., “Evaluation of Government Subsidies to Large-scale Energy Projects: A Contingent Claims Approach,” Advances in Futures and Options Research, 3, 1988, 169–181.

Mauboussin, M., “Get Real: Using Real Options in Security Analysis,” Credit Suisse/First Boston, June 23, 1999 ([email protected]).

McGrath, R.G., “Falling Forward: Real Options Reasoning and Entrepreneurial Failure,” Academy of Management Review, 24(1), 1999, 13–30.

Merton, R.C., “Applications of Option-pricing Theory: Twenty-five Years Later,” American Economic Review, 88(3), 1998, 323–349.

Miller, K.D. and Waller, H.G., “Scenarios, Real Options and Integrated Risk Management,” Long Range Planning (London), 36(1), February 2003, 93.

Mintzberg, H., “The Rise and Fall of Strategic Planning,” Harvard Business Review, 72(1), 1994, 107–114.

Nichols, N.A., “Scientific Management at Merck: An Interview with CFO Judy Lewent,” Harvard Business Review, 72(1), 1994, 88–98.

Palmer, T.B. and Wiseman, R.M., “Decoupling Risk Taking from Income Stream Uncertainty: A Holistic Model of Risk,” Strategic Management Journal, 20, 1999, 1037–1062.

Pinches, G., “Myopia, Capital Budgeting and Decision-Making,” Financial Management, 11(3), 1982, 6–20.

Pinches, G.E. and Lander, D.M., “The Real Option Approach to Capital Budgeting Decisions,” Working Paper, 1998.

Price, J., “Warren Buffett meets Sherlock Holmes: The Case of the Missing Ten Pounds (with apologies to Sir Arthur Conan Doyle),” Derivatives Strategy, 1997. [Also in Price, J.F. (Ed.), Derivatives and Financial Mathematics, Nova Science Publishers.]

Rao, R.K.S. et al., “Another Look at the Use of Options Pricing Theory to Evaluate Real Asset Investment Opportunities” (Austin, TX: Graduate School of Business, University of Texas at Austin: Distributed by Bureau of Business Research, 1980).

Reary, B., “Strategic Collaborative Commerce with Suppliers Must Go Beyond ROI,” EBN, 1249 (2/12/2001), 82.

Roberts, K. and Weitzman, M., “Funding Criteria for Research, Development, and Exploration Projects,” Econometrica, September 1981, 1261–1288.

Robertson, D.D., “A Markov View of Bank Consolidation: 1960–2000,” Presentation to the Midwest Finance Association, March, 2003 (Douglas D. Robertson, Office of the Comptroller of the Currency, 250 E Street, SW, Washington, DC 20219. Ph. 202-874-4745; Fax: 202-874-5394; E-mail: [email protected]) for a long-term look at consolidation periods in the banking industry.

Sender, G.L., “Option Analysis at Merck: The New Pharmaceutical Paradigm,” Harvard Business Review, 72(1), 1994, 92.

Sharfman, M.P. and Dean, J.W.J., “Flexibility in Strategic Decision Making: Informational and Ideological Perspectives,” Journal of Management Studies, 34(2), 1997, 191–217.

Simons, R.L., “How Risky is Your Company,” Harvard Business Review, 77(3), 1999, 85–94.

Stewart, T.A., “Managing Risk in the 21st Century,” Fortune, 141(3), February 7, 2000, 202–203.

Taylor, M.L., “Strategic Thinking – Strategic Management and Finance Contributions – Examining Commonalities,” Presentation to the Midwest Finance Association, St. Louis, March 28, 2003.

Taylor, M.L. and Leggio, K., “Strategic Decision-Making Contributions from ERM, ROA, KUU, DTA, SM, SWOTs, SP, and CP,” Presentation to the Midwest Finance Association, St. Louis, Spring, 2003.

Taylor, M., Leggio, K., Bodde, D. and Coates, T.T., “Dating, Engagements, and Marriages among Electric Utilities: An Application of Options Theory,” Current Issues in Management, 1(1), Fall, 2001, 43–61.

Trigeorgis, L., Real Options – Managerial Flexibility and Strategy in Resource Allocation (Cambridge, Massachusetts: MIT Press, 1996).

Vermeulen, F. and Barkema, H., “Learning through Acquisitions,” Academy of Management Journal, 44(3), 2001, 457–476.

CHAPTER 12

Assessing Capital Adequacy

Robert Anderson and the Committee of Chief Risk Officers1

Executive Director, CCRO

Introduction

The concept of capital adequacy has been a topic of interest and debate for many years. In its simplest definition, capital adequacy is the availability of funds necessary for a company to meet its foreseen and unforeseen obligations – both short term and long term. Capital should be sufficient to allow a company to operate as a going concern through expected and unexpected business and economic cycles without disrupting operations and while continuing to support the process of shareholder value creation.

The energy industry can benefit and borrow from the lessons “learned” in the financial sector regarding the design of a framework for measuring capital adequacy. Through regulation, banks are required to hold sufficient capital to reduce the risk of insolvency and the potential cost of a bank’s failure to depositors. In 1988, the Basel Committee on Banking Supervision published the Capital Accord. Since then, a more risk-sensitive framework has been debated. The banks, through the New Basel Capital Accord, refined their framework for capital to incorporate a menu of approaches to assess risk factors (market, credit, and operational).

The energy industry has been slow to adopt many of the capital adequacy concepts the banks use, primarily because of the complexities specific to the energy sector. Energy companies typically have long-lived physical and financial assets and liabilities, which pose significant market, credit, operations, and operational risks. Further, since the energy market is not always sufficiently liquid to help measure and mitigate these risk exposures, it is very difficult to determine the appropriate level of required capital to carry these risks. In addition, the industry itself has been changing radically, and deregulation has enabled many companies to expand their business interests into unregulated operations that have introduced new market, credit, operational, and operations risks.

1 By participating in energy commodities markets throughout the world, companies are exposed to a variety of risks. However, each company has developed its own financial reporting practices, risk management techniques, and infrastructure to manage its business. The CCRO has been formed in an effort to compile risk management practices surrounding these activities. The Committee is composed of Chief Risk Officers from leading companies that are active in both physical and financial energy trading and marketing. They are committed to opening channels of communication and establishing best practices for risk management in the industry.

The Committee of Chief Risk Officers (CCRO) has developed a capital adequacy framework for application in the energy industry. To date, there have been fragmented efforts within the industry to address capital adequacy, primarily through the use of a “one size fits all” approach to calculating capital adequacy. This chapter is an introduction to the concept of capital adequacy, addressing many of the complexities in the energy industry. The intent is to introduce a set of emerging practices that energy companies can explore and use.2

Companies should adopt specific emerging practices as appropriate based on their individual circumstances and needs. There is a broad distinction between regulated utilities and asset-based merchant/trading companies. Some entities participate in both these activities, in which case the activities may be measured separately under different approaches yet combined in the end to measure overall capital adequacy. Nonetheless, capital adequacy is important for all types of organizations, and this chapter identifies emerging methods to measure an entity’s capital adequacy. Determining capital adequacy is not an easy task. Companies may require a substantial transition period for adoption and implementation. Furthermore, emerging practices are not deemed to be static. They will change and adapt on a continuing basis to remain relevant.

The focus for capital adequacy models is on how we measure net assets, as opposed to the amount a company “should have,” with a resultant “excess” or “shortage.” There are two methods for calculating net assets in measuring capital adequacy for economic value – invested capital and market value – with strengths and weaknesses associated with both measurements. Invested capital is a more straightforward approach but has a significant drawback in that values on the balance sheet may not reflect the market value of assets, especially for regulated companies. Estimated market value is the preferred, albeit more difficult, approach because of the reliance on assumptions about several key factors. Economic capital is the capital a company is required to hold to support the risk of unexpected loss in the value of its portfolio. Economic capital should encompass all risk factors the enterprise faces – market, credit, operational, and operations.

The chapter will proceed as follows. Section II will outline the concept of capital adequacy and its balanced components; Sections III and IV look at the determination of capital adequacy from an economic value and financial liquidity perspective, respectively; Section V notes the importance of the concept of capital adequacy; and Section VI concludes.

The Concept of Capital Adequacy

Capital adequacy is a potentially vital financial metric designed to assess a company’s short- and long-term outlook for financial health. Many financial metrics emphasize the “returns” for measuring financial performance. Capital adequacy emphasizes “sufficient capital” to meet adverse events. Capital is a key barometer of financial health, providing investors assurances that the company is viable and can weather uncertain outcomes.

2 For a more detailed description of capital adequacy and the work of the CCRO, visit www.ccro.org.

A robust assessment of capital adequacy requires an analysis of and balance between two measurements – economic value and financial liquidity (Figure 12.1). For a company to have an adequate capital level, it must simultaneously possess the capacity to create sufficient economic value for its customers and shareholders and the sources of liquidity to meet maturing obligations under adverse conditions. Insufficiencies in either measure will create inadequacy for the business as a going concern and will hinder its ability to create value. In the context of capital adequacy for this chapter, economic value and financial liquidity have the following connotations:

● Economic value relates to the ability of a company to execute its planned business activities aimed at creating or providing products and services for existing or new customers while creating or enhancing shareholder value. The general state of the global economy and prevailing business and regulatory climate create uncertainty in a company’s future cash flows, thereby creating uncertainty in its valuation. Although the expected economic value may be favorable, the business must have capital to withstand potential unfavorable outcomes to remain a viable competitive entity.

● Financial liquidity relates to a company’s ability to meet demands for cash as they become due. These cash demands arise simultaneously from the company’s physical business activities and from its financial operations and are required to manage the risks inherent in creating economic value.

The vigor with which a particular company approaches its assessment of economic capital and financial liquidity adequacy depends, to a large extent, on (1) the complexity of its portfolio and (2) the availability and commitment of resources, both of which affect the level of complexity used to calculate capital adequacy. Understanding how companies measure risk is as important as the results they are calculating. This includes how one looks at the individual components of each calculation, how the components are summed once they are determined, and how relationships among various factors are accounted for.

Figure 12.1. The components of capital adequacy – balanced economic value and financial liquidity. (The figure notes that capital adequacy refers to the overall assessment of a company’s financial health as a “going concern,” and that a robust assessment requires balance between two financial perspectives: capital adequacy for continuing economic value, i.e., economic capital adequacy, and capital adequacy for continuing financial liquidity, i.e., liquidity adequacy.)

The Components of Capital Adequacy for Economic Value

The framework for assessing capital adequacy (or inadequacy) for economic value requires quantitative evaluations of three components:

1. net assets;

2. debt and debt-like instruments;

3. economic capital.

Capital adequacy for economic value equals net assets less debt less economic capital (Figure 12.2). Methodologies for determining net assets and debt are described in general terms, and some high-level issues concerning their valuation are identified.
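The relationship is simple arithmetic, and a minimal sketch makes the sign convention explicit. The balance-sheet figures below are invented for illustration (in $ millions).

```python
# Capital adequacy for economic value, per the text:
#   excess or shortfall = net assets - debt - economic capital
# All figures below are hypothetical, in $ millions.

def capital_adequacy(net_assets, debt, economic_capital):
    """Positive result = excess capital; negative = shortfall."""
    return net_assets - debt - economic_capital

print(capital_adequacy(net_assets=5200, debt=3100, economic_capital=1600))  # 500 excess
print(capital_adequacy(net_assets=5200, debt=3100, economic_capital=2400))  # -300 shortfall
```

The second call shows how a higher economic-capital requirement, with the same balance sheet, flips an excess into a shortfall.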

Net assets

The net assets for a company (Figure 12.2) are long-term assets and short-term assets and liabilities, including such items as collateral and margin requirements.

Debt

As is evident from Figure 12.2, debt and debt-like instruments reduce net assets. All forms of debt should be accounted for in this component. Debt should reflect the dollar value of the claims on a company that third parties hold as a result of financing or

[Figure 12.2 diagrams the calculation: net assets, less debt, less economic capital (capital for risk or uncertainty in value), leaves an excess or shortfall.]

Figure 12.2. Calculating capital adequacy for economic value.


commercial contracts. This includes both on- and off-balance sheet debt. On-balance sheet debt includes debt obligations such as commercial paper, first mortgage bonds, capitalized leases, and deferred taxes. Examples of off-balance sheet debt are operating leases, guarantees, and unfunded pension obligations.

Economic capital and its components

Economic capital, the third component of the capital adequacy framework, is the capital a company is required to hold to support the risk of unexpected loss in the value of its portfolio. Economic capital should encompass all risk classes (market, credit, operations, and operational) the enterprise faces (Figure 12.3).

The framework for determining capital adequacy for economic value requires an estimation of economic capital. This economic capital should cover the most significant quantifiable risks that a merchant energy business faces: market risk, credit risk, and operative (operational/operations) risk. A company must assess each of these sources of risk.

Market risk

Market risk is broadly defined as the potential loss in value from adverse movement of market price variables, such as energy prices, foreign exchange, and interest rates, over a defined time horizon. Market risk is measured by taking the difference between the expected value of the performance measure and the value of the measure at a certain confidence level on the distribution. Key to the measurement of market risk is the estimation of price movements over time. Both analytical (“closed-form”) and simulation approaches are used. Simulation offers flexibility in handling several features of energy price behavior

[Figure 12.3 maps probabilistic cash flows from risk sources across the business – legal, finance, marketing, asset operations, planning, and trading – to the four risk classes (market, credit, operational, operations). Illustrative factors include price levels, price volatility, basis risks, trade exposures, volume balancing, execution, cash flow, debt level, currency, taxes, interest rates, market expectations, estimation/modeling, strategy, tactics, construction, environment, vendors, supply costs, quality, safety, performance, country, customer relations, competitors, market development, product offering, regulatory, contractual, injury/death, and property risks.]

Figure 12.3. Assessment of economic capital adequacy.


that make analytical solutions difficult. Simulation does have its drawbacks, however, primarily because of the sheer number of iterative simulations and the need for proper treatment of multidimensional correlations. A firm must constantly evaluate the need to model such complexities for its respective businesses in order to determine the appropriate solution for quantifying market risk.
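The simulation route to the market-risk measure described above (expected value less the value at a chosen confidence level) can be sketched as follows; the Gaussian shock model, the 15% volatility, and the portfolio value are illustrative assumptions, not the chapter’s model.

```python
import random
import statistics

def market_risk(values, confidence=0.95):
    """Market risk = expected value minus the value at the given confidence
    level on the simulated distribution (a value-at-risk-style measure)."""
    ordered = sorted(values)
    tail_index = int((1.0 - confidence) * len(ordered))
    value_at_confidence = ordered[tail_index]
    return statistics.mean(values) - value_at_confidence

# Simulate portfolio values under hypothetical Gaussian price shocks.
random.seed(7)
base_value = 100.0
simulated = [base_value * (1.0 + random.gauss(0.0, 0.15)) for _ in range(10_000)]
print(round(market_risk(simulated, 0.95), 1))
```

A closed-form alternative would replace the sorted-sample percentile with the normal quantile; the simulation version is what generalizes to the awkward features of energy prices noted above.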

Credit risk

Credit risk is the risk of non-performance by a counterparty. Economic capital for credit risk is defined as the difference between the expected loss of a portfolio and the maximum tolerable loss implied by a desired confidence level. Economic capital for credit risk is derived by calculating the amount of capital required to support the unexpected credit loss of an organization, using a distribution of credit losses generated by a credit risk model. Note the focus on “unexpected” loss – the measurement of uncertainty around the expected loss. This chapter describes the various approaches and modeling techniques for measuring credit risk and also discusses an “interim” solution that approximates unexpected loss. It is provided for those companies that may not otherwise be able to calculate economic capital.
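A minimal default-only sketch of the unexpected-loss idea; the two-counterparty book and its default probabilities are invented for illustration, and a real credit risk model would also capture migration, recovery, and correlation.

```python
import random
import statistics

def unexpected_loss(losses, confidence=0.95):
    """Economic capital for credit risk: loss at the confidence level
    minus the expected (mean) loss of the simulated loss distribution."""
    ordered = sorted(losses)
    loss_at_confidence = ordered[int(confidence * len(ordered)) - 1]
    return loss_at_confidence - statistics.mean(losses)

# Hypothetical two-counterparty book: simulate default-driven losses.
random.seed(42)
exposures = [(30.0, 0.02), (12.0, 0.10)]  # (exposure $M, default probability)
losses = [
    sum(exp for exp, pd in exposures if random.random() < pd)
    for _ in range(20_000)
]
print(round(unexpected_loss(losses, 0.95), 2))
```

The expected loss here is about $1.8M (30 × 0.02 + 12 × 0.10); the capital number is the distance from that mean to the tail, which is the “unexpected” part the chapter emphasizes.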

Operative risk

Operative risk is an integral component of measuring capital adequacy. However, the method is not as well established as for the other components of economic capital. There is a wide range of methodologies for managing these risks. We define “operative risk” as the sum of operations and operational risk. Operations risk is the risk associated with delivering, producing, or storing physical energy products, including unplanned forced outage rates. Operational risk is the risk of direct or indirect loss resulting from inadequate or failed internal processes, people, and systems, or from external events. Note that operative risk in energy is inherently different from banking due to the presence of physical assets in a company’s portfolio.

Principally, a means for measuring operative risk is to create a “risk taxonomy” as a long-term solution, coupled with the development of an internal ratings-based scorecard as the first step toward including operative risk as part of economic capital. The scorecard approach assesses the effectiveness of the controls and mitigation techniques in place. The risk taxonomy is a system for organizing types of operative risks via a family tree, aggregating risks by various characteristics. Given the current embryonic state of measuring operational risk, we prefer a combination of measures, with emphasis on qualitative differentiation between companies. Once again, it is as important to understand “how” companies are going about this measurement as it is to understand the “results” they are calculating.
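A toy version of such a taxonomy-plus-scorecard pairing; the branch names, control-effectiveness scores, and weights are all hypothetical and serve only to show how ratings roll up the family tree.

```python
# Hypothetical operative-risk scorecard: each taxonomy node gets a control-
# effectiveness score (1 = weak controls, 5 = strong) and a weight reflecting
# its relative importance within its branch. Categories are illustrative only.

taxonomy = {
    "operations": {"forced outages": (3, 0.5), "fuel delivery": (4, 0.5)},
    "operational": {"process failures": (2, 0.4), "people": (3, 0.3), "systems": (4, 0.3)},
}

def category_score(nodes):
    """Weighted average control-effectiveness score for one taxonomy branch."""
    total_weight = sum(w for _, w in nodes.values())
    return sum(score * w for score, w in nodes.values()) / total_weight

for category, nodes in taxonomy.items():
    print(category, round(category_score(nodes), 2))
```

Low branch scores point to where controls are weakest, which is the qualitative differentiation between companies the text calls for.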

Aggregating market, credit, and operative risks

There are a number of means to combine market, credit, and operative risk to calculate total economic capital. Three common methodologies for aggregation are Simple Sum, Modern Portfolio Theory, and Monte Carlo Simulation. The first two methods imply a two-step process. First, the components of economic capital are calculated for each risk.

Ch12-I044949.qxd 5/23/06 12:14 PM Page 195

Page 193: Managing Enterprise Risk

196 Managing Enterprise Risk

Second, they are aggregated in an analytical form. While these approaches may seem simplistic, they are a practical necessity. The second methodology takes into account the correlation between risk buckets. However, estimating correlation at this level is difficult because of the limited availability of data. Finally, the third methodology attempts to produce a joint probability distribution for the three risk buckets through simulation. This methodology is the most comprehensive and consistent, but is the most costly and most difficult to implement.3
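The first two aggregation methods can be sketched in a few lines; the standalone capital figures and the correlation matrix are hypothetical (the Monte Carlo approach would instead simulate the joint loss distribution directly).

```python
import math

def simple_sum(capitals):
    """Simple Sum: treats risks as perfectly correlated; a conservative upper bound."""
    return sum(capitals)

def correlated_sum(capitals, corr):
    """Portfolio-theory-style aggregation: sqrt(sum_ij rho_ij * EC_i * EC_j)."""
    n = len(capitals)
    total = sum(corr[i][j] * capitals[i] * capitals[j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)

# Hypothetical standalone capitals for market, credit, and operative risk ($M).
ec = [400.0, 250.0, 150.0]
rho = [[1.0, 0.3, 0.1],
       [0.3, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
print(simple_sum(ec))               # 800.0
print(round(correlated_sum(ec, rho), 1))
```

With these inputs the correlation-aware total comes in well below the simple sum, which is exactly the diversification benefit the second methodology is meant to capture.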

Economic capital is determined at a desired confidence level in the probability distribution of the value of the business. We assume that economic capital can be approximated by the unexpected loss at the given confidence level.

Economic capital may be estimated using a range of methodologies with varying levels of robustness. The greater the methodology’s robustness, the greater will be the transparency, relevance, and applicability of the economic capital measure.4

The Components of Capital Adequacy for Liquidity

Liquidity adequacy comprises the right-hand side of the balance beam for capital adequacy as shown in Figure 12.1. Liquidity adequacy is the assessment of the sufficiency of all expected internal and external financial resources that are readily available to meet scheduled cash flow obligations, net of a measurement of the uncertainties resulting from cash flow risk factors. Liquidity adequacy must exist without substantial disposition of assets outside the ordinary course of business, restructuring of debt, externally forced revisions of its operations, or similar actions.

The liquidity framework provides an assessment of a company’s liquidity adequacy under both normal business conditions and stressed conditions. If liquidity is inadequate under normal conditions and the company does nothing to either reduce risk or increase the sources of liquidity, then trigger events such as a credit downgrade, reduced credit lines, and collateral calls might drive the company into financial distress. This may precipitate future requirements for liquidity. To meet this requirement, the company may be forced to engage in asset disposition or other activities to raise capital, which may worsen its cash-generating capability, leading to further rating cuts and collateral calls.

Liquidity may be viewed differently for regulated utilities that are afforded, by contract or regulation, cost recovery as a result of events that create a mismatch between their costs and rates. A potential short-term deficiency in revenues requires the load-serving entity to have adequate sources of capital to cover the time it takes to recover its excess costs. This could be a relatively short time (2–6 months) while unexpected costs are tracked through a revenue (rate) adjustment mechanism, but it could become longer if a revenue (rate)

3 A discussion of risk aggregation can be found in “Emerging Practices for Assessing Capital Adequacy” available at www.ccro.org.
4 For a thorough discussion of economic capital, see “Emerging Practices for Assessing Capital Adequacy” at www.ccro.org.


increase were needed. Regulated utilities should also identify and measure rate design risks that prevent them from balancing their revenues and costs.

Liquidity adequacy equals sources of liquidity less fixed payments and contingent liquidity requirements (Figure 12.4).

Liquidity adequacy is calculated by measuring internal funding requirements from all expected internal and external financial resources in meeting cash flow obligations or demands under normal and adverse market conditions, taking into account market, credit, and operative contingencies. Modeling liquidity is complex in that it is centered on unexpected change or variation in requirements. To the extent possible, it applies the consistent price propagation or price modeling process used in market and credit risk assessments, combined with financial relationships used in the construction of forward-looking financial cash flow statements. It is recommended that companies use both expected and extreme stress test scenarios in modeling liquidity requirements and disclose assumptions. We suggest implementing liquidity limits for contingent liquidity requirements as a means to monitor and report on liquidity risk. Finally, the importance of liquidity dictates measuring liquidity over a number of different time horizons, including both a short-term horizon (e.g., 30 days, 90 days) and a longer term (e.g., 1 year).
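The normal-versus-stressed comparison in Figure 12.4 can be sketched as follows; all dollar figures are hypothetical, and the stressed case simply widens cash flow at risk and adds collateral calls from trigger events such as a downgrade.

```python
# Liquidity adequacy = sources of liquidity, less fixed payments, less
# contingent liquidity requirements (Figure 12.4). Figures in $ millions,
# invented for illustration only.

def liquidity_adequacy(bank_lines, cash, expected_cfo,
                       fixed_payments, cfar, trigger_calls):
    sources = bank_lines + cash + expected_cfo
    contingent = cfar + trigger_calls
    return sources - fixed_payments - contingent

normal = liquidity_adequacy(bank_lines=300, cash=150, expected_cfo=500,
                            fixed_payments=600, cfar=120, trigger_calls=0)
stressed = liquidity_adequacy(bank_lines=300, cash=150, expected_cfo=350,
                              fixed_payments=600, cfar=250, trigger_calls=100)
print(normal, stressed)  # 230 -150: adequate in normal conditions, short under stress
```

Running both scenarios side by side is what surfaces the spiral the text warns about: adequacy under normal conditions can coexist with a stressed-case shortfall that trigger events would expose.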

Importance of the Framework for Assessing Capital Adequacy

The capital adequacy framework can be a very useful managerial tool. The value of the framework comes from improving stakeholder confidence, managing performance, and promoting transparency in the industry.

[Figure 12.4 diagrams the calculation: sources of liquidity (bank credit lines, cash and cash equivalents, and expected cash flow from operations (CFO)), less fixed payments (required fixed payments) and contingent liquidity (cash flow at risk (CFaR) and trigger events), leaves liquidity adequacy as an excess (or shortfall).]

Figure 12.4. Assessment of liquidity adequacy.


Improving stakeholder confidence

Financial stakeholders include shareholders, debt providers, rating agencies, analysts, and auditing companies, to mention a few. The confidence of these stakeholders is crucial because they effectively set the company’s cost of capital.

The capital adequacy framework is aligned with the needs of stakeholders because it can help management assess the long-run viability of the company’s business model. Through the processes of performing the risk analyses necessary for determining capital adequacy, management brings forth valuable information that can be used to bolster stakeholder confidence.

External auditors play an important role in creating stakeholder confidence, and the capital adequacy framework is aligned with audit standards. Through the concept of “going concern,” auditing standards emphasize the importance of demonstrating capital adequacy. One definition of going concern is an “… entity’s ability to continue to meet its obligations as they become due … without substantial disposition of assets outside the ordinary course of business, restructuring of debt, externally forced revisions of its operations or similar action.” The “going concern” concept is well aligned with the definition of capital adequacy in that both the liquidity and the economic value aspects are included.

In performing the capital adequacy analyses, a company must evaluate outcomes for its operations and financing activities. These evaluations are designed not simply to communicate a single “expected” value, but also to focus on the uncertainty around that expected value. This evaluation requires consideration of a range of alternate scenarios for market conditions, business environment, and the ultimate success of a business plan.

The capital adequacy framework can be used to evaluate a company’s growth plan with its capital and risk implications. A company’s internal growth plans must be integrated with externally driven, uncertain market conditions and an uncertain competitive environment. As a result, aggressive growth rate assumptions need to be tested by first examining their impacts on capital adequacy. Decision-makers can gain much more confidence when given well-defined scenarios and transparency into the potential downside implications of new business ventures and the ability of the firm to weather adversity.

Managing performance

Capital adequacy provides many insights into the internal management of performance. This framework for assessing capital adequacy contributes to performance management by bringing to the forefront the risks implicit in a project or business plan. As such, the framework is a useful starting point for eventually assessing a charge for the utilization of capital. While the determination of the cost of capital is beyond the scope of this chapter, managing a business for value requires that management account for or “charge” the various commercial activities for the capital they use. By explicitly charging for the cost of expected capital requirements and incorporating the cost of risk through a return measure, performance management for risk-adjusted value is possible.


Managing performance requires metrics that are scalable from the project-specific level to the enterprise-wide level. Risk management best practices promote use of these kinds of metrics for capital allocation and design of risk management controls (e.g., setting the appropriate VaR and/or credit limits). These metrics can be rolled up to accurately represent how various parts of the company fit within established limits and targets. This capital adequacy framework provides such scalable performance metrics. Enterprise-level capital adequacy requirements are often built from the bottom up, using similar calculations at an asset- or project-specific level. This means that risk-adjusted performance metrics are available from the bottom up for any part of the company.

The capital adequacy framework supports and promotes better decision-making under uncertainty. Management decisions become more aligned with the ability of the company to fund its businesses. Demonstrating these performance management practices lays the foundation for increasing stakeholders’ confidence in the company’s ability to succeed in the face of an uncertain business environment and should result in enhanced credit ratings. Furthermore, the relationship between an entity’s target credit rating and capital requirements should also be considered, given that differences in credit ratings result in different capitalization requirements.

The capital adequacy framework is actionable. Consider a company facing capital inadequacy (Figure 12.5). This capital adequacy framework will help management evaluate the effects of specific corrective actions. Management may consider changing the company’s capital structure (e.g., reduce debt by adding equity) or reducing economic capital requirements by changing the makeup of the business portfolio’s risk profile.

Promoting transparency in the industry

The capital adequacy framework may help promote transparency throughout the industry, as management’s use of the principles of this framework allows for a more complete assessment of the business and financial risks the company faces. It provides more insight into factors that drive uncertainties and their influence on short- and long-term financial results, which may be communicated with stakeholders.

Competitors of any size that understand the uncertainties they face can provide clear insights into their true economic capital requirements. Ultimately, regardless of the size of the company, this framework is a mechanism to provide many stakeholders with improved transparency into risk factors, risk management capability, and capital adequacy levels.

Conclusion

This chapter lays out a risk-based capital adequacy framework that energy companies, industry analysts, and other stakeholders can use to analyze a company’s ability to meet both near-term and long-term obligations, with a particular focus on merchant energy activities.

Capital adequacy is a potentially vital financial metric designed to assess a company’s short- and long-term outlook for financial health. It measures the availability of capital


necessary for a company to meet both its foreseen and unforeseen obligations in the short and long term. The main notion is that existing capital should be sufficient to enable a company to operate as a going concern through expected and unexpected business and economic cycles without disrupting operations and while continuing to support shareholder value creation.

Companies should embrace a “capital adequacy framework” in the course of performing their regular planning and analysis activities for the following reasons:

● First, management can use the framework to assess the long-run viability of a company’s business model. The capital adequacy framework is consistent with and supports the concept of a “going concern” and also shows support for anticipated growth rates. Management’s assessment brings forth valuable information that can be used to bolster stakeholder confidence.

● Second, management can use this framework for decision-making regarding capital allocation by bringing to the forefront risks implicit in a proposed project or business plan. The capital adequacy framework may be used as a starting point for eventually assessing a “charge” for the utilization of capital.

● Third, if a company is facing a capital shortfall, a capital adequacy framework will help management evaluate the effects of specific corrective actions. Management may consider changing the company’s capital structure (e.g., reducing debt by

[Figure 12.5 illustrates a capital shortfall: economic capital beyond balance-sheet capacity (net assets less debt at the target rating) creates a shortfall across the market, credit, operational, and operations risk classes. Adequacy can be restored by changing the capital structure, changing the risk profile driving economic capital, or accepting a reduced solvency confidence (a lower target rating).]

Figure 12.5. Restoring capital adequacy.


adding equity) or reducing economic capital requirements by changing the makeup of the business portfolio’s risk profile.

● Fourth, the capital adequacy framework may help promote transparency throughout the industry, as management’s use of the principles of this framework allows for a more complete assessment of the business and financial risks the company faces. It provides more insight into factors that drive uncertainties and their influence on short- and long-term financial results, which may be communicated with stakeholders.

Developing a capital adequacy framework for a firm leads to a better understanding of the organization’s risks by both internal and external constituencies, which ultimately leads to improved firm value.


CHAPTER 13

Full-Spectrum Portfolio and Diversity Analysis of Energy Technologies

Shimon Awerbuch

SPRU Energy Center, University of Sussex, Brighton, UK

Andrew Stirling

SPRU Energy Center, University of Sussex, Brighton, UK

Jaap C. Jansen

ECN Energy Research Centre of the Netherlands

Luuk W. M. Beurskens

ECN Energy Research Centre of the Netherlands

Abstract

Energy diversity and security have been evaluated using Stirling’s (1994, 1996, 1997b, 1998) multi-criteria diversity analysis (MDA) as well as more classical Markowitz mean-variance portfolio (MVP) theory (Awerbuch and Berger, 2003; Awerbuch, 2005). Each of these approaches is capable of producing an efficient frontier (EF) that shows optimal generating mixes – those that maximize performance (i.e. minimize cost) while minimizing risk or uncertainty (i.e. maximizing diversity). MDA covers the full spectrum of “incertitude,” reaching into areas where little is known about the range of possible outcomes, let alone their probabilities. However, MDA does not exploit statistical information that is available in certain parts of the risk spectrum where historic means, variances and co-variances of outcomes are known and can be used to make inferences about the future. MVP operates precisely in this space although, like other capital market models, its prescriptive value rests on the idea that the past is the best guide to the future. As such, MVP can be blind to unforeseen events that create future structural change.

Used in isolation, therefore, neither model offers a fully satisfying result. An MVP analysis of energy technologies tells us how to create generating portfolios with minimum cost


and risk (cost-variance), assuming historic ranges predict the future well enough. The solutions are fine – as long as decision makers are confident that market prices fully reflect all relevant considerations and that past values, ranges and variances provide a reliable guide to the future. Yet it is unlikely that relevant technology and portfolio attributes are fully reflected purely by the level and variability of their accounting costs. MDA recognizes that performance aspects are incompletely addressed in market prices and that concern about the future extends to so-called unknown risks, that is, possible future events of a kind or scale that has not occurred before.

This chapter articulates the two approaches to make them potentially applicable to a full range of decision-making contexts. Using a combined MVP–MDA optimization, we map the space between optimal MVP and MDA solutions for a given set of input assumptions. Placing 100% of the emphasis on MVP produces results based purely on market prices and historic trends. These may suffice for narrower financial purposes or short planning horizons. On the other hand, giving MDA a 100% weighting produces portfolios that remain efficient even under conditions of uncertainty, ambiguity or ignorance, where policy makers need to consider broader notions of performance and have less confidence in their knowledge of future events and their consequences.

The full-spectrum uncertainty model enables policy makers to evaluate how the EF changes as their confidence in historic-based statistical risk measures is reduced while uncertainty, ambiguity and ignorance are given more weight. The full-spectrum model provides a basis for systematically exploring sensitivity to changes in the underlying qualitative assumptions.

I. Overview: Introduction to Portfolio and Diversity Analysis

Standard MVP models are widely applied to the selection of optimal financial portfolios.1

MVP optimization has also been applied to capital budgeting and project valuation (Seitz and Ellison, 1995), valuing offshore oil leases (Helfat, 1988), energy planning (Bar-Lev and Katz, 1976; Awerbuch, 1995; Humphreys and McLain, 1998; Awerbuch, 2000; Awerbuch and Berger, 2003; Berger, 2003), climate change mitigation policies (Springer, 2003; Springer and Laurikka, undated) and optimizing real (physical) and derivative electricity trading options (Kleindorfer and Li, 2005).

Like the capital asset pricing model (CAPM) and other capital market models, MVP is conceptually forward looking, reflecting investors’ future assessment of market risk and return, where risk is measured as the periodic standard deviation (SD) of asset returns. Since future SD is unknown, MVP analysis invariably substitutes the SD of observed, historic returns.2 This procedure is useful as long as past market processes provide a reliable

1 Portfolio theory is based on the pioneering work of Nobel Laureate Harry Markowitz 50 years ago; see Fabozzi, Gupta and Markowitz (2002) and Hal Varian (1993).
2 Strictly speaking, future risk is an investor appraisal. It may be based on a number of unknown factors including observed historic variance. Fabozzi et al. (2002) characterize the policy maker’s assumptions regarding future expected values, SDs and correlations as a “hypothetical set of beliefs,” and do not presume unconditionally that only historic-based values should be used. Humphreys and McLain (1998) use a Generalized AutoRegressive Conditional Heteroscedasticity (GARCH) approach to reflect changing future variance and correlation expectations.


guide to the future, a presumption that Stirling (1995, 1996) has recently criticized in the case of energy generating portfolios.
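The mean-variance algebra underlying an MVP analysis of generating costs can be sketched as follows; the two-technology costs, SDs, and zero correlation are illustrative assumptions, not data from the chapter.

```python
import math

def portfolio_cost_risk(weights, costs, sds, corr):
    """Expected generating cost and cost risk (SD) of a technology mix,
    per standard mean-variance portfolio algebra."""
    expected = sum(w * c for w, c in zip(weights, costs))
    variance = sum(
        weights[i] * weights[j] * corr[i][j] * sds[i] * sds[j]
        for i in range(len(weights)) for j in range(len(weights))
    )
    return expected, math.sqrt(variance)

# Hypothetical two-technology mix: gas (cheaper, volatile fuel cost) and
# wind (dearer, near-zero cost variance, assumed uncorrelated with gas).
costs = [3.5, 4.5]   # cents/kWh, illustrative
sds = [0.8, 0.1]
corr = [[1.0, 0.0], [0.0, 1.0]]
for w_gas in (1.0, 0.7, 0.5):
    e, s = portfolio_cost_risk([w_gas, 1 - w_gas], costs, sds, corr)
    print(f"gas={w_gas:.0%}: cost={e:.2f}, risk={s:.2f}")
```

Tracing mixes from 100% gas toward more wind raises expected cost but cuts cost risk sharply, which is how the cost-risk efficient frontier for generating portfolios is traced out.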

Stirling argues that uncertainty, ambiguity and ignorance, rather than risk, dominate actual electricity investment and policy decisions, and conceptualizes a broader notion of diversification as a response to these more intractable knowledge-deficiencies. These notions form the basis of his adaptation to energy use of the Shannon–Wiener Diversity Index (Stirling, 1994) – an approach attracting some academic and policy interest (Lucas et al., 1995; Brower, 1995; DTI, 1995; Feldman, 1998; OECD, 2001; DTI, 2004; Jansen et al., 2004; Grubb et al., 2004; Suzuki, 2004). Stirling’s criticism of MVP is two-fold. First, he argues that finance-theoretic approaches are constrained by their rather narrow rate-of-return performance notions. These neglect social, environmental or other strategic issues that are incompletely incorporated into market prices. Additionally, he argues that MVP approaches are limited to a small, circumscribed region of the complete uncertainty or incertitude space, and that the full range of relevant technology performance attributes extends well beyond those addressed by ordinary accounting costs and rates-of-return. In addition, the future potential for “surprise”3 likely goes well beyond the more limited range of possible outcomes and likelihoods addressed by historic variance and covariance data.
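The Shannon–Wiener index itself is straightforward to compute over the shares of each option in a generating mix; the example mixes below are hypothetical.

```python
import math

def shannon_wiener(shares):
    """Shannon-Wiener diversity index H = -sum(p_i * ln p_i) over the
    shares p_i of each option in the mix; higher H = more diverse."""
    return -sum(p * math.log(p) for p in shares if p > 0)

concentrated = [0.9, 0.05, 0.05]           # one dominant fuel
balanced = [1 / 3, 1 / 3, 1 / 3]           # evenly spread mix
print(round(shannon_wiener(concentrated), 3))
print(round(shannon_wiener(balanced), 3))  # ln(3) ~ 1.099, the 3-option maximum
```

The index rewards both more options and a more even spread across them, which is the hedge against ignorance that Stirling’s diversity argument turns on.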

In view of these limitations of traditional MVP, Stirling (1997b, 1998) proposes an MDA approach that seeks to cover the full spectrum of incertitude, reaching into areas of strict uncertainty,4 where outcomes may be fully characterized, but where there is little confidence in the basis for assigning probabilities (Knight, 1921; Keynes, 1921; Luce and Raiffa, 1957; Morgan et al., 1990) (Figure 13.1). MDA also addresses ambiguity – a region of the

3 Surprise being defined as outcomes not previously envisioned as in the realm of possibility.
4 The term strict uncertainty is used to distinguish its accepted precise usage in economics (Knight, 1921) and decision analysis (Luce and Raiffa, 1957) from the more general colloquial connotations of ‘uncertainty’, for which Stirling (1998) proposes the term incertitude.

[Figure 13.1 (after Stirling, 2003) arrays knowledge about likelihoods against knowledge about outcomes: with some basis for probabilities, well-defined outcomes give risk and poorly-defined outcomes give ambiguity; with no basis for probabilities, well-defined outcomes give uncertainty and poorly-defined outcomes give ignorance.]

Figure 13.1. Knowledge about likelihoods and outcomes and the resulting type and degree of incertitude.


incertitude space where the characterization, partitioning (classification) or interpretation of the outcomes themselves is a matter of dispute (Figure 13.1). Ambiguity reflects the extent to which we disagree about what is important. Whether this is a reflection of divergent disciplinary perspectives, cultural values or institutional interests, it may remain a challenge even where there is little dispute regarding the relative likelihood of the outcomes. Under such conditions of divergent framings and plural preferences, social choice theory has demonstrated that the use of MVP and other conventional utility-maximizing approaches to identify a single “best” policy option is intrinsically problematic (e.g. see Arrow, 1963). Even where they cannot be resolved by analysis, diverse portfolios may satisfactorily accommodate such plural perspectives.

Beyond the challenges presented by uncertainty and ambiguity, there lies the state of ignorance, where planners face the challenge of unknown unknowns, knowledge gaps, indeterminacy and surprise (Keynes, 1921; Loasby, 1976; Smithson, 1989; Funtowicz and Ravetz, 1990; Wynne, 1992; Faber and Proops, 1994; Stirling, 2003). In addition to difficulties in definitively characterizing or partitioning the possibilities, there is a prospect of unexpected outcomes arising entirely outside the domain of prior possibilities. This is where MDA provides a means of hedging against ignorance by not “putting all the eggs in one basket”. MDA focuses attention on using the best available information to characterize the “eggs” and the “baskets.” The degree of diversification reflects a balance between confidence in this knowledge and aversion to ignorance.

A potential criticism of MDA, however, is that it does not exploit additional information available in the risk region of the full incertitude spectrum (Figure 13.1), where such information is sufficiently robust to have decision-making value. The diversity approach neglects the historic variances and co-variances of outcomes, even where these may provide some reliable guide to future performance.

MVP theory operates precisely in this space. Like other capital market models, MVP’s prescriptive value rests on the idea that the past is the best guide to the future. This is not to say that unexpected events will not happen – only that the effects of these events, including their impact on costs and other performance indicators, are already known from past experience.

The MVP approach defines portfolio risk as total risk – the sum of random and systematic fluctuations – measured as the SD of periodic returns. Portfolio risk therefore includes the random (and hence largely uncorrelated) fluctuations of many individual portfolio components, which have a wide variety of historic causes, including an Enron bankruptcy, a particular technological failure, bad news about a new drug, resignation of a company’s CEO or the outbreak of unrest in oil-producing parts of the world (Awerbuch and Berger, 2003). Total risk, it seems, is therefore the summation of the effects of all historic events, including countless historic surprises (ibid.).

It may be true, as Stirling posits, that no particular random event may ever be precisely duplicated. Nonetheless, at least in the case of financial investments, historic total variability is widely considered to be a useful indicator of future volatility, so that studying the past can help planners make inferences about the future (e.g. Ibbotson Associates, 1998, p. 27). And while the actual historic events may not be repeated, event-types can be expected to recur (ibid.).

Ch13-I044949.qxd 5/25/06 4:18 PM Page 205


Yet the idea that total historic variability of returns offers a useful guide to the future is probably more justifiable in the case of financial portfolios, where markets are highly efficient and assets infinitely divisible, since these conditions imply that portfolio assets can be sold the moment observed variances change or new information alters an investor's perception about their relevance to the future.

It is less clear that this justification is as reliable for MVP applications involving non-financial portfolios, especially portfolios of long-lived energy assets that trade in dynamic and imperfect markets.5 However, even if the robustness of historic variance as a predictor of future risk were not in question, MVP applications to energy technologies are still subject to the criticism that they are probably blind to a variety of possible unforeseen events, at least some of which are capable of creating sufficient future structural change so as to nullify the prescriptive value of historic variance measures.

We conclude therefore that, used in isolation, neither the MVP nor the diversity model may be seen to offer fully satisfying results. MVP analysis tells us how to create generating portfolios with minimum cost-variance, but only if the cost parameter dominates decision-making and historic variance ranges predict the future well enough. The solutions are useful for policymaking as long as decision makers are confident that past values, ranges and variances (and co-variances) are complete and will continue into the future. But what about uncertain, ambiguous or unknown risks – unforeseen, possibly low-probability future events that might produce outcomes with unknown or disputed consequences? Where the prospect of strict uncertainty, ambiguity and ignorance raises questions about the appropriate degree of confidence to place in historic data, Stirling's MDA becomes a potentially powerful means for developing efficient generating portfolios that attempt to reflect the entire technology performance and risk space – not just return (or cost) and its historic variance.

This research addresses the limitations of both classical MVP models and MDA as applied to portfolios of electricity generating technologies. It articulates the two models into what we characterize as full-spectrum risk analysis, capable of producing a set of user-weighted EFs that vary with relative degrees of confidence or ignorance about our knowledge concerning future outcomes. At one extreme, when the MVP component has a weight of 1.0, the frontier reflects only historic knowledge, as mediated by market processes. This is useful where there exists a high level of confidence that past relationships will hold, as would be the case for near-term corporate planning, where uncertainty, ambiguity and ignorance over future events – and corresponding possibilities for disagreement and surprise – are low. As planning horizons are extended, however, confidence about historic relationships and the sufficiency of accounting cost data diminishes and is replaced by the prospect of a range of uncertain, ambiguous, unknown and even unknowable events and outcomes. Over this range, MDA becomes increasingly useful as a basis for managing risks.

Ultimately, at the other extreme, MVP optimization results carry no weight and the EF relies purely on MDA. This produces a set of "efficient" portfolios characterized not in terms of financial risk data, but in terms of an explicit set of more broadly based performance criteria and judgments over the qualitative disparities between different options. This full-spectrum model enhances our understanding and communication of risk factors and their relevance to energy portfolio optimization. For example, MVP might specify a set of efficient portfolios whose composition differs from those prescribed by MDA's wider performance criteria and attributes of disparity. Analyzing the differences may yield new and powerful insights regarding the requisites for future energy security and diversity.

5Although Humphreys and McLain (Energy Journal, op. cit.) attempt to reflect changing variances and co-variances over time with a GARCH-based approach.

The full-spectrum model can be used to explore and map the transition space between the extremes: the optimal MVP- and MDA-based outcomes. In this chapter we map the efficient portfolio set and its changes over this range: that is, as uncertainty, ambiguity and ignorance rise, while confidence in the validity of financial data and historic relationships dwindles.

Inputs for this initial illustrative analysis are intentionally simple. We consider two fossil technologies – coal and gas – and one renewable – wind. In future applications we hope to specify the problem as a full multi-criteria optimization. The full-spectrum model presented in this chapter, however, allows policy makers to evaluate how the shape and technology make-up of the EF changes as confidence in historic statistical cost and risk measures is reduced (given less weight) while aversion to strict uncertainty, ambiguity and ignorance is increased (given more weight).

II. A Full-Spectrum Portfolio and Diversity Model

MVP theory

Portfolio selection is generally based on MVP theory, developed by Harry Markowitz (1952). It enables the creation of minimum-variance portfolios for any given level of expected (mean) return. Such efficient portfolios therefore minimize risk, as measured by the SD of periodic returns. The idea is that while investments are unpredictable and risky, the co-movement or covariance of returns from individual assets can be used to help insulate portfolios, thus creating higher returns with little or no additional risk.

Portfolio theory was initially conceived in the context of financial portfolios, where it relates E(rp), the expected portfolio performance6 or return, to σp, the total portfolio risk, defined as the SD of expected returns, although historic returns and risks are typically used in practice. The relationship is illustrated below using a simple, two-stock portfolio.

Expected portfolio performance or return, E(rp), is simply the weighted average of the individual expected returns E(ri) of the two securities:

E(rp) = X1 × E(r1) + X2 × E(r2)   (13.1)

where

E(rp) is the expected portfolio return;

X1, X2 are the fractions of the assets 1 and 2 in the portfolio; and

6In the case of perfect markets, expectations are assumed to be unbiased, but not error-free.


E(r1), E(r2) are the expected holding period returns7 for assets 1 and 2; specifically, the mean of all possible outcomes, weighted by the probability of occurrence. For asset 1 this can be written E(r1) = Σ pi × ri, where pi is the probability that outcome i will occur and ri is the return under that outcome.

Portfolio risk, σp, is also a weighted average of the risks of the two securities, but is tempered by the correlation coefficient between the two returns:

σp = SQRT{X1² × σ1² + X2² × σ2² + 2 × X1 × X2 × ρ12 × σ1 × σ2}   (13.2)

where

ρ12 is the correlation coefficient between the two return streams,8 and

σ1 and σ2 are the SDs of the holding period returns to assets 1 and 2.

The correlation coefficient, ρ12, represents a measure of diversity. Smaller correlation among portfolio components creates greater diversity, which serves to reduce portfolio risk. Portfolio risk rises as its diversity declines.

Classical MVP optimization maximizes portfolio performance or return at any given level of portfolio risk, as measured by the SD of portfolio holding period returns. MVP optimization also assures that risk is minimized – i.e. diversity, or absence of correlation, is maximized – at any given performance level. MVP principles can be applied to energy technologies, where return can be expressed as kWh per unit cost – e.g. kWh/US cent.9 This is the inverse of the traditional busbar or kWh unit-cost measure. Generating portfolio performance is therefore defined in terms of output per unit cost, a traditional engineering-oriented cost-performance or efficiency measure.
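Equations (13.1) and (13.2) can be checked with a few lines of code. The sketch below is ours, not the chapter's: the gas and coal return and SD figures are the chapter's own (quoted in the Section III illustration), while the 60/40 mix and the two correlation cases are hypothetical.

```python
from math import sqrt

def portfolio_return(x1, er1, x2, er2):
    """Equation (13.1): expected portfolio return as a weighted average."""
    return x1 * er1 + x2 * er2

def portfolio_risk(x1, s1, x2, s2, rho12):
    """Equation (13.2): portfolio SD, tempered by the correlation rho12."""
    return sqrt(x1**2 * s1**2 + x2**2 * s2**2 + 2 * x1 * x2 * rho12 * s1 * s2)

# Hypothetical mix: 60% gas (return 0.29 kWh/cent, SD 0.115),
# 40% coal (return 0.23 kWh/cent, SD 0.098)
er = portfolio_return(0.6, 0.29, 0.4, 0.23)            # 0.266 kWh/cent
risk_lo = portfolio_risk(0.6, 0.115, 0.4, 0.098, 0.0)  # uncorrelated returns
risk_hi = portfolio_risk(0.6, 0.115, 0.4, 0.098, 1.0)  # perfectly correlated
# Lower correlation (greater diversity) yields lower portfolio risk
assert risk_lo < risk_hi
```

With ρ12 = 1 the portfolio SD collapses to the weighted average of the two SDs; any smaller correlation reduces risk without reducing expected return – the diversification effect the chapter builds on.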

7The financial holding period return is defined as (Seitz and Ellison, 1995, p. 225): r = (EV − BV + CF)/BV, where EV is the ending value, BV the beginning value and CF the cash inflow during the period.
8The covariance of two return streams can be calculated as COV12 = ρ12 × σ1 × σ2. Equation (13.2) might therefore as well be written as σp = SQRT{X1² × σ1² + X2² × σ2² + 2 × X1 × X2 × COV12}.
9A demonstration is given in Berger (2003).

MDA

Like MVP, MDA rests on optimization concepts, although it adopts a broader and more heuristic decision-analytic framework (Stirling, 1997b). By analogy with the minimization of portfolio risk in MVP, MDA seeks to minimize portfolio uncertainty, ambiguity and ignorance by maximizing diversity at any given level of portfolio performance. The differences between MVP and MDA lie primarily in the way performance (return) and uncertainty (risk) are conceptualized and measured. Applied to generating assets, MVP uses singular measures for performance and risk: performance is expressed in terms of historic generating costs; risk deals with the statistical variability of those costs. MDA, on the other hand, treats portfolio performance as a vector of multi-dimensional properties, which, in addition to economic efficiency measures, may also include broader environmental or strategic factors.10 Portfolio performance is now defined as:

Pport = Σi Xi × Σj Wj × ri,j   (13.3)

where

Xi is the fraction of asset i in the portfolio

Wj is a weighting scheme reflecting the relative priority attached to different performance factors, such that ΣWj = 1, and

ri,j is the performance rank for technology i and cost performance factor j. In this analysis ri,j is a normalized rank-ratio measure, such as is routinely used in decision analysis.11
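Equation (13.3) is a share-weighted sum of criteria-weighted performance ranks. A minimal sketch (ours, not the chapter's), using the illustrative weights and ranks that appear later in Table 13.1 and skipping the normalization described in footnote 11:

```python
WEIGHTS = {"econ": 0.5, "co2": 0.2, "land": 0.1, "security": 0.2}  # Table 13.1
RANKS = {  # higher rank = greater cost, i.e. poorer performance (Table 13.1)
    "coal": {"econ": 4, "co2": 4, "land": 2, "security": 2},
    "gas":  {"econ": 3, "co2": 2, "land": 1, "security": 4},
    "wind": {"econ": 4, "co2": 1, "land": 4, "security": 1},
}

def p_port(shares):
    """Equation (13.3): P_port = sum_i X_i * sum_j W_j * r_ij."""
    return sum(x * sum(WEIGHTS[c] * RANKS[tech][c] for c in WEIGHTS)
               for tech, x in shares.items())

# Per-technology weighted scores work out to coal 3.4, gas 2.8, wind 2.8
score = p_port({"gas": 0.45, "coal": 0.37, "wind": 0.18})
```

A real application would first normalize both the ranks and the weights (footnote 11); the raw Table 13.1 values are used here only to keep the sketch short.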

In contrast to MVP, the diversity analytic approach represents an attempt to address the full spectrum of different forms of incertitude. As previously discussed (Table 13.1), it applies to ambiguity, where outcomes are poorly defined but historic probabilities apply, and then extends to regions where probability data themselves do not apply – e.g. to uncertainty, where outcomes are well defined, and to ignorance, where outcomes are unknown and the possibility of surprise heightens (Stirling, 1998, 2003).12 MDA addresses these challenges by focusing directly on portfolio diversity as a means to hedge against uncertainty, accommodate ambiguity and build resilience against ignorance. Drawing on concepts from decision analysis, information theory, taxonomy and evolutionary ecology, diversity is defined as a function of:

(i) The disparities between options – characterized in terms of whatever properties are judged to represent the salient differences between options.

(ii) Portfolio variety as represented by the number of options.

(iii) The balance in the relative contributions of the different options.

10We observe that environmental costs, in the form of adders, can readily be attached to the MVP optimization, although Stirling (1997b) has argued that this should not be done.
11The normalization of performance attributes is performed on scores, not ranks. High score values are positive. The expression is S = [s − MIN{s}]/[MAX{s} − MIN{s}], where S is the normalized score for option i under criterion c, s the assigned score for option i under criterion c, and MIN{s}/MAX{s} the minimum/maximum assigned option score under criterion c. The overall performance ranks are then computed from the normalized scores as R = SUM{W × S}, where S is the normalized score for option i under criterion c and W the normalized weighting for criterion c. For unstructured assignment of weightings, the normalization is similar: W = w/SUM{w}, where W is the normalized weighting for criterion c and w the assigned weighting for criterion c.
12Which is similar to the Sloan Foundation KUU concept – the Known, Unknown and Unknowable.

Under any view, disparity is the principal, necessarily qualitative, determinant of diversity (Stirling, 1998). As such, disparity represents the salient differences between the contending options. Such judgments explicitly or implicitly underlie the categorization of options in any form of analysis and will serve to alter the notions of disparity. For instance, classifying a set of options in terms of "coal, oil, gas and renewables" reflects different disparity notions than "fossil fuels, wind, solar, biomass and geothermal." The two classification sets may produce significantly different results. Using the concept of disparity, MDA makes these factors explicit. It approaches the characterization of options directly in terms of whatever attributes are held to be salient disparity characteristics. For any given set of such judgments, variety and balance reduce to simple quantitative factors. Variety is an integer; balance, a set of fractions that sum to one.13 In these terms (by analogy with risk minimization in MVP), portfolio incertitude can be conceived as the reciprocal of its multi-criteria diversity and expressed as:

Uport = 1/Dport = 1/Σ Xi × Xj × di,j

where

Dport is portfolio diversity,

Xi, Xj are, as before, the fractions of assets i and j in the portfolio, and

di,j is the measure of disparity between technology i and technology j.

The di,j are measured as the n-dimensional Euclidean distances between the disparity attributes of the alternative technologies.

Diversity analysis: an illustration

Consider a 3-technology portfolio consisting of coal, gas and wind generation, whose performance can be measured in terms of four criteria: economic efficiency, environmental (CO2) performance, land-use impact and energy security impact.

Now portfolio performance can be written as:

Pport =
  Xcoal × (Wcost × rcoal,econ.efficiency + WCO2 × rcoal,CO2 + Wland × rcoal,land + Wsec × rcoal,sec)
+ Xgas × (Wcost × rgas,econ.efficiency + WCO2 × rgas,CO2 + Wland × rgas,land + Wsec × rgas,sec)
+ Xwind × (Wcost × rwind,econ.efficiency + WCO2 × rwind,CO2 + Wland × rwind,land + Wsec × rwind,sec)

13Detailed analysis of technical approaches to diversity in a range of disciplines (Stirling, 1998) yields a robust integration of these factors into a single novel heuristic metric of diversity. This takes the simple form of the sum of the disparities over all pairs of options, weighted by the proportional contributions to the portfolio of each pair of options, as further discussed subsequently. It can be shown that this index displays all the desirable properties of a diversity index, rising monotonically with variety, balance and disparity (Stirling, 1998).


Table 13.1 provides a set of illustrative values of r, the performance attributes, and their weightings, W. Higher numbers indicate a greater cost – i.e. poorer technology performance – along a particular criterion. As is standard in multi-criteria analysis, these values are normalized to ensure consistency. The values indicate that the economic efficiency of gas is "better" than that of wind or coal. Similarly, gas is "better" in terms of land use but worse on the energy security measure.

Table 13.1 also displays a particular set of illustrative base-case weightings for the various performance criteria. The particular values assigned here embody the idea that external costs (last three columns) are collectively similar in importance to private costs (first column), an assumption that broadly reflects the geometric mean for an extremely varied empirical literature (Sundqvist et al., 2004). Within this range, CO2 and energy security objectives are equally important, and twice the priority of land use. However, as with the performance attributes themselves, the particular values are highly schematic and intended for expositional purposes. An actual diversity analysis exercise would need to validate these inputs through intensive deliberative consultation with appropriate project participants.
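The min–max normalization referred to here, and spelled out in footnote 11, can be sketched as follows; the function names are ours, not the chapter's:

```python
def normalize_scores(scores):
    """Footnote 11: S = (s - MIN{s}) / (MAX{s} - MIN{s}) for each assigned score s."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def normalize_weights(weights):
    """Footnote 11: W = w / SUM{w}, so the criterion weights sum to one."""
    total = sum(weights)
    return [w / total for w in weights]

# Normalizing the economic efficiency column of Table 13.1 (coal 4, gas 3, wind 4)
assert normalize_scores([4, 3, 4]) == [1.0, 0.0, 1.0]
assert abs(sum(normalize_weights([0.5, 0.2, 0.1, 0.2])) - 1.0) < 1e-9
```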

Portfolio diversity

Portfolio exposure to ambiguity, uncertainty and ignorance is characterized by a portfolio diversity index, which is based on the sum of disparities between different pairs of portfolio options, weighted by the proportional reliance on each pair. As discussed above, the diversity of any portfolio consisting of the three options – coal, gas and wind – will depend on judgments over the salience of the strategic differences between them. For the purposes of this illustrative exercise, we assume that technology disparity can be expressed in terms of the following set of strategic technology disparity attributes (ai) that relate potential technology exposure to a number of factors:

(i) Sensitivity to climate change restrictions.

(ii) Vulnerability to disruption of global supply chains.

(iii) Prospects for domestic industrial disruption.

Table 13.1. An illustrative view of multi-criteria performance (r).

Performance dimension:

          Economic     Environment              Energy
          efficiency   (CO2)         Land-use   security
Weight    0.5          0.2           0.1        0.2
Coal      4            4             2          2
Gas       3            2             1          4
Wind      4            1             4          1


(iv) Exposure to political instability in source countries.

(v) Infrastructure vulnerability to terrorist attack.

(vi) Potential for unanticipated technological failures.

Table 13.2 provides an illustrative set of low-resolution, schematic strategic technology disparity attributes (ai), where the magnitude of the number reflects the degree of exposure to that particular type of surprise: that is, the larger the number, the greater the exposure. As with the performance attributes, these values are also normalized in analysis, to ensure consistency.

As with the performance criteria (Table 13.1), Table 13.2 includes a set of weightings to reflect the relative importance of each disparity attribute. Again, these are subjective and contingent, and intended simply for illustrative purposes. They represent a two-tier importance ordering that prioritizes climate, supply chains, political instability and technological failures equally over industrial disruption and infrastructure vulnerabilities. In any full diversity analysis, both the weightings and the attribute values – like the performance criteria – would also be validated through consultative deliberation with project participants. In any event, the subjective nature of these parameters means that their value lies primarily in providing a basis for comprehensive sensitivity analysis.

Portfolio diversity, Dport, is a function of the multi-attribute disparities (di,j) between each pair of options i, j and the proportional contributions made by each option to the portfolio (Equation 13.3) (Stirling, 1998). The di,j represent the Euclidean distances between the coordinates of each option in the multi-dimensional attribute space. Portfolio diversity can now be written as:

Dport = Σi,j Xi × Xj × di,j
      = Xcoal × Xgas × dcoal,gas + Xcoal × Xwind × dcoal,wind + Xgas × Xwind × dgas,wind

The di,j Euclidean distances between technologies i and j are computed as follows:

di,j = SQRT{Σn Wn × (ai,n − aj,n)²}

where ai,n is the disparity attribute for technology i and strategic attribute n, and Wn is the weighting of attribute n (Table 13.2).

Table 13.2. An illustrative view of multi-attribute disparity (ai).

Technology disparity attributes:

          Climate   Supply    Industrial   Political     Infrastructure   Technology
                    chains    disruption   instability   vulnerability    failure
Weight    0.2       0.2       0.1          0.2           0.1              0.2
Coal      1         1         1            0.5           0.5              0
Gas       0.5       0.5       0.5          1             1                0.5
Wind      0         0         0            0             0                1


For example, the disparity between coal and gas is:

dcoal,gas = SQRT{Wclimate × (acoal,climate − agas,climate)² + Wsupply-chain × (acoal,supply-chain − agas,supply-chain)² + … + Wtech-failure × (acoal,tech-failure − agas,tech-failure)²}
          = SQRT{0.2(1 − 0.5)² + 0.2(1 − 0.5)² + 0.1(1 − 0.5)² + 0.2(0.5 − 1)² + 0.1(0.5 − 1)² + 0.2(0 − 0.5)²}
          = SQRT{0.25} = 0.5

By comparison, the disparity between coal and wind is:

dcoal,wind = SQRT{0.2(1 − 0)² + 0.2(1 − 0)² + 0.1(1 − 0)² + 0.2(0.5 − 0)² + 0.1(0.5 − 0)² + 0.2(0 − 1)²}
           = SQRT{0.775} ≈ 0.88

As would be expected, the disparity between coal and gas is smaller than the disparity between coal and wind.
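The pairwise distances and the resulting diversity index can be reproduced in a few lines. This is an illustrative sketch (function names are ours) using the Table 13.2 weights and attribute values; it confirms dcoal,gas = 0.5:

```python
from math import sqrt

W = [0.2, 0.2, 0.1, 0.2, 0.1, 0.2]  # attribute weights, Table 13.2
A = {                                # disparity attributes, Table 13.2
    "coal": [1, 1, 1, 0.5, 0.5, 0],
    "gas":  [0.5, 0.5, 0.5, 1, 1, 0.5],
    "wind": [0, 0, 0, 0, 0, 1],
}

def disparity(i, j):
    """Weighted Euclidean distance d_ij between two technologies' attributes."""
    return sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(W, A[i], A[j])))

def d_port(shares):
    """Portfolio diversity: sum of pairwise disparities weighted by the shares."""
    techs = list(shares)
    return sum(shares[i] * shares[j] * disparity(i, j)
               for n, i in enumerate(techs) for j in techs[n + 1:])

assert abs(disparity("coal", "gas") - 0.5) < 1e-9
# Wind is the most disparate option, so adding it raises portfolio diversity
assert d_port({"gas": 0.31, "coal": 0.25, "wind": 0.44}) > \
       d_port({"gas": 0.5, "coal": 0.5, "wind": 0.0})
```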

Full-spectrum portfolio analysis

Whilst MVP analysis is restricted to "risk" (σp), a "full-spectrum" analysis extends attention to broader notions of uncertainty, ambiguity and ignorance. By analogy with MVP, such full-spectrum portfolio uncertainty (U*port) can be defined for present purposes14 as an absence of multi-attribute diversity, or 1/Dport. We express this full-spectrum portfolio uncertainty as:

U*port = φ × (MVP risk) + (1 − φ) × (1/multi-attribute diversity)
       = φ × σp + (1 − φ) × 1/Dport

where φ is a weighting parameter with values 0–1.0.

φ = 1.0 yields efficient portfolios based entirely on MVP risk, while φ = 0 yields solutions that consider only multi-attribute diversity.

Full-spectrum portfolio performance can be written as:

P*port = φ × (MVP return) + (1 − φ) × (multi-criteria performance)
       = φ × E(rp) + (1 − φ) × Pport
       = φ × {X1 × E(r1) + X2 × E(r2)} + (1 − φ) × Σ Xi × Wj × ri,j


14As discussed elsewhere (Stirling, 1994, 1998), diversification, even under this broader understanding, is only one strategic response to incertitude. Others include precaution, adaptability, flexibility, robustness and resilience (Stirling, 1999). However, for present purposes, the analogy with MVP risk makes this acceptable as a working definition.


The full-spectrum set of EFs can be found by varying the values of the weight, φ, for each chosen level of P*port:

Min: U*port = φ × σp + (1 − φ) × 1/Dport,   for feasible (reasonable) values of P*port

It is important to note that the parameter φ simultaneously broadens out notions both of incertitude and of performance – as characterized under a particular perspective. This can then be repeated for a number of different perspectives. Under the MDA analysis, performance notions are extended beyond the narrow financial considerations represented in MVP. Cost is not double counted. At intermediate points, a balance is struck between narrower and broader notions of uncertainty and performance. A more sophisticated model might treat the weighting of performance and uncertainty separately. However, for present illustrative purposes, the single parameter φ captures the central question of the "breadth" represented by the two contrasting methodologies.
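As a minimal sketch (ours, not the chapter's), the two blended objectives reduce to one-line functions; the risk and diversity figures in the example are hypothetical:

```python
def u_star(phi, sigma_p, d_port):
    """Full-spectrum uncertainty: U* = phi * sigma_p + (1 - phi) * (1 / D_port)."""
    assert 0.0 <= phi <= 1.0
    return phi * sigma_p + (1.0 - phi) * (1.0 / d_port)

def p_star(phi, mvp_return, mc_performance):
    """Full-spectrum performance: P* = phi * E(r_p) + (1 - phi) * P_port."""
    assert 0.0 <= phi <= 1.0
    return phi * mvp_return + (1.0 - phi) * mc_performance

# Hypothetical portfolio with MVP risk 0.07 and diversity index 0.23:
# phi = 1 recovers the pure MVP objective, phi = 0 the pure diversity objective
assert u_star(1.0, 0.07, 0.23) == 0.07
assert abs(u_star(0.0, 0.07, 0.23) - 1 / 0.23) < 1e-12
```

Sweeping φ from 1.0 down to 0 and re-minimizing U* at each chosen P* level traces out the family of frontiers described above.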

III. Illustrative Results

As a first iteration, we solve the full-spectrum model as a constrained optimization. We do not optimize the articulated model, which we denote as Min: {MVP sigma + 1/(DA diversity)}. Rather, we examine P* and U* along the MVP-based EF. This is conceptually similar to pre-emptive ordered goal programming, a form of mathematical programming (e.g. see Charnes and Cooper, 1964). Pre-emptive ordered goal programming satisfies the first, or most important, objective as fully as possible before proceeding to subsequent objectives. The solutions are optimal in some sense, though not necessarily efficient (e.g. see Awerbuch, 1976). Our approach first satisfies the objective of MVP efficiency, and then goes on to meet the subsequent diversity objectives. The results can be characterized as diversity optimization subject to an MVP efficiency constraint. All the solutions presented in this section lie on the MVP EF.
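A toy version of this two-stage procedure can be sketched as a grid search: enumerate candidate mixes, keep the MVP-non-dominated set (the EF), then maximize diversity over that set. The returns and the gas and coal SDs below are the chapter's figures; the wind SD, the correlations and the 2% grid are our assumptions, made only so the sketch runs end to end.

```python
from math import sqrt

TECHS = ["gas", "coal", "wind"]
RET = {"gas": 0.29, "coal": 0.23, "wind": 0.17}    # kWh/cent (chapter values)
SD = {"gas": 0.115, "coal": 0.098, "wind": 0.048}  # wind SD is an assumed value
CORR = {("gas", "coal"): 0.3}                      # assumed; other pairs 0

def rho(i, j):
    if i == j:
        return 1.0
    return CORR.get((i, j), CORR.get((j, i), 0.0))

def mvp_risk(x):  # Equation (13.2), generalized to three assets
    return sqrt(sum(x[i] * x[j] * rho(i, j) * SD[i] * SD[j]
                    for i in TECHS for j in TECHS))

def mvp_return(x):  # Equation (13.1), generalized
    return sum(x[t] * RET[t] for t in TECHS)

# Disparity-based diversity index built from Table 13.2
W = [0.2, 0.2, 0.1, 0.2, 0.1, 0.2]
A = {"coal": [1, 1, 1, 0.5, 0.5, 0], "gas": [0.5, 0.5, 0.5, 1, 1, 0.5],
     "wind": [0, 0, 0, 0, 0, 1]}

def diversity(x):
    pairs = [("gas", "coal"), ("gas", "wind"), ("coal", "wind")]
    return sum(x[i] * x[j] * sqrt(sum(w * (a - b) ** 2
               for w, a, b in zip(W, A[i], A[j]))) for i, j in pairs)

# Stage 1 (MVP efficiency): enumerate mixes on a 2% grid of the share simplex
# and keep only the non-dominated (risk, return) points
mixes = [{"gas": g / 50, "coal": c / 50, "wind": (50 - g - c) / 50}
         for g in range(51) for c in range(51 - g)]
pts = [(mvp_risk(x), mvp_return(x), x) for x in mixes]
ef = [(s, r, x) for s, r, x in pts
      if not any(s2 <= s and r2 >= r and (s2 < s or r2 > r)
                 for s2, r2, _ in pts)]

# Stage 2 (pre-emptive): maximize diversity subject to MVP efficiency
best = max(ef, key=lambda p: diversity(p[2]))
```

This mirrors the chapter's logic: the diversity objective is only consulted among portfolios that already satisfy the MVP efficiency constraint.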

Figure 13.2 shows the risk-return relationship for portfolios consisting of three generating technologies: gas, coal and wind. A portfolio consisting of 100% gas generation has a historic SD (of holding period returns) of about 0.115 and a cost of 1/0.29 = $0.034/kWh (Table 13.3). A 100% coal portfolio exhibits less risk – 0.098 – and costs 1/0.23 = $0.043/kWh. Wind, by comparison, costs 1/0.17 = $0.058 (Table 13.3). Portfolio A represents the Year-2000 US capacity mix (15% gas, 85% coal, 0% wind), expressed in terms of gas, coal and wind only.15
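The cost figures here are just reciprocals of the kWh/cent returns, converted to dollars; a one-line helper (ours, not the chapter's) makes the arithmetic explicit:

```python
def cost_per_kwh(return_kwh_per_cent):
    """Convert an MVP 'return' in kWh/cent into a generating cost in $/kWh."""
    cents_per_kwh = 1.0 / return_kwh_per_cent
    return cents_per_kwh / 100.0  # 100 cents to the dollar

assert round(cost_per_kwh(0.29), 3) == 0.034  # 100% gas
assert round(cost_per_kwh(0.23), 3) == 0.043  # 100% coal
```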

Mix A is sub-optimal from a risk-return perspective. There exists an infinite number of portfolios that lie above it and to the left. These portfolios lie in the region bounded by the triangle ANS. Compared to A, any portfolio in this region will show higher expected returns (lower cost) and lower risk, which represents a welfare improvement. No improvement is possible beyond the EF, along which return can be improved (i.e. cost can be reduced) only by accepting higher risk. In terms of national or even corporate policy, it is not essential that portfolios lie on the EF. Desirable mixes that meet particular policy objectives not reflected in this analysis may lie inside the EF and would be perfectly acceptable.

15Nuclear, oil and other sources are omitted.

In Figure 13.2, Portfolio S exhibits the same risk as A, but lies on the EF. No higher-return portfolio exists at this level of risk. Portfolio N has the same expected return as A but exhibits lower risk. As we move along the EF from S to N, the share of wind rises from 0% to 18% of the mix, replacing primarily gas. Portfolio N demonstrates a key point: although assumed wind-based generating costs are 70% higher than gas, it is possible to add wind to Portfolio A without increasing overall generating cost.16 Moreover, the move from Mix A to Mix N reduces risk by 23%, from 8.5% to 6.6%, without affecting cost. Alternatively, Portfolio S reduces generating cost by 12% without affecting risk. This "free lunch," widely ignored in public and corporate policymaking, is a result of the so-called portfolio effect (see Awerbuch and Berger, 2003).

Figure 13.2. Risk and return for U.S. generating mix. (Risk, the portfolio SD, runs from 0% to 14%; portfolio return, in kWh/cent, from 0.14 to 0.32. Plotted: the efficient frontier; the 100% coal, 100% gas and 100% wind portfolios; Mix A, US-2002 (15% gas, 85% coal, 0% wind); Mix S (66% gas, 34% coal, 0% wind); Mix N (45% gas, 37% coal, 18% wind); and Mix R, the maximum-diversity mix (31% gas, 25% coal, 44% wind).)

Table 13.3. Assumed technology costs for MVP analysis.

Technology   Cost/kWh
Gas          $0.034
Coal         $0.043
Wind         $0.058

16This result is discussed in Awerbuch and Berger (2003).


Moves along the EF, while they may satisfy the different preferences of various clienteles, create no net gains. Generating mixes at the upper end of the EF are riskier but cost less. Some investors (and utility customers) may prefer such mixes. Others would prefer mixes that lie at other locations along the EF. It cannot be said that Portfolio S is superior to N; however, either is better than A.

Mix R (Figure 13.2) represents the maximum-diversity mix, as more fully discussed subsequently. It reflects uncertainty, ambiguity and surprise, and hence better insulates against unforeseen events. Portfolio R raises cost: its generating cost is 1/0.22 = $0.045, as compared to 1/0.24 = $0.042 for Portfolio A, the Year-2000 mix. Insulating against uncertainty and surprise therefore increases cost by about 0.045 − 0.042 = $0.003, or 7%. This is also the additional direct cost of increasing the wind share from 18% to 44%, not counting any additional system costs.17

Mixes along the EF consist of almost 100% wind at the lower end and 100% gas at the upper, riskier end. Figure 13.3 shows the changes in the composition of the portfolio mixes along the EF. The costliest portfolio (left-hand side) consists of 96% wind, 2% gas and 2% coal. As gas and coal are added, the share of wind decreases and generating costs decline (return increases). The share of coal begins to decline between Mixes N and S. At the low-cost/high-risk end of the EF the portfolio mix is 100% gas.

17We acknowledge that high wind shares may impose additional costs on the system due to intermittency, although in the UK these costs have been estimated as relatively small, on the order of £0.003–£0.005 (Dale et al., 2003). Moreover, Awerbuch (March, 2004) argues for a set of decentralized network protocols with discrete load matching that enables individual load-types to efficiently deal with resource intermittency. The foregoing notwithstanding, a more detailed MDA exercise could address the performance of all options as a resource curve, showing variations in performance with system penetration. This is especially important for renewable options, given the sensitivity of performance to portfolio effects and the potential exhaustion of more favorable sites. Further portfolio effects can also be accounted for in a more elaborate exercise through the weighting of the disparity distances to reflect positive and negative interactions.

Figure 13.3. Portfolio mix along the EF. (Technology shares (%) of gas, coal and wind plotted against portfolio return, from 0.173 to 0.294 kWh/cent, and portfolio MVP risk, from 1.4% to 11.4%; Mixes S, R and N are marked.)

Diversity analysis results

Figure 13.4 shows the shares of wind, coal and gas in the optimal MVP mix along the EF (as previously shown in Figure 13.3), and also superimposes MVP risk and return, along with DA diversity and performance results, onto the efficient MVP mixes. The solid (red) lines show the MVP results, while the dotted (blue) lines show the DA outcomes. These outcomes are normalized. For example, MVP return begins at about 58%. This is the ratio of the initial EF return of 0.17 (Figure 13.2) to the highest EF return, 0.29 (0.17/0.29 = 58%).

MVP risk begins at 12% (1.4/11.5, Figure 13.2) and reaches its maximum at the right-hand side of Figure 13.4. The MVP risk and return lines represent the EF re-plotted in this normalized fashion. The two MDA lines are normalized in a similar fashion so that they can be shown on the same axis. The x-scale in Figure 13.4 is no longer uniform, and the locations of portfolios R, N and S are shown for reference.

MDA performance drops along with the share of wind. Given the assumed performanceattributes, the portfolio performs best at the left-hand side where it consists of nearly allwind. At this point the sum of the four factors – economic efficiency, environment, land-useand energy security are maximized according to the weighting scheme imposed. Portfolioperformance hits its minimum between N and S, at the point where coal has a maximum

Figure 13.4. MVP and diversity analysis results along the EF (technology shares of wind, gas and coal, normalized MVP risk and return, and DA diversity and performance, with portfolios P, R, N, S and Q marked).


share of the mix. Beyond that point, performance begins to improve again, reaching 25% of its maximum potential at Portfolio Q, which consists of 100% gas.

Portfolio diversity

Portfolio diversity (Dport) reaches its maximum with the maximum-diversity mix R, which contains a higher share of wind, because the disparity attributes (Table 13.2) render wind more disparate from gas and coal than those two options are from each other.

The shape of the diversity curve will change as a function of the diversity attributes and their weightings. Where coal and gas are determined to display very similar strategic properties (i.e. negligible disparity), the maximum-diversity portfolio (for coal, gas and wind) will comprise roughly 50% wind and 50% coal and gas together.18 On the other hand, where the strategic disparities between coal, gas and wind are determined to be similar, the maximum-diversity portfolio will comprise roughly equal contributions from each of the three options. Between these extremes, the composition of the maximum-diversity portfolio depends on the degree to which wind is felt to be more disparate from coal and gas than those two options are from each other.

One of the objectives of this work is to examine how the composition of the MVP efficient mixes changes as phi changes, that is, as confidence in historic events and narrow financial performance is reduced and the presence of uncertainty and exposure to surprise are increased. Optimal mixes now consist of minimum-U* portfolios, defined as:

U* = Min [φσp + (1 – φ) × 1/Dport]

Figure 13.5 illustrates the way the composition of this minimum variance/maximum diversity mix changes with phi along the MVP EF, given the particular set of parameter values we examined. The results generally indicate that as phi decreases, that is, as MVP carries less weight and MDA more weight, the wind share declines while the shares of gas and coal rise. The mix consists entirely of wind at φ = 1.0, since this technology exhibits the lowest expected variance on the basis of its historic cost characteristics. Outlays for wind generation consist largely of up-front capital coupled with small, relatively fixed annual maintenance outlays. The expected risk of wind is lower than that of the fossil fuel alternatives. Where confidence in historic costs and relationships is high, a mix high in wind lowers risk. Where confidence in such historic values declines, however, increasing the shares of several disparate technologies improves portfolio diversity. In a more complete, real-world setting, the full-spectrum application would be carried out for a variety of stakeholder perspectives on multi-criteria performance (Table 13.1) and multi-attribute disparity (Table 13.2) in order to map the way in which resulting preferred portfolios vary with subjective assumptions, interpretations and values. This, in turn, might form a useful
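The way phi trades off MVP risk against diversity can be sketched with a small grid search (a toy model: every numeric input below — cost risks, correlations, disparity indices — is invented for illustration, and the brute-force grid stands in for whatever optimizer a real study would use):

```python
import math

# Invented illustrative inputs; none of this is the chapter's data.
TECHS = ("wind", "gas", "coal")
STDEV = {"wind": 0.02, "gas": 0.12, "coal": 0.08}   # holding-period cost risk
CORR = {("wind", "gas"): 0.5, ("wind", "coal"): 0.5, ("gas", "coal"): 0.6}
DISPARITY = {("wind", "gas"): 0.9, ("wind", "coal"): 0.9, ("gas", "coal"): 0.2}

def rho(i, j):
    return 1.0 if i == j else CORR.get((i, j), CORR.get((j, i)))

def sigma_p(x):
    """MVP portfolio risk (standard deviation) for technology shares x."""
    var = sum(x[i] * x[j] * STDEV[i] * STDEV[j] * rho(i, j)
              for i in TECHS for j in TECHS)
    return math.sqrt(var)

def d_port(x):
    """Portfolio diversity: sum over technology pairs of X_i * X_j * d_ij."""
    return sum(x[i] * x[j] * d for (i, j), d in DISPARITY.items())

def min_u_star(phi, step=0.05):
    """Grid search for the mix minimizing U* = phi*sigma_p + (1 - phi)/D_port."""
    n = round(1 / step)
    best_mix, best_u = None, float("inf")
    for w in range(n + 1):
        for g in range(n + 1 - w):
            x = {"wind": w * step, "gas": g * step, "coal": (n - w - g) * step}
            if phi == 1.0:
                u = sigma_p(x)          # pure MVP: diversity plays no role
            elif d_port(x) == 0:
                continue                # zero diversity: infinite penalty
            else:
                u = phi * sigma_p(x) + (1 - phi) / d_port(x)
            if u < best_u:
                best_mix, best_u = x, u
    return best_mix

print(min_u_star(1.0))   # pure MVP weighting: the all-wind corner
print(min_u_star(0.0))   # pure diversity weighting: a wind-heavy blend
```

With these made-up parameters the sketch reproduces the qualitative behavior described in the text: at φ = 1.0 the minimum-U* mix is all wind (lowest historic variance), while lowering φ pulls gas and coal into the mix because their mutual disparity with wind raises Dport.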

18 In this case, where gas and coal are quite similar, the relevant diversity attribute is between "wind" and "not wind." Portfolio diversity (Equation 13.3) is expressed as Dport = Σ Xi Xj di,j, the product of the disparity index between technologies i and j and the shares of these two technologies in the mix (0 ≤ Xi, Xj ≤ 1). With two options we have Dport = (Xi)(1 – Xi)di,j. It is easy to show that the maximum value of this expression occurs at a 50%–50% mix, since Xi = 0.5 maximizes (Xi)(1 – Xi).
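The footnote's 50/50 claim is easy to check numerically (a throwaway sketch; the disparity value d_ij is arbitrary):

```python
# D_port for a two-option mix is X * (1 - X) * d_ij; it should peak at X = 0.5.
d_ij = 1.0  # arbitrary disparity index between "wind" and "not wind"

def two_option_diversity(x):
    """Portfolio diversity when the two option shares are x and 1 - x."""
    return x * (1.0 - x) * d_ij

shares = [i / 100 for i in range(101)]   # candidate shares 0.00, 0.01, ..., 1.00
best = max(shares, key=two_option_diversity)
print(best)  # 0.5
```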


input to decision-making of a kind that is more robust under uncertainty, ambiguity and ignorance than is MVP alone.

III. Conclusions and Future Directions

Mean-variance portfolio analysis has been used to develop efficient real-asset portfolios with optimal risk-return properties. Like other capital market models, MVP is generally predicated on relatively narrow financial performance notions: historic statistical risk (variance) and covariance. As such, MVP may not be sufficiently robust to deal with the full implications of uncertainty, ambiguity and ignorance. These attributes, which have little or no basis in history, routinely affect longer-term, more broadly framed strategic and policy decisions. MVP is based on stylized assumptions about fuel prices and similar accounting performance aspects. Yet surprise can rear its head in many forms. So-called energy shocks can be caused by interruptions due to a variety of factors, including warfare, terrorism, natural catastrophe, organizational collapse, infrastructure failure, engineering fault or regulatory intervention. Although these issues are important drivers for diversification, they are effectively excluded by an MVP approach based on historic relationships.

Nonetheless, historic variability is widely considered useful as an indicator of future volatility. While actual historic events may not be repeated, event-types and their effects can be expected to recur. On balance, however, the idea that historic variability offers a useful guide to the future may be more justifiable in the case of financial portfolios, where markets are highly efficient and assets virtually infinitely divisible.

Figure 13.5. Minimum variance/maximum diversity mixes along the MVP frontier (shares of wind, gas and coal plotted for φ = 0.00, 0.25, 0.50, 0.75 and 1.00).


Instead of seeking to resolve inevitably protracted debates over these kinds of intrinsically value-laden and often highly political issues, this chapter develops a framework under which the implications of different positions might be systematically explored. We attempt to articulate, and even reconcile, what seem to some rather narrow and short-sighted market-oriented views (as reflected in MVP) with what to others look like unmanageably broad and unduly pessimistic perspectives that fail to take account of the available data (as seen in MDA). In the process, this heuristic approach reveals the crucial influence on analysis of intrinsically subjective but equally reasonable assumptions, priorities and values.

To this end, we have provided a first step towards a fully articulated model that optimizes MVP risk-return as well as broader multi-dimensional portfolio performance and diversity. Our "constrained optimum" model provides optimal diversity results, subject to the constraint of MVP efficiency. Thus, while our illustrative results are generally limited to the MVP EF, they are nonetheless quite informative. They show that:

(i) By increasing the share of gas from 15% to 66%, generating costs of the illustrative US portfolio can (under this view) be reduced 12% without affecting MVP risk.

(ii) Alternatively, increasing the share of wind from 0% to 18% does not change the MVP cost of the US portfolio but reduces MVP risk 23%.

(iii) By increasing the share of wind to 44%, the US portfolio can be moved to a maximum-diversity point where it is best insulated against unknown surprise. The cost of so doing is about $0.003, or 7% of the current generating cost (the cost of Mix A in Figure 13.2).

For the data we used, the first two outcomes above represent "free-lunch" results that should not be ignored in public and corporate policymaking. Of course, we note that the present exercise is expository and heuristic rather than prescriptive. The particular results obtained here rest on schematic assumptions, constrained options and limited performance data. They are not validated through consultative deliberation with a wider range of stakeholders. However, the present exercise does serve to illustrate both the process and the findings that would be associated with a more comprehensive full-spectrum analysis. Based on a more intensive process of stakeholder engagement and analysis, findings such as these hold a salience that could not easily be ignored in public and corporate policymaking.


Index

Abandonment option, in negative NPV, 72, 87

Advanced measurement approach (AMA)
  external loss data, 124
  internal loss data, 123–124
  qualitative factors, 127
  scenario analyses, 124
  statistical technique, 126
Airline industry
  real options analysis, 88
Ambiguity, 205, 206, 207, 209, 211, 213, 216, 219
Arbitrage pricing theory (APT), 81
  risk factors, 82
Automobile manufacturing industry
  real options analysis, 87–88
Avoided cost, 8
Bank capital regulation, 119–120
Banking industry, ERM, 107–108, 135–137
  banks and bank supervisors, challenges, 129–130, 136
Base case net-present-value analysis, real option, 102
Basel Capital Accord, 107–108, 120, 135, 136, 190
Basel Committee, 135, 190
Basel I agreement, 119, 120
Basel II proposal, 120–121, 123, 129, 130, 132
  loss event types, 121
  operational risk, 120–121
Basic indicator approach, 123
Big Three, 40–42
BioHope, 183, 184
Boeing, 88
California Experience, The, 9
Capital adequacy, 190
  assessment, 190, 194
  components
    for economic value, 192, 193–196
    for financial liquidity, 192, 196–197
  concept, 191–193
  framework, 197–199, 200–201
    improving stakeholder confidence, 198
    managing performance, 198–199
    promoting industry transparency, 199
  restoring, 200
Capital asset pricing model (CAPM), 70, 81, 167, 203
Capital budgeting decision-making
  DCF approach, 67–74, 137
Cash flow
  limitations, 70
  see also Free cash flow; Discounted cash flow
Chief Executive Officer (CEO), 108, 161
Chief Financial Officer (CFO), 73, 109
Chief Risk Officer (CRO), 108, 161–162
  responsibilities, 110
Clearinghouse, 25
Committee of Chief Risk Officers (CCRO), 190, 191
Computer industry
  real options analysis, 71–72, 88
Constrained optimum model, 214, 220
Corporate culture, 14, 40, 42
Cost approach, valuation, 77
Covering, in commodity delivery rate, 25
Credit risk, 111, 112, 120, 135, 195, 197
D-Xerox machine, 182–183
DCF approach, 67–74, 77, 137–138
  advantages and disadvantages, 78–83
  flow valuation steps, 68–69
  NPV analysis limitations, 69–70
    cash flow limitations, 70
    discount rate limitations, 70–71
    strategic options, 71–73
  shortcomings, 81


DCF approach (continued)
  and sensitivity analysis, 95
Decision-maker, 14, 54, 162, 164, 169–170, 171, 184–185, 198
Defer option, in negative NPV, 71, 72–73, 92, 95, 98, 100
Deregulation, 8–10, 53–54, 108
Discounted cash flow see DCF approach
Diversification strategies, 13–14
Diversity analysis, 205, 209, 210–211
  criteria, 210
  Energy Security, 211
  illustrative results, 217
  MDA, 208–211
  see also Full-spectrum diversity model
Drucker, P., 144, 147
Economic and financial deregulation, 8–10
Economic capital, 113, 191, 194–196
Economic value, capital adequacy, 192, 193–196
  credit risk, 195
  debt, 193–194
  economic capital and its components, 194
  market risk, 194–195
  net assets, 193
  operative risk, 195
Economies of scale, 4, 43–44
Efficient frontier (EF), 202
Electric companies
  risk management failure, 146
Electric power research institute (EPRI), 17, 49, 55, 56
Electricity risk management, 16
  ERM, 138–141
    scenario analysis, 142
  known versus unknown, 141–142
  old versus new model, 17–18
  price and volume risk mitigation, financial means, 24–30
  regulatory risk, 3–15, 18–22
  technology risks, 22–23
  uncovered risk, 30
  SP application, 158
Embedded options, strategic decision, 138, 164–165, 183
Emission allowance, 9
Energy Policy Act, 8, 11, 13, 45, 47, 145
Energy Security, 211, 217
Enron scandal, 139, 205
Enterprise risk management (ERM), 107–115, 138, 159–162
  banking industry, 135–137
  benchmarks, 114
  benefits, 113–114
  birth, 107–108
  and business function
    audit, 110–111
    board of directors, 112
    corporate governance and compliance, 111
    executive management, 112
    investor relations, 111
    legal, 111
    treasury/finance, 111
  framework, building, 109–113
  goal, 140, 142
  heightened importance, 108–109
  KUU concept, 173–176
  for power companies, 139–141
  value proposition, 109
Entrepreneurs, 3–4, 38, 145
Environmental Protection Agency (EPA), 12
ERM see Enterprise risk management
European regulatory frameworks, 59
Executive decision-making, 153
Exercise price, 26, 118, 185
Expected portfolio performance, 207–208
External auditors, 198
External loss data, 124, 129
Extreme value theory, 128
Fat-tails, 127, 128
Federal Energy Regulatory Commission (FERC), 8, 12, 18, 45, 48, 139
Federal Power Act of 1935, 5
Federal Trade Commission, 6
Finance-theoretic approach, 204
Financial and structural regulation, 5–6
Financial liquidity, capital adequacy, 192, 196–197
  modeling liquidity, 197
Financial options theory, 163, 164
Forward contract, 24, 25
Free cash flow (FCF), 68, 70, 76, 82
Full-spectrum diversity model
  illustrative results, 217–218
  MDA, 208–210
Full-spectrum portfolio analysis, 205, 213
  constrained optimization, 214, 220
  illustrative results, 214–217
  MVP theory, 207–208
Future events, 144–145
Future exchanges see Futures contract
Future risks, 165
Futures contract, 25
Gas-fired generation, 8
General Motors (GM), 21, 41, 87


Globalization, 107, 120
"Going concern" concept, 192, 198, 200
Gordon constant growth model (GGM), 79, 83
Growth option, in negative NPV, 71–72
Habitual thinking, ix
High-tech and e-business industry
  real options analysis, 90
Holding companies, 5–6
Hope Natural Gas Co, 54
Horizon value (HV), 68
HP-Compaq, 88
Hughes, T.P., 22, 146
Ignorance, 205, 206, 207, 209, 211, 213, 219
Incertitude, 202, 204, 205, 209, 210
  see also Uncertainty
Income approach, valuation, 76
Independent system operator (ISO), 45, 46, 47, 49
Independent transmission company (ITC), 46, 48
Industry transparency, promoting, 199
Insurance, as risk mitigant, 127
Integrated gasification combined cycle (IGCC) technology, 39–40
  syngas, 39–40
Internal loss data, 123–124, 128
Investor-owned electric utility industry, 145
Investors, 19–20, 59, 134, 146, 147–148, 203
  responsibility, 150
Keynes, J.M., 147, 148
Knowable, 148–149
Known, 135, 147–148, 172, 174, 177
  versus unknown, 141
Known, unknown and unknowable (KUU), 147
  application, 172–178
Locational marginal pricing (LMP), 46, 48, 49, 58
Loss distribution approach, 126–127
Loss event types, 121, 122, 130
Market
  as replacement of regulation, 52–55
Market approach, valuation, 76
Market dimension, of strategic surprise, 40–42
Market risk, 80, 140, 194
Mean-variance portfolio (MVP) theory, 202, 205, 219
  application, 206
  portfolio risk definition, 205
    full-spectrum, 207
Medium Sized Power Company (MSPC), 179
Merck, 183, 184
Mergers and acquisition
  real options analysis, 90
Microsoft, 160
Minimum variance/maximum diversity
  MVP frontier, 219
Monopoly, 52
  power industry, 139
Monte Carlo simulation, 75, 82, 94, 97, 102–103, 126
  combining with real option, 99
Multi-attribute disparity, 212
Multi-criteria diversity (MDA) analysis, 202, 208–211
  limitation, 204
  see also Full-spectrum diversity model
Multinomial lattice solver (MNLS), 105
Multiple super lattice solver (MSLS), 105
Multiple asset-pricing model, 81
MVP theory see Mean-variance portfolio theory
National Bell Telephone, 35
National Grid, 43, 55
National Ignition Facility (NIF)
  scenario building, 169–171
Net present value (NPV) analysis, 69, 76, 80, 89, 96, 102, 137, 162, 184, 186
  limitations, 69
  managers, 181
  negative projects, 69, 71, 73, 74, 98
  positive projects, 69, 70, 71, 73, 74
  technical assumptions, 69–70
Net salvage value (NSV), 68
New analytics, for valuation, 83–85
  paradigm shift, 85
  real options
    application, 87–90
    fundamentals, 90–106
    issues, 86
  top-down approach, 83
  see also Traditional valuation methodologies
North American Electric Reliability Council (NERC), 56, 57
NPV analysis see Net present value analysis
Oil and gas industry
  real options analysis, 88
Operational risk management, at financial institutions, 119–132
  application, 130–131


Operational risk management, at financial institutions (continued)
  current proposal, 120–127
  implementation
    challenges, 129–130
    issues, 128–129
Operative risk, 120, 121, 123, 129, 130, 131–132, 195
Option writer/grantor, 26
Performance-based regulation (PBR)
  performance index, 51
  price moratorium, 51
  profit sharing, 51
  range of return, 51
  regulatory lag, 51
Performance index, 51
Performance, management of, 52, 114, 119, 135, 138, 162, 198–199, 202, 204
Pharmaceutical research and development industry
  real options analysis, 89
Policy-makers, 4, 13, 18, 43, 52, 203, 207
Portfolio analysis
  CAPM, 203
  MVP theory, 206
  see also Full-spectrum portfolio analysis
Portfolio diversity, 209, 210, 211–213, 218
  illustrative results, 218–219
  index, 210, 211, 218
  strategic technology disparity attributes, 211–212, 213
Portfolio performance, 207, 208, 209–210, 213
Portfolio risk, 205, 207, 208
Portfolio theory, 207
Powerplant and Industrial Fuel Use Act, 7
Present values (PV), 68, 76, 79, 91, 102, 137
Price and volume risk mitigation
  financial means, 24
Price moratorium, 51
Production option, in negative NPV, 71
Profit sharing, 51
Provider of last resort (POLR), 17
Public Utilities Regulatory Policies Act, 44
Public Utility Holding Company Act of 1935 (PUHCA), 6, 8, 11, 12
Public Utility Regulatory Policies Act (PURPA), 8, 10, 11, 12
Qualitative factors, 127
Qualitative management screening, real option, 102
Range of return, 51
Real estate industry
  real options analysis, 89
Real options analysis (ROA), 75, 86, 98, 137–138, 162, 163–166, 169
  advantages, 181–186
  airline industry, 88
  alternatives and characteristics, 164
  analytical process, 75
  application, 166–168
  automobile and manufacturing industry, 87–88
  basics, 91
  caveats, 168–169
  computer industry, 88
  and DCF, comparison, 100
  decision analysis, 75
  fundamental, 90–102
  high-tech and e-business industry, 90
  implementation, 104–105
  importance, 91–94
  KUU concept, 173, 174–175
  managers, 181–182
  mergers and acquisition, 90
  and Monte Carlo simulation, 99
  and NPV, comparison, 181–186
    D-Xerox machine, 182–183
    pharmaceutical firm acquisition, 183–184
  oil and gas industry, 88
  pharmaceutical research and development industry, 89
  process steps, 101–104
    base case NPV analysis, 102
    modeling and analysis, 103
    Monte Carlo simulation, 102–103
    portfolio and resource optimization, 103
    problem framing, 103
    qualitative management screening, 102
    reporting, 104
    update analysis, 104
  real estate industry, 89
  telecommunications industry, 89
  terminology, 92
  and traditional approaches, comparison, 94–101
  utilities industry, 89
Reality check, 145–146
Regional transmission organization (RTO), 19, 46, 48, 57
Regulated industry, incentives, 50–52
  performance-based regulation, 51
Regulation
  replacing with market, 52–55


Regulatory change, 13, 14
Regulatory lag, 51
Regulatory risk, in electric industry, 3, 18–22
  current issues, 11–13
  future, 15
  historical scenarios, 3–5
    economic and financial deregulation, 8–10
    environmental effects, 6
    financial and structural regulation, 5–6
    national security, 7
    safety issues, 7–8
  themes, 10–11
  unknown and unknowable risk assessment, 13–14
Risk adjusted
  discount rates, 71, 78–79, 80
Risk and return, 144
Risk management
  traditional approaches, 162
Risk managers, 16
  learning from investors, 144–150
Rumsfeld, D., 147
Say, J.B., 145
Scenario, 157
  phases, 157
  SP, SB, and SA, comparison, 172
Scenario analysis (SA), 124, 142, 171–172
  KUU concept, 175
Scenario building (SB), 169–171
  KUU concept, 175
Scenario planning (SP), 157–158
  application, 158
  benefits and drawbacks, 158–159
  KUU concept, 173, 174
  origins and evolution, 155–157
  in power industry, 179–181
Shannon–Wiener Diversity Index, 204
Shell, 14, 154, 155, 156
Single super lattice solver (SSLS), 105
SLS Excel solution, 105
SLS functions, 105
Smith, A., 144
Stakeholder, 109, 198
Standard market design (SMD), 46, 47
Standardized approach, 123
Statistical technique, 126–127
Stranded cost, 9
Strategic opportunity, 34
Strategic options, in negative NPV, 71–72
Strategic risk, 33–34
  technology implication, 33–42
Strategic surprise
  anticipation, 36–42
  corporate culture power, 42
  market dimensions, 40–42
  and technology, 34–36
  technology dimensions, 37–40
Strict uncertainty, 204
Strike price, 26
Swaps, 27
SWOT application, 177
Technical and technology
  difference, 146
Technology dimension, of strategic surprise, 37–40
Technology risks, 22–23
Telecommunications industry
  real options analysis, 89
Telephone industry
  and VOIP, 36
Test marketing, 165–166
Tornado Diagram, 102–103
  and scenario analysis, 96
Traditional valuation methodologies, 75
  cost approach, 77
  income approach, 76–77
  intangible-asset valuation, 77
  market approach, 76
  practical issues, 77–78
  and real option, comparison, 94
  see also New analytics, for valuation
Transmission network, 43
  in America, 45–48
  incentives
    and fixes, 58–60
    in regulated industry, 50–52
  market, as replacement of regulation, 52–55
  old model and new needs, 43–45
  problem fixing cost, 57–58
  and reliability
    incentives for, 55
    then and now, 55–57
  status report, 48–49
Uncertainty, 14, 105–106, 166, 169, 178, 185, 203, 204
  cash flow, 70
  see also Incertitude
Unknowable, 13, 135, 141, 147–148, 173
Unknown, 13, 135, 141, 147–148, 173
Utilities industry
  real options analysis, 89


Valuation
  new analytics, 83–87
  traditional views, 67–68, 76
  value, 76
Value-at-risk (VaR) methodology, 107, 135, 136
Voice-over-Internet-protocol (VOIP), 36, 39
Volatility measure, 184–185
Weighted average cost of capital (WACC), 68, 76–77, 80, 81
Western Union, 42
  and voice-over wire, 34–35
Wholesale power market platform (WPMP), 47
