VALUATION OF FLEETING OPPORTUNITIES
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF
MANAGEMENT SCIENCE AND ENGINEERING
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Ibrahim S. Almojel
August 2010
http://creativecommons.org/licenses/by-nc/3.0/us/
This dissertation is online at: http://purl.stanford.edu/qz978wj4057
© 2010 by Ibrahim Saad Almojel. All Rights Reserved.
Re-distributed by Stanford University under license with the author.
This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Ronald Howard, Primary Adviser
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Ali Abbas
I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Samuel Chiu
Approved for the Stanford University Committee on Graduate Studies.
Patricia J. Gumport, Vice Provost for Graduate Education
This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.
Abstract
This dissertation is concerned with the study of problems that involve fleeting opportunities. These are situations with limited capacity and a stream of opportunities on which the decision maker must decide whether to capitalize as each one occurs. One example is the evaluation of business proposals by Venture Capitalists (VCs). I have developed a process to help decision makers in such situations set their policies and answer valuation questions. In the VC context, this process helps evaluate the firm itself or the value of a partner in hiring or firing situations. It can also help the organization's decision makers determine the value of information-gathering activities for different deals in different situations.
Our solution is based on three steps. In the first, we assess the frame and decision basis. In the
second, we build the valuation funnel. In the third, we apply the results of the valuation funnel
to decisions regarding the deal flow and specifically to deals as they arrive. We assume that the
risk preference of the decision maker follows a linear or an exponential utility function. We
formulate the generic problem in the valuation funnel as a dynamic program wherein the
decision maker can either accept a given deal directly, reject it directly, or seek further
information on its potential and then decide whether to accept it or not. This approach is
illustrated in this dissertation through several examples.
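The three alternatives in this dynamic program can be sketched as follows. This is a minimal illustration under a linear (risk-neutral) utility, not the dissertation's own model; the deal outcomes, prior, detector accuracy, and information cost are all invented for the example.

```python
# Sketch of a finite-horizon dynamic program for fleeting opportunities.
# Each period one deal arrives; with n deals remaining and capacity k, the
# decision maker may reject it, accept it, or buy an imperfect "detector"
# reading and then decide. All parameter values below are illustrative.
from functools import lru_cache

BAD, GOOD = -50.0, 100.0  # possible deal outcomes
P_GOOD = 0.4              # prior probability the deal is good
ACC = 0.8                 # detector accuracy: P(says good | good) = P(says bad | bad)
COST = 5.0                # price of one detector reading

@lru_cache(maxsize=None)
def value(n, k):
    """Value of the remaining deal flow with n deals to come and capacity k."""
    if n == 0 or k == 0:
        return 0.0
    reject = value(n - 1, k)
    accept = P_GOOD * GOOD + (1 - P_GOOD) * BAD + value(n - 1, k - 1)

    # Buy information: observe the detector, update by Bayes' rule, then
    # accept or reject optimally given the posterior probability of "good".
    p_says_good = ACC * P_GOOD + (1 - ACC) * (1 - P_GOOD)
    post_good = ACC * P_GOOD / p_says_good               # P(good | says good)
    post_bad = (1 - ACC) * P_GOOD / (1 - p_says_good)    # P(good | says bad)

    def branch(p):  # best of accept vs. reject after seeing the signal
        return max(p * GOOD + (1 - p) * BAD + value(n - 1, k - 1), reject)

    info = -COST + p_says_good * branch(post_good) + (1 - p_says_good) * branch(post_bad)
    return max(reject, accept, info)

# Example: value of a flow of 10 upcoming deals with capacity for 3.
print(value(10, 3))
```

Comparing the `reject`, `accept`, and `info` terms at each state recovers the optimal policy; the value of information for a single deal is the difference between `info` and the better of the other two.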
Our results show well-behaved characteristics of the optimal policy, deal flow value, and the
value of information and other alternatives over time and capacity. To enable these results, we
developed the power u-function further and defined a u-value multiplier and certain equivalent
multipliers. We also studied different approximations for the power u-curve based on a few moments of the underlying distribution. The system we created provides a valuation template for
the fleeting opportunities problem. This template allows decision makers to address single
opportunities as they occur and guides their approach to the deal flow as a whole. We believe
this template will help to extend the application of Decision Analysis and spur more research
within the fleeting opportunities problem and, more generally, valuation problems.
Our process thus yields a valuation template for problems fitting the fleeting opportunities description, which we hope will extend the use of Decision Analysis into new fields.
To my inspiration – my parents Saad and Halah
And to my life – my beloved Sarah and Saad
Acknowledgments
"Gratitude is not only the greatest of virtues, but the parent of all the others."
– Marcus Cicero
Obtaining a degree at Stanford is a life-changing experience. Working in such a dynamic, active,
and intellectual environment changes one's perspective. I was lucky enough to experience a
wide spectrum of classes and appreciate the insights of the full range of professors.
I owe much gratitude to all the people who made this dissertation possible. My parents' inspiration, my wife's moral support, my siblings' encouragement, and my son's innocent enthusiasm enriched my experience. My professors' guidance changed my perspective and my friends' support and kindness created a home for me away from home.
My position with Professor Ron Howard was a learning experience in itself. Learning went
beyond Decision Analysis to ethics and legal systems. His logical and consistent view, relevant
across all aspects of life, was what enlightened me most. I will forever be indebted to him for what I learned. I am also indebted to my advisors Jim Matheson, who worked closely with me through the defense, and Ali Abbas, who introduced me to Decision Analysis and set me on the path that changed my life.
My friends provided me with a rich intellectual environment at Stanford. Our continuous
discussions at the Decision Analysis Group broadened my horizons in a fun-loving atmosphere.
The Muslim and Arab communities at Stanford were a home for me away from home. The list is long and as diverse as the world itself. I will forever cherish our memories together.
I am most appreciative of my family. My parents, Saad and Halah, are my inspiration and
motivation throughout my life. I gained a new sense of appreciation for them after my son was
born. My wife Sarah is the warmth of my life, and her support and care keep me going every day. My son's innocent enthusiasm and excitement are my delight. I owe my siblings Muhammad, Maha, and Rana for their endless encouragement.
Table of Contents
ABSTRACT ................ IV
ACKNOWLEDGMENTS ................ VI
TABLE OF CONTENTS ................ VII
LIST OF FIGURES ................ XII
CHAPTER 1 – INTRODUCTION ................ 1
1.1 A New Research Direction ................ 1
  1.1.1 The Need to Formalize DA ................ 1
  1.1.2 Introducing DA Methodologies ................ 1
1.2 The Research Process ................ 2
  1.2.1 The Motivating Problem ................ 2
  1.2.2 Intuitive Understanding of the Motivating Problem ................ 3
  1.2.3 The Research Problem ................ 3
  1.2.4 Solution to the Research Problem ................ 4
  1.2.5 Intuitive Understanding of the Research Problem ................ 6
  1.2.6 Solution to Motivating Problem ................ 6
1.3 Dissertation Structure ................ 6
1.4 Using this Dissertation ................ 6
CHAPTER 2 – REVIEW & SUMMARY OF CONTRIBUTIONS ................ 8
2.1 Literature Review ................ 8
  2.1.1 Risk Aversion ................ 9
  2.1.2 The Value of Information ................ 9
  2.1.3 Stand-alone Opportunities ................ 10
  2.1.4 Repeated Opportunities ................ 12
2.2 Contributions ................ 14
  2.2.1 Risk Aversion ................ 14
  2.2.2 The Value of Information ................ 14
  2.2.3 Stand-alone Opportunities ................ 15
  2.2.4 Repeated Opportunities ................ 15
2.3 Impact of These Contributions ................ 15
CHAPTER 3 – RISK ATTITUDE ................ 16
3.1 The Exponential U-Curve ................ 16
  3.1.1 Introduction ................ 16
  3.1.2 Definitions ................ 18
  3.1.3 Applying the Exponential U-Curve ................ 18
  3.1.4 The Value of Information and Control ................ 18
  3.1.5 Proximal Analysis ................ 18
3.2 The Power U-Curve ................ 19
  3.2.1 Introduction ................ 19
  3.2.2 Definitions ................ 23
  3.2.3 Applying the Power U-Curve ................ 25
  3.2.4 The Value of Information and Control ................ 27
  3.2.5 Proximal Analysis of the Power U-Function ................ 29
3.3 Comparison between Exponential and Power U-Functions ................ 32
  3.3.1 Introduction ................ 32
  3.3.2 Comparison when Considering a Single Deal ................ 32
  3.3.3 Comparison when Considering Fleeting Opportunities ................ 33
CHAPTER 4 – MODEL AND NOTATION ................ 35
4.1 General Description ................ 35
4.2 Components of the Model: Deals, Time Horizon, and Capacity ................ 36
4.3 The Additive Model ................ 37
  4.3.1 Representation of Deals ................ 37
  4.3.2 Model Layout ................ 38
4.4 The Multiplicative Model ................ 39
  4.4.1 Representation of Deals ................ 39
  4.4.2 Model Layout ................ 40
4.5 Terms and Notation ................ 40
CHAPTER 5 – STEP 1: FRAME AND DECISION BASIS ................ 43
5.1 Overview ................ 43
5.2 Framing ................ 44
  5.2.1 Framing the Deal Flow ................ 45
  5.2.2 Framing the Deals ................ 45
5.3 Basis for the Decisions ................ 46
  5.3.1 Preferences ................ 46
  5.3.2 Alternatives ................ 47
  5.3.3 Information ................ 47
CHAPTER 6 – STEP 2.1: ADDITIVE VALUATION FUNNEL ................ 49
6.1 Overview ................ 49
6.2 Basic Problem Structure ................ 50
  6.2.1 Definitions ................ 51
  6.2.2 The Value of Control ................ 53
6.3 Main Results ................ 53
  6.3.1 Optimal Policy ................ 53
  6.3.2 Characterizing the Certain Equivalent and Threshold ................ 53
  6.3.3 Characterizing Indifference Buying Prices ................ 54
  6.3.4 Characterizing the Optimal Policy ................ 56
  6.3.5 Multiple Detectors ................ 58
6.4 The Long-Run Problem ................ 59
  6.4.1 Problem Structure and Definitions ................ 60
  6.4.2 Extension of the Results to the Long-Run Problem ................ 61
  6.4.3 Policy Improvement Algorithm ................ 62
6.5 Extensions ................ 64
  6.5.1 Multiple Cost Structures ................ 64
  6.5.2 Decision Reversibility ................ 67
  6.5.3 The Probability of Knowing Detectors ................ 69
CHAPTER 7 – STEP 2.2: THE MULTIPLICATIVE VALUATION FUNNEL ................ 72
7.1 Overview ................ 72
7.2 Basic Problem Structure ................ 73
  7.2.1 Definitions ................ 74
  7.2.2 The Multiple of Control ................ 76
7.3 Main Results ................ 76
  7.3.1 Optimal Policy ................ 76
  7.3.2 Characterizing the Certain Multiplier and Threshold ................ 76
  7.3.3 Characterizing Indifference Buying Fractions ................ 77
  7.3.4 Characterizing the Optimal Policy ................ 80
  7.3.5 Multiple Detectors ................ 81
7.4 The Long-Run Problem ................ 83
  7.4.1 Problem Structure and Definitions ................ 83
  7.4.2 Extension of the Results to the Long-Run Problem ................ 84
7.5 Extensions ................ 85
  7.5.1 Multiple Cost Structures ................ 85
  7.5.2 Decision Reversibility ................ 88
  7.5.3 The Probability of Knowing Detectors ................ 90
CHAPTER 8 – STEP 3: FUNNEL APPLICATION AND EXAMPLES ................ 93
8.1 Overview ................ 93
8.2 Application of the Funnel ................ 94
8.3 Two Types of Decisions: Meta and Real-Time ................ 94
8.4 Setup Examples ................ 95
8.5 Real-Time Application Examples ................ 97
  8.5.1 Real-Time Setup Example ................ 97
  8.5.2 Should Saad Buy Information? ................ 98
  8.5.3 Should Saad Invest in the Deals? ................ 98
8.6 Examples of Meta-Decision Applications ................ 99
  8.6.1 Situation 1: Choosing the Focus of the Firm ................ 99
  8.6.2 Situation 2: Hiring Partners: Evaluating Synergies in Their Skills ................ 102
  8.6.3 Situation 3: Hiring Partners: Evaluating Their Skills ................ 103
CHAPTER 9 – CONCLUSIONS AND FUTURE RESEARCH ................ 104
9.1 Conclusions ................ 104
9.2 Future Research ................ 105
BIBLIOGRAPHY ................ 107
APPENDIX A1 – PROOFS ................ 113
A1.1 Chapter 3 Proofs ................ 113
A1.2 Chapter 6 Proofs ................ 118
  A1.2.1 Section 6.3 Main Results ................ 118
  A1.2.2 Section 6.4 The Long-Run Problem ................ 134
  A1.2.3 Section 6.5.1 Extensions – Multiple Cost Structures ................ 140
  A1.2.4 Section 6.5.2 Extensions – Decision Reversibility ................ 142
  A1.2.5 Section 6.5.3 Extensions – Probability of Knowing Detectors ................ 143
A1.3 Chapter 7 Proofs ................ 151
  A1.3.1 Section 7.3 Main Results ................ 151
  A1.3.2 Section 7.4 The Long-Run Problem ................ 167
  A1.3.3 Section 7.5.1 Extensions – Multiple Cost Structures ................ 174
  A1.3.4 Section 7.5.2 Extensions – Decision Reversibility ................ 176
  A1.3.5 Section 7.5.3 Extensions – Probability of Knowing Detectors ................ 177
APPENDIX A2 – A GENERIC DECISION DIAGRAM EXAMPLE ................ 185
A2.1 Introduction ................ 185
A2.2 Generic Diagram Setup and Frame ................ 185
A2.3 Generic Diagram Node Definitions ................ 187
A2.4 Deeper Layers ................ 191
A2.5 Extensions ................ 194
APPENDIX A3 – VENTURE CAPITAL VALUATION ................ 196
A3.1 Literature Review ................ 196
  A3.1.1 Valuation Overview ................ 196
  A3.1.2 The Venture Capital Method ................ 197
  A3.1.3 The First Chicago Method (FCM) ................ 199
A3.2 The "Real VC" View ................ 199
A3.3 Summary ................ 201
List of Figures
Figure 1 - Research Process ................ 2
Figure 2 - Solution to the Research Problem ................ 5
Figure 3 - Valuation Funnel ................ 5
Figure 4 - Basic Dynamic Program Structure ................ 5
Figure 5 - Power u-function for λ ≤ 0 ................ 21
Figure 6 - Power u-function for 0 < λ < 1 ................ 21
Figure 7 - Example 3.2.1 Original Deal Structure ................ 24
Figure 8 - Example 3.2.1 Simplified Deal ................ 24
Figure 9 - Example 3.2.2 Deal Structure with a Combined Deal ................ 26
Figure 10 - Example 3.2.3 Deal Structure with Perfect Information ................ 27
Figure 11 - Example 3.2.4 Improved Deal Structure ................ 28
Figure 12 - Power Function approximation error converges to zero ................ 32
Figure 13 - Deal Flow Structure ................ 36
Figure 14 - Deal Setup Structure ................ 36
Figure 15 - Representation of Deals in the Additive Model ................ 38
Figure 16 - Additive Model Layout ................ 38
Figure 17 - Representation of Deals in the Multiplicative Model ................ 39
Figure 18 - Multiplicative Model Layout ................ 40
Figure 19 - Step 1 of the Solution Process ................ 43
Figure 20 - Two Levels of Framing ................ 44
Figure 21 - Generic Diagram for Internet Consumer Startups ................ 46
Figure 22 - Decision Basis ................ 46
Figure 23 - Modeling the Prior Deals ................ 48
Figure 24 - Step 2 of the Solution Process ................ 49
Figure 25 - Basic Problem Structure ................ 51
Figure 26 - Example Deal Structure ................ 52
Figure 27 - Example Deal Flow ................ 52
Figure 28 - Examples of Deal Flow Values ................ 54
Figure 29 - Characterizing the IBP of Information ................ 55
Figure 30 - Example IBP Value ................ 56
Figure 31 - Characterizing Optimal Policy ................ 57
Figure 32 - Example of Optimal Policy Over Time ................ 58
Figure 33 - Example 6.3.4: The ordering of detectors is not myopic. ................ 59
Figure 34 - Example 6.3.4: The ordering of detectors is not myopic. ................ 59
Figure 35 - Problem Structure with Infinite Horizon ................ 61
Figure 36 - Problem Structure with Information at Multiple Cost Types ................ 65
Figure 37 - Example 6.5.1: Value of the deal flow with clairvoyance at different cost structures ................ 66
Figure 38 - Example 6.5.1: Incremental multiple of the deal with clairvoyance at different cost structures ................ 67
Figure 39 - Problem structure with an option to reverse an allocation decision ................ 69
Figure 40 - Problem Structure with the Probability of Knowing Detectors ................ 70
Figure 41 - Step 2 of the Solution Process ................ 72
Figure 42 - Basic Problem Structure ................ 74
Figure 43 - Example Deal Structure ................ 75
Figure 44 - Example Deal Flow ................ 75
Figure 45 - Example Deal Flow Value ................ 77
Figure 46 - Characterizing the IBF of Information ................ 78
Figure 47 - Example IBF Value ................ 79
Figure 48 - Characterization of Optimal Policy ................ 80
Figure 49 - Example Optimal Policy Over Time ................ 81
Figure 50 - Example 7.3.4: The ordering of detectors is not myopic. ................ 82
Figure 51 - Example 7.3.4: The ordering of detectors is not myopic. ................ 82
Figure 52 - Problem Structure with Infinite Horizon ................ 84
Figure 53 - Problem Structure with Information at Multiple Cost Types ................ 86
Figure 54 - Example 7.5.1: Multiples of the Deal Flow with Clairvoyance at Different Cost Structures ................ 87
Figure 55 - Example 7.5.1: Incremental multiples of the deal with clairvoyance at different cost structures ................ 87
Figure 56 - Problem Structure with an Option to Reverse an Allocation Decision ................ 90
Figure 57 - Problem Structure with Probability of Knowing Detectors ................ 91
Figure 58 - Step 3 of the Solution Process ................ 93
Figure 59 - Two Levels of Funnel Application ................ 94
Figure 60 - Application Example: Hardware Deals ................ 96
Figure 61 - Application Example: Software as Service Deals ................ 96
Figure 62 - Example 8.4: Real-Time Decisions Structure ................ 97
Figure 63 - Example 8.4: Should Saad Buy Information? ................ 98
Figure 64 - Example 8.4: Should Saad Invest in the Deals? ................ 99
Figure 65 - Example 8.5: Should Saad Focus on Hardware or Software? ................ 100
Figure 66 - Example 8.5: What Combination of HW and SW Should Saad Focus on? ................ 100
Figure 67 - Example 8.5: With Information, Should Saad Focus on Hardware or Software? ................ 101
Figure 68 - Example 8.5: With Information, What Combination of HW and SW Should Saad Focus on? ................ 101
Figure 69 - Example 8.5: Are the Partner's Skills Complementary or Synergetic? ................ 102
Figure 70 - Example 8.5: How Good Must the Partner's Information be to Justify Hiring her? ................ 103
Figure 71 - Example of a Generic Decision Diagram ................ 186
Figure 72 - Generic Diagram Node Key ................ 187
Figure 73 - Venn Diagram of the Competitive Landscape ................
192 Figure 74 - Cluster Diagram of the Competitorsβ Node ............................................................... 193 Figure 75 - Types of Market Share ............................................................................................... 193 Figure 76 - Example of an internal diagram for Market Share .................................................... 194
Chapter 1 – Introduction

"Life is but Fleeting Opportunities…"
1.1 A New Research Direction
1.1.1 The Need to Formalize DA

Four decades after the introduction of Decision Analysis (DA), applications of this process are still lacking. To disseminate DA, we need to formalize both the process itself and the way we communicate it. Howard and Matheson identified a path for the future of DA in their 1968 paper. The following is a summary of their recommendation:
"Decision analysis procedures will become standardized so as to yield special forms of analyses for the various types of decisions, such as marketing strategy, new product introduction, and research expenditures. This standardization will require special computer programs, terminology, and specialization of concepts for each kind of application."
We are pleased to echo this call for the specialization and standardization of DA. We present a method for doing so in Section 1.1.2, including the standardization approach that we ourselves follow in this dissertation.
1.1.2 Introducing DA Methodologies

We suggest tailoring the decision analysis cycle and the principles of DA to specific areas of application. The researcher tackles a specific area of application, studies the challenges of applying DA in that particular area, and then customizes the DA cycle by developing new tools, definitions, and concepts that address those challenges. We describe the process in the following section.
Definition 1.1.1: DA Methodologies

We define DA Methodologies as customizations of the DA cycle that address challenges within an area of application. This customization is accomplished by introducing tools, definitions, and new concepts to the DA cycle.
1.2 The Research Process

We suggest the following research process to serve as a DA methodology (see Figure 1):
β’ Begin with an observed motivating problem
β’ Develop an intuitive understanding of the motivating problem (what makes it challenging)
β’ Abstract the motivating problem to a research problem
β’ Develop a solution to the research problem
β’ Develop an intuition about the research problem (what impact this problem has)
β’ Customize our solution to the specific motivating problem
Figure 1 - Research Process
In the following sections, we describe each step of the research process and give a summary
of our work within that step.
1.2.1 The Motivating Problem

This work was motivated by a personal experience. My friends started a company, and I helped raise its capital. I was struck by the challenge of how to value the company.
After reviewing the venture capital (VC) literature and talking to Venture Capitalists, I found that the methods currently in use were insufficient. This got me interested in the problem of evaluating startups. As a student of Decision Analysis, I naturally sought to value the startup using the DA process. I give a literature review of the problem of valuing stand-alone opportunities in Chapter 2, and present a more detailed view of venture capital valuation in Appendix A3.
The VC example holds many interesting questions that we can approach through this
process. For example, should the VC invest in a specific deal at hand? Should he buy
information relevant to the deal? What category of deals should the firm focus on? How
good should a potential partner's expertise be to justify hiring him/her? We will answer these motivating questions in Chapter 9.
1.2.2 Intuitive Understanding of the Motivating Problem

Consideration of the VC problem reveals that the real challenge facing a Venture Capitalist lies in analyzing each startup within the context of a multitude of deals requiring immediate action.
There are two dimensions to this challenge. The first relates to determining the optimal
response to each opportunity, given what lies in the future. The second relates to evaluating
alternatives with regard to the deal flow in general. The first dimension includes questions like, "How much analysis is too much?" or, "Should we spend time seeking information about a specific startup or move on to analyzing a different one?"
The second dimension encompasses interesting questions that are difficult to resolve. Some of the questions that we will discuss in this dissertation are, "What category of deals should the firm focus on?" "Are a potential partner's abilities and skills complementary or substitutes?" and, "How good should a potential domain expert's information-gathering ability be to justify hiring him/her?"
1.2.3 The Research Problem

In this section we define the class of problems that is suitable for our process. This class is an abstraction of the challenges we faced when attempting to answer the motivating questions above. First we describe the fleeting opportunities problem class, then we give its constraints, and finally we list some other problems that can be answered within our abstraction.
Problem Description

We consider situations in which decision makers have flows of opportunities becoming available over time. They can accept only a limited number of deals and must decide immediately how to react to each deal as it arrives.
Problem Constraints

We constrain the problem class along two dimensions, namely, constraints on the deals and constraints on the decision makers.
1. Constraints on the deals

1.1. The distribution of deals is irrelevant to the deals that arrived in the past. This means that the decision makers' belief about the deals does not change as they evaluate more deals.

1.2. The effects of the deals must be separable over time. Thus, the impact of a deal arriving in a certain period can be accounted for independently of deals already at hand and of deals arriving in the future.

1.3. The deals are irrelevant to each other. Thus, there is no value to diversification or "hedging" across deals.

1.4. The deals require immediate decisions. This constraint has two implications. First, it means that the deals available do not change during the decision makers' consideration. Second, decision makers cannot change their decisions later. We relax the second implication later in the dissertation by allowing decision makers to reverse a decision to accept a deal.

2. Constraints on the decision makers

2.1. Decision makers are limited to the exponential and power u-curves.

2.2. Decision makers have limited resources.

2.3. Decision makers face a deadline after which they cannot accept more deals. We relax this constraint later in the dissertation but then require discounting of future prospects.
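Constraint 2.1 can be made concrete with a small numerical sketch. The following computes certain equivalents under the two permitted u-curves; the prospects, probabilities, and parameter values are illustrative, not drawn from the dissertation.

```python
import math

def ce_exponential(prospects, probs, gamma):
    """Certain equivalent under an exponential u-curve u(x) = -exp(-gamma * x):
    CE = -(1/gamma) * ln( E[ exp(-gamma * X) ] )."""
    eu = sum(p * math.exp(-gamma * x) for x, p in zip(prospects, probs))
    return -math.log(eu) / gamma

def ce_power(prospects, probs, rho):
    """Certain equivalent under a power u-curve u(x) = x**rho (0 < rho < 1),
    for nonnegative prospects: CE = ( E[ X**rho ] )**(1/rho)."""
    eu = sum(p * x ** rho for x, p in zip(prospects, probs))
    return eu ** (1.0 / rho)

# A 50/50 deal paying 0 or 100 under each u-curve:
print(ce_exponential([0, 100], [0.5, 0.5], gamma=0.01))  # below the mean of 50
print(ce_power([1, 100], [0.5, 0.5], rho=0.5))           # below the mean of 50.5
```

Both certain equivalents fall below the deal's expected value, reflecting risk aversion; the gap widens as gamma rises or rho falls.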
Example Applications

This abstraction extends beyond the Venture Capital problem. For example, consider a movie production company. The producers are continuously approached with scripts, which they consider for production. They cannot accept all the scripts and must decide on each script as it arrives.
In Chapter 9, we give a detailed example using a Venture Capital context. We consider a VC firm contemplating where to focus its efforts. We also study the process of the firm hiring a new partner.
1.2.4 Solution to the Research Problem

We developed a solution that consists of the following three steps:
Figure 2 - Solution to the Research Problem
Setup

In the setup, we determine the frame and the decision basis (assessing beliefs, risk attitude, etc.). We discuss the setup in further detail in Chapter 5.
Valuation Funnel

Our solution to the challenge of fleeting opportunities lies in developing a valuation funnel. The funnel is structured as follows:
Figure 3 - Valuation Funnel
The basic element of the valuation funnel is the following dynamic programming structure:
Figure 4 - Basic Dynamic Program Structure
At each chronological step, the decision maker decides whether to accept the deal at hand,
reject it, seek more information, or apply another alternative (e.g., control, options, hedging,
etc.).
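The basic dynamic program can be sketched numerically. The deal-value distribution, information cost, horizon, and capacity below are all hypothetical, and the sketch is risk-neutral for brevity (the dissertation also treats risk-averse decision makers):

```python
from functools import lru_cache

DEAL_VALUES = [0, 50, 100, 150]   # hypothetical, equally likely deal values
INFO_COST = 5                     # assumed price of perfect information on one deal

@lru_cache(maxsize=None)
def V(t, k):
    """Value of the remaining deal flow with t periods and capacity k left."""
    if t == 0 or k == 0:
        return 0.0
    reject = V(t - 1, k)
    mean = sum(DEAL_VALUES) / len(DEAL_VALUES)
    # Accept or reject without information:
    no_info = max(reject, mean + V(t - 1, k - 1))
    # Buy information: observe the deal's value v, then take the better branch:
    info = sum(max(reject, v + V(t - 1, k - 1))
               for v in DEAL_VALUES) / len(DEAL_VALUES) - INFO_COST
    return max(no_info, info)

print(V(1, 1))  # 75.0: with one period left, simply accept the average deal
print(V(2, 1))  # 95.0: with more time, buying information becomes worthwhile
```

Each cell's maximand corresponds to the alternatives named above (accept, reject, seek information); control, options, or hedging would enter as additional branches of the same maximization.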
1.2.5 Intuitive Understanding of the Research Problem

Our solution provides a DA Methodology for a large class of problems faced by, among others, Venture Capitalists, book publishers, and academic search committees.
Characterizing the optimal policy and the values within the deal structure can provide a
deeper understanding of the underlying problems. The monotonicity of the optimal policy
and the predictable structure of the values of information and control are also worth noting.
1.2.6 Solution to the Motivating Problem

We apply the solution process developed in this dissertation to answer the motivating questions listed in Section 1.2.1.
1.3 Dissertation Structure

The dissertation is organized as follows. We discuss relevant research and background in Chapter 2. In Chapter 3 we discuss modeling the risk attitude of the decision maker, focusing on two u-functions, namely, the exponential and the power u-functions. In Chapter 4 we define the model and notation used to represent the problem of fleeting opportunities. In Chapters 5, 6, 7, and 8 we discuss our valuation template. In Chapter 5, we describe Step 1: modeling the frame and the decision basis. Step 2, building the valuation funnel, is discussed in Chapters 6 and 7: Chapter 6 covers the additive setup and Chapter 7 the multiplicative setup. Step 3, applying the funnel, is explained in Chapter 8, along with detailed examples of applying the process. In Chapter 9 we highlight directions for future research and conclude.
Our proofs are included in Appendix A1. In Appendix A2 we give an example of a generic decision diagram. We give an overview of the venture capital method in Appendix A3.
1.4 Using this Dissertation

This dissertation is developed as a DA methodology for problems fitting the fleeting opportunities structure. It is a valuation template that practitioners may follow to help decision makers with fleeting opportunities. Our dissertation structure is intended to help practitioners apply its results. Practitioners may use Chapter 3 to assess and approximate the risk attitude of the decision maker. Chapter 5 outlines the first step of the solution process and structures the framing and decision basis process. The goal of this step is to formulate the deal flow. After this step, decision makers should have clearly framed boundaries and assessed the alternatives, uncertainties, and preferences relevant to the problem.
Chapters 6 and 7 address the second step for the additive and multiplicative settings,
respectively. The goal of this step is to build the valuation funnel. After carrying out this step,
decision makers will have determined the factors involved in the decision about each deal,
including the different buying prices, information, control, etc. These incremental values, or
multipliers, help the decision maker make real-time choices when alternatives are available.
In addition, the resulting valuation funnel can be easily used to evaluate alternatives
concerning the deal flow as a whole. The two chapters are intended to be independently self-
sufficient, so that the practitioner may use the one relevant to the problem at hand. For this
reason, note that the chapters follow the same structure and discuss the same situations.
Chapter 8 discusses the third step of the process. The goal of this step is to discuss the
application of the valuation funnel. By the end of this step, the decision maker may use the
results of the valuation funnel to evaluate meta-decisions that relate to the process itself.
Additionally, the results will provide the optimal policy regarding real-time decisions.
Chapter 2 – Review & Summary of Contributions

"Observe the wonders as they occur around you. Don't claim them. Feel the artistry moving through and be silent."

Jalal ad-Din Rumi
In this chapter we give an overview of the literature and a summary of our contributions. In Section 2.1 we review the relevant literature; in Section 2.2 we summarize our contributions; and in Section 2.3 we highlight their impact.
2.1 Literature Review

Our motivating problem is studied in two main strands of the literature. The first deals with evaluating decisions focused on an opportunity as it stands alone. The second evaluates decisions about opportunities in the context of repeated offerings. We review both areas of literature in the following sections, first giving a brief review of the relevant literature on risk attitude and the value of information, as our proposed solution contributes to these areas. We refer the reader to papers by Howard and recommend the upcoming book by Howard & Abbas (expected 2011).
The Decision Analysis view of probability is based on the concept of probability as a measure
of belief. The underlying premise is the Bayesian updating of information originally discussed
in Bayes (1763). Laplace (1902) gives an early discussion of this view of probability. Howard
(1965) discusses the application of Bayesian methods in systems engineering. Howard (1992)
revisits this view of probability and compares it to the prevalent statistics view of probability.
Jaynes (2003) gives a detailed study of probability as a measure of belief.
In this dissertation we follow the nomenclature suggested by Howard (2004). Howard
emphasizes using a precise language for decision distinctions. While these might differ from
the accepted norms in the literature, we adopt them for the sake of clarity.
2.1.1 Risk Aversion

For a general overview of the foundations of the theory of risk aversion, we refer the reader to the seminal works of Arrow (1971) and Pratt (1964). We are specifically interested in two forms of u-curves, namely, the exponential u-curve and the power u-curve. For an overview of the exponential u-curve we refer the reader to Howard (1998). One of the earlier
applications of the power u-curve is that of Kelly (1956), who recommends setting the
objective function to the logarithmic u-curve (a special case of the power u-curve). Thorp
(1997) studies the logarithmic u-curve in the context of practical applications. Cover (1991)
discusses the power function in detail, albeit within the context of information theory. We
refer the reader to Abbas (2003) for advanced concepts on utility functions.
We discuss the exponential and power u-curves in more detail in Chapter 3, where we also
present our extensions to the power u-curve.
2.1.2 The Value of Information

The value of information has been studied in a variety of forms in the literature. We are concerned with buying information to improve deal selection; thus, we are concerned with the economic value of buying information and the personal indifference buying price of information (PIBP). For a review of this concept, please refer to Howard (1967). The results in that paper can be directly extended to the value of control; we refer the reader to Matheson and Matheson (2005) for more details on this concept.
Several general properties of the value of information have been characterized in the literature. One such result is that the value of free, but possibly imperfect, information is always nonnegative and is bounded above by the value of perfect information. Another is that the value of information is positive if and only if it changes the optimal decision; if the information does not compel a change in the optimal decision, its value is zero.
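These properties are easy to verify in a small, risk-neutral example (the payoffs and probability below are invented for illustration; under risk neutrality, the PIBP of clairvoyance reduces to this difference in expected values):

```python
p = 0.4                                    # assumed P(Success)
payoff = {'invest': {'S': 100, 'F': -50},  # hypothetical payoffs
          'pass':   {'S': 0,   'F': 0}}

def ev(alt):
    """Expected value of an alternative given P(Success) = p."""
    return p * payoff[alt]['S'] + (1 - p) * payoff[alt]['F']

best_without = max(ev(a) for a in payoff)   # invest: 0.4*100 + 0.6*(-50) = 10
# With clairvoyance, the decision maker picks the best act in each state:
with_clairvoyance = (p * max(payoff[a]['S'] for a in payoff)
                     + (1 - p) * max(payoff[a]['F'] for a in payoff))
print(with_clairvoyance - best_without)     # value of clairvoyance: 30
```

The value is positive because clairvoyance changes the decision in the Failure state (pass instead of invest); if investing were best in both states, the same calculation would return zero.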
Gould (1974) showed the lack of a monotonic relationship between the value of information
and the risk aversion coefficient. Hilton (1981) surveyed the properties of the value of
information. Hilton further demonstrated the lack of a monotonic relationship between the
value of information and decision flexibility. In a different domain, Barron and Cover (1988)
studied the value of information in repeated gambles with logarithmic utility. They defined
the value of information in growth ratios and proposed bounds on the value.
More recently, Delquié (2008) gave a brief overview of the research on the value of
information and attempted to characterize the value of information through the intensity of
preference, i.e., the difference in utility across alternatives. Additionally, Bickel (2008)
defined and characterized the relative value of information (RVOI) as the value of imperfect
information relative to that of perfect information. Using normal priors, exponential utility,
and two alternatives, Bickel showed that RVOI is maximal when the decision maker is
indifferent between the two alternatives.
2.1.3 Stand-alone Opportunities

We classify the methods of evaluating stand-alone opportunities into three categories, namely, discounted cash flow methods, decision analytic methods, and application-specific methods.
Discounted Cash Flow

Most valuation systems in the business world are based on discounted cash flow (DCF) analysis. Here, decision makers are asked to estimate the future cash flows of the different alternatives, discount them for risk, and finally choose the alternative with the highest net present value. The main issue with such a system is that it mixes time and risk preference into a single parameter, the discount factor. Still, the DCF literature is useful for our purposes, as it gives a detailed view of how to build the economic model of a decision before we incorporate uncertainties. For the practitioner, we suggest Damodaran (2002) and Copeland et al. (2005). A brief tutorial on DCF is given in Jennergren (2002).
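For concreteness, the DCF recipe amounts to the following (the cash flows and rate are invented; note how the single rate must carry both time preference and risk adjustment, which is the issue raised above):

```python
def npv(cash_flows, rate):
    """Net present value of cash flows c_0, c_1, ..., discounted at a single
    rate that bundles time preference and risk adjustment together."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# A venture costing 100 today with expected returns of 40 in each of four years:
print(round(npv([-100, 40, 40, 40, 40], rate=0.10), 2))  # 26.79
```

Under the DCF rule, the venture is accepted because its NPV is positive; a decision-analytic treatment would instead separate the time discounting from the risk attitude.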
Decision Analysis

This dissertation is based on the Stanford system of Decision Analysis (DA), and we assume that the reader is well versed in DA. The use of DA allows the decision maker to assess risk and time preference separately, and the principles of DA allow the decision maker to calculate the value of information and control, among other quantities. The term "Decision Analysis" was coined by Howard in his seminal paper, Howard (1966a). Other key papers by Howard include Howard (1966b), where he presented an early treatment of the value of information. Howard (1968) gives his early detailed study of decision analysis; he revisited this analysis in several later papers, most notably Howard (1988) and Howard (2007).
Another set of references for DA is based on the view of multiple objectives. Notable papers
in this strand include Pratt et al. (1964), Raiffa (1968), Keeney & Raiffa (1976), and Keeney
(1982).
For applications of DA, we refer the reader to McNamee & Celona (1990), Matheson &
Matheson (1998), and Matheson (1983). On assessments, we refer the reader to Tversky &
Kahneman (1974) for their seminal paper on biases and to Spetzler & von Holstein (1975) for
a discussion of assessments in the light of biases.
Decision diagrams are an essential tool for framing and structuring the decision problem.
They were first introduced by Howard & Matheson (1981). We refer the reader to Shachter
(1986) and Shachter (1988) for a detailed study of decision diagrams and influence diagrams.
Specific Applications

In addition to the above two methods, there is a variety of research on valuation and decision-making in specific areas of application. These studies span both descriptive and prescriptive methods. As an example, we give a brief review of the literature on venture capital decision-making. The prescriptive literature listed below conflicts with the principles of Decision Analysis largely in its attitude towards uncertainty.
Descriptive

The descriptive research on VC decision-making began in the 1970s. Wells (1973) was among the earliest to examine the criteria used by VC to evaluate ventures. The earlier papers suggested that VC focus more on the characteristics of the team, while later papers suggest that industry and market characteristics are more important. Most of this work, however, was done through interviews or questionnaires and had small sample sizes. Fried & Hisrich (1988) gave a more extensive study and suggested a unifying model for
describing the VC decision-making process. They conclude that VC apply three principles
when evaluating investments, namely, the viability of the project, the capability of the
management, and the size of the prospects. Multiple authors extended this work, most
notably Zacharakis & Shepherd (2001), who incorporated specific cognitive biases that affect
VC. Others include Zacharakis & Meyer (1998) and Shepherd & Zacharakis (1999). In
Appendix A3 we give an overview of VC valuation in the literature and from VC surveys.
Prescriptive

Interest in building prescriptive models for VC decision-making grew in the 1980s. One of
the earlier models was developed by Tyebjee & Bruno (1984). This work was further
extended by others, notably MacMillan et al. (1985) and MacMillan et al. (1987). These
models were based on correlation relations. Zacharakis & Meyer (2000) and Shepherd &
Zacharakis (2002) argue that decision aids should be the focus of future research on VC
decision-making. They also tested some of the previous models and found that the models outperformed the VCs in their sample. Kemmerer et al. (unpublished) developed a causal
Bayesian network as a decision aid for Venture Capitalists.
2.1.4 Repeated Opportunities

Having reviewed the literature on stand-alone opportunities, we turn to evaluating opportunities within a deal flow. The deterministic problem related to our work is the Knapsack Problem (KP), and the probabilistic problem related to our work is the Secretary Problem (SP). Among the later extensions of the secretary problem, the Dynamic Stochastic Knapsack Problem (DSKP) is worth noting. In the following we give a brief review of the KP, the SP, and the DSKP.
Knapsack Problem

In this static resource allocation problem, the resource capacity is known and the requests have known requirements and rewards. The goal is to maximize the total reward obtained by distributing the resources most effectively over the requests. The stochastic version of the KP differs in that the rewards and/or requirements are random, while the set of items is still known. The stochastic KP was first introduced by Ross & Tsang (1989). Multiple objective functions have been studied, including maximizing the mean value, a percentile of the total value, and a linear combination of mean and variance. Refer to Kellerer et al. (2004) for a detailed overview of the different variations.
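A minimal instance of the deterministic KP, solved by the standard dynamic program (the capacity, requirements, and rewards are illustrative):

```python
def knapsack(capacity, items):
    """0/1 knapsack via dynamic programming.
    items: list of (requirement, reward) pairs, all known in advance."""
    best = [0] * (capacity + 1)
    for req, reward in items:
        # Iterate capacity downward so each item is used at most once:
        for c in range(capacity, req - 1, -1):
            best[c] = max(best[c], best[c - req] + reward)
    return best[capacity]

# Three requests competing for a capacity of 5 units:
print(knapsack(5, [(2, 3), (3, 4), (4, 6)]))  # 7: take the first two requests
```

The stochastic and dynamic variants discussed next replace the known `items` list with requests whose rewards or requirements are revealed only on arrival.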
Secretary Problem

In the original secretary problem, a series of secretaries is interviewed until one offer is made. The objective is to allocate a single resource (a job) to a single request (a secretary applicant) with the aim of obtaining one of the best applicants; that is, to minimize the rank of the applicant selected relative to the others. Gilbert & Mosteller (1966) first extended the secretary problem to allow for multiple choices. Two types of objectives were
studied; the first endeavors to minimize the ranks of the requests granted and the second to
maximize the rewards. Some of the main problems considered in reward maximization
include the asset selling problems and the sequential stochastic assignment problems (SSAP).
Kleinberg (2005) considered the DSKP as a generalization of this strand.
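The classical single-choice policy (observe roughly n/e applicants without committing, then take the first who beats them all) succeeds with probability approaching 1/e; a quick simulation illustrates this (the parameters are arbitrary):

```python
import random

def secretary_rule(n, trials=20000, seed=0):
    """Simulate the classical secretary policy and return the fraction of
    trials in which the overall best applicant is selected."""
    rng = random.Random(seed)
    cutoff = round(n / 2.718281828)          # observe ~n/e applicants first
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))               # 0 = worst, n - 1 = best
        rng.shuffle(ranks)
        best_seen = max(ranks[:cutoff], default=-1)
        # Take the first later applicant beating everyone seen; else the last:
        chosen = next((r for r in ranks[cutoff:] if r > best_seen), ranks[-1])
        wins += (chosen == n - 1)
    return wins / trials

print(secretary_rule(50))   # close to 1/e, roughly 0.37
```

The "full information" variants below relax the rank-only feedback: the decision maker observes each offer's actual value rather than only its relative rank.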
In the asset selling problem, also called the "full information" secretary problem, the decision maker holds assets for which offers (deals) come in over time. In MacQueen & Miller (1960), the decision maker has a holding cost and the offers are random. The SSAP is concerned with assigning people to requests arriving over time. Each person has a known value and each job has a random value, which becomes known upon arrival. Upon assigning a specific job to a specific person, the decision maker attains the product of the two values. Refer to Derman et al. (1972) for a discussion of the SSAP.
Dynamic Stochastic Knapsack Problem

The Dynamic Stochastic Knapsack Problem was studied by Papastavrou et al. (1996) and Kleywegt & Papastavrou (1998). Their original setup considered a limited-capacity resource with requests arriving randomly over time. The requests have random rewards and/or random capacity requirements that become known only upon arrival. The decision maker must decide in real time whether to accept or reject each request, and recall is not allowed. The study showed that the problem has an optimal policy defined by a threshold. They also study the changes in the optimal policy as the deal flow progresses over time. They excluded the possibility of seeking information and hence required that the decision maker either accept or reject an incoming deal. Furthermore, they did not allow the decision maker to be risk-averse.
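The threshold structure of the optimal policy can be reproduced in a few lines for a risk-neutral, unit-size-request special case (the reward distribution is invented): the threshold at each state is the opportunity cost of a unit of capacity, V(t-1, k) - V(t-1, k-1).

```python
from functools import lru_cache

REWARDS = [1, 2, 3, 4, 5]       # hypothetical request rewards, equally likely
P = 1.0 / len(REWARDS)

@lru_cache(maxsize=None)
def V(t, k):
    """Expected total reward with t arrivals and capacity k remaining; a
    unit-size request is accepted iff its reward clears the threshold."""
    if t == 0 or k == 0:
        return 0.0
    threshold = V(t - 1, k) - V(t - 1, k - 1)   # opportunity cost of a slot
    return V(t - 1, k) + P * sum(max(0.0, r - threshold) for r in REWARDS)

# The acceptance threshold rises as more arrivals remain:
for t in (1, 2, 10):
    print(t, round(V(t - 1, 1) - V(t - 1, 0), 3))
```

With a single unit of capacity, the threshold is 0 at the last arrival (accept anything) and climbs toward the maximum reward as the remaining horizon grows, matching the monotone policies described above.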
Kleywegt (1996) studied different variations of the DSKP, working on discrete and continuous time horizons and random requirements and/or rewards. Along similar lines, Van Slyke & Young (2000) studied yield management problems. Boulis & Srivastava (2004) used the DSKP to design power-saving mechanisms in wireless networks. Feller (2002) extended the DSKP to a multidimensional setting in which the resource has multiple dimensions across which the requests have their requirements. While extensions to the DSKP have been varied, they usually focused on applying the DSKP to different problems without much extension of the underlying problem structure.
2.2 Contributions

Now that we have reviewed the extant research, we highlight our contributions, which address some of the limitations in the research just described. While our focus is on the context of repeated opportunities, we also offer some contributions in the context of stand-alone opportunities. A summary of our contributions is given below.
2.2.1 Risk Aversion

We extend the application of the power u-curve by defining the certain equivalent multiplier and the u-value multiplier. These definitions simplify the calculations in the multiplicative context. This is of special interest to us, as many of the motivating questions we consider involve situations in which equity is the medium of payment, and hence the prospects are multiplicative.
We additionally studied different approximations to the power u-curve, approximating the results using several moments of the underlying distribution. Of particular note is the approximation of the u-multiplier as a function of the moments of the logarithm of the prospect's probability distribution. For a large range of risk aversion, this approximation converges quickly, requiring only a few moments.
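The flavor of such an approximation can be illustrated with a cumulant expansion of the logarithm of the prospect under a power u-curve; this sketch is our own illustration of the idea, not necessarily the dissertation's exact formula, and the multipliers and probabilities are invented:

```python
import math

def exact_ce_multiplier(mults, probs, rho):
    """Exact certain equivalent multiplier under a power u-curve u(x) = x**rho."""
    return sum(p * m ** rho for m, p in zip(mults, probs)) ** (1.0 / rho)

def approx_ce_multiplier(mults, probs, rho):
    """Approximation from the first three cumulants of Y = ln X:
    ln CE ~ k1 + rho*k2/2 + rho^2*k3/6."""
    ys = [math.log(m) for m in mults]
    k1 = sum(p * y for y, p in zip(ys, probs))
    k2 = sum(p * (y - k1) ** 2 for y, p in zip(ys, probs))
    k3 = sum(p * (y - k1) ** 3 for y, p in zip(ys, probs))
    return math.exp(k1 + rho * k2 / 2 + rho ** 2 * k3 / 6)

mults, probs = [0.5, 1.0, 3.0], [0.3, 0.4, 0.3]
print(exact_ce_multiplier(mults, probs, 0.3))   # exact value
print(approx_ce_multiplier(mults, probs, 0.3))  # agrees to ~3 decimal places here
```

For moderate rho, the two values are close, showing how a few moments of ln X can stand in for the full distribution.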
2.2.2 The Value of Information

The value of information has proven difficult to characterize in general in the literature. Within the structure of fleeting opportunities, we characterized the value of the deals with information in relation to the parameters of the model (i.e., capacity and time). More interestingly, we characterized the value (IBP) of information and found that it exhibits a complex relationship with both time and capacity. In relation to both parameters, the IBP first increases until it reaches a maximum at a well-defined quantity and then decreases until it converges to the value of information on the specific deal outside the deal flow. In other words, the IBP for information about a deal within the deal flow begins lower than the IBP for the same deal standing alone, rises to a maximum that is often higher than the stand-alone IBP, and finally converges to the stand-alone value. That the value begins below the stand-alone value reflects the decision maker's opportunity to wait for a better deal, which acts, in effect, as a substitute for information, as expected. However, the observation that the IBP within the deal flow can actually exceed the IBP outside the deal flow was an unexpected result.
2.2.3 Stand-alone Opportunities We further developed the understanding of the power u-curve within a multiplicative setting
so that it can be incorporated into the DA process. We defined the certain equivalent
multiplier (CM) and defined the value of information and control in terms of the CM. We also
studied how to approximate the CM based on the moments of the underlying distribution.
2.2.4 Repeated Opportunities We extended the research by allowing a multiplicative setting and integrated more
alternatives. Specifically, we incorporated risk aversion (exponential and power u-curves),
value of information, value of control, and value of options. In the case of information, we
studied how the value of information changes with time and capacity. We also characterized
the optimal policy against time and capacity. Further, we discussed extending the cost
structure to include capacity and time requirements. We also studied relaxing the deadline
requirement and allowing an infinite horizon.
In our extensions, we were able to maintain the intuitive threshold optimal policies. We also
studied how the value of the deal flow changes with capacity and time. The results were
similar in both the additive and multiplicative settings.
2.3 Impact of These Contributions The system we created gives a valuation template for the fleeting opportunities problem.
This template allows decision makers to answer questions about single opportunities as they
arise and about the deal flow as a whole. We believe this template will help extend the
application of Decision Analysis and spur more research within the fleeting opportunities
problem and, more generally, valuation problems.
Chapter 3 – Risk Attitude "The actual science of logic is conversant at
present only with things either certain,
impossible, or entirely doubtful, none of which
(fortunately) we have to reason on. Therefore
the true logic for this world is the calculus of
Probabilities, which takes account of the
magnitude of the probability which is, or ought
to be, in a reasonable man's mind."
James Clerk Maxwell
In this chapter we discuss the decision makers' attitudes towards risk. We solve the process
using two u-curves, namely, the exponential and the power u-curves. We use them,
respectively, in the additive and multiplicative settings described above.
In Section 1, we give a summary of the exponential u-curve. In Section 2, we discuss our work
on the power u-curve in detail. The proofs for our results in Section 2 are in the appendix. In
Section 3 we compare both u-curves and elaborate on the conditions leading to the choice of
one over the other.
3.1 The Exponential U-Curve
3.1.1 Introduction In this section we give an overview of the exponential u-curve, its risk aversion coefficients,
and an assessment method. In Section 2 we give some definitions related to the exponential
u-curve. In Section 3 we discuss the u-curve's application, and then we specifically study the
value of information and control in Section 4. Finally, we give a proximal analysis in Section 5.
Definition 3.1.1: Exponential U-Function The exponential u-function is defined as:

u(x) = 1 - e^{-\gamma x}

This function has one main parameter, γ, which is used to model attitude towards risk.
Risk Attitudes
Absolute Risk Aversion We study the risk attitude modeled by the exponential u-function using the Arrow-Pratt
coefficients of absolute and relative risk aversion. Refer to Arrow (1971) and Pratt (1964).
For the exponential u-function we find the absolute risk aversion coefficient as follows:

\frac{u''(x)}{u'(x)} = \frac{-\gamma^2 e^{-\gamma x}}{\gamma e^{-\gamma x}} = -\gamma
The coefficient is specified solely by Ξ³ and it determines the risk attitude of the decision
maker as follows:
• γ > 0: u''/u' < 0, risk-averse
• γ = 0: u''/u' = 0, risk-neutral
• γ < 0: u''/u' > 0, risk-seeking
Relative Risk Aversion The Arrow-Pratt coefficient of relative risk aversion for the exponential u-function is given by

\frac{u''(x)}{u'(x)} \cdot x = -\gamma x

Hence, the exponential u-function does not have a constant relative risk aversion coefficient; its relative risk aversion grows linearly in x, while its absolute risk aversion coefficient is the constant γ.
Assessment Method Here we suggest a method, devised by Howard (1998), to assess γ. Decision makers are
asked to answer the following question: given a deal in which they can gain x with probability
p or lose x with probability (1 − p), for what value of p is the deal worth nothing to them? In
other words, for what p are they indifferent between the deal and receiving nothing?
After we assess p we can obtain γ as given by:

\gamma = \frac{1}{x} \ln\left(\frac{p}{1-p}\right)

For example, when p = 0.5, γ = 0 and the decision maker is risk-neutral.
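As a quick numerical check of this assessment formula, the following sketch (assuming a unit stake, x = 1, so γ = ln(p/(1 − p))) verifies that the implied γ makes the assessed deal worth exactly zero in expected u-value:

```python
import math

def gamma_from_p(p):
    # gamma implied by the assessed indifference probability p for a
    # unit-stake deal: gain 1 w.p. p, lose 1 w.p. (1 - p)
    return math.log(p / (1 - p))

def u(x, g):
    # exponential u-function: u(x) = 1 - exp(-g * x)
    return 1.0 - math.exp(-g * x)

p = 0.6                                      # hypothetical assessed probability
g = gamma_from_p(p)                          # ln(0.6/0.4) > 0: risk-averse
eu = p * u(1.0, g) + (1 - p) * u(-1.0, g)    # expected u-value of the deal
# eu should equal u(0) = 0, i.e., the deal is worth nothing
```

Any other γ would leave the deal with a strictly positive or negative u-value, so the indifference question pins γ down exactly.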
3.1.2 Definitions Decision makers with an exponential u-curve are said to follow the delta property. The delta
property, defined in Howard (1998), means that decision makers' certain equivalent of a
specific deal is invariant to shifting in the prospects of the deal. That is, if the prospects are
shifted by Ξ΄ then the certain equivalent is also merely shifted by Ξ΄. This is true because the
absolute risk-aversion coefficient is constant in the prospects. For more on the delta
property please refer to Howard (1998).
3.1.3 Applying the Exponential U-Curve The exponential u-curve simplifies the calculation of the certain equivalent of the deal.
Decision makers do not need to include their wealth when calculating the certain equivalent
of a given deal.
3.1.4 The Value of Information and Control Along the same lines as Section 3.1.3, the value of information and control can be easily
calculated when decision makers follow the delta property. The value of information and
control can be calculated by finding the value of the deal with free information and control
and then subtracting the value of the deal without information and control from it. More on
this can be found in Howard (1998).
3.1.5 Proximal Analysis When only a few statistics are available to describe the uncertainties at hand, we can
approximate the certain equivalent using proximal decision analysis as shown in Howard (1970). Howard
gave the following approximation to the certain equivalent of a deal X when the decision
maker follows the delta property.
CE(X) = \sum_{n \ge 1} \frac{(-1)^{n-1} \gamma^{n-1}}{n!} \kappa_n
where \kappa_n is the nth cumulant of the deal X, defined recursively in terms of the raw moments m_n as follows:

\kappa_n = m_n - \sum_{i=1}^{n-1} \binom{n-1}{i-1} \kappa_i \, m_{n-i}
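To illustrate how the series behaves, here is a small sketch (using a hypothetical two-outcome deal, not one from the text) that builds cumulants from raw moments via the recursion above and compares the truncated series with the exact exponential certain equivalent:

```python
import math
from math import comb

# Hypothetical discrete deal: prospects xs with probabilities ps
xs, ps = [10.0, -5.0], [0.6, 0.4]
gamma = 0.02                      # risk-aversion coefficient

def moment(n):
    # nth raw moment m_n of the deal
    return sum(p * x**n for p, x in zip(ps, xs))

def cumulants(N):
    # kappa_n = m_n - sum_{i=1}^{n-1} C(n-1, i-1) * kappa_i * m_{n-i}
    ks = []
    for n in range(1, N + 1):
        k = moment(n) - sum(comb(n - 1, i - 1) * ks[i - 1] * moment(n - i)
                            for i in range(1, n))
        ks.append(k)
    return ks

def ce_series(N):
    # CE(X) = sum_{n>=1} (-1)^(n-1) * gamma^(n-1) * kappa_n / n!
    return sum((-1)**(n - 1) * gamma**(n - 1) * k / math.factorial(n)
               for n, k in enumerate(cumulants(N), start=1))

# Exact certain equivalent under the exponential u-curve
ce_exact = -math.log(sum(p * math.exp(-gamma * x)
                         for p, x in zip(ps, xs))) / gamma
```

The first term is the mean (κ₁), the second subtracts a variance penalty (γκ₂/2), and further terms refine the estimate; a handful of cumulants already lands very close to the exact value.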
3.2 The Power U-Curve Now that we have given an overview of the exponential u-curve we turn to the power u-
curve.
3.2.1 Introduction In this section we give the equation of the power u-curve, its risk aversion coefficients, and a
suggestion for an assessment method. In Section 2 we give some definitions related to the
power u-curve. In Section 3 we discuss the u-curve's application, and then we specifically study
the value of information and control in Section 4. Finally, we give a proximal analysis in
Section 5.
Definition 3.2.1: The Power U-Function The power u-function is defined as:

u_\lambda(x \cdot w) = \frac{(x \cdot w)^\lambda - 1}{\lambda}
This function has two parameters, namely, w and λ. The term (x·w) represents the prospects
of the deal, where x is the distribution of the deal as a multiple of w. We will discuss the
interpretation of the parameters next. In general, however, the parameters are chosen to
best fit the u-function to the assessed risk attitude of the decision maker.
The following is the definition of wealth by Bernoulli (1954):
"By this expression [wealth] I mean to connote food, clothing, all things which add to
the conveniences of life, and even to luxury – anything that can contribute to the
adequate satisfaction of any sort of want. There is then nobody who can be said to
possess nothing at all in this sense unless he starves to death. For the great majority
the most valuable portion of their possessions so defined will consist in their
productive capacity, this term being taken to include even the beggar's talent: a man
who is able to acquire ten ducats yearly by begging will scarcely be willing to accept a
sum of fifty ducats on condition that he henceforth refrain from begging or otherwise
trying to earn money."
Following this definition, we can in general interpret w as the wealth of the decision maker,
while Ξ» can be understood as a measure of his risk aversion. More detailed analysis of the
parameters follows within our discussion of the risk attitude represented by the power u-
curve.
Risk Attitude
Absolute Risk Aversion As in 3.1.1, we study the risk attitude modeled by the power u-function using the Arrow-Pratt coefficients of absolute and relative risk aversion.
For the power u-function we find the absolute risk aversion coefficient as follows:

\frac{u''}{u'} = \frac{\lambda(\lambda - 1)(x \cdot w)^{\lambda - 2}}{\lambda (x \cdot w)^{\lambda - 1}} = \frac{\lambda - 1}{x \cdot w}
The prospects of the deal are non-negative; hence λ determines the risk attitude of the
decision maker as follows:
• λ > 1: u''/u' > 0, risk-seeking
• λ = 1: u''/u' = 0, risk-neutral
• λ < 1: u''/u' < 0, risk-averse
Note that λ = 0 reduces the absolute risk aversion coefficient to −1/(x·w), which is equivalent to
that of the log u-function.
For λ < 1, the risk aversion of the decision maker decreases as the prospect level increases. It is
worth noting that for risk-averse decision makers, the interpretation of w differs depending
on λ. Figure 5 shows the power u-function for λ ≤ 0 as x → 0.
Figure 5 – Power u-function for λ ≤ 0
Note that, as x → 0, the u-curve for λ ≤ 0 approaches negative infinity. This falls in line with
Bernoulli's inclusion, in his definition of wealth, of the stipulation that one will never take a deal
with the possibility of losing all of one's wealth.
Figure 6 shows the power u-function for 0 < λ < 1 as x → 0:
Figure 6 – Power u-function for 0 < λ < 1
Here we note that the u-curves for 0 < λ < 1 have finite negative values when the prospect
is 0. This result indicates that the decision makers are willing to take on deals that have
nonzero probability of losing all of their wealth. If we use Bernoulli's definition of wealth, we
will be hard-pressed to find a decision maker who is willing to take such a chance. However,
we note that the u-curve can be scaled by w so that we can still call it wealth.
Relative Risk Aversion The Arrow-Pratt coefficient of relative risk aversion for the power u-function is given by

\frac{u''}{u'} \cdot (x \cdot w) = \lambda - 1

Hence, the power u-function has a constant relative risk aversion coefficient.
Assessment Method In the remainder of this dissertation we will use the following u-function:

u(x \cdot w) = \frac{(x \cdot w)^\lambda}{\lambda}

This function is easier to deal with mathematically, and it leads to the same conclusions as
the general power u-function discussed above, because it differs from u_λ only by a constant shift.
In the following analysis we assess λ, taking w to be the wealth of the decision
maker, as defined by Bernoulli. The decision maker has to answer the following question:
given a deal that doubles his wealth with probability p or halves it with probability 1 − p, for
what value of p is he indifferent between his current wealth and the deal?
From this we can easily infer λ to be:

\lambda = \log_2\left(\frac{1-p}{p}\right)
When p = 0.5, λ = 0, and hence we have the log u-function. Note that Abbas (2007) shows
that it is not sufficient to assert that the decision maker is invariant to scaling in the
prospects on the basis of specific assessments. To clarify this point, Abbas gives an
example in which the decision maker asserts invariance to scaling in the prospects with
regard to an infinite series of assessments and still does not follow the power u-function. This
issue can be overcome by directly asking decision makers if they are willing to follow this
property across the prospective range of concern.
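A quick sketch confirms that λ = log₂((1 − p)/p) is exactly the value that makes the double-or-halve deal worth the current wealth:

```python
import math

def lambda_from_p(p):
    # lambda implied by the assessed indifference probability p for a deal
    # that doubles wealth w.p. p and halves it w.p. (1 - p)
    return math.log2((1 - p) / p)

p = 0.6                      # hypothetical assessed probability
lam = lambda_from_p(p)       # negative here, i.e., a risk-averse decision maker

# With u(xw) = (xw)^lam / lam, the indifference condition
# p*u(2w) + (1-p)*u(0.5w) = u(w) reduces (dividing by u(w)) to:
lhs = p * 2**lam + (1 - p) * 0.5**lam
# lhs should equal 1
```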
3.2.2 Definitions
Multipliers
Definition 3.2.2: U-Multiplier (UM) We denote by UM the u-multiplier of an alternative, defined as:

UM = \frac{u(X)}{u(W_0)} = \sum_{i=1}^{n} p_i (1 + r_i)^\lambda
over the probability distribution of the alternative. The UM of a deal is the maximum of the
u-multipliers of its alternatives.
It is also possible to define the certain equivalent multiplier of the deal, which relates directly
to the u-multiplier.
Definition 3.2.3: Certain Equivalent Multiplier (CM) We denote by CM the certain equivalent multiplier. Consider a deal with a u-multiplier of
UM. The CM of the deal is then defined as follows:

CM = \frac{CE(X)}{W_0} = \left(\sum_{i=1}^{n} p_i (1 + r_i)^\lambda\right)^{1/\lambda} = UM^{1/\lambda}
Interpreting the CM Intuitively, we can think of the CM as the risk-adjusted return of an investment (as it relates
to one's initial wealth). For risk-neutral people, the CM equals the average return on the
initial wealth. For risk-averse people, the CM is the return multiple at which the decision
maker is indifferent between having CM · W_0 for certain and taking the uncertain deal.
When faced with such investment opportunities, the decision maker has at least one other
alternative, which is usually to invest his assets in a bond or savings account with a
guaranteed return of r0. This allows us to split the u-multiplier into two factors; the first
includes outcomes that generate a return larger than r0 and the second includes outcomes
that generate a return less than r0.
24
Example 3.2.1 To demonstrate the use of the above definitions, we introduce the following example. Here
we consider the case of Sara, an entrepreneur faced with the following deal.
Figure 7 – Example 3.2.1 Original Deal Structure
Assume that the savings rate is 5%, which is currently the only other alternative for the
investment available to Sara. Sara follows the power u-curve with λ = 0.5. This allows Sara
to calculate her UM and CM as follows:

UM = 0.05(1 + 10)^{0.5} + 0.4(1 + 0.3)^{0.5} + 0.45(1 - 0.1)^{0.5} + 0.1(1 - 0.6)^{0.5} = 1.11205

Thus, the CM is

CM = UM^{1/\lambda} = (1.11205)^2 = 1.23667

The tree above, then, reduces to the following:
Figure 8 – Example 3.2.1 Simplified Deal
As shown by the figure above, this startup is equivalent to a 23.7% certain return for Sara,
which is higher than the 5% savings rate. Therefore, starting the company is Sara's best decision.
[Figure 7 tree: Start → IPO (p = 0.05, (1+1000%)W0), Do Well (p = 0.4, (1+30%)W0), Escape (p = 0.45, (1−10%)W0), Go Under (p = 0.1, (1−60%)W0); Don't → (1+5%)W0]
[Figure 8 tree: Start ≡ (1+23.7%)W0 (preferred); Don't → (1+5%)W0]
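The calculation above can be reproduced in a few lines (the four branches are taken from the tree in Figure 7):

```python
# Sara's deal (Example 3.2.1): probabilities and return multiples R_i = 1 + r_i
probs = [0.05, 0.40, 0.45, 0.10]              # IPO, Do Well, Escape, Go Under
mults = [1 + 10.0, 1 + 0.3, 1 - 0.1, 1 - 0.6]
lam = 0.5

um = sum(p * R**lam for p, R in zip(probs, mults))   # u-multiplier
cm = um ** (1 / lam)                                 # certain equivalent multiplier
# um ~ 1.11205 and cm ~ 1.23667: a 23.7% certain-equivalent return
```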
3.2.3 Applying the Power U-Curve In this section we give two definitions describing the indifference buying and selling
fractions. These definitions follow closely those of the Personal Indifference Buying and
Selling Prices as defined by Howard (1998).
Indifference Buying Fractions
Definition 3.2.4: The Personal Indifference Buying Fraction The personal indifference buying fraction, Fb, is the certain fraction that, when applied to all
the prospects of the deal, makes decision makers indifferent between having the deal and
keeping their current wealth. In other words, decision makers are indifferent between their
current wealth and having the deal multiplied by (1 − Fb).
Proposition 3.2.1 The fraction Fb for which decision makers are indifferent between having that fraction as a
certain return on their initial wealth and applying the uncertain deal (p_i, r_i) with a certain
equivalent multiplier CM to their wealth satisfies:

F_b = \frac{CM - 1}{CM}
Definition 3.2.5: Personal Indifference Selling Fraction The personal indifference selling fraction, Fs, is the fraction that, when offered as a certain
return on the decision makers' initial wealth, makes them indifferent between having the deal
and having the certain return. In other words, decision makers are indifferent between their
current wealth multiplied by (1 + Fs) and having the deal.
Proposition 3.2.2 The fraction Fs for which the decision maker is indifferent between keeping a deal (p_i, r_i) with
a certain equivalent multiplier CM that he currently owns and having Fs as a certain return on
his initial wealth satisfies:

F_s = CM - 1
Combining Deals In the following section we consider the effect of combining irrelevant deals.
26
Proposition 3.2.3 Consider a decision maker with a deal with a certain equivalent multiplier CM1 who is offered
a deal with a certain equivalent multiplier CM2 in addition to the first deal. If the two deals
are irrelevant, then the decision maker's CM of the combined deal is:

CM(combined\ deal) = CM_1 \cdot CM_2

The maximum fraction decision makers should be willing to offer in return for the second
deal is their indifference buying fraction for the second deal, namely:

F_b = \frac{CM_2 - 1}{CM_2}
Example 3.2.2 At a dinner party, Sara is introduced to a marketing expert who believes she can help Sara
reach her target market more efficiently. This marketing strategy can improve the value of
the startup as follows:
Figure 9 β Example 3.2.2 Deal Structure with a Combined Deal
The marketing expert offers to help Sara in exchange for 10% of the company. Sara is able to
calculate the maximum she should offer her as follows.
First, Sara finds the CM of the new deal as

CM = 1.178571

Now, the indifference buying fraction for the expert's services equals:

F_b = \frac{CM - 1}{CM} = \frac{1.178571 - 1}{1.178571} = 0.151515

Since this fraction (~15.2%) is higher than the 10% cost of the service, Sara immediately
accepts the expert's offer.
[Figure 9 tree: marketing improvement — Gain (p = 0.75, (1+30%)S), Loss (p = 0.25, (1−15%)S)]
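A sketch of the combined-deal arithmetic in this example:

```python
lam = 0.5

# Marketing improvement deal (Figure 9): gain 30% w.p. 0.75, lose 15% w.p. 0.25
um2 = 0.75 * 1.30**lam + 0.25 * 0.85**lam
cm2 = um2 ** (1 / lam)          # ~ 1.178571

# Maximum fraction of the company worth giving up for the improvement
fb = (cm2 - 1) / cm2            # ~ 0.1515

worth_taking = fb > 0.10        # the expert asks for only 10%
```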
3.2.4 The Value of Information and Control Along similar lines to Section 3.1.4, using the power u-curve allows us to derive closed-form
expressions for the value of information and control in terms of equity payments. These are
presented in the following propositions.
Value of Information The value of information can be calculated by comparing the CM of the deal with free
information to that with no information.
Proposition 3.2.4 Let CM1 and CM2 be the CM of the deal without information and with free information,
respectively. The maximum percentage of equity the decision maker should be willing to pay can be
calculated as:

F_b = 1 - \frac{CM_1}{CM_2}

Or, in terms of the UM:

F_b = 1 - \left(\frac{UM_1}{UM_2}\right)^{1/\lambda}
Example 3.2.3 Now, we go back to the original scenario in which Sara was considering whether or not to
start the venture. A market researcher offers Sara the following deal. In return for 10% of
Sara's equity, he will tell her whether or not the startup will "go under." Should Sara accept?
The deal with free help from the market researcher is represented as follows:
Figure 10 – Example 3.2.3 Deal Structure with Perfect Information
[Tree: "Goes Under" (p = 0.1) → Don't (1+5%)W0; "Doesn't" (p = 0.9) → Start: IPO (p ≈ 0.06, (1+1000%)W0), Do Well (p ≈ 0.44, (1+30%)W0), Escape (p = 0.5, (1−10%)W0)]
We calculate the CM of the deal with free information to be about 1.325. The maximum fraction of equity the information is worth is therefore F_b = 1 − 1.237/1.325 ≈ 6.7%, which is below the 10% the researcher asks; hence, Sara should decline the offer of information.
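The clairvoyant calculation can be sketched as follows (the posterior probabilities are the priors renormalized within the "doesn't go under" branch):

```python
lam = 0.5

# If the researcher says "goes under" (prob 0.1), Sara does not start and
# earns the 5% savings rate instead.
um_under = 1.05**lam

# Otherwise (prob 0.9) she starts; the surviving branches are renormalized.
um_start = ((0.05 / 0.9) * 11.0**lam
            + (0.40 / 0.9) * 1.3**lam
            + (0.45 / 0.9) * 0.9**lam)

um_info = 0.1 * um_under + 0.9 * um_start
cm_info = um_info ** (1 / lam)     # CM of the deal with free information
```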
The Value of Control Here we consider the case in which the decision maker is offered the opportunity to change the distribution of a
set of prospects in return for a fraction of the deal. Consider the following proposition.
Proposition 3.2.5 Consider a deal with n possible outcomes (p_i, r_i) and certain equivalent multiplier CM1. The decision maker follows the
power u-function with a certain value of λ. The value of control, i.e., the maximum
fraction Fb the decision maker should be willing to pay to change the deal from (p_i, r_i) to
(q_i, r_i) with certain equivalent multiplier CM2, can be calculated as:

F_b = 1 - \frac{CM_1}{CM_2}

Or, in terms of the UM:

F_b = 1 - \left(\frac{UM_1}{UM_2}\right)^{1/\lambda}
Example 3.2.4 Sara is still interested in improving her odds of success. After some study, she believes that,
with extra funding, she can improve the chances of an IPO and ensure that she will never go
under. Effectively, this funding will ensure that she can get out of the venture after losing at
most 10% of her equity.
Figure 11 – Example 3.2.4 Improved Deal Structure
[Tree: IPO (p = 0.1, (1+1000%)W0), Do Well (p = 0.4, (1+30%)W0), Escape (p = 0.5, (1−10%)W0)]
A Venture Capitalist offers Sara the required funds in return for a 15% stake in the company.
Since the CM of the updated deal is 1.59, compared to a CM of 1.237 for the original deal,
Sara immediately calculates her buying fraction as:

F_b = 1 - \frac{1.237}{1.59} = 0.223

and thus accepts these funds. Note that her new CM is:

CM(with\ funding) = (1 - cost) \cdot CM = (1 - 0.15)(1.59) = 1.35

This is compared to a CM of 1.237 from the original deal.
3.2.5 Proximal Analysis of the Power U-Function When few statistics are available to describe the uncertainties at hand, we can approximate
the CM using proximal decision analysis. This section deals with several methods of proximal
analysis as applied to the power u-curve.
"Slightly" Risk-Averse Decision Makers The Pratt approximation shown in Pratt (1964), taken around the mean of the deal, allows us to obtain an approximation of the CE as follows:

CE(X) \approx E(X) - \frac{1}{2}\, r\big(E(X)\big) \cdot VAR(X)

where X represents the prospects and r(E(X)) is the absolute risk aversion coefficient of the decision maker's u-curve evaluated at E(X).
Proposition 3.2.6 The second-order approximation for the CM of a deal (p_i, r_i) around the mean is the
following:

\widehat{CM} \approx E(R) - \frac{1}{2}(1 - \lambda)\,\frac{VAR(R)}{E(R)}

where R_i = 1 + r_i.
Example 3.2.5 Use of the above approximation allows Sara to quickly calculate the CM of her initial deal
(assume at this point that λ = 0.9). Since

E(R) = 2.045
VAR(R) = 8.97

it is straightforward to calculate that \widehat{CM} = 1.83. The true CM is 1.93, an error of
5.2%. Note that as λ decreases, this error grows. This approximation is best for individuals
who are almost risk-neutral, i.e., when λ is close to 1.
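The approximation itself is one line; here it is applied to the statistics quoted in the example:

```python
lam = 0.9
e_R, var_R = 2.045, 8.97     # E(R) and VAR(R) quoted in Example 3.2.5

# Second-order (Pratt-style) approximation of the CM around the mean:
cm_hat = e_R - 0.5 * (1 - lam) * var_R / e_R    # ~ 1.83
```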
Small Fraction Investments It is also possible to create a Taylor series as f → 0. This is especially important for investors
such as Venture Capitalists, who in general invest a small fraction of their wealth in each
venture.
Proposition 3.2.7 The second-order approximation for the UM of a deal around f = 0, where f is the fraction of wealth invested, is the following:

\widehat{UM} \approx 1 + \lambda \cdot E(r) \cdot f + \lambda(\lambda - 1) \cdot E(r^2) \cdot \frac{f^2}{2}
It is also possible to approximate the optimal fraction the decision maker should invest in a
deal (p_i, r_i), as well as to calculate the maximum fraction at which the decision maker is
indifferent to investing in the deal.
Corollary 3.2.1 The optimal fraction, f*, the decision maker will invest in a deal, using the second-degree
Taylor series approximation, is the following:

f^* = \frac{E(r)}{(1 - \lambda)\, E(r^2)}
Corollary 3.2.2 The maximum fraction, fm, at which the decision maker is indifferent to investing in a deal (p_i, r_i), using the second-degree Taylor series approximation, is the following:

f_m = \frac{2\, E(r)}{(1 - \lambda)\, E(r^2)}
When λ Disappears Another interesting case occurs when λ is approximately 0, that is, when the individual u-curve is
almost a log function (recall that the power u-curve converges to a log function as λ → 0).
Proposition 3.2.8 The Taylor series expansion for the UM of a deal around λ = 0 is the following:

\widehat{UM} \approx \sum_{j=0}^{\infty} \frac{\lambda^j}{j!}\, E\big[(\ln(R))^j\big]
Example 3.2.6 We can approximate Sara's UM with two or three moments depending on the required
accuracy. If Sara's λ is now 0.2, the UM can be approximated from

E(\ln(R)) = 0.25,\quad E\big[(\ln(R))^2\big] = 0.85

as \widehat{UM} = 1.063, giving \widehat{CM} = 1.36, where the exact UM = 1.065 (error 0.2%)
and the exact CM = 1.37 (error 1%).
Discrete Approximation Proposition 3.2.8 relates the UM approximation to the moments of the logarithm of the
distribution. For people with −1 < λ < 1, the approximation converges very quickly for
most well-behaved distributions, since λ^j/j! → 0 quickly in j.
For well-behaved distributions, the moments of the log distribution grow more
slowly than the λ^j/j! term shrinks, so the approximation converges at a rate
depending on λ. The number of terms needed for a good approximation determines the
number of moments needed, which in turn determines the number of
degrees needed to assess the discrete approximation of the distribution. Recall that a
discrete distribution with n degrees can be used to equate (2n − 1) moments of a continuous
distribution. Thus, if the series converges after m terms, then it is more than enough to
assess m degrees of the discrete distribution.
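The convergence behavior can be seen directly by truncating the series for Sara's original deal (a sketch; the log moments here are computed from the deal in Example 3.2.1 rather than the rounded values quoted earlier):

```python
import math

probs = [0.05, 0.40, 0.45, 0.10]
mults = [11.0, 1.3, 0.9, 0.4]       # return multiples R_i from Example 3.2.1
lam = 0.2

exact_um = sum(p * R**lam for p, R in zip(probs, mults))

def um_series(J):
    # UM ~ sum_{j=0}^{J} lam^j / j! * E[(ln R)^j]
    return sum(lam**j / math.factorial(j)
               * sum(p * math.log(R)**j for p, R in zip(probs, mults))
               for j in range(J + 1))
# A handful of terms already lands very close to the exact value.
```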
From Figure 12, we can see that four terms are sufficient unless higher moments (5 and
above) grow exponentially.
3.3 Comparison between Exponential and Power U-Functions
3.3.1 Introduction In this section we discuss the use of the power and exponential u-functions. First, we discuss
the applicability of the u-functions to the decision involved in a single deal. Then we consider
the use of the u-function within our process.
3.3.2 Comparison when Considering a Single Deal Here, the decision maker is only concerned with the deal at hand. In this situation, the only
considerations that matter are the range of prospects and the medium of payment.
We consider the range of the prospects as they compare to the decision makers' wealth
levels. If it is small, risk attitude does not matter, as the decision makers' attitude is almost
risk-neutral. If the range is of medium size then the risk attitude becomes important; we call
this situation "medium range" in what follows. If the uncertainty is of the same order of
magnitude as the decision makers' wealth, then we call this situation "large range." The
medium of payment, or currency, can be in absolute terms as cash or proportional to the
decision makers' assets as equity payments. The following table discusses the applicability of
the u-functions in these situations.
Figure 12 – The power-function approximation error converges to zero quickly
Range of prospects | Cash currency | Equity currency
Medium range | (1) Exponential | (2) Both are acceptable
Large range | (3) Both have limitations; the power function is preferred | (4) Power

Table 1 – Applicability of the Power and Exponential U-Functions
Area (1) When the prospects have medium range and the currency is cash, we prefer using the
exponential u-function as it allows us the mathematical convenience of calculating the
indifference prices of information and control in closed form. This is due to the constant
absolute risk aversion coefficient of the exponential u-function.
Area (2) When the currency is equity, the power u-function allows us the mathematical convenience
of calculating the indifference prices as fractions of the decision maker's equity in closed
form. This is due to the constant relative risk aversion coefficient of the power u-function.
That being said, this consideration is merely one of mathematical convenience, and both u-functions
are suitable for this setting.
Area (3) In this situation the prospects are in the large range and the currency is cash. This situation is
not ideal for either u-function. The exponential u-function is not suited for prospects with
large ranges, as it is not sensitive to large prospects with small probabilities. The power u-
function, for its part, poses some mathematical inconvenience when prospects are
represented in cash, since it requires that the prospects be translated from absolute to
proportional terms.
Area (4) This is the situation in most entrepreneurial settings, and it is best suited to the use of the
power function. Here the prospects have a large range and the currency is equity. The power
u-function has a decreasing absolute risk aversion coefficient, which allows it to be sensitive
to large prospects with small probabilities. Its constant relative risk aversion coefficient
allows for multiplicative separation.
3.3.3 Comparison when Considering Fleeting Opportunities Here we consider the situation in which decision makers face fleeting
opportunities as defined above. Recall that problem constraint 1.2 requires
that the effects of the opportunities be separable and accounted for independently of
the deals at hand and those arriving in the future. We discuss this constraint within the additive and
multiplicative settings in the following sections.
The Additive Setting In the additive setting, the value to the decision maker is an additive function of the various
deals, as follows:

CE(flow) = CE(A + B + C + D)

To satisfy constraint 1.2, the u-function must allow the certain equivalent to be additively
separable. That is:

CE(flow) = CE(A + B + C + D) = CE(A) + CE(B) + CE(C) + CE(D)

This allows the decision maker to value the deal at hand alone. This is only true with the
exponential u-function.
The Multiplicative Setting In the multiplicative setting, the value to the decision maker is a multiplicative function of
the various deals, as follows:

CE(flow) = CE(A \cdot B \cdot C \cdot D)

To satisfy constraint 1.2, the u-function must allow the certain equivalent to be
multiplicatively separable. That is:

CE(flow) = w \cdot CM(A \cdot B \cdot C \cdot D) = w \cdot CM(A) \cdot CM(B) \cdot CM(C) \cdot CM(D)

where, as above, w is the decision maker's wealth and CM is the certain equivalent
multiplier. This allows the decision maker to value the deal at hand alone. This is only true
with the power u-function.
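Both separability claims are easy to verify numerically for irrelevant (probabilistically independent) deals; a sketch with hypothetical two-outcome deals:

```python
import math
import itertools

# --- Additive setting, exponential u-curve ---
gamma = 0.1
A = [(0.5, 10.0), (0.5, -2.0)]   # (probability, prospect) pairs
B = [(0.3, 5.0), (0.7, 1.0)]

def ce(deal):
    # CE under the exponential u-curve: -ln(E[exp(-gamma X)]) / gamma
    return -math.log(sum(p * math.exp(-gamma * x) for p, x in deal)) / gamma

joint_add = [(pa * pb, xa + xb)
             for (pa, xa), (pb, xb) in itertools.product(A, B)]

# --- Multiplicative setting, power u-curve ---
lam = 0.5
Am = [(0.5, 1.4), (0.5, 0.9)]    # (probability, return multiple) pairs
Bm = [(0.3, 2.0), (0.7, 1.0)]

def cm(deal):
    # CM under the power u-curve: (E[X^lam])^(1/lam)
    return sum(p * x**lam for p, x in deal) ** (1 / lam)

joint_mul = [(pa * pb, xa * xb)
             for (pa, xa), (pb, xb) in itertools.product(Am, Bm)]
```

For independent deals the expectation of the product factors, so the exponential CE adds exactly and the power CM multiplies exactly.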
Chapter 4 – Model and Notation "More important than the quest for certainty is
the quest for clarity."
François Gautier
In this chapter we discuss the basic model that we apply in this dissertation. We extend this
model as we explain our process more completely in the later chapters. In Section 1 we
illustrate the general model. We apply this model in both additive and multiplicative settings,
which we discuss in Sections 2 and 3, respectively. In Section 4 we compile a list of
the notation used in this dissertation.
4.1 General Description We consider situations in which decision makers have flows of fleeting opportunities that
arise over time. They can only accept a limited number of deals and have to immediately
decide how to react to each deal as it arrives.
The decision maker is offered a deal at the start of each time period over a horizon of T
periods and has a limited capacity to accept C deals. A deal is represented as an uncertain
distribution over prospects. When offered a deal, she must decide whether to accept or
reject the deal or whether to gather more information before reaching that decision. We
represent the state of the decision maker by the current time period t and capacity level c
and denote it by (t,c).
As discussed in Chapter 1, the distribution over deals that might be offered is assumed to be the
same during each time period. In addition, the deals offered at any time are irrelevant (probabilistically independent), as are
the outcomes of the deals themselves. At the start of each time period, the decision maker
receives a deal and has to decide what to do about it. This is presented in the following
figure.
Figure 13 - Deal Flow Structure
Once a deal arrives at time t, the decision maker can accept it, reject it, or apply further
alternatives to the situation (e.g., acquire relevant information). Once she accepts it, she
assigns one resource unit and time advances one unit. If she rejects it, she maintains her
resources but time still moves forward. If she takes the third alternative, she continues going
through alternatives until she either accepts or rejects the deal. The following figure
illustrates the alternatives facing the decision maker.
4.2 Components of the Model: Deals, Time Horizon, and Capacity As stated above, our model assumes that the distribution of deals available in each time
period stays the same, irrespective of the deals that were available in past periods. At each
period, decision makers are offered a deal from a distribution of possible opportunities that
includes a "zero" representing the possibility of not getting a deal.
Figure 14 – Deal Setup Structure
[In state (capacity C = c, time T = t) the decision maker receives a random deal. Accept moves the state to (C = c − 1, T = t + 1); Reject moves it to (C = c, T = t + 1); any other alternative leads to further evaluation before the deal is finally accepted or rejected.]
We define the length of the time horizon, T, as the number of periods available to the
decision maker. At time T+1 she can no longer accept deals. The length of the time period is
set in such a way that there is a single deal under consideration during any time period,
which is no longer available after the time period is over.
In the Venture Capital example, VC firms usually have a commitment to their investors to
provide them with returns in a specified period (e.g., 7 years). To be able to do so, they
impose a deadline on investing that will allow the firm to sell the startups before the
specified period passes. Private equity firms have a similar condition. They usually operate
through rotational funds that are required to close in a certain length of time.
While we do impose a deadline in the model above, we relax this condition when we study
infinite horizons in Chapters 6 and 7. In most practical situations, however, there is an
effective or implicit deadline that plays a part in determining the policies.
We define the initial capacity, C, as the maximum number of deals that decision makers can
accept through the deal flow. Our model assumes that deals have the same capacity
requirement. So when decision makers accept a deal, they irrevocably allocate to it a
resource unit from their capacity. We relax the condition of irrevocable allocation when we
consider the options presented in Chapters 6 and 7.
In the Venture Capital example, the number of deals in which VC firms can invest is limited
by the time of their partners. Each partner can only be on the directorsβ board of a limited
number of startups.
4.3 The Additive Model
Here we consider situations in which the value of the investment to decision makers is
modeled as an additive function of the deals accepted by them.
4.3.1 Representation of Deals
For the sake of clarity and without loss of generality, we limit our discussion to a discrete
representation of deals. A deal has n potential outcomes, each with a return $r_i$ in addition to
the decision makers' initial wealth $W_0$. The $i$th outcome can be written as:

$$W_0 + r_i$$

The outcome occurs with probability $p_i$. The deal may be represented by the following tree:
The mean of the deal is

$$E(X) = W_0 + \sum_{i=1}^{n} p_i r_i$$

and the u-value is

$$u(X) = \sum_{i=1}^{n} p_i \, u(W_0 + r_i)$$

It is possible to calculate the certain equivalent as the inverse of the u-value.
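For instance, with the exponential u-curve used later in this dissertation, the certain equivalent of a discrete deal can be computed by inverting the u-value. The risk tolerance and payoffs below are made-up illustrations, not values from the text:

```python
import math

RHO = 10.0  # assumed risk tolerance of the exponential u-curve

def u(x):
    """Exponential u-curve: u(x) = 1 - exp(-x / rho)."""
    return 1.0 - math.exp(-x / RHO)

def u_inverse(v):
    """Inverse u-curve: maps a u-value back to a certain amount."""
    return -RHO * math.log(1.0 - v)

def certain_equivalent(returns, probs, w0=0.0):
    """CE of a deal whose outcomes w0 + r_i occur with probabilities p_i."""
    u_value = sum(p * u(w0 + r) for p, r in zip(probs, returns))
    return u_inverse(u_value) - w0  # net of initial wealth
```

For a risk-averse decision maker, the certain equivalent of an uncertain deal falls below its mean, while a sure deal's certain equivalent equals its payoff.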
4.3.2 Model Layout
Where
- $X_n$: a specific deal
- $W_0$: the decision makers' initial wealth
- $V_c^t$: certain equivalent of the future value of the deal flow at time t with capacity c, before observing a specific deal
- $V_c^t|X_n$: $V_c^t$ given that the deal currently available is $X_n$
- $CE(X_n)$: the certain equivalent of deal $X_n$
- $\mathcal{F}(V_c^{t+1}, CE(X_n))$: a function of the future value and the certain equivalent of $X_n$
Figure 15 - Representation of Deals in the Additive Model
(Deal X: outcome i occurs with probability $p_i$ and yields $W_0 + r_i$, for $i = 1, \dots, n$.)

Figure 16 - Additive Model Layout
(At $V_c^t|X_n + W_0$: Accept yields $V_{c-1}^{t+1} + CE(X_n) + W_0$; Reject yields $V_c^{t+1} + W_0$; Another Alternative yields $\mathcal{F}(n, c, t, CE(X_n)) + W_0$.)
4.4 The Multiplicative Model
Here we consider situations in which the value to decision makers is modeled as a
multiplicative function of the deals accepted by them.
4.4.1 Representation of Deals
As above, for clarity, we illustrate the discrete representation of deals. A deal has n potential
outcomes for an investment of a fraction f of the decision makers' initial wealth $w_0$. The $i$th
outcome can be represented, in the context of investments, as a multiple of the decision
makers' investment. Thus the $i$th outcome can be written as:

$$(1 + r_i) f w_0 + (1 - f) w_0 = (1 + r_i f) w_0$$

This outcome occurs with probability $p_i$. The deal may be represented by the following
tree:
We denote the deal $(p_i, r_i)$ by $X$.

The mean of the deal is

$$E(X) = w_0 \sum_{i=1}^{n} p_i (1 + r_i f)$$

and the u-value is

$$u(X) = \sum_{i=1}^{n} p_i \, u\big((1 + r_i f) w_0\big)$$

The certain equivalent can be calculated as the inverse of the u-value.
Figure 17 - Representation of Deals in the Multiplicative Model
(Deal X: outcome i occurs with probability $p_i$ and yields $(1 + r_i f) w_0$, for $i = 1, \dots, n$.)
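The certain equivalent multiplier can be computed the same way under the power u-curve; the wealth, outcome multiples, and risk-attitude parameter below are illustrative assumptions:

```python
LAM = 0.5  # assumed risk-attitude parameter of the power u-curve

def power_u(x):
    """Power u-curve: u(x) = x**lam / lam (for x > 0)."""
    return x ** LAM / LAM

def power_u_inverse(v):
    """Inverse of the power u-curve."""
    return (v * LAM) ** (1.0 / LAM)

def ce_multiplier(multiples, probs, w0=100.0):
    """Certain equivalent multiplier of a multiplicative deal, where each
    outcome scales initial wealth w0 by the factor (1 + r_i f)."""
    u_value = sum(p * power_u(m * w0) for p, m in zip(probs, multiples))
    return power_u_inverse(u_value) / w0
```

A sure deal that scales wealth by 1.2 has multiplier 1.2, while a risk-averse decision maker's multiplier for an even-odds deal between doubling and halving wealth falls below the mean multiple of 1.25.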
4.4.2 Model Layout
Where
- $CM(X_n)$ is the certain equivalent multiplier of deal $X_n$.
- $M_c^t$ is the certain equivalent multiplier of the future value of the deal stream at time t with capacity c left, before observing a specific deal.
- $M_c^t|X_n$: $M_c^t$ given that the deal available is $X_n$
- $\mathcal{F}(M_c^{t+1}, CM(X_n))$ is a function of the future value and certain equivalent of $X_n$
4.5 Terms and Notation
Decision Analysis

$PIBP(X_n)$: Personal Indifferent Buying Price for a deal $X_n$. This is defined as the price b at which the decision maker is indifferent between buying deal $X_n$ and keeping his current wealth.

$PISP(X_n)$: Personal Indifferent Selling Price for a deal $X_n$. This is defined as the price s at which the decision maker is indifferent between selling deal $X_n$ and keeping it.

$u(x)$: U-value of deal x.

Exponential U-curve: $u(x) = 1 - e^{-x/\rho}$, where $\rho$ is a parameter of risk attitude denoted as the risk tolerance of the decision maker.

Power U-curve: $u(x - w) = (x - w)^\lambda / \lambda$, where $\lambda$ is a parameter of risk attitude.

$CE(X_n)$: Certain Equivalent of deal $X_n$. This is defined as the PISP of deal $X_n$.
Figure 18 - Multiplicative Model Layout
(At $M_c^t|X_n \cdot W_0$: Accept yields $M_{c-1}^{t+1} \cdot CM(X_n) \cdot W_0$; Reject yields $M_c^{t+1} \cdot W_0$; Another Alternative yields $\mathcal{F}(n, c, t, CM(X_n)) \cdot W_0$.)
$CM(X_n)$: Certain Equivalent Multiplier of deal $X_n$. This is defined as the multiple f for which the decision maker is indifferent between selling deal $X_n$ and keeping it.

$$CM(X_n) = CE(X_n) / w_0$$

Deal Flow Characterization

$T$: Length of time horizon
$t$: Current time
$C$: Initial capacity (number of resources available)
$c$: Capacity remaining (current number of resources available)
$X_n$: A specific deal
$V_c^t$: Certain equivalent of the future value of the deal stream at time t with capacity c.
$V_c^t|X_n$: Certain equivalent of the future value of the deal stream at time t with capacity c after observing a deal $X_n$.
$M_c^t$: Certain equivalent multiplier of the future value of the deal stream at time t with capacity c.
$M_c^t|X_n$: Certain equivalent multiplier of the future value of the deal stream at time t with capacity c after observing a deal $X_n$.
$R_c^t$: Marginal value of a resource unit at time t and capacity c. This is defined for the additive setting as

$$R_c^t = V_c^{t+1} - V_{c-1}^{t+1}$$

or, in the multiplicative setting, as

$$R_c^t = M_c^{t+1} / M_{c-1}^{t+1}$$
Incremental Values/Multipliers

$VX_c^t(X_n)$: Incremental value of deal $X_n$ at time t with capacity c. This is defined as the contribution of deal $X_n$ to the future value of the deal stream at time t with capacity c.

$$VX_c^t(X_n) = \max\big(CE(X_n) - R_c^t,\ 0\big)$$

$VXI_c^t(X_n)$: Incremental value of deal $X_n$ with free information at time t with capacity c. This is defined as the contribution of deal $X_n$ with free information to the future value of the deal stream at time t with capacity c.

$$VXI_c^t(X_n) = CE\big(VX_c^t(X_n|I_x)\big)$$

$VOIX_c^t(X_n)$: Incremental value of information for a deal $X_n$ at time t with capacity c. This is defined as the PIBP of information on deal $X_n$ at time t and capacity c.

$k$: Monetary cost of information/control

$MX_c^t(X_n)$: Incremental multiplier of deal $X_n$ at time t with capacity c. This is defined as the contribution of deal $X_n$ to the future value multiple of the deal stream at time t with capacity c.

$$MX_c^t(X_n) = \max\big(CM(X_n)/R_c^t,\ 1\big)$$

$MXI_c^t(X_n)$: Incremental multiplier of deal $X_n$ with free information at time t with capacity c. This is defined as the contribution of deal $X_n$ with free information to the future value multiple of the deal stream at time t with capacity c.

$$MXI_c^t(X_n) = CM\big(MX_c^t(X_n|I_x)\big)$$

$MOIX_c^t(X_n)$: Incremental multiplier of information for a deal $X_n$ at time t with capacity c. This is defined as the PIBP of information on deal $X_n$ at time t and capacity c.

$g$: Fraction paid for information/control.
Chapter 5 β Step 1: Frame and Decision Basis
"Doubt, indulged and cherished, is in danger of becoming denial; but if honest, and bent on thorough investigation, it may soon lead to full establishment of the truth."

"Who never doubted, never half believed. Where doubt is, there truth is - it is her shadow."

Ambrose Bierce
In previous chapters we discussed the underlying models and their appropriate risk attitude
curves. In Chapters 5, 6, 7, and 8 we analyze our proposed process.
Figure 19 - Step 1 of the Solution Process
5.1 Overview
In this chapter we describe the first step in our three-step process. Here decision makers
formulate the situation by determining the frame and building the decision basis. Recall that
the decision basis includes decision makers' preferences, information, and the decision
alternatives. The goal of this step is to formulate the deal flow. After completing this step,
decision makers should have clear frame boundaries and assessments of alternatives,
uncertainties, and preferences relevant to the problem.
This chapter is organized into four parts. In Section 2, we discuss framing both the
opportunities themselves and the deal flow as a whole. In Section 3, we assess the decision
basis beginning with the decision makersβ time and risk preferences. We then identify the
available alternatives to the opportunities and the deal flow. Finally, we assess the decision
makersβ information about the uncertainties and sketch a method to model them.
5.2 Framing
The frame of the decision determines the boundary of the issues considered in the present
analysis. In our process, framing is done on two levels. First, we frame the deals available to
decision makers, and then we frame the deal flow as a whole. The two levels of framing are
shown in Figure 20.
Figure 20 - Two Levels of Framing
Decision makers must determine the decisions and uncertainties that are to be considered
within the analysis. The related decisions are divided into three categories. The categories
are decisions taken as given, decisions to be considered now, and decisions that are delayed
until later. The decisions and uncertainties are modeled in a decision diagram. Decision
diagrams, first introduced in Howard & Matheson (1981), are used to model the relationships
between the elements affecting the decision and to communicate them between the
stakeholders.
5.2.1 Framing the Deal Flow
In this step we define the boundaries of the decisions considered with regard to the deal
flow as a whole, in order to frame the flow itself. This includes deciding on the capacity
available, the deadline, and the alternatives available within the deal flow. For example,
decision makers may be offered information about or control of the entire deal stream. Also,
decision makers might be able to change their capacity and/or the deadline. The inclusion of
such decisions is accomplished while framing the deal flow.
5.2.2 Framing the Deals
Decision makers must frame the deals available to them during each period. The goal here is
to determine the boundaries of the decisions considered within each deal. This framing
exercise will result in decision diagrams of the deals available to the decision makers.
We suggest using a generic (or template) decision diagram to represent each deal type. Here,
decision makers model their beliefs about each category of deal using a generic diagram that
captures a frame specific to that category. This way, they only need to update the diagram
with their beliefs about specific deals as they arrive. The following is an example of a generic
diagram for online consumer startups. In Appendix A2, we give a detailed description of the
diagramβs nodes. Richman (2009) vetted this diagram with approximately 20 Venture
Capitalists.
Figure 21 - Generic Diagram for Internet Consumer Startups
5.3 Basis for the Decisions
The decision basis includes preferences, alternatives, and information. The following graph
summarizes the assessments needed for this step.
Figure 22 - Decision Basis
5.3.1 Preferences
We begin with modeling the decision makers' preferences. Specifically, we are concerned
with time and risk preferences.

Time Preference
We assess the decision makers' time preference as a discount rate. We represent all the cash
flows in their net present value (NPV) before the analysis. Note that the discount rate is used
solely to model time preference and not risk preference.
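As a small illustration of representing cash flows by their net present value (the cash flows and rate below are hypothetical, not from the text):

```python
def npv(cash_flows, rate):
    """Net present value of per-period cash flows (period 0 first),
    discounted at the decision maker's time-preference rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))
```

For example, at a 10% discount rate a payment of 110 one period from now has the same present value as 100 today.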
[Figure 21 nodes: Invest?; Initial / Execution / Results / Liquidation stages; Market Size & Growth, Technology, Team, Competitors, Possible Applications, Business Model, Revenue, Cost, Future Financing, Exit, Dilution, Value, Profit, Cash Balance, Observables, Partnerships, Hiring, Barriers to Entry, Market Share & Growth.]
Risk Preference
Here we assess the decision makers' risk preference by assessing the parameters of the
u-curve in use. As discussed earlier, we require that the decision maker use either the
exponential or the power u-curve.
5.3.2 Alternatives
Here we assess the alternatives available to the decision maker within the decisions we
included in the framing stage.

Deal Flow Alternatives
Deal flow alternatives include decisions related to sourcing deals; prepaying for information,
control, and options; and changing the capacity and/or the deadline.

Deal-Specific Alternatives
Deal-specific alternatives include, among many others, those related to accepting the deal
and buying information, control, or options on the deal.
5.3.3 Information
In this step we assess the information of the decision maker and his beliefs about the deals
and the alternatives. The latter includes the accuracy of information and control alternatives.

Prior Deals
To facilitate this step, we suggest modeling the decision makers' beliefs about deal
categories using generic diagrams. We then assess decision makers' beliefs about the
uncertainties and probabilities within each template. In the following tree we have three
categories, each with a probability $p_i$.
Figure 23 - Modeling the Prior Deals
We then assess decision makersβ beliefs about the information gathering activities, control,
sourcing deals, options, and the rest. For example, we assess the ability to discern and learn
about the opportunities as detectors with certain accuracies and costs. Along the same lines,
we assess the ability to influence the distribution of the opportunities as controllers with
certain accuracies and costs.
Prior Alternatives
Here we model the decision maker's beliefs about the alternatives available. For example,
this includes the specificity and sensitivity of detectors.
Chapter 6 β Step 2.1: Additive Valuation Funnel
"The most important questions of life are indeed, for the most part, really only problems of probability."

"Probability theory is nothing but common sense reduced to calculation."

Pierre Laplace
So far, we have the frame and the decision basis of the complete deal flow. In Chapters 6 and
7 we turn to the second step of our process, that is, building the valuation funnel.
Figure 24 - Step 2 of the Solution Process
6.1 Overview
The second step of our template is to build the valuation funnel. In this chapter we build the
funnel for the additive setting. After this step, decision makers will have calculated the indifference
buying prices (IBP) of the deals, information, control, etc. These incremental values help the
decision maker make real-time decisions when the alternatives are available. In addition, the
resulting valuation funnel can be easily used to evaluate alternatives concerning the deal
flow as a whole. More on the application of the funnel is given in Chapter 8.
This chapter is organized into six parts. In Section 2 we give the basic problem setup, in which
we describe the model and define the main terms. In Section 3 we present the optimal
policy, characterize the values within the deal flow, and then return to characterizing the
optimal policy. In Section 4 we extend our main results to the long-run problem setup and
discuss the policy iteration algorithm used to solve for the results. Finally, in Section 5, we
study some extensions to the basic model. These extensions are a more flexible cost
structure, the use of the probability of knowing detectors, the option of reversing an
allocation decision, and finally the value of perfect hedging.
6.2 Basic Problem Structure
The basic problem considered here concerns a decision maker who is offered a deal at the
start of each time period over a horizon of T periods. When offered a deal, the decision
maker can accept the deal, reject it, or seek information and then decide. The basic model
assumes independence over time and among deals.

When a decision maker is offered a deal $X_n$ she has one of three alternatives. If she directly
accepts the deal, she allocates one unit of her capacity (initial capacity $C$) and gets the deal's
certain equivalent. If she rejects the deal, she keeps her capacity. Finally, she can seek
information at a cost $k$. If she seeks information, she will observe an indication $I$, which
updates her beliefs about the deal and will then allow her to decide whether or not to accept
the deal.
The following graph shows the structure of the dynamic program, where $V_c^t$ is the certain
equivalent of the future value of the deal stream at time t with capacity c remaining before
observing a specific deal, and $V_c^t|X_n$ is the certain equivalent of the future value after
observing the deal $X_n$.
For a review of the research on the value of information, please refer to Howard (1967).
6.2.1 Definitions
Definition 6.2.1: Marginal Value of Capacity
We denote the marginal value of a unit of capacity at (c,t) by $R_c^t$ and define this quantity as

$$R_c^t = V_c^{t+1} - V_{c-1}^{t+1}$$

Definition 6.2.2: Incremental Value of a Deal
We denote the incremental value of a deal $X_n$ at (c,t) by $VX_c^t(X_n)$ and define this quantity as

$$VX_c^t(X_n) = [CE(X_n) - R_c^t]^+$$

This is the indifference buying price (IBP) of the deal $X_n$ at time $t$ and capacity $c$.

Definition 6.2.3: The Incremental Value of a Deal with Information
We denote the incremental value of a deal $X_n$ with information at (c,t) by $VXI_c^t(X_n)$ and define this quantity as

$$VXI_c^t(X_n) = CE\big(VX_c^t(X_n|I_x)\big)$$

This is the indifference buying price (IBP) of the deal $X_n$ when offered information with indication $I_x$ at time $t$ and capacity $c$.

Definition 6.2.4: The Incremental Value of Information
We denote the incremental value of information on a deal $X_n$ at (c,t) by $VOIX_c^t(X_n)$ and define this quantity as the indifference buying price (IBP) of information on this deal at (c,t), or

$$VOIX_c^t(X_n) = VXI_c^t(X_n) - VX_c^t(X_n)$$
Figure 25 - Basic Problem Structure
(At $V_c^t|X_n$: Accept yields $V_{c-1}^{t+1} + CE(X_n)$; Reject yields $V_c^{t+1}$; Seek info at cost k reveals indication $I_i$, after which Accept yields $V_{c-1}^{t+1} + CE(X_n|I_i) - k$ and Reject yields $V_c^{t+1} - k$.)
Example 6.2.1
We refer to the following simple example throughout this chapter. We consider the situation
facing a risk-averse decision maker in the following scenario: the decision maker exhibits
exponential utility $u(x) = 1 - e^{-\gamma x}$ with $\gamma = 0.1$. The decision maker has a time horizon of 50
periods (T = 50) and can accept at most 4 opportunities (C = 4). The deals have the following
structure (and differ in p):
Figure 26 - Example Deal Structure
In each time period, the decision maker can either get nothing or one of six possible deals.
Figure 27 - Example Deal Flow
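The backward induction that produces $V_c^t$ for a setting like Example 6.2.1 can be sketched as follows. The deal certain equivalents and probabilities below are placeholder values (the figures' payoffs are not reproduced here), and for brevity only the accept/reject alternatives are included:

```python
import math

GAMMA = 0.1  # risk-aversion coefficient from Example 6.2.1

def u(x):
    return 1.0 - math.exp(-GAMMA * x)

def u_inv(v):
    return -math.log(1.0 - v) / GAMMA

def ce(values, probs):
    """Certain equivalent of an uncertain value under the exponential u-curve."""
    return u_inv(sum(p * u(x) for p, x in zip(probs, values)))

def deal_flow_value(deal_ces, deal_probs, T=50, C=4):
    """Backward induction for V[c][t], the certain equivalent of the future
    deal flow at (t, c); a deal CE of 0 stands for 'no deal'."""
    V = [[0.0] * (T + 2) for _ in range(C + 1)]  # V[c][T+1] = 0: no deals after T
    for t in range(T, 0, -1):
        for c in range(C + 1):
            vals = []
            for ce_n in deal_ces:
                reject = V[c][t + 1]
                accept = V[c - 1][t + 1] + ce_n if c >= 1 else reject
                vals.append(max(accept, reject))
            V[c][t] = ce(vals, deal_probs)  # CE over the random incoming deal
    return V
```

Consistent with Proposition 6.3.2, the resulting values are non-decreasing in capacity and non-increasing in time.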
6.2.2 The Value of Control
The value of control can be studied along the same lines as the value of information. All of
our results below for the value of information can be duplicated for that of control. For a
more detailed study of the value of control, please refer to Matheson & Matheson (2005).
6.3 Main Results
In this section we present the main results of this chapter. We present the optimal policy and
characterize the main values across the model parameters (c,t). Specifically, we characterize
the optimal policy, the value of the deal flow, the incremental value of capacity, the
incremental values of deals with and without information, and the buying price of
information.
6.3.1 Optimal Policy
Given the scenario above, the decision maker's optimal policy is defined in the following
proposition.
Proposition 6.3.1a: Optimal Information Gathering Policy
When offered a deal $X_n$, the decision maker should buy information if and only if

$$VXI_c^t(X_n) \ge VX_c^t(X_n) + k$$

Proposition 6.3.1b: Optimal Allocation Policy
After information is received, the decision maker should accept the deal if and only if

$$CE(X_n|I_x) \ge R_c^t$$

Otherwise, the deal is worth accepting without information if and only if

$$CE(X_n) \ge R_c^t$$
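Propositions 6.3.1a and 6.3.1b combine into a simple decision rule. The helper below is a sketch whose value arguments would come from the dynamic program; the function and argument names are ours, not the dissertation's:

```python
def optimal_action(ce_deal, vxi, vx, k, r_ct):
    """Apply the optimal policy at (c, t) for a deal before any information.
    ce_deal: CE(X_n); vxi: VXI_c^t(X_n); vx: VX_c^t(X_n);
    k: cost of information; r_ct: marginal value of capacity R_c^t."""
    if vxi >= vx + k:
        return "seek information"  # Proposition 6.3.1a
    return "accept" if ce_deal >= r_ct else "reject"  # Proposition 6.3.1b
```

For example, a deal whose certain equivalent exceeds the marginal value of capacity is accepted outright when information is not worth its price.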
6.3.2 Characterizing the Certain Equivalent and Threshold
Given the above scenario, we find that the certain equivalent of the deal flow ($V_c^t$) and
the marginal value of capacity ($R_c^t$) change in intuitive ways with capacity (c) and time (t).
Proposition 6.3.2: Characterizing the Deal Flow Certain Equivalent
I. $V_c^t$ is non-decreasing in c for all t
II. $V_c^t$ is non-increasing in t for all c
In other words, the value of the deal flow increases as the capacity at hand increases and
decreases as the deadline approaches.
Example 6.3.1
Using the example above, Figure 28 is a graph of the deal flow value.
Figure 28 - Examples of Deal Flow Values
As stated in Proposition 6.3.2, the value of the deal flow decreases with time and increases
with capacity. This agrees with the intuition that more capacity and more time are desirable.
As time progresses, the chance of getting high-valued deals and hence of earning high
rewards in the future diminishes.
Proposition 6.3.3: Characterizing the Threshold ($R_c^t$)
I. $R_c^t$ is non-increasing in c for all t
II. $R_c^t$ is non-increasing in t for all c
Proposition 6.3.3 states that the marginal value of capacity decreases as capacity increases
and as the deadline approaches. Otherwise stated, the optimal policy becomes more lenient
with more capacity at hand and as we approach our deadline.
6.3.3 Characterizing Indifference Buying Prices
Based on the above results, we characterize the way in which the indifference buying price of
a deal in and of itself, and that of the deal with information ($VXI_c^t$), change with c and t. Then
we characterize the indifference buying price of information ($VOIX_c^t$).
Corollary 6.3.1: Characterizing IBP of Deals ($VX_c^t(X_n)$ and $VXI_c^t(X_n)$)
The IBP values of a deal $X_n$ with and without information exhibit the following two
properties:
I. $VX_c^t(X_n)$ and $VXI_c^t(X_n)$ are non-decreasing in c for all t
II. $VX_c^t(X_n)$ and $VXI_c^t(X_n)$ are non-decreasing in t for all c
The marginal value of capacity is non-increasing in time and capacity. From this, it
immediately follows that the incremental value of the deal n when the capacity is c at
time t is non-decreasing in both c and t. That is, the value of a deal increases as the
deadline approaches and/or the available capacity increases.
Proposition 6.3.4: Characterizing the IBP of Information ($VOIX_c^t(X_n)$)
The IBP of information exhibits the following properties:
I. For a given value of c, the IBP of information increases with t and reaches a maximum when $R_c^t = CE(X_n)$; it then decreases with increasing t until it converges to $CE(X_n^*) - CE(X_n)$
II. For a given value of t, the IBP of information increases with c and reaches a maximum when $R_c^t = CE(X_n)$; it then decreases with increasing c until it converges to $CE(X_n^*) - CE(X_n)$
where $X_n^*$ is the deal with free information. The proposition above can be represented
graphically as follows. We define the value of information as $VOI = CE(X_n^*) - CE(X_n)$.
Figure 29 - Characterizing the IBP of Information
Example 6.3.2
Based on the example above, the following graph characterizes the IBP of information for a
specific deal.
Figure 30 - Example IBP Value
In Figure 30 we can see that the IBP of information increases with time and capacity until it
reaches a maximum when $R_c^t = CE(X_n)$. Before that point, the deal $X_n$ is not worth
accepting without information. Hence, the only alternative to buying information is to reject
the deal. After this point, the alternative to buying information is to accept the deal without
information. Hence, the value of information decreases as $VX_c^t(X_n)$ increases. At the threshold
we have $VX_c^t(X_n) = CE(X_n)$ and $VX_c^t(X_n^*) = CE(X_n^*)$, so that the value of information within the
dynamic program ultimately converges to the value of information for a stand-alone deal.
Another interesting point illustrated in Figure 30 is that the value of information for a deal
within the deal flow can exceed the value of information for the stand-alone deal. The reason
for this fact is that the value of information is relative to the incremental values of the deal
with and without information. The incremental value of the deal without information might
equal zero even if the stand-alone value of the deal is positive. Hence, the difference
between the incremental values can be higher than that of the values of the stand-alone deal
with and without information. Note that this is not the case for the value of a deal. The value
of a deal within the deal flow can never exceed the value of the stand-alone deal.
6.3.4 Characterizing the Optimal Policy
Proposition 6.3.5: Characterization of the Optimal Policy
The optimal policy for a given deal $X_n$ is characterized as follows:
I. For a given value of c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given value of t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.
Proposition 6.3.5 states that if a certain type of deal is to be accepted whenever the capacity
is c, then it must be accepted also for higher capacity levels. Similarly, if it is rejected at the
capacity level c, then it must be rejected for all lower capacity levels. Further information is
sought only for a bounded range of capacity levels. In other words, if it is optimal to buy
information for a deal $X_n$ and capacity c, then for higher capacity levels one would never
reject without information and for lower capacity levels would never accept without
information. The same behavior is also observed when the capacity is fixed and time
progresses. These statements are represented graphically below:
Figure 31 β Characterizing Optimal Policy
Example 6.3.3
Figure 32 shows the case at three units of capacity when the decision maker is offered a
detector with symmetric accuracy of 0.9 and costs two monetary units. By symmetric
accuracy, we mean that the detector indicates the probability of success and the probability
of failure with equal accuracy.
Figure 32 β Example of Optimal Policy Over Time
The graph illustrates the elements of the proposition. As time progresses, the optimal action
changes in one direction: from rejecting to buying information and from buying information
to accepting. Note that the same pattern is observed over deals in this example; however,
this is not true in general.
6.3.5 Multiple Detectors
Here we discuss the situation in which the decision maker is offered multiple irrelevant
detectors. The following proposition identifies the optimal detector. We then discuss
ordering detectors.
Proposition 6.3.6: Identifying the Optimal Detector
Consider two detectors with incremental values of deals with information $VXI_1$ and $VXI_2$ and
costs $k_1$ and $k_2$, respectively. Detector 1 will be optimal when:

$$VXI_1 - VXI_2 > k_1 - k_2$$

Otherwise, detector 2 will be optimal.
This optimality is not myopic, that is, if the decision maker is offered the use of both, he/she
should not always start with the optimal detector.
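Proposition 6.3.6's single-use comparison can be written directly (the numbers in the test values are illustrative):

```python
def best_single_detector(vxi_1, k_1, vxi_2, k_2):
    """Choose between two detectors when only one may be used:
    detector 1 is optimal iff VXI_1 - VXI_2 > k_1 - k_2.
    As noted above, this choice is not a safe starting point when
    both detectors may be used in sequence."""
    return 1 if vxi_1 - vxi_2 > k_1 - k_2 else 2
```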
Example 6.3.4: Multiple Detectors
To demonstrate the points above we consider two detectors. Detector 1 has an accuracy
of 0.9 and costs 2 units of currency. Detector 2 has an accuracy of 0.8 and costs 1 unit of
currency. The following illustrates the case with capacity = 3 and deal 5. It shows the
difference between the $VXI$ values of both detectors.
Figure 33 - Example 6.3.4: The ordering of detectors is not myopic.
Figure 33 shows that for deal 5 (p = 0.80), from t = 40 onwards the difference between $VXI_1$
and $VXI_2$ is always greater than the difference between the detectors' prices. This result
indicates that we should always choose Detector 1.
Figure 34 shows the optimal policy for capacity 3. The vertical axis is the probability of
technical success, whereas the horizontal axis is the time period. The "accept", "use detector
1", "use detector 2", and "reject" alternatives are color-coded.
Recall that after t=40, it was optimal to use Detector 1 if the decision maker had to choose
between the two detectors. When we allow the use of both detectors, the graph shows that
it is optimal to begin with Detector 2 (t=40-41). Thus, the optimality is not myopic.
Figure 34 - Example 6.3.4: The ordering of detectors is not myopic.
6.4 The Long-Run Problem
Here we discuss the situation in which the decision maker faces no deadline. This is the case
when the decision maker does not face a relevant limitation on
time. In the case of a movie producer, for example, the decision maker only considers the
value of time through discounting, but does not have a deadline imposed on allocating
his/her capacity.
As Howard & Matheson (1972) discussed, discounting in the risk-averse model causes
inconsistency. For this reason, we limit our discussion of the long-run problem to the risk-
neutral setting and leave the risk-averse model for future research.
6.4.1 Problem Structure and Definitions
In many practical instances of this problem, the decision maker faces a large number of time
periods. Consequently, the computational complexity required to find an optimal policy
increases. Moreover, the dynamic nature of the optimal policy makes the storage and
administration of the policy difficult. To overcome these difficulties, we look into the infinite-
horizon problem, i.e., one in which the deadline $T$ is infinite. The main virtue of the infinite-
horizon problem is that, under certain conditions, it admits a stationary optimal policy. This
policy is in turn nearly optimal for the finite-horizon problem, with an error that diminishes
as the actual time horizon grows.
We reformulate our problem to introduce a discount factor $0 < \delta < 1$ in the value function
so that the maximum value is guaranteed to be finite and a stationary optimal policy exists.
Discounting ensures that the expected present value is finite under every policy.
For the infinite-horizon model, let $V_c|X_n$ denote the maximum expected value when the
available capacity is c and the current deal offered is $X_n$. If $c \ge 1$ and this deal is accepted,
the decision maker collects its certain equivalent and moves on to the next period with
capacity $c - 1$. If the deal is not accepted, then the capacity remains the same in the next
period. The third available action for a positive capacity is to investigate the deal further
(gather information) and base the decision to accept or reject it on the outcome of the
investigation. Figure 35 summarizes the evolution of the discounted model.
where $\delta$ is the discount factor.
Definition 6.4.1: Values in the Long-Run Problem
We extend the definitions of the values from Section 6.3 to the long-run problem.

Marginal Value of Capacity: $R_c = V_c - V_{c-1}$

Incremental Value of a Deal: $VX_c(X_n) = [CE(X_n) - \delta R_c]^+$

Incremental Value of a Deal with Information: $VXI_c(X_n) = CE\big(VX_c(X_n|I_x)\big)$

Incremental Value of Information: $VOIX_c(X_n) = VXI_c(X_n) - VX_c(X_n)$
6.4.2 Extension of the Results to the Long-Run Problem
Here we characterize the problem parameters along similar lines as in Section 6.3. We find
that all the relations are maintained along the capacity dimension.

Proposition 6.4.1: Characterizing the Long-Run Problem
I. When offered the deal $X_n$, the decision maker should buy information if and only if $VXI_c(X_n) \ge VX_c(X_n) + k$. After information is received, the decision maker should accept the deal if and only if $CE(X_n|I_x) \ge R_c$. Otherwise, the deal is worth accepting without information if and only if $CE(X_n) \ge R_c$.
II. $V_c$ is non-decreasing in c.
III. $R_c$ is non-increasing in c.
IV. $VX_c(X_n)$ and $VXI_c(X_n)$ are non-decreasing in c.
V. $VOIX_c(X_n)$ is non-decreasing in c and reaches a maximum when $R_c = CE(X_n)$;
then it decreases to $CE(X_n^*) - CE(X_n)$, where $X_n^*$ is the deal with free information.
Figure 35 - Problem Structure with Infinite Horizon. From the state V_c|X_n: Accept yields δ V_{c−1} + CE(X_n); Reject yields δ V_c; Seek info at price m yields an indication I_i, after which Accept gives δ V_{c−1} + CE(X_n|I_n) − m and Reject gives δ V_c − m.
VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.
Policy improvement, iterative approximation algorithms, or a combination of the two can be
employed to find an optimal or a near-optimal stationary policy. We detail a policy
improvement algorithm below.
6.4.3 Policy Improvement Algorithm
Let a decision be a vector of actions, which determines whether to accept a deal of type n,
to reject such a deal, or to seek further information (denoted by A, R, and I, respectively) at
each capacity level and for each deal type. The present value of using the same decision D in
every period is V_c(D)|X_n if initially the capacity is c and the first deal observed is of type n.
The maximum present value when deal n is observed at capacity c is V_c|X_n. Also let V_c(D) be
the expectation of V_c(D)|X_n over all deal types. If q = (q_n) is the distribution of deal types,
then V_c(D) = Σ_n q_n V_c(D)|X_n. The policy improvement algorithm starts with an arbitrary
decision D and computes V_c(D)|X_n for every c and n.
We can start with the decision that consists of rejecting every deal at every capacity level,
since its value is already known to be zero. Alternatively, we can start with the decision to
accept every incoming deal. In each iteration, we look for a decision such that using this
decision in the very first period instead of the decision at hand returns a larger present value.
This algorithm ends after a finite number of iterations and outputs a stationary optimal
policy. This is the generic policy improvement algorithm, first introduced by Howard (1960).
When the option to seek further information is eliminated, we are left with a series of simple
stopping problems, i.e., problems in which there are two available actions, to continue or to
stop. This observation allows an efficient implementation of the policy improvement
algorithm, which terminates after at most Kc iterations if K is the total number of deal types
and c is the initial capacity.
When c = 1, the problem is trivially a simple stopping problem where the certain equivalent
CE(X_n) of the deal n is the reward from stopping when deal n is offered. If one chooses to
continue, then the capacity remains 1 and the cost of continuing is zero. When c > 1,
the problem in the present period can be regarded as a simple stopping problem in which
the reward from stopping is the certain equivalent of the observed deal CE(X_n) plus the
discounted maximum present value δ Σ_n q_n V_{c−1}|X_n at capacity c − 1. As shown in Veinott (2004), a
simple stopping problem with S states can be solved in at most S iterations with a policy
improvement algorithm that changes the action in at most one state in every iteration. In our
case, the number of states at each capacity level is the number of deal types, so solving the
corresponding simple stopping problem for a capacity level c requires at most K iterations.
Since the initial capacity is c, the algorithm should end in Kc iterations. Refer to Appendix 1
for the pseudo-code summarizing the algorithm.
When the deals are ordered in decreasing order of their certain equivalents, a threshold
policy is optimal. That is, deals with small indices will be accepted, whereas ones with higher
indices will be rejected. Therefore, if the action for a deal at a given capacity level is switched
to accepting, the action for all deals with indices lower than the index of this one should be
updated to accepting as well. Similarly, if rejecting a deal is optimal, it is optimal to reject
that deal at all smaller capacity levels. These principles can be exploited to simplify the
algorithm.
Policy Improvement Pseudo Code
Initialization:
  D(c, n) = A for all c, n
  V_0(D)|X_n = 0; V_0(D) = 0
  V_1(D)|X_n = CE(X_n)
  V_1(D) = Σ_n q_n V_1(D)|X_n
Iterations:
  For 1 ≤ c ≤ C:
    For 1 ≤ n ≤ K:
      If CE(X_n) + δ V_{c−1}(D) < δ V_c(D):
        D(c, n) = R
    Compute V_c(D)|X_n by solving the system:
      If D(c, n) = A:  V_c(D)|X_n = CE(X_n) + δ V_{c−1}(D)
      If D(c, n) = R:  V_c(D)|X_n = δ V_c(D)
    V_c(D) = Σ_n q_n V_c(D)|X_n
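A runnable version of the no-information case, exploiting the simple-stopping structure described above, can be sketched as follows (risk-neutral; the function name and input encoding are our own assumptions):

```python
def policy_improvement(q, ce, delta, C):
    """Policy improvement for the no-information discounted deal-flow problem.

    q[n]: deal-type probabilities; ce[n]: certain equivalents CE(X_n);
    delta: discount factor; C: initial capacity.
    Returns (V, D): V[c] is the pre-deal value of the optimal policy at
    capacity c, and D[c] is the set of deal types accepted at capacity c.
    """
    K = len(q)
    V = [0.0] * (C + 1)
    D = [set()] + [None] * C
    for c in range(1, C + 1):
        accept = set(range(K))        # initial decision: accept every deal
        while True:
            # Policy evaluation: V_c(D) solves a single linear equation, since
            # rejected deals keep capacity c and accepted deals drop to c - 1.
            num = sum(q[n] * (ce[n] + delta * V[c - 1]) for n in accept)
            den = 1.0 - delta * sum(q[n] for n in range(K) if n not in accept)
            Vc = num / den
            # Policy improvement: accept n iff stopping beats continuing.
            improved = {n for n in range(K)
                        if ce[n] + delta * V[c - 1] >= delta * Vc}
            if improved == accept:
                V[c], D[c] = Vc, accept
                break
            accept = improved
    return V, D
```

Each capacity level is solved as its own stopping problem, so the loop over c mirrors the "at most Kc iterations" bound discussed above.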
6.5 Extensions In this section we extend our work to allow for flexibility in capturing different assumptions.
We study the optimal policy in four cases. First, we look into situations when gathering
information imposes time and capacity costs in addition to monetary costs. For example, the
time spent on gathering information about a specific deal reduces our ability to source more
deals. Similarly, the time a partner allocates to gathering information might decrease the
time she has to be on company boards and hence decreases the capacity available to the
firm. We use this structure in the first and second extensions only.
In the second extension, we consider situations where the decision maker may choose to
reverse a single accepting decision. This can be seen as having the option to continue with
the allocation if no better opportunity arises. Otherwise, the decision maker, at a price, may
choose to reclaim his resource unit and allocate it to the new opportunity.
In the third extension, we employ the Probability of Knowing structure for information. In
this setup, information gathering is modeled as an exercise in obtaining perfect information.
This structure is useful when modeling situations in which the decision maker is seeking
information on a distinction that is known by others. Additionally, this structure helps to
assess the relative value of information-gathering activities.
6.5.1 Multiple Cost Structures In this section we extend our work on the value of information to include a delay and a
capacity cost in addition to the monetary cost. We set up the problem, extend our optimal
policy results, and then follow with a discussion of the use of multiple detectors.
Problem Structure
As an abstraction, we consider that information is offered at a cost of k resource units, d
delay units, and m monetary units. A graphical depiction of the problem is shown below.
Definition 6.5.1: Threshold with Multiple Cost Structures
We generalize the threshold R_c^t definition given in Section 6.3 to:
R_c^t(k, d) = V_c^{t+1} − V_{c−k}^{t+d}
Note that the threshold given earlier is now stated as R_c^t(1, 1).
The Optimal Policy with Multiple Cost Structures
Proposition 6.5.1a: The Optimal Information-Gathering Policy with Multiple Cost Structures
When offered a deal X_n, the decision maker should buy information if and only if
iVID_{c−k}^{t+d}(X_n) ≥ iVD_c^t(X_n) + R_c^t(k, d + 1) + m
Proposition 6.5.1b: The Optimal Allocation Policy with Multiple Cost Structures
After information is received, the decision maker should accept the deal if and only if
CE(X_n|I_n) ≥ R_{c−k}^{t+d}(1, 1)
Otherwise, the deal is worth accepting without information if and only if
CE(X_n) ≥ R_c^t(1, 1)
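The multi-cost model can be computed backward from the deadline as a finite-horizon dynamic program. The sketch below is risk-neutral; the timing convention that information resolves after d extra periods follows Figure 36, and the function name and `post` encoding are our own assumptions:

```python
def solve_multi_cost(q, ce, post, T, C, k, d, m):
    """Finite-horizon deal-flow DP with information priced at k capacity
    units, d delay units, and m monetary units (risk-neutral sketch).

    V[t][c]: value at the start of period t with capacity c, before a deal
    arrives. Periods beyond the deadline T are worth zero.
    """
    V = [[0.0] * (C + 1) for _ in range(T + d + 2)]
    for t in range(T, 0, -1):
        for c in range(1, C + 1):
            total = 0.0
            for n, qn in enumerate(q):
                accept = V[t + 1][c - 1] + ce[n]
                reject = V[t + 1][c]
                alternatives = [accept, reject]
                if c - 1 - k >= 0:  # info must leave room to allocate afterward
                    info = sum(p * max(V[t + 1 + d][c - 1 - k] + ce_i,
                                       V[t + 1 + d][c - k])
                               for p, ce_i in post[n]) - m
                    alternatives.append(info)
                total += qn * max(alternatives)
            V[t][c] = total
    return V
```

Setting k = 0 and d = 1 with a prohibitive m recovers the behavior of the basic no-information model.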
Example 6.5.1
To better illustrate this extension, we change the parameters of the problem: we have a
capacity of 8 resource units and a time horizon of 10 periods. Figure 37 shows the value of
the deal flow with perfect information (clairvoyance) at different cost structures.
Figure 36 - Problem Structure with Information at Multiple Cost Types. From the state V_c^t|X_n: Accept yields V_{c−1}^{t+1} + CE(X_n); Reject yields V_c^{t+1}; Seek info at a cost of k capacity units, d delay units, and m monetary units yields an indication I_i, after which Accept gives V_{c−1−k}^{t+1+d} + CE(X_n|I_n) − m and Reject gives V_{c−k}^{t+1+d} − m.
Figure 37 β Example 6.5.1: Value of the deal flow with clairvoyance at different cost structures
The red line represents the value of the original deal flow with no clairvoyance. The blue line
shows the value of the deal flow with clairvoyance offered at a cost of 3 monetary units. The
green line shows the value of the deal flow when clairvoyance is offered at 3 time units per
usage. Finally the black line shows the value when the cost of clairvoyance is 3 resource units
of capacity.
Note that at the beginning of the time horizon, the deal flow with clairvoyance at a capacity
cost is worth less than that with clairvoyance at a monetary cost. This changes as time passes,
once the capacity at hand exceeds the number of periods left.
Figure 38 shows the iVoI of clairvoyance in the different cases described above.
Figure 38 - Example 6.5.1: Incremental value of information for the deal with clairvoyance at different cost structures
As shown in the properties above, iVoI increases with time. Note that the
green line (with delay cost) drops to zero at t = 9: with only two units of time left, the
decision maker cannot afford the delay cost within this deal flow.
Multiple Detectors
We consider the case when the decision maker is offered two irrelevant detectors with
indications I_1 and I_2 and costs (k_1, d_1, m_1) and (k_2, d_2, m_2), respectively.
Corollary 6.5.1: Identifying the Optimal Detector with Multiple Cost Structures
Given the scenario above, Detector 1 will be optimal when:
iVI_1 − iVI_2 ≥ R_c^t(k_2, d_2 + 1) − R_c^t(k_1, d_1 + 1) + m_1 − m_2
where iVI_1 = iVID_{c−k_1}^{t+d_1} over I_1 and iVI_2 = iVID_{c−k_2}^{t+d_2} over I_2.
Otherwise, Detector 2 will be optimal.
This analysis can be extended to more than two detectors. Also, as previously discussed, this
criterion is not myopic. If the decision maker is offered both detectors, she might still start
with the less βoptimalβ detector.
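Given precomputed inputs, Corollary 6.5.1 reduces to a single comparison. A hedged sketch (the function name and argument encoding are our own; all quantities are assumed already evaluated from the value function):

```python
def better_detector(iVI1, iVI2, R1, R2, m1, m2):
    """Pick between two detectors per Corollary 6.5.1.

    iVI_i : incremental value with detector i's information, evaluated at the
            post-investigation state (c - k_i, t + d_i)
    R_i   : threshold R_c^t(k_i, d_i + 1) for detector i
    m_i   : detector i's monetary price
    """
    # Detector 1 wins when its extra value covers the extra threshold and price.
    return 1 if iVI1 - iVI2 >= R2 - R1 + m1 - m2 else 2
```

As noted in the text, this comparison identifies the single best detector but is not myopic when both detectors may be used in sequence.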
6.5.2 Decision Reversibility In this section we consider the option of reversing decisions. For a background on options
within Decision Analysis please refer to Howard (1995).
Problem Setup and Definitions
This section requires generalizing the definitions used in this chapter so far.
Definition 6.5.2: Values with Decision Reversibility
We extend the definitions of the values from Section 6.3. The generalizations include adding
a placeholder for the deal with an option.
The Certain Equivalent of a Deal Flow with the Option to Reverse a Decision
V_c^t(Z)|X_n is the certain equivalent of the future value of the deal flow at time t with remaining
capacity c after observing a deal X_n, with an option to reverse an allocation to deal Z.
V_c^t(Z) is the certain equivalent of the future value of the deal flow at time t with remaining
capacity c before observing a specific deal, with an option to reverse an allocation to deal Z.
Threshold with an Option to Reverse a Decision
R_c^t(k, d, Z) is the threshold with an option to reverse an allocation to deal Z:
R_c^t(k, d, Z) = V_c^{t+1}(Z) − V_{c−k}^{t+d}(Z)
The Incremental Cost of Capacity
iCC^t(X_n, Z) is the incremental cost of capacity:
iCC^t(X_n, Z) = [R_c^t(1, 1, Z) − CE(X_n)]^+
Definition 6.5.2: The Incremental Value of a Deal with an Option
iVDO^t(X_n, Z) = V_c^{t+1}(X_n) − V_{c−1}^{t+1}(Z)
The incremental value of a deal with the option to reverse an investment decision is defined
as the indifference buying price of the deal with the option and is equal to the contribution
of the deal with a free option to the value of the deal flow.
We limit our discussion to the option of reversing a single allocation decision at any time. The
decision maker can keep an option on one deal only. So, in effect, the decision maker has
three alternatives at each stage. The first alternative is to reject. The second is to accept
without buying an option and the third is to accept with an option to reverse the decision
later.
In general, we assume that the decision maker has previously bought an option on deal Z and
now he is offered a deal X_n at time t with remaining capacity c. The first alternative is for the
decision maker to reject and not change anything. Another alternative is for him to accept
deal X_n without buying an option so that he still has the option on Z. The third alternative is
for him to accept X_n and buy an option. Here, since we allow one option only, he discards Z
and does not allocate more capacity. Figure 39 is a graphical description of the deal.
Optimal Policy with an Option to Reverse an Allocation Decision
Proposition 6.5.2: Optimal Allocation Policy with an Option
When offered a deal X_n with an option on deal Z, the decision maker should accept the deal
and buy an option on it if and only if:
iVDO^t(X_n, Z) ≥ iCC^t(X_n, Z) + m + CE(Z)
Otherwise, the decision maker should accept if and only if:
CE(X_n) > R_c^t(1, 1, Z)
Therefore, the option is worth buying when the incremental value of the option is higher
than the sum of its price, the incremental cost of capacity, and the value of the liquidated
deal.
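Proposition 6.5.2 yields a three-way decision rule. A sketch follows, with all quantities assumed precomputed from the value function (the function name and argument encoding are our own):

```python
def reversibility_decision(iVDO, iCC, m, ce_Z, ce_Xn, R):
    """Decision rule of Proposition 6.5.2 (sketch).

    iVDO  : incremental value of the new deal with an option, iVDO^t(X_n, Z)
    iCC   : incremental cost of capacity, iCC^t(X_n, Z)
    m     : price of the option
    ce_Z  : certain equivalent CE(Z) of the deal currently under option
    ce_Xn : certain equivalent CE(X_n) of the offered deal
    R     : threshold R_c^t(1, 1, Z)
    """
    if iVDO >= iCC + m + ce_Z:
        # Option value covers its price, the capacity cost, and the liquidated deal.
        return "accept with option"
    if ce_Xn > R:
        return "accept"
    return "reject"
```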
6.5.3 The Probability of Knowing Detectors In this section we consider the probability of knowing detectors. This is an alternative
method of modeling information-gathering activities defined by Howard (1998). We include
this procedure, as it allows us a clear way to value and order detectors. Here a detector is
modeled as a probability of obtaining clairvoyance versus no information. In this extension
we do not use the multiple cost structure. We set up the problem, extend our optimal policy
results, and then follow with a discussion of using multiple detectors.
Figure 39 - Problem structure with an option to reverse an allocation decision. From the state V_c^t(Z)|X_n: Reject yields V_c^{t+1}(Z); Accept without buying an option yields V_{c−1}^{t+1}(Z) + CE(X_n); Accept and buy an option yields V_c^{t+1}(X_n) + CE(X_n) − CE(Z) − m.
Problem Structure
The setup here differs from the basic one in the structure of the information. When the
decision maker seeks information, he either gets clairvoyance with probability p or gets no
information. After getting the indication, the decision maker can then decide whether to
accept or reject the deal at hand. The following is a graphical representation of the problem.
If decision makers receive clairvoyance, they get the incremental value with perfect
information (iVPI) as
iVPI_c^t(X_n) = CE(iVD_c^t(X_n|I))
If they do not receive clairvoyance, the deal does not change.
The Optimal Policy with a Probability of Knowing Detectors
Proposition 6.5.3a: The Optimal Information-Gathering Policy with a Probability of Knowing Detectors
Given a detector defined as above with a probability of knowing p and price m, the decision
maker should buy information if and only if:
u(iVPI_c^t(X_n) − iVD_c^t(X_n)) > u(m)/p
Figure 40 - Problem Structure with the Probability of Knowing Detectors. From the state V_c^t|X_n: Accept yields V_{c−1}^{t+1} + CE(X_n); Reject yields V_c^{t+1}; Seek info at price m yields clairvoyance with probability p, after which Accept gives V_{c−1}^{t+1} + CE(X_n|I_n) − m and Reject gives V_c^{t+1} − m; with probability 1 − p no clairvoyance is received, and Accept gives V_{c−1}^{t+1} + CE(X_n) − m while Reject gives V_c^{t+1} − m.
Proposition 6.5.3b: The Optimal Allocation Policy with a Probability of Knowing Detectors
If clairvoyance is received, the decision maker should accept the deal if and only if:
CE(X_n|I_n) ≥ R_c^t
Otherwise, if no clairvoyance is received or the decision maker did not buy information, then
the decision maker should accept the deal if and only if:
CE(X_n) ≥ R_c^t
Multiple Detectors with a Probability of Knowing Detectors
Now say we have two irrelevant detectors with probabilities of clairvoyance p_1 and p_2 and
costs m_1 and m_2, respectively.
Corollary 6.5.3: Identifying the Optimal Detector with a Probability of Knowing Detectors
Given the setup above, Detector 1 will be optimal when
u(m_1)/p_1 < u(m_2)/p_2
Otherwise, Detector 2 will be optimal. In this setup, the optimality is myopic, so if we have
multiple irrelevant detectors we use them in increasing order of u(m)/p.
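Because optimality is myopic here, ordering probability-of-knowing detectors reduces to a sort on the Corollary 6.5.3 criterion. A sketch (the function name and the (p, m) encoding are our own assumptions; u defaults to the risk-neutral identity):

```python
def order_detectors(detectors, u=lambda x: x):
    """Order irrelevant probability-of-knowing detectors for use, per
    Corollary 6.5.3: ascending u(m)/p, i.e., cheapest utility price per
    unit probability of clairvoyance first.

    detectors: list of (p, m) pairs, where p is the probability of
    clairvoyance and m is the detector's price.
    """
    return sorted(detectors, key=lambda pm: u(pm[1]) / pm[0])
```

For example, a detector with p = 0.5 and m = 0.01 (ratio 0.02) would be used before one with p = 0.9 and m = 0.09 (ratio 0.1).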
Chapter 7 β Step 2.2: The Multiplicative Valuation Funnel
βDoubt is not a pleasant condition, but
certainty is absurd.β
François-Marie Voltaire
As stated in the introduction, this chapter follows the structure of Chapter 6 and will contain
repeated information. The goal here is to have each chapter be independent of the other so
that readers can elect to read the chapter relevant to their context.
Figure 41 - Step 2 of the Solution Process
7.1 Overview
The second step of our template is to build the valuation funnel; in this chapter we build it
for the multiplicative setting. After this step, decision makers will have calculated the
indifference buying fractions of the deals, information, control, etc. These indifference
fractions help the decision maker make real-time decisions when the alternatives are
available. In addition, the resulting valuation funnel can easily be used to evaluate
alternatives concerning the deal flow as a whole. More on the application of the funnel is
given in Chapter 8.
This chapter is organized into six parts. In Section 2 we give the basic problem structure, in
which we describe the model and define the main terms. In Section 3 we present the optimal
policy, characterize the values within the deal flow, and then return to characterizing the
optimal policy. In Section 4 we extend our main results to the long-run problem structure.
Finally, in Section 5, we study some extensions to the basic model. These extensions are a
more flexible cost structure, the use of the probability of knowing detectors, the option of
reversing an allocation decision, and finally the value of perfect hedging. Note that the
results for the probability of knowing detectors differ from their Chapter 6 counterparts more than the others do.
7.2 Basic Problem Structure
The basic problem considered here concerns a decision maker who is offered a deal at the
start of each time period over a horizon of T periods. When offered a deal, the decision
maker can accept the deal, reject it, or seek information and then decide. The basic model
assumes independence over time and among deals.
When a decision maker is offered a deal X_n she has one of three alternatives. If she directly
accepts the deal, she allocates one unit of her capacity (with initial capacity C) and gets the
deal's certain multiplier. If she rejects the deal, she keeps her capacity. Finally, she can seek
information at a cost of a fraction f of her capacity. If she seeks information, she will observe
an indication that updates her beliefs about the deal and then will decide whether or not to
accept the deal.
Figure 42 shows the structure of the dynamic program, where V_c^t is the certain multiplier of
a future multiple of the deal stream at time t with capacity c remaining before observing a
specific deal. V_c^t|X_n is the certain multiplier of a future multiple after observing the deal.
For a review of research on the value of information, please refer to Howard (1967).
7.2.1 Definitions
Definition 7.2.1: The Marginal Multiple of Capacity
We denote the marginal multiple of a unit of capacity at (c, t) by R_c^t, defined as
R_c^t = V_c^{t+1} / V_{c−1}^{t+1}
Definition 7.2.2: The Incremental Multiple of a Deal
We denote the incremental multiple of a deal X_n at (c, t) by iMD_c^t(X_n), defined as
iMD_c^t(X_n) = (CM(X_n) / R_c^t) ∨ 1
that is, the maximum of the ratio CM(X_n)/R_c^t and 1. This is the indifference buying fraction (IBF) of the deal X_n at time t and capacity c.
Definition 7.2.3: The Incremental Multiple of a Deal with Information
We denote the incremental multiple of a deal X_n with information at (c, t) by iMID_c^t(X_n),
defined as
iMID_c^t(X_n) = CM(iMD_c^t(X_n|I_n))
This is the indifference buying fraction (IBF) of the deal X_n when offered information with
indication I_n at time t and capacity c.
Definition 7.2.4: The Incremental Multiple of Information
We denote the incremental multiple of information about a deal X_n at (c, t) as iMoI_c^t(X_n),
defined as the IBF of information on this deal at (c, t), or
Figure 42 - Basic Problem Structure. From the state V_c^t|X_n: Accept yields V_{c−1}^{t+1} · CM(X_n); Reject yields V_c^{t+1}; Seek info at a capacity fraction f yields an indication I_i, after which Accept gives V_{c−1}^{t+1} · CM(X_n|I_n) · (1 − f) and Reject gives V_c^{t+1} · (1 − f).
iMoI_c^t(X_n) = iMID_c^t(X_n) / iMD_c^t(X_n)
Example 7.2.1
We refer to the following simple example throughout this chapter. We consider the situation
facing a risk-averse decision maker with the following setup: The decision maker exhibits
power utility
u(x) = x^λ / λ
with λ = 0.1. The decision maker has a time horizon of 50 periods (T = 50) and can accept at
most 4 opportunities (C = 4). The deals have the following structure (and differ in p):
Figure 43 - Example Deal Structure
In each time period, the decision maker can either get nothing or one of six possible deals.
Figure 44 - Example Deal Flow
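Under the power utility above, the certain multiplier of a stand-alone deal satisfies u(CM) = E[u(X)], so CM = (E[X^λ])^{1/λ}. A small sketch (the function name and the outcome encoding as (probability, multiplier) pairs are our own assumptions):

```python
def certain_multiplier(outcomes, lam=0.1):
    """Certain multiplier CM(X) of a multiplicative deal under the power
    utility u(x) = x**lam / lam of Example 7.2.1.

    outcomes: list of (probability, multiplier) pairs describing the deal.
    """
    # u(CM) = E[u(X)]  =>  CM = (E[X**lam]) ** (1 / lam)
    return sum(p * x ** lam for p, x in outcomes) ** (1.0 / lam)
```

For a deal that quadruples or merely returns the stake with equal odds, the certain multiplier comes out below the expected multiple of 2.5, reflecting the decision maker's risk aversion.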
7.2.2 The Multiple of Control The multiple of control can be studied along the same lines as the multiple of information. All
of our results below for the multiple of information can be duplicated for that of control. For
a more detailed study of the value of control, please refer to Matheson & Matheson (2005).
7.3 Main Results In this section we present the main results of this chapter. We present the optimal policy and
characterize the main values across the model parameters (c,t). Specifically, we characterize
the optimal policy, the multiple of the deal flow, the incremental multiple of capacity, the
incremental multiple of deals with and without information, and the buying fraction of
information.
7.3.1 Optimal Policy
Given the setup above, the decision maker's optimal policy is defined in the following
proposition.
Proposition 7.3.1a: The Optimal Information-Gathering Policy
When offered a deal X_n, the decision maker should buy information if and only if
iMID_c^t(X_n) ≥ iMD_c^t(X_n) / (1 − f)
Proposition 7.3.1b: The Optimal Allocation Policy
After information is received, the decision maker should accept the deal if and only if
CM(X_n|I_n) ≥ R_c^t
Otherwise, the deal is worth accepting without information if and only if
CM(X_n) ≥ R_c^t
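The recursion in Figure 42 can be computed backward from the deadline. The sketch below treats the certain multipliers as given and values the flow by taking expectations over multiples, a simplification; the function name and `post` encoding are our own assumptions:

```python
def solve_multiplicative(q, cm, post, T, C, f):
    """Finite-horizon DP for the multiplicative funnel (sketch of Figure 42).

    q[n]   : deal-type probabilities
    cm[n]  : certain multiplier CM(X_n) of deal type n
    post[n]: list of (probability, posterior CM) pairs, one per indication
    f      : capacity fraction paid for information
    V[t][c] is the multiple at the start of period t with capacity c; doing
    nothing (or running out of time or capacity) leaves the multiple at 1.
    """
    V = [[1.0] * (C + 1) for _ in range(T + 2)]
    for t in range(T, 0, -1):
        for c in range(1, C + 1):
            total = 0.0
            for n, qn in enumerate(q):
                accept = V[t + 1][c - 1] * cm[n]
                reject = V[t + 1][c]
                info = (1.0 - f) * sum(p * max(V[t + 1][c - 1] * cm_i,
                                               V[t + 1][c])
                                       for p, cm_i in post[n])
                total += qn * max(accept, reject, info)
            V[t][c] = total
    return V
```

The marginal multiple of capacity then follows directly as R_c^t = V[t+1][c] / V[t+1][c−1].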
7.3.2 Characterizing the Certain Multiplier and Threshold
Given the above structure, we can show that the certain multiplier of the deal flow (V_c^t) and
the marginal multiple of capacity (R_c^t) change in intuitive ways with capacity (c) and time (t).
Proposition 7.3.2: Characterizing the Deal Flow Certain Multiplier
I. V_c^t is non-decreasing in c for all t
II. V_c^t is non-increasing in t for all c
In other words, the value of the deal flow increases as the capacity at hand increases and
decreases as the deadline approaches.
Example 7.3.1 Using the setup example above, Figure 45 is a graph of the deal flow multiple.
Figure 45 - Example Deal Flow Value
As stated in Proposition 7.3.2, the multiple of the deal flow decreases with time and
increases with capacity. These observations agree with the intuition that more capacity and
more time are desirable. As time progresses, the chance of getting high-valued deals, and
hence of earning high rewards in the future, diminishes.
Proposition 7.3.3: Characterizing the Threshold
I. R_c^t is non-increasing in c for all t
II. R_c^t is non-increasing in t for all c
Proposition 7.3.3 states that the marginal multiple of capacity decreases as capacity
increases and as the deadline approaches. Otherwise stated, the optimal policy becomes
more lenient with more capacity at hand and as we approach our deadline.
7.3.3 Characterizing Indifference Buying Fractions
Based on the above results, we characterize the way in which the indifference buying
fraction of a deal iMD_c^t and that of the deal with information iMID_c^t change with c and t. Then
we characterize the indifference buying fraction of information iMoI_c^t.
Corollary 7.3.1: Characterizing the IBF of Deals (iMD_c^t(X_n) and iMID_c^t(X_n))
The IBF multiples of a deal X_n with and without information exhibit the following two properties:
I. iMD_c^t(X_n) and iMID_c^t(X_n) are non-decreasing in c for all t
II. iMD_c^t(X_n) and iMID_c^t(X_n) are non-decreasing in t for all c
The marginal multiple of capacity is non-increasing in time and capacity. From this, it
immediately follows that the incremental multiple of the deal ππ when the capacity is c at
time t is non-decreasing in both c and t. That is, the multiple of a deal increases as the
deadline approaches and/or the available capacity increases.
Proposition 7.3.4: Characterizing the IBF of Information (iMoI_c^t(X_n))
The IBF of information exhibits the following properties:
I. For a given value of c, the IBF of information is increasing in t and reaches a maximum when R_c^t = CM(X_n); then it decreases in t until it converges at CM(X_n*)/CM(X_n).
II. For a given value of t, the IBF of information is increasing in c and reaches a maximum when R_c^t = CM(X_n); then it decreases in c until it converges at CM(X_n*)/CM(X_n).
where X_n* is the deal with free information. The proposition above can be represented
graphically as follows. We define the multiple of information as MoI = CM(X_n*)/CM(X_n).
Figure 46 - Characterizing the IBF of Information
Example 7.3.2 Based on the example scenario described above, the following graph characterizes the IBF of
information for a specific deal.
Figure 47 - Example IBF Value
In Figure 47 we can see that the IBF of information increases with time and capacity until it
reaches a maximum when R_c^t = CM(X_n). Before that point, the deal n is not worth buying
without information; hence, the only alternative to buying information is to reject the deal.
After this point, the alternative to buying information is to buy the deal without information.
Hence, the value of information decreases as iMD_c^t(X_n) increases. At the threshold we have
iMD_c^t(X_n) = CM(X_n) and iMD_c^t(X_n*) = CM(X_n*), so the value of information within the dynamic
program ultimately converges to the multiple of information for a stand-alone deal.
Another interesting point illustrated in Figure 47 is that the multiple of information for a deal
within the deal flow can exceed the multiple of information for the stand-alone deal. The
reason is that the multiple of information is relative to the incremental multiples of the deal
with and without information. The incremental multiple of the deal without information
might equal one even if the stand-alone multiple of the deal is larger than one. Hence, the
ratio of the incremental multiples can be higher than the ratio of the multiples of the
stand-alone deal with and without information. Note that this is not the case for the multiple
of a deal: the multiple of a deal within the deal flow can never exceed the multiple of the
stand-alone deal.
7.3.4 Characterizing the Optimal Policy
Proposition 7.3.5: Characterization of the Optimal Policy
The optimal policy for a given deal X_n is characterized as follows:
I. For a given value of c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given value of t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.
Proposition 7.3.5 states that if a certain type of deal is to be accepted whenever the capacity
is c, then it must be accepted also for higher capacity levels. Similarly, if it is rejected at the
capacity level c, then it must be rejected for all lower capacity levels. Further information is
sought only for a bounded range of capacity levels. In other words, if it is optimal to buy
information for a deal X_n and capacity c, then for higher capacity levels one would never
reject without information and for lower capacity levels one would never accept without
information. The same behavior is observed also when the capacity is fixed and time
progresses. These statements are represented graphically below:
Figure 48 β Characterization of Optimal Policy
Example 7.3.3 The following graph shows the case at three units of capacity when the decision maker is
offered a detector with symmetric accuracy of 0.9 that costs a 5% fraction of current
capacity. By symmetric accuracy, we mean that the detector indicates the probability of
success and the probability of failure with equal accuracy.
Figure 49 - Example Optimal Policy Over Time
The graph illustrates the principles of the proposition. As time progresses, the optimal action
changes in one direction: from rejecting to buying information and from buying information
to accepting. Note that the same pattern is observed over deals in this example; however,
this is not true in general.
7.3.5 Multiple Detectors Here we discuss the situation in which the decision maker is offered multiple irrelevant
detectors. The following proposition identifies the optimal detector. We then discuss
ordering detectors.
Proposition 7.3.6: Identifying the Optimal Detector
Consider two detectors with incremental multipliers of deals with information iMI_1 and
iMI_2 and cost fractions f_1 and f_2, respectively. Detector 1 will be optimal when:
iMI_1 / iMI_2 > (1 − f_2) / (1 − f_1)
Otherwise, Detector 2 will be optimal.
This optimality is not myopic, that is, if the decision maker is offered the use of both, he
should not always start with the optimal detector.
Example 7.3.4: Multiple Detectors
To illustrate the points above we consider two detectors. Detector 1 has an accuracy of
0.9 and cost fraction 5%. Detector 2 has an accuracy of 0.8 and cost fraction 1%. Figure 50
illustrates the case with capacity = 3 and deal 3. It shows the difference between the iMI
values of both detectors.
Figure 50 - Example 7.3.4: The ordering of detectors is not myopic.
The graph above shows that for deal 4 (p = 0.65), from t = 41 onwards the ratio between iMI_1
and iMI_2 is always greater than the ratio between the detectors' costs. This indicates that
we should always choose Detector 1.
The following graph shows the optimal policy for capacity 3. The vertical axis is the
probability of technical success, whereas the horizontal axis is the time period. The βacceptβ,
βuse detector 1β, βuse detector 2β, and βrejectβ alternatives are color coded.
Recall that after t=41, it was optimal to use Detector 1 if the decision maker had to choose
between the two detectors. When we allow the use of both detectors, the graph below
shows that it is optimal to begin with Detector 2 (t>45). Thus, the optimality is not myopic.
Figure 51 - Example 7.3.4: The ordering of detectors is not myopic.
7.4 The Long-Run Problem Here we discuss the situation in which the decision maker is interested in situations with no
deadline. This is the case when the decision maker does not face relevant limitations on
time. In the case of a movie producer, for example, the decision maker only considers the
value of time through discounting but does not have a deadline imposed on allocating his
capacity.
As Howard & Matheson (1972) discussed, discounting in the risk-averse model causes
inconsistency. For this reason, we limit our discussion of the long-run problem to the risk-
neutral setting and leave the risk-averse model for future research.
7.4.1 Problem Structure and Definitions In many practical instances of this problem, the decision maker faces a large number of time
periods. Consequently, the computational complexity to find an optimal policy increases.
Moreover, the dynamic nature of the optimal policy makes the storage and the
administration of the policy difficult. To overcome these difficulties, we look into the
infinite-horizon problem, i.e., one in which the deadline T is infinite. The main virtue of the
infinite-horizon problem is that, under certain conditions, it admits a stationary optimal policy. This
policy is in turn nearly optimal for the finite horizon problem, with an error diminishing with
an increasing actual time horizon.
We reformulate our problem to introduce a discount factor 0 < δ < 1 in the value function
so that the maximum value is guaranteed to be finite and a stationary optimal policy exists.
Discounting ensures that the expected present value is finite under every policy.
For the infinite-horizon model, let V_c|X_n denote the maximum expected multiple when the
available capacity is c and the current deal offered is X_n. If c ≥ 1 and this deal is accepted,
the decision maker collects its certain multiplier and moves on to the next period with
capacity c − 1. If the deal is not accepted, then the capacity remains the same in the next
period. The third available action for a positive capacity is to investigate the deal further
(gather information) and base the decision to accept or reject it on the outcome of the
investigation. Figure 52 summarizes the evolution of the discounted model.
where δ is the discount factor.
Definition 7.4.1: Multiples in the Long-Run Problem
We extend the definitions of the values from Section 7.3 to the long-run problem.
The Marginal Multiple of Capacity
$R_c = \frac{V_c}{V_{c-1}}$
The Incremental Multiple of a Deal
$iM_n(X_n) = \frac{CM(X_n)}{\delta R_c} \vee 1$
The Incremental Multiple of a Deal with Information
$iMI_n(X_n) = CM(iM_n(X_n \mid I_n))$
The Incremental Multiple of Information
$iMOI_n(X_n) = \frac{iMI_n(X_n)}{iM_n(X_n)}$
7.4.2 Extension of the Results to the Long-Run Problem
Here we characterize the problem parameters along similar lines as in Section 7.3. We find
that all the relations are maintained along the capacity dimension.
Proposition 7.4.1: Characterizing the Long-Run Problem
I. When offered the deal n, the decision maker should buy information if and only if $iMI_n(X_n) \ge iM_n(X_n)/(1-f)$. After information is received, the decision maker should buy the deal if and only if $CM(X_n \mid I_n) \ge R_c$. Otherwise, the deal is worth buying without information if and only if $CM(X_n) \ge R_c$.
II. $V_c$ is non-decreasing in c.
III. $R_c$ is non-increasing in c.
Figure 52 - Problem Structure with Infinite Horizon
(Branch values at $V_c \mid X_n$: accept, $\delta V_{c-1} \cdot CM(X_n)$; reject, $\delta V_c$; seek information, then accept, $\delta V_{c-1} \cdot CM(X_n \mid I_n) \cdot (1-f)$, or reject, $\delta V_c \cdot (1-f)$.)
IV. $iM_n(X_n)$ and $iMI_n(X_n)$ are non-decreasing in c.
V. $iMOI_n(X_n)$ is non-decreasing in c and reaches a maximum when $R_c = CM(X_n)$; then it decreases to $R_c = CM(X_n^*)$, where $X_n^*$ is the deal with free information.
VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.
7.5 Extensions
In this section we extend our work to allow for flexibility in capturing different assumptions.
We study the optimal policy within three cases. First, we look into situations when gathering
information imposes time and capacity costs in addition to monetary costs. For example, the
time spent on gathering information about a specific deal reduces our ability to source more
deals. Similarly, the time a partner allocates to gathering information might decrease the
time she has to be on company boards and hence decreases the capacity available to the
firm. We use this structure in the first and second extensions only.
In the second extension, we consider situations where the decision maker may choose to
reverse a single accepting decision. This can be seen as having the option to continue with
the allocation if no better opportunity arises. Otherwise, the decision maker, at a price, may
choose to reclaim his resource unit and allocate it to the new opportunity.
In the third extension, we employ the Probability of Knowing structure for information. In
this setup, information gathering is modeled as an exercise in obtaining perfect information.
This structure is useful when modeling situations where the decision maker is seeking
information on a distinction that is known by others. Additionally, this structure helps to
assess the relative value of information-gathering activities.
7.5.1 Multiple Cost Structures
In this section we extend our work on the value of information to include a delay and a
capacity cost in addition to the monetary cost. We set up the problem, extend our optimal
policy results, and then follow with a discussion of the use of multiple detectors.
Problem Structure
As an abstraction, we consider that information is offered at a cost of k resource units, d
delay units, and a cost fraction f. A graphic representation of the problem is shown below in Figure 53.
Definition 7.5.1: Threshold with Multiple Cost Structures
We generalize the threshold $R_c^t$ definition given in Section 7.3 to:
$R_c^t(k,d) = \frac{V_c^{t+1}}{V_{c-k}^{t+d}}$
Note that the threshold given earlier is now stated as $R_c^t(1,1)$.
The Optimal Policy with Multiple Cost Structures
Proposition 7.5.1a: The Optimal Information Gathering Policy with Multiple Cost Structures
When offered a deal $X_n$ the decision maker should buy information if and only if
$iMI_{c-k}^{t+d}(X_n) \ge iM_c^t(X_n) \cdot \frac{R_c^t(k,\,d+1)}{1-f}$
Proposition 7.5.1b: The Optimal Allocation Policy with Multiple Cost Structures
After information is received, the decision maker should buy the deal if and only if
$CM(X_n \mid I_n) \ge R_{c-k}^{t+d}(1,1)$
Otherwise, the deal is worth buying without information if and only if
$CM(X_n) \ge R_c^t$
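The policy above can be illustrated with a backward-induction sketch for the clairvoyance case of Figure 53. This is a sketch under stated assumptions: risk neutrality, a single deal type with known outcome multipliers, information that reveals the deal's outcome exactly, and boundary values of 1 at the deadline and at zero capacity; all names and numbers are illustrative, not the dissertation's.

```python
def flow_with_info(outcomes, probs, T, c_max, k, d, f):
    """V[t][c]: expected multiple when information costs k capacity units,
    d delay units, and a cost fraction f, and reveals the deal's outcome.
    Boundaries: V[T][c] = 1 at the deadline and V[t][0] = 1 with no capacity."""
    V = [[1.0] * (c_max + 1) for _ in range(T + 1)]
    em = sum(m * p for m, p in zip(outcomes, probs))  # CM of the deal (risk-neutral)
    for t in range(T - 1, -1, -1):
        s = min(t + 1 + d, T)          # period reached after the information delay
        for c in range(1, c_max + 1):
            accept = V[t + 1][c - 1] * em
            reject = V[t + 1][c]
            best = max(accept, reject)
            if c > k:                  # enough capacity left to pay the k units
                # learn the outcome m, then accept or reject (Figure 53 branches)
                info = (1 - f) * sum(p * max(V[s][c - 1 - k] * m, V[s][c - k])
                                     for m, p in zip(outcomes, probs))
                best = max(best, info)
            V[t][c] = best
    return V
```

Near the deadline the clamped index s makes delayed information worthless, matching the behavior of the delay-cost case discussed in Example 7.5.1.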
Example 7.5.1
To better illustrate, we change the parameters of the problem: we have a capacity of 8 resource units and a time horizon of 10 periods. The following graph shows the
value of the deal flow with perfect information (clairvoyance) at different cost structures.
Figure 53 - Problem Structure with Information at Multiple Cost Types
(Branch values at $V_c^t \mid X_n$: accept, $V_{c-1}^{t+1} \cdot CM(X_n)$; reject, $V_c^{t+1}$; seek information, then accept, $V_{c-1-k}^{t+1+d} \cdot CM(X_n \mid I_n) \cdot (1-f)$, or reject, $V_{c-k}^{t+1+d} \cdot (1-f)$.)
Figure 54 - Example 7.5.1: Multiples of the Deal Flow with Clairvoyance at Different Cost Structures
The red line represents the multiple of the original deal flow with no clairvoyance. The blue
line shows the multiple of the deal flow with clairvoyance offered at a cost fraction of 5%.
The green line shows the multiple of the deal flow when clairvoyance is offered at 3 time
units per usage. Finally, the black line shows the multiple when the cost of clairvoyance is 3
resource units of capacity.
Note that in the beginning of the time horizon, the deal flow with clairvoyance at a capacity
cost is worse than that with clairvoyance at a cost fraction. This changes as time passes when
the capacity at hand exceeds the number of periods left.
Figure 55 shows the ππππΌ of clairvoyance in the different cases described above.
Figure 55 - Example 7.5.1: Incremental multiples of the deal with clairvoyance at different cost structures
As shown in the properties above, $iMOI$ increases with time. Note that the
green line (with delay cost) goes to 1 (i.e., worthless) at time 10 because within this deal flow
the decision maker cannot afford the delay cost, since there are only two units of time left.
Multiple Detectors
We consider the case when the decision maker is offered two irrelevant detectors with indications $I_1$ and $I_2$ and costs $(k_1, d_1, f_1)$ and $(k_2, d_2, f_2)$, respectively.
Corollary 7.5.1: Identifying the Optimal Detector with Multiple Cost Structures
Given the setup above, Detector 1 will be optimal when:
$\frac{iMI_1}{iMI_2} \ge \frac{R_c^t(k_2,\,d_2+1)}{R_c^t(k_1,\,d_1+1)} \cdot \frac{1-f_2}{1-f_1}$
where $iMI_1 = iMI_{c-k_1}^{t+d_1} \mid I_1$ and $iMI_2 = iMI_{c-k_2}^{t+d_2} \mid I_2$.
Otherwise, Detector 2 will be optimal.
This can be extended to more than two detectors. Also, as previously discussed, this criterion
is not myopic. If the decision maker is offered both detectors she might still start with the
less "optimal" detector.
7.5.2 Decision Reversibility
In this section we consider the option of reversing decisions. For a background on options within Decision Analysis please refer to Howard (1995).
Problem Setup and Definitions
This section requires generalizing the definitions used in this chapter so far.
Definition 7.5.2: Multiples with Decision Reversibility
We extend the definitions of the values from Section 7.3. The generalizations include adding a placeholder for the deal with an option.
The Certain Equivalent of the Deal Flow with an Option to Reverse a Decision
$V_c^t(Z) \mid X_n$ is the certain multiplier of the future value of the deal flow at time t with remaining capacity c after observing a deal $X_n$, with an option to reverse an allocation to deal Z.
$V_c^t(Z)$ is the certain multiplier of the future value of the deal flow at time t with remaining capacity c before observing a specific deal, with an option to reverse an allocation to deal Z.
Threshold with an Option to Reverse a Decision
$R_c^t(k,d,Z)$ is the threshold with an option to reverse an allocation to deal Z.
$R_c^t(k,d,Z) = \frac{V_c^{t+1}(Z)}{V_{c-k}^{t+1+d}(Z)}$
The Incremental Cost Multiple of Capacity
$iC_c^t(X_n, Z)$ is the incremental cost multiple of capacity.
$iC_c^t(X_n, Z) = \frac{R_c^t(1,1,Z)}{CE(X_n)} \vee 1$
Definition 7.5.2: The Incremental Value of a Deal with an Option
$iMO_c^t(X_n, Z) = \frac{V_c^{t+1}(X_n)}{V_{c-1}^{t+1}(Z)}$
The incremental multiple of a deal with an option to reverse a decision is defined as the indifference buying fraction of the deal with the option and is equal to the contribution of the deal with a free option to the multiple of the deal flow.
We limit our discussion to the option of reversing a single accepting decision at any time. The
decision maker can keep an option on one deal only. So, in effect, the decision maker has
three alternatives at each stage. The first alternative is to wait. The second is to accept
without buying an option and the third is to accept with an option to reverse the decision
later.
In general, we assume that the decision maker has previously bought an option on a deal Z and now he is offered a deal $X_n$ at time t with remaining capacity c. The first alternative is for the decision maker to wait and not change anything. Another alternative is for him to accept $X_n$ without buying an option so he still has the option on Z. The third alternative is for him to accept $X_n$ and buy an option. Here, since we allow one option only, he discards Z and does not allocate more capacity. The following is a graphical description of the deal.
The Optimal Policy with an Option to Reverse an Allocation Decision
Proposition 7.5.2: The Optimal Allocation Policy with an Option
When offered a deal $X_n$ with an option on deal $Z$, the decision maker should accept the deal and buy an option on it if and only if:
$iMO_c^t(X_n, Z) \ge iC_c^t(X_n, Z) \cdot \frac{CM(Z)}{1-f}$
Otherwise, the decision maker should accept if and only if:
$CM(X_n) > R_c^t(1,1,Z)$
So the option is worth buying when the incremental multiple of the deal with the option is higher than the product of its price, the incremental cost of capacity, and the multiple of the deal liquidated.
7.5.3 The Probability of Knowing Detectors
In this section we consider probability-of-knowing detectors. This is an alternative
method of modeling information-gathering activities defined by Howard (1998). We include
this procedure because it allows us a clear way to value and order detectors. Here a detector
is modeled as a probability of obtaining clairvoyance versus no information. In this extension
we do not use the multiple cost structure. We set up the problem, extend our optimal policy
results, and then follow with a discussion of using multiple detectors.
Problem Structure
The setup here differs from the basic one in the structure of information. When the decision
maker seeks information, he either gets clairvoyance, with probability p, or gets no information.
After getting the indication, the decision maker can then decide whether to accept or reject
the deal at hand. The following is a graphical representation of the problem.
Figure 56 - Problem Structure with an Option to Reverse an Allocation Decision
(Branch values at $V_c^t(Z) \mid X_n$: reject, $V_c^{t+1}(Z)$; accept without buying an option, $V_{c-1}^{t+1}(Z) \cdot CM(X_n)$; accept and buy an option, $V_c^{t+1}(X_n) \cdot CM(X_n)/CM(Z) \cdot (1-f)$.)
If the decision maker receives clairvoyance, he gets the incremental multiple of the deal with perfect information (iMPI), defined as
$iMPI_c^t(X_n) = CM(iM_c^t(X_n \mid I))$
Note that if the decision maker does not receive clairvoyance, the deal doesn't change.
The Optimal Policy with a Probability of Knowing Detectors
Proposition 7.5.3a: The Optimal Information Gathering Policy with a Probability of Knowing Detectors
Given a detector defined as above with a probability of knowing p and cost fraction f, the decision maker should buy information if and only if:
$u\left(\frac{iMPI_c^t(X_n)}{iM_c^t(X_n)}\right) - 1 \ge \frac{u\left(\frac{1}{1-f}\right) - 1}{p}$
Proposition 7.5.3b: The Optimal Allocation Policy with a Probability of Knowing Detectors
If clairvoyance is obtained, the decision maker should buy the deal if and only if:
$CM(X_n \mid I_n) \ge R_c^t$
Otherwise, if no clairvoyance is obtained or the decision maker did not buy information, then
the decision maker should buy the deal if and only if:
Figure 57 - Problem Structure with Probability of Knowing Detectors
(Branch values at $V_c^t \mid X_n$: accept, $V_{c-1}^{t+1} \cdot CM(X_n)$; reject, $V_c^{t+1}$; seek information with probability p of clairvoyance: under clairvoyance, accept $V_{c-1}^{t+1} \cdot CM(X_n \mid I_n) \cdot (1-f)$ or reject $V_c^{t+1} \cdot (1-f)$; under no clairvoyance, accept $V_{c-1}^{t+1} \cdot CM(X_n) \cdot (1-f)$ or reject $V_c^{t+1} \cdot (1-f)$.)
$CM(X_n) \ge R_c^t$
Multiple Detectors with a Probability of Knowing Detectors
Now say we have two irrelevant detectors with probabilities of clairvoyance $p_1$ and $p_2$ and cost fractions $f_1$ and $f_2$, respectively.
Corollary 7.5.3: Identifying the Optimal Detector with a Probability of Knowing Detectors
Given the situation above, Detector 1 will be optimal when
$\frac{u\left(\frac{1}{1-f_1}\right) - 1}{p_1} < \frac{u\left(\frac{1}{1-f_2}\right) - 1}{p_2}$
Otherwise, Detector 2 will be optimal. In this setup, this optimality is myopic. So if we have multiple irrelevant detectors we use them in increasing order of $\left[u\left(\frac{1}{1-f}\right) - 1\right]/p$.
As stated earlier, the corollary here differs from its parallel in the additive setting.
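The ordering in Corollary 7.5.3 can be sketched as a sort key. This is an illustrative sketch: u is the decision maker's u-curve (a risk-neutral identity is assumed here as a stand-in), and the detector tuples are hypothetical.

```python
def detector_score(p, f, u=lambda y: y):
    """[u(1/(1-f)) - 1] / p -- the effective price of a probability-of-knowing
    detector; under this criterion a lower score is used first."""
    return (u(1.0 / (1.0 - f)) - 1.0) / p

# hypothetical detectors: (name, probability of knowing p, cost fraction f)
detectors = [("D1", 0.6, 0.05), ("D2", 0.9, 0.10)]
ranked = sorted(detectors, key=lambda nd: detector_score(nd[1], nd[2]))
```

Because the criterion is myopic in this setup, the sorted order is also the order of use.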
Chapter 8 - Step 3: Funnel Application and Examples
βSometimes it's the smallest decisions that can
change your life forever.β
Keri Russell
In the final step of the process, we show how to apply the results of the valuation funnel and discuss some examples in detail.
Figure 58 - Step 3 of the Solution Process
8.1 Overview
The third step of our template is to apply the results of the valuation funnel. By the end of this step, the decision maker may use the results of the funnel to evaluate meta-decisions that relate to the process itself. Additionally, the results provide the optimal policy for real-time decisions.
This chapter is organized into six parts. In Section 2 we discuss how the valuation funnel is applied to meta-decisions and real-time decisions. In Section 3 we outline the example setup. In Section 4 we give real-time application examples, specifically discussing when the decision maker should accept or reject a deal and when to buy an additional alternative. In Section 5 we give examples of meta-decision applications.
8.2 Application of the Funnel
In this chapter we discuss how to apply the funnel. The application process depends on the
framing of the decision. As discussed in Section 5.2, framing is done on two levels: the first
relates to the deal flow and the second to the deals as they arrive. Hence, we apply the funnel on
two different levels; the first is to evaluate meta-decisions, that is, decisions that affect the
deal flow as a whole. The second is to evaluate real-time decisions, that is, decisions that
occur during the process and are particular to specific deals.
Figure 59 - Two Levels of Funnel Application
8.3 Two Types of Decisions: Meta and Real-Time
Meta-decisions include decisions to set the timeline, capacity level, deal sourcing, and
information sources available. In all of these contexts, the decision is not limited to a specific
deal; rather, it relates to the future deal flow as a whole.
The funnel can be used to evaluate meta-decisions through the following steps:
1. Model the meta-alternatives
2. Build the Valuation Funnel for each alternative
3. Choose the one with the highest future deal flow value
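The three steps above can be sketched with a toy risk-neutral funnel value. This is an illustrative sketch: the accept/reject DP stands in for the full funnel, and the deal mixes for the meta-alternatives are hypothetical, not the firm's actual beliefs.

```python
def flow_value(cms, probs, T, c_max):
    """Step 2: funnel value of a deal flow -- a risk-neutral accept/reject DP.
    cms: certain multiples of the possible deals; probs: their probabilities."""
    V = [[1.0] * (c_max + 1) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for c in range(1, c_max + 1):
            V[t][c] = sum(p * max(V[t + 1][c - 1] * cm, V[t + 1][c])
                          for cm, p in zip(cms, probs))
    return V[0][c_max]

# Step 1: model the meta-alternatives (two hypothetical deal mixes)
alternatives = {"A": ([1.8, 0.9], [0.5, 0.5]), "B": ([1.4, 1.1], [0.5, 0.5])}
# Step 3: choose the alternative with the highest future deal flow value
best = max(alternatives, key=lambda a: flow_value(*alternatives[a], T=10, c_max=3))
```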
In Section 8.6 we give three policy evaluation examples. The first is concerned with choosing
the focus of a Venture Capital firm. The second and third examples discuss issues around
hiring a partner for the VC firm.
Real-time decisions include accepting a deal, buying information, buying control, and buying
options. These are decisions that must be applied as the deals arrive.
The results of the funnel provide a guide to evaluating deals. The funnel gives the
incremental values of the alternatives concerning a specific deal. Specifically, the
funnel can be used to evaluate real-time decisions through the following steps:
1. Update the generic diagrams to incorporate beliefs about the specific deal
2. Refer to the funnel results to find the incremental value of the deal with the different
alternatives
3. Choose the alternative with the highest incremental value
For example, consider a deal offered at (c,t) for which the decision maker has three
alternatives, namely, accept, reject, and buy information. The decision maker first updates
the generic diagram with her beliefs. She then assesses the detectorβs accuracy and cost.
Finally, she refers to the funnel results to find the incremental value of the deal and of the
detector. Now she chooses the alternative with the highest incremental value.
8.4 Setup Examples
In this chapter we consider the following example of Saad, the owner of DA Ventures, a venture capital firm. Saad's firm focuses on early-stage technology startups. For this round, they have the following parameters:
- They can invest in 9 companies at most (c = 9)
- They have 50 periods to receive proposals (T = 50)
- They have a 10% chance, at each period, of receiving no proposal
- They follow the delta property and their risk tolerance = $100 million ($\gamma$ = 0.01)
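Since the firm follows the delta property, certain equivalents in the examples can be computed with an exponential u-curve. This is a minimal sketch under that assumption; the helper name and the payoff unit ($M, matching the risk tolerance above) are ours.

```python
import math

def ce_exponential(outcomes, probs, gamma=0.01):
    """Certain equivalent under the delta property (exponential u-curve).
    gamma is the risk-aversion coefficient in 1/$M (risk tolerance 1/gamma = $100M).
    CE = -(1/gamma) * ln( E[ exp(-gamma * x) ] )"""
    expected_u = sum(p * math.exp(-gamma * x) for x, p in zip(outcomes, probs))
    return -math.log(expected_u) / gamma

# a 50/50 shot at $100M is worth less than its $50M mean to a risk-averse firm
ce = ce_exponential([100.0, 0.0], [0.5, 0.5])
```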
The firm classifies the startups using four uncertainties, namely, hiring success, technical
success, customer acquisition, and dilution effect. For simplicity, the first three uncertainties
are modeled as binary uncertainties, where failure in any one implies the failure of the venture.
The first, hiring success, pertains to the startup's success in attaining the skills needed on the
team. The second uncertainty, technical success, models the startup's success in developing
the technology needed for the venture, given that they have the needed skills on their team.
The third uncertainty, customer acquisition, models the success of the startup in acquiring
customers, given that they have the needed skills on their team and that their technology
works. The fourth uncertainty, dilution effect, models the stake the VC firm has in the
startup, given its success. This uncertainty is broken down into three degrees, namely, high,
medium, and low dilution effect.
The firm is considering two categories of technology startups, hardware and software as a
service (SAAS). The following tree represents their beliefs about hardware startups:
Figure 60 - Application Example: Hardware Deals
The firm believes that any hardware startup proposal they receive is equally likely to be one
of twenty-seven different deals. The deals are equally probable combinations of the possible
values of the three uncertainties, p1, p2, and p3, where p1 = [0.2, 0.4, 0.6], p2 = [0.15, 0.3,
0.45], and p3 = [0.75, 0.85, 0.95].
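The 27 hardware combinations can be enumerated directly against the tree in Figure 60. This risk-neutral sketch rolls back the payoffs stated in the figure (in $ millions); the helper name is ours.

```python
from itertools import product

def hardware_deal_value(p1, p2, p3):
    """Expected value ($M) of one hardware deal, rolled back through the
    Figure 60 tree: hiring -> technical -> customer acquisition -> dilution."""
    success = 0.15 * 300 + 0.7 * 200 + 0.15 * 100   # dilution lottery = 200
    return ((1 - p1) * -0.5                          # hiring failure
            + p1 * (1 - p2) * -10                    # technical failure
            + p1 * p2 * (1 - p3) * -15               # customer-acquisition failure
            + p1 * p2 * p3 * success)

# the 27 equally likely hardware deals
deals = [hardware_deal_value(p1, p2, p3)
         for p1, p2, p3 in product([0.2, 0.4, 0.6],
                                   [0.15, 0.3, 0.45],
                                   [0.75, 0.85, 0.95])]
```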
The following tree represents their beliefs about software as a service deals:
Figure 61 - Application Example: Software as Service Deals
(Figure 60 tree values for a hardware early-stage startup, in $ millions: hiring success (p1), then technical success (p2), then customer acquisition (p3), then dilution effect: low (0.15) pays 300, medium (0.7) pays 200, high (0.15) pays 100; customer-acquisition failure pays -15; technical failure pays -10; hiring failure pays -0.5.)
(Figure 61 tree values for a software as a service early-stage startup, in $ millions: hiring success (p1), then technical success (p2), then customer acquisition (p3), then dilution effect: low (0.15) pays 500, medium (0.7) pays 300, high (0.15) pays 100; customer-acquisition failure pays -20; technical failure pays -5; hiring failure pays -0.1.)
The firm believes that any software as a service startup proposal they receive is equally likely
to be one of twenty-seven different deals. The deals are equally probable combinations of
the possible values of the three uncertainties, p1, p2, and p3, where p1 = [0.4, 0.6, 0.8], p2 =
[0.5, 0.65, 0.8], and p3 = [0, 0.2, 0.4].
With the above structure, we apply the funnel to real-time decisions and meta-decisions. For
real-time decisions we consider:
1. Buying Information
2. Accepting Deals
For meta-decisions we consider the following three situations:
1. Situation 1: Choosing the Focus of the Firm
2. Situation 2: Hiring Partners: Evaluating Their Skills
3. Situation 3: Hiring Partners: Evaluating Synergies in Their Skills
8.5 Real-Time Application Examples
8.5.1 Real-Time Setup Example
Here we consider a situation in which Saad has 4 resource units left and is at t=40 (10 steps
away from the deadline). Here we would like to see which deals Saad should buy information
on and when he would accept the deals. In this example we assume information is offered at
a cost of m=8.
As stated before, we have to look for the incremental values of the deals with the different
alternatives and then choose the alternative with the highest incremental value. The situation is described in Figure 62.
Figure 62 - Example 8.4: Real-Time Decisions Structure
8.5.2 Should Saad Buy Information?
To answer this question we refer to Proposition 6.3.1a, which states that the decision maker should buy information if and only if:
$iVI_c^t(X_n) \ge iV_c^t(X_n) + m$
This is equivalent to buying information if and only if:
$iVOI_c^t(X_n) \ge m$
Thus, we find the incremental value of information for the different deals and compare them
to the cost of information. Saad buys information when the incremental value of information
is higher than its cost. The example is illustrated in Figure 63.
Figure 63 - Example 8.4: Should Saad Buy Information?
Figure 63 shows that Saad should not pay 8 monetary units for information on any deal at
time t=40. This analysis allows Saad to negotiate with the information provider. Saad may
choose to inform her that if she provides information at m=6 instead of 8 then he will be 50%
likely to buy it from her. At m=8, however, he will never buy information, regardless of the
deal at hand.
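The comparison in this subsection can be sketched generically for clairvoyance in the additive (risk-neutral) setting. The deal numbers and the threshold R below are hypothetical stand-ins, not Saad's actual figures.

```python
def value_of_clairvoyance(outcomes, probs, R):
    """Clairvoyance lets the decision maker accept only outcomes beating the
    threshold R (the value of the capacity unit's alternative use)."""
    ev = sum(p * x for x, p in zip(outcomes, probs))
    without = max(ev, R)                                  # commit up front
    with_info = sum(p * max(x, R) for x, p in zip(outcomes, probs))
    return with_info - without

# hypothetical deal: 30% -> $40M, 70% -> -$5M; threshold R = 6, info price m = 8
voi = value_of_clairvoyance([40.0, -5.0], [0.3, 0.7], R=6.0)
buy_information = voi >= 8.0    # False here: like Saad, don't pay m = 8
```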
8.5.3 Should Saad Invest in the Deals?
To answer this question we refer to Proposition 6.3.1b, which states that the decision maker should accept the deal with information if and only if $CE(X_n \mid I_n) \ge R_c^t$ and should accept it without information if and only if $CE(X_n) \ge R_c^t$. This is equivalent to stating that $iVI_c^t(X_n) > 0$ or $iV_c^t(X_n) > 0$.
Since we found above that Saad will not be buying information for any of the deals, we limit
our consideration to deals without information. We refer to the funnel results and find the
incremental values of the deals without information. If the incremental value is positive, then
we invest in the deal; otherwise, we reject it. The example is illustrated in Figure 64.
Figure 64 - Example 8.4: Should Saad Invest in the Deals?
The graph above shows that Saad should invest if the deal is deal 4, deal 5, or deal 6.
Otherwise, Saad should reject the deal.
8.6 Examples of Meta-Decision Applications
8.6.1 Situation 1: Choosing the Focus of the Firm
Saad is deciding where to focus his firm. He can focus on hardware startups, software
startups, or a combination of both. We study this in the case of the basic deal flow and of the
deal flow with information.
The Basic Case
Figure 65 shows the difference between the value of the process focusing solely on hardware
startups and that focusing on software startups.
Figure 65 - Example 8.5: Should Saad Focus on Hardware or Software?
From Figure 65 we can see that, regardless of the current stage of the process, the firm will
always prefer to focus on hardware startups over SAAS startups.
Figure 66 shows the possible combinations of the two categories. It is clearly shown that the
inclusion of SAAS hurts the value of the process.
Figure 66 - Example 8.5: What Combination of HW and SW Should Saad Focus on?
Information Effect
Next, we study the effect of available information. The following graph shows the difference
between the two deal flows when we have free clairvoyance.
Figure 67 - Example 8.5: With Information, Should Saad Focus on Hardware or Software?
It is worth noting how the optimal focus shifts for capacity levels of 4 and 1 units. For the first
proposals, it is better to focus on SAAS. This optimality shifts later back to hardware startups.
To understand this shift, note that while SAAS is less optimal, it has better prospects. With
information, the firm can identify the better deals. Since early in the process the firm can
afford to wait for better deals to arrive, it is worth using SAAS and waiting for the better deals when few resource units remain.
Figure 68 shows the possible combinations of the two categories.
Figure 68 - Example 8.5: With Information, What Combination of HW and SW Should Saad Focus on?
In the beginning of the process the firm benefits from having a pure SAAS focus; towards the
end the optimality shifts to a pure hardware focus. Figure 68 shows how the optimality shifts
in the middle of the time horizon.
8.6.2 Situation 2: Hiring Partners: Evaluating Synergies in Their Skills
For the sake of clarity, we reduce the time horizon available here to 20 instead of 50.
Rana, an aspiring engineer, wants to join DA Ventures. Saad believes that Rana can add value
to the firm in two different ways through her network. She can recruit great talent to the
startups and she can attract more proposals. Here we consider the question of evaluating the
synergies in the skills. We model the skills of Rana as:
- Recruiting talent: modeled as control over the hiring uncertainty by multiplying the
odds by 2
- Attracting more proposals: modeled as increasing the time horizon from 20 to 40
Let V1 and V2 be the value added by controlling hiring and increasing the time horizon,
respectively. Let V3 be the value added by the combination of both the control and the
increase in the time horizon. The following graph shows the difference between V3 and
(V1+V2).
Figure 69 - Example 8.5: Are the Partner's Skills Complementary or Synergetic?
Note that the difference is negative for low capacity levels and thus indicates that the skills
can substitute for each other. However, for capacity levels higher than 5, the difference is
positive, indicating synergy between the skills. When we have more resource units at hand,
increasing the time horizon allows us more opportunity to apply the control. With few
resource units, the time horizon is already long enough to maximize the benefit of the
control available.
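The V3 - (V1 + V2) comparison can be sketched with a toy deal-flow value. Everything below is an illustrative assumption, not the firm's actual model: the deal mix (a 30% chance of a hiring-dependent deal, else a mediocre one), the parameter values, and the function names.

```python
def double_odds(p):
    """'Multiplying the odds by 2': p/(1-p) -> 2p/(1-p), i.e. p -> 2p/(1+p)."""
    return 2 * p / (1 + p)

def toy_flow_value(p_hire, T, c_max):
    """Toy risk-neutral DP: each period, a 30% chance of a deal whose multiple
    rises with the hiring success probability, else a mediocre deal (0.9)."""
    cms, probs = [1.0 + p_hire, 0.9], [0.3, 0.7]
    V = [[1.0] * (c_max + 1) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for c in range(1, c_max + 1):
            V[t][c] = sum(p * max(V[t + 1][c - 1] * cm, V[t + 1][c])
                          for cm, p in zip(cms, probs))
    return V[0][c_max]

p, T, c = 0.4, 20, 6
base = toy_flow_value(p, T, c)
v1 = toy_flow_value(double_odds(p), T, c) - base      # control over hiring alone
v2 = toy_flow_value(p, 2 * T, c) - base               # longer horizon alone
v3 = toy_flow_value(double_odds(p), 2 * T, c) - base  # both together
synergy = v3 - (v1 + v2)                              # positive means synergetic
```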
8.6.3 Situation 3: Hiring Partners: Evaluating Their Skills
Here we consider Maha, a professor in software engineering. Maha offered to join DA Ventures to start a focus on SAAS. Maha requested that the organization put one third of the deal sourcing activities into SAAS. DA Ventures now needs to evaluate how much the
addition of Maha will add to the firm. The decision makers believe that she will be able to
increase the number of proposals they receive by 20% (so T = 60), will increase their capacity
by one unit (so c = 10), and will be able to gather information about the technical and
customer acquisition uncertainties within the SAAS proposals. The firm, however, is not sure
about the accuracy of Maha's information and hence not sure how to value Maha's input to the
firm. To solve this, we model Maha's information-gathering activities as detectors with
different levels of accuracy and then find the minimum accuracy needed for Maha to be
useful.
The following graph shows the value of the deal flow without Maha (black lines) and with
Maha at different levels of accuracy (acc = 0.55, 0.6, 0.65, 0.7).
Figure 70 - Example 8.5: How Good Must the Partner's Information Be to Justify Hiring Her?
In the graph above we see that the firm has to believe that Maha has an accuracy of at least 0.7 for hiring her to be worthwhile. Recall that SAAS is less optimal than hardware, so including SAAS reduces the value of the deal flow to the firm. Hence, Maha has to provide information on SAAS accurate enough to offset the decrease in value.
Chapter 9 - Conclusions and Future Research
βLife isn't about finding yourself. Life is about
creating yourself.β
βWhen I was young I observed that nine out of
every ten things I did were failures, so I did ten
times more work.β
George Bernard Shaw
9.1 Conclusions
Our dissertation is motivated by the problem of the Venture Capitalist. We abstracted the
problem in the form of the Fleeting Opportunities structure. In this formulation, the decision
maker has flows of opportunities (deals) that arise over time. He may only accept a limited
number of deals and has to immediately decide how to react to each deal as it arrives. We
limited our discussion to situations where the distribution over deals does not change over
time and where the deals are considered irrelevant. Additionally, we limited the decision maker's risk attitude to exhibit either constant absolute risk aversion or constant relative risk aversion.
In the fleeting opportunities setup, we generalized the current dynamic programming
structures to include risk aversion and information gathering among other extensions. In this
way, our structure will better capture the complexities of the underlying situation. We
introduced new definitions and principles for the application of the power u-curve.
Additionally, we characterized the value of information and the optimal policy, thus allowing
the decision maker to develop an intuitive understanding of the deal flow.
We employed Decision Analysis methodologies in solving this problem. The
result is a valuation template for the fleeting opportunities problem. The decision maker may
follow the template to evaluate the alternatives presented by the deal flow and the
alternatives presented by the specific deals as they arrive. We call for the development of valuation templates in other fields and, more generally, of other DA methodologies in an effort to standardize Decision Analysis.
9.2 Future Research
In the future we intend to extend this work to improve the application of the template to the
fleeting opportunities problem. We also intend to define valuation templates in different
application areas. We identify seven dimensions of future research.
9.2.1 Power U-Curve
We are working on extending the work on the power u-curve to include risk sharing, risk
scaling, and hedging. Our study of the risk sharing and scaling properties of the power u-
curve will follow those of the exponential u-curve studied by Howard (1998). In hedging, we
are working on extending the work by Seyller (2008) to the multiplicative setting.
9.2.2 Random Capacity Requirements
We would like to extend the work of Kleywegt & Papastavrou (2001) on requests with
random capacity sizes and include risk aversion, information, control, and options. This will
improve the applicability of the process in two ways. First, it will allow us to model requests
with different requirements that are not always known ahead of time. Second, it will allow us to
model the capacity cost in a more consistent way. For example, we will not have to charge
the same capacity requirement for information as for a request.
9.2.3 Relevance
Our model now assumes that requests are irrelevant given their arrival time. We aim to
change the underlying process to follow the Markov property. This, again, will improve the
extent to which the process models the actual requests.
Another dimension is the relevance of requests within or across different input types.
That is, a more comprehensive model would allow the value of a specific deal type to be
affected by the number of the other types at hand. This extension will allow us to optimize
and diversify the portfolio.
9.2.4 Options and Liquidity
We have described an option that allows us to reverse an investment decision. We would like
to also study the option to reverse the rejection decision, in other words, to put items on
hold.
We would like to also extend the investment reversal option to allow us to include liquidity
considerations. By modeling the evolution of deals as time passes we can include liquidity as
a future decision.
9.2.5 Learning
While we model information-gathering activities, we do not model how decision makers improve their ability to gather information. By modeling how the uncertainties in the requests resolve, we can allow the deal flow to update the accuracy of the detectors and thus model learning.
9.2.6 Decision Analysis Methodologies
We would like to follow the method of this dissertation and design other DA methodologies, specifically templates for valuation, for areas other than the ones discussed here. Of special interest to us are decisions relating to renewable energy projects and to Islamic financing products. We believe these two areas are understudied; thus, introducing a valuation template is likely to have an impact on their current practices.
"Be happy for this moment. This moment is your life."
Omar Khayyam
Bibliography
Abbas, A. E. (2003). Entropy Methods in Decision Analysis. Dissertation submitted to the department of Management Science & Engineering of Stanford University.
Abbas, A. E. (2007). Invariant Utility Functions and Certain Equivalent Transformations.
Decision Analysis , 4 (1), 17-31.
Arrow, K. J. (1971). Essays in the Theory of Risk Bearing. Amsterdam, Holland: Markham
Publishing Co.
Barron, A. R., & Cover, T. M. (1988). A bound on the financial value of information. IEEE
Transactions on Information Theory , 34 (5).
Bayes, T. (1763). An Essay Towards Solving a Problem in the Doctrine of Chances. Biometrika
, 45, 293-315.
Bernoulli, D. (1954). Exposition of a New Theory on the Measurement of Risk. Econometrica , 22 (1), 23-36.
Bickel, E. (2008). The relationship between perfect and imperfect information in a two-action
risk-sensitive problem. Decision Analysis , 5 (3), 116-128.
Boulis, A., & Srivastava, M. (2004). Node-Level Energy Management for Sensor Networks in
the Presence of Multiple Applications. Wireless Networks , 10 (6), 737-746.
Copeland, T. E., Koller, T., & Murrin, J. (2005). Valuation : Measuring and Managing the Value
of Companies (4 ed.). Wiley.
Cornell, B. (1993). Corporate Valuation: Tools for Effective Appraisal and Decision-Making.
McGraw-Hill.
Cover, T. M., & Thomas, J. A. (1991). Elements of Information Theory. New York: Wiley.
Damodaran, A. (2002). Investment Valuation: Tools and Techniques for Determining the
Value of Any Asset. New York: John Wiley and Sons.
Damodaran, A. (n.d.). Probabilistic Approaches: Scenario Analysis, Decision Trees and
Simulations. Unpublished .
Damodaran, A. (2001). The Dark Side of Valuation. FT Press.
Delquié, P. (2008). The Value of Information and Intensity of Preference. Decision Analysis , 5
(3), 129-139.
Derman, C., Lieberman, G. J., & Ross, S. M. (1972). A Sequential Stochastic Assignment
Problem. Management Science (18), 349-355.
Feller, J. (2002). Pricing of Multidimensional Resources in Revenue Management
(Multidimensional Dynamic Stochastic Knapsack Problem). Operations Research Proceedings,
(pp. 407-413). Klagenfurt.
Fried, V. H., & Hisrich, R. D. (1988). Venture capital research: Past, present and future.
Entrepreneurship Theory and Practice , 13, 15-28.
Gilbert, J., & Mosteller, F. (1966). Recognizing the maximum of a sequence. Journal of the American Statistical Association (61), 35-73.
Gould, J. P. (1974). Risk, stochastic preference, and the value of information. Journal of Economic Theory (8), 64-84.
Hilton, R. W. (1981). Determinants of information value: Synthesizing some general results.
Management Science (27), 57-64.
Howard, R. A. (1965). Bayesian Decision Models for Systems Engineering. IEEE Transactions
on Systems Science and Cybernetics , 1 (1), 36-40.
Howard, R. A. (1966a). Decision Analysis: Applied Decision Theory. In D. B. Hertz, & J. Melese
(Ed.), Proceedings of the Fourth International Conference on Operational Research (pp. 55-
71). New York: Wiley-Interscience.
Howard, R. A. (1970). Decision Analysis: Perspectives on Inference, Decision and
Experimentation. In R. A. Howard, & J. E. Matheson (Eds.), Readings on the Principles and
Applications of Decision Analysis (Vol. 2, pp. 821-834). Menlo Park, CA: Strategic Decisions
Group.
Howard, R. A. (1988). Decision Analysis: Practice and Promise. Management Science (38),
679-695.
Howard, R. A. (1960). Dynamic Programming and Markov Processes. Cambridge, MA: The
MIT Press.
Howard, R. A. (1992). In Praise of the Old Time Religion. Utility Theories: Measurements and
Applications , 27-55.
Howard, R. A. (1966b). Information Value Theory. IEEE Transactions on Systems Science and
Cybernetics , 2 (1), 22-26.
Howard, R. A. (1989). Knowledge Maps. Management Science , 35 (8), 903-922.
Howard, R. A. (1980). On Making Life and Death Decisions. In R. A. Howard, & J. E. Matheson
(Eds.), Readings on the Principles and Applications of Decision Analysis (Vol. 2, pp. 483-506).
Menlo Park, CA: Strategic Decisions Group.
Howard, R. A. (1995). Options. Wise Choices: Games, Decisions, and Negotiations. A symposium in honor of Howard Raiffa. Harvard Business School.
Howard, R. A. (2004). Speaking of decisions: Precise decision language. Decision Analysis , 1
(2), 71-78.
Howard, R. A. (1968). The foundations of decision analysis. IEEE Transactions on Systems
Science and Cybernetics , 4 (3), 211-219.
Howard, R. A. (2007). The Foundations of Decisions Analysis Revisited. In W. Edwards (Ed.),
Advances in Decision Analysis from Foundations to Applications. Cambridge, UK: Cambridge
University Press.
Howard, R. A. (1998). The Fundamentals of Decision Analysis, manuscript in progress.
Stanford, CA: Unpublished.
Howard, R. A. (1967). Value of information lotteries. IEEE Transactions on Systems Science
and Cybernetics , 3 (1), 54-60.
Howard, R. A., & Abbas, A. E. (2011E). Foundations of Decision Analysis. Prentice Hall.
Howard, R. A., & Matheson, J. E. (1981). Influence Diagrams. In R. A. Howard, & J. E.
Matheson (Eds.), Readings on the Principles and Applications of Decision Analysis (Vol. 2, pp.
719-762). Menlo Park, CA: Strategic Decisions Group.
Howard, R. A., & Matheson, J. (1972). Risk-sensitive Markov Decision Processes.
Management Science (18), 356-369.
Jaynes, E. T. (2003). Probability Theory: the Logic of Science. England: Cambridge University
Press.
Jennergren, L. P. (2002). A Tutorial on the McKinsey Model for Valuation of Companies.
SSE/EFI Working Paper Series in Business Administration No. 1998:1 .
Keeney, R. L. (1982). Decision Analysis: an Overview. Operations Research , 30, 803-838.
Keeney, R. L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value
Trade-Offs. New York: Wiley.
Kellerer, H., Pferschy, U., & Pisinger, D. (2004). Knapsack Problems. Berlin: Springer.
Kelly, J. L. (1956). A new interpretation of information rate. Bell System Technical Journal , 35,
917-926.
Kemmerer, B., Mishra, S., & Shenoy, P. P. (Unpublished). Bayesian Causal Maps as Decision
Aids in Venture Capital Decision Making: Methods and Applications. Unpublished.
Kleinberg, R. D. (2005). A Multiple-Choice Secretary Algorithm with Applications to Online
Auctions. Proceedings of the Symposium on Discrete Algorithms (16), 630-631.
Kleywegt, A. J. (1996). Dynamic and Stochastic Models with Freight Distribution Applications.
Ph.D. Dissertation. Boston, MA: Massachusetts Institute of Technology.
Kleywegt, A. J., & Papastavrou, J. D. (1998). The Dynamic and Stochastic Knapsack Problem. Operations Research (46), 17-35.
Kleywegt, A. J., & Papastavrou, J. D. (2001). The Dynamic and Stochastic Knapsack Problem
with Random Sized Items. Operations Research , 49 (1), 26-41.
Laplace, P. S. (1902). A Philosophical Essay on Probability. New York: John Wiley.
MacMillan, I. C., Zemann, L., & Subbanarasimha, P. N. (1987). Criteria distinguishing
successful from unsuccessful ventures in the venture screening process. Journal of Business
Venturing , 2, 123-137.
MacMillan, I., Siegel, R., & Narasimha, P. N. (1985). Criteria Used By Venture Capitalists To
Evaluate New Venture Proposals. Journal of Business Venturing , 1 (1), 119-128.
MacQueen, J., & Miller, R. J. (1960). Optimal Persistence Policies. Operations Research (8),
362-380.
Matheson, D. (1983). Managing the Corporate Business Portfolio. In R. A. Howard, & J. E.
Matheson (Eds.), Readings on the Principles and Applications of Decision Analysis, (Vol. 1, pp.
719-762). Menlo Park, CA: Strategic Decisions Group.
Matheson, D., & Matheson, J. (2005). Describing and Valuing Interventions That Observe or
Control Decision Situations. Decision Analysis , 2 (3), 165-181.
Matheson, D., & Matheson, J. E. (1998). The Smart Organization: Creating Value Through
Strategic R&D. Massachusetts: Harvard Business School Press.
Matheson, J. E., & Howard, R. A. (1968). An Introduction to Decision Analysis. In R. A.
Howard, & J. E. Matheson (Eds.), Readings on the Principles and Applications of Decision
Analysis (Vol. 1, pp. 17-55). Menlo Park, CA: Strategic Decisions Group.
McNamee, P., & Celona, J. (1990). Decision Analysis for the Professional. Redwood City, CA:
Scientific Press.
Papastavrou, J. D., Rajagopalan, S. H., & Kleywegt, A. J. (1996). The Dynamic and Stochastic
Knapsack Problem with Deadlines. Management Science (42), 1706-1718.
Pratt, J. W. (1964). Risk Aversion in the Small and in the Large. Econometrica , 32 (2), 122-
136.
Pratt, J. W., Raiffa, H., & Schlaifer, R. (1964). The Foundations of Decision under Uncertainty:
an Elementary Exposition. Journal of the American Statistical Association , 59, 353-375.
Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choice under Uncertainty.
Reading, Massachusetts: Addison-Wesley.
Richman, J. (2009). An Analysis of Decision-Making in Venture Capital. Stanford, CA:
Unpublished undergraduate honors thesis.
Ross, K. W., & Tsang, D. (1989). The Stochastic Knapsack Problem. IEEE Transactions on
Communications (34), 47-53.
Sahlman, W. A. (1986, May-June). Aspects of Financial Contracting in Venture Capital.
Harvard Business Review .
Seyller, T. C. (2008). The Value of Hedging. Stanford: Dissertation submitted to the department of Management Science & Engineering of Stanford University.
Shachter, R. D. (1986). Evaluating Influence Diagrams. Operations Research , 34 (Nov.-Dec.
1986), 871-882.
Shachter, R. D. (1988). Probabilistic Inference and Influence Diagrams. Operations Research ,
36 (July-Aug. 1988), 589-605.
Shepherd, D. A., & Zacharakis, A. L. (1999). Conjoint Analysis: a New Methodological
Approach for Researching the Decision Policies of Venture Capitalists. Venture Capital - An
International Journal of Entrepreneurial Finance , 1 (3), 197 - 217.
Shepherd, D. A., & Zacharakis, A. L. (2002). Venture Capitalists' Expertise: A Call for Research
into Decision Aids and Cognitive Feedback. Journal of Business Venturing , 17 (1), 1-20.
Spetzler, C. S., & Staël von Holstein, C.-A. S. (1975). Probability Encoding in Decision Analysis.
Management Science , 22 (3), 340-358.
Thorp, E. O. (1997). The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market. The
10th International Conference on Gambling and Risk Taking.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases.
Science , 185 (4157), 1124-1131.
Tyebjee, T., & Bruno, A. (1984). A Model of Venture Capitalist Investment Activity.
Management Science , 30 (9), 1051-1066.
Van Slyke, R., & Young, Y. (2000). Finite Horizon Stochastic Knapsacks with Applications to
Yield Management. Operations Research , 48 (1 ), 155-172.
Veinott, A. F. (2004). Lecture Notes in Dynamic Programming. Stanford, CA: Stanford
University Bookstore.
Von Neumann, J., & Morgenstern, O. (1947). Theory of Games and Economic Behavior.
Princeton, New Jersey: Princeton University Press.
Wells, W. (1973). Venture Capital Decision-making. Thesis. Carnegie-Mellon University.
Zacharakis, A. L., & Meyer, G. D. (1998). A Lack of Insight: Do Venture Capitalists Really
Understand their own Decision Process? Journal of Business Venturing , 13 (1), 57-76.
Zacharakis, A. L., & Meyer, G. D. (2000). The Potential of Actuarial Decision Models: Can They Improve the Venture Capital Investment Decision? Journal of Business Venturing , 15 (1), 323-346.
Zacharakis, A. L., & Shepherd, D. A. (2001). The Nature of Information and Overconfidence on
Venture Capitalists' Decision Making. Journal of Business Venturing , 16 (4), 311-332.
Appendix A1 – Proofs

A1.1 Chapter 3 Proofs
Proposition 3.2.1
Applying the buying price to the prospects changes the equivalent return of the deal to:
$$w_0 \cdot C_m \cdot (1 - F_b)$$
For the decision maker to be indifferent between the deal and his/her initial wealth, the buying price must satisfy:
$$w_0 \cdot C_m \cdot (1 - F_b) = w_0$$
$$(1 - F_b) = \frac{1}{C_m}$$
$$\Rightarrow F_b = \frac{C_m - 1}{C_m}$$
Proposition 3.2.2
Along similar lines to the discussion above, the selling price must satisfy:
$$w_0 \cdot C_m = w_0 \cdot (1 + F_s)$$
$$C_m = 1 + F_s$$
$$\Rightarrow F_s = C_m - 1$$
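These two prices can be checked numerically. The sketch below assumes the power u-curve reading used throughout this appendix, with u-multiplier $U_m = \sum_i p_i (1+r_i)^\gamma$ and certain-equivalent multiplier $C_m = U_m^{1/\gamma}$; the deal and $\gamma$ are hypothetical illustration values, not from the dissertation.

```python
# Numeric check of Propositions 3.2.1-3.2.2 (hypothetical deal, gamma = 0.5).
gamma = 0.5
deal = [(0.6, 0.30), (0.4, -0.10)]           # (probability, return) pairs

U_m = sum(p * (1 + r) ** gamma for p, r in deal)   # u-multiplier
C_m = U_m ** (1 / gamma)                           # certain-equivalent multiplier

F_b = (C_m - 1) / C_m    # buying price fraction (Proposition 3.2.1)
F_s = C_m - 1            # selling price fraction (Proposition 3.2.2)

# Indifference conditions behind the two propositions:
w0 = 1.0
assert abs(w0 * C_m * (1 - F_b) - w0) < 1e-12      # buy:  w0*C_m*(1-F_b) = w0
assert abs(w0 * C_m - w0 * (1 + F_s)) < 1e-12      # sell: w0*C_m = w0*(1+F_s)
```

For an attractive deal ($C_m > 1$) the buying fraction is always smaller than the selling fraction, since $F_b = F_s / C_m$.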
Proposition 3.2.3
Let $(p_i, r_i)$ with an investment fraction of $f_1$ be the current deal with CEM $= C_{m1}$. Let the new deal with CEM $= C_{m2}$ that the decision maker is offered be represented by $(q_j, s_j)$ with an investment fraction $f_2$. Let $U_{m1}$ and $U_{m2}$ be the u-multipliers of the two deals, respectively. The prospects the DM faces with the two deals are represented as follows:
$$(1 + r_i f_1)(1 + s_j f_2)\, w_0$$
The u-multiplier of the combined deal can be found as follows:
$$U_m(\text{combined deal}) = \sum_{i=1}^{n}\sum_{j=1}^{m} p_i (1 + r_i f_1)^{\gamma}\, q_j (1 + s_j f_2)^{\gamma}$$
$$U_m(\text{combined deal}) = \Big\{\sum_{i=1}^{n} p_i (1 + r_i f_1)^{\gamma}\Big\}\Big\{\sum_{j=1}^{m} q_j (1 + s_j f_2)^{\gamma}\Big\}$$
$$U_m(\text{combined deal}) = U_{m1} \cdot U_{m2}$$
$$C_m(\text{combined deal}) = \sqrt[\gamma]{U_m(\text{combined deal})} = \sqrt[\gamma]{U_{m1} \cdot U_{m2}} = C_{m1} \cdot C_{m2}$$
For the decision maker to be indifferent between buying the new deal and his/her current prospects, the buying price must satisfy:
$$w_0 \cdot C_{m1} = w_0 \cdot C_{m1} \cdot C_{m2} \cdot (1 - F_b)$$
$$1 = C_{m2}(1 - F_b)$$
$$\Rightarrow F_b = \frac{C_{m2} - 1}{C_{m2}}$$
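The factorization step can be checked numerically. A sketch under the same power u-curve reading (hypothetical deals; the investment fractions are folded into the returns):

```python
# Numeric check of Proposition 3.2.3: with the power u-curve, u-multipliers of
# independent deals factor, so certain-equivalent multipliers (CEMs) multiply.
gamma = 0.5
deal1 = [(0.5, 0.40), (0.5, -0.10)]   # (p_i, r_i * f1)
deal2 = [(0.7, 0.20), (0.3, -0.05)]   # (q_j, s_j * f2)

def um(deal):
    return sum(p * (1 + r) ** gamma for p, r in deal)

U_m1, U_m2 = um(deal1), um(deal2)
# Combined prospects: (1 + r_i)(1 + s_j) with probabilities p_i * q_j.
U_comb = sum(p * q * ((1 + r) * (1 + s)) ** gamma
             for p, r in deal1 for q, s in deal2)

assert abs(U_comb - U_m1 * U_m2) < 1e-12
C_m1, C_m2 = U_m1 ** (1 / gamma), U_m2 ** (1 / gamma)
assert abs(U_comb ** (1 / gamma) - C_m1 * C_m2) < 1e-12
```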
Proposition 3.2.4
Before obtaining information, the decision maker has the following certain equivalent:
$$w_0 \cdot C_{m1}$$
and after obtaining information and paying a fraction $F_b$:
$$w_0 \cdot C_{m2} \cdot (1 - F_b)$$
For the decision maker to be indifferent between both states, the buying price must satisfy:
$$w_0 \cdot C_{m1} = w_0 \cdot C_{m2} \cdot (1 - F_b)$$
$$C_{m1} = C_{m2}(1 - F_b)$$
$$\Rightarrow F_b = 1 - \frac{C_{m1}}{C_{m2}}$$
And in terms of $U_m$ we have:
$$1 - F_b = \frac{C_{m1}}{C_{m2}}$$
$$(1 - F_b)^{\gamma} = \frac{U_{m1}}{U_{m2}}$$
$$\Rightarrow F_b = 1 - \sqrt[\gamma]{\frac{U_{m1}}{U_{m2}}}$$
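The equivalence of the two forms of the information price can be checked numerically; the multipliers below are hypothetical.

```python
# Numeric check of Proposition 3.2.4: fraction of wealth worth paying to move
# from certain-equivalent multiplier C_m1 to C_m2 (hypothetical values).
gamma = 0.5
C_m1, C_m2 = 1.05, 1.20

F_b = 1 - C_m1 / C_m2                       # in terms of CEMs
U_m1, U_m2 = C_m1 ** gamma, C_m2 ** gamma   # u-multipliers: U_m = C_m ** gamma
F_b_um = 1 - (U_m1 / U_m2) ** (1 / gamma)   # same fraction in terms of U_m

assert abs(F_b - F_b_um) < 1e-12
assert abs(C_m1 - C_m2 * (1 - F_b)) < 1e-12   # indifference condition
```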
Proposition 3.2.5
The value of control can be directly related to the value of information above. We can consider the case when the wizard (perfect control) changes the deal from $C_{m1}$ to $C_{m2}$. The proof then follows along the same lines as above.
Proposition 3.2.6
Consider a deal $x \sim (p_i, r_i)$. From the Pratt approximation, we have:
$$CE(x) \approx E(x) - \tfrac{1}{2}\,\gamma(E(x)) \cdot VAR(x)$$
$$C_m \approx \frac{E(x)}{w_0} - \frac{1}{2}\,\gamma(E(x)) \cdot \frac{VAR(x)}{w_0} \quad (1)$$
Note that
$$\frac{E(x)}{w_0} = \frac{E\big((1 + r_x)\, w_0\big)}{w_0} = E(1 + r_x) \quad (2)$$
and
$$\gamma(E(x)) = \frac{(1 - \gamma)}{w_0 (1 + \langle r_x \rangle)} \quad (3)$$
Finally,
$$VAR(x) = E(x^2) - E(x)^2$$
$$VAR(x) = E\big((1 + r_x)^2 w_0^2\big) - E\big((1 + r_x)\, w_0\big)^2$$
$$VAR(x) = E(1 + 2 r_x + r_x^2)\, w_0^2 - E(1 + r_x)^2\, w_0^2$$
$$VAR(x) = \big(E(r_x^2) - E(r_x)^2\big)\, w_0^2 = VAR(r_x)\, w_0^2$$
$$\frac{VAR(x)}{w_0} = VAR(r_x)\, w_0 \quad (4)$$
Now, we substitute (2), (3), and (4) back into (1) and get
$$C_m \approx E(1 + r_x) - \frac{1}{2} \cdot \frac{(1 - \gamma)}{w_0\big(1 + E(r_x)\big)} \cdot VAR(r_x)\, w_0$$
$$C_m \approx E(1 + r_x) - \frac{1}{2} \cdot \frac{(1 - \gamma)}{\big(1 + E(r_x)\big)} \cdot VAR(r_x)$$
Let $R_x = 1 + r_x \Rightarrow E(R_x) = E(1 + r_x)$ and $VAR(R_x) = VAR(r_x)$
$$\Rightarrow C_m \approx E(R_x) - \frac{1}{2}\,(1 - \gamma) \cdot \frac{VAR(R_x)}{E(R_x)}$$
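The quality of this approximation can be checked on a small numeric example (hypothetical deal with small returns, where a second-order approximation should be accurate).

```python
# Check of Proposition 3.2.6: Pratt-style approximation of C_m against the exact
# power u-curve value on a small hypothetical deal.
gamma = 0.5
deal = [(0.5, 0.06), (0.5, -0.04)]

U_m = sum(p * (1 + r) ** gamma for p, r in deal)
C_m_exact = U_m ** (1 / gamma)

ER = sum(p * (1 + r) for p, r in deal)                 # E(R_x)
VR = sum(p * (1 + r) ** 2 for p, r in deal) - ER ** 2  # VAR(R_x)
C_m_approx = ER - 0.5 * (1 - gamma) * VR / ER

assert abs(C_m_exact - C_m_approx) < 1e-4
```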
Proposition 3.2.7
Consider the case of an investor who invests a small fraction $f$ of his wealth in a deal $x \sim (p_i, r_i)$:
$$U_m = \sum_{i=1}^{n} p_i (1 + f r_i)^{\gamma}$$
$$\frac{d U_m}{d f} = \sum_{i=1}^{n} p_i\, \gamma\, r_i (1 + f r_i)^{\gamma - 1}$$
$$\frac{d^2 U_m}{d f^2} = \sum_{i=1}^{n} p_i\, \gamma (\gamma - 1)\, r_i^2 (1 + f r_i)^{\gamma - 2}$$
A second-order Taylor series expansion around $f = 0$ gives
$$U_m \approx 1 + f \gamma \sum_{i=1}^{n} p_i r_i + \frac{\gamma (\gamma - 1) f^2}{2} \sum_{i=1}^{n} p_i r_i^2$$
$$U_m \approx 1 + f \gamma\, E(r_x) + \frac{\gamma (\gamma - 1) f^2}{2}\, E(r_x^2)$$

Corollary 3.2.1
The optimal fraction, $f^*$, that maximizes the approximate u-multiplier from Proposition 3.2.7 is calculated as follows:
$$\frac{d U_m}{d f} = 0 \Rightarrow \gamma E(r_x) + \gamma (\gamma - 1)\, f^*\, E(r_x^2) = 0$$
$$\Rightarrow f^* = \frac{E(r_x)}{(1 - \gamma)\, E(r_x^2)}$$

Corollary 3.2.2
With the approximation from Proposition 3.2.7, the maximum fraction, $f_m$, above which the decision maker is better off not investing, is calculated as follows:
$$U_m = 1 + \gamma E(r_x)\, f_m + \frac{\gamma (\gamma - 1)\, E(r_x^2)\, (f_m)^2}{2} = 1$$
$$\Rightarrow \gamma E(r_x)\, f_m + \frac{\gamma (\gamma - 1)\, E(r_x^2)\, (f_m)^2}{2} = 0$$
$$E(r_x)\, f_m + \frac{(\gamma - 1)\, E(r_x^2)\, (f_m)^2}{2} = 0 \quad (\gamma \neq 0)$$
$$f_m \left(E(r_x) + \frac{(\gamma - 1)\, E(r_x^2)\, f_m}{2}\right) = 0$$
$$\Rightarrow f_m = 0 \ \text{ or } \ f_m = \frac{2 E(r_x)}{(1 - \gamma)\, E(r_x^2)}$$
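The two corollaries can be checked directly on the quadratic approximation; the deal below is hypothetical.

```python
# Check of Corollaries 3.2.1-3.2.2 on the quadratic approximation of U_m in f
# (hypothetical deal, gamma = 0.5): f* is the maximizer and f_m = 2 f*.
gamma = 0.5
deal = [(0.5, 0.50), (0.5, -0.40)]

Er = sum(p * r for p, r in deal)        # E(r_x)
Er2 = sum(p * r * r for p, r in deal)   # E(r_x^2)

def U_approx(f):
    return 1 + f * gamma * Er + gamma * (gamma - 1) * f ** 2 / 2 * Er2

f_star = Er / ((1 - gamma) * Er2)       # Corollary 3.2.1
f_m = 2 * Er / ((1 - gamma) * Er2)      # Corollary 3.2.2

assert abs(f_m - 2 * f_star) < 1e-12
assert abs(U_approx(f_m) - 1) < 1e-12          # break-even fraction
eps = 1e-6
assert U_approx(f_star) >= U_approx(f_star - eps)   # f* is the maximum of the
assert U_approx(f_star) >= U_approx(f_star + eps)   # concave quadratic
```

Note the simple relationship $f_m = 2 f^*$: under the quadratic approximation, the break-even fraction is exactly twice the optimal one.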
Proposition 3.2.8
Consider a deal $x \sim (p_i, r_i)$:
$$U_m = \sum_{i=1}^{n} p_i (1 + r_i)^{\gamma}$$
$$\frac{d^{n} U_m}{d \gamma^{n}} = \sum_{i=1}^{n} \big[p_i (1 + r_i)^{\gamma} \big(\ln(1 + r_i)\big)^{n}\big]$$
$$\left.\frac{d^{n} U_m}{d \gamma^{n}}\right|_{\gamma = 0} = \sum_{i=1}^{n} p_i \big(\ln(1 + r_i)\big)^{n} = E\big[\big(\ln(1 + r_x)\big)^{n}\big]$$
A Taylor series expansion around $\gamma = 0$ gives
$$\Rightarrow U_m \approx 1 + \sum_{n=1}^{N} \frac{\gamma^{n}}{n!}\, E\big[\big(\ln(R_x)\big)^{n}\big]$$
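The log-moment expansion can be verified numerically on a hypothetical deal; with a modest truncation order the series should agree with the exact u-multiplier to high precision.

```python
# Check of Proposition 3.2.8: expanding U_m in gamma around 0 with log-moments,
# U_m ~ 1 + sum_{n=1..N} gamma^n / n! * E[(ln(1 + r_x))^n]  (hypothetical deal).
import math

deal = [(0.5, 0.25), (0.5, -0.15)]
gamma = 0.3
N = 8

U_exact = sum(p * (1 + r) ** gamma for p, r in deal)
U_series = 1 + sum(
    gamma ** n / math.factorial(n)
    * sum(p * math.log(1 + r) ** n for p, r in deal)
    for n in range(1, N + 1)
)
assert abs(U_exact - U_series) < 1e-8
```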
A1.2 Chapter 6 Proofs

A1.2.1 Section 6.3 Main Results
For convenience, we reproduce the recursion here ($\kappa$ denotes the cost of information gathering and $\vee$ the maximum operator):
$$V_c^t \mid X_n = \Big\{\big[V_{c-1}^{t+1} + CE(X_n)\big] \vee V_c^{t+1} \vee \big(V_{c-1}^{t+1} - \kappa + CE\big[CE(X_n \mid I_n) \vee R_c^t\big]\big)\Big\} \quad (1)$$

Proposition 6.3.1a: Optimal Information-Gathering Policy
When offered a deal $X_n$, the decision maker should buy information if and only if
$$iVI_c^t(X_n) \geq iV_c^t(X_n) + \kappa$$

Proposition 6.3.1b: Optimal Allocation Policy
After information is received, the decision maker should accept the deal if and only if
$$CE(X_n \mid I_n) \geq R_c^t$$
Otherwise, the deal is worth accepting without information if and only if
$$CE(X_n) \geq R_c^t$$

The recursion (1) corresponds to the following decision tree:
[Decision tree: Accept → $V_{c-1}^{t+1} + CE(X_n)$; Reject → $V_c^{t+1}$; Seek information, observe $I_n$, then Accept → $V_{c-1}^{t+1} + CE(X_n \mid I_n) - \kappa$ or Reject → $V_c^{t+1} - \kappa$.]
Using the definition of the incremental value of the deal with information, the recursion simplifies to:
[Decision tree: Accept → $V_{c-1}^{t+1} + CE(X_n)$; Reject → $V_c^{t+1}$; Seek information → $V_c^{t+1} + iVI_c^t(X_n) - \kappa$.]
We seek information if and only if:
$$V_c^{t+1} + iVI_c^t(X_n) - \kappa \geq \big\{\big[V_{c-1}^{t+1} + CE(X_n)\big] \vee V_c^{t+1}\big\}$$
$$iVI_c^t(X_n) \geq \big\{\big(CE(X_n) - R_c^t\big) \vee 0\big\} + \kappa$$
So,
$$iVI_c^t(X_n) \geq iV_c^t(X_n) + \kappa$$
After information is received, the deal is worth accepting if and only if:
$$V_{c-1}^{t+1} + CE(X_n \mid I_n) - \kappa \geq V_c^{t+1} - \kappa$$
$$CE(X_n \mid I_n) \geq V_c^{t+1} - V_{c-1}^{t+1}$$
$$CE(X_n \mid I_n) \geq R_c^t$$
Otherwise, we prefer to go with the deal without information. We buy the deal if and only if
$$V_{c-1}^{t+1} + CE(X_n) \geq V_c^{t+1}, \quad \text{i.e.,} \quad CE(X_n) \geq R_c^t$$
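The optimal-policy thresholds can be exercised on a small instance. The sketch below is a risk-neutral illustration, not the dissertation's general model: CE is taken as expectation (a special case in which the delta property holds trivially), one binary deal arrives per period with a perfect indicator, and the cost $\kappa$, horizon $T$, and capacity $C$ are hypothetical.

```python
# Minimal risk-neutral sketch of recursion (1), checking the Proposition 6.3.1
# threshold policy against the maximizing branch of the recursion.
from functools import lru_cache

p, v_hi, v_lo = 0.4, 10.0, -4.0   # P(good), value if good, value if bad
kappa, T, C = 0.5, 8, 3           # info cost, horizon, capacity
CE_X = p * v_hi + (1 - p) * v_lo  # CE(Xn)

@lru_cache(maxsize=None)
def V(c, t):
    if t > T or c == 0:
        return 0.0
    R = V(c, t + 1) - V(c - 1, t + 1)                       # R_c^t
    accept = V(c - 1, t + 1) + CE_X
    reject = V(c, t + 1)
    seek = V(c - 1, t + 1) - kappa + p * max(v_hi, R) + (1 - p) * max(v_lo, R)
    return max(accept, reject, seek)

def policy(c, t):
    R = V(c, t + 1) - V(c - 1, t + 1)
    iV = max(CE_X - R, 0.0)                                       # iV_c^t
    iVI = p * max(v_hi - R, 0.0) + (1 - p) * max(v_lo - R, 0.0)   # iVI_c^t
    if iVI >= iV + kappa:
        return "seek"
    return "accept" if CE_X >= R else "reject"

# The threshold policy must pick (one of) the maximizing branches of V:
for t in range(1, T + 1):
    for c in range(1, C + 1):
        R = V(c, t + 1) - V(c - 1, t + 1)
        branch = {"accept": V(c - 1, t + 1) + CE_X,
                  "reject": V(c, t + 1),
                  "seek": V(c - 1, t + 1) - kappa
                          + p * max(v_hi, R) + (1 - p) * max(v_lo, R)}
        assert abs(branch[policy(c, t)] - V(c, t)) < 1e-9
```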
Proposition 6.3.2: Characterizing V
1. $V_c^t$ is non-decreasing in $c$ for all $t$
2. $V_c^t$ is non-increasing in $t$ for all $c$

Proposition 6.3.2 Statement 1
We again write the recursion as:
$$V_c^t \mid X_n = \Big\{\big[V_{c-1}^{t+1} + CE(X_n)\big] \vee V_c^{t+1} \vee \big(V_{c-1}^{t+1} - \kappa + CE\big[CE(X_n \mid I_n) \vee R_c^t\big]\big)\Big\} \quad (1)$$
We prove this by induction. Take $t = T$; (1) becomes
$$V_c^T \mid X_n = \Big\{\big[V_{c-1}^{T+1} + CE(X_n)\big] \vee V_c^{T+1} \vee \big(V_{c-1}^{T+1} - \kappa + CE\big[CE(X_n \mid I_n) \vee R_c^T\big]\big)\Big\}$$
Since $V_c^{T+1} = 0$ for all $c$, this equation reduces, for $c \geq 1$, to
$$V_c^T \mid X_n = \big\{CE(X_n) \vee 0 \vee CE\big[\{CE(X_n \mid I_n) \vee R_c^T\}\big] - \kappa\big\}$$
and, for $c = 0$, to
$$V_0^T \mid X_n = 0$$
Hence $V_c^T$ is non-decreasing in $c$ at $t = T$.
Now, assume this is true for $t = k$, or
$$V_c^k \geq V_{c-1}^k \quad \text{for all } c \geq 1 \quad (2)$$
Now, prove that it is true for $t = k-1$, or
$$V_c^{k-1} \geq V_{c-1}^{k-1} \quad \text{for all } c \geq 1 \quad (3)$$
where:
$$V_c^{k-1} \mid X_n = \Big\{\big[V_{c-1}^k + CE(X_n)\big] \vee V_c^k \vee \big(V_{c-1}^k - \kappa + CE\big[CE(X_n \mid I_n) \vee R_c^{k-1}\big]\big)\Big\}$$
and
$$V_{c-1}^{k-1} \mid X_n = \Big\{\big[V_{c-2}^k + CE(X_n)\big] \vee V_{c-1}^k \vee \big(V_{c-2}^k - \kappa + CE\big[CE(X_n \mid I_n) \vee R_{c-1}^{k-1}\big]\big)\Big\}$$
But since $V_c^k$ is non-decreasing in $c$, (2), we have
$$V_{c-1}^{k-1} \mid X_n \leq \Big\{\big[V_{c-1}^k + CE(X_n)\big] \vee V_c^k \vee \big(V_{c-1}^k - \kappa + CE\big[CE(X_n \mid I_n) \vee R_c^{k-1}\big]\big)\Big\} = V_c^{k-1} \mid X_n$$
Hence,
$$V_{c-1}^{k-1} \mid X_n \leq V_c^{k-1} \mid X_n \quad \text{for all } X_n$$
Meaning (3) is true, namely
$$V_{c-1}^{k-1} \leq V_c^{k-1}$$
And, finally, by induction,
$$V_{c-1}^t \leq V_c^t \quad \text{for all } c \geq 1 \text{ and all } t$$

Proposition 6.3.2 Statement 2
This statement follows directly from the recursion. We have
$$V_c^t \mid X_n = \Big\{\big[V_{c-1}^{t+1} + CE(X_n)\big] \vee V_c^{t+1} \vee \big(V_{c-1}^{t+1} - \kappa + CE\big[\{CE(X_n \mid I_n) \vee R_c^t\}\big]\big)\Big\}$$
Hence
$$V_c^t \mid X_n \geq V_c^{t+1} \quad \text{for all deals } X_n$$
Leading to
$$V_c^t \geq V_c^{t+1} \quad \text{for all } c$$
So, $V_c^t$ is non-increasing in $t$ for all $c$.
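Both statements can be checked numerically on a small risk-neutral instance of recursion (1) (expectation in place of CE, a binary deal with a perfect indicator, hypothetical numbers).

```python
# Numeric check of Proposition 6.3.2: V_c^t is non-decreasing in c and
# non-increasing in t on a risk-neutral instance of recursion (1).
from functools import lru_cache

p, v_hi, v_lo, kappa, T, C = 0.3, 8.0, -5.0, 0.4, 12, 4
CE_X = p * v_hi + (1 - p) * v_lo

@lru_cache(maxsize=None)
def V(c, t):
    if t > T or c == 0:
        return 0.0
    R = V(c, t + 1) - V(c - 1, t + 1)
    return max(V(c - 1, t + 1) + CE_X,
               V(c, t + 1),
               V(c - 1, t + 1) - kappa + p * max(v_hi, R) + (1 - p) * max(v_lo, R))

for t in range(1, T + 2):
    for c in range(1, C + 1):
        assert V(c, t) >= V(c - 1, t)      # statement 1: non-decreasing in c
        assert V(c, t) >= V(c, t + 1)      # statement 2: non-increasing in t
```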
Proposition 6.3.3: Characterizing R
1. $R_c^t$ is non-increasing in $c$ for all $t$
2. $R_c^t$ is non-increasing in $t$ for all $c$

Proposition 6.3.3 Statement 1
We prove this by induction. Take the case when $t = T$; the statement becomes
$$R_c^T \geq R_{c+1}^T$$
This is true since
$$R_c^T = V_c^{T+1} - V_{c-1}^{T+1} = 0 \quad \text{for all } c \geq 1$$
Hence $R_c^T$ is non-increasing in $c$ at $t = T$.
Now, assume this is true for $t = k$, and let us prove it for $t = k-1$. So we know that
$$R_c^k \geq R_{c+1}^k \quad \text{for all } c \geq 1 \quad (1)$$
$$\Leftrightarrow V_c^{k+1} - V_{c-1}^{k+1} \geq V_{c+1}^{k+1} - V_c^{k+1}$$
$$\Leftrightarrow 2 V_c^{k+1} \geq V_{c+1}^{k+1} + V_{c-1}^{k+1} \quad \text{for all } c \text{ at } t = k+1$$
For the statement to be true at $t = k-1$, the following must be true:
$$R_c^{k-1} \geq R_{c+1}^{k-1} \quad \text{for all } c \quad (2)$$
$$\Leftrightarrow 2 V_c^k \geq V_{c+1}^k + V_{c-1}^k \quad \text{for all } c \text{ at } t = k \quad (3)$$
For each $X_n$, we can rewrite $V_c^k$ as:
$$V_c^k \mid X_n = \Big\{\big[CE(X_n) + V_{c-1}^{k+1}\big] \vee V_c^{k+1} \vee CE\big[\big\{\big[CE(X_n \mid I_n) + V_{c-1}^{k+1}\big] \vee V_c^{k+1}\big\} - \kappa\big]\Big\}$$
Now, define $Q_c^k$ as
$$Q_c^k(X_n) = CE\big[\{CE(X_n \mid I_n) \vee R_c^k\} - \kappa\big]$$
So, we have:
$$V_c^k \mid X_n = V_{c-1}^{k+1} + \big\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\big\}$$
Now, rewrite (3) for each $X_n$ as:
$$2\big\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\big\} + 2 V_{c-1}^{k+1} \geq \big\{CE(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\big\} + V_c^{k+1} + \big\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\big\} + V_{c-2}^{k+1} \quad (4)$$
which is equivalent to
$$R_c^k - R_{c-1}^k + \big\{CE(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\big\} - 2\big\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\big\} + \big\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\big\} \leq 0 \quad (5)$$
Note that from (1) we have
$$R_{c-1}^k \geq R_c^k \geq R_{c+1}^k \quad (6)$$
This directly leads to:
$$Q_{c-1}^k(X_n) \geq Q_c^k(X_n) \geq Q_{c+1}^k(X_n) \quad (7)$$
From (6), we know that if
$$CE(X_n) \geq R_c^k$$
then we cannot reject the deal for any higher $c$. The same goes for buying information: if
$$CE(X_n) \geq Q_c^k(X_n)$$
then we cannot prefer information for any higher $c$.
We first prove the following relation:
$$Q_{c-1}^k(X_n) - R_{c-1}^k \leq Q_c^k(X_n) - R_c^k \quad (8)$$
$$LHS = CE\big[\{CE(X_n \mid I_n) \vee R_{c-1}^k\} - \kappa\big] - R_{c-1}^k = CE\big[\big(CE(X_n \mid I_n) - R_{c-1}^k\big)^+\big] - \kappa$$
$$\leq CE\big[\big(CE(X_n \mid I_n) - R_c^k\big)^+\big] - \kappa = CE\big[\{CE(X_n \mid I_n) \vee R_c^k\} - \kappa\big] - R_c^k = RHS$$
Note that (8) means that if rejecting a deal was better than getting information on that deal for a given $c$, then that will be the case for any lower $c$.
In the following, we drop $(X_n)$ from the $Q$ term for clarity.
After we reject cases that contradict (6), (7), or (8), we consider the following 10 cases:

Maximum term at
Case  (c+1, k)  (c, k)   (c-1, k)
  1   R         R        R
  2   Q         R        R
  3   Q         Q        R
  4   Q         Q        Q
  5   CE(Xn)    R        R
  6   CE(Xn)    Q        R
  7   CE(Xn)    Q        Q
  8   CE(Xn)    CE(Xn)   R
  9   CE(Xn)    CE(Xn)   Q
 10   CE(Xn)    CE(Xn)   CE(Xn)

CASE 1:
$$(5) = R_c^k - R_{c-1}^k + R_{c+1}^k - 2 R_c^k + R_{c-1}^k = R_{c+1}^k - R_c^k$$
but $R_{c+1}^k \leq R_c^k$ by the induction assumption, (1), $\Rightarrow (5) \leq 0$
CASE 2:
$$(5) = R_c^k - R_{c-1}^k + Q_{c+1}^k - 2 R_c^k + R_{c-1}^k = Q_{c+1}^k - R_c^k$$
By (7), $(5) \leq Q_c^k - R_c^k$; by the case assumption, $(5) \leq 0$
CASE 3:
$$(5) = R_c^k - R_{c-1}^k + Q_{c+1}^k - 2 Q_c^k + R_{c-1}^k = R_c^k + Q_{c+1}^k - 2 Q_c^k = \{R_c^k - Q_c^k\} + \{Q_{c+1}^k - Q_c^k\}$$
From (7), $Q_{c+1}^k \leq Q_c^k$, and from the case assumption, $R_c^k \leq Q_c^k \Rightarrow (5) \leq 0$
CASE 4:
$$(5) = R_c^k - R_{c-1}^k + Q_{c+1}^k - 2 Q_c^k + Q_{c-1}^k$$
By (8), we have $Q_{c-1}^k - R_{c-1}^k \leq Q_c^k - R_c^k$, so
$$(5) \leq R_c^k + Q_{c+1}^k - 2 Q_c^k + Q_c^k - R_c^k = Q_{c+1}^k - Q_c^k$$
By (7), $Q_{c+1}^k \leq Q_c^k \Rightarrow (5) \leq 0$
CASE 5:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2 R_c^k + R_{c-1}^k = CE(X_n) - R_c^k$$
By the case assumption, $CE(X_n) \leq R_c^k \Rightarrow (5) \leq 0$
CASE 6:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2 Q_c^k + R_{c-1}^k = R_c^k + CE(X_n) - 2 Q_c^k$$
By the case assumption, $CE(X_n), R_c^k \leq Q_c^k \Rightarrow (5) \leq 0$
CASE 7:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2 Q_c^k + Q_{c-1}^k$$
By (8), $Q_{c-1}^k - R_{c-1}^k \leq Q_c^k - R_c^k$, so
$$(5) \leq R_c^k + CE(X_n) - 2 Q_c^k + Q_c^k - R_c^k = CE(X_n) - Q_c^k$$
By the case assumption, $CE(X_n) \leq Q_c^k \Rightarrow (5) \leq 0$
CASE 8:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2\, CE(X_n) + R_{c-1}^k = R_c^k - CE(X_n)$$
By the case assumption, $R_c^k \leq CE(X_n) \Rightarrow (5) \leq 0$
CASE 9:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2\, CE(X_n) + Q_{c-1}^k = R_c^k - CE(X_n) + Q_{c-1}^k - R_{c-1}^k$$
By (8), $Q_{c-1}^k - R_{c-1}^k \leq Q_c^k - R_c^k$, so
$$(5) \leq R_c^k - CE(X_n) + Q_c^k - R_c^k = Q_c^k - CE(X_n)$$
By the case assumption, $Q_c^k \leq CE(X_n) \Rightarrow (5) \leq 0$
CASE 10:
$$(5) = R_c^k - R_{c-1}^k + CE(X_n) - 2\, CE(X_n) + CE(X_n) = R_c^k - R_{c-1}^k$$
By the induction assumption, (1), $R_c^k \leq R_{c-1}^k \Rightarrow (5) \leq 0$
So, since (5) is true for every $X_n$, we know that (3) is true and:
$$R_c^{k-1} \geq R_{c+1}^{k-1} \quad \text{for all } c \geq 1$$
Finally, by induction, we know that this is true for all $t$, or
$$R_c^t \geq R_{c+1}^t \quad \text{for all } c \geq 1 \text{ at any } t$$
Proposition 6.3.3 Statement 2
We prove this by induction. Take the case when $t = T$; the statement becomes
$$R_c^{T-1} \geq R_c^T$$
This is true since
$$R_c^T = V_c^{T+1} - V_{c-1}^{T+1} = 0 \quad \text{for all } c \geq 1$$
while, since $V$ is non-decreasing in $c$, we have:
$$R_c^{T-1} = V_c^T - V_{c-1}^T \geq 0$$
Hence $R_c^t$ is non-increasing in $t$ at $t = T-1$.
Assume this is true for $t = k$ and show that the statement is true for $t = k-1$. So we know:
$$R_c^{k-1} \geq R_c^k \quad (1)$$
meaning:
$$V_c^k - V_{c-1}^k \geq V_c^{k+1} - V_{c-1}^{k+1} \quad (2)$$
And we want to show
$$R_c^{k-2} \geq R_c^{k-1} \quad (3)$$
or, alternatively,
$$V_c^{k-1} - V_{c-1}^{k-1} \geq V_c^k - V_{c-1}^k \quad (4)$$
We rewrite $V_c^t$ for each $X_n$ as:
$$V_c^t \mid X_n = \Big\{\big[CE(X_n) + V_{c-1}^{t+1}\big] \vee V_c^{t+1} \vee CE\big[\big\{\big[CE(X_n \mid I_n) + V_{c-1}^{t+1}\big] \vee V_c^{t+1}\big\} - \kappa\big]\Big\}$$
Recall that $Q_c^t$ is defined as
$$Q_c^t(X_n) = CE\big[\{CE(X_n \mid I_n) \vee R_c^t\} - \kappa\big]$$
So, we have:
$$V_c^t \mid X_n = V_{c-1}^{t+1} + \big\{CE(X_n) \vee R_c^t \vee Q_c^t(X_n)\big\}$$
Now, rewrite (4) for each $X_n$ as:
$$V_{c-1}^k + \big\{CE(X_n) \vee R_c^{k-1} \vee Q_c^{k-1}(X_n)\big\} - V_{c-2}^k - \big\{CE(X_n) \vee R_{c-1}^{k-1} \vee Q_{c-1}^{k-1}(X_n)\big\} - V_{c-1}^{k+1} - \big\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\big\} + V_{c-2}^{k+1} + \big\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\big\} \geq 0$$
which is equivalent to
$$R_{c-1}^k - R_{c-1}^{k-1} - \big\{CE(X_n) \vee R_c^{k-1} \vee Q_c^{k-1}(X_n)\big\} + \big\{CE(X_n) \vee R_{c-1}^{k-1} \vee Q_{c-1}^{k-1}(X_n)\big\} + \big\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\big\} - \big\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\big\} \leq 0 \quad (5)$$
Note that from Statement 1 we have
$$R_c^t \geq R_{c+1}^t \quad \text{for all } t \text{ and } c \geq 1 \quad (6)$$
leading to:
$$Q_c^t(X_n) \geq Q_{c+1}^t(X_n) \quad \text{for all } t \text{ and } c \geq 1 \quad (7)$$
Also, (1) gives us
$$Q_c^{k-1}(X_n) \geq Q_c^k(X_n) \quad \text{for all } c \geq 1 \quad (8)$$
Along similar lines as in Statement 1, we prove:
$$Q_c^{k-1}(X_n) - R_c^{k-1} \leq Q_c^k(X_n) - R_c^k \quad (9)$$
$$LHS = CE\big[\{CE(X_n \mid I_n) \vee R_c^{k-1}\} - \kappa\big] - R_c^{k-1} = CE\big[\big(CE(X_n \mid I_n) - R_c^{k-1}\big)^+\big] - \kappa$$
By the induction assumption,
$$LHS \leq CE\big[\big(CE(X_n \mid I_n) - R_c^k\big)^+\big] - \kappa = CE\big[\{CE(X_n \mid I_n) \vee R_c^k\} - \kappa\big] - R_c^k = RHS$$
Recall, from the proof of Statement 1, that:
$$Q_{c-1}^t(X_n) - R_{c-1}^t \leq Q_c^t(X_n) - R_c^t \quad (10)$$
In the following, we drop $(X_n)$ from the $Q$ term for clarity.
In the same manner as in the proof of Statement 1, we reject all the cases that contradict (6), (7), and (8) and end up with the following 20 cases:

Maximum term at
Case  (c, k-1)  (c-1, k-1)  (c, k)   (c-1, k)
  1   R         R           R        R
  2   R         R           Q        R
  3   R         R           Q        Q
  4   R         R           CE(Xn)   R
  5   R         R           CE(Xn)   Q
  6   R         R           CE(Xn)   CE(Xn)
  7   Q         R           Q        R
  8   Q         R           Q        Q
  9   Q         R           CE(Xn)   R
 10   Q         R           CE(Xn)   Q
 11   Q         R           CE(Xn)   CE(Xn)
 12   Q         Q           Q        Q
 13   Q         Q           CE(Xn)   Q
 14   Q         Q           CE(Xn)   CE(Xn)
 15   CE(Xn)    R           CE(Xn)   R
 16   CE(Xn)    R           CE(Xn)   Q
 17   CE(Xn)    R           CE(Xn)   CE(Xn)
 18   CE(Xn)    Q           CE(Xn)   Q
 19   CE(Xn)    Q           CE(Xn)   CE(Xn)
 20   CE(Xn)    CE(Xn)      CE(Xn)   CE(Xn)

Now, let us consider the cases and evaluate relation (5):
CASE 1:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + R_c^k - R_{c-1}^k = R_c^k - R_c^{k-1}$$
but, from (1), $R_c^k \leq R_c^{k-1} \Rightarrow (5) \leq 0$
CASE 2:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + Q_c^k - R_{c-1}^k = Q_c^k - R_c^{k-1}$$
From the case we know $R_c^{k-1} \geq Q_c^{k-1}$, and from (8) we have $Q_c^{k-1} \geq Q_c^k$; thus $(5) \leq 0$
CASE 3:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + Q_c^k - Q_{c-1}^k = R_{c-1}^k + Q_c^k - Q_{c-1}^k - R_c^{k-1}$$
Note that, as in Case 2, $R_c^{k-1} \geq Q_c^k$, and by the case assumption, $R_{c-1}^k \leq Q_{c-1}^k \Rightarrow (5) \leq 0$
CASE 4:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - R_{c-1}^k = CE(X_n) - R_c^{k-1}$$
By the case assumption at $(c, k-1)$, $CE(X_n) \leq R_c^{k-1} \Rightarrow (5) \leq 0$
CASE 5:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - Q_{c-1}^k = \{R_{c-1}^k - Q_{c-1}^k\} + \{CE(X_n) - R_c^{k-1}\}$$
By the case assumptions, $R_{c-1}^k \leq Q_{c-1}^k$ and $CE(X_n) \leq R_c^{k-1} \Rightarrow (5) \leq 0$
CASE 6:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - R_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - CE(X_n) = R_{c-1}^k - R_c^{k-1}$$
Adding and subtracting $CE(X_n)$:
$$(5) = \{R_{c-1}^k - CE(X_n)\} + \{CE(X_n) - R_c^{k-1}\}$$
But by the case assumptions, we have $R_{c-1}^k \leq CE(X_n)$ and $CE(X_n) \leq R_c^{k-1} \Rightarrow (5) \leq 0$
CASE 7:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + R_{c-1}^{k-1} + Q_c^k - R_{c-1}^k = Q_c^k - Q_c^{k-1}$$
By (8), $Q_c^k \leq Q_c^{k-1} \Rightarrow (5) \leq 0$
CASE 8:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + R_{c-1}^{k-1} + Q_c^k - Q_{c-1}^k = \{R_{c-1}^k - Q_{c-1}^k\} + \{Q_c^k - Q_c^{k-1}\}$$
By the case assumption, $R_{c-1}^k \leq Q_{c-1}^k$, and by (8), $Q_c^k \leq Q_c^{k-1} \Rightarrow (5) \leq 0$
CASE 9:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - R_{c-1}^k = CE(X_n) - Q_c^{k-1}$$
By the case assumption, $Q_c^{k-1} \geq CE(X_n) \Rightarrow (5) \leq 0$
CASE 10:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - Q_{c-1}^k = \{R_{c-1}^k - Q_{c-1}^k\} + \{CE(X_n) - Q_c^{k-1}\}$$
By the case assumptions, $R_{c-1}^k \leq Q_{c-1}^k$ and $CE(X_n) \leq Q_c^{k-1} \Rightarrow (5) \leq 0$
CASE 11:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + R_{c-1}^{k-1} + CE(X_n) - CE(X_n) = R_{c-1}^k - Q_c^{k-1}$$
In a similar manner to Case 6, we add and subtract $CE(X_n)$:
$$(5) = \{R_{c-1}^k - CE(X_n)\} + \{CE(X_n) - Q_c^{k-1}\}$$
By the case assumptions, $R_{c-1}^k \leq CE(X_n)$ and $CE(X_n) \leq Q_c^{k-1} \Rightarrow (5) \leq 0$
CASE 12:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + Q_{c-1}^{k-1} + Q_c^k - Q_{c-1}^k = \big\{\big(Q_{c-1}^{k-1} - R_{c-1}^{k-1}\big) - \big(Q_{c-1}^k - R_{c-1}^k\big)\big\} + \{Q_c^k - Q_c^{k-1}\}$$
By (8), $Q_c^k \leq Q_c^{k-1}$; by (9), $Q_{c-1}^{k-1} - R_{c-1}^{k-1} \leq Q_{c-1}^k - R_{c-1}^k \Rightarrow (5) \leq 0$
CASE 13:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + Q_{c-1}^{k-1} + CE(X_n) - Q_{c-1}^k = \big\{\big(Q_{c-1}^{k-1} - R_{c-1}^{k-1}\big) - \big(Q_{c-1}^k - R_{c-1}^k\big)\big\} + \{CE(X_n) - Q_c^{k-1}\}$$
By the case assumption, $Q_c^{k-1} \geq CE(X_n)$, and by (9), $Q_{c-1}^{k-1} - R_{c-1}^{k-1} \leq Q_{c-1}^k - R_{c-1}^k \Rightarrow (5) \leq 0$
CASE 14:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - Q_c^{k-1} + Q_{c-1}^{k-1} + CE(X_n) - CE(X_n) = \big(Q_{c-1}^{k-1} - R_{c-1}^{k-1}\big) - \big(Q_c^{k-1} - R_{c-1}^k\big)$$
By (9), $Q_{c-1}^{k-1} - R_{c-1}^{k-1} \leq Q_{c-1}^k - R_{c-1}^k$, so
$$(5) \leq \big(Q_{c-1}^k - R_{c-1}^k\big) - \big(Q_c^{k-1} - R_{c-1}^k\big) = Q_{c-1}^k - Q_c^{k-1}$$
Adding and subtracting $CE(X_n)$:
$$(5) \leq \{Q_{c-1}^k - CE(X_n)\} + \{CE(X_n) - Q_c^{k-1}\}$$
By the case assumptions, $Q_{c-1}^k \leq CE(X_n)$ and $CE(X_n) \leq Q_c^{k-1} \Rightarrow (5) \leq 0$
CASE 15:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - CE(X_n) + R_{c-1}^{k-1} + CE(X_n) - R_{c-1}^k = 0$$
CASE 16:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - CE(X_n) + R_{c-1}^{k-1} + CE(X_n) - Q_{c-1}^k = R_{c-1}^k - Q_{c-1}^k$$
By the case assumption, $R_{c-1}^k \leq Q_{c-1}^k \Rightarrow (5) \leq 0$
CASE 17:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - CE(X_n) + R_{c-1}^{k-1} + CE(X_n) - CE(X_n) = R_{c-1}^k - CE(X_n)$$
By the case assumption, $R_{c-1}^k \leq CE(X_n) \Rightarrow (5) \leq 0$
CASE 18:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - CE(X_n) + Q_{c-1}^{k-1} + CE(X_n) - Q_{c-1}^k = \big(Q_{c-1}^{k-1} - R_{c-1}^{k-1}\big) - \big(Q_{c-1}^k - R_{c-1}^k\big)$$
By (9), $Q_{c-1}^{k-1} - R_{c-1}^{k-1} \leq Q_{c-1}^k - R_{c-1}^k \Rightarrow (5) \leq 0$
CASE 19:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} - CE(X_n) + Q_{c-1}^{k-1} + CE(X_n) - CE(X_n) = R_{c-1}^k - R_{c-1}^{k-1} + Q_{c-1}^{k-1} - CE(X_n)$$
By (9), $Q_{c-1}^{k-1} - R_{c-1}^{k-1} \leq Q_{c-1}^k - R_{c-1}^k$, so
$$(5) \leq R_{c-1}^k - CE(X_n) + Q_{c-1}^k - R_{c-1}^k = Q_{c-1}^k - CE(X_n)$$
By the case assumption, $Q_{c-1}^k \leq CE(X_n) \Rightarrow (5) \leq 0$
CASE 20:
$$(5) = R_{c-1}^k - R_{c-1}^{k-1} + CE(X_n) - CE(X_n) - CE(X_n) + CE(X_n) = R_{c-1}^k - R_{c-1}^{k-1}$$
By the induction assumption, (1), $R_{c-1}^{k-1} \geq R_{c-1}^k \Rightarrow (5) \leq 0$
So, $(5) \leq 0$ is true for all $X_n$ and hence must be true for the certain equivalent over $X_n$. So, (3) is true, or
$$R_c^{k-2} \geq R_c^{k-1}$$
Finally, by induction, we know that
$$R_c^{t-1} \geq R_c^t \quad \text{for all } c \text{ at any } t$$
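Both statements of Proposition 6.3.3 can be checked numerically on a small risk-neutral instance of recursion (1) (expectation in place of CE, binary deal with a perfect indicator, hypothetical numbers).

```python
# Numeric check of Proposition 6.3.3: R_c^t = V_c^{t+1} - V_{c-1}^{t+1} is
# non-increasing in both c and t on a risk-neutral instance of recursion (1).
from functools import lru_cache

p, v_hi, v_lo, kappa, T, C = 0.5, 6.0, -3.0, 0.3, 12, 5
CE_X = p * v_hi + (1 - p) * v_lo

@lru_cache(maxsize=None)
def V(c, t):
    if t > T or c == 0:
        return 0.0
    R = V(c, t + 1) - V(c - 1, t + 1)
    return max(V(c - 1, t + 1) + CE_X,
               V(c, t + 1),
               V(c - 1, t + 1) - kappa + p * max(v_hi, R) + (1 - p) * max(v_lo, R))

def R(c, t):
    return V(c, t + 1) - V(c - 1, t + 1)

for t in range(1, T + 1):
    for c in range(1, C):
        assert R(c, t) >= R(c + 1, t) - 1e-12   # statement 1: non-increasing in c
        assert R(c, t) >= R(c, t + 1) - 1e-12   # statement 2: non-increasing in t
```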
Corollary 6.3.1: Characterizing iV and iVI
The incremental values of a deal $X_n$ with and without information exhibit the following two properties:
I. $iV_c^t(X_n)$ and $iVI_c^t(X_n)$ are non-decreasing in $c$ for all $t$
II. $iV_c^t(X_n)$ and $iVI_c^t(X_n)$ are non-decreasing in $t$ for all $c$

Corollary 6.3.1 for iV
By definition of $iV$, we have
$$iV_c^t(X_n) = \big\{\big(CE(X_n) - R_c^t\big) \vee 0\big\}$$
Consider the first term of the max relation:
$$CE(X_n) - R_c^t$$
Since $CE(X_n)$ is a function of neither $t$ nor $c$, this term changes with $c$ and $t$ in the opposite direction to $R_c^t$. The second term of the max relation is zero, which changes with neither. Hence, $iV_c^t(X_n)$ changes in the opposite direction to $R_c^t$. Since $R_c^t$ is non-increasing in both $c$ and $t$ (Proposition 6.3.3), the incremental value is non-decreasing in $c$ for all $t$ and non-decreasing in $t$ for all $c$.

Corollary 6.3.1 for iVI
By definition, we have
$$iVI_c^t(X_n) = CE\big(iV_c^t(X_n \mid I_n)\big)$$
Note that the indication is not a function of the state $(c, t)$; hence $iVI_c^t$ follows $iV_c^t$ and is also non-decreasing in $c$ for all $t$ and non-decreasing in $t$ for all $c$.
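The corollary can be checked on the same kind of risk-neutral instance (hypothetical numbers; expectation in place of CE, perfect indicator).

```python
# Numeric check of Corollary 6.3.1: iV_c^t = max(CE(Xn) - R_c^t, 0) and
# iVI_c^t = E[max(CE(Xn|In) - R_c^t, 0)] are non-decreasing in c and in t.
from functools import lru_cache

p, v_hi, v_lo, kappa, T, C = 0.4, 7.0, -4.0, 0.3, 10, 4
CE_X = p * v_hi + (1 - p) * v_lo

@lru_cache(maxsize=None)
def V(c, t):
    if t > T or c == 0:
        return 0.0
    R = V(c, t + 1) - V(c - 1, t + 1)
    return max(V(c - 1, t + 1) + CE_X,
               V(c, t + 1),
               V(c - 1, t + 1) - kappa + p * max(v_hi, R) + (1 - p) * max(v_lo, R))

def R(c, t): return V(c, t + 1) - V(c - 1, t + 1)
def iV(c, t): return max(CE_X - R(c, t), 0.0)
def iVI(c, t): return p * max(v_hi - R(c, t), 0.0) + (1 - p) * max(v_lo - R(c, t), 0.0)

for t in range(1, T + 1):
    for c in range(1, C):
        for g in (iV, iVI):
            assert g(c + 1, t) >= g(c, t) - 1e-12   # non-decreasing in c
            assert g(c, t + 1) >= g(c, t) - 1e-12   # non-decreasing in t
```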
Proposition 6.3.4: Characterizing the IBP of Information (ibpVI)
The IBP of information exhibits the following properties:
I. For a given c, the IBP of information is increasing in t, reaches a maximum when R_c^t = CE(Xn), and then decreases in t until it converges at CE(Xn*) - CE(Xn)
II. For a given t, the IBP of information is increasing in c, reaches a maximum when R_c^t = CE(Xn), and then decreases in c until it converges at CE(Xn*) - CE(Xn)
where Xn* is the deal with free information.

Recall that the IBP of information is defined as:
ibpVI_c^t(Xn) = iVI_c^t(Xn) - iV_c^t(Xn)
ibpVI_c^t(Xn) = CE(CE(Xn|In) - R_c^t ∨ 0) - (CE(Xn) - R_c^t ∨ 0)   (1)
We take two cases, namely, as CE(Xn) relates to R_c^t.

CASE 1: CE(Xn) ≤ R_c^t
(1) reduces to
ibpVI_c^t(Xn) = CE(CE(Xn|In) - R_c^t ∨ 0) - 0
So ibpVI_c^t(Xn) moves in the direction opposite to that of R_c^t.
Thus, ibpVI_c^t(Xn) is increasing in t and c when CE(Xn) ≤ R_c^t.

CASE 2: CE(Xn) ≥ R_c^t
(1) reduces to
ibpVI_c^t(Xn) = CE(CE(Xn|In) - R_c^t ∨ 0) - CE(Xn) + R_c^t
ibpVI_c^t(Xn) = CE(CE(Xn|In) ∨ R_c^t) - CE(Xn)
So ibpVI_c^t(Xn) moves in the same direction as R_c^t.
Thus, ibpVI_c^t(Xn) is decreasing in t and c when CE(Xn) ≥ R_c^t.

Now, since ibpVI_c^t(Xn) increases in case 1 and then decreases in case 2, it reaches a maximum when R_c^t = CE(Xn).

Finally, we study the convergence of the term. At t = T, (1) reduces to:
ibpVI_c^T(Xn) = CE(CE(Xn|In) ∨ 0) - CE(Xn)
But the value of the deal with free information outside the funnel, CE(Xn*), equals:
CE(Xn*) = CE(CE(Xn|In) ∨ 0)
Thus, ibpVI_c^T = CE(Xn*) - CE(Xn).
The same is true when c = C.
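The single-peaked shape can be seen with a one-line risk-neutral stand-in that treats the threshold R as a free scalar; the gamble below is invented for illustration.

```python
# ibpVI as a function of the threshold R (risk-neutral sketch, toy gamble,
# clairvoyant detector). CE is an expectation here by assumption.
deals = [(-4.0, 0.5), (10.0, 0.5)]
mean = sum(p * x for x, p in deals)             # CE(Xn) = 3.0
free = sum(p * max(x, 0.0) for x, p in deals)   # CE(Xn*) = 5.0

def ibpVI(R):
    iVI = sum(p * max(x - R, 0.0) for x, p in deals)   # value with clairvoyance
    return iVI - max(mean - R, 0.0)                    # minus value without it

grid = [i / 10.0 for i in range(-50, 150)]
peak = max(grid, key=ibpVI)
assert abs(peak - mean) < 1e-9                  # maximum exactly at R = CE(Xn)
assert abs(ibpVI(0.0) - (free - mean)) < 1e-9   # equals CE(Xn*) - CE(Xn) at R = 0
```

Since R_c^t shrinks toward 0 as t or c grows, the convergence value CE(Xn*) - CE(Xn) is attained at the boundary, matching the proposition.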
Proposition 6.3.5: Characterization of Optimal Policy
The optimal policy for a given deal Xn is characterized as follows:
I. For a given c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 6.3.5 Statement 1
We follow the notation used in Proposition 6.3.3. Namely, define Q_c^t(Xn) as
Q_c^t(Xn) = CE(CE(Xn|In) ∨ R_c^t) - m
Now, we have
V_c^t|Xn = V_{c-1}^{t+1} + {R_c^t ∨ Q_c^t(Xn) ∨ CE(Xn)}
To prove statement 1, we need to show that for a given c:
- If CE(Xn) ≥ {R_c^t, Q_c^t(Xn)}, then this is true for any t' > t
- If Q_c^t(Xn) ≥ R_c^t, then this is true for any t' > t
To prove the first condition, we note that R_c^t is decreasing in t for a given c. Hence, if the first condition is true at any given t, then it must be true for larger values of t. Now, from the definition of Q_c^t(Xn) we see that it moves in the direction of R_c^t. Thus, Q_c^t(Xn) will decrease in t and the first condition is true.
To prove the second condition, we note that it is sufficient to prove
Q_c^t(Xn) - R_c^t ≥ Q_c^{t-1}(Xn) - R_c^{t-1}
We rewrite this as
CE(CE(Xn|In) ∨ R_c^t) - m - R_c^t ≥ CE(CE(Xn|In) ∨ R_c^{t-1}) - m - R_c^{t-1}
CE(CE(Xn|In) - R_c^t ∨ 0) ≥ CE(CE(Xn|In) - R_c^{t-1} ∨ 0)
iVI_c^t(Xn) ≥ iVI_c^{t-1}(Xn)
which we know is true from Corollary 6.3.1.

Proposition 6.3.5 Statement 2
This can be proven along the same lines as statement 1, interchanging the roles of t and c.
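The reject/seek-info/accept ordering can be demonstrated numerically. Below is a minimal risk-neutral sketch (CE = expectation) with an invented two-point gamble and a clairvoyant detector priced so that several regimes appear; actions are encoded 0 = reject, 1 = seek information, 2 = accept.

```python
# Policy-structure sketch for Prop. 6.3.5: for fixed c, the optimal action can
# only advance from Reject (0) to Seek-info (1) to Accept (2) as t increases.
deal = [(-5.0, 0.5), (9.0, 0.5)]      # illustrative gamble; CE(Xn) = 2.0
ce = sum(p * x for x, p in deal)
m, C, T = 3.0, 4, 12                  # clairvoyance price, capacity, deadline

V = [[0.0] * (T + 2) for _ in range(C + 1)]   # boundary V[c][T+1] = 0
for t in range(T, -1, -1):
    for c in range(1, C + 1):
        R = V[c][t + 1] - V[c - 1][t + 1]              # R_c^t
        Q = sum(p * max(x, R) for x, p in deal) - m    # Q_c^t(Xn)
        V[c][t] = V[c - 1][t + 1] + max(ce, R, Q)

def action(c, t):
    R = V[c][t + 1] - V[c - 1][t + 1]
    Q = sum(p * max(x, R) for x, p in deal) - m
    best = max(ce, R, Q)
    return 2 if best == ce else (1 if best == Q else 0)

for c in range(1, C + 1):
    seq = [action(c, t) for t in range(T + 1)]
    assert seq == sorted(seq)          # the action only moves toward Accept
```

Not every regime need occur for every c; the proposition only forbids moving backward in the ordering, which is what the assertion checks.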
Proposition 6.3.6: Identifying Optimal Detector
Consider two detectors with incremental values of the deal with information iVI_1 and iVI_2 and costs m_1 and m_2, respectively. Detector 1 will be optimal when:
iVI_1 - iVI_2 ≥ m_1 - m_2
Otherwise, detector 2 will be optimal.
This optimality is not myopic; that is, if the decision maker is offered the use of both, he/she should not always start with the optimal detector.

Let In_1, In_2 be the indications associated with detectors 1 and 2, respectively. The alternative of buying information is worth
V_{c-1}^{t+1} - m_j + CE(CE(Xn|In_j) ∨ R_c^t)
where j is the number of the detector. For detector 1 to be preferable over detector 2, the following needs to be true:
V_{c-1}^{t+1} - m_1 + CE(CE(Xn|In_1) ∨ R_c^t) ≥ V_{c-1}^{t+1} - m_2 + CE(CE(Xn|In_2) ∨ R_c^t)
-m_1 + CE(CE(Xn|In_1) ∨ R_c^t) ≥ -m_2 + CE(CE(Xn|In_2) ∨ R_c^t)
-m_1 + CE((CE(Xn|In_1) - R_c^t) ∨ 0) + R_c^t ≥ -m_2 + CE((CE(Xn|In_2) - R_c^t) ∨ 0) + R_c^t
-m_1 + CE(iV_c^t(Xn|In_1)) ≥ -m_2 + CE(iV_c^t(Xn|In_2))
Let iVI_c^t,j(Xn) = CE(iV_c^t(Xn|In_j)). Then
-m_1 + iVI_c^t,1(Xn) ≥ -m_2 + iVI_c^t,2(Xn)
iVI_c^t,1(Xn) - iVI_c^t,2(Xn) ≥ m_1 - m_2
iVI_1 - iVI_2 ≥ m_1 - m_2
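A small numeric instance of this rule, under assumptions not in the text: risk neutrality (CE = expectation), detector 1 clairvoyant, and detector 2 a coarse signal that only reveals the sign of the payoff; all payoffs and prices are invented.

```python
# Detector comparison per Proposition 6.3.6 (risk-neutral sketch).
deals = [(-4.0, 0.25), (2.0, 0.25), (6.0, 0.25), (12.0, 0.25)]
R = 3.0                               # current threshold R_c^t (assumed)
m1, m2 = 1.5, 0.25                    # detector prices (assumed)

# Detector 1: clairvoyance reveals the payoff exactly.
iVI1 = sum(p * max(x - R, 0.0) for x, p in deals)

# Detector 2: sign signal; after the signal we act on the posterior mean.
def post_mean(group):
    w = sum(p for _, p in group)
    return sum(p * x for x, p in group) / w, w
neg = [(x, p) for x, p in deals if x < 0]
pos = [(x, p) for x, p in deals if x >= 0]
iVI2 = sum(w * max(mu - R, 0.0) for mu, w in (post_mean(neg), post_mean(pos)))

best = 1 if iVI1 - iVI2 >= m1 - m2 else 2
```

Here iVI1 - iVI2 = 0.25 while m1 - m2 = 1.25, so the cheaper coarse detector wins even though clairvoyance is more informative.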
A1.2.2 Section 6.4 The Long-Run Problem

Proposition 4.1: Characterizing the Long-Run Problem
Here we characterize the problem parameters along the same lines as in Section 6.3. We found that all the relations are maintained along the capacity dimension.
I. When offered a deal Xn, the decision maker should buy information if and only if iVI^c(Xn) ≥ iV^c(Xn) + m. Otherwise, the deal is worth buying without information if and only if CE(Xn) ≥ R^c
II. V^c is non-decreasing in c
III. R^c is non-increasing in c
IV. iV^c(Xn) and iVI^c(Xn) are non-decreasing in c
V. ibpVI^c(Xn) is increasing in c, reaches a maximum when R^c = CE(Xn), and then decreases in c until it converges at CE(Xn*) - CE(Xn), where Xn* is the deal with free information
VI. The optimal policy can only change over c from rejecting, to buying information, and finally to accepting
Proposition 4.1 Statement 1
The recursion can be rewritten as:

[Decision-tree diagram: from V^c|Xn, Accept gives δV^{c-1} + CE(Xn); Reject gives δV^c; Seek info leads, after indication In, to Accept with δV^{c-1} + CE(Xn|In) - m or Reject with δV^c - m.]

V^c|Xn = {[δV^{c-1} + CE(Xn)] ∨ δV^c ∨ (δV^{c-1} - m + CE[CE(Xn|In) ∨ R^c])}   (1)

Using the definition of the incremental value of the deal with information, the recursion simplifies. We seek information when:
δV^c + iVI^c(Xn) - m ≥ {(δV^{c-1} + CE(Xn)) ∨ δV^c}
iVI^c(Xn) ≥ {(CE(Xn) - R^c) ∨ 0} + m
So,
iVI^c(Xn) ≥ iV^c(Xn) + m
Otherwise, we prefer to go with the deal without information. We buy the deal if
δV^{c-1} + CE(Xn) ≥ δV^c
or CE(Xn) ≥ R^c
Proposition 4.1 Statements 2 and 3
We prove the two properties in two steps. First, we prove that they are true for the finite-horizon case when we introduce discounting. Then, we use successive approximations to show that the infinite-horizon V converges to that of the finite horizon as we push the deadline T to the limit.

Step 1-1: Characterizing V
We prove this by induction. Take t = T; (1) becomes
V_c^T|Xn = {[δV_{c-1}^{T+1} + CE(Xn)] ∨ δV_c^{T+1} ∨ (δV_{c-1}^{T+1} - m + CE[CE(Xn|In) ∨ R_c^T])}
Since V_c^{T+1} = 0 for all c, this equation reduces to
V_c^T|Xn = {CE(Xn) ∨ 0 ∨ CE[CE(Xn|In) ∨ R_c^T] - m}   for c ≥ 1
V_0^T|Xn = 0   for c = 0
Hence it is non-decreasing in c at t = T.
Now, assume true for t = k, or
V_c^k ≥ V_{c-1}^k   for all c ≥ 1   (2)
Now, prove that it is true for t = k-1, or
V_c^{k-1} ≥ V_{c-1}^{k-1}   for all c ≥ 1   (3)
where:
V_c^{k-1}|Xn = {[δV_{c-1}^k + CE(Xn)] ∨ δV_c^k ∨ (δV_{c-1}^k - m + CE[CE(Xn|In) ∨ R_c^{k-1}])}
and
V_{c-1}^{k-1}|Xn = {[δV_{c-2}^k + CE(Xn)] ∨ δV_{c-1}^k ∨ (δV_{c-2}^k - m + CE[CE(Xn|In) ∨ R_{c-1}^{k-1}])}
But since V_c^k is non-decreasing in c, (2), we have
V_{c-1}^{k-1}|Xn ≤ {[δV_{c-1}^k + CE(Xn)] ∨ δV_c^k ∨ (δV_{c-1}^k - m + CE[CE(Xn|In) ∨ R_c^{k-1}])} = V_c^{k-1}|Xn
Hence,
V_{c-1}^{k-1}|Xn ≤ V_c^{k-1}|Xn   for all Xn
Meaning (3) is true, namely
V_{c-1}^{k-1} ≤ V_c^{k-1}
And, finally, by induction
V_{c-1}^t ≤ V_c^t   for all c ≥ 1 and all t
Step 1-2: Characterizing R
We prove this by induction. Take the case when t = T; the statement becomes
R_c^T ≥ R_{c+1}^T
This is true since
R_c^T = δV_c^{T+1} - δV_{c-1}^{T+1} = 0   for all c ≥ 1
Hence R_c^T is non-increasing in c at t = T.
Now, assume this is true for t = k; let us prove it for t = k-1.
So we know that
R_c^k ≥ R_{c+1}^k   for all c ≥ 1   (1)
⇒ δV_c^{k+1} - δV_{c-1}^{k+1} ≥ δV_{c+1}^{k+1} - δV_c^{k+1}
2V_c^{k+1} ≥ V_{c+1}^{k+1} + V_{c-1}^{k+1}   for all c at t = k+1
For the statement to be true at t = k-1, the following must be true:
R_c^{k-1} ≥ R_{c+1}^{k-1}   for all c   (2)
2V_c^k ≥ V_{c+1}^k + V_{c-1}^k   for all c at t = k   (3)
For each Xn, we can rewrite V_c^k as:
V_c^k|Xn = {CE(Xn) + δV_{c-1}^{k+1} ∨ δV_c^{k+1} ∨ CE[{CE(Xn|In) + δV_{c-1}^{k+1} ∨ δV_c^{k+1}} - m]}
Now, define Q_c^k as
Q_c^k(Xn) = CE[{CE(Xn|In) ∨ R_c^k} - m]
So, we have:
V_c^k|Xn = δV_{c-1}^{k+1} + {CE(Xn) ∨ R_c^k ∨ Q_c^k(Xn)}
Now, rewrite (3) for each Xn as:
2{CE(Xn) ∨ R_c^k ∨ Q_c^k(Xn)} + 2δV_{c-1}^{k+1} ≥ {CE(Xn) ∨ R_{c+1}^k ∨ Q_{c+1}^k(Xn)} + δV_c^{k+1} + {CE(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)} + δV_{c-2}^{k+1}   (4)
which is equivalent to
R_c^k - R_{c-1}^k + {CE(Xn) ∨ R_{c+1}^k ∨ Q_{c+1}^k(Xn)} - 2{CE(Xn) ∨ R_c^k ∨ Q_c^k(Xn)} + {CE(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)} ≤ 0   (5)
And the rest of the proof follows directly from the case without discounting.
Step 2: Iterative Approximations
Here we prove the infinite-horizon case by iterative approximations.
Define the operator L as
L(V^c|Xn) = {[δV^{c-1} + CE(Xn)] ∨ δV^c ∨ (δV^{c-1} - m + CE[CE(Xn|In) ∨ R^c])}
If we take the input of the first iteration to be 0, then the solution to the fixed-point relation of L above is V^c. Following the same methodology as above, we can show that the infinite-horizon V^c converges to that of the finite horizon, and the properties we proved for the finite horizon extend to the infinite horizon.
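The successive-approximation argument can be run directly. A risk-neutral sketch (CE = expectation) of value iteration on the operator L, with an illustrative discount factor, gamble, and information price:

```python
# Value iteration for the long-run operator L (Section 6.4), risk-neutral sketch.
delta = 0.9                           # discount factor (assumed)
m = 1.0                               # price of information (assumed)
deals = [(-4.0, 0.5), (10.0, 0.5)]    # illustrative two-point deal
mean = sum(p * x for x, p in deals)
C = 4

V = [0.0] * (C + 1)                   # input of the first iteration: V = 0
for _ in range(500):                  # successive approximations of L
    W = [0.0] * (C + 1)
    for c in range(1, C + 1):
        R = delta * (V[c] - V[c - 1])                 # long-run threshold R^c
        accept = delta * V[c - 1] + mean
        reject = delta * V[c]
        info = delta * V[c - 1] - m + sum(q * max(y, R) for y, q in deals)
        W[c] = max(accept, reject, info)
    if max(abs(a - b) for a, b in zip(V, W)) < 1e-12:
        V = W
        break
    V = W

R = [delta * (V[c] - V[c - 1]) for c in range(1, C + 1)]
assert all(V[c] >= V[c - 1] - 1e-9 for c in range(1, C + 1))      # Statement 2
assert all(R[i] >= R[i + 1] - 1e-9 for i in range(len(R) - 1))    # Statement 3
```

L is a δ-contraction, so the iterates converge geometrically to the fixed point, mirroring the convergence claim above.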
Proposition 4.1 Statement 4
By definition, we have
iV^c(Xn) = [CE(Xn) - R^c]+
So, iV^c is inversely related to R^c, and hence iV^c is non-decreasing in c.
Similarly, we have
iVI^c(Xn) = CE[(CE(Xn|In) - R^c ∨ 0)] = CE[iV^c(Xn|In)]
So, iVI^c moves along the same direction as iV^c and hence is also non-decreasing in c.

Proposition 4.1 Statement 5
Recall that the IBP of information is defined as:
ibpVI^c(Xn) = iVI^c(Xn) - iV^c(Xn)
ibpVI^c(Xn) = CE(CE(Xn|In) - R^c ∨ 0) - (CE(Xn) - R^c ∨ 0)   (1)
We take two cases as CE(Xn) relates to R^c.

CASE 1: CE(Xn) ≤ R^c
(1) reduces to
ibpVI^c(Xn) = CE(CE(Xn|In) - R^c ∨ 0) - 0
So ibpVI^c(Xn) moves in the direction opposite to that of R^c.
Thus, ibpVI^c(Xn) is increasing in c when CE(Xn) ≤ R^c.

CASE 2: CE(Xn) ≥ R^c
(1) reduces to
ibpVI^c(Xn) = CE(CE(Xn|In) - R^c ∨ 0) - CE(Xn) + R^c
ibpVI^c(Xn) = CE(CE(Xn|In) ∨ R^c) - CE(Xn)
So ibpVI^c(Xn) moves in the same direction as R^c.
Thus, ibpVI^c(Xn) is decreasing in c when CE(Xn) ≥ R^c.

Now, since ibpVI^c(Xn) increases in case 1 and then decreases in case 2, it reaches a maximum when R^c = CE(Xn).

Finally, we study the convergence of the term. At c = C, (1) reduces to:
ibpVI^C(Xn) = CE(CE(Xn|In) ∨ 0) - CE(Xn)
But the value of the deal with free information outside the funnel, CE(Xn*), equals:
CE(Xn*) = CE(CE(Xn|In) ∨ 0)
Thus, ibpVI^C = CE(Xn*) - CE(Xn).

Proposition 4.1 Statement 6
Define Q^c(Xn) as
Q^c(Xn) = CE(CE(Xn|In) ∨ R^c) - m
Now, we have
V^c|Xn = δV^{c-1} + {R^c ∨ Q^c(Xn) ∨ CE(Xn)}
To prove this statement, we need to show that:
- If CE(Xn) ≥ {R^c, Q^c(Xn)}, then this is true for any c' > c
- If Q^c(Xn) ≥ R^c, then this is true for any c' > c
To prove the first condition, we note that R^c is decreasing in c. Hence, if the first condition is true at any given c, then it must be true for larger values of c. Now, from the definition of Q^c(Xn) we see that it moves in the direction of R^c. Thus, Q^c(Xn) will decrease in c and the first condition is true.
To prove the second condition, we note that it is sufficient to prove
Q^c(Xn) - R^c ≥ Q^{c-1}(Xn) - R^{c-1}
We rewrite this as
CE(CE(Xn|In) ∨ R^c) - m - R^c ≥ CE(CE(Xn|In) ∨ R^{c-1}) - m - R^{c-1}
CE(CE(Xn|In) - R^c ∨ 0) ≥ CE(CE(Xn|In) - R^{c-1} ∨ 0)
iVI^c(Xn) ≥ iVI^{c-1}(Xn)
which we know is true from Proposition 4.1 Statement 4.
A1.2.3 Section 6.5.1 Extensions - Multiple Cost Structures
The problem with different cost structures is shown in the graph below. Here, seeking information costs m dollars, consumes b units of capacity, and takes a additional periods.

[Decision-tree diagram: from V_c^t|Xn, Accept gives V_{c-1}^{t+1} + CE(Xn); Reject gives V_c^{t+1}; Seek info leads, after indication In, to Accept with V_{c-1-b}^{t+1+a} + CE(Xn|In) - m or Reject with V_{c-b}^{t+1+a} - m.]

Proposition 6.5.1a: Optimal Information Gathering Policy with Multiple Cost Structures
When offered a deal Xn, the decision maker should buy information if and only if
iVI_{c-b}^{t+a}(Xn) ≥ iV_c^t(Xn) + R_c^t(b, a+1) + m
Proposition 6.5.1b: Optimal Allocation Policy with Multiple Cost Structures
After information is received, the decision maker should accept the deal if and only if
CE(Xn|In) ≥ R_{c-b}^{t+a}(1,1)
Otherwise, the deal is worth accepting without information if and only if
CE(Xn) ≥ R_c^t(1,1)

We first write the recursion as:
V_c^t|Xn = {V_{c-1}^{t+1} + CE(Xn) ∨ V_c^{t+1} ∨ [V_{c-1-b}^{t+1+a} - m + CE({CE(Xn|In) ∨ R_{c-b}^{t+a}(1,1)})]}
where
R_c^t(b, a+1) = V_c^{t+1} - V_{c-b}^{t+1+a}
This reduces to
V_c^t|Xn = V_c^{t+1} + {CE(Xn) - R_c^t(1,1) ∨ 0 ∨ [V_{c-b}^{t+1+a} - V_c^{t+1} - m + CE({CE(Xn|In) - R_{c-b}^{t+a}(1,1) ∨ 0})]}
V_c^t|Xn = V_c^{t+1} + {iV_c^t(Xn) ∨ V_{c-b}^{t+1+a} - V_c^{t+1} - m + iVI_{c-b}^{t+a}(Xn)}
V_c^t|Xn = V_c^{t+1} + {iV_c^t(Xn) ∨ iVI_{c-b}^{t+a}(Xn) - R_c^t(b, a+1) - m}
Hence we seek information when
iVI_{c-b}^{t+a}(Xn) - R_c^t(b, a+1) - m ≥ iV_c^t(Xn)
iVI_{c-b}^{t+a}(Xn) ≥ iV_c^t(Xn) + R_c^t(b, a+1) + m
After information is received, the decision maker should accept if and only if:
V_{c-1-b}^{t+1+a} + CE(Xn|In) - m ≥ V_{c-b}^{t+1+a} - m
CE(Xn|In) ≥ V_{c-b}^{t+1+a} - V_{c-1-b}^{t+1+a}
CE(Xn|In) ≥ R_{c-b}^{t+a}(1,1)
If information is not obtained, the decision maker should accept if and only if:
V_{c-1}^{t+1} + CE(Xn) ≥ V_c^{t+1}
CE(Xn) ≥ R_c^t(1,1)
Corollary 6.5.1: Identifying Optimal Detector with Multiple Cost Structures
Given the setup above, detector 1 will be optimal when:
iVI_1 - iVI_2 ≥ R_c^t(b_2, a_2+1) - R_c^t(b_1, a_1+1) + m_1 - m_2
where iVI_1 = iVI_{c-b_1}^{t+a_1} over In_1 and iVI_2 = iVI_{c-b_2}^{t+a_2} over In_2.
Otherwise, detector 2 will be optimal.

From above, we have that the value of the alternative of information for detector 1, A_1, is worth:
A_1 = V_c^{t+1} + iVI_1 - R_c^t(b_1, a_1+1) - m_1
We define A_2 in the same manner.
So, we prefer detector 1 over detector 2 when A_1 ≥ A_2, or
V_c^{t+1} + iVI_1 - R_c^t(b_1, a_1+1) - m_1 ≥ V_c^{t+1} + iVI_2 - R_c^t(b_2, a_2+1) - m_2
iVI_1 - iVI_2 ≥ R_c^t(b_2, a_2+1) - R_c^t(b_1, a_1+1) + m_1 - m_2
A1.2.4 Section 6.5.2 Extensions - Decision Reversibility
The problem setup is represented below.

[Decision-tree diagram: from V_c^t(O)|Xn, Buy option gives V_c^{t+1}(Xn) + CE(Xn) - CE(O) - m; Accept gives V_{c-1}^{t+1}(O) + CE(Xn); Don't (reject) gives V_c^{t+1}(O).]

Proposition 6.5.2: Optimal Allocation Policy with an Option
When offered a deal Xn with an option on deal O, the decision maker should accept the deal Xn and buy an option on it if and only if:
iVO_c^t(Xn,O) ≥ OC_c^t(Xn,O) + m + CE(O)
Otherwise, the decision maker should accept if and only if:
CE(Xn) ≥ R_c^t(1,1,O)
where
iVO_c^t(Xn,O) = V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O)
OC_c^t(Xn,O) = [R_c^t(1,1,O) - CE(Xn)]+

We have the following recursion:
V_c^t(O)|Xn = {(V_c^{t+1}(Xn) + CE(Xn) - CE(O) - m) ∨ (V_{c-1}^{t+1}(O) + CE(Xn)) ∨ (V_c^{t+1}(O))}
V_c^t(O)|Xn - V_{c-1}^{t+1}(O) - CE(Xn) = {(V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O) - CE(O) - m) ∨ (0) ∨ (V_c^{t+1}(O) - V_{c-1}^{t+1}(O) - CE(Xn))}
V_c^t(O)|Xn - V_{c-1}^{t+1}(O) - CE(Xn) = {(V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O) - CE(O) - m) ∨ (0) ∨ (R_c^t(1,1,O) - CE(Xn))}
V_c^t(O)|Xn - V_{c-1}^{t+1}(O) - CE(Xn) = {(V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O) - CE(O) - m) ∨ OC_c^t(Xn,O)}
So we buy the option when:
(V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O) - CE(O) - m) ≥ OC_c^t(Xn,O)
(V_c^{t+1}(Xn) - V_{c-1}^{t+1}(O)) ≥ OC_c^t(Xn,O) + CE(O) + m
iVO_c^t(Xn,O) ≥ OC_c^t(Xn,O) + m + CE(O)
Otherwise, the deal is worth accepting if and only if:
V_c^{t+1}(O) ≤ V_{c-1}^{t+1}(O) + CE(Xn)
CE(Xn) ≥ R_c^t(1,1,O)
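The decision rule can be exercised on one concrete state. The continuation values and certain equivalents below are invented for illustration, and the branch labels match the tree above; the assertion checks that the closed-form rule picks the same branch as direct maximization.

```python
# Option-decision sketch (Prop. 6.5.2) with illustrative state values.
V_next_Xn = 10.0      # V_c^{t+1}(Xn): continue holding an option on Xn (assumed)
V_next_O = 9.0        # V_c^{t+1}(O) (assumed)
V_next_O_less = 7.5   # V_{c-1}^{t+1}(O) (assumed)
ce_Xn, ce_O, m = 3.0, 1.0, 0.5

R_O = V_next_O - V_next_O_less          # R_c^t(1,1,O)
iVO = V_next_Xn - V_next_O_less         # incremental value of the option
OC = max(R_O - ce_Xn, 0.0)              # opportunity-cost term OC_c^t(Xn,O)

branches = {
    "buy_option": V_next_Xn + ce_Xn - ce_O - m,
    "accept": V_next_O_less + ce_Xn,
    "reject": V_next_O,
}
best = max(branches, key=branches.get)  # direct evaluation of the recursion
rule = "buy_option" if iVO >= OC + m + ce_O else (
    "accept" if ce_Xn >= R_O else "reject")
assert best == rule
```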
A1.2.5 Section 6.5.3 Extensions - Probability of Knowing Detectors
For convenience, we present the structure of the problem with probability-of-knowing detectors.

[Decision-tree diagram: from V_c^t|Xn, Accept gives V_{c-1}^{t+1} + CE(Xn) and Reject gives V_c^{t+1}. Seek info costs m and yields clairvoyance with probability p, leading, after indication In, to Accept with V_{c-1}^{t+1} + CE(Xn|In) - m or Reject with V_c^{t+1} - m; with probability 1-p no clairvoyance is obtained, leading to Accept with V_{c-1}^{t+1} + CE(Xn) - m or Reject with V_c^{t+1} - m.]

Recursion Equation
Define the following terms as before:
iV_c^t(Xn) = [CE(Xn) - R_c^t]+
iVI_c^t(Xn) = CE(iV_c^t(Xn|In))
By subtracting V_c^{t+1} from all the end terms, the recursion above reduces to the same tree in increments: Accept gives CE(Xn) - R_c^t and Reject gives 0; under clairvoyance, Accept gives CE(Xn|In) - R_c^t - m and Reject gives -m; without clairvoyance, Accept gives CE(Xn) - R_c^t - m and Reject gives -m. It finally reduces to comparing iV_c^t(Xn) with the seek-information alternative, a gamble that pays (1) = iVI_c^t(Xn) - m with probability p and (2) = iV_c^t(Xn) - m with probability 1-p.
Proposition 6.5.3a: Optimal Information Gathering Policy with Probability of Knowing Detectors
Given a detector defined as above with a probability of knowing p and price m, the decision maker should buy information if and only if:
u(iVI_c^t(Xn) - iV_c^t(Xn)) ≥ u(m)/p

We seek information when seeking information provides a higher value than not. We state this relationship in terms of u-values:
u(iV_c^t(Xn)) ≤ p u(iVI_c^t(Xn) - m) + (1-p) u(iV_c^t(Xn) - m)
For simplicity, we write iV and iVI for iV_c^t(Xn) and iVI_c^t(Xn):
1 - e^{-γ(iV)} ≤ p(1 - e^{-γ(iVI - m)}) + (1-p)(1 - e^{-γ(iV - m)})
e^{-γ(iV)} ≥ p e^{-γ(iVI - m)} + (1-p) e^{-γ(iV - m)}
1 ≥ p e^{-γ(iVI - iV - m)} + (1-p) e^{γ(m)}
e^{-γ(m)} ≥ p e^{-γ(iVI - iV)} + (1-p)
e^{-γ(m)} - 1 ≥ p e^{-γ(iVI - iV)} - p
1 - e^{-γ(m)} ≤ p(1 - e^{-γ(iVI - iV)})
u(iVI - iV) ≥ u(m)/p
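The algebra above can be verified numerically: the closed-form test and the direct u-value comparison must always agree in sign. The risk-aversion coefficient and sampled ranges below are illustrative.

```python
import math, random

# Check of the u-value algebra behind Proposition 6.5.3a: with exponential
# utility u(x) = 1 - exp(-g*x), "seek info" beats "don't" exactly when
# u(iVI - iV) >= u(m)/p.
g = 0.3                               # risk-aversion coefficient (assumed)
u = lambda x: 1.0 - math.exp(-g * x)

random.seed(1)
for _ in range(1000):
    iV = random.uniform(0.0, 5.0)
    iVI = iV + random.uniform(0.0, 5.0)     # iVI >= iV always holds
    m = random.uniform(0.0, 4.0)
    p = random.uniform(0.05, 1.0)
    lhs = p * u(iVI - m) + (1 - p) * u(iV - m) - u(iV)   # direct comparison
    rhs = p * u(iVI - iV) - u(m)                          # closed-form test
    if abs(lhs) > 1e-9 and abs(rhs) > 1e-9:   # skip floating-point knife edges
        assert (lhs > 0) == (rhs > 0)
```

The two expressions differ only by multiplication with positive factors (e^{γ iV} and e^{-γ m}), which is why their signs always match.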
Proposition 6.5.3b: Optimal Allocation Policy with Probability of Knowing Detectors
If clairvoyance is received, the decision maker should accept the deal if and only if:
CE(Xn|In) ≥ R_c^t
Otherwise, if no clairvoyance is received or the decision maker did not buy information, then the decision maker should accept the deal if and only if:
CE(Xn) ≥ R_c^t
The proof for this follows that of the earlier propositions.

Corollary 6.5.3: Identifying Optimal Detector with Probability of Knowing Detectors
Given the setup above, detector 1 will be optimal when
u(m_1)/p_1 < u(m_2)/p_2
Otherwise, detector 2 will be optimal. In this setup, this optimality is myopic. So if we have multiple irrelevant detectors, we use them in increasing order of u(m)/p.

Directly from Proposition 6.5.3a we have that the benefit of a detector is inversely related to u(m)/p.
In the case with two detectors, we have the following setup:

[Decision-tree diagram: from V_c^t|Xn - V_c^{t+1}, Accept gives CE(Xn) - R_c^t (i.e., iV_c^t(Xn)) and Reject gives 0. Seek info with detector 1: with probability p1, clairvoyance is obtained and, after indication In, Accept gives CE(Xn|In) - R_c^t - m1 and Reject gives -m1, for a branch value of (1) = iVI_c^t(Xn) - m1. With probability 1-p1, detector 2 may then be used: with probability p2, clairvoyance is obtained and Accept gives CE(Xn|In) - R_c^t - m1 - m2 and Reject gives -m1 - m2, for (2) = iVI_c^t(Xn) - m1 - m2; otherwise Accept gives CE(Xn) - R_c^t - m1 - m2 and Reject gives -m1 - m2, for (3) = iV_c^t(Xn) - m1 - m2.]
To prove the myopic feature, we reduce the above structure. Note that if a detector does not satisfy:
u(iVI - iV) ≥ u(m)/p
then it is useless. Hence we consider the case where both detectors satisfy the above equation. This reduces the recursion to the tree in which information is always sought: detector 1 is tried first and, if it fails to produce clairvoyance (probability 1-p1), detector 2 is tried, with end values (1) = iVI_c^t(Xn) - m1 with probability p1, (2) = iVI_c^t(Xn) - m1 - m2 with probability (1-p1)p2, and (3) = iV_c^t(Xn) - m1 - m2 otherwise.
So, the value of the deal flow when using detector 1 before detector 2 is:
u(V_c^t|Xn - V_c^{t+1}) = p1 u(iVI_c^t(Xn) - m1) + (1-p1)[p2 u(iVI_c^t(Xn) - m1 - m2) + (1-p2) u(iV_c^t(Xn) - m1 - m2)]
Again, we write iVI and iV for iVI_c^t(Xn) and iV_c^t(Xn) for clarity, to get
u(V_c^t|Xn - V_c^{t+1}) = p1 u(iVI - m1) + (1-p1)[p2 u(iVI - m1 - m2) + (1-p2) u(iV - m1 - m2)]
After much algebra, this reduces to
u(V_c^t|Xn - V_c^{t+1}) = 1 - p1 e^{-γ(iVI - m1)} - (p2 - p1p2) e^{-γ(iVI - m1 - m2)} - (1-p1)(1-p2) e^{-γ(iV - m1 - m2)}
Denote this case by (I) and the case with detector 2 before detector 1 by (II).
So, in order to prefer detector 1 before detector 2, we must satisfy
u(I) > u(II)
Hence,
1 - p1 e^{-γ(iVI - m1)} - (p2 - p1p2) e^{-γ(iVI - m1 - m2)} - (1-p1)(1-p2) e^{-γ(iV - m1 - m2)}
≥
1 - p2 e^{-γ(iVI - m2)} - (p1 - p1p2) e^{-γ(iVI - m1 - m2)} - (1-p1)(1-p2) e^{-γ(iV - m1 - m2)}
After canceling repeated terms, this inequality reduces to
-p1 e^{-γ(iVI - m1)} - p2 e^{-γ(iVI - m1 - m2)} ≥ -p2 e^{-γ(iVI - m2)} - p1 e^{-γ(iVI - m1 - m2)}
or
p1 e^{-γ(iVI - m1)} + p2 e^{-γ(iVI - m1 - m2)} ≤ p2 e^{-γ(iVI - m2)} + p1 e^{-γ(iVI - m1 - m2)}
We multiply throughout by e^{γ(iVI)} to get
p1 e^{γ(m1)} + p2 e^{γ(m1 + m2)} ≤ p2 e^{γ(m2)} + p1 e^{γ(m1 + m2)}
Now we multiply throughout by e^{-γ(m1 + m2)} to get
p1 e^{-γ(m2)} + p2 ≤ p2 e^{-γ(m1)} + p1
p1 e^{-γ(m2)} - p1 ≤ p2 e^{-γ(m1)} - p2
p1 (e^{-γ(m2)} - 1) ≤ p2 (e^{-γ(m1)} - 1)
p1 u(m2) ≥ p2 u(m1)
And finally,
u(m1)/p1 ≤ u(m2)/p2
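The ordering rule can be stress-tested against the closed-form value of each sequencing; g and the sampled ranges are illustrative.

```python
import math, random

# Ordering check for Corollary 6.5.3: with exponential utility, detector a
# should go before detector b exactly when u(m_a)/p_a <= u(m_b)/p_b.
g = 0.4                               # risk-aversion coefficient (assumed)
u = lambda x: 1.0 - math.exp(-g * x)

def value_first(iVI, iV, p_a, m_a, p_b, m_b):
    # u-value of trying detector a before detector b (closed form from the text)
    return (1.0 - p_a * math.exp(-g * (iVI - m_a))
            - (p_b - p_a * p_b) * math.exp(-g * (iVI - m_a - m_b))
            - (1 - p_a) * (1 - p_b) * math.exp(-g * (iV - m_a - m_b)))

random.seed(7)
for _ in range(500):
    iV = random.uniform(0.0, 3.0)
    iVI = iV + random.uniform(0.5, 4.0)
    p1, p2 = random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)
    m1, m2 = random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)
    first_1 = value_first(iVI, iV, p1, m1, p2, m2)
    first_2 = value_first(iVI, iV, p2, m2, p1, m1)
    lhs, rhs = u(m1) / p1, u(m2) / p2
    if abs(first_1 - first_2) > 1e-12 and abs(lhs - rhs) > 1e-12:
        assert (first_1 > first_2) == (lhs < rhs)
```

Note that the ordering criterion does not involve iV or iVI at all, which is exactly the myopic feature proved above.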
A1.3 Chapter 7 Proofs

A1.3.1 Section 7.3 Main Results
For convenience, we reproduce the recursion here:

[Decision-tree diagram: from V_c^t|Xn · w0, Accept gives V_{c-1}^{t+1} · CM(Xn) · w0; Reject gives V_c^{t+1} · w0; Seek info/Control leads, after indication In, to Accept with V_{c-1}^{t+1} · CM(Xn|In) · (1-f) · w0 or Reject with V_c^{t+1} · (1-f) · w0.]

The recursion can be rewritten as:
V_c^t|Xn = {[V_{c-1}^{t+1} · CM(Xn)] ∨ V_c^{t+1} ∨ (V_{c-1}^{t+1} · (1-f) · CM[CM(Xn|In) ∨ R_c^t])}   (1)

Proposition 7.3.1a: Optimal Information Gathering Policy
When offered a deal Xn, the decision maker should buy information if and only if
iVI_c^t(Xn) ≥ iV_c^t(Xn)/(1-f)

Proposition 7.3.1b: Optimal Allocation Policy
After information is received, the decision maker should accept the deal if and only if
CM(Xn|In) ≥ R_c^t
Otherwise, the deal is worth accepting without information if and only if
CM(Xn) ≥ R_c^t

Using the definition of the incremental value of information, the recursion simplifies. We seek information when:
V_c^{t+1} · iVI_c^t(Xn) · (1-f) ≥ {[V_{c-1}^{t+1} · CM(Xn)] ∨ V_c^{t+1}}
iVI_c^t(Xn) ≥ {(CM(Xn)/R_c^t) ∨ 1}/(1-f)
So, with iV_c^t(Xn) = (CM(Xn)/R_c^t) ∨ 1,
iVI_c^t(Xn) ≥ iV_c^t(Xn)/(1-f)
After receiving information, the deal is worth accepting if and only if:
V_{c-1}^{t+1} · CM(Xn|In) · (1-f) · w0 ≥ V_c^{t+1} · (1-f) · w0
V_{c-1}^{t+1} · CM(Xn|In) ≥ V_c^{t+1}
CM(Xn|In) ≥ R_c^t
Otherwise, we prefer to go with the deal without information if and only if:
V_{c-1}^{t+1} · CM(Xn) · w0 ≥ V_c^{t+1} · w0
V_{c-1}^{t+1} · CM(Xn) ≥ V_c^{t+1}
CM(Xn) ≥ R_c^t
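The multiplicative recursion and the properties proved next can be exercised numerically. A sketch under stated assumptions: CM is replaced by the expected multiplier (a risk-neutral stand-in), the detector is clairvoyant, and the gamble, fraction f, and horizon are invented.

```python
# Chapter 7 growth sketch: multiplicative recursion with certain multipliers.
# Boundary V_c^{T+1} = 1; information costs fraction f of wealth.
mults = [(0.7, 0.5), (1.8, 0.5)]      # (wealth multiplier, probability)
cm = sum(p * x for x, p in mults)     # stand-in for CM(Xn) = 1.25
f = 0.02
C, T = 3, 5

V = [[1.0] * (T + 2) for _ in range(C + 1)]
for t in range(T, -1, -1):
    for c in range(1, C + 1):
        R = V[c][t + 1] / V[c - 1][t + 1]          # threshold ratio R_c^t
        accept = V[c - 1][t + 1] * cm
        reject = V[c][t + 1]
        info = V[c - 1][t + 1] * (1 - f) * sum(q * max(y, R) for y, q in mults)
        V[c][t] = max(accept, reject, info)

Rs = lambda c, t: V[c][t + 1] / V[c - 1][t + 1]
# Prop 7.3.2: V_c^t non-decreasing in c and non-increasing in t.
assert all(V[c][t] >= V[c - 1][t] - 1e-9 for c in range(1, C + 1) for t in range(T + 1))
assert all(V[c][t] >= V[c][t + 1] - 1e-9 for c in range(1, C + 1) for t in range(T + 1))
# Prop 7.3.3: R_c^t non-increasing in c and in t.
assert all(Rs(c, t) >= Rs(c + 1, t) - 1e-9 for c in range(1, C) for t in range(T + 1))
assert all(Rs(c, t) >= Rs(c, t + 1) - 1e-9 for c in range(1, C + 1) for t in range(T))
```

The only structural change from Chapter 6 is that sums and differences become products and ratios, which is why the threshold is now a multiplier rather than an increment.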
Proposition 7.3.2: Characterizing the Deal Flow Certain Multiplier (V_c^t)
1. V_c^t is non-decreasing in c for all t
2. V_c^t is non-increasing in t for all c

Proposition 7.3.2 Statement 1
We prove this by induction. Take t = T; (1) becomes
V_c^T|Xn = {[V_{c-1}^{T+1} · CM(Xn)] ∨ V_c^{T+1} ∨ (V_{c-1}^{T+1} · (1-f) · CM[CM(Xn|In) ∨ R_c^T])}
Since V_c^{T+1} = 1 for all c, this equation reduces to
V_c^T|Xn = {CM(Xn) ∨ 1 ∨ CM[{CM(Xn|In) ∨ R_c^T}] · (1-f)}   for c ≥ 1
V_0^T|Xn = 1   for c = 0
Hence it is non-decreasing in c at t = T.
Now, assume true for t = k, or
V_c^k ≥ V_{c-1}^k   for all c ≥ 1   (2)
Now, prove that it is true for t = k-1, or
V_c^{k-1} ≥ V_{c-1}^{k-1}   for all c ≥ 1   (3)
where:
V_c^{k-1}|Xn = {[V_{c-1}^k · CM(Xn)] ∨ V_c^k ∨ (V_{c-1}^k · (1-f) · CM[CM(Xn|In) ∨ R_c^{k-1}])}
and
V_{c-1}^{k-1}|Xn = {[V_{c-2}^k · CM(Xn)] ∨ V_{c-1}^k ∨ (V_{c-2}^k · (1-f) · CM[CM(Xn|In) ∨ R_{c-1}^{k-1}])}
But since V_c^k is non-decreasing in c, (2), we have
V_{c-1}^{k-1}|Xn ≤ {[V_{c-1}^k · CM(Xn)] ∨ V_c^k ∨ (V_{c-1}^k · (1-f) · CM[CM(Xn|In) ∨ R_c^{k-1}])} = V_c^{k-1}|Xn
Hence,
V_{c-1}^{k-1}|Xn ≤ V_c^{k-1}|Xn   for all Xn
Meaning (3) is true, namely
V_{c-1}^{k-1} ≤ V_c^{k-1}
And, finally, by induction
V_{c-1}^t ≤ V_c^t   for all c ≥ 1 and all t

Proposition 7.3.2 Statement 2
This statement follows directly from the recursion. We have
V_c^t|Xn = {V_{c-1}^{t+1} · CM(Xn) ∨ V_c^{t+1} ∨ V_{c-1}^{t+1} · (1-f) · CM({CM(Xn|In) ∨ R_c^t})}
Hence
V_c^t|Xn ≥ V_c^{t+1}   for all deals Xn
Leading to
V_c^t ≥ V_c^{t+1}   for all c
So, V_c^t is non-increasing in t for all c.
Proposition 7.3.3: Characterizing the Threshold (R_c^t)
1. R_c^t is non-increasing in c for all t
2. R_c^t is non-increasing in t for all c

Proposition 7.3.3 Statement 1
We prove this by induction.
Take the case when t = T; the statement becomes
R_c^T ≥ R_{c+1}^T
This is true since
R_c^T = V_c^{T+1}/V_{c-1}^{T+1} = 1   for all c ≥ 1
Hence R_c^T is non-increasing in c at t = T.
Now, assume this is true for t = k; let us prove it for t = k-1.
So we know that
R_c^k ≥ R_{c+1}^k   for all c ≥ 1   (1)
⇒ V_c^{k+1}/V_{c-1}^{k+1} ≥ V_{c+1}^{k+1}/V_c^{k+1}
(V_c^{k+1})^2 ≥ V_{c+1}^{k+1} · V_{c-1}^{k+1}   for all c at t = k+1
For the statement to be true at t = k-1, the following must be true:
R_c^{k-1} ≥ R_{c+1}^{k-1}   for all c   (2)
(V_c^k)^2 ≥ V_{c+1}^k · V_{c-1}^k   for all c at t = k   (3)
For each Xn, we can rewrite V_c^k as:
V_c^k|Xn = {CM(Xn) · V_{c-1}^{k+1} ∨ V_c^{k+1} ∨ CM[{CM(Xn|In) · V_{c-1}^{k+1} ∨ V_c^{k+1}} · (1-f)]}
Now, define Q_c^k as
Q_c^k(Xn) = CM[{CM(Xn|In) ∨ R_c^k} · (1-f)]
So, we have:
V_c^k|Xn = V_{c-1}^{k+1} · {CM(Xn) ∨ R_c^k ∨ Q_c^k(Xn)}
Now, rewrite (3) for each Xn as:
{CM(Xn) ∨ R_c^k ∨ Q_c^k(Xn)}^2 · (V_{c-1}^{k+1})^2 ≥ {CM(Xn) ∨ R_{c+1}^k ∨ Q_{c+1}^k(Xn)} · V_c^{k+1} · {CM(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)} · V_{c-2}^{k+1}   (4)
which is equivalent to
(R_c^k/R_{c-1}^k) · [{CM(Xn) ∨ R_{c+1}^k ∨ Q_{c+1}^k(Xn)} · {CM(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)}] / {CM(Xn) ∨ R_c^k ∨ Q_c^k(Xn)}^2 ≤ 1   (5)
Note that from (1) we have
R_{c-1}^k ≥ R_c^k ≥ R_{c+1}^k   (6)
This directly leads to:
Q_{c-1}^k(Xn) ≥ Q_c^k(Xn) ≥ Q_{c+1}^k(Xn)   (7)
From (6), we know that if
CM(Xn) ≥ R_c^k
then we cannot reject the deal for any higher c. The same goes for buying information: if
CM(Xn) ≥ Q_c^k(Xn)
then we cannot seek information for any higher c.
We first prove the following relation:
Q_{c-1}^k(Xn)/R_{c-1}^k ≤ Q_c^k(Xn)/R_c^k   (8)
LHS = CM[{CM(Xn|In) ∨ R_{c-1}^k} · (1-f)]/R_{c-1}^k = CM[{(CM(Xn|In)/R_{c-1}^k) ∨ 1} · (1-f)]
    ≤ CM[{(CM(Xn|In)/R_c^k) ∨ 1} · (1-f)] = CM[{CM(Xn|In) ∨ R_c^k} · (1-f)]/R_c^k = RHS
Note that (8) means that if rejecting a deal was better than getting information on that deal for a given c, then that will be the case for any lower c's.
In the following, we drop (Xn) from the Q term for clarity.
After we reject cases that contradict (6), (7), or (8), we consider the following 10 cases, where each column lists the maximum term at the indicated state:

Case | (c+1,t) | (c,t)  | (c-1,t)
  1  |    R    |   R    |    R
  2  |    Q    |   R    |    R
  3  |    Q    |   Q    |    R
  4  |    Q    |   Q    |    Q
  5  | CM(Xn)  |   R    |    R
  6  | CM(Xn)  |   Q    |    R
  7  | CM(Xn)  |   Q    |    Q
  8  | CM(Xn)  | CM(Xn) |    R
  9  | CM(Xn)  | CM(Xn) |    Q
 10  | CM(Xn)  | CM(Xn) | CM(Xn)
CASE 1:
(5) = (R_c^k/R_{c-1}^k) · (R_{c+1}^k · R_{c-1}^k)/(R_c^k)^2 = R_{c+1}^k/R_c^k
But R_{c+1}^k ≤ R_c^k by the induction assumption, (1), ⇒ (5) ≤ 1

CASE 2:
(5) = (R_c^k/R_{c-1}^k) · (Q_{c+1}^k · R_{c-1}^k)/(R_c^k)^2 = Q_{c+1}^k/R_c^k
By (7), (5) ≤ Q_c^k/R_c^k; by the case assumption, Q_c^k ≤ R_c^k, so (5) ≤ 1

CASE 3:
(5) = (R_c^k/R_{c-1}^k) · (Q_{c+1}^k · R_{c-1}^k)/(Q_c^k)^2 = (R_c^k · Q_{c+1}^k)/(Q_c^k)^2
From (7), Q_{c+1}^k ≤ Q_c^k, and from the case assumption, R_c^k ≤ Q_c^k ⇒ (5) ≤ 1

CASE 4:
(5) = (R_c^k/R_{c-1}^k) · (Q_{c+1}^k · Q_{c-1}^k)/(Q_c^k)^2
By (8), we have Q_{c-1}^k/R_{c-1}^k ≤ Q_c^k/R_c^k, so
(5) ≤ (R_c^k/R_c^k) · (Q_{c+1}^k · Q_c^k)/(Q_c^k)^2 = Q_{c+1}^k/Q_c^k
By (7), Q_{c+1}^k ≤ Q_c^k ⇒ (5) ≤ 1

CASE 5:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · R_{c-1}^k)/(R_c^k)^2 = CM(Xn)/R_c^k
By the case assumption, CM(Xn) ≤ R_c^k ⇒ (5) ≤ 1

CASE 6:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · R_{c-1}^k)/(Q_c^k)^2 = (R_c^k · CM(Xn))/(Q_c^k)^2
By the case assumption, CM(Xn), R_c^k ≤ Q_c^k ⇒ (5) ≤ 1

CASE 7:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · Q_{c-1}^k)/(Q_c^k)^2
By (8), we have Q_{c-1}^k/R_{c-1}^k ≤ Q_c^k/R_c^k, so
(5) ≤ (R_c^k/R_c^k) · (CM(Xn) · Q_c^k)/(Q_c^k)^2 = CM(Xn)/Q_c^k
By the case assumption, CM(Xn) ≤ Q_c^k ⇒ (5) ≤ 1

CASE 8:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · R_{c-1}^k)/(CM(Xn))^2 = R_c^k/CM(Xn)
By the case assumption, R_c^k ≤ CM(Xn) ⇒ (5) ≤ 1

CASE 9:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · Q_{c-1}^k)/(CM(Xn))^2
By (8), we have Q_{c-1}^k/R_{c-1}^k ≤ Q_c^k/R_c^k, so
(5) ≤ (R_c^k · CM(Xn) · Q_c^k)/((CM(Xn))^2 · R_c^k) = Q_c^k/CM(Xn)
By the case assumption, Q_c^k ≤ CM(Xn) ⇒ (5) ≤ 1

CASE 10:
(5) = (R_c^k/R_{c-1}^k) · (CM(Xn) · CM(Xn))/(CM(Xn))^2 = R_c^k/R_{c-1}^k
By the induction assumption, (1), R_c^k ≤ R_{c-1}^k ⇒ (5) ≤ 1

So, since (5) is true for every Xn, we know that (3) is true and:
R_c^{k-1} ≥ R_{c+1}^{k-1}   for all c ≥ 1
Finally, by induction, we know that this is true for all t, or
R_c^t ≥ R_{c+1}^t   for all c ≥ 1 at any t
Proposition 7.3.3 Statement 2
We prove this by induction.
Take the case when t = T; the statement becomes
R_c^{T-1} ≥ R_c^T
This is true since
R_c^T = V_c^{T+1}/V_{c-1}^{T+1} = 1   for all c ≥ 1
while, since V is non-decreasing in c, we have:
R_c^{T-1} = V_c^T/V_{c-1}^T ≥ 1
Hence R_c^t is non-increasing in t at t = T-1.
Assume this is true for t = k and show the statement is true for t = k-1.
So we know:
R_c^{k-1} ≥ R_c^k   (1)
meaning:
V_c^k/V_{c-1}^k ≥ V_c^{k+1}/V_{c-1}^{k+1}   (2)
And we want to show
R_c^{k-2} ≥ R_c^{k-1}   (3)
Alternatively,
V_c^{k-1}/V_{c-1}^{k-1} ≥ V_c^k/V_{c-1}^k   (4)
As with Proposition 7.3.3 Statement 1, for each Xn we can rewrite V_c^t as:
V_c^t|Xn = {CM(Xn) · V_{c-1}^{t+1} ∨ V_c^{t+1} ∨ CM({CM(Xn|In) · V_{c-1}^{t+1} ∨ V_c^{t+1}} · (1-f))}
Recall that Q_c^t is defined as
Q_c^t(Xn) = CM({CM(Xn|In) ∨ R_c^t} · (1-f))
So, we have:
V_c^t|Xn = V_{c-1}^{t+1} · {CM(Xn) ∨ R_c^t ∨ Q_c^t(Xn)}
Now, rewrite (4) for each Xn as:
[V_{c-1}^k · {CM(Xn) ∨ R_c^{k-1} ∨ Q_c^{k-1}(Xn)}] / [V_{c-2}^k · {CM(Xn) ∨ R_{c-1}^{k-1} ∨ Q_{c-1}^{k-1}(Xn)}] ≥ [V_{c-1}^{k+1} · {CM(Xn) ∨ R_c^k ∨ Q_c^k(Xn)}] / [V_{c-2}^{k+1} · {CM(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)}]
which is equivalent to
(R_{c-1}^k/R_{c-1}^{k-1}) · [{CM(Xn) ∨ R_c^k ∨ Q_c^k(Xn)} · {CM(Xn) ∨ R_{c-1}^{k-1} ∨ Q_{c-1}^{k-1}(Xn)}] / [{CM(Xn) ∨ R_{c-1}^k ∨ Q_{c-1}^k(Xn)} · {CM(Xn) ∨ R_c^{k-1} ∨ Q_c^{k-1}(Xn)}] ≤ 1   (5)
Note that from Proposition 7.3.3 Statement 1 we have
R_c^t ≥ R_{c+1}^t   for all t and c ≥ 1   (6)
leading to:
Q_c^t(Xn) ≥ Q_{c+1}^t(Xn)   for all t and c ≥ 1   (7)
Also, (1) gives us
Q_c^{k-1}(Xn) ≥ Q_c^k(Xn)   for all c ≥ 1   (8)
Along the same lines as in Proposition 7.3.3 Statement 1, we prove:
Q_c^{k-1}(Xn)/R_c^{k-1} ≤ Q_c^k(Xn)/R_c^k   (9)
LHS = CM({CM(Xn|In) ∨ R_c^{k-1}} · (1-f))/R_c^{k-1} = CM({(CM(Xn|In)/R_c^{k-1}) ∨ 1} · (1-f))
By the induction assumption, R_c^{k-1} ≥ R_c^k, so
LHS ≤ CM({(CM(Xn|In)/R_c^k) ∨ 1} · (1-f)) = CM({CM(Xn|In) ∨ R_c^k} · (1-f))/R_c^k = RHS
Recall, from the proof of Proposition 7.3.3 Statement 1, that:
Q_{c-1}^t(Xn)/R_{c-1}^t ≤ Q_c^t(Xn)/R_c^t   (10)
In the following, we drop (Xn) from the Q term for clarity.
In the same manner as in the proof of Statement 1, we reject all the cases that contradict statements (6), (7), and (8) and end up with the following 20 cases, where each column lists the maximum term at the indicated state:

Case | (c,k-1) | (c-1,k-1) | (c,k)  | (c-1,k)
  1  |    R    |     R     |   R    |    R
  2  |    R    |     R     |   Q    |    R
  3  |    R    |     R     |   Q    |    Q
  4  |    R    |     R     | CM(Xn) |    R
  5  |    R    |     R     | CM(Xn) |    Q
  6  |    R    |     R     | CM(Xn) | CM(Xn)
  7  |    Q    |     R     |   Q    |    R
  8  |    Q    |     R     |   Q    |    Q
  9  |    Q    |     R     | CM(Xn) |    R
 10  |    Q    |     R     | CM(Xn) |    Q
 11  |    Q    |     R     | CM(Xn) | CM(Xn)
 12  |    Q    |     Q     |   Q    |    Q
 13  |    Q    |     Q     | CM(Xn) |    Q
 14  |    Q    |     Q     | CM(Xn) | CM(Xn)
 15  | CM(Xn)  |     R     | CM(Xn) |    R
 16  | CM(Xn)  |     R     | CM(Xn) |    Q
 17  | CM(Xn)  |     R     | CM(Xn) | CM(Xn)
 18  | CM(Xn)  |     Q     | CM(Xn) |    Q
 19  | CM(Xn)  |     Q     | CM(Xn) | CM(Xn)
 20  | CM(Xn)  |   CM(Xn)  | CM(Xn) | CM(Xn)
Now, let us consider the cases and evaluate relation (5):
CASE 1:
(5) = π πβ1π
π πβ1πβ1 β π ππ
π πβ1π β π πβ1πβ1
π ππβ1=
π ππ
π ππβ1
ππ’π‘,ππππ (1),π ππ β€ π ππβ1 β (5) β€ 1
CASE 2:
(5) = π πβ1π
π πβ1πβ1 β πππ
π πβ1π β π πβ1πβ1
π ππβ1=
πππ
π ππβ1
ππππ π‘βπ πππ π π€π ππππ€:π ππβ1 β₯ πππβ1
ππ’π‘,ππππ (8),π€π βππ£π πππβ1 β₯ πππ
π‘βπ’π , (5) β€ 1
CASE 3:
(5) = π πβ1π
π πβ1πβ1 β πππ
ππβ1π β π πβ1πβ1
π ππβ1=π πβ1π
π ππβ1 βπππ
ππβ1π
πππ‘π π‘βππ‘, as with case 2, π ππβ1 β₯ πππ
πππ ππ¦ π‘βπ πππ π ππ π π’πππ‘πππ, π πβ1π β€ ππβ1π β (5) β€ 1
CASE 4:
(5) = π πβ1π
π πβ1πβ1 β πΆπ(ππ)π πβ1π β
π πβ1πβ1
π ππβ1=πΆπ(ππ)π ππβ1
ππ¦ π‘βπ ππππ’ππ‘πππ ππ π π’πππ‘πππ (1), (5) =πΆπΈ(ππ)π ππβ1
β€πΆπΈ(ππ)π ππ
ππ¦ π‘βπ πππ π ππ π π’πππ‘πππ, πΆπ(ππ) β€ π ππ β (5) β€ 1
CASE 5: (5) = π πβ1π β π πβ1πβ1 + πΆπΈ(ππ) β ππβ1π β π ππβ1 + π πβ1πβ1 = π ππ β π ππβ1
ππ¦ π‘βπ ππππ’ππ‘πππ ππ π π’πππ‘πππ, (1), π ππβ1 β₯ π ππ β (5) β€ 1
CASE 6:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{R_{c-1}^{k-1}}{R_c^{k-1}} = \frac{CE(X_n)}{R_c^{k-1}} \cdot \frac{R_{c-1}^k}{CE(X_n)}$
But by the case assumptions, we have
$R_{c-1}^k \le CE(X_n)$ and $CE(X_n) \le R_c^{k-1} \Rightarrow (5) \le 1$

CASE 7:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k}{R_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{Q_c^{k-1}} = \frac{Q_c^k}{Q_c^{k-1}}$
By (8), $Q_c^k \le Q_c^{k-1} \Rightarrow (5) \le 1$

CASE 8:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k}{Q_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{Q_c^k}{Q_c^{k-1}}$
By the case assumption, $R_{c-1}^k \le Q_{c-1}^k$,
and by (8), $Q_c^k \le Q_c^{k-1} \Rightarrow (5) \le 1$

CASE 9:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{R_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{Q_c^{k-1}} = \frac{CE(X_n)}{Q_c^{k-1}}$
By the case assumption, $Q_c^{k-1} \ge CE(X_n) \Rightarrow (5) \le 1$
CASE 10:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{Q_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_{c-1}^k} \cdot \frac{CE(X_n)}{Q_c^{k-1}}$
By the case assumptions, we have:
$R_{c-1}^k \le Q_{c-1}^k$ and $CE(X_n) \le Q_c^{k-1} \Rightarrow (5) \le 1$

CASE 11:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{R_{c-1}^{k-1}}{Q_c^{k-1}} = \frac{R_{c-1}^k}{Q_c^{k-1}}$
In a similar manner to CASE 6, we multiply and divide by $CE(X_n)$:
$(5) = \big\{R_{c-1}^k/CE(X_n)\big\} \cdot \big\{CE(X_n)/Q_c^{k-1}\big\}$
By the case assumptions we have:
$R_{c-1}^k \le CE(X_n)$ and $CE(X_n) \le Q_c^{k-1} \Rightarrow (5) \le 1$

CASE 12:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_c^k}{Q_{c-1}^k} \cdot \frac{Q_{c-1}^{k-1}}{Q_c^{k-1}}$
By (8), $Q_c^k \le Q_c^{k-1}$
By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \le Q_{c-1}^k/R_{c-1}^k \Rightarrow (5) \le 1$

CASE 13:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{Q_{c-1}^k} \cdot \frac{Q_{c-1}^{k-1}}{Q_c^{k-1}}$
By the case assumption we have $Q_c^{k-1} \ge CE(X_n)$,
and by (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \le Q_{c-1}^k/R_{c-1}^k \Rightarrow (5) \le 1$
CASE 14:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{Q_{c-1}^{k-1}}{Q_c^{k-1}}$
By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \le Q_{c-1}^k/R_{c-1}^k$, so:
$(5) = \Big\{\frac{Q_{c-1}^{k-1}}{R_{c-1}^{k-1}}\Big\} \Big/ \Big\{\frac{Q_c^{k-1}}{R_{c-1}^k}\Big\} \le \Big\{\frac{Q_{c-1}^k}{R_{c-1}^k}\Big\} \Big/ \Big\{\frac{Q_c^{k-1}}{R_{c-1}^k}\Big\} = \frac{Q_{c-1}^k}{Q_c^{k-1}}$
and now divide and multiply by $CE(X_n)$:
$(5) \le \frac{Q_{c-1}^k}{CE(X_n)} \cdot \frac{CE(X_n)}{Q_c^{k-1}}$
By the case assumptions,
$Q_{c-1}^k \le CE(X_n)$ and $CE(X_n) \le Q_c^{k-1} \Rightarrow (5) \le 1$

CASE 15:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{R_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{CE(X_n)} = 1$

CASE 16:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{Q_{c-1}^k} \cdot \frac{R_{c-1}^{k-1}}{CE(X_n)} = \frac{R_{c-1}^k}{Q_{c-1}^k}$
By the case assumption, $R_{c-1}^k \le Q_{c-1}^k \Rightarrow (5) \le 1$

CASE 17:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{R_{c-1}^{k-1}}{CE(X_n)} = \frac{R_{c-1}^k}{CE(X_n)}$
And by the case assumption, $R_{c-1}^k \le CE(X_n) \Rightarrow (5) \le 1$
CASE 18:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{Q_{c-1}^k} \cdot \frac{Q_{c-1}^{k-1}}{CE(X_n)} = \frac{Q_{c-1}^{k-1}}{R_{c-1}^{k-1}} \cdot \frac{R_{c-1}^k}{Q_{c-1}^k}$
By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \le Q_{c-1}^k/R_{c-1}^k \Rightarrow (5) \le 1$

CASE 19:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{Q_{c-1}^{k-1}}{CE(X_n)} = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{Q_{c-1}^{k-1}}{CE(X_n)}$
By (9), $Q_{c-1}^{k-1}/R_{c-1}^{k-1} \le Q_{c-1}^k/R_{c-1}^k$
So, $(5) \le \frac{R_{c-1}^k}{CE(X_n)} \cdot \frac{Q_{c-1}^k}{R_{c-1}^k} = \frac{Q_{c-1}^k}{CE(X_n)}$
By the case assumption, $Q_{c-1}^k \le CE(X_n) \Rightarrow (5) \le 1$

CASE 20:
$(5) = \frac{R_{c-1}^k}{R_{c-1}^{k-1}} \cdot \frac{CE(X_n)}{CE(X_n)} \cdot \frac{CE(X_n)}{CE(X_n)} = \frac{R_{c-1}^k}{R_{c-1}^{k-1}}$
By the induction assumption, (1), $R_{c-1}^{k-1} \ge R_{c-1}^k \Rightarrow (5) \le 1$

So, $(5) \le 1$ is true for all $X_n$ and hence must be true for the certain equivalent over $X_n$. So, (3) is true, or $R_c^{k-1} \ge R_c^k$.

Finally, by induction, we know that $R_c^{t-1} \ge R_c^t$ for all $c$ at any $t$.
Corollary 7.3.1: Characterizing IBF of Deals ($iM_c^t(X_n)$ and $iMI_c^t(X_n)$)

The IBF multiples of a deal $X_n$ with and without information exhibit the following two properties:

I. $iM_c^t(X_n)$ and $iMI_c^t(X_n)$ are non-decreasing in c for all t.
II. $iM_c^t(X_n)$ and $iMI_c^t(X_n)$ are non-decreasing in t for all c.

Recall the definition of $iM_c^t(X_n)$:
$iM_c^t(X_n) = \big\{CE(X_n)/R_c^t \vee 1\big\}$
From the definition we see that $iM_c^t(X_n)$ moves in the opposite direction of $R_c^t$, and thus it is
non-decreasing in c for all t and non-decreasing in t for all c.

Recall that $iMI_c^t(X_n) = CE\big(iM_c^t(X_n|I_n)\big)$. Note that the indication is not a function of the
state (c,t), and hence $iMI_c^t$ follows $iM_c^t$ and is also non-decreasing in c for all t and non-decreasing in t for all c.
Proposition 7.3.4: Characterizing the IBF of Information ($iBMI_c^t(X_n)$)

The IBF of information exhibits the following properties:

I. For a given c, the IBF of information is increasing in t and reaches a maximum when $R_c^t = CE(X_n)$; then it decreases in t until it converges to $CE(X_n^*)/CE(X_n)$.
II. For a given t, the IBF of information is increasing in c and reaches a maximum when $R_c^t = CE(X_n)$; then it decreases in c until it converges to $CE(X_n^*)/CE(X_n)$.

Recall that the IBF of information is defined as:
$iBMI_c^t(X_n) = iMI_c^t(X_n)/iM_c^t(X_n)$
$iBMI_c^t(X_n) = CE\big(CE(X_n|I_n)/R_c^t \vee 1\big)\big/\big(CE(X_n)/R_c^t \vee 1\big)$ (1)

We take two cases, namely, as $CE(X_n)$ relates to $R_c^t$.

CASE 1: $CE(X_n) \le R_c^t$
(1) reduces to $iBMI_c^t(X_n) = CE\big(CE(X_n|I_n)/R_c^t \vee 1\big)$
So $iBMI_c^t(X_n)$ moves in the opposite direction of $R_c^t$.
Thus, $iBMI_c^t(X_n)$ is increasing in t and c when $CE(X_n) \le R_c^t$.

CASE 2: $CE(X_n) \ge R_c^t$
(1) reduces to $iBMI_c^t(X_n) = CE\big(CE(X_n|I_n)/R_c^t \vee 1\big)\big/\big(CE(X_n)/R_c^t\big)$
$iBMI_c^t(X_n) = CE\big(CE(X_n|I_n) \vee R_c^t\big)/CE(X_n)$
So $iBMI_c^t(X_n)$ moves in the same direction as $R_c^t$.
Thus, $iBMI_c^t(X_n)$ is decreasing in t and c when $CE(X_n) \ge R_c^t$.

Now, since $iBMI_c^t(X_n)$ increases in case 1 and then decreases in case 2, we can see that it
reaches a maximum when $R_c^t = CE(X_n)$.

Finally, we study the convergence of the term. At $t = T$, (1) reduces to:
$iBMI_c^T(X_n) = CE\big(CE(X_n|I_n) \vee 1\big)/CE(X_n)$
But the value of the deal with free information outside the funnel, $CE(X_n^*)$, equals:
$CE(X_n^*) = CE\big(CE(X_n|I_n) \vee 1\big)$
Thus, $iBMI_c^T = CE(X_n^*)/CE(X_n)$.
The same is true when c = C.
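Since the proof argues purely through how $R_c^t$ moves, the unimodal shape can also be checked numerically. The sketch below is ours, not the dissertation's: it assumes a risk-neutral decision maker (so certain equivalents reduce to expectations) and invents the deal payoffs and indication probabilities.

```python
# Numeric sketch of Proposition 7.3.4 with invented numbers and a
# risk-neutral decision maker (CE = expectation).
probs = [0.3, 0.7]          # P(indication i), assumed
ce_given_i = [2.0, 0.5]     # CE(Xn | I_i) under each indication, assumed
ce_x = sum(p * c for p, c in zip(probs, ce_given_i))  # CE(Xn)

def iM(r):       # incremental benefit multiple of the deal at ratio R = r
    return max(ce_x / r, 1.0)

def iMI(r):      # multiple with information: CE over the indications
    return sum(p * max(c / r, 1.0) for p, c in zip(probs, ce_given_i))

def iBMI(r):     # IBF of information
    return iMI(r) / iM(r)

rs = [0.2 + 0.05 * k for k in range(60)]   # sweep R from 0.2 upward
vals = [iBMI(r) for r in rs]
r_star = rs[vals.index(max(vals))]         # maximum sits near R = CE(Xn)
print(round(r_star, 2), round(ce_x, 2))
```

On this grid the maximizer coincides with CE(Xn), matching the proposition's unimodality claim.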
Proposition 7.3.5: Characterizing the Optimal Policy

The optimal policy for a given deal $X_n$ is characterized as follows:

I. For a given c, the optimal policy can only change over time from rejecting, to buying information, and finally to accepting.
II. For a given t, the optimal policy can only change over capacity from rejecting, to buying information, and finally to accepting.

Proposition 7.3.5 Statement 1

Recall:
$V_c^t|X_n = W_{c-1}^{t+1}\cdot\{CE(X_n) \vee R_c^t \vee Q_c^t(X_n)\}$
where $Q_c^t(X_n) = CE\big(\{CE(X_n|I_n) \vee R_c^t\}\cdot(1-f)\big)$

Proposition 7.3.3 states:
$R_{c-1}^t \ge R_c^t \ge R_{c+1}^t$
meaning $Q_{c-1}^t(X_n) \ge Q_c^t(X_n) \ge Q_{c+1}^t(X_n)$

So if $CE(X_n) \ge R_c^t$, then we cannot reject the deal for any higher c. The same goes for buying information: if $CE(X_n) \ge Q_c^t(X_n)$, then buying information cannot be optimal for any higher c.

We also showed:
$Q_{c-1}^t(X_n)/R_{c-1}^t \le Q_c^t(X_n)/R_c^t$
This means that if buying information is better than rejecting for a given c, then it remains so for any higher c.

Proposition 7.3.5 Statement 2

The proof follows along the same lines as statement 1.
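The single-switch structure can be illustrated with toy numbers. The fragment below is not the dissertation's code; the $R_c$ and $Q_c$ sequences are invented but respect the monotonicity facts used in the proof ($R_c$ non-increasing in c, $Q_c/R_c$ non-decreasing in c).

```python
# Toy illustration of Proposition 7.3.5: as c grows, the maximizing term can
# only move from R (reject) to Q (buy information) to CE (accept).
# All numbers below are made up for illustration.
ce = 1.0                                    # CE(Xn), fixed in c
R = [1.6, 1.3, 1.1, 0.9, 0.7, 0.6]          # R_c, non-increasing in c
Q = [1.4, 1.25, 1.15, 1.05, 0.95, 0.9]      # Q_c(Xn); Q/R non-decreasing

def policy(c):
    best = max(ce, R[c], Q[c])
    if best == R[c]:
        return "reject"
    if best == Q[c]:
        return "info"
    return "accept"

labels = [policy(c) for c in range(len(R))]
print(labels)  # switches reject -> info -> accept exactly once each
```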
Proposition 7.3.6: Identifying the Optimal Detector

Given the setup above, detector 1 will be optimal when:
$\frac{VI_1}{VI_2} > \frac{1-f_2}{1-f_1}$
Otherwise, detector 2 will be optimal.

Let $I_n^1$, $I_n^2$ be the indications associated with detectors 1 and 2, respectively. The alternative of buying information is worth:
$W_{c-1}^{t+1}\cdot(1-f_j)\cdot CE\big(CE(X_n|I_n^j) \vee R_c^t\big)$
where j is the number of the detector. For detector 1 to be preferable over detector 2, the following needs to be true:
$W_{c-1}^{t+1}\cdot(1-f_1)\cdot CE\big(CE(X_n|I_n^1) \vee R_c^t\big) > W_{c-1}^{t+1}\cdot(1-f_2)\cdot CE\big(CE(X_n|I_n^2) \vee R_c^t\big)$
$CE\big(CE(X_n|I_n^1) \vee R_c^t\big)\cdot(1-f_1) > CE\big(CE(X_n|I_n^2) \vee R_c^t\big)\cdot(1-f_2)$
$CE\big(CE(X_n|I_n^1)/R_c^t \vee 1\big)\cdot R_c^t\cdot(1-f_1) > CE\big(CE(X_n|I_n^2)/R_c^t \vee 1\big)\cdot R_c^t\cdot(1-f_2)$
$CE\big(iM_c^t(X_n|I_n^1)\big)\cdot(1-f_1) > CE\big(iM_c^t(X_n|I_n^2)\big)\cdot(1-f_2)$
Let $VI_j(X_n) = CE\big(iM_c^t(X_n|I_n^j)\big)$. Then:
$VI_1(X_n)\cdot(1-f_1) > VI_2(X_n)\cdot(1-f_2)$
$\frac{VI_1}{VI_2} > \frac{1-f_2}{1-f_1}$
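The comparison reduces to a one-line check. Below is a hedged numeric example with assumed $VI_j$ and $f_j$ values; it also verifies that the ratio test agrees with comparing the net multiples directly.

```python
# Illustrative check of Proposition 7.3.6 with invented detector numbers.
VI = {1: 1.30, 2: 1.20}   # VI_j = CE(iM(Xn | I_j)), assumed values
f = {1: 0.10, 2: 0.04}    # detector cost fractions, assumed

use_detector_1 = VI[1] / VI[2] > (1 - f[2]) / (1 - f[1])
# Equivalent to comparing the cost-adjusted multiples directly:
assert use_detector_1 == (VI[1] * (1 - f[1]) > VI[2] * (1 - f[2]))
print("detector 1" if use_detector_1 else "detector 2")
```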
A1.3.2 Section 7.4 The Long-Run Problem

[Figure: decision tree for the long-run problem, with root value $V_c|X_n\cdot W_0$. Accept is worth $\delta V_{c-1}\cdot E(X_n)\cdot W_0$; Reject is worth $\delta V_c\cdot W_0$; Seek info reveals the indication $I_i$, after which Accept is worth $\delta V_{c-1}\cdot E(X_n|I_n)\cdot W_0\cdot(1-f)$ and Reject is worth $\delta V_c\cdot W_0\cdot(1-f)$.]
Proposition 7.4.1: Characterizing the Long-Run Problem

I. When offered the deal n, the decision maker should buy information if and only if $iMI_c(X_n) \ge iM_c(X_n)/(1-f)$. After information is received, the decision maker should buy the deal if and only if $CE(X_n|I_n) \ge R_c$. Otherwise, the deal is worth buying without information if and only if $CE(X_n) \ge R_c$.
II. $V_c$ is non-decreasing in c.
III. $R_c$ is non-increasing in c.
IV. $iM_c(X_n)$ and $iMI_c(X_n)$ are non-decreasing in c.
V. $iBMI_c(X_n)$ is non-decreasing in c and reaches a maximum when $R_c = CE(X_n)$; then it decreases to $CE(X_n^*)/CE(X_n)$, where $X_n^*$ is the deal with free information.
VI. The optimal policy can only change over c from rejecting to buying information, and from buying information to accepting.

Proposition 7.4.1 Statement 1

The recursion can be rewritten as:
$V_c|X_n = \big\{[\delta V_{c-1}\cdot CE(X_n)] \vee \delta V_c \vee \big[\delta V_{c-1}\cdot(1-f)\cdot CE[CE(X_n|I_n) \vee R_c]\big]\big\}$ (1)

Using the definition of the incremental value of the deal with information, the recursion simplifies to:

[Figure: reduced decision tree with root $V_c|X_n$. Accept is worth $\delta V_{c-1}\cdot CE(X_n)$; Reject is worth $\delta V_c$; Seek info is worth $\delta V_c\cdot iMI_c(X_n)\cdot(1-f)$.]

We seek information when:
$\delta V_c\cdot iMI_c(X_n)\cdot(1-f) \ge \big\{[\delta V_{c-1}\cdot CE(X_n)] \vee \delta V_c\big\}$
$iMI_c(X_n) \ge \big\{\{CE(X_n)/R_c\} \vee 1\big\}\big/(1-f)$
So,
$iMI_c(X_n) \ge iM_c(X_n)/(1-f)$
Otherwise, we prefer to go with the deal without information. We buy the deal if:
$\delta V_{c-1}\cdot CE(X_n) \ge \delta V_c$
or $CE(X_n) \ge R_c$.
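The statement-1 policy can be sketched in a few lines. The fragment below is illustrative only: it assumes a risk-neutral decision maker (so certain equivalents reduce to expectations) and invents the deal, indication, and ratio values.

```python
# Sketch of the long-run policy of Proposition 7.4.1 statement 1, with
# invented numbers and risk neutrality (CE = expectation).
f = 0.05                      # information cost fraction, assumed
R_c = 1.1                     # R_c = V_c / V_{c-1}, assumed
ce_x = 1.0                    # CE(Xn), assumed
ce_given_i = {"good": 1.8, "bad": 0.4}   # CE(Xn | In), assumed
p_i = {"good": 0.5, "bad": 0.5}          # indication probabilities

iM = max(ce_x / R_c, 1.0)
iMI = sum(p_i[i] * max(ce_given_i[i] / R_c, 1.0) for i in p_i)

if iMI >= iM / (1 - f):
    action = "buy information"   # then accept iff CE(Xn|In) >= R_c
elif ce_x >= R_c:
    action = "accept"
else:
    action = "reject"
print(action)
```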
Proposition 7.4.1 Statements 2 and 3

We prove the two properties in two steps. First, we prove that they hold in the finite horizon case when we introduce discounting. Then, we use successive approximations to show that the infinite horizon value converges to that of the finite horizon as we push the deadline T to the limit.

Step 1-1: Characterizing M

We prove this by induction. Take t = T; (1) becomes:
$M_c^T|X_n = \big\{[\delta M_{c-1}^{T+1}\cdot E(X_n)] \vee \delta M_c^{T+1} \vee \big[\delta M_{c-1}^{T+1}\cdot(1-f)\cdot E[E(X_n|I_n) \vee R_c^T]\big]\big\}$
Since $M_c^{T+1} = 1$ for all c, this equation reduces to:
$M_c^T|X_n = \delta\cdot\big\{E(X_n) \vee 1 \vee E[\{E(X_n|I_n) \vee R_c^T\}]\cdot(1-f)\big\}$ for $c \ge 1$
$M_0^T|X_n = 0$ for $c = 0$
Hence it is non-decreasing in c at t = T.

Now, assume it is true for t = k, or:
$M_c^k \ge M_{c-1}^k$ for all $c \ge 1$ (2)
Now, prove that it is true for t = k-1, or:
$M_c^{k-1} \ge M_{c-1}^{k-1}$ for all $c \ge 1$ (3)
where:
$M_c^{k-1}|X_n = \big\{[\delta M_{c-1}^k\cdot E(X_n)] \vee \delta M_c^k \vee \big[\delta M_{c-1}^k\cdot(1-f)\cdot E[E(X_n|I_n) \vee R_c^{k-1}]\big]\big\}$
and
$M_{c-1}^{k-1}|X_n = \big\{[\delta M_{c-2}^k\cdot E(X_n)] \vee \delta M_{c-1}^k \vee \big[\delta M_{c-2}^k\cdot(1-f)\cdot E[E(X_n|I_n) \vee R_{c-1}^{k-1}]\big]\big\}$
But since $M_c^k$ is non-decreasing in c, (2), we have:
$M_{c-1}^{k-1}|X_n \le \big\{[\delta M_{c-1}^k\cdot E(X_n)] \vee \delta M_c^k \vee \big[\delta M_{c-1}^k\cdot(1-f)\cdot E[E(X_n|I_n) \vee R_c^{k-1}]\big]\big\} = M_c^{k-1}|X_n$
Hence,
$M_{c-1}^{k-1}|X_n \le M_c^{k-1}|X_n$ for all $X_n$
Meaning (3) is true, namely:
$M_{c-1}^{k-1} \le M_c^{k-1}$
And, finally, by induction:
$M_{c-1}^t \le M_c^t$ for all $c \ge 1$ and for all t.
Step 1-2: Characterizing R

We prove this by induction. Take the case when t = T; the statement becomes:
$R_c^T \ge R_{c+1}^T$
This is true since:
$R_c^T = \delta M_c^{T+1}\big/\delta M_{c-1}^{T+1} = 1$ for all $c \ge 1$
Hence $R_c^T$ is non-increasing in c at t = T.

Now, assume this is true for t = k; let us prove it for t = k-1.
So we know that:
$R_c^k \ge R_{c+1}^k$ for all $c \ge 1$ (1)
$\Leftrightarrow \delta M_c^{k+1}\big/\delta M_{c-1}^{k+1} \ge \delta M_{c+1}^{k+1}\big/\delta M_c^{k+1}$
$\Leftrightarrow \big(M_c^{k+1}\big)^2 \ge M_{c+1}^{k+1}\cdot M_{c-1}^{k+1}$ for all c at $t = k+1$
For the statement to be true at t = k-1, the following must be true:
$R_c^{k-1} \ge R_{c+1}^{k-1}$ for all c (2)
$\Leftrightarrow \big(M_c^k\big)^2 \ge M_{c+1}^k\cdot M_{c-1}^k$ for all c at $t = k$ (3)
For each $X_n$, we can rewrite $M_c^k$ as:
$M_c^k|X_n = \big\{[CE(X_n)\cdot\delta M_{c-1}^{k+1}] \vee \delta M_c^{k+1} \vee CE\big[\{CE(X_n|I_n)\cdot\delta M_{c-1}^{k+1} \vee \delta M_c^{k+1}\}\cdot(1-f)\big]\big\}$
Now, define $Q_c^k$ as:
$Q_c^k(X_n) = CE\big[\{CE(X_n|I_n) \vee R_c^k\}\cdot(1-f)\big]$
So, we have:
$M_c^k|X_n = \delta M_{c-1}^{k+1}\cdot\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\}$
Now, rewrite (3) for each $X_n$ as:
$\big[\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\}\cdot\delta M_{c-1}^{k+1}\big]^2 \ge \big[\{CE(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\}\cdot\delta M_c^{k+1}\big]\cdot\big[\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\}\cdot\delta M_{c-2}^{k+1}\big]$ (4)
which is equivalent to:
$\frac{R_c^k}{R_{c-1}^k}\cdot\frac{\{CE(X_n) \vee R_{c+1}^k \vee Q_{c+1}^k(X_n)\}\cdot\{CE(X_n) \vee R_{c-1}^k \vee Q_{c-1}^k(X_n)\}}{\{CE(X_n) \vee R_c^k \vee Q_c^k(X_n)\}^2} \le 1$ (5)
And the rest of the proof follows directly from the case without discounting.
Step 2: Iterative Approximations

Here we prove the infinite horizon case by iterative approximations.
Define the operator $\mathcal{B}$ as:
$\mathcal{B}(V_c|X_n) = \big\{[\delta V_{c-1}\cdot E(X_n)] \vee \delta V_c \vee \big[\delta V_{c-1}\cdot(1-f)\cdot E[E(X_n|I_n) \vee R_c]\big]\big\}$
If we take the input of the first iteration to be 0, then the solution to the fixed-point relation of $\mathcal{B}$ above is $V_c$. Following the same methodology as above, we can show that the infinite horizon $V_c$ converges to that of the finite horizon, and the properties we proved for the finite horizon extend to the infinite horizon.
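The iterative-approximation argument can be mimicked in code. The sketch below is not the dissertation's model: it assumes risk neutrality (expectations in place of certain equivalents), a single two-outcome deal type with a noiseless indication, an illustrative boundary value V_0 = 1, and invented parameters. It simply iterates the operator to an approximate fixed point and checks the monotonicity properties.

```python
# Value-iteration sketch of the operator B from Step 2 (illustrative only).
delta = 0.95      # discount factor per deal arrival, assumed
f = 0.05          # information cost fraction, assumed
C = 4             # capacity levels 1..C
p_good = 0.5
x = {"good": 1.6, "bad": 0.6}          # E(Xn | In) under each indication
e_x = p_good * x["good"] + (1 - p_good) * x["bad"]   # E(Xn)

V = [1.0] * (C + 1)                     # V[0] = 1 is a boundary assumption
for _ in range(500):                    # iterate B toward its fixed point
    newV = V[:]
    for c in range(1, C + 1):
        accept = delta * V[c - 1] * e_x
        reject = delta * V[c]
        info = delta * (1 - f) * (
            p_good * max(V[c - 1] * x["good"], V[c])
            + (1 - p_good) * max(V[c - 1] * x["bad"], V[c])
        )
        newV[c] = max(accept, reject, info)
    V = newV

R = [V[c] / V[c - 1] for c in range(2, C + 1)]       # R_c for c >= 2
assert all(R[i] >= R[i + 1] - 1e-9 for i in range(len(R) - 1))  # non-increasing
print([round(v, 3) for v in V])         # V is non-decreasing in c
```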
Proposition 7.4.1 Statement 4

By definition, we have:
$iM_c(X_n) = \big[E(X_n)/R_c \vee 1\big]$
So $iM_c$ is inversely related to $R_c$, and hence $iM_c$ is non-decreasing in c.
Similarly, we have:
$iMI_c(X_n) = E\big[\big(E(X_n|I_n)/R_c \vee 1\big)\big] = E\big[iM_c(X_n|I_n)\big]$
So $iMI_c$ moves in the same direction as $iM_c$ and hence is also non-decreasing in c.
Proposition 7.4.1 Statement 5

Recall that the IBF of information is defined as:
$iBMI_c(X_n) = iMI_c(X_n)/iM_c(X_n)$
$iBMI_c(X_n) = E\big(E(X_n|I_n)/R_c \vee 1\big)\big/\big(E(X_n)/R_c \vee 1\big)$ (1)

We take two cases, as $E(X_n)$ relates to $R_c$.

CASE 1: $E(X_n) \le R_c$
(1) reduces to $iBMI_c(X_n) = E\big(E(X_n|I_n)/R_c \vee 1\big)$
So $iBMI_c(X_n)$ moves in the opposite direction of $R_c$.
Thus, $iBMI_c(X_n)$ is increasing in c when $E(X_n) \le R_c$.

CASE 2: $E(X_n) \ge R_c$
(1) reduces to $iBMI_c(X_n) = E\big(E(X_n|I_n)/R_c \vee 1\big)\big/\big(E(X_n)/R_c\big)$
$iBMI_c(X_n) = E\big(E(X_n|I_n) \vee R_c\big)/E(X_n)$
So $iBMI_c(X_n)$ moves in the same direction as $R_c$.
Thus, $iBMI_c(X_n)$ is decreasing in c when $E(X_n) \ge R_c$.

Now, since $iBMI_c(X_n)$ increases in case 1 and then decreases in case 2, we can see that it reaches a maximum when $R_c = E(X_n)$.

Finally, we study the convergence of the term. At c = C, (1) reduces to:
$iBMI_C(X_n) = E\big(E(X_n|I_n) \vee 1\big)/E(X_n)$
But the value of the deal with free information outside the funnel, $E(X_n^*)$, equals:
$E(X_n^*) = E\big(E(X_n|I_n) \vee 1\big)$
Thus, $iBMI_C = E(X_n^*)/E(X_n)$.
Proposition 7.4.1 Statement 6

Define $Q_c(X_n)$ as:
$Q_c(X_n) = E\big(E(X_n|I_n) \vee R_c\big)\cdot(1-f)$
Now, we have:
$V_c|X_n = V_{c-1}\cdot\{R_c \vee Q_c(X_n) \vee E(X_n)\}$
To prove this statement, we need to show that:
• If $E(X_n) \ge \{R_c, Q_c(X_n)\}$, then this is true for any $c' > c$.
• If $Q_c(X_n) \ge R_c$, then this is true for any $c' > c$.
To prove the first condition, we note that $R_c$ is decreasing in c. Hence, if the first condition is true at any given c, then it must be true for larger values of c. Now, from the definition of $Q_c(X_n)$ we see that it moves in the direction of $R_c$. Thus, $Q_c(X_n)$ will decrease in c and the first condition is true.
To prove the second condition, we note that it is sufficient to prove:
$Q_c(X_n)/R_c \ge Q_{c-1}(X_n)/R_{c-1}$
We rewrite this as:
$E\big(E(X_n|I_n) \vee R_c\big)\cdot(1-f)\big/R_c \ge E\big(E(X_n|I_n) \vee R_{c-1}\big)\cdot(1-f)\big/R_{c-1}$
$E\big(E(X_n|I_n)/R_c \vee 1\big) \ge E\big(E(X_n|I_n)/R_{c-1} \vee 1\big)$
$\Leftrightarrow iMI_c(X_n) \ge iMI_{c-1}(X_n)$
which we know is true from Proposition 7.4.1 statement 4.
A1.3.3 Section 7.5.1 Extensions - Multiple Cost Structures

Proposition 7.5.1a: Optimal Information Gathering Policy with Multiple Cost Structures

When offered a deal $X_n$, the decision maker should buy information if and only if:
$iMI_{c-b}^{t+k}(X_n) \ge iM_c^t(X_n)\cdot\frac{R_c^t(b,k+1)}{1-f}$

Proposition 7.5.1b: Optimal Allocation Policy with Multiple Cost Structures

After information is received, the decision maker should buy the deal if and only if:
$CE(X_n|I_n) \ge R_{c-b}^{t+k}(1,1)$
Otherwise, the deal is worth buying without information if and only if:
$CE(X_n) \ge R_c^t$
The recursion is reproduced here:

[Figure: decision tree with root $V_c^t|X_n\cdot W_0$. Accept is worth $W_{c-1}^{t+1}\cdot CE(X_n)\cdot W_0$; Reject is worth $W_c^{t+1}\cdot W_0$; Seek info (costing b capacity units, k extra periods, and fraction f) reveals $I_i$, after which Accept is worth $W_{c-1-b}^{t+1+k}\cdot CE(X_n|I_n)\cdot W_0\cdot(1-f)$ and Reject is worth $W_{c-b}^{t+1+k}\cdot W_0\cdot(1-f)$.]

Proposition 7.5.1a

We first write the recursion as:
$V_c^t|X_n = \Big\{[W_{c-1}^{t+1}\cdot CE(X_n)] \vee W_c^{t+1} \vee \Big[W_{c-b}^{t+1+k}\cdot(1-f)\cdot CE\Big(\frac{CE(X_n|I_n)}{R_{c-b}^{t+k}(1,1)} \vee 1\Big)\Big]\Big\}$
where
$R_c^t(b,k+1) = W_c^{t+1}\big/W_{c-b}^{t+1+k}$
This reduces to:
$V_c^t|X_n = W_c^{t+1}\cdot\Big\{\frac{CE(X_n)}{R_c^t} \vee 1 \vee \Big[\frac{W_{c-b}^{t+1+k}}{W_c^{t+1}}\cdot(1-f)\cdot CE\Big(\frac{CE(X_n|I_n)}{R_{c-b}^{t+k}(1,1)} \vee 1\Big)\Big]\Big\}$
$V_c^t|X_n = W_c^{t+1}\cdot\Big\{iM_c^t(X_n) \vee \Big[\frac{W_{c-b}^{t+1+k}}{W_c^{t+1}}\cdot(1-f)\cdot iMI_{c-b}^{t+k}(X_n)\Big]\Big\}$
$V_c^t|X_n = W_c^{t+1}\cdot\Big\{iM_c^t(X_n) \vee \Big[\frac{iMI_{c-b}^{t+k}(X_n)}{R_c^t(b,k+1)}\cdot(1-f)\Big]\Big\}$
Hence we seek information when:
$\frac{iMI_{c-b}^{t+k}(X_n)}{R_c^t(b,k+1)}\cdot(1-f) \ge iM_c^t(X_n)$
$\Rightarrow iMI_{c-b}^{t+k}(X_n) \ge iM_c^t(X_n)\cdot\frac{R_c^t(b,k+1)}{1-f}$
After information is received, the deal is worth accepting if and only if:
$W_{c-1-b}^{t+1+k}\cdot CE(X_n|I_n)\cdot W_0\cdot(1-f) \ge W_{c-b}^{t+1+k}\cdot W_0\cdot(1-f)$
$W_{c-1-b}^{t+1+k}\cdot CE(X_n|I_n) \ge W_{c-b}^{t+1+k}$
$\Rightarrow CE(X_n|I_n) \ge R_{c-b}^{t+k}(1,1)$
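As a quick numeric illustration of the Proposition 7.5.1a threshold (every value below is an assumption, not derived from a model):

```python
# Hedged check of the multiple-cost-structure information rule:
# information costs b capacity units, k extra periods, and fraction f.
f, b, k = 0.08, 1, 2                               # assumed costs
W_c_t1 = 2.0                                       # W_c^{t+1}, assumed
W_cb_t1k = 1.6                                     # W_{c-b}^{t+1+k}, assumed
R_bk = W_c_t1 / W_cb_t1k                           # R_c^t(b, k+1)
iM_ct = 1.1                                        # iM_c^t(Xn), assumed
iMI_cbtk = 1.7                                     # iMI_{c-b}^{t+k}(Xn), assumed

buy_info = iMI_cbtk >= iM_ct * R_bk / (1 - f)
print(buy_info)
```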
Corollary 7.5.1: Identifying the Optimal Detector with Multiple Cost Structures

Given the setup above, detector 1 will be optimal when:
$\frac{VI_1}{VI_2} \ge \frac{R_c^t(b_1,k_1+1)}{R_c^t(b_2,k_2+1)}\cdot\frac{1-f_2}{1-f_1}$
where $VI_1 = iMI_{c-b_1}^{t+k_1}$ over $I_1$, and $VI_2 = iMI_{c-b_2}^{t+k_2}$ over $I_2$.
Otherwise, detector 2 will be optimal.

From Proposition 7.5.1 we have that the value multiple of the alternative of information for detector 1, $A_1$, is worth:
$A_1 = \frac{W_c^{t+1}\cdot VI_1}{R_c^t(b_1,k_1+1)}\cdot(1-f_1)$
We define $A_2$ in the same manner.
So, we prefer detector 1 over detector 2 when $A_1 > A_2$, or:
$\frac{W_c^{t+1}\cdot VI_1}{R_c^t(b_1,k_1+1)}\cdot(1-f_1) \ge \frac{W_c^{t+1}\cdot VI_2}{R_c^t(b_2,k_2+1)}\cdot(1-f_2)$
$\Rightarrow \frac{VI_1}{VI_2} \ge \frac{R_c^t(b_1,k_1+1)}{R_c^t(b_2,k_2+1)}\cdot\frac{1-f_2}{1-f_1}$
A1.3.4 Section 7.5.2 Extensions - Decision Reversibility

The recursion is reproduced here:

[Figure: decision tree with root $V_c^t(O)|X_n\cdot W_0$. Buy option is worth $V_c^{t+1}(X_n)\cdot CE(X_n)\cdot W_0/(CE(O)\cdot F)$; Accept (don't buy the option) is worth $V_{c-1}^{t+1}(O)\cdot CE(X_n)\cdot W_0$; Reject is worth $V_c^{t+1}(O)\cdot W_0$.]

Proposition 7.5.2: Optimal Allocation Policy with an Option

When offered a deal $X_n$ with an option on deal O, the decision maker should accept the deal $X_n$ and buy an option on it if and only if:
$iMO_c^t(X_n,O) \ge MC_c^t(X_n,O)\cdot\frac{CE(O)}{1-f}$
Otherwise, the decision maker should accept if and only if:
$CE(X_n) > R_c^t(1,1,O)$

We have the following recursion:
$V_c^t(O)|X_n = \Big\{\Big[\frac{V_c^{t+1}(X_n)\cdot CE(X_n)}{CE(O)}\cdot(1-f)\Big] \vee \big[V_{c-1}^{t+1}(O)\cdot CE(X_n)\big] \vee \big[V_c^{t+1}(O)\big]\Big\}$
Dividing through by $V_{c-1}^{t+1}(O)\cdot CE(X_n)$, we buy the option when:
$\Big[\frac{V_c^{t+1}(X_n)}{V_{c-1}^{t+1}(O)\cdot CE(O)}\cdot(1-f)\Big] \ge MC_c^t(X_n,O)$
$\Big[\frac{V_c^{t+1}(X_n)}{V_{c-1}^{t+1}(O)}\Big] \ge MC_c^t(X_n,O)\cdot CE(O)/(1-f)$
$\Rightarrow iMO_c^t(X_n,O) \ge MC_c^t(X_n,O)\cdot\frac{CE(O)}{1-f}$
Otherwise, the deal is worth accepting if and only if:
$V_{c-1}^{t+1}(O)\cdot CE(X_n)\cdot W_0 \ge V_c^{t+1}(O)\cdot W_0$
$CE(X_n) \ge \frac{V_c^{t+1}(O)}{V_{c-1}^{t+1}(O)}$
$CE(X_n) \ge R_c^t(1,1,O)$
A1.3.5 Section 7.5.3 Extensions - Probability of Knowing Detectors

The recursion is reproduced here.

Recursion Equation

Define the following terms as before:
$iM_c^t(X_n) = \Big\{\frac{CE(X_n)}{R_c^t} \vee 1\Big\}$
$iMI_c^t(X_n) = CE\big(iM_c^t(X_n|I_n)\big)$

By dividing all the end terms by $W_c^{t+1}\cdot W_0$, the recursion above reduces to:

[Figure: decision tree with root $V_c^t|X_n/W_c^{t+1}$. Without information, Accept is worth $CE(X_n)/R_c^t$ and Reject is worth 1. Seeking information, clairvoyance $I_i$ is received with probability p, after which Accept is worth $CE(X_n|I_n)\cdot(1-f)/R_c^t$ and Reject is worth $(1-f)$; with probability $1-p$ no clairvoyance is received, and Accept is worth $CE(X_n)\cdot(1-f)/R_c^t$ and Reject is worth $(1-f)$.]
And finally reduces to:

[Figure: the same tree with the accept-or-reject subtrees collapsed. Seek info is worth $iMI_c^t(X_n)\cdot(1-f)$ with probability p (clairvoyance) and $iM_c^t(X_n)\cdot(1-f)$ with probability $1-p$ (no clairvoyance); not seeking information is worth $iM_c^t(X_n)$.]

Proposition 7.5.3a: Optimal Information Gathering Policy with Probability of Knowing Detectors

Given a detector defined as above with a probability of knowing p and cost fraction f, the decision maker should buy information if and only if:
$u\Big(\frac{iMI_c^t(X_n)}{iM_c^t(X_n)}\Big) - 1 \ge \frac{u\big(\frac{1}{1-f}\big) - 1}{p}$
Proposition 7.5.3b: Optimal Allocation Policy with Probability of Knowing Detectors

If clairvoyance is received, the decision maker should buy the deal if and only if:
$CE(X_n|I_n) \ge R_c^t$
Otherwise, if no clairvoyance is received or the decision maker did not buy information, then the decision maker should buy the deal if and only if:
$CE(X_n) \ge R_c^t$

We seek information when seeking information provides a higher value than not. We state this relationship in terms of u-values:
$u\big(iM_c^t(X_n)\big) \le p\,u\big(iMI_c^t(X_n)\cdot(1-f)\big) + (1-p)\,u\big(iM_c^t(X_n)\cdot(1-f)\big)$
For simplicity, we drop the arguments of iM and iMI, and write the power u-curve as $u(x) = x^\gamma$:
$(iM)^\gamma \le p\,\big(iMI\cdot(1-f)\big)^\gamma + (1-p)\big(iM\cdot(1-f)\big)^\gamma$
$\Big(\frac{iM}{1-f}\Big)^\gamma \le p\,(iMI)^\gamma + (1-p)(iM)^\gamma$
$\Big(\frac{iM}{1-f}\Big)^\gamma \le p\,(iMI)^\gamma - p\,(iM)^\gamma + (iM)^\gamma = p\big[(iMI)^\gamma - (iM)^\gamma\big] + (iM)^\gamma$
$\Big(\frac{1}{1-f}\Big)^\gamma \le p\Big[\Big(\frac{iMI}{iM}\Big)^\gamma - 1\Big] + 1$
$\Big(\frac{1}{1-f}\Big)^\gamma - 1 \le p\Big[\Big(\frac{iMI}{iM}\Big)^\gamma - 1\Big] = p\Big(\frac{iMI}{iM}\Big)^\gamma - p$
$\Rightarrow \Big(\frac{iMI}{iM}\Big)^\gamma - 1 \ge \frac{F^\gamma - 1}{p}$, where $F = 1/(1-f)$.
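The final inequality is easy to evaluate numerically. The fragment below is illustrative only: it assumes the power u-curve u(x) = x**gamma used in the derivation, and all parameter values are invented.

```python
# Hedged numeric check of the u-value condition from Proposition 7.5.3a.
gamma = 0.5        # exponent of the assumed power u-curve
p = 0.6            # probability the detector delivers clairvoyance, assumed
f = 0.05           # detector cost fraction, assumed
iM, iMI = 1.10, 1.45   # deal multiples without/with information, assumed

F = 1.0 / (1.0 - f)
lhs = (iMI / iM) ** gamma - 1.0
rhs = (F ** gamma - 1.0) / p
buy_info = lhs >= rhs
print(buy_info)
```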
Corollary 7.5.3: Identifying the Optimal Detector with Probability of Knowing Detectors

Given the setup above, detector 1 will be optimal when:
$\frac{u\big(\frac{1}{1-f_1}\big) - 1}{p_1} < \frac{u\big(\frac{1}{1-f_2}\big) - 1}{p_2}$
Otherwise, detector 2 will be optimal. In this setup, this optimality is myopic. So if we have multiple irrelevant detectors, we use them in increasing order of $\big[u\big(\frac{1}{1-f}\big) - 1\big]\big/p$.

Directly from Proposition 7.5.3 we have that the benefit of a detector is inversely related to $\big[u\big(\frac{1}{1-f}\big) - 1\big]\big/p$.
In the case with two detectors, we have the following setup (after dividing through by $W_0$):

[Figure: decision tree with root $V_c^t|X_n$. Without information, Accept is worth $W_{c-1}^{t+1}\cdot CE(X_n)$ and Reject is worth $W_c^{t+1}$. Using detector 1, clairvoyance arrives with probability $p_1$, after which Accept is worth $W_{c-1}^{t+1}\cdot CE(X_n|I_n)\cdot(1-f_1)$ and Reject is worth $W_c^{t+1}\cdot(1-f_1)$. With probability $1-p_1$, detector 2 is used: with probability $p_2$, Accept is worth $W_{c-1}^{t+1}\cdot CE(X_n|I_n)\cdot(1-f_1)\cdot(1-f_2)$ and Reject is worth $W_c^{t+1}\cdot(1-f_1)\cdot(1-f_2)$; with probability $1-p_2$, Accept is worth $W_{c-1}^{t+1}\cdot CE(X_n)\cdot(1-f_1)\cdot(1-f_2)$ and Reject is worth $W_c^{t+1}\cdot(1-f_1)\cdot(1-f_2)$.]

In the same manner as before, we reduce this to the following, with $F = 1/(1-f)$:

[Figure: the normalized tree, with end values divided by $W_c^{t+1}$. Detector 1's clairvoyance branch collapses to $iMI_c^t(X_n)/F_1$; detector 2's clairvoyance branch collapses to $iMI_c^t(X_n)/(F_1\cdot F_2)$; the no-clairvoyance branch collapses to $iM_c^t(X_n)/(F_1\cdot F_2)$.]

Note that if a detector does not satisfy:
$\Big(\frac{iMI}{iM}\Big)^\gamma - 1 \ge \frac{F^\gamma - 1}{p}$
then it is useless. Hence we consider the case where both detectors satisfy the above relation. This reduces the recursion to:

[Figure: the reduced tree, with Seek-info branches worth $iMI_c^t(X_n)/F_1$ with probability $p_1$, $iMI_c^t(X_n)/(F_1\cdot F_2)$ with probability $(1-p_1)p_2$, and $iM_c^t(X_n)/(F_1\cdot F_2)$ with probability $(1-p_1)(1-p_2)$.]

So, the value of the deal flow when using detector 1 before detector 2 is:
$u\big(V_c^t|X_n/W_c^{t+1}\big) = p_1\,u\big(iMI_c^t(X_n)/F_1\big) + (1-p_1)\big[p_2\,u\big(iMI_c^t(X_n)/(F_1 F_2)\big) + (1-p_2)\,u\big(iM_c^t(X_n)/(F_1 F_2)\big)\big]$
Again, we drop the arguments of iMI and iM for clarity to get:
$u\big(V_c^t|X_n/W_c^{t+1}\big) = p_1\,u(iMI/F_1) + (1-p_1)\big[p_2\,u\big(iMI/(F_1 F_2)\big) + (1-p_2)\,u\big(iM/(F_1 F_2)\big)\big]$
$u\big(V_c^t|X_n/W_c^{t+1}\big) = p_1\Big(\frac{iMI}{F_1}\Big)^\gamma + (1-p_1)\Big[p_2\Big(\frac{iMI}{F_1 F_2}\Big)^\gamma + (1-p_2)\Big(\frac{iM}{F_1 F_2}\Big)^\gamma\Big]$
$u\big(V_c^t|X_n/W_c^{t+1}\big) = p_1\Big(\frac{iMI}{F_1}\Big)^\gamma + p_2\Big(\frac{iMI}{F_1 F_2}\Big)^\gamma - p_1 p_2\Big(\frac{iMI}{F_1 F_2}\Big)^\gamma + \Big(\frac{iM}{F_1 F_2}\Big)^\gamma - p_1\Big(\frac{iM}{F_1 F_2}\Big)^\gamma - p_2\Big(\frac{iM}{F_1 F_2}\Big)^\gamma + p_1 p_2\Big(\frac{iM}{F_1 F_2}\Big)^\gamma$
Denote this case by (I) and the case with detector 2 before detector 1 by (II).
So, in order to have detector 1 before detector 2, we must satisfy:
$u(I) - u(II) \ge 0$
Hence, after eliminating canceling terms:
$p_1\Big(\frac{iMI}{F_1}\Big)^\gamma - p_2\Big(\frac{iMI}{F_2}\Big)^\gamma + (p_2 - p_1)\Big(\frac{iMI}{F_1 F_2}\Big)^\gamma \ge 0$
We multiply throughout by $\Big(\frac{F_1 F_2}{iMI}\Big)^\gamma$ to get:
$p_1(F_2)^\gamma - p_2(F_1)^\gamma + (p_2 - p_1) \ge 0$
And finally:
$\Rightarrow \frac{\big(\frac{1}{1-f_2}\big)^\gamma - 1}{p_2} \ge \frac{\big(\frac{1}{1-f_1}\big)^\gamma - 1}{p_1}$
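The resulting myopic ordering rule is a one-line sort. The sketch below is ours, not the dissertation's: it assumes the power u-curve u(x) = x**gamma and invents the detector parameters.

```python
# Illustration of Corollary 7.5.3's ordering: irrelevant detectors are used
# in increasing order of (u(1/(1-f)) - 1)/p, under an assumed power u-curve.
gamma = 0.5
detectors = {"A": (0.04, 0.7), "B": (0.10, 0.5), "C": (0.02, 0.2)}  # (f, p)

def index(f, p):
    return ((1.0 / (1.0 - f)) ** gamma - 1.0) / p

order = sorted(detectors, key=lambda d: index(*detectors[d]))
print(order)  # smallest index first: that detector is used first
```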
Appendix A2 - A Generic Decision Diagram Example
A2.1 Introduction Within a Venture Capital context, early stage decisions are difficult to make for several
reasons. The uncertainties are rather complex and span different areas. To account for
biases, firms resort to the use of due diligence lists. The lists fall short of specifically
accounting for uncertainties and the connections between them. Decision diagrams
accurately capture the effects of uncertainties, but they are not commonplace, due to the
perceived difficulty of applying them.
This appendix gives a sample generic decision diagram for Venture Capitalists evaluating an
Internet consumer business startup. We faced two challenges when developing the diagram.
The first is the trade-off between simplicity and generality. The second is to define the nodes
so as to pass the clarity test. As stated in Chapter 5, Richman (2009) vetted this model with
20 Venture Capitalists from Silicon Valley.
The rest of this appendix is organized as follows. In Section 2, we present the generic diagram
and discuss its setup and frame. We elaborate on the different nodes in detail in Section 3.
We discuss how to incorporate a higher level of detail into the diagram in Section 4. In
Section 5 we present some possible extensions.
A2.2 Generic Diagram Setup and Frame
Decision Maker Our decision maker is a potential investor in the company. His/her involvement is limited to
funding and influencing the hiring of the management team. The decision maker is taken to
have no influence on the business decisions of the company.
Frame Some firms only evaluate companies in specific areas or those that are recommended by a
specific group. The frame of the decision allows the decision maker to filter the opportunities
under discussion based on her preferences. Hence, outside of the diagram, the frame should
be well defined within the firm to incorporate its policies.
Setup The diagram presented below gives a high-level representation to allow for a quick analysis.
More detail can be added for the specific problem under consideration. We incorporate time
evolution by representing each node as a time series. We also allow feedback through the
"Observation" node. This node captures all that is observable and is relevant, with a
single-step delay, to some of the other nodes in the diagram.
We divide the nodes into five groups, namely, Initial Analysis, Execution, Results, Liquidation,
and Additional Considerations. The grouping is based on the chronological order of
assessments and exists solely to simplify the representation of the diagram.
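To make the grouping concrete, the diagram's structure can be captured as plain data. The sketch below is ours, not the dissertation's; the node names follow the text, and the edge list is abridged to a few of the dependencies described later in this appendix.

```python
# A minimal data sketch of the generic diagram (illustrative, abridged).
groups = {
    "Initial": ["Team", "Market Size & Growth", "Technology"],
    "Execution": ["Competitors", "Business Model", "Market Share & Growth"],
    "Results": ["Revenue", "Cost", "Profit"],
    "Liquidation": ["Exit", "Future Financing", "Cash Balance", "Dilution"],
}
edges = [  # (parent, child): "possible dependence" arrows, abridged
    ("Team", "Technology"),
    ("Business Model", "Revenue"),
    ("Market Share & Growth", "Revenue"),
    ("Revenue", "Profit"),
    ("Cost", "Profit"),
    ("Profit", "Exit"),
]
parents = {}
for a, b in edges:
    parents.setdefault(b, []).append(a)
print(parents["Profit"])  # -> ['Revenue', 'Cost']
```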
We adopt the notation discussed in Howard (1989) to represent uncertainties, decisions, and
evocative nodes. We represent delay with a "Z" on the arrow. The color-coding is there for
ease of identification only.
Diagram
Figure 71 - Example of a Generic Decision Diagram
[Figure: the generic decision diagram. The "Invest?" decision node influences nodes grouped as INITIAL (Team, Market Size & Growth, Technology), EXECUTION (Competitors, Possible Applications, Business Model, Market Share & Growth), RESULTS (Revenue, Cost, Profit), and LIQUIDATION (Exit, Future Financing, Cash Balance, Dilution), together with the Value node, the optional Hiring, Barriers to Entry, and Partnerships nodes, and an Observables node that feeds back to the others with a single-step delay (Z).]
where
Figure 72 - Generic Diagram Node Key
The "Invest" decision node represents the alternatives available to the decision maker. These
include the decision whether to invest or not and, if so, how much to invest and on what
conditions. The conditions can cover the choice of the team and the milestones set on future
investment decisions. The decision node is also a time series.
Invest The decision maker decides how much to invest, if any, and on what conditions. The
conditions include influencing the company's team and setting conditions for future
investments if appropriate.
A2.3 Generic Diagram Node Definitions
First Step: Initial Analysis Here we consider the aspects of the investment that are usually first presented to the
decision makers. Here we consider three nodes, namely, Team, Market Size and Growth, and
Technology. The following is a detailed discussion of the nodes.
Team Given a set of characteristics X, Y, and Z that the firm deems important in the team, this node
gives the distribution over the decision maker's belief of whether his/her partners will find
the team to be X, Y, and Z and to what degree. This node is conditioned on the decision
maker's investment decision.
[Figure: node key. Nodes: uncertainty, decision, value, observation. Arrows: possible dependence, optional assessment, feedback assessment (with delay Z).]
The management should consider the characteristics that matter most for the business at
hand. An example might be their technical know-how in the case of a proprietary business.
This distinction is influenced by the "Invest" decision node. Some of the main considerations
for assessing the team, important at this node, are their capabilities, prior experience,
education, and personalities.
Market Size and Growth This node gives a joint distribution over the size and growth of the product's addressable
market. All the potential applications of the product should be considered when assessing
the market size and growth. Also, the possibility of applications newer than those currently
considered should be included. The timing of the company and the current trends are
important considerations when assessing the size and growth of the available market.
Technology This node gives a distribution over the features of the technology to be developed. This is
conditioned on the Team node.
Failure to develop the technology is incorporated as a probability of failure within this node.
Hiring, especially hiring difficulties, should be considered when assessing how the technology
evolves over time, so we added it as an optional node to emphasize its importance. The
conditioning on the team includes the know-how and training of the team members.
Other considerations include the state of the art of the technology used in the product, the
relevant patents, and the scalability and maintainability of the technology. The code
platform, and whether it is open-source or not, is relevant to how the team develops the
technology.
Second Step: Execution Here we consider the company's business model, the competitive landscape, and the
company's market share and number of users.
Competitors This node gives a distribution over the set of possible actions by competitors. The distinction
over competitors evolves with time given the observable aspects of the company. This node
also includes the possibility of new competitors emerging. Barriers to entry should be
considered when assessing this node. Barriers to entry include patents and the cost and
delay in customer acquisition. We also included this as an optional node to emphasize its
importance.
Business Model This node gives a distribution over the possible variations of the business model that the
company will apply to its customers given the actions of its competitors.
There are uncertainties around people's acceptance of the business model and of the prices
the company can charge. This node also considers the effectiveness of the management
team in evolving their pricing to suit the market. The following are some of the
considerations around the Business Model.
Market Share & Growth This node gives the distribution over the company's market share and its growth rate. This is
conditioned on the business model, team, technology, and competitors.
The market share node can be substituted with a node for the number of users, which will
also be conditioned on the Market Size. This node will be the most difficult to assess for
many reasons. For one, it is conditioned on several other nodes and the assessments
required might be too many for the decision maker to comprehend. This node is a good
example of where the inclusion of a more detailed layer is important.
Third Step: Results This section tracks the financial results of the company. It includes three nodes:
revenue, cost, and profit.
Revenue This node gives the distribution over the company's revenue given the business model,
market size & growth, and market share & growth.
Cost This node gives the distribution over the company's cost given the team, market size &
growth, and market share & growth. When assessing cost, the decision maker should
consider hiring costs. The uncertainties around the variable and fixed costs incurred by the
company should also be included here. Other considerations include the costs to acquire and
maintain a customer.
Profit This node is a deterministic function of the company's revenue and cost: profit is revenue
minus cost.
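The flow from the Revenue and Cost nodes into the deterministic Profit node can be illustrated with a minimal Monte Carlo sketch. The triangular distributions below are hypothetical placeholders, not assessed values; in practice each distribution would come from the assessments described above.

```python
import random

def sample_profit(n_samples: int, seed: int = 0) -> list[float]:
    """Propagate uncertainty through the Revenue and Cost nodes to the
    deterministic Profit node: profit = revenue - cost.
    The triangular distributions are illustrative placeholders."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_samples):
        revenue = rng.triangular(1e6, 8e6, 3e6)  # low, high, mode (dollars)
        cost = rng.triangular(2e6, 6e6, 4e6)
        profits.append(revenue - cost)
    return profits

profits = sample_profit(10_000)
mean_profit = sum(profits) / len(profits)
```

The same propagation pattern applies to any node that is a deterministic function of uncertain parents.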
Fourth Step: Liquidation
Exit This node gives the distribution over the possible exit scenarios for the company given its
level of profit. This node can be conditioned on the market share and growth instead of
profit for early-stage exit. The exit strategy includes the variations on the exit terms
acceptable to the company.
Future Financing This node gives the distribution over the available future financing (in dollars) given the
manner in which the company evolves. Macroeconomic considerations such as the
availability of capital and the credit market should be included when assessing this
distinction. The team's financial strength and its financial network are also relevant to the
availability of future financing.
Cash Balance This node gives the distribution over the cash balance of the company conditioned on its
profit and future financing. The cash balance affects the valuation and exit strategy available
to the company. If the company is pressed for cash, they will be at a disadvantage when
negotiating the valuation.
Dilution This node gives the distribution over the effects of future financing on the investor's share of
the company. This node reflects the price of the future financing obtained by the company.
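The arithmetic behind the Dilution node can be sketched as follows, with hypothetical numbers. In a simple priced round, each future financing sells a fraction of the post-money company, and every existing holder is diluted proportionally.

```python
def diluted_share(initial_share: float, round_fractions: list[float]) -> float:
    """Investor's ownership after each future financing round sells a
    fraction of the (post-money) company to new investors.
    Assumes simple priced rounds with proportional dilution."""
    share = initial_share
    for sold in round_fractions:
        share *= (1.0 - sold)
    return share

# A 20% stake diluted by two rounds that each sell 25% of the company:
final = diluted_share(0.20, [0.25, 0.25])  # 0.20 * 0.75 * 0.75 = 0.1125
```

Anti-dilution provisions or option-pool top-ups would change these fractions, which is exactly the uncertainty the node is meant to capture.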
Value and Feedback In this section, we discuss the value of this company to the decision maker. We also describe
the feedback in the diagram through the observation node.
Value This node gives the distribution of the decision maker's financial return (in dollars) from this
company, which is conditioned on the company's profit, cash balance, and exit strategy. It is
also conditioned on the dilution incurred on the decision maker's interest in the company.
This node also includes the assessed value (in dollars) of the decision maker's partnerships in
other firms.
Partnerships This node gives the distribution over the possible relationship scenarios with other firms
through this company and their assessed value (in dollars).
Observations This is a deterministic node that captures all the observable factors within this diagram. In
order to simplify this diagram we omitted the arrows from the observable nodes to the
observations node. Any node that can be observed after time as passed is connected to this
node. In this way, this node gives a summary report on all that was observable about the
evolution of the company.
Feedback is modeled by conditioning some of the distinctions on the observations node in
the previous time period. We can extend this view by conditioning the nodes on different
subsets of the observations by having multiple observation nodes that are connected to
different distinctions.
A2.4 Deeper Layers The diagram can be made more comprehensive and/or specific by adding new layers of
detail. We recommend that the firm use the first layer described above for quick analysis and
develop a second layer for in-depth analysis. The second layer can also cater to the specific
preferences of the firm. A third layer can be added to include the specifics of the venture
under consideration.
In the following we discuss a more detailed representation of the relationship between
competition and the market share. Such a representation can be used to build a more
detailed decision diagram.
Competitors The competitive landscape can be represented in the following Venn diagram:
Figure 73 - Venn Diagram of the Competitive Landscape
We represent two metrics: the overlap between our own addressable markets and those of
each competitor, and the competitor's captured market from each overlap. In the diagram
above, the background orange box is the company's addressable market. We represent how
close the offering of a competitor is to that of the company by the overlap of their
addressable markets. The silver box, for example, shows the overlap between competitor C
and the company. We can see that competitor C's offering is distant from that of A, as its
addressable markets do not overlap. Within each of the competitors' addressable markets
we represent their market share within that overlap with an orange box outline. This metric
represents the strength of the competitorβs market presence.
Following this representation we can elaborate the competitors' node in the decision
diagram above into the cluster shown in Figure 74.
Figure 74 - Cluster Diagram of the Competitorsβ Node
Market Share Here we follow the same representation as that shown in Figure 73. We note three distinct
areas. First, there is an area of the market that is only addressed by us. This includes any
markets created by our company. The second type is the share of the market that is
addressed by other competitors but still not captured by anyone. For a growing market, this
includes the customers who are not yet aware of the offerings. It also includes new entrants
to the market (teenagers above a certain age, etc.). The third type, the most common, is the
market that is addressable by our company's product but is already captured by another
competitor. Figure 75 illustrates the different types.
Figure 75 - Types of Market Share
The market share can be represented by a decision diagram that includes the conversion
ratios of each of these areas. The following diagram uses only one conversion ratio for each
area. However, one can elaborate the diagram even further by specifying a conversion ratio
for each specific combination of the competitors.
Figure 76 - Example of an internal diagram for Market Share
The conversion cost node represents the cost incurred by a customer to switch from a
competitor's solution to that of our company. This cost includes the financial price and the
time delay.
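The conversion-ratio structure of Figure 76 can be sketched numerically: the company's market share is the sum, over the three areas, of each area's size weighted by its conversion ratio. The sizes and ratios below are hypothetical, chosen only to illustrate the computation.

```python
def market_share(areas: dict[str, float], conversion: dict[str, float]) -> float:
    """Market share as the conversion-weighted sum of the three areas:
    area 1 (addressed only by us), area 2 (addressed but not yet captured),
    area 3 (already captured by a competitor). Sizes are fractions of the
    total addressable market; all numbers are illustrative placeholders."""
    return sum(areas[a] * conversion[a] for a in areas)

areas = {"area1": 0.30, "area2": 0.25, "area3": 0.45}
# Converting captured customers (area 3) is hardest because of the conversion cost:
conversion = {"area1": 0.60, "area2": 0.40, "area3": 0.10}
share = market_share(areas, conversion)  # 0.18 + 0.10 + 0.045 = 0.325
```

Elaborating the diagram further, as suggested above, would replace the single ratio per area with one ratio per combination of competitors.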
A2.5 Extensions In this section we discuss two extensions to the investment problem that are based on the
generic diagram. The first relates to the incorporation of future decisions, while the second
addresses the study of investment milestones.
Future Decisions It is common in VC practice to divide the investment in a venture into multiple phases. In this
way, the decision makers are able to track the progress of the company before committing
further funds. This strategy can be analyzed as an options problem. Consider the base case
in which the investor is asked to commit, in a single decision, all the funds needed to take the
company to an exit. The decision maker finds value in the option to delay part of the
investment to a later period, thereby obtaining some information about the venture before
committing the rest of the funds. Investment rounds can be thought of as options on future
investment decisions and their value can be calculated by following the method in Howard
(1995).
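The value of staging can be illustrated with a deliberately simple two-scenario sketch (hypothetical probabilities and payoffs, not Howard's full method): committing all funds up front is compared with funding a first round, observing the venture, and abandoning in the bad scenario. Risk neutrality and perfect interim information are simplifying assumptions.

```python
def staged_vs_upfront(p_good: float, payoff_good: float, payoff_bad: float,
                      total_funds: float, first_round: float) -> tuple[float, float]:
    """Expected value of (a) committing total_funds in a single decision versus
    (b) investing first_round, observing the venture, and committing the
    remainder only in the good scenario (abandoning otherwise)."""
    upfront = p_good * payoff_good + (1 - p_good) * payoff_bad - total_funds
    staged = (p_good * (payoff_good - (total_funds - first_round))
              + (1 - p_good) * 0.0) - first_round
    return upfront, staged

up, staged = staged_vs_upfront(p_good=0.3, payoff_good=20.0, payoff_bad=0.0,
                               total_funds=5.0, first_round=2.0)
# up = 0.3*20 - 5 = 1.0; staged = 0.3*(20 - 3) - 2 = 3.1 (all in $M)
option_value = staged - up  # 2.1
```

The difference between the two expected values is the value of the option to delay, which is what the multi-round structure buys the investor.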
Milestones Venture firms usually set certain milestones with each investment phase. These milestones
are typically set on the basis of common practice or to alleviate a certain concern. As an
extension to this work, milestones can be studied as imperfect tests. Strict milestones can be
represented as tests with high specificity and low sensitivity. Relaxed milestones are the
opposite; they have low specificity and high sensitivity. The tradeoffs between the specificity
and sensitivity of the test might shed some light on the factors that come into play in
designing milestones.
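Viewing a milestone as an imperfect test can be made concrete with Bayes' rule: given a prior probability that the venture is "good," the milestone's sensitivity and specificity determine the posterior once it is passed. The numbers below are hypothetical.

```python
def posterior_given_pass(prior: float, sensitivity: float, specificity: float) -> float:
    """P(good | milestone passed), treating the milestone as an imperfect test.
    sensitivity = P(pass | good); specificity = P(fail | bad)."""
    p_pass = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pass

# A strict milestone (high specificity, low sensitivity) is strong evidence when passed:
strict = posterior_given_pass(prior=0.2, sensitivity=0.5, specificity=0.95)
# A relaxed milestone (low specificity, high sensitivity) moves the prior much less:
relaxed = posterior_given_pass(prior=0.2, sensitivity=0.95, specificity=0.5)
```

Here the strict milestone raises the prior of 0.2 far more than the relaxed one, illustrating the tradeoff discussed above.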
Appendix A3 – Venture Capital Valuation In this appendix we give a brief introduction to Venture Capital valuation, drawn from the
practitioner literature and from direct interviews. From the literature, we give an overview of
valuation models and then focus on Venture Capital valuation as discussed by the HBR article
The Venture Capital Method. We also interviewed several Venture Capitalists in Silicon
Valley. While this overview does not meet the standards of a formal academic survey, we believe
the methods we assessed give interesting insight into actual Venture Capitalists' decision
making.
A3.1 Literature Review In the following we give a high-level overview of valuation models and then focus on the
valuation within a Venture Capital context.
A3.1.1 Valuation Overview We consider three general, not mutually exclusive, frameworks for valuation. First, and most
notable, is the discounted cash flow model. The second is relative valuation, and the third
is the option pricing framework. We refer the interested reader to Damodaran
(2001) and (2002), Copeland et al. (2005), and Cornell (1993).
The discounted cash flow framework, in short, is based on discounting projections of future
cash flows at an appropriate rate. The discount rate
should reflect the characteristics of the investment, including, mainly, its "riskiness." The
relative valuation model derives the value of an investment from the known market value of
comparable investments. In this framework, the analyst uses specific, standard measures of
value, including the price/earnings ratio. The option pricing framework is similar to the DA
understanding of options and I will use it when applicable.
In general, the valuation of a firm considers the following aspects. First, the projections of
the future cash flow of the company are set. These projections are point estimates of the
future that are rationalized through a deterministic model. At the end of the projection
period, a terminal value for the investment is projected. Then the appropriate discounted
cash flow model is chosen given the specifics of the investment. After that, the discount rate
appropriate for this investment is determined, taking into account its individual
characteristics. Finally, other characteristics of the investment that are not taken into
account by the projected cash flows are considered. These characteristics may include the
marketability (liquidity) of the investment, the managerial flexibility, and the control rights
associated with the investment.
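The DCF steps above can be sketched in a few lines: discount the projected yearly cash flows and the terminal value at a rate reflecting the investment's characteristics. The projections and rate below are hypothetical.

```python
def dcf_value(cash_flows: list[float], terminal_value: float, rate: float) -> float:
    """Present value of projected yearly cash flows plus a terminal value
    at the end of the projection period, discounted at `rate`."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    pv += terminal_value / (1 + rate) ** len(cash_flows)
    return pv

# Three years of projected cash flows ($M) and a terminal value, at a 40% rate
# typical of early-stage discounting:
value = dcf_value([-2.0, 1.0, 3.0], terminal_value=30.0, rate=0.40)
```

Note that the high discount rate makes the terminal value, although nominally large, contribute most of the present value, which is characteristic of startup valuations.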
Relative valuation determines the value of an asset by comparing it to similar assets that
have an established market price. Similarity is considered largely in terms of risk and return.
Other factors include sector, liquidity, seasonality, and business models.
Option pricing models define the investments in terms of options and then evaluate the
options in terms of market prices. These are best suited to specific investment opportunities
that have embedded or real options.
A3.1.2 The Venture Capital Method
Overview The Venture Capital Method (VCM) was introduced by Sahlman (1986) and it focuses on
determining the appropriate discount rate to be used by the investor and the best way to use
this discount rate to determine what percentage the investor acquires. The latter part is
mainly algebraic manipulation that ensures that the investor's share in the company is
sufficient to yield a satisfactory return and/or degree of control with a minimum of risk.
Projections for Future Cash Flow VCM avoids this part of the analysis by assuming that the investor takes future cash flow as a
given and actually recommends asking the entrepreneur to give his estimate based on the
best-case scenario.
Setting the Discount Rate The main part of VCM is devoted to determining the appropriate discount rate to apply to
the risky investment. VCM suggests the following components that determine the discount
rate:
Base Rate of Return: This is the base rate available from risk-free investments. It
compensates for inflation and is usually set by the yield on government bonds.
Risk Premium: Here the VC considers a premium for both unsystematic risk and systematic
risk. Unsystematic risk, by the VCM definition, is the non-market risk, which includes all the
uncertainties that are specific to the investment and are irrelevant to the market conditions.
VCM is based on the premise that the investor can diversify such risks and hence should not
require a premium for taking on such risks. Systematic risk is the market risk, which includes
all the uncertainties that are relevant to market conditions. Startup-like investments are
usually highly vulnerable to market conditions, and therefore VCM assigns them a high risk
parameter.
Liquidity Premium: The marketability of equity in startups and privately held businesses is
limited due to a number of reasons, including legal restrictions. VCM accounts for this
limitation by requiring a higher discount rate.
Value Added Premium: This premium is relevant when the investors are actively engaged in
the business. VCM suggests requiring an increase in the discount rate that accounts for the
time the investors spend with the company.
Cash Flow Adjustment: Here the VCM takes into account the prior experience of the
investor. VCM adjusts the terminal value of the investment by taking into account how
similar investments evolve. To illustrate, VCM states that the investor usually faces three
prospects, namely, success, lateral movement, and loss. Here, "success" means that the
company will meet or exceed its expectations; "lateral movement" means that the investor
will only be able to retrieve his investment; and, finally, "loss" means that the company is
liquidated. Using VCM, the investor takes the projections to be the mean given the three
prospects.
Estimating the Terminal Value VCM suggests that the investor use the price-to-earnings (PER) ratio to estimate the value of
the investment. The PER that is applied to a given investment can be approximated from
similar publicly traded firms. VCM also suggests checking the following points when
estimating the PER for the investment (taken directly from the paper):
1. Are the company's revenue forecasts consistent with:
a. The overall industry projections?
b. The level of entry and competition?
c. The internal ability of the company to sustain the growth rate?
2. Are the company's margin forecasts consistent with:
a. The level of entry barriers now and expected in the future?
b. The relative bargaining power of suppliers and customers?
c. The threat of substitutes?
d. The intensity and form of current and projected competition?
3. Is the terminal valuation, given agreement on terminal sales and margin forecasts,
consistent with the current level of valuations in the market for:
a. Liquidation?
b. Initial public offering?
c. Acquisition by another company?
It should be noted, however, that VCM does not suggest a framework for incorporating these
questions into the estimation of the PER.
A3.1.3 The First Chicago Method (FCM) The VCM paper also gives a "state-of-the-art" method that differs from the one described
above, as it explicitly takes into account the different scenarios discussed in the "Cash Flow
Adjustment" process. Therefore, with FCM the investor lays down the cash
flow projections associated with the different scenarios and their respective probabilities. As
expected, the investor will reduce the required discount rate after explicitly accounting for
some of the risk associated with the investment.
FCM allows the investor to account for some of the specifics of the company. For example,
with FCM the investor can differentiate between companies with different liquidation rates
in the loss scenario.
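The FCM computation can be sketched as a probability-weighted present value across the three scenarios. The scenario payoffs, probabilities, and the reduced 30% rate below are hypothetical illustrations, not assessed values.

```python
def fcm_value(scenarios: list[tuple[float, float]], rate: float, years: int) -> float:
    """First Chicago Method sketch: probability-weighted present value of
    the terminal payoff in each scenario (success, lateral movement, loss).
    scenarios: (probability, terminal payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in scenarios) / (1 + rate) ** years

# Success / lateral movement / loss payoffs ($M), discounted over 5 years at a
# rate lower than the single-scenario VCM rate because the scenario
# probabilities already carry some of the risk:
value = fcm_value([(0.25, 80.0), (0.45, 5.0), (0.30, 1.0)], rate=0.30, years=5)
```

Changing the loss-scenario payoff models the different liquidation rates mentioned above, something the single-scenario VCM cannot express directly.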
A3.2 The "Real VC" View The following is a summary of the thoughts of Venture Capitalists based on conversations
with a number of representative VC. The discussions covered four aspects: their preferences,
the investment profile, due diligence, and valuation.
Preferences Investors are either market first or team first.
Market First These VC believe that great markets are the key to success and that they can accommodate
bad teams and command great talent.
Team First These VC believe that great people are the key to success. In their belief, great people can do
great things, create great ideas, and drive great markets.
Investments Profile VC focus on investments that can go public (IPO); such investments satisfy the following
criteria after about five years. The investments should have the potential to achieve
revenues between $50M and $100M. They should also exhibit annual growth rates of
50%-100%. In this view, the investment decisions are binary; deals either fit the profile or
they do not.
VC classify opportunities into three categories: those with IPO potential, those with
acquisition potential, and those that will be "out". The classifications are based on the annual
revenues and growth rates of the opportunities. The following graph shows the different
possible scenarios:
Figure 76 - Venture Capital Opportunity Classifications
(The graph plots annual revenue in dollars against annual growth rate in percent, with
reference levels at $50M and $100M revenue and at 50% and 100% growth; arrows show the
passage of time, and the three curves correspond to IPO, acquisition, and "out" trajectories.
Only firms that are believed to follow the green (IPO) curve are considered as investment
options by VC.)
Due Diligence The goal of this part of the process is to decide whether a certain investment fits the VC's
criteria or not. They do this by studying the market, management team, value proposition,
and the company's competence. VC leverage their area expertise and connections to be able to
assess the value of the deals and add value to the ones in which they decide to invest.
Valuation The VC we interviewed did not put much emphasis on valuation. In their view, if they believe
an investment has what it takes to reach the upper right quadrant, then the investment
required is irrelevant. To decide on the investment, they consider three factors.
First, they consider the amount of investment the company needs to achieve its goals and
mitigate the risks through the multiple rounds. Second, they set the valuation in a way that
allows them to acquire a large enough percentage of the company to justify their commitment.
Finally, they require the option to maintain the same percentage in future rounds.
A3.3 Summary The section above is intended to provide context for the environment in which VC make their
decisions and the factors involved. This description provides a background in which DA can
be considered as a tool for decision-making.