Quantification and Persuasion in Managerial Judgment
Kathryn Kadous Emory University
Atlanta, Georgia 30322 [email protected]
Lisa Koonce
University of Texas Austin, Texas 78712
Kristy L. Towry Emory University
Atlanta, Georgia 30322 [email protected]
August 2003
The Business Measurement and Assurance Services Center in the Department of Accounting at the University of Texas provided funding for this project. A previous version of this paper received helpful comments from Urton Anderson, Wendy Bailey, Jake Birnberg, Tom Dyckman, Jeffrey Hales, Susan Krische, Marlys Lipe, Joan Luft, Anne Magro, Linda Myers, Geoff Sprinkle, and participants at the 2003 AAA Management Accounting Section Mid-year Meeting, the 2003 AAA Annual Meeting, and at workshops at Cornell University and the Universities of Illinois and Oklahoma.
Quantification and Persuasion in Managerial Judgment
Abstract
Managers proposing new projects or changes in firm operating policy must decide how to present their proposals so as to maximize the likelihood of approval. Conventional wisdom holds that putting numbers to a proposal, or quantifying it, enhances its persuasive power. However, little scholarly evidence exists to support or refute this claim. In this paper, we develop an original process-based model of how quantification influences persuasion in a managerial decision-making context. We posit that quantifying a proposal enhances its persuasive power by increasing both the perceived competence of the proposal preparer and the perceived plausibility that a favorable outcome could occur. However, quantification also invites scrutiny of the details of the proposal, which potentially offsets these effects. We experimentally test implications of our model, investigating conditions in which quantification is more and less likely to result in criticism of the quantified proposal. We also test the model itself using structural equations methods. Results largely support the model, which should prove of value to researchers interested in the effects of measurement and quantification on decisions. Further, our results should be of interest to managers who prepare proposals, as well as to those who evaluate them.
1. Introduction
Managers and others (analysts, accountants, etc.) proposing new projects or changes in firm
operating policies must decide how to present their proposals so as to maximize the likelihood of
approval. One common, but not universal, approach is to include a quantitative analysis of the
proposal (IMA 1999). While anecdotal evidence suggests that decision makers are more easily
convinced by quantified than by non-quantified proposals (Birdsell 1998), little scholarly
evidence exists to support or refute this claim. In this paper, we propose a new model that
articulates the process by which quantification can positively and negatively influence proposal
persuasiveness. Further, we provide empirical evidence on the validity of the model using two
experimental studies.
Quantification involves measurement and analysis, and thus it is the essence of accounting.
Understanding the process by which quantification influences managerial decision-making is
important because doing so allows us to identify the conditions under which quantification is
likely or unlikely to increase the persuasiveness of a proposal. That is, this understanding will
help us to refine the conventional wisdom that quantified arguments are more convincing than
non-quantified arguments. Although many might argue that this conventional wisdom is overly
simplistic, surprisingly little research has been done to achieve a more comprehensive
understanding of the impact of quantification. Business professionals who spend costly time and
effort developing quantified proposals would benefit from knowing when their efforts are likely
to be cost-beneficial.
Briefly, our original process-based model suggests that quantification of proposals influences
persuasion via three potentially offsetting effects. Specifically, we posit that a quantified proposal
can be more persuasive than a non-quantified proposal because the former reflects more
positively both on the competence of the manager who prepared it and on the plausibility that a
favorable outcome could result from the proposed action. Offsetting these two positive effects,
though, is the possibility that quantification provides a greater opportunity for the superior to
scrutinize the proposal details. If such scrutiny results in criticism, quantification can make a
proposal less persuasive. Thus, whether quantification is more persuasive than non-
quantification depends on the relative impact of these three effects.
We argue that two important contextual factors that vary in managerial decision settings—
the nature of the underlying data (subjective versus objective) and the incentives facing the
proposal preparer—are important determinants of the extent to which the positive effects of
quantification (i.e., competence and plausibility) outweigh the negative effects (i.e., criticism).
In two studies using participants with significant work experience and business training, we
experimentally vary the level of subjectivity in the underlying data and the degree to which the
preparer’s incentives are consistent with the long-run interests of the firm, along with whether or
not the proposal is quantified, to validate our model and to test its implications.1 We also use
structural equations methods to test the model directly.
Our results confirm most aspects of the model, and, in doing so, they provide evidence about
when quantification will be persuasive and when it will not. Specifically, we find that
quantification increases persuasion via the higher perceived competence of a preparer of a
quantified vs. non-quantified proposal. We also document that when the proposal inputs are
subjective or the proposal preparer has incentives to mislead, quantification leads to greater
1 As explained in more detail in the method section of the paper, we conduct a three-way fractional factorial design, meaning we vary three independent factors, but do not fully cross these factors. Thus, rather than collecting 2³ = 8 cells of data, we collect only the six cells most relevant to our theory. Our model development focuses on two (overlapping) sets of four cells, and for ease of exposition we present our results within the framework of two (2 x 2) experimental studies.
critical analysis, reducing proposal persuasiveness. By identifying and separately measuring
these component effects, we are able to provide the first scholarly demonstration of why
quantification sometimes is more persuasive and sometimes is no more persuasive than non-
quantification.
Our paper is important for several reasons. First, we provide new insights to the literature on
quantification. Specifically, we develop an original process-based model of how extensive
quantification influences managerial decision-making, and we provide empirical evidence
supporting that model. Our theoretical insights are particularly important because the limited
literature on quantification focuses on how people equate numbers with words (e.g.,
Viswanathan and Childers 1996) or how memorable numbers are relative to words (e.g., Kida
and Smith 1995). Given the importance of quantification within accounting, we believe
theoretical development is long overdue.
Second, our experimental results refine the general notion that quantification improves
proposal persuasiveness. We show that while quantified proposals enhance the perceived
competence of the preparer, they also, under certain circumstances, invite greater scrutiny than
do non-quantified proposals. While prior research in auditing and financial reporting suggests
that subjective data and inconsistency between the communicator’s and decision maker’s
incentives may lead to greater skepticism on the part of decision makers (e.g., Hirst 1994;
Hutton et al. 2003), we extend this research by demonstrating how those features influence
behavior via their interaction with quantification. Indeed, we propose a theory and provide
evidence that the skepticism associated with subjective data and preparer incentives differentially
affects quantified and non-quantified proposals. Moreover, we illuminate the process by which
skepticism is translated into reduced persuasion for quantified proposals. Further, and contrary
to popular wisdom (Porter 1995), our results suggest that supervising managers evaluating
proposals from subordinates do not exhibit “blind trust in numbers” and are unlikely to be
inappropriately persuaded by quantified proposals.
The remainder of this paper is organized as follows. In section 2 we develop our model of
quantification and formulate testable hypotheses. The third section describes our experimental
method. Sections 4 and 5 describe the results of our hypothesis tests for studies 1 and 2,
respectively. The final section contains a discussion and our conclusions.
2. Theory and Hypotheses
When managers propose new projects or changes to firm operating policies, they typically
develop proposals outlining expected outcomes of the proposed actions. Substantial variation
exists in the content of these proposals (IMA 1999). Some contain only qualitative arguments
explaining the proposed project. Others are quantified, associating numbers and numerical
analysis with each component of the proposal.2 Conventional wisdom suggests that
quantification enhances one’s ability to persuade superiors to accept a proposal (Birdsell 1998;
McCleary et al. 2000), yet it is not clear why this would always hold true. In other words, we
know very little about the process by which quantification affects a proposal’s persuasiveness.
In this paper, we propose and experimentally validate a new model of the process by which
quantification affects persuasiveness. Previous research in marketing and psychology has
examined the substitution of numbers for words (e.g., “5 percent” versus “small proportion”)
(e.g., Viswanathan and Childers 1996; Wallsten et al. 1993). Previous accounting research has
2 We verified that there is considerable real-world variation in the extent of quantification in proposals via discussions with several managerial accountants with substantial work experience. While many (but not all) companies have standardized, heavily-quantified formats for capital budgeting proposals, there is more diversity in the format of proposals for other projects and activities, such as the change in operating policy used in our study.
focused on how numbers are encoded and retrieved in memory-based decision tasks (e.g., Kida
and Smith 1995; Kida et al. 1998).3 Our investigation differs from prior research in that it takes
place in a managerial setting in which decisions are not typically memory-based. Further, we
study a more extensive form of quantification than examined in previous research. We define
quantification as the practice of assigning numbers to events and performing calculations on
those numbers. As such, quantification is not merely a formatting issue or a substitution of
numbers for words. Rather, a quantified proposal provides more information of a specific type
than a non-quantified proposal. We examine extensive quantification because it is common in
managerial accounting settings.
Because quantification provides more information, one might expect that it always enhances
persuasiveness (assuming that this information is consistent with the proposal’s objective).
However, our model allows us to postulate conditions under which a quantified proposal will and
will not be more persuasive than a non-quantified one. In the following sections, we develop our
process-based model of quantification that describes how individuals react to quantified versus
non-quantified proposals. We use our model to formulate and test specific hypotheses, which are
formally stated following development of our model of quantification. We also empirically test
the model itself using a structural-equations approach.
2.1 A PROCESS MODEL OF QUANTIFICATION
In our model, we propose that receipt of a quantified (as compared to a non-quantified)
proposal can influence persuasion by 1) enhancing the superior’s perception of the preparer’s
3 While no research that we know of has examined the persuasive power of quantification in prediction tasks such as managerial project-related decisions, limited research on the issue has been performed using a diagnostic auditing task. In a study of auditor judgment, Anderson et al. (2003) find that even though a client’s quantified explanation for an unusual fluctuation in financial statement numbers provided information about the plausibility of the client’s proposed cause of the fluctuation, auditors did not find the quantified explanation to be more persuasive than the non-quantified one. Those authors do not manipulate the nature of the inputs to the analysis, do not examine critical analysis of the quantification, and do not test a process-based model of quantification.
competence, 2) increasing the superior’s perception of the plausibility that a favorable outcome
to the proposed action could occur, and 3) potentially increasing the likelihood that the superior
will criticize the details of the proposal. Figure 1 illustrates the model. The model components
are explained below.
---------------------------------------------
Insert Figure 1 here
---------------------------------------------
2.1.1. Competence Effects. When a superior receives a proposal from a subordinate, we posit
that the superior will make an inference about the preparer’s competence (Jones and Davis
1965). Specifically, we hypothesize that a superior receiving a quantified proposal will infer that
the preparer possesses the knowledge required to perform the calculations and logical analysis, is
willing to exert the effort to do so, and is prepared for discussions with the manager (Anderson et
al. 2003). That is, unless there are obvious errors or biases in the estimates and calculations,
quantification will evoke positive inferences about the preparer’s competence. In contrast, when
a superior receives a non-quantified proposal, he has less information regarding the preparer’s
competence. Therefore, we expect that a superior will assess the preparer’s competence as higher when
the proposal is quantified than when it is not (Link 1). We further expect that managers will be
more willing to rely on the representations of managers they view as more competent, and thus
we hypothesize that perceived preparer competence and proposal persuasiveness are positively
associated (Link 2).
2.1.2. Plausibility Effects. We expect that quantification demonstrates the plausibility of the
proposed (favorable) outcome, and that this effect enhances the persuasiveness of a proposal.
Quantification demonstrates that a favorable outcome could occur (at least under the assumptions
of the analysis) and provides details as to how it could occur (McCleary et al. 2000, 84). For
example, a quantified proposal may reveal that one alternative has a higher net present value than
other alternatives under the given assumptions. In contrast, a non-quantified proposal can only
make a directional argument, and so it cannot demonstrate plausibility to this extent. Thus, we
expect perceptions of the plausibility of favorable proposal outcomes to be greater when
proposals are quantified than when they are not (Link 3). We further propose that plausibility of
a favorable outcome and proposal persuasiveness will be positively associated (Link 4).
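To make the plausibility argument concrete, the following is a minimal sketch of the kind of net-present-value comparison a quantified proposal might contain. The cash flows, discount rate, and alternative labels below are invented for illustration; they are not taken from the paper’s experimental materials.

```python
# Illustrative NPV comparison for a "postpone maintenance" proposal.
# All figures (in $000s) and the 10% discount rate are hypothetical.
DISCOUNT_RATE = 0.10

def npv(cash_flows, rate=DISCOUNT_RATE):
    """Net present value of cash flows received at t = 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

postpone = [250, -30, -120]   # savings now, higher repair risk later
keep_plan = [0, 0, 0]         # baseline: no incremental cash flows

# Under these assumptions, postponement shows the higher NPV --
# exactly the kind of demonstrated plausibility a directional
# (non-quantified) argument cannot provide.
print(npv(postpone) > npv(keep_plan))
```

The point is not the particular numbers but that a quantified proposal commits to assumptions under which a favorable outcome demonstrably occurs, whereas a non-quantified proposal can only assert the direction of the effect.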
2.1.3. Critical Analysis Effects. Research on persuasion predicts that superiors develop
knowledge about various techniques (called “persuasion tactics”) that subordinates use to
convince others to take a desired action (Friestad and Wright 1994, 4).4 Although prior research
has not identified quantification as a persuasion tactic, given the general wisdom that quantified
proposals are more persuasive than non-quantified ones (e.g., McCleary et al. 2000), we expect
that superiors view quantification as a persuasion tactic.5
Via their experiences with many persuasion attempts, it is likely that superiors will have
developed approaches to coping with persuasion tactics (Friestad and Wright 1994, 4). Coping
techniques are varied and depend on the type of persuasion approach. In the context of proposal
evaluation, one potential coping technique is to critically analyze the details of the proposal.
4 We build on the persuasion knowledge model, which originates from marketing. While this model does not address how quantification per se influences the persuasion process, we rely on some of its ideas to help us make predictions about how superiors cope with proposals they receive. Other persuasion models (including the elaboration-likelihood model (Petty and Cacioppo 1986) and the heuristic-systematic model (Eagly and Chaiken 1993)) largely deal with how cognitive effort influences how people process arguments – a focus that is unrelated to our research question. Like the persuasion knowledge model, these latter models do not address the role of quantification.
5 We develop and test our theory in the context of a high-quality quantified analysis. That is, the quantified analysis has no obvious errors or biases, but, as with any proposal, it presents opportunities for critical analysis (see footnote 11). Accordingly, our theory may not generalize to poor-quality quantifications. In addition, our study context is one in which the numbers can be made to support a favorable outcome to the proposed action. When quantification cannot support a favorable outcome, it may be that the lack of quantification represents a persuasion tactic, and our theory may not generalize to that setting.
Quantified proposals are more likely to trigger coping behavior than are non-quantified
proposals, because quantified proposals are more likely to be perceived as persuasion tactics.
Further, the specific coping technique of critical analysis is more readily available to evaluators
of quantified proposals because such proposals provide detail of the analysis, while non-
quantified proposals do not (cf. Fischhoff 1995, 143). For both of these reasons, we predict that
quantification will positively affect the level of critical analysis (Link 5). When critical analysis
does occur, we expect that it will weaken the persuasive power of the proposal (Link 6).
2.1.4. Mediation by Plausibility. We also predict that perceived plausibility of a favorable
outcome will play an additional, more subtle role than that already discussed. Specifically, the
perceived plausibility of a favorable outcome will partially mediate the relationship between
persuasion and both perceived preparer competence and critical analysis. Turning first to the
link between preparer competence and plausibility (Link 7), proposals received from preparers
who are perceived to be more (less) competent are likely to be viewed as more (less) plausible.
This prediction is based on research demonstrating the inferential value of information is a
function of the credibility of the source (Bamber 1983; Hirst 1994). For the link between critical
analysis and plausibility (Link 8), we rely on research indicating that the plausibility of
information is influenced by the extent to which individuals resist a persuasion attempt
(Ahluwalia 2000). That is, when the likelihood of critical analysis is greater (smaller), the
perceived plausibility of a proposal is lessened (heightened).6
6 Although we expect plausibility to mediate the relationship between preparer competence and persuasion and between critical analysis and persuasion, we do not believe that plausibility will be a complete mediator. That is, we still anticipate a significant path from preparer competence to persuasion (Link 2) and from critical analysis to persuasion (Link 6). These links are based on prior research indicating that characteristics of the source and the degree of resistance to persuasion can have direct impacts on persuasion. For example, Eagly and Chaiken (1993) show that people often judge situations based on characteristics of the communicator, such as their perceived knowledge or trustworthiness, and not necessarily the content of their message. Ahluwalia (2000) argues that a message recipient may have critical thoughts about a communicator that are unrelated to the content of the message at hand but that nevertheless influence the perceived persuasiveness of the message.
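The eight links developed above can be summarized as a small directed graph. The sketch below is our reconstruction of the structure described in the text (Figure 1 itself is not reproduced in this transcript), with construct names of our own choosing; it enumerates the routes by which quantification can reach persuasion, including the two mediated routes through plausibility.

```python
# Directed links of the process model, numbered as in the text (Links 1-8).
# Node names are our own shorthand for the paper's constructs.
links = {
    1: ("quantification", "competence"),
    2: ("competence", "persuasion"),
    3: ("quantification", "plausibility"),
    4: ("plausibility", "persuasion"),
    5: ("quantification", "critical_analysis"),
    6: ("critical_analysis", "persuasion"),
    7: ("competence", "plausibility"),        # mediation by plausibility
    8: ("critical_analysis", "plausibility"), # mediation by plausibility
}

def paths(graph, start, end, seen=()):
    """Enumerate all simple directed paths from start to end."""
    if start == end:
        yield seen + (end,)
        return
    for src, dst in graph.values():
        if src == start:
            yield from paths(graph, dst, end, seen + (start,))

routes = sorted(paths(links, "quantification", "persuasion"))
for route in routes:
    print(" -> ".join(route))
```

Enumerating the graph makes the partial-mediation claim visible: competence and critical analysis each reach persuasion both directly (Links 2 and 6) and indirectly through plausibility (Links 7 and 8 feeding Link 4).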
2.2 FACTORS INFLUENCING RELATIVE STRENGTHS OF PATHS
Our model proposes that quantification affects persuasion via three potentially offsetting
paths. Thus, the overall effect of quantification will depend on the relative strengths of these
three paths, which, in turn, will be determined by contextual factors. In this paper, we focus on
the critical analysis path. Specifically, we identify contextual factors likely to influence the
degree to which quantification leads to critical analysis. We argue that the association between
quantification and critical analysis depends on characteristics of the data (i.e., nature of inputs)
and characteristics of the preparer (i.e., incentives) (Friestad and Wright 1994, 5).7 Each is
considered in turn below.
2.2.1. Effects of Nature of Proposal Inputs. Because proposals predict future outcomes, they
necessarily are somewhat subjective. That is, the estimates, predictions and assumptions may
not be readily verifiable (Libby and Kinney 2000). On the other hand, in some cases proposals
can be made more objective by applying a standardized set of assumptions, or by using data from
similar projects or situations that have occurred in the past (e.g., Hilton 1994). Thus, the level of
subjectivity of the inputs to the analysis varies across proposals.
Prior accounting research suggests that superiors will be more skeptical of a proposal when
the underlying data are subjective rather than objective. For example, Hutton et al. (2003) show
that the market does not react to good news earnings forecasts that contain subjective discussion
but does react to such disclosures that contain more objective and verifiable statements. We
7 The lower perceived managerial competence that results from lack of quantification may cause higher desire on the part of the superior to critically analyze a proposal. At first blush, this line of reasoning suggests that an additional path should be added to our model from perceived competence to critical analysis. However, recall that quantification provides the opportunity to engage in critical analysis. Thus, while lack of quantification indicates lower competence, thereby potentially increasing the desire to critically analyze the details of the proposal, it also decreases the superior’s ability to successfully criticize the details (as there are fewer details in a non-quantified proposal). Given the focus of our study, we measure successful criticism, and not the desire to criticize, and so the path does not apply. Because a relatively high level of competence is expected from all managers in professional settings, we leave to future research a more-detailed analysis of when and how perceptions of competence affect critical analysis.
apply this intuition to a managerial accounting setting, and extend this literature by arguing that
variation in the subjectivity of the inputs to managerial proposals will differentially affect
reactions to quantified versus non-quantified proposals.
In particular, because a quantified proposal is seen as a persuasion tactic, upon receipt of
such a proposal, the superior will look to contextual cues to determine how to best react (Friestad
and Wright 1994, 4-5). Suppose that the proposal preparer has incentives inconsistent with those
of the firm. If the quantified proposal is based on subjective inputs, superiors will likely be
concerned that the preparer could manipulate subjective data to suit his personal goals. The
superior can cope with likely manipulation by critically analyzing the details of the quantified
proposal. On the other hand, if a superior evaluating a quantified proposal believes that the
inputs are relatively objective, he will realize that the objective nature of the analysis prevents
manipulation and will be less likely to engage in critical analysis. That is, the superior will be
more likely to accept the analysis as presented.
In contrast, a non-quantified proposal is less likely to be viewed as a persuasion tactic, and it
does not lend itself to the same degree of scrutiny as does a quantified proposal. Thus, superiors
will be less inclined to critically analyze non-quantified proposals, regardless of the nature of the
data. Therefore, we hypothesize that when the preparer’s incentives diverge from the firm’s,
quantification and the nature of proposal inputs will interact. Specifically, a superior will be
more likely to criticize the details of a quantified proposal based on subjective inputs than a
quantified proposal based on objective inputs or a non-quantified proposal based on either
subjective or objective inputs.
These critical analysis effects, along with the effects for competence and plausibility
discussed earlier, suggest the following effects of quantification on proposal persuasiveness
when the preparer’s incentives are not aligned with the firm’s. When data are objective, we
hypothesize that two of the paths in our process-based model—those through perceived preparer
competence and plausibility—will increase persuasion, and that the other—through critical
analysis—will have little or no impact on persuasion. Accordingly, we anticipate that quantified
proposals will be more persuasive than non-quantified proposals when inputs are objective.
However, when inputs to the analysis are subjective, quantification leads to a higher likelihood
of critical analysis, and so the model predicts offsetting effects. That is, with subjective inputs,
superiors are less likely to judge quantified proposals as more persuasive than non-quantified
ones. This prediction is formalized below.
Summary Hypothesis 1: When the preparer’s incentives diverge from the firm’s, quantification and the nature of inputs will interact such that a superior will be more persuaded by a quantified proposal based on objective inputs than a quantified proposal based on subjective inputs or a non-quantified proposal, regardless of the nature of inputs.
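The offsetting logic behind this hypothesis can be illustrated with a back-of-the-envelope calculation. The path weights below are purely hypothetical and are not estimates from the paper; the sketch only shows how a critical-analysis effect of sufficient size can cancel the competence and plausibility gains from quantification.

```python
# Purely illustrative path weights (not estimated parameters).
def persuasion_shift(competence=0.3, plausibility=0.3, critical=0.0):
    """Net change in persuasion from quantifying a proposal:
    two positive paths minus the negative critical-analysis path."""
    return competence + plausibility - critical

# Objective inputs: little critical analysis, so quantification helps.
objective_case = persuasion_shift(critical=0.0)

# Subjective inputs (with misaligned incentives): critical analysis
# offsets the competence and plausibility gains.
subjective_case = persuasion_shift(critical=0.6)

print(objective_case, subjective_case)
```

Under these made-up weights, the quantified proposal gains persuasiveness only in the objective-inputs case, which is exactly the pattern Summary Hypothesis 1 predicts.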
2.2.2. Effects of the Preparer’s Incentives. We also consider whether quantification can be
persuasive even when the inputs to the proposal are subjective. We argue that when inputs are
subjective, a characteristic of the preparer—namely, whether his incentives are aligned with the
firm’s—will affect the relative persuasiveness of quantified vs. non-quantified proposals. Prior
research in accounting has documented that the incentives of a preparer affect how a message is
perceived. For example, the market reacts differently to information provided by managers with
and without incentives for earnings management (Healy and Wahlen 1999). In addition, auditors
react more favorably to information obtained from those without incentives to mislead (e.g.,
other auditors) than they do to those with such incentives (e.g., client managers) (Hirst 1994).
We extend this general finding, proposing that increased skepticism toward messages from those
with incentives to mislead will manifest in different ways for quantified and non-quantified
proposals, and thus will have different effects on the persuasiveness of quantified and non-
quantified proposals.
We posit that the incentives of the preparer will influence the degree to which quantification
leads to critical analysis. In particular, when the preparer’s incentives are consistent with the
long-run interests of the company, the preparer will have no reason to manipulate the numbers to
mislead the superior. Accordingly, the superior will have little motivation to criticize the details
of the analysis, regardless of the nature of the underlying data, and critical analysis of the
quantified proposal is unlikely. When the preparer has incentives inconsistent with those of the
firm, however, we expect the superior to cope by criticizing the details of the quantified
proposal. As above, we expect that critical analysis is less likely for non-quantified proposals.
Therefore, we hypothesize that when the data underlying the proposal are subjective,
quantification and the preparer’s incentives will interact. Specifically, the level of critical
analysis will be greater for a quantified proposal from a preparer with incentives that are
inconsistent with the firm’s than for a quantified proposal from a preparer with consistent
incentives, or for a non-quantified proposal, regardless of the preparer’s incentives.
Based on the preceding discussion, we hypothesize that when inputs are subjective and the
preparer’s incentives are consistent with the firm’s, two of the paths—those through perceived
preparer competence and plausibility of a favorable outcome—will increase persuasion, and that
the other—through critical analysis—will have little or no impact on persuasion. Therefore,
superiors should find quantified proposals more persuasive than non-quantified proposals in this
situation. However, when the nature of the inputs is subjective and the preparer has interests
inconsistent with those of the firm, quantification leads to relatively greater critical analysis, and
so our process-based model predicts offsetting effects. Therefore, superiors are less likely to find
quantified proposals more persuasive than non-quantified ones in these circumstances. This
prediction is formalized below.
Summary Hypothesis 2: When the data underlying a proposal are subjective, quantification and the preparer’s incentives will interact such that a superior will be more persuaded by a quantified proposal from a preparer with incentives that are consistent with the firm’s than a quantified proposal from a preparer with inconsistent incentives or a non-quantified proposal, regardless of the preparer’s incentives.
3. Method
We experimentally tested our hypotheses with 114 graduate business students from a
Business Week Top 25 MBA program. All participants voluntarily took part in the study as a
class exercise, and they received no compensation. We conduct a three-way (quantified vs. non-
quantified, subjective vs. objective inputs, consistent vs. inconsistent incentives) fractional (as
opposed to fully crossed) factorial design (Winer et al. 1991, 585). Because testing our model
does not necessitate investigation of the two consistent incentives / objective inputs cells, we do
not collect data for these cells. Our model development focuses on two (partially overlapping)
sets of four cells, and for ease of exposition, from this point forward we present our results
within the framework of two 2 x 2 experimental studies. However, it should be noted that the
two studies share two cells of data (the two inconsistent incentives / subjective inputs cells).
Figure 2 presents an overview of our experimental design.
---------------------------------------------
Insert Figure 2 here
---------------------------------------------
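The fractional design described above can be sketched by enumerating the cells directly. Factor and level names below are paraphrased from the text; the sketch confirms that dropping the two consistent-incentives / objective-inputs cells leaves six cells, and that the two 2 x 2 studies share exactly two cells.

```python
from itertools import product

# The three manipulated factors, two levels each (labels paraphrased).
factors = {
    "proposal":   ["quantified", "non-quantified"],
    "inputs":     ["objective", "subjective"],
    "incentives": ["consistent", "inconsistent"],
}

# A full crossing would yield 2**3 = 8 cells.
full_design = set(product(*factors.values()))
assert len(full_design) == 8

# The fractional design omits the two consistent-incentives /
# objective-inputs cells, which the theory does not require.
collected = {cell for cell in full_design
             if not (cell[1] == "objective" and cell[2] == "consistent")}

# Study 1: proposal x inputs, holding incentives at "inconsistent".
study1 = {cell for cell in collected if cell[2] == "inconsistent"}
# Study 2: proposal x incentives, holding inputs at "subjective".
study2 = {cell for cell in collected if cell[1] == "subjective"}

print(len(collected), len(study1), len(study2), len(study1 & study2))
```

Each study is a complete 2 x 2, but because both include the inconsistent-incentives / subjective-inputs conditions, those two cells of data serve double duty.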
Participants ranged in age from 25 to 45, with an average age of 29. Seventy-three percent of
participants were male. Participants had, on average, five years of work experience, primarily in
engineering, management, and corporate accounting. Further, all participants had completed
core MBA courses in economics, financial and managerial accounting, management, marketing,
finance, decision analysis, strategy, and information technology. Participants’ work experience
and academic training provided them with the knowledge needed to evaluate a proposal such as
that used in our study.
Each participant adopted the role of supervising manager of a plastics company. The
superior was charged with deciding whether to postpone routine, but expensive, maintenance
(called “turnaround”) on machinery for one division of the company. After reading background
information, participants read a proposal prepared by the division manager (the preparer) that
recommended postponing the turnaround for one year. Participants’ task was to determine
whether to agree to the proposal, thereby lengthening the maintenance schedule from two to
three years, or to continue with the current maintenance plan. We ensured that the proposals
were of high quality by having a former operations analyst with experience in the polypropylene
industry help us develop them, and by having plant engineers and finance professionals in the
industry review the materials.
In study 1, the preparer was described as having personal incentives to recommend
postponing the planned turnaround. Specifically, the preparer was due to be transferred to
another plant at the end of the current year as part of the normal management development
rotation. Moreover, his yearly bonus was to be based on the current year’s profitability for his
plant. The design of study 1 was two (type of proposal: quantified or not quantified) by two
(nature of the inputs to the proposal: objective or subjective), with both factors manipulated
between participants. Both the quantified and non-quantified proposals listed the same
categories of costs and benefits of postponing the maintenance, and both recommended
postponement. The quantified proposal, however, provided an estimate for each line item and
showed the details of all calculations.
The nature of the inputs was manipulated by including one of the following passages in the
experimental materials. In the objective inputs condition, participants were told that turnaround
analyses are inherently objective, as follows:
Most aspects of turnaround analyses are objective. In fact, much of the analysis done for this type of decision is typically based on hard data, such as historical production rates, current market prices, and data on unplanned shutdowns at this and other plants. Accordingly, there is not much room for judgment in this type of analysis.
Participants in the subjective inputs condition were told that turnaround analyses are inherently
subjective, as follows:
Most aspects of turnaround analyses are inherently subjective. In fact, much of the analysis typically done for this type of decision is based on soft data, such as opinions and assumptions. Accordingly, there is a lot of room for judgment in this type of analysis.
Study 2 was identical to study 1 except that the inputs were always described as subjective,
as above, and the study involved a 2 x 2 between-subjects design with type of proposal
(quantified or not quantified) and the incentives of the preparer (consistent or inconsistent with
the firm’s) as manipulated variables. In the inconsistent incentives condition, the preparer’s
incentives were described as in study 1. In the consistent incentives conditions, the preparer was
described as having an employment contract likely to give him a long-term focus and thus likely
to make decisions in the best long-run interests of the firm. The type of proposal manipulation
was identical to that in study 1.
After participants read background information and the proposal from the manager, we asked
them to indicate the likelihood they would recommend postponing the turnaround on a 101-point
scale with endpoints labeled “definitely will not support postponing the turnaround” (0) and
“definitely will support postponing the turnaround” (100). We then asked participants to explain
the reasoning behind their recommendations. The explanations were used to determine whether
participants critically analyzed the details of the subordinate manager’s proposal. Next,
participants answered a series of questions, including several designed to measure their
perceptions of the competence of the division manager and the plausibility of the proposal.
Finally, responses to manipulation checks and demographic items were collected.
4. Study 1 Results
4.1 MANIPULATION AND OTHER CHECKS
When asked to identify whether they had received a quantified or non-quantified proposal
from the division manager, all but one participant correctly identified the proposal type,
indicating a successful manipulation of proposal type. When asked whether the turnaround
analysis involved objective or subjective inputs, 95% (78%) of the participants in the subjective
(objective) condition indicated the analysis was subjective (objective). These responses are
significantly associated with the experimental condition (χ2(1) = 25.37, p < 0.01), indicating a
successful manipulation of the nature of the inputs to the proposal. Participants also rated the
objective inputs as more reliable (mean of 4.67 on an 11-point scale with endpoints labeled “very
unreliable” (0) and “very reliable” (10)) than the subjective inputs (mean of 3.62) (F1,70 = 4.95, p
< 0.03), verifying that the participants interpreted the proposal inputs as intended.
We also asked participants to identify whether the division manager’s compensation
agreement gave him a short-term or long-term focus. All participants responded correctly, indicating that we successfully communicated the short-term nature of the division manager’s incentives.
To test whether participants viewed quantification as a persuasion tactic, we asked them to
gauge the extent to which they believed the subordinate was trying to get them to buy into his
analysis. As expected, participants receiving a quantified proposal believed the preparer had a
stronger desire to get them to buy into the proposal than did those receiving a non-quantified
proposal (F1,71 = 14.43, p < 0.01). We also asked participants about the kind of information that
would be most effective if they wanted to persuade someone to undertake a business project. Of
the three choices (quantitative, qualitative, and either qualitative or quantitative), all but four
participants indicated that a quantitative analysis would be most effective, as expected.
4.2 SUMMARY HYPOTHESIS 1
In Summary Hypothesis 1, we predict that a superior will be more persuaded by a quantified
proposal based on objective inputs than a quantified proposal based on subjective inputs or a
non-quantified proposal, regardless of the nature of the inputs. We test this interaction
hypothesis first. We then test the process model that generated this prediction.
Panel A of Table 1 shows the descriptive statistics for the postponement decision by
experimental condition and Panel B presents the planned contrast test of Summary Hypothesis 1
and follow-up contrasts. Because the hypothesis predicts a specific pattern of cell means, we
employ contrast coding to statistically test the hypothesis. Consistent with the hypothesized
pattern, contrast weights are +3 in the quantified proposal/objective inputs condition, and -1 in
the other three conditions. The planned contrast is statistically significant (F1,71 = 8.89, p <
0.01), supporting Summary Hypothesis 1.8
8 Note that the hypothesis specifies that the persuasiveness of the proposal in the quantified/objective condition will be higher than in any of the other three conditions, but makes no ex ante comparisons of the other three conditions to each other. Thus, our main contrast test compares the quantified/objective condition to the mean of the other three conditions. For completeness, we test the robustness of our hypothesis test to a number of alternate contrast codings, all of which meet the requirement that the quantified/objective condition is higher than any of the other three conditions (4, 2, -2, -2; 4, -1, -2, -2; 4, -1, -2, -1; 4, -1, -1, -2; and 4, 3, -4, -3). All of these tests yield identical inferences (all p-values < 0.05, two tailed).
--------------------------------------------- Insert Table 1 here
---------------------------------------------
We follow up on this significant interaction by examining the simple main effects of
quantification in the objective and subjective input conditions. When inputs to the analysis are
objective, mean postponement decisions are significantly higher when a quantified proposal is
received than when a non-quantified proposal is received (t71 = 1.97, p = 0.03). However, when
inputs to the analysis are subjective, mean postponement decisions are not significantly different
in the quantified and non-quantified conditions (t71 = 0.40, p = 0.69).
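The arithmetic of such a one-degree-of-freedom planned contrast can be sketched as follows. All cell means, sizes, and standard deviations below are hypothetical placeholders (the actual values appear in Table 1); only the +3/-1/-1/-1 weighting pattern comes from the text, and the hypothetical cell sizes are chosen merely so that the error degrees of freedom match the reported 71.

```python
# Sketch of a one-degree-of-freedom planned contrast on four cell means.
# Cell statistics are hypothetical; weights follow the hypothesized
# pattern (+3 for quantified/objective, -1 for each other cell).
import math

def contrast_test(means, ns, sds, weights):
    """Return (t statistic, error df) for a planned contrast.

    Uses the pooled within-cell error variance, as in a standard
    ANOVA contrast; the contrast F equals t squared.
    """
    df_error = sum(n - 1 for n in ns)
    mse = sum((n - 1) * sd ** 2 for n, sd in zip(ns, sds)) / df_error
    psi = sum(w * m for w, m in zip(weights, means))  # contrast value
    se = math.sqrt(mse * sum(w ** 2 / n for w, n in zip(weights, ns)))
    return psi / se, df_error

# Hypothetical cells: quant/objective, quant/subjective,
# non-quant/objective, non-quant/subjective
t_stat, df = contrast_test(means=[62.0, 48.0, 50.0, 49.0],
                           ns=[19, 18, 19, 19],
                           sds=[20.0, 22.0, 21.0, 23.0],
                           weights=[3, -1, -1, -1])
```

A significant positive t on this single contrast indicates that the quantified/objective cell mean exceeds the average of the other three, which is exactly the pattern Summary Hypothesis 1 predicts.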
4.3 PROCESS MODEL TESTS
To more thoroughly ascertain that these results are due to the processes described by our
model, we rely on structural equations-based path analysis. This approach simultaneously
estimates each link in the model shown in Figure 1, allowing us to determine the validity of our
model and its ability to explain our persuasion results.9 In our path analysis, the type of proposal
and nature of inputs correspond to our experimental manipulations, as described previously.
Persuasiveness corresponds to the likelihood of taking the proposed action (postponing the
maintenance), as previously discussed. Below, we briefly summarize the measures used for each
of the other (endogenous) variables used in the model.
4.3.1. Measurement of Model Variables. The preparer competence measure is a composite
score created from participants’ responses to four items. These items asked how prepared the
preparer was to explain the need for the turnaround, how knowledgeable he was about the costs
and benefits of the turnaround, how much effort he put into generating the information, and,
9 The modeling procedure that we utilize (using AMOS software) employs maximum likelihood estimation, the generally accepted estimation method. We verify that the main effect results hold for weighted least-squares with mean and variance adjustment using MPLUS. The latter method is particularly appropriate when outcome variables are categorical, as is the case with our critical analysis variable. Inferences are identical for the two methods.
finally, how competent he was at his job. All measures were collected on 11-point Likert scales
on which higher scores refer to higher levels of preparedness, knowledge, effort, and
competence. Table 2, Panel A summarizes responses to the four items by experimental
condition. In a confirmatory factor analysis, the eigenvalue for the first factor is 3.04 and all
other eigenvalues are less than one. The first factor explains 76% of the variance in the
measures. This analysis suggests that one “competence” factor is highly explanatory of these
data. Accordingly, we used the first factor as our composite measure of competence.10
--------------------------------------------- Insert Table 2 here
---------------------------------------------
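As a rough illustration of how such a one-factor composite can be formed, the sketch below simulates responses to four correlated 11-point items and takes the first factor of their correlation matrix as the composite score. All data are simulated, and this is a simplified principal-components analogue, not the AMOS-based analysis used in the paper.

```python
# Sketch: one-factor composite from four correlated items.
# Data are simulated from a single latent factor for illustration only.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(75, 1))                       # one latent trait
items = 5 + 2 * latent + rng.normal(scale=0.8, size=(75, 4))
items = items.clip(0, 10)                               # keep on 0-10 scale

z = (items - items.mean(axis=0)) / items.std(axis=0)    # standardize items
corr = np.corrcoef(z, rowvar=False)                     # 4 x 4 correlations
eigvals, eigvecs = np.linalg.eigh(corr)                 # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals[0] / eigvals.sum()   # variance explained by factor 1
composite = z @ eigvecs[:, 0]            # first-factor composite score
```

When the items are driven by one underlying construct, the first eigenvalue dominates (all others fall below one) and the first-factor score serves as the single composite measure, mirroring the logic of the competence and plausibility composites.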
The plausibility measure is also a composite score, created through a factor analysis. Three
questions, designed to measure perceptions of the plausibility that the proposal would lead to a
favorable outcome, asked about the extent to which the preparer’s analysis demonstrated how the
cost savings associated with postponing the planned turnaround could materialize; how plausible
it was that extending the turnaround could be good for the company in the long run; and the
extent to which the participant believed it was plausible that the benefits of postponing the
turnaround could exceed its costs. All measures were collected on 11-point Likert scales on
which higher scores refer to higher levels of plausibility. Table 3, Panel A summarizes
responses to the three items by experimental condition. Confirmatory factor analysis yielded an
eigenvalue of 1.81 for the first factor, with all other eigenvalues less than one. The first factor
10 Participants’ requests for more detail in the proposal (obtained from their responses to the open-ended written explanation task where participants described the reasoning behind their postponement decision) likely capture information about superiors’ views of preparer’s preparation (which is a component of competence). Consistent with this idea, across both studies, requests for more detail are significantly associated with lower perceived competence (F1,111 = 11.68, p < 0.01).
explains 60% of the variance in the measures. One “plausibility” factor is highly explanatory of
these data, and so we use the first factor as our composite measure.
--------------------------------------------- Insert Table 3 here
---------------------------------------------
We derived a measure of whether participants criticized the details of the analysis from their
written explanations of the reasoning behind their postponement decisions. These explanations
were coded for whether they contained comments indicating that participants questioned the
accuracy or completeness of the information or the validity of the analysis method. Thus, the
critical analysis measure is dichotomous, with a written explanation containing a critical analysis
comment coded as a “yes,” and one not containing any critical analysis being coded as “no.”11
Two research assistants independently coded the comments while blind to experimental
conditions and hypotheses. Cohen’s Kappa indicates that inter-coder agreement is significantly
greater than chance (K = 0.74, t = 6.65, p < 0.01). Coders met to reconcile discrepancies, and all
tests use the reconciled coding. Table 4, Panel A reports the proportion of participants critically
analyzing the proposal in each experimental condition.
--------------------------------------------- Insert Table 4 here
---------------------------------------------
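Cohen’s Kappa, used above for the inter-coder agreement check, corrects observed agreement for the agreement expected by chance. The sketch below shows the computation on two invented yes/no coding vectors (the paper’s actual value for study 1 is K = 0.74).

```python
# Sketch of Cohen's kappa for two coders' categorical codings.
# The yes/no vectors are invented for illustration.
def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    # Observed proportion of agreement
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal category proportions
    p_chance = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical critical-analysis codings ("yes" = contains criticism)
a = ["yes", "yes", "no", "no", "yes", "no", "no", "yes", "no", "no"]
b = ["yes", "no",  "no", "no", "yes", "no", "no", "yes", "no", "yes"]
kappa = cohens_kappa(a, b)
```

Because both coders say “no” most of the time, raw agreement overstates reliability; kappa discounts that base-rate agreement, which is why it is the appropriate check here.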
4.3.2. Path Model. To investigate the overall model, we use structural equations-based path
analysis. We first assess the overall goodness of fit. The Tucker-Lewis Index, which measures the model’s improvement in fit over a null model, is 104%,
11 Examples of critical analysis items include: “There’s a risk that the unplanned shutdown days is greater than expected” and “safety issues are not addressed because plant will more likely have a bad incident.” Notably, responses indicating that the participant would like to see more numbers or more detail were not coded as items of critical analysis. For an item to be scored as critical analysis, it had to specifically mention some perceived problem with the analysis as presented.
which is well above the accepted cutoff value of 90% (Kline 1998, 131). We confirm the model’s goodness of fit with a conventional χ2 test (χ2(11) = 6.93, p = 0.81) and an Incremental Fit Index (102%), which is above the cutoff of 95% for a well-fitting model (Byrne 2001, 83). Thus,
the overall model provides a good fit for describing the relationships in the data. This model,
along with standardized path coefficients, is presented in Figure 3. We use t-tests to test the
significance of each path coefficient.
--------------------------------------------- Insert Figure 3 here
---------------------------------------------
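For reference, the two incremental fit indices can be computed from the model and null-model chi-squares as sketched below. The model chi-square (6.93 on 11 df) is taken from the text; the null-model values are hypothetical, which is why the exact index values differ slightly from the reported 104% and 102%. Note that both indices can legitimately exceed 100% when the model chi-square falls below its degrees of freedom, as it does here.

```python
# Sketch of the Tucker-Lewis Index and Incremental Fit Index.
# Model chi-square is from the text; null-model values are hypothetical.
def tucker_lewis(chi2_null, df_null, chi2_model, df_model):
    """TLI: improvement in chi-square per df, relative to the null model."""
    return ((chi2_null / df_null) - (chi2_model / df_model)) / \
           ((chi2_null / df_null) - 1)

def incremental_fit(chi2_null, df_null, chi2_model, df_model):
    """IFI: reduction in chi-square scaled by (null chi-square - model df)."""
    return (chi2_null - chi2_model) / (chi2_null - df_model)

tli = tucker_lewis(chi2_null=150.0, df_null=15, chi2_model=6.93, df_model=11)
ifi = incremental_fit(chi2_null=150.0, df_null=15, chi2_model=6.93, df_model=11)
```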
Consistent with our hypothesized model, the (Link 1) relationship between the presence of
quantification and the perceived competence of the division manager is significantly positive (t66
= 6.86, p < 0.01), as is the (Link 2) relationship between perceived competence and the
postponement decision (t66 = 2.95, p < 0.01). Further, our results show that perceived
plausibility is positively associated with the postponement decision (Link 4, t66= 4.69, p < 0.01).
However, participants receiving a quantified analysis did not judge it more plausible that the
proposal would lead to a favorable outcome than participants who received a non-quantified
proposal (Link 3, t66 = 0.25, p = 0.40). Thus, in our analysis of the two processes through which
quantification is predicted to positively affect the persuasiveness of a proposal, we find full
support for one (competence) and only partial support for the other (plausibility).
Recall that we also predict that quantification will have a negative impact on proposal
persuasiveness, through increased critical analysis, and that the relationship between
quantification and critical analysis depends on whether the proposal inputs are subjective or
objective. Accordingly, we test whether the type of proposal (quantified or non-quantified) and
the nature of the inputs to the proposal (subjective or objective) interact in determining whether
critical analysis occurs. Our results show a positive relationship (Link 5) between quantification
and critical analysis for both types of inputs (t66 = 5.73, p < 0.01 for subjective inputs and t66 =
1.74, p = 0.04 for objective inputs), and that the effect is significantly larger for subjective inputs
(χ2(1) = 4.88, p = 0.03), supporting our expectation.12 Further, the path (Link 6) between the
critical analysis variable and the postponement decision variable is negative, as expected (t66 =
1.73, p = 0.04).
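The nested-model comparison described in footnote 12 reduces to a chi-square difference test. The sketch below uses hypothetical model chi-squares chosen so that their difference matches the reported 4.88; with one path freed versus restricted, the difference has one degree of freedom, and for df = 1 the p-value equals erfc(sqrt(diff / 2)).

```python
# Sketch of a nested-model chi-square difference test (1 df).
# The restricted model constrains the quantification -> critical-analysis
# path to be equal across conditions; the unrestricted model frees it.
# Individual chi-square values below are hypothetical.
import math

def chi2_diff_test(chi2_restricted, chi2_unrestricted):
    """Return (difference, p-value) for a 1-df chi-square difference.

    For 1 df, P(chi2 > x) = erfc(sqrt(x / 2)), since a 1-df chi-square
    is the square of a standard normal variate.
    """
    diff = chi2_restricted - chi2_unrestricted
    p_value = math.erfc(math.sqrt(diff / 2))
    return diff, p_value

diff, p = chi2_diff_test(chi2_restricted=11.81, chi2_unrestricted=6.93)
```

A significant difference means the unrestricted model fits significantly better, i.e., the path genuinely varies across the subjective and objective input conditions.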
Our results also support plausibility as a partial mediator of the relationship between
perceived competence and the persuasiveness of the proposal (Link 7). That is, the perceived
competence of the preparer is positively associated with the perceived plausibility of a favorable
outcome (t66 = 2.21, p = 0.02). The relationship between critical analysis and plausibility is
negative, as expected (Link 8, t66 = 1.65, p = 0.05). These results, together with the previously
discussed relationships, demonstrate that perceived competence and critical analysis have both
direct and indirect (mediated by plausibility) effects on persuasiveness.
In summary, the structural equations modeling results are largely consistent with our
predictions. Our analysis reveals two main paths by which quantification influences persuasion,
and verifies that these paths hold simultaneously. First, quantification increases persuasion by
increasing the perceived competence of the proposal preparer. Second, quantification decreases
persuasion by increasing the likelihood that evaluators will critically analyze the details of the
proposal when the analysis is subjective, but less so when the analysis is objective. Finally,
while we did not find that quantification increases the perceived plausibility of the proposal, we
did find a positive relation between the perceived plausibility and the persuasiveness of the
12 This interaction is assessed using a nested model comparison, as suggested by Rigdon et al. (1998). Using this procedure, two models are compared: one that permits the path from quantification to critical analysis to vary across the subjective and objective input conditions, and one that restricts it to be the same in the two conditions. A significant interaction reflects that the unrestricted model provides a significantly better fit for the data than the restricted model.
proposal, and that plausibility mediates the relationships between preparer competence and
persuasiveness and between critical analysis and persuasiveness. These model results
demonstrate that the overall effect described by Summary Hypothesis 1 is driven by the process
we propose.
5. Study 2 Results
In study 2, we hold constant the nature of the inputs (i.e., they are always subjective) and
vary type of proposal (quantified or non-quantified) and the division manager’s incentives
(consistent or inconsistent with those of the firm). We expect quantified analyses to be more
persuasive than non-quantified analyses when the manager preparing the proposal has incentives
consistent with those of the firm. On the other hand, when the manager’s incentives are
inconsistent, we expect quantification to have a less positive effect.
5.1 MANIPULATION AND OTHER CHECKS
When asked to identify the type of proposal they received from the division manager, all but
one participant correctly identified their proposal type, indicating a successful manipulation. All
(90%) of the participants in the short-term (long-term) condition identified the division
manager’s focus as short-term (long-term), indicating that the incentives manipulation was
successful. Further, participants deemed the estimates more reliable when the manager’s
incentives were consistent (mean of 4.87 on an 11-point scale with endpoints labeled “very
unreliable” (0) and “very reliable” (10)) versus inconsistent with those of the firm (mean of 3.62)
(F1,72 = 7.37, p < 0.01).
We asked participants to indicate whether the division manager’s analysis contained
subjective or objective inputs. All but three participants appropriately responded to this
question, indicating that we successfully communicated that the inputs were subjective. We also verified that participants saw the quantified proposal as a persuasion
tactic. As in study 1, participants receiving a quantified proposal thought the division manager
had a stronger desire to get them to buy into the proposal than did those receiving a non-
quantified proposal (F1,73 = 4.22, p = 0.05).
5.2 SUMMARY HYPOTHESIS 2
Recall that Summary Hypothesis 2 predicts that the likelihood of postponement will be
higher when the proposal is quantified and the preparer’s incentives are consistent with the
firm’s than when the proposal is quantified and the preparer’s incentives are inconsistent, or
when the proposal is not quantified, regardless of the preparer’s incentives. Panel C of Table 1
shows the descriptive statistics for the postponement decision by experimental condition, and
Panel D presents the planned contrast test of Summary Hypothesis 2 and follow-up contrasts. As
in study 1, we employ contrast coding to test for this specific pattern of cell means. Contrast
weights are +3 in the quantified proposal/consistent incentives condition, and -1 in the other
three conditions. The planned contrast is statistically significant (F1,73 = 14.73, p < 0.01),
supporting Summary Hypothesis 2.13
Follow-up simple-effects tests reveal that when the preparer’s incentives are consistent with
the firm’s, mean postponement decisions are significantly higher when a quantified proposal is
received than when a non-quantified proposal is received (t73 = 2.04, p = 0.02). However, when
the preparer’s incentives are inconsistent with the firm’s, mean postponement decisions are not
significantly different in the quantified and non-quantified conditions (t73 = 0.41, p = 0.68).
13 As in study 1, we test the robustness of our hypothesis test to a number of alternate contrast codings, all of which meet the requirement that the quantified/consistent condition is higher than any of the other three conditions (4, 2, -2, -2; 4, -1, -2, -2; 4, -1, -2, -1; 4, -1, -1, -2; and 4, 3, -4, -3). All of these tests yield identical inferences (all p-values < 0.01, two tailed).
5.3 PROCESS MODEL TESTS
As for study 1, we rely on structural equations modeling to more thoroughly investigate the
model underlying our persuasion results. The quantification and incentive variables correspond
to our experimental manipulations, and the measures for preparer competence, plausibility, and
critical analysis are calculated as for study 1. Panels B of Tables 2 and 3 provide descriptive
statistics for responses to the items making up the competence and plausibility measures,
respectively.14 Panel B of Table 4 provides the proportion of participants critically analyzing the
proposal in each experimental condition. Cohen’s Kappa indicates that inter-coder agreement is
significantly greater than chance for the critical analysis variable in study 2 (K = 0.77, t = 6.95, p
< 0.01).
5.3.1. Path Model. The Tucker-Lewis Index value of 105% achieved with this model is well
above the accepted cutoff value of 90%. A conventional χ2 test (χ2(11) = 5.84, p = 0.88) and an
Incremental Fit Index (102%) confirm that the model provides a good fit for describing the
relationships in the data. This model, along with standardized path coefficients, is presented in
Figure 4.
--------------------------------------------- Insert Figure 4 here
---------------------------------------------
Our analysis of the two processes through which quantification is predicted to positively
affect the persuasiveness of a proposal finds full support for one (competence) and only partial
support for the other (plausibility), consistent with study 1 results. That is, the relationship
between presence of quantification and perceived competence is significantly positive (Link 1, t68
14 As for study 1, we performed factor analysis to examine whether the four competence questions and the three plausibility questions each comprise a unitary construct. For competence, the eigenvalue for the first factor is 3.00 (explains 75% of the variance) and all other eigenvalues are less than one. For plausibility, the eigenvalue for the first factor is 1.76 (explains 59% of the variance) with all other eigenvalues less than one. These analyses indicate that one “competence” and one “plausibility” factor are highly explanatory of the study 2 data.
= 5.40, p < 0.01), as is the relationship between perceived competence and the postponement
decision (Link 2, t68 = 3.19, p < 0.01). While perceived plausibility is positively associated with
the postponement decision (Link 4, t68 = 3.75, p < 0.01), no significant association exists
between the type of proposal and perceived plausibility (Link 3, t68 = 1.13, p = 0.13).
We also predict that quantification will have a negative impact on proposal persuasiveness,
through increased critical analysis, and that the relationship between quantification and critical
analysis depends on whether the preparer’s incentives are consistent or inconsistent with those of
the firm. As in study 1, we test this interaction using nested model comparisons. Consistent
with the nature of incentives influencing superiors’ coping mechanisms, results of the path
analysis indicate that the type of proposal (quantified or non-quantified) and the nature of the
preparer’s incentives (consistent or inconsistent) interact in determining whether critical analysis
occurs (Link 5, χ2(1) = 17.93, p < 0.01). Specifically, when the preparer’s incentives are
inconsistent with the firm’s, there is a positive relationship between quantification and critical
analysis (t68 = 5.73, p < 0.01). On the other hand, when the manager’s incentives are consistent,
quantification does not increase the level of critical analysis (t68 = 0.87, p = 0.19). As in study 1,
the relationship between critical analysis and postponement is negative (Link 6, t68 = 2.08, p =
0.02). Finally, the mediating variable paths through plausibility (Links 7 and 8) are inferentially
identical to those in study 1 (Link 7, t68 = 2.89, p < 0.01; Link 8, t68 = 1.84, p = 0.04).15
6. Discussion and Conclusions
Our broad objective is to better understand the impact of quantification on persuasion in
managerial decision making. In doing so, we question the conventional wisdom that adding numbers
15 Modification indices approximate the increase in the overall model’s fit by adding or deleting specific paths, thereby allowing examination of whether the components are connected in some way not specified in our model. An examination of modification indices for both studies 1 and 2 suggests that no additional paths will improve the model’s fit, using a p=0.10 level of statistical significance.
to a proposal will make arguments more convincing. To explore this issue, we develop an
original process-based model of how quantification influences proposal persuasiveness. Our
model suggests that a quantified proposal can be more persuasive than a non-quantified proposal
because the former reflects more positively both on the competence of the manager who
prepared it and on the plausibility of a favorable outcome. Offsetting these two positive effects,
though, is the possibility that quantification provides a greater opportunity for scrutinizing the
proposal details than non-quantification. If such scrutiny results in criticism, quantification can
make a proposal less persuasive. Thus, whether quantified proposals are more persuasive than
non-quantified proposals depends on the relative impact of each of these three effects.
Our empirical investigation of the model then focuses on two factors that influence variation
in the likelihood that the proposal evaluator will criticize the details. In our first study, we show
that quantification positively influences persuasion via the greater perceived competence of the
preparer of the quantified (versus non-quantified) proposal. We also show, though, that
quantified proposals receive greater critical analysis than do non-quantified proposals, and that
this effect is significantly greater when inputs are subjective than when they are objective.
Because critical analysis is negatively associated with proposal persuasiveness, the nature of the
inputs to the analysis (subjective or objective) moderates the influence of quantification on
persuasion. We show that this moderation occurs through the critical analysis path.
In our second study, we again show that quantification positively influences persuasion via
the greater perceived competence of the preparer. We also show that when the preparer has
incentives that diverge from the firm’s interests, quantified proposals are more likely to be
critically analyzed than are non-quantified proposals and, as before, this criticism negatively
impacts proposal persuasiveness. In contrast, when the preparer’s incentives are consistent with
the firm’s interests, superiors are willing to rely on the analysis as provided, and quantified proposals
invite no more criticism than do non-quantified proposals. The overall result is that
quantification increases proposal persuasiveness when the preparer’s incentives are consistent
with the firm’s, but not when they are inconsistent. Thus, while prior work in accounting has
focused on the direct effects of incentives on individuals’ behaviors (e.g., Bonner et al. 2000,
Sprinkle 2000, Bonner and Sprinkle 2002), our study identifies indirect effects of incentives on
the judgments and decisions of those evaluating the work of individuals with incentives.
Together, the model and empirical results provide evidence to challenge the conventional
wisdom that quantification always increases proposal persuasiveness. Instead, we show that
whether or not this is true depends on contextual factors. Thus, our study has important
implications for managers and others determining how best to present their proposals.
Specifically, our results suggest that if managers have incentives that diverge from the firm’s,
then unless they can convince their superiors that a quantified proposal is based on objective
data, their efforts to quantify the proposal are unlikely to increase its persuasive power.
Similarly, if the data underlying proposals are believed to be subjective, then unless managers
can convince superiors that their incentives are consistent with the firm’s, quantifying a proposal
is unlikely to make it more persuasive. If managers cannot convince their superiors of the
appropriateness of their incentives or of data objectivity, the cost of quantifying a proposal will
likely exceed its benefits.
In addition, our model identifies the specific means by which quantification influences
persuasion. Knowledge of these mechanisms may allow managers to develop alternative means
of persuasion. For example, a manager with subjective data and inconsistent incentives may
attempt to bolster his perceived competence (by, for example, attaching his resume to the
proposal), thereby negating the unfavorable impact of critical analysis on the persuasiveness of a
quantified proposal. Alternatively, a manager in this position might attempt to short-circuit
critical analysis by supplementing quantification with independent assurances that the details of
the proposal are appropriate.
Our model suggests a number of potentially fruitful avenues for additional research. While
we demonstrate here that the influence of quantification on persuasiveness is moderated by
characteristics of the data (objective or subjective) and characteristics of the preparer (consistent
or inconsistent incentives), future research could examine the role of characteristics of the
proposal evaluator or the situation. For example, evaluators might react to proposals and to the
proposal preparer differently depending on their prior knowledge about the type of proposal
normally used within the company (e.g., quantified or not-quantified) or the type of proposal
normally prepared by the employee (Payne et al. 1993). Further, evaluators might be more likely
to engage in critical analysis if they receive proposals recommending actions that are
inconsistent with their strongly held prior opinions (Ditto and Lopez 1992).
Future research also could examine the influence of alternative forms of quantification on
persuasion. For our initial inquiry into the influence of quantification on persuasion in
managerial accounting, we used a fairly extensive but straightforward form of quantification that
utilized point estimates. It is possible that other forms of quantification are less likely to be
viewed as persuasion tactics and thus less likely to invoke critical analysis. For example,
the use of ranges or sensitivity analysis when the underlying data are subjective (i.e., to explicitly
acknowledge the subjectivity) may increase the persuasive power of quantification.
Future research also could focus on contextual factors influencing the strength of various
paths in our process-based model. In both of our studies, we explore factors that affect the
likelihood that an evaluator will critically analyze the details of a quantified proposal (Link 5).
Other factors may moderate the effect of quantification on perceived proposal plausibility (Link
3) or preparer competence (Link 1). For example, while we find that quantification enhances the
perceived competence of the preparer, this result likely depends on the quality of the analysis.
In summary, we develop an original process model of how quantification influences
managerial decision-making and provide empirical evidence supporting its validity. Our model
is important because it extends the limited literature on quantification (which has focused
primarily on how people equate numbers with words or how memorable numbers are relative to
words) into contexts that are most relevant to managerial accounting. Thus, our model provides
much-needed theoretical guidance to address the anecdotal claims of the persuasive power of
quantification in managerial decision making (Porter 1995).
Although we conduct our study in the context of an operational decision, we believe our
model has implications for the persuasive power of quantification in other accounting and
business contexts as well. For example, as investors and analysts have become more aware of
the conflicted incentives facing company managers, these managers may find it less persuasive
to quantify forward-looking information, which is necessarily subjective in nature. In short, our
model and results suggest that firm managers should recognize that whether quantification is
more persuasive than non-quantification depends on three potentially offsetting factors: the
increased competence that others attribute to managers who provide quantified information, the
enhanced plausibility of quantified information, and the contingent likelihood that evaluators
will critically analyze the details of the quantified information.
TABLE 1
LIKELIHOOD OF ACCEPTING THE PROPOSAL

PANEL A: Means (Standard Deviations) of Likelihood of Postponing the Turnaround, Study 1

                              Objective Inputs      Subjective Inputs     Row Means
Non-quantified Proposal       33.42 (24.16)         27.03 (15.46)         30.05 (20.04)
                              n = 18                n = 20
Quantified Proposal           48.21 (27.61)         30.00 (22.69)         39.35 (26.64)
                              n = 19                n = 18
Column Means                  41.01 (26.70)         28.43 (19.01)

PANEL B: Tests of Summary Hypothesis, Study 1

Contrast                                                        df     Statistic   p-value
Summary Hypothesis 1: Mean postponement judgments will be
  higher in the quantified, objective condition than in the
  other three conditions.                                       1,71   F = 8.89    < 0.01 (2-tailed)
Effect of a quantified v. non-quantified proposal for
  subjective inputs.                                            71     t = 0.40    0.69 (2-tailed)
Effect of a quantified v. non-quantified proposal for
  objective inputs.                                             71     t = 1.97    0.03 (1-tailed)
TABLE 1 (Cont.)
LIKELIHOOD OF ACCEPTING THE PROPOSAL

PANEL C: Means (Standard Deviations) of Likelihood of Postponing the Turnaround, Study 2

                              Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       40.11 (27.58)           27.03 (15.46)             33.40 (22.89)
                              n = 19                  n = 20
Quantified Proposal           54.75 (22.62)           30.00 (22.69)             43.03 (25.62)
                              n = 20                  n = 18
Column Means                  47.62 (25.91)           28.43 (19.01)

PANEL D: Tests of Summary Hypothesis, Study 2

Contrast                                                        df     Statistic   p-value
Summary Hypothesis 2: Mean postponement judgments will be
  higher in the quantified, consistent incentives condition
  than in the other three conditions.                           1,73   F = 14.73   < 0.01 (2-tailed)
Effect of a quantified v. non-quantified proposal for
  inconsistent incentives.                                      73     t = 0.41    0.68 (2-tailed)
Effect of a quantified v. non-quantified proposal for
  consistent incentives.                                        73     t = 2.04    0.02 (1-tailed)

This table shows the descriptive statistics and result of hypothesis tests for the question designed to capture the persuasiveness of the manager's proposal to postpone the planned turnaround for one year. Panel A shows the means and standard deviations by experimental condition. Panel B shows planned contrasts for study 1. Panels C and D contain the corresponding information for study 2. The postponement question asked participants to rate the likelihood that they would recommend postponing for one year the planned turnaround, with 0 representing "definitely will not support postponing the turnaround" and 100 representing "definitely will support postponing the turnaround." In study 1, the preparer's incentives were always inconsistent with the firm's, and type of proposal and nature of inputs were manipulated. In study 2, the nature of the inputs was always subjective, and type of proposal and preparer's incentives were manipulated. The type of proposal manipulation captures whether the proposal was quantified or not quantified. The nature of inputs manipulation captures whether the proposal was based on subjective or objective inputs. The incentives manipulation captures whether the proposal preparer's incentives were consistent or inconsistent with the long-run interests of the firm.
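The contrast statistics in Panels B and D can be recovered from the reported cell statistics alone. The sketch below (Python; our own illustration, not the authors' code, and the +3/-1/-1/-1 weights are our reading of Summary Hypothesis 1) recomputes the Panel B tests from Panel A's means, standard deviations, and cell sizes, using the pooled within-cell variance as the error term.

```python
import math

# Cell statistics from Table 1, Panel A (Study 1): (mean, standard deviation, n)
cells = [
    (48.21, 27.61, 19),   # quantified, objective
    (33.42, 24.16, 18),   # non-quantified, objective
    (27.03, 15.46, 20),   # non-quantified, subjective
    (30.00, 22.69, 18),   # quantified, subjective
]

def planned_contrast(stats, weights):
    """t-statistic for a planned contrast computed from cell summary
    statistics, with the pooled within-cell variance (MSE) as error term."""
    ns = [n for _, _, n in stats]
    df_error = sum(ns) - len(ns)
    mse = sum((n - 1) * s ** 2 for _, s, n in stats) / df_error
    psi = sum(w * m for w, (m, _, _) in zip(weights, stats))  # contrast value
    se = math.sqrt(mse * sum(w ** 2 / n for w, n in zip(weights, ns)))
    return psi / se, df_error

# Summary Hypothesis 1: quantified/objective cell vs. the other three cells.
t, df = planned_contrast(cells, [3, -1, -1, -1])
print(f"t({df}) = {t:.2f}, F(1,{df}) = {t ** 2:.2f}")  # F matches Panel B's 8.89

# Simple effect of quantification for objective inputs (weights 1, -1, 0, 0).
t2, _ = planned_contrast(cells, [1, -1, 0, 0])
print(f"t({df}) = {t2:.2f}")                           # matches Panel B's 1.97
```

Substituting Panel C's four cells and the analogous weights reproduces Panel D's F = 14.73 and t = 2.04 in the same way.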
TABLE 2
ASSESSMENTS OF MANAGER COMPETENCE

PANEL A: Means (Standard Deviations) of Measures of Manager's Competence, Study 1

Manager Preparedness          Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       3.67 (2.75)        2.88 (2.11)         3.25 (2.43)
Quantified Proposal           7.17 (1.65)        5.94 (1.76)         6.57 (1.91)
Column Means                  5.47 (2.84)        4.33 (2.56)

Manager Knowledge             Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       4.57 (2.14)        4.45 (1.98)         4.51 (2.03)
Quantified Proposal           6.32 (1.41)        5.69 (1.86)         6.01 (1.65)
Column Means                  5.47 (1.98)        5.04 (2.00)

Manager Effort                Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       3.13 (2.43)        2.48 (1.67)         2.78 (2.07)
Quantified Proposal           6.53 (1.70)        5.58 (1.85)         6.07 (1.81)
Column Means                  4.87 (2.68)        3.95 (2.34)

Manager Competence            Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       4.89 (1.90)        4.28 (1.62)         4.57 (1.76)
Quantified Proposal           6.90 (1.32)        5.88 (1.55)         6.42 (1.50)
Column Means                  5.92 (1.90)        5.01 (1.76)

PANEL B: Means (Standard Deviations) of Measures of Manager's Competence, Study 2

Manager Preparedness          Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       3.76 (2.25)             2.88 (2.11)               3.31 (2.19)
Quantified Proposal           5.60 (2.26)             5.94 (1.76)               5.76 (2.12)
Column Means                  4.71 (2.41)             4.33 (2.56)

Manager Knowledge             Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       4.40 (2.07)             4.45 (1.98)               4.42 (2.00)
Quantified Proposal           5.53 (2.17)             5.69 (1.86)               5.61 (2.00)
Column Means                  4.97 (2.17)             5.04 (2.00)

Manager Effort                Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       3.18 (2.19)             2.48 (1.67)               2.82 (1.95)
Quantified Proposal           5.73 (1.92)             5.58 (1.85)               5.66 (1.86)
Column Means                  4.49 (2.40)             3.95 (2.34)

Manager Competence            Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       5.26 (2.07)             4.28 (1.62)               4.76 (1.89)
Quantified Proposal           6.28 (1.97)             5.88 (1.55)               6.10 (1.78)
Column Means                  5.78 (2.06)             5.01 (1.76)

This table shows descriptive statistics for the four questions designed to capture participants' assessments of the firm manager's competence. Panel A shows the means and standard deviations by experimental condition for study 1. Panel B shows the corresponding information for study 2. The preparedness question asked participants about the division manager's level of preparation, with 0 representing "not at all prepared" and 10 representing "fully prepared." The knowledge question asked participants to rate the manager's level of knowledge about the costs and benefits of a turnaround, with 0 representing "not at all knowledgeable" and 10 representing "extremely knowledgeable." The effort question asked participants to rate the amount of effort the manager put into the information provided, with 0 representing "little effort" and 10 representing "considerable effort." Finally, the competence question asked participants to rate the level of competence of the division manager, with 0 representing "not at all competent" and 10 indicating "extremely competent." Independent variables are described in Table 1.
TABLE 3
ASSESSMENTS OF PLAUSIBILITY OF A FAVORABLE OUTCOME

PANEL A: Means (Standard Deviations) of Measures of Plausibility, Study 1

Plausible that Cost Savings Could Materialize?
                              Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       3.21 (3.33)        1.94 (1.75)         2.54 (2.66)
Quantified Proposal           5.62 (2.02)        4.44 (2.18)         5.05 (2.15)
Column Means                  4.45 (2.96)        3.13 (2.32)

Plausible that Cost Savings Good in Long Run?
                              Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       4.66 (3.14)        4.14 (2.91)         4.39 (2.99)
Quantified Proposal           4.50 (2.52)        4.33 (2.67)         4.42 (2.56)
Column Means                  4.58 (2.80)        4.23 (2.76)

Plausible that Benefits Greater than Cost Savings?
                              Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       6.33 (2.18)        5.56 (2.60)         5.93 (2.41)
Quantified Proposal           5.46 (2.61)        5.41 (2.97)         5.44 (2.76)
Column Means                  5.89 (2.42)        5.49 (2.75)

PANEL B: Means (Standard Deviations) of Measures of Plausibility, Study 2

Plausible that Cost Savings Could Materialize?
                              Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       3.00 (2.52)             1.94 (1.75)               2.46 (2.20)
Quantified Proposal           4.85 (2.10)             4.44 (2.18)               4.66 (2.12)
Column Means                  3.95 (2.47)             3.13 (2.32)

Plausible that Cost Savings Good in Long Run?
                              Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       4.76 (2.72)             4.14 (2.91)               4.44 (2.80)
Quantified Proposal           5.95 (2.28)             4.33 (2.67)               5.18 (2.57)
Column Means                  5.37 (2.54)             4.23 (2.76)

Plausible that Benefits Greater than Cost Savings?
                              Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       5.55 (2.80)             5.56 (2.60)               5.56 (2.66)
Quantified Proposal           6.63 (1.85)             5.41 (2.97)               6.05 (2.49)
Column Means                  6.10 (2.39)             5.49 (2.75)

This table shows descriptive statistics for the three questions designed to capture participants' assessments of the plausibility of the manager's proposal. Panel A shows the means and standard deviations by experimental condition for study 1. Panel B shows the corresponding information for study 2. The first question asked participants about the extent to which the manager's analysis demonstrated how the cost savings associated with postponing the planned turnaround could materialize, with 0 representing "to a very small extent" and 10 representing "to a very great extent." The second question asked participants to rate how plausible it was that extending the turnaround could be good for the company in the long run, with 0 representing "not at all plausible" and 10 representing "very plausible." The final question asked participants to rate the extent to which the participant believed that it was plausible that the benefits of postponing the turnaround could exceed its costs, with 0 representing "not at all plausible" and 10 representing "very plausible." Independent variables are described in Table 1.
TABLE 4
CRITICAL ANALYSIS OF THE PROPOSAL

PANEL A: Proportion (Percentage) of Participants Making Comments Critical of the Proposal, Study 1

                              Objective Inputs   Subjective Inputs   Row Means
Non-quantified Proposal       3/18 (16.7%)       2/20 (10.0%)        5/38 (13.2%)
Quantified Proposal           8/19 (42.1%)       14/18 (77.8%)       22/37 (59.5%)
Column Means                  11/37 (29.7%)      16/38 (42.1%)

PANEL B: Proportion (Percentage) of Participants Making Comments Critical of the Proposal, Study 2

                              Consistent Incentives   Inconsistent Incentives   Row Means
Non-quantified Proposal       5/19 (26.3%)            2/20 (10.0%)              7/39 (17.9%)
Quantified Proposal           3/20 (15.0%)            14/18 (77.8%)             17/38 (44.7%)
Column Means                  8/39 (20.5%)            16/38 (42.1%)

This table shows the number of participants criticizing the details of the proposal in their explanations of how they formed their recommendations about whether or not to postpone the turnaround for one year. Panel A shows the information for study 1 and Panel B relates to study 2. For both studies, explanations were coded for whether they contained comments indicating that participants questioned the accuracy or completeness of the information or the validity of the analysis method. The critical analysis measure is dichotomous. Independent variables are described in Table 1.
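Table 4 reports only the raw proportions; the paper's formal tests of these effects come through the path models rather than cell-by-cell comparisons. Purely as an illustration of how two such cells could be compared, the stdlib sketch below (our own, not the authors' analysis) runs a two-proportion z-test on the Study 1 quantified cells: 8/19 critical under objective inputs versus 14/18 under subjective inputs.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z-test for equality of proportions using the pooled
    standard error; returns the z statistic and a two-tailed normal p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p from the normal CDF
    return z, p

# Study 1, quantified proposals: 8/19 (objective) vs. 14/18 (subjective).
z, p = two_proportion_z(8, 19, 14, 18)
print(f"z = {z:.2f}, two-tailed p = {p:.3f}")
```

With cell sizes this small, an exact test (e.g., Fisher's) would be the more conservative choice; the normal approximation is used here only to keep the sketch dependency-free.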
FIGURE 1
HYPOTHESIZED MODEL OF THE EFFECTS OF QUANTIFICATION ON PROPOSAL PERSUASIVENESS

This figure illustrates our process-based model of how quantification influences the persuasiveness of a managerial proposal. The parenthetical comment next to each link represents the expected coefficient sign.

* The strength of Link 5 is hypothesized to be moderated by characteristics of the proposal (i.e., the nature of the inputs to the proposal) and characteristics of the proposal preparer (i.e., his incentives).

[Figure not reproduced: a path diagram connecting Quantification, Perceived Competence of Preparer, Perceived Plausibility of Favorable Outcome, Critical Analysis, and Persuasiveness (Postponement Decision) through eight links, with expected signs Link 1 (+), Link 2 (+), Link 3 (+), Link 4 (+), Link 5 (+)*, Link 6 (-), Link 7 (+), and Link 8 (-).]
FIGURE 2
EXPERIMENTAL DESIGN

The entire cube represents a fully crossed factorial design involving the three factors investigated in this paper. The gray area represents Study 1, in which incentives were always inconsistent, and the experimental design varied the nature of the inputs (subjective vs. objective) and the form of the proposal (quantified vs. non-quantified). The dotted area represents Study 2, in which the inputs were always subjective, and the experimental design varied the incentives (consistent vs. inconsistent) and the form of the proposal (quantified vs. non-quantified). Thus, in our design, Study 1 and Study 2 share two cells – subjective inputs / inconsistent incentives / quantified proposal, and subjective inputs / inconsistent incentives / non-quantified proposal. The white cells represent conditions for which hypotheses were not developed and data were not collected.

[Figure not reproduced: a cube whose three axes are incentives (inconsistent vs. consistent), inputs (subjective vs. objective), and proposal form (quantified vs. non-quantified).]
FIGURE 3
EMPIRICAL MODEL OF THE EFFECTS OF QUANTIFICATION ON PROPOSAL PERSUASIVENESS – STUDY 1

This figure shows the results of the path analysis designed to simultaneously test the relationships among our manipulated variable, mediating variables, and our primary dependent measure. Shown next to each path are the standardized path coefficients and corresponding t-statistics and p-values (one-tailed). Overall goodness of fit is measured through the Tucker-Lewis index (104%), which measures the proportion of improvement of the fit of the model over a null model, and confirmed with a χ² test (χ²(11) = 6.93, p = 0.81) and an Incremental Fit Index (102%).

[Figure not reproduced: a path diagram with nodes Quantification, Perceived Competence of Preparer, Perceived Plausibility of Favorable Outcome, Critical Analysis, and Persuasiveness (Postponement Decision). Estimated standardized path coefficients:
Link 1: +0.65, t66 = 6.86, p < 0.01
Link 2: +0.34, t66 = 2.95, p < 0.01
Link 3: +0.04, t66 = 0.25, p = 0.40
Link 4: +0.49, t66 = 4.69, p < 0.01
Link 5: Objective: +0.28, t66 = 1.74, p = 0.04; Subjective: +0.68, t66 = 5.73, p < 0.01; Difference: χ²(1) = 4.88, p = 0.03
Link 6: -0.20, t66 = 1.73, p = 0.04
Link 7: +0.33, t66 = 2.21, p = 0.02
Link 8: -0.23, t66 = 1.65, p = 0.05]
FIGURE 4
EMPIRICAL MODEL OF THE EFFECTS OF QUANTIFICATION ON PROPOSAL PERSUASIVENESS – STUDY 2

This figure shows the results of the path analysis designed to simultaneously test the relationships among our manipulated variable, mediating variables, and our primary dependent measure. Shown next to each path are the standardized path coefficients and corresponding t-statistics and p-values (one-tailed). Overall goodness of fit is measured through the Tucker-Lewis index (105%), which measures the proportion of improvement of the fit of the model over a null model, and confirmed with a χ² test (χ²(11) = 5.84, p = 0.88) and an Incremental Fit Index (102%).

[Figure not reproduced: a path diagram with nodes Quantification, Perceived Competence of Preparer, Perceived Plausibility of Favorable Outcome, Critical Analysis, and Persuasiveness (Postponement Decision). Estimated standardized path coefficients:
Link 1: +0.57, t68 = 5.40, p < 0.01
Link 2: +0.36, t68 = 3.19, p < 0.01
Link 3: +0.14, t68 = 1.13, p = 0.13
Link 4: +0.43, t68 = 3.75, p < 0.01
Link 5: Consistent: -0.14, t68 = 0.87, p = 0.19; Inconsistent: +0.69, t68 = 5.73, p < 0.01; Difference: χ²(1) = 17.93, p < 0.01
Link 6: -0.24, t68 = 2.08, p = 0.02
Link 7: +0.32, t68 = 2.89, p < 0.01
Link 8: -0.22, t68 = 1.84, p = 0.04]
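Figures 3 and 4 report standardized coefficients estimated simultaneously with structural equations software. For intuition about what those coefficients represent, the stdlib sketch below (our own illustration on simulated data, not the authors' estimation code) recovers standardized path coefficients for a simplified single-mediator chain X → M → Y directly from Pearson correlations: in a recursive model, each equation's standardized coefficients follow in closed form from the correlation matrix.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Simulate a fully mediated chain: "quantification" -> mediator -> outcome.
random.seed(42)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]          # exogenous variable
m = [0.6 * xi + random.gauss(0, 1) for xi in x]     # mediator
y = [0.5 * mi + random.gauss(0, 1) for mi in m]     # outcome

r_xm, r_my, r_xy = pearson(x, m), pearson(m, y), pearson(x, y)

# Standardized coefficients for the recursive model M = a*X; Y = b*M + c*X:
a = r_xm
b = (r_my - r_xy * r_xm) / (1 - r_xm ** 2)   # M -> Y, controlling for X
c = (r_xy - r_my * r_xm) / (1 - r_xm ** 2)   # direct X -> Y path

print(f"a = {a:.2f}, b = {b:.2f}, direct path c = {c:.2f}")
```

With full mediation the direct path c should be near zero while a and b remain sizable, mirroring the pattern in Figure 3, where quantification's effect on persuasiveness flows through the mediators rather than through a direct path.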
REFERENCES

Ahluwalia, R. 2000. Examination of psychological processes underlying resistance to persuasion. Journal of Consumer Research 27 (September): 217-232.

Anderson, U., K. Kadous, and L. Koonce. 2003. The role of incentives to manage earnings and quantification in auditors' evaluations of management-provided evidence. Auditing: A Journal of Practice and Theory (forthcoming).

Bamber, E. M. 1983. Expert judgment in the audit team: A source reliability approach. Journal of Accounting Research 21 (2): 396-412.

Birdsell, D. 1998. Presentation Graphics. McGraw-Hill. On-line. Available: http://www.mhhe.com/socscience/comm/luccas/student/birdsell/birdsell12.htm.

Bonner, S. E., R. Hastie, G. B. Sprinkle, and S. M. Young. 2000. A review of the effects of financial incentives on performance in laboratory tasks: Implications for management accounting. Journal of Management Accounting Research 12: 19-64.

Bonner, S. E., and G. B. Sprinkle. 2002. The effects of monetary incentives on effort and task performance: Theories, evidence, and a framework for research. Accounting, Organizations and Society 27 (4-5): 303-345.

Byrne, B. 2001. Structural Equation Modeling with AMOS: Basic Concepts, Applications and Programming. Mahwah, NJ: Lawrence Erlbaum Associates.

Ditto, P. H., and D. F. Lopez. 1992. Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology 63 (4): 568-584.

Eagly, A., and S. Chaiken. 1993. The Psychology of Attitudes. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers.

Fischhoff, B. 1995. Risk perception and communication unplugged: Twenty years of progress. Risk Analysis 15: 137-145.

Friestad, M., and P. Wright. 1994. The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research 21 (June): 1-31.

Healy, P., and J. Wahlen. 1999. A review of the earnings management literature and its implications for standard setting. Accounting Horizons 13 (December): 365-383.

Hilton, R. W. 1994. Managerial Accounting, 2nd edition. New York, NY: McGraw-Hill, Inc.

Hirst, E. 1994. Auditors' sensitivity to source reliability. Journal of Accounting Research 32 (Spring): 113-126.

Hutton, A., G. Miller, and D. Skinner. 2003. The role of supplementary statements in the disclosure of management earnings forecasts. Working paper, Dartmouth College.

Institute of Management Accountants. 1999. Counting More, Counting Less: Transformations in the Management Accounting Profession, The 1999 Practice Analysis. Montvale, NJ: Institute of Management Accountants.

Jones, E., and K. Davis. 1965. From acts to dispositions: The attribution process in person perception. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 2, pp. 219-266). New York, NY: Academic Press.

Kida, T., and J. Smith. 1995. The encoding and retrieval of numerical data for decision making in accounting contexts: Model development. Accounting, Organizations and Society 20 (7/8): 585-610.

Kida, T., J. Smith, and M. Maletta. 1998. The effects of encoded memory traces for numerical data on accounting decision making. Accounting, Organizations and Society 23 (5/6): 451-466.

Kline, R. B. 1998. Principles and Practices of Structural Equation Modeling. New York, NY: The Guilford Press.

Libby, R., and W. Kinney. 2000. Does mandated audit communication reduce opportunistic corrections to manage earnings to forecasts? The Accounting Review 75 (October): 383-404.

McLeary, J., R. Haasnoot, J. McLeary, and S. Drake. 2000. By the Numbers: Using Facts and Figures to Get Your Projects, Plans, and Ideas Approved. New York, NY: American Management Association.

Payne, J., J. Bettman, and E. Johnson. 1993. The Adaptive Decision Maker. Cambridge, England: Cambridge University Press.

Petty, R., and J. Cacioppo. 1986. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York, NY: Springer-Verlag.

Porter, T. 1995. Trust in Numbers. Princeton, NJ: Princeton University Press.

Rigdon, E. E., R. E. Schumacker, and W. Wothke. 1998. A comparative review of interaction and nonlinear modeling. In Interaction and Nonlinear Effects in Structural Equation Modeling, edited by R. E. Schumacker and G. A. Marcoulides. Mahwah, NJ: Erlbaum Associates.

Sprinkle, G. B. 2000. The effect of incentive contracts on learning and performance. The Accounting Review 75 (3): 299-326.

Viswanathan, M., and T. Childers. 1996. Processing of numerical and verbal product information. Journal of Consumer Psychology 5 (4): 359-385.

Wallsten, T., D. Budescu, R. Zwick, and S. Kemp. 1993. Preferences and reasons for communicating probabilistic information in verbal or numerical terms. Bulletin of the Psychonomic Society 31 (2): 135-138.

Winer, B. J., D. R. Brown, and K. M. Michels. 1991. Statistical Principles in Experimental Design, 3rd edition. Boston, MA: McGraw-Hill.