
RP 1/08

DOES THE BALANCED SCORECARD WORK:

AN EMPIRICAL INVESTIGATION

© Andrew Neely

Research Paper Series

The Cranfield forum for the latest thinking in management research

Available online at www.som.cranfield.ac.uk/som/research/researchpapers.asp

Research Paper no. 1/08

DOES THE BALANCED SCORECARD WORK:

AN EMPIRICAL INVESTIGATION

Professor Andrew Neely *

Centre for Business Performance, School of Management, Cranfield University, Cranfield, Bedford MK43

0AL, UK and Advanced Institute of Management Research, 4th Stewart House, 32 Russell Square, London

WC1B 5DN, UK

January 2008

The Cranfield School of Management Research Paper Series was established in 1987. It is

devoted to the dissemination of new insights which advance the theory and practice of business and the management of organisations.

The Research Papers in this Series often represent preliminary work. They are circulated to

encourage cross-fertilisation of ideas both between Cranfield scholars and across the wider

academic community.

© Andrew Neely. All rights reserved.

Any opinions expressed in this Research Paper are those of the author(s) and not those of

Cranfield School of Management.

ISBN: 1 85905 192 8

Editor: Catarina Figueira

For further information, please contact: Cranfield School of Management Research Paper Series

Cranfield University, Cranfield, Bedford MK43 0AL, UK

Tel. +44 1234 751122 extension 3846

Fax. +44 1234 752136

E-mail. [email protected]

* Corresponding author. Tel. +44 1234 751122 ; Fax + 44 1234 757409

E-mail. [email protected]


Research Paper No. 1/08

DOES THE BALANCED SCORECARD WORK:

AN EMPIRICAL INVESTIGATION

Abstract

Commentators suggest that between 30 and 60% of large US firms have adopted the

Balanced Scorecard, first described by Bob Kaplan and David Norton in their seminal

Harvard Business Review paper of 1992 (Kaplan and Norton, 1992; Marr et al, 2004).

Empirical evidence that explores the performance impact of the balanced scorecard,

however, is extremely rare and much that is available is anecdotal at best. This paper

reports a study that set out to explore the performance impact of the balanced scorecard by

employing a quasi-experimental design. Up to three years' worth of financial data were

collected from two sister divisions of an electrical wholesale chain based in the UK, one of

which had implemented the balanced scorecard and one of which had not. The relative

performance improvements of geographically matched pairs of branches were compared to

establish what, if any, performance differentials existed between the branches that had

implemented the balanced scorecard and those that had not. The key findings of the study

are that while the Electrical division – the division that implemented the balanced

scorecard – sees improvements in sales and gross profit, similar performance

improvements are also observed in the sister division. Hence the performance impact of

the balanced scorecard has to be questioned. Clearly further work on this important topic

is required in similar settings where natural experiments occur.

Keywords: performance measurements, performance management, performance impact,

balanced scorecard.


1. Background and Context

Commentators suggest that between 30 and 60% of large US firms have adopted the

Balanced Scorecard, first described by Bob Kaplan and David Norton in their seminal

paper of 1992 (Frigo and Krumwiede, 1999; Frigo, 2000). Despite this impressive take up,

however, there is a paucity of empirical evidence that explores the performance impact of

the balanced scorecard and indeed of performance measurement systems more generally

(Franco-Santos et al. 2007; Melnyk et al. 2004). In fact the extant literature has tended to

focus on the problems with traditional measurement systems and how these can be

overcome with alternative measurement methods and frameworks, such as the Balanced

Scorecard and the Performance Prism (Kaplan and Norton, 1992; Neely et al., 2002). As a

result much work has been carried out on the design and deployment of measurement

systems, but relatively little on their impact (Bourne et al., 2000; Neely et al., 2000).

The aim of this paper is to explore the performance impact of balanced scorecards. The

paper draws on data gathered over a three-year period from a major wholesaler of electrical

components in the UK, referred to as Electrical. The board of Electrical decided to

implement a balanced scorecard in late 1999 and began working on the design of their

balanced scorecard in early 2000. They spent six months completing the design phase of

the process and a further six months rolling the balanced scorecard out across their UK

branch network. On 1st January 2001 the balanced scorecard was formally launched and

from that day the business stopped releasing information on branch profitability, which

previously had been the main method of branch measurement. They also changed the

firm’s incentive scheme, moving away from a bonus based on branch profitability and to a

bonus based on performance against the balanced scorecard.

Importantly, midway through the rollout phase of the project (in the 3rd quarter of 2000),

Electrical was acquired by another UK wholesaler of electrical components. The acquiring

company – referred to in the paper as Sister – continued to use traditional methods of

performance reporting at the branch level, namely profit and loss accounts, throughout

2001. This situation presented the researchers with a valuable opportunity – effectively a

naturally occurring experiment. In essence the research team was able to construct a

sample of 56 pairs of matched branches (based on location), drawn from the samples

provided by Electrical and Sister. One branch in each matched sample had adopted the

balanced scorecard while the other had not. This matched sampling technique, known as

quasi-experimental design, is a powerful methodology for assessing the impact of

organizational changes (Cook and Campbell, 1979). In this study, the matched sample

provided two particularly valuable controls. First, because both divisions essentially sold the

same products to the same range of customers, many sectoral differences are accounted


for. Second, the geographic matching controls for local economic activity – both in terms

of demand and in terms of labour availability – two key determinants of branch performance.

The rest of the paper consists of five main sections. In the first the relevant literature is

explored, as this sets the scene for the study. In the second the methodology used in this

study is explained and justified. The third section presents a summary of the balanced

scorecard design and deployment process used by the firm. This is important as one of the

potential criticisms of this research – namely that the balanced scorecard was poorly

designed and deployed – is negated when one considers how carefully the organisation approached the design and deployment of its balanced scorecard. The fourth section

presents the quantitative analysis of the impact of the balanced scorecard, using branch

level performance data. These data are compared to performance data from Electrical’s

sister company, which as already discussed had not implemented the balanced scorecard at

that time. The fifth and final section of the paper summarises the implications of this

research for both the practitioner and academic audiences.

2. The Impact of Performance Measures – Relevant Literature

The shortcomings and dysfunctional consequences of performance measurement systems have been discussed in the academic literature for at least fifty years (Ridgway, 1956), but

recently there has been a flurry of activity (Chenhall, 2005; Norreklit, 2000; Norreklit,

2003). Throughout the 1980s vocal and influential authors criticised the measurement

systems used by many firms (Johnson and Kaplan, 1988; Hayes and Abernathy, 1980). By

the 1990s the noise made by these voices had grown to a crescendo (Melnyk et al., 2004;

Neely et al., 1995; Neely, 1999) and increasing numbers of firms appeared to be "re-

engineering" their measurement systems, with data suggesting that between 1995 and

2000, 30 to 60% of companies transformed their performance measurement systems (Frigo

and Krumwiede, 1999; Frigo, 2000). Evidence suggests, for example, that by 2001 the

balanced scorecard had been adopted by 44% of organisations worldwide (57% in the UK,

46% in the US and 26% in Germany and Austria). And more recent data suggests that

85% of organisations had performance measurement system initiatives underway by the

end of 2004 (Marr et al., 2004; Rigby, 2001; Rigby, 2005; Silk, 1998; Speckbacher et al.,

2003). However, cautionary evidence from three Austrian academics reported that 8% of

174 companies from German-speaking countries decided not to implement a performance

measurement system (and a balanced scorecard in particular) because they could not see

advantages or ‘positive impact’, especially given the implementation effort required

(Speckbacher et al., 2003).


Somewhat surprisingly (especially given all of this activity) there has been relatively little

empirical research into whether the balanced scorecard actually works. In fact this

criticism can be levelled at the field of performance measurement more generally, which

has seen much prescription, but relatively little empirical research (Franco-Santos et al.

2007). Kaplan and Norton have made some efforts to demonstrate the impact of the

balanced scorecard, but their approach has been to use largely anecdotal cases (Kaplan and

Norton, 2001). An important and notable exception is the work of Chris Ittner and David

Larcker, who report that only 23% of organizations that they surveyed consistently built

and tested causal models to underpin their measurement systems, but that these 23%

achieved 2.95% higher return on assets and 5.14% higher return on equity (Ittner and

Larcker, 2003).

Similar studies, executed less robustly, have been undertaken by consultancy and

commercial research firms. These studies tend to suggest that organisations managed

through ‘balanced’ performance measurement systems perform better than those that are

not (Gates, 1999; Lingle and Schiemann, 1996). Lingle and Schiemann (1996) report

evidence that organisations making more extensive use of financial and non-financial

measures and linking strategic measures to operational measures have higher stock market

returns (Lingle and Schiemann, 1996). Lawson et al.'s (2003) study shows that the

use of a performance measurement system as a management control tool reduces overhead

costs by 25% and increases sales and profits (Lawson et al. 2003). Other authors, such as

de Waal (2003) and Sandt et al (2001), report less tangible benefits from the use of

performance measurement systems (de Waal, 2003; Sandt et al. 2001). Dumond (1994)

and Sandt et al (2001) suggest that using balanced performance measurement systems

improves the decision-making performance of managers and employees (de Waal, 2003;

Dumond, 1994). Lawson et al (2003) and Dumond (1994) found that using performance

measurement systems and linking scorecards to compensation significantly increased

employee satisfaction, although Ittner et al. (2003b) present evidence to the contrary

(Dumond, 1994; Ittner et al., 2003b; Lawson et al., 2003).

Ketelhohn (1999) and Vasconcellos (1988) found that the identification and selection of

appropriate measures and key performance indicators enhance the implementation and

acceptance of business strategy, at the same time as enhancing employee understanding of

the business (Ketelhohn, 1998; Vasconcellos, 1988). Furthermore, Forza and Salvador’s

research (2000, 2001) supports the suggestion that employee communication that focuses

on feedback from measures increases collaboration and facilitates buy-in (Forza and

Salvador, 2000; Forza and Salvador, 2001).

In recent years there have been some attempts to evaluate the impact of the balanced

scorecard more robustly. Perhaps the best examples are the studies reported by Davis and Albright (2004), Ittner et al. (2003a) and Malina and Selto (2001). The study by Ittner et al. (2003a) explored the performance impact of the balanced scorecard and the associated


incentive scheme in a global financial services firm (Ittner et al., 2003a). This study is

rather critical of the balanced scorecard, arguing that the inherent subjectivity in the

incentive scheme associated with the balanced scorecard undermined its credibility in the organisation. Malina and Selto (2001) studied the operation of

the balanced scorecard in multiple divisions of a large international manufacturing firm.

They conclude that the balanced scorecard, as designed and implemented, proved an

effective device for controlling corporate strategy, but they provide limited empirical

evidence to support this assertion (Malina and Selto, 2001).

The final study – the one reported by Davis and Albright (2004) – most closely resembles

the study reported in this paper. The Davis and Albright (2004) study uses a control group

to evaluate the performance impact of the balanced scorecard. Davis and Albright (2004)

select nine matched pairs of branches in the banking sector and compare their performance.

This study finds evidence of superior financial performance in those branches that adopt

the balanced scorecard (Davis and Albright, 2004). In essence, the research reported in

this paper adopts a similar methodology to the Davis and Albright study, but bases the

study on a far larger data set – 56 matched pairs of branches instead of nine – and three years'

worth of financial data rather than two.

3. Research Methodology and Questions

As stated already, the aim of the research reported in this paper was to address the question: what is the performance impact of the balanced scorecard? To address this question the author decided to adopt the quasi-experimental design methodology advocated by Cook and Campbell (1979). Core to the experimental design was the author's involvement (over

an extended period) in the design and deployment of a balanced scorecard in a multi-

branch electrical wholesale business based in the UK. The author worked with the board

of the business and other members of the firm's senior management team over a two-year

period, facilitating the design and deployment of their balanced scorecard. The length and

extent of this involvement delivered significant benefits to the research in several ways.

First, the author's involvement in the entire design and deployment process gave him detailed insight and valuable contextual information, which was used in the subsequent data analysis. Second, the author was able to obtain unparalleled access to the organisation and, in particular, to highly sensitive and confidential performance data.

In essence the author was able to access sales and profitability data from two sister

organizations. The first, Electrical, was the one which implemented the balanced

scorecard in January 2001, following a one-year design and deployment process. Electrical

provided data for 122 branches. The second company, Sister, continued to use traditional

methods of performance reporting throughout the period of the study and provided data


from 190 branches. These two sets of data were compared and branches based in the same

location were matched. This matching by location enabled the research to compare

changes in organizational performance over the duration of the study, while controlling for

local economic conditions, product range and customer base.

Before the financial data are presented, the balanced scorecard design and deployment

process adopted by the firm will be explained. The description of the design and

deployment processes is central to the rest of the paper as it illustrates the effort that the

firm put into designing and deploying their balanced scorecard, thereby negating one

potential criticism of this paper – namely that the balanced scorecard being evaluated is

simply a poorly designed one.

4. The Design and Deployment of Electrical's Balanced Scorecard

Electrical’s history is an interesting one. Established late in the 19th century, the business

had an annual turnover of over £200 million and held 8.5% of the UK’s market share in

2002. A decade earlier the situation had been rather different. Electrical was a traditional

family business until the early 1990s. The organisation was tightly controlled and highly

centralised. Power rested with two members of the family and their inner circle. Generally the external perception of the business was that it was old-fashioned and offered

no particular threat to its competitors.

In 1993 a new Managing Director was appointed. As well as bringing new ideas, he

engaged in a significant change programme. New IT systems were introduced, a new

national distribution centre was constructed and processes were streamlined. The branch

managers, who previously had been tightly managed from the centre, were encouraged to

be more entrepreneurial, acting as if they were “running their own businesses”. The

impact of these changes was dramatic. Over a five year period Electrical experienced

significant growth. Revenues increased from £50 million to £200 million, market share

from 2% to 8.5% and the business delivered 5 consecutive years of 30% profit growth.

The Need for Change

While this new entrepreneurial culture resulted in business success, it also encouraged

some destructive behaviour. A key part of the culture change that the new Managing

Director introduced was to encourage branch managers to behave as if they were running

their own businesses. The branch managers were incentivised to be entrepreneurial,

constantly seeking out new business opportunities [to reinforce this, branch manager bonus payments were based on branch profitability]. One consequence of this was that different


branches would compete with one another for the same order. Under the accounting

system employed by the business the branch that delivered the order was credited with the

sale, so it was in every branch manager's financial interest [because of the bonus scheme]

to ensure that their branch won every order they could. Not surprisingly this resulted in

situations where one of Electrical’s branches would undercut another of Electrical’s

branches to win an order. Effectively the branches traded away margin to beat their sister

branches in bidding wars.

In addition to competing for orders, the branches shared very little information with one another. Electrical's customers exhibit some

interesting behaviours which make the lack of information sharing unusually problematic.

The work for electricians is geographically dispersed, with much of the UK building work

taking place in the South of the country. Yet electricians are remarkably loyal to their

home branch. Indeed it is clear from the interactions that take place in the branches that

many of the branch staff count their customers among their friends. When an electrician is

working away from home – for example, an electrician based in Newcastle is working in

London – the electrician will tend to phone their home branch to request product. The

electrician working in London will phone the Newcastle branch to ask for 200 light bulbs.

Rather than the Newcastle branch saying to the electrician, “we have a branch in London

that can deliver those to you", the Newcastle branch would arrange a special delivery [from Newcastle to London – cities which are 300 miles apart]. The reason is simple: if the Newcastle branch

delivers the light bulbs then the Newcastle branch is credited with the sale and hence their

profit figures look better, particularly if they can deliver other products to customers in

Middlesbrough, Darlington, York, Durham, Peterborough and Cambridge on the way to

London. The problem, of course, is that all of these cities have existing Electrical branches

capable of delivering the product more quickly and more cheaply [the net transport costs

would be lower], but the measurement and incentive schemes in the business were

structured in a way that discouraged the more efficient behaviour of passing orders from

one branch to another.

As the business grew the impact of these behaviours became more pronounced and more

noticeable and so in 2000 the board of Electrical began considering how they might change

the measurement and incentive schemes used in the business. After due consideration and

consultation the board made the decision to adopt a balanced scorecard and explicitly seek

to design a measurement and associated incentive scheme that would encourage a further

change in behaviour. Based on a slide that the business subsequently used in the balanced

scorecard communication programme, figure 1 summarises the behavioural and attitudinal

changes that the business was seeking to encourage.


Figure 1: Electrical’s Journey – Slide Used in the Cascade and Communication Programme

The Journey

Today…                           Tomorrow…
Branches are profit centres      Branches are service centres
Branches are autonomous          Branches are a network
Local optimisation               Global optimisation
Branches hold knowledge          Branches share knowledge
Traditional competitors          Amazon.com = competition
Price war – compete on price     Differentiate by relationships


Designing the Balanced Scorecard

The design and deployment of Electrical's balanced scorecard consisted of three

phases, as suggested in the literature (Bourne et al. 2000). The first phase, which ran from

January to May 2000 involved identifying what and how to measure, along with

construction of the bonus scheme. The second phase, which ran from June to December

2000, involved deploying the balanced scorecard – cascading the scheme across the

business and creating the necessary supporting infrastructure. The balanced scorecard was

formally launched on 1st January 2001 and the third phase – ongoing support – followed

this.

During the design phase the top management team – in this case the executive board –

were intimately involved in identifying what and how to measure. They followed well

established processes for designing balanced scorecards prescribed in the literature

(Bourne et al, 2002; Kaplan and Norton, 2000; Neely et al, 2002b). They began by

creating a success map for the business which articulated their theory about how the

business worked and only when they had considered this did the board turn to the question

of what to measure. They spent considerable time debating and refining the actual

performance measures, again using materials well described in the academic literature

(Neely et al., 1997). Throughout this process the board consulted widely with other key

stakeholders in the business. Workshops were held with influential groups of managers,

including the regional managers, who were seen as central to the successful

implementation of the scheme.

As well as working on the identification and definition of measures, the board also spent

time in the design phase considering how the firms’ incentive schemes should be

redesigned when the balanced scorecard was implemented. Having completed these

activities the balanced scorecard project moved into its second phase – deployment.

Scorecard Deployment

A key element of the deployment phase was the creation of so-called “balanced scorecard

champions”. As mentioned previously, the regional managers were seen as key influencers

in the successful design and deployment of the balanced scorecard. Each regional manager

had full financial and operational responsibility for one of eight regions in the UK.

Typically they had 15-20 branch managers reporting to them. The regional managers were

given a series of three two-day executive education courses on the balanced scorecard.

The first of these courses was introductory and involved an explanation of the balanced

scorecard – what it was and how it worked – as well as a review of Electrical’s balanced

scorecard. Throughout this course the regional managers were encouraged to comment on

Electrical’s scorecard, highlighting any concerns they had about it. The second course was


more in-depth and involved an exploration of the four perspectives on Electrical’s balanced

scorecard – financial, customer, operational and people [which replaced the innovation and

learning perspective]. During this course the regional managers were exposed to the latest

thinking in these four areas. The final two-day course involved consideration of the

change process. As well as sessions on change management, the regional managers

participated in substantive debates about how the balanced scorecard should be cascaded

through the business.

To support the cascade process a series of eight one-day regional workshops were held for

the branch managers. Each regional manager invited all of their direct reports to a regional

workshop. These workshops, which were delivered by a professional educator, were

designed to introduce the balanced scorecard to the branch managers. The managing

director of the business came for the last hour of each of these regional workshops to

answer any questions the branch managers wanted to raise.

Having completed their workshops the regional and branch managers delivered standard

cascade presentations to the branch staff. The cascade presentation was designed by the

regional management team, with the support of a professional educator and delivered by

the regional managers to everyone in the business during a series of evening workshops. It

is important to note that this business was not one that had invested heavily in training and

education previously. In informal discussions with the author many people in the business

commented that the level of investment made in the balanced scorecard project illustrated

how seriously the business was taking it. Indeed the Managing Director described the

balanced scorecard project as “the most significant change we have undertaken for a

decade”.

In parallel to training and education the development team continued to work with the

finance and IT functions to create “dummy” scorecard reports and ensure that the

necessary data for reporting purposes were available. Between August and December the

business began releasing “dummy” scorecard reports to the branches. The “dummy”

reports were sent with the monthly profit and loss reports that the branches had

traditionally received, but were accompanied by a note explaining that from 1st January the "dummy" reports would be the only reports that branches would receive – i.e. the

business would no longer release to branch managers their own profit and loss accounts.

This change – the decision not to release monthly profit and loss accounts – was extremely

controversial when it was first announced. Indeed at the regional workshops there was

typically a stunned silence when the branch managers were told that from 1st January 2001

they would no longer be receiving their branch’s profit and loss account. Only when the

branch managers understood all of the measures on the balanced scorecard [which included

many of the elements that made up the profit and loss account] did the signs of disbelief subside.


Changing the Incentive Scheme

The balanced scorecard was formally launched in the business on 1st January 2001. From

that day the firm’s bonus scheme was coupled to the balanced scorecard for all branch

managers and staff. The firm adopted a bonus scheme where the bonus paid to individual

branches was based on two factors – the branch’s performance and the company’s overall

performance [the bonus was paid to the branch and then allocated to individual members of staff by the branch manager, in discussion with the regional manager]. Branches earned points based on their performance against the performance

measures on the balanced scorecard. For every measure red, amber and green target zones

were set. When branches performed in the green zone they earned 3 points, the amber

zone 1.5 points and the red zone 0 points. The scorecard reports released to the branch

managers each month included the running total number of points that the branch had

earned that year.

The value of the points was based on the firm’s overall profitability. The board set aside a

bonus fund that grew in relation to the firm’s overall profitability. Below a minimum

threshold the firm’s profit was unacceptable and so no bonus was paid. When the

threshold was reached the firm guaranteed to pay a minimum of, for example, £3 million

in bonus payments. There was no ceiling on the bonus fund. The more profit the firm

generated the greater the amount of money paid into the bonus fund.

To calculate the value of each point, the firm calculated the total number of points earned

by all branches and divided the total bonus fund [based on the firm’s profitability] by the

total number of points earned. The advantage of this scheme is that it encouraged branch

managers and staff to think about overall company profitability, as well as the performance

of their branch.
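To make the bonus arithmetic concrete, the following short Python sketch reproduces the scheme as described above: points per measure by target zone (3 for green, 1.5 for amber, 0 for red), a company-wide bonus fund set by the board from overall profitability, and a point value obtained by dividing the fund by the total points earned. The function and variable names are illustrative assumptions rather than Electrical's actual system.

# A minimal sketch of the bonus arithmetic described in the text. The zone
# scoring and the fund-divided-by-points rule come from the paper; everything
# else (names, data structures) is assumed for illustration.

ZONE_POINTS = {"green": 3.0, "amber": 1.5, "red": 0.0}

def branch_points(zone_results):
    """Points earned by one branch, given the zone reached on each measure."""
    return sum(ZONE_POINTS[zone] for zone in zone_results)

def allocate_bonus(bonus_fund, zones_by_branch):
    """Split the company-wide bonus fund across branches in proportion to points.

    bonus_fund      -- total pot set by the board from overall profitability
                       (zero below the profit threshold, uncapped above it)
    zones_by_branch -- mapping of branch name to a list of zone outcomes
    """
    points = {name: branch_points(zones) for name, zones in zones_by_branch.items()}
    total_points = sum(points.values())
    if bonus_fund <= 0 or total_points == 0:
        return {name: 0.0 for name in points}
    point_value = bonus_fund / total_points   # value of one scorecard point
    return {name: p * point_value for name, p in points.items()}

# Illustrative example: two branches sharing a £3 million fund.
example = {
    "Branch A": ["green", "green", "amber", "red"],    # 7.5 points
    "Branch B": ["green", "amber", "amber", "green"],  # 9.0 points
}
print(allocate_bonus(3_000_000, example))

Because every branch's points are valued against the same company-wide fund, a branch's payout rises both with its own scorecard performance and with overall company profitability – precisely the dual focus the scheme was intended to create.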

Additional Support and Encouragement

To further reinforce the message about the importance of the balanced scorecard the

business put in place a number of other initiatives. First, a scorecard steering committee,

chaired by the HR Director, was established. The scorecard steering committee oversaw

the implementation of the balanced scorecard, monitoring how well the scheme was

working and identifying what additional support was required. As well as gathering

informal feedback the steering committee established a set of formal scorecard reviews,

calling together all of the regional managers once a month to gather their opinions on the

balanced scorecard and how well it was working.

One of the common requests from the branches was for improved drill-down information – information that would allow the branch managers to identify the root causes of problems identified by the measures on the balanced scorecard. The steering committee

commissioned work from the IT department to address this issue, as well as establishing a

small group who could resolve questions and queries about the data contained in the

scorecard reports.

Given the structure of the business – largely homogeneous branches – the consistent

performance measures applied to the branches offered significant opportunities for

benchmarking and the identification and sharing of good practices. The regional managers

used the balanced scorecards to compare the relative performance of branches within their

regions as well as comparing the performance of branches across regions. Where

significant differences in performance were identified the regional managers set out to

explore why these differences existed and to identify whether there were any specific

practices they could transfer from one branch to another. One of the measures on the

balanced scorecard – the internal performance audit – further encouraged this activity. In

essence the internal performance audit was an audit of branch practices. Regional

managers were expected to audit the performance of the branches in their region, assessing

their performance against a number of standards – e.g. external appearance, trade counter,

warehouse, systems and procedures, staff, customers and back office. Undertaking this

audit forced regional managers to look in detail at the practices adopted by specific

branches, thereby enabling them to identify examples of good practices that they could

communicate to others.

Clearly the theme of communication ran heavily throughout the implementation of the

balanced scorecard in Electrical. As well as all of the education and awareness raising

activities, the business established a formal communication programme to emphasise the

importance of the scheme. They adopted the cartoon character Popeye as a symbol. All

communication about the balanced scorecard was accompanied by a cartoon showing

Popeye eating spinach, the food which gives him strength, and the caption “are you getting

your greens” – a reference to the green zone for the performance targets.

Overall the time and effort expended by Electrical on designing, deploying and supporting

the balanced scorecard was significant. Contrasting the approach they adopted with that

prescribed in the literature, it is difficult to see what more they could have done. So let us

now turn to the question: what was the performance impact?

5. Evaluating the Impact of the Balanced Scorecard

To explore the impact of the balanced scorecard on Electrical two phases of analysis were

undertaken. These are presented in the next two sections of the paper. In the first the

performance of all of Electrical’s branches will be considered, for the time period 2000-


2002. Electrical made branch level data available on monthly sales and gross profit for the

period 2000-2002. The business formally introduced the balanced scorecard on 1st

January 2001, from which time they stopped releasing profit and loss accounts to the

branches and linked the bonus scheme to the balanced scorecard. Following an internal

reorganization, Electrical reversed this decision from 1st January 2002, reintroducing profit

and loss reporting at the branch level.

To supplement the financial data this research draws on two other qualitative data sources.

These data, which were gathered via participant observation during the design and

deployment phase of the balanced scorecard project and through a series of semi-structured

interviews conducted six to nine months after the balanced scorecard had been

implemented, will be used to explore some of the findings from the quantitative data.

The firm also made available data on the sales and gross profit performance of the Sister

company’s branches. These data are used in a quasi-experimental design. Electrical

branches are matched with the geographically nearest Sister branch. The matching process

results in 56 matched pairs of branches. Matching branches based on geographical

location controls for local economic and labour market activity. Given Electrical and

Sister both operate in the same industry [indeed they had been direct competitors prior to

the take-over of Electrical by Sister] the matching process also controls for industry

effects. Unfortunately, for reasons of commercial sensitivity, Sister was only willing to

make monthly branch level sales and gross profit data available to the research team for

2000 and 2001 [not 2002]. However this still enabled the research team to explore the

relative changes in performance between Electrical and Sister during the period when the

balanced scorecard was introduced.
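The matching step itself is mechanically simple. The sketch below illustrates one way it could be done in Python, pairing each Electrical branch with the geographically nearest Sister branch; the coordinate data structure and the straight-line distance rule are assumptions for illustration, as the study describes matching on location but not the exact algorithm used.

# A sketch of the geographic matching step, assuming each branch is described
# by a name and a coordinate pair (e.g. national grid easting/northing).
import math

def nearest_sister(electrical, sister):
    """Return {electrical_branch: nearest Sister branch} by straight-line distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    pairs = {}
    for e_name, e_loc in electrical.items():
        pairs[e_name] = min(sister, key=lambda s_name: dist(e_loc, sister[s_name]))
    return pairs

# Illustrative coordinates only.
electrical = {"Electrical Newcastle": (424600, 564300), "Electrical Croydon": (532500, 165500)}
sister     = {"Sister Gateshead": (425000, 563000), "Sister Croydon": (532600, 165400)}
print(nearest_sister(electrical, sister))

Note that a simple nearest-neighbour rule like this can map two Electrical branches to the same Sister branch; forming strictly one-to-one pairs, as the 56 matched pairs in the study imply, would require an additional de-duplication step.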

The Impact of the Balanced Scorecard in Electrical

As mentioned in the previous section, Electrical provided monthly branch level figures for

sales and gross profit for 122 branches for the calendar years 2000-2002. Thirty-two branches opened or closed during the period of study and some data were unavailable for a further three, leaving an effective sample of 87 branches. Between them, these 87 branches

achieved sales of just over £160 million/year during the period of the study. Figure 2

shows how Electrical’s sales and gross profit changed over the period under study. As can

be seen from the data there is some seasonality in the business, with sales and gross profit

dipping in December.


Figure 2: Electrical’s Normalised Monthly Financial Performance

[Line chart plotting Electrical's normalised monthly sales and normalised gross profit from January 2000 to December 2002. Both series dip each December.]

Table 1 summarises these data and compares sales and gross profit in 2000 with 2001 and

2002. These data suggest that while sales were higher in 2001 (during the period when the

balanced scorecard was operating), sales dropped back in 2002 (once the company had

reverted to traditional profit and loss reporting).

Table 1: Summary Performance Data for Electrical

2000 2001 2002

Total Sales 100% 103.34% 91.83%

Gross Profit 100% 97.75% 92.28%

To explore these findings in more detail the research team conducted paired sample t-tests

comparing the sample means. As mentioned previously, the data in Figure 2 suggest some

seasonality, certainly in terms of sales, so the decision was taken to compare sales in

January 2000 with sales in January 2001 and to compare separately sales in February 2000

with sales in February 2001, etc. The process resulted in 48 paired samples being tested,

24 for each of sales and gross profit. Table 2 shows the results of the 48 tests.
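A minimal sketch of this month-by-month pairing is given below, assuming the branch-level figures are held in a pandas DataFrame with one row per branch and one column per month; the column naming convention is an assumption, and scipy's ttest_rel returns a two-sided p-value whereas Table 2 reports the one-sided probability Pr(T>t).

# A sketch of the month-by-month paired comparison described above.
import pandas as pd
from scipy import stats

def monthly_paired_tests(df, months, year_a="2000", year_b="2001"):
    """Paired t-test of each month in year_a against the same month in year_b,
    paired across branches (the rows of df)."""
    results = []
    for m in months:                      # e.g. ["Jan", "Feb", ..., "Dec"]
        a = df[f"{m}-{year_a}"]
        b = df[f"{m}-{year_b}"]
        t, p = stats.ttest_rel(a, b)      # paired-sample t-test
        results.append({"month": m, "mean_diff": (a - b).mean(),
                        "t": t, "p_two_sided": p})
    return pd.DataFrame(results)

Running this once on the sales columns and once on the gross profit columns, for 2000 versus 2001 and again for 2001 versus 2002, yields the 48 tests reported in Table 2.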


Table 2: Comparison of Monthly Sales and Gross Profit

Variable Dates Obs Mean Difference Std Err Std Dev t Pr(T>t)

Sales 01/2000-01/2001 87 -24.08 3.77 35.13 -6.39 1.00***

02/2000-02/2001 87 -4.58 3.18 29.66 -1.44 0.92*

03/2000-03/2001 87 -5.58 4.44 41.37 -1.26 0.89

04/2000-04/2001 87 -19.39 4.43 41.31 -4.38 1.00***

05/2000-05/2001 87 -6.67 4.25 39.67 -1.57 0.94*

06/2000-06/2001 87 -4.87 4.00 37.27 -1.22 0.89

07/2000-07/2001 87 -16.77 4.47 41.67 -3.75 1.00***

08/2000-08/2001 87 -7.09 4.90 45.68 -1.45 0.92*

09/2000-09/2001 87 0.76 3.98 37.13 0.19 0.42

10/2000-10/2001 87 -5.51 4.21 39.26 -1.31 0.90*

11/2000-11/2001 87 -7.95 3.25 30.36 -2.44 0.99***

12/2000-12/2001 87 4.11 2.18 20.30 1.89 0.03++

01/2001-01/2002 87 4.37 3.02 28.12 1.45 0.08*

02/2001-02/2002 87 4.31 3.09 28.78 1.40 0.08*

03/2001-03/2002 87 15.07 3.65 34.05 4.13 0.00***

04/2001-04/2002 87 0.91 3.76 35.11 0.24 0.40

05/2001-05/2002 87 -0.36 3.48 32.49 -0.10 0.54

06/2001-06/2002 87 26.54 3.28 30.59 8.09 0.00***

07/2001-07/2002 87 9.30 4.91 45.77 1.89 0.03**

08/2001-08/2002 87 23.78 6.26 58.35 3.80 0.00***

09/2001-09/2002 87 10.52 5.18 48.29 2.03 0.02**

10/2001-10/2002 87 14.19 6.90 64.40 2.05 0.02**

11/2001-11/2002 87 20.81 6.06 56.52 3.43 0.00***

12/2001-12/2002 87 5.87 4.11 38.30 1.43 0.08*

Gross Profit 01/2000-01/2001 87 -24.56 4.11 38.33 -5.98 1.00***

02/2000-02/2001 87 -7.89 3.16 29.47 -2.50 0.99***

03/2000-03/2001 87 -7.95 3.99 37.24 -1.99 0.98**

04/2000-04/2001 87 27.17 7.56 70.52 3.59 0.00+++

05/2000-05/2001 87 -8.15 4.14 38.58 -1.97 0.97**

06/2000-06/2001 87 -5.57 3.78 35.27 -1.47 0.93*

07/2000-07/2001 87 -10.45 4.22 39.39 -2.47 0.99***

08/2000-08/2001 87 -0.16 4.06 37.85 -0.04 0.52

09/2000-09/2001 87 2.81 3.77 35.14 0.75 0.23

10/2000-10/2001 87 -6.26 3.90 36.38 -1.61 0.94*

11/2000-11/2001 87 -0.25 3.38 31.54 -0.08 0.53
12/2000-12/2001 87 4.22 2.84 26.45 1.49 0.07+
01/2001-01/2002 87 11.70 3.04 28.36 3.85 0.00***
02/2001-02/2002 87 15.72 2.57 23.94 6.12 0.00***
03/2001-03/2002 87 19.76 3.16 29.52 6.24 0.00***
04/2001-04/2002 87 -11.28 2.95 27.51 -3.82 1.00+++
05/2001-05/2002 87 2.34 2.85 26.55 0.82 0.21
06/2001-06/2002 87 26.23 3.23 30.12 8.12 0.00***
07/2001-07/2002 87 -41.33 6.43 59.96 -6.43 1.00+++
08/2001-08/2002 87 15.81 7.07 65.97 2.24 0.01***
09/2001-09/2002 87 3.54 5.65 52.67 0.63 0.27
10/2001-10/2002 87 9.84 6.29 58.70 1.56 0.06*
11/2001-11/2002 87 14.81 5.48 51.11 2.70 0.00***
12/2001-12/2002 87 2.25 3.67 34.27 0.61 0.27

Note on significance codes (applies to Tables 2 to 5): *** = significant at 1% and consistent with the hypothesis that the balanced scorecard has a positive impact; ** = significant at 5% and consistent with that hypothesis; * = significant at 10% and consistent with that hypothesis; +++ = significant at 1% and inconsistent with that hypothesis; ++ = significant at 5% and inconsistent with that hypothesis; + = significant at 10% and inconsistent with that hypothesis.

Of the 48 tests, 37 provided a statistically significant result (see Table 2). 31 of these were

consistent with the hypothesis that the balanced scorecard made a positive difference,

while six were inconsistent with this assertion. Nineteen of the statistically significant

results related to sales, eighteen of which supported the hypothesis that the balanced

scorecard made a positive difference. Of these nineteen statistically significant results,

nine supported the hypothesis that there was a statistically significant increase in sales after

the introduction of the balanced scorecard, while eight supported the hypothesis that there

was a statistically significant reduction in sales after the removal of the balanced scorecard.

In terms of changes in gross profit there were eighteen statistically significant results,

fourteen of which supported the hypothesis that the balanced scorecard made a positive

difference. Seven results showed that there was a statistically significant increase in gross

profit after the balanced scorecard was introduced, while a further seven showed that there

was a statistically significant decrease in gross profit after the removal of the balanced

scorecard.

On the surface, these data appear to suggest that the introduction of the balanced scorecard has

had a positive impact in terms of both sales and gross profit and its removal has had a

negative impact on both of these variables. Clearly there are questions of what else was

happening in the business at the same time, especially given the fact that Electrical was

taken over during the third quarter of 2000, during the deployment of the balanced

scorecard. It could therefore be argued that many members of the organization would have

been distracted and concerned about the implications of the takeover and therefore it would

not be surprising to see performance deteriorate during the last quarter of 2000 and the

early part of 2001. Indeed it was clear to the author of this paper, who was working with the company at the time, that many people were distracted, but it is important to understand


the magnitude of the change that the organization felt it was undertaking by implementing

the balanced scorecard. The Chief Executive frequently described the implementation of

the balanced scorecard as the biggest change that the business had made in a decade and it

had clearly captured the attention of many of the business’ most senior managers. Indeed,

in the 3rd quarter of 2000 the board of Electrical declared that they were willing

collectively to resign if their new parent company blocked the implementation of the

balanced scorecard [as related in a discussion between the board of Electrical and the Chairman and Finance Director of the acquiring company in September 2000]. The point is that while other events were clearly occurring in the

organization at the time of the study one of the most significant was widely perceived to be

the implementation of the balanced scorecard and associated incentive scheme.

Contrasting Electrical with Sister

As already discussed, in addition to the sales and gross profit data made available by Electrical, the company that acquired Electrical owned another UK wholesaler of electrical components – called Sister for the purposes of this paper. Sister provided

monthly data on sales and gross profit for some 192 branches for 2000 and 2001. This

data set was combined with the data provided by Electrical and 56 matched pairs of

branches (branches based in the same town/city) were identified. This process allowed the

research team to control for local economic conditions, product range and customer base,

as Electrical and Sister branches tended to stock a similar range of products and deal with a

similar range of customers. The data were normalized with the values for January 2000 in

each case being set to 100. Figure 3 summarises the average growth in sales and gross

profit performance for these matched sets of branches during 2000 and 2001 using these

data.

To explore these findings in more detail paired sample t-tests were conducted comparing

the sample means. Again, to take account of seasonality growth in both sales and gross

profit was compared on a year by year basis. Sales and gross profit growth were calculated

for both Electrical and Sister separately and then the rates of growth compared. Table 3

shows the results of these analyses.
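A sketch of the normalisation and growth comparison just described is given below, assuming the matched-pair data sit in two pandas DataFrames (elec and sis) indexed by matched pair, with one column per month; the column names and layout are assumptions for illustration.

# Normalisation to January 2000 = 100 and a paired comparison of
# year-on-year growth for the matched Electrical and Sister branches.
import pandas as pd
from scipy import stats

def normalise(df, base_col="Jan-2000"):
    """Rescale every branch so that the base month equals 100."""
    return df.div(df[base_col], axis=0) * 100

def growth_comparison(elec, sis, month):
    """Compare year-on-year growth of matched Electrical and Sister branches."""
    g_elec = elec[f"{month}-2001"] / elec[f"{month}-2000"] - 1
    g_sis  = sis[f"{month}-2001"] / sis[f"{month}-2000"] - 1
    t, p = stats.ttest_rel(g_elec, g_sis)   # paired across the 56 matched pairs
    return {"month": month, "mean_growth_gap": (g_elec - g_sis).mean(),
            "t": t, "p_two_sided": p}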


Figure 3: Comparing Electrical and Sister’s Performance

[Line chart plotting normalised sales and normalised gross profit for the matched Electrical and Sister branches (January 2000 = 100), monthly from January 2000 to December 2001.]

As Table 3 shows, there is no statistically significant difference between the growth in sales and the growth in gross profits between the Electrical and Sister branches, suggesting that

the earlier observed differences are more likely due to external factors [i.e. those affecting

all branches] rather than internal factors [e.g. management changes such as the introduction

of the balanced scorecard]. Given the balanced scorecard is designed to have an impact on

branch performance this is a surprising finding, but there are of course differences in the

rate of adoption of the balanced scorecard in different branches.


Table 3: Comparison of Monthly Sales and Gross Profit for Matched Branches

Variable Dates Obs Mean Difference Std Err Std Dev t Pr(T>t)

Sales 01/2000-01/2001 56 -0.250 7.296 54.599 -0.034 0.514

02/2000-02/2001 56 -0.907 7.384 55.258 -0.123 0.549

03/2000-03/2001 56 -1.511 9.216 68.967 -0.164 0.565

04/2000-04/2001 56 0.600 6.015 45.013 0.100 0.460

05/2000-05/2001 56 -0.220 7.173 53.675 -0.031 0.512

06/2000-06/2001 56 -0.698 6.993 52.334 -0.100 0.540

07/2000-07/2001 56 -0.968 7.565 56.614 -0.128 0.551

08/2000-08/2001 56 -0.449 7.116 53.254 -0.063 0.525

09/2000-09/2001 56 -0.097 8.006 59.915 -0.012 0.505

10/2000-10/2001 56 0.758 7.094 53.089 0.107 0.458

11/2000-11/2001 56 1.045 8.623 64.532 0.121 0.452

12/2000-12/2001 56 0.915 5.600 41.905 0.163 0.435

Gross Profit 01/2000-01/2001 56 0.281 7.689 57.540 0.037 0.486

02/2000-02/2001 56 -0.846 6.660 49.836 -0.127 0.550

03/2000-03/2001 56 -1.847 7.790 58.294 -0.237 0.593

04/2000-04/2001 56 0.357 16.875 126.282 0.021 0.492

05/2000-05/2001 56 -0.059 6.645 49.728 -0.009 0.504

06/2000-06/2001 56 -0.534 6.761 50.594 -0.079 0.531

07/2000-07/2001 56 -0.770 6.879 51.480 -0.112 0.544

08/2000-08/2001 56 -0.412 7.751 58.005 -0.053 0.521

09/2000-09/2001 56 0.036 8.666 64.849 0.004 0.498

10/2000-10/2001 56 1.472 7.618 57.010 0.193 0.424

11/2000-11/2001 56 1.061 8.992 67.290 0.118 0.453

12/2000-12/2001 56 1.200 4.919 36.812 0.244 0.404

One of the other variables that the organisation made available to the researcher was the number of non-financial balanced scorecard points earned. This score was calculated by awarding points for every green [above-target] performance. At June 2001 the range of

points earned ran from a minimum of 4 points to a maximum of 67, with a mean of 24.22

points [n=50, as data on the number of non-financial balanced scorecard points earned

were not available for six of the matched pairs of branches]. These data suggest that some

branches took the balanced scorecard more seriously than others – or at least managed to

perform better against it than other branches. So the analysis was re-run in two separate

sets. First the analysis was re-run only for those branches scoring more than 24.22 points

[i.e. the branches that were performing well on the balanced scorecard]. Second the

analysis was re-run for those branches with less than 24.22 points [i.e. the branches that


were performing badly on the balanced scorecard]. Tables 4 and 5 summarise the results

of these analyses.
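The sub-group analysis is a straightforward extension of the previous sketch: the matched pairs are split at the mean number of non-financial scorecard points (24.22) and the paired comparison is repeated within each group. In the hypothetical sketch below, points is assumed to be a pandas Series of points earned, indexed by matched pair.

# Splitting the matched pairs by non-financial scorecard points and
# re-running the comparison within each sub-group.
def split_by_points(points, threshold=24.22):
    """Return (high-scoring pair index, low-scoring pair index)."""
    high = points[points > threshold].index
    low  = points[points <= threshold].index
    return high, low

# Re-using growth_comparison() from the previous sketch, e.g.:
# high, low = split_by_points(points)
# growth_comparison(elec.loc[high], sis.loc[high], "Jun")
# growth_comparison(elec.loc[low],  sis.loc[low],  "Jun")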

Table 4: Comparison of Monthly Sales and Gross Profit for Matched Branches

[Branches that Score Well on the Non-Financial Measures]

Variable Dates Obs Mean Difference Std Err Std Dev t Pr(T>t)

Sales 01/2000-01/2001 25 -7.220 10.262 51.308 -0.704 0.756

02/2000-02/2001 25 -7.240 11.247 56.236 -0.644 0.737

03/2000-03/2001 25 -13.319 14.833 74.165 -0.898 0.811

04/2000-04/2001 25 -3.731 10.059 50.293 -0.371 0.643

05/2000-05/2001 25 -2.926 11.220 56.100 -0.261 0.602

06/2000-06/2001 25 1.162 9.324 46.620 0.125 0.451

07/2000-07/2001 25 -0.090 9.686 48.432 -0.009 0.504

08/2000-08/2001 25 -10.468 11.032 55.162 -0.949 0.824

09/2000-09/2001 25 -9.943 12.536 62.680 -0.793 0.782

10/2000-10/2001 25 -8.344 11.816 59.081 -0.706 0.757

11/2000-11/2001 25 -13.048 16.239 81.193 -0.804 0.785

12/2000-12/2001 25 -11.730 10.091 50.454 -1.162 0.872

Gross Profit 01/2000-01/2001 25 -7.942 10.460 52.300 -0.759 0.773

02/2000-02/2001 25 -10.643 10.286 51.429 -1.035 0.844

03/2000-03/2001 25 -8.117 14.100 70.502 -0.576 0.715

04/2000-04/2001 25 -25.935 28.831 144.155 -0.900 0.811

05/2000-05/2001 25 -4.035 10.614 53.071 -0.380 0.646

06/2000-06/2001 25 4.353 10.228 51.139 0.426 0.337

07/2000-07/2001 25 -2.062 11.034 55.169 -0.187 0.573

08/2000-08/2001 25 -15.285 11.429 57.143 -1.337 0.903**

09/2000-09/2001 25 -12.183 12.049 60.246 -1.011 0.839

10/2000-10/2001 25 -11.892 11.557 57.787 -1.029 0.843

11/2000-11/2001 25 -13.486 15.430 77.151 -0.874 0.805

12/2000-12/2001 25 -8.811 8.211 41.053 -1.073 0.853


Table 5: Comparison of Monthly Sales and Gross Profit for Matched Branches

[Branches that Score Badly on the Non-Financial Measures]

Variable Dates Obs Mean Difference Std Err Std Dev t Pr(T>t)

Sales 01/2000-01/2001 25 8.034 12.582 62.911 0.639 0.265

02/2000-02/2001 25 2.489 11.901 59.503 0.209 0.418

03/2000-03/2001 25 15.805 13.484 67.419 1.172 0.126

04/2000-04/2001 25 5.142 8.752 43.759 0.588 0.281

05/2000-05/2001 25 4.448 10.478 52.391 0.425 0.338

06/2000-06/2001 25 5.503 11.538 57.692 0.477 0.319

07/2000-07/2001 25 10.669 12.476 62.382 0.855 0.201

08/2000-08/2001 25 17.151 10.204 51.022 1.681 0.053+

09/2000-09/2001 25 13.059 12.465 62.324 1.048 0.153

10/2000-10/2001 25 11.378 9.933 49.667 1.145 0.132

11/2000-11/2001 25 21.221 8.887 44.434 2.388 0.013++

12/2000-12/2001 25 15.002 5.757 28.783 2.606 0.008+++

Gross Profit 01/2000-01/2001 25 9.670 13.036 65.181 0.742 0.233

02/2000-02/2001 25 3.424 9.461 47.303 0.362 0.360

03/2000-03/2001 25 8.393 8.993 44.967 0.933 0.180

04/2000-04/2001 25 18.141 22.052 110.259 0.823 0.209

05/2000-05/2001 25 5.267 9.329 46.644 0.565 0.289

06/2000-06/2001 25 -0.752 10.076 50.380 -0.075 0.529

07/2000-07/2001 25 10.669 12.476 62.382 0.855 0.201

08/2000-08/2001 25 19.416 11.930 59.652 1.628 0.058+

09/2000-09/2001 25 12.054 14.689 73.445 0.821 0.210

10/2000-10/2001 25 16.542 11.516 57.578 1.437 0.082+

11/2000-11/2001 25 21.399 11.571 57.857 1.849 0.038++

12/2000-12/2001 25 15.002 5.757 28.783 2.606 0.008+++

Of the 48 tests presented in Tables 4 and 5, eight results are statistically significant. One

suggests that the balanced scorecard has a positive impact on gross profit for those

branches that perform well in terms of the non-financial measures (see Table 4). The other

seven results suggest that the balanced scorecard has a negative impact on sales (three

results) and gross profit (four results), when branches perform badly on the non-financial

measures. That is, when branches perform badly on the non-financial measures they also

see lower sales and gross profits than their matched comparator branches. Two possible

explanations for this immediately spring to mind. It may be that branches that perform

badly on the non-financial measures are simply badly run branches – i.e. they don’t


perform well on anything, hence it is no surprise that they perform badly on sales and gross

profits as well as the non-financial measures. Alternatively, it may be that the staff and

managers in these branches have spread their attention too thinly – i.e. by trying to manage

multiple dimensions of performance simultaneously (as Michael Jensen would argue) they

have lost their focus on the dimensions of performance that matter.

6. Discussion and Implications

What can be concluded overall from the data? The first point to note is that while, at first

sight, the data appear to suggest that the implementation of the balanced scorecard had a

positive impact on Electrical’s performance [in terms of sales and gross profit], further

investigation reveals that this is not the case. Indeed the introduction of the control group

through the quasi-experiment raises some significant questions about the performance

impact of the balanced scorecard. At the aggregate level, it appears that the introduction of

the balanced scorecard has had no significant impact on branch performance in terms of

either sales or gross profit. And when the sample is split based on success on non-financial measures, a case could be made that the balanced scorecard has actually had a detrimental effect (see Table 5), as those branches that score well on non-financial measures

generally do not outperform their comparator branches on the financial measures, while

those branches that score poorly on the non-financial measures also score poorly on the

financial measures. Perhaps the agency theorists are correct when they argue that multi-

dimensional measurement systems are simply confusing and lead to a dispersion of effort.

How else might these findings be explained – beyond the simple assertion that the

balanced scorecard does not work? The first possibility is a question of timescale. The

data reported in this study cover a three-year period, but the balanced scorecard was only

in operation for a twelve month period in one division. Anecdotal evidence suggests that

behaviours in the organisation changed following the introduction of the balanced

scorecard. In a series of interviews with one of the author's colleagues, several of the firm's Directors recounted caselets illustrating how behaviours had changed. The Human Resources Director, for example, reported that one of the most

fiercely competitive regional managers had started passing orders to colleagues in other

regions after the introduction of the balanced scorecard. While these behavioural changes

may be desirable, we know that changes in non-financial dimensions of performance do not

immediately impact financial performance, but instead take time to flow through the

system (Ferdows and De Meyer, 1990). Perhaps the balanced scorecard was simply not

left in place long enough to enable these changes to flow through and impact financial

performance.
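The lag argument can be illustrated with a short sketch (again purely illustrative, using invented monthly data rather than the study’s figures): if a non-financial score feeds through to sales only after a delay, the correlation between the two series peaks at a positive lag, and a twelve-month observation window may simply be too short to see it.

```python
import numpy as np
import pandas as pd

months = pd.period_range('2005-01', periods=24, freq='M')
rng = np.random.default_rng(7)
# Invented non-financial score (e.g. a customer-service index) following a random walk.
non_financial = pd.Series(70 + rng.normal(0, 1, 24).cumsum(), index=months)
# Assume, for illustration, that sales respond to the score with a six-month delay plus noise.
sales = non_financial.shift(6) * 1.5 + rng.normal(0, 1, 24)

for lag in range(10):
    corr = non_financial.shift(lag).corr(sales)
    print(f'lag {lag:2d} months: correlation = {corr:.2f}')
# With a genuine six-month delay the correlations tend to peak around lag 6, an effect a
# twelve-month observation window could easily fail to capture.
```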


An alternative explanation, and one that certainly merits further investigation, is the

question of what action managers could take based on the data. The performance

measurement system, per se, will have little impact on the organisation’s performance,

unless people take action and change things on the basis of the reported performance data.

Electrical spent a significant amount of time designing and deploying their balanced

scorecard, but paid relatively limited attention to the accompanying improvement process.

How could managers use the measurement data they were presented with to improve

organisational performance? Many of the measures were presented in a rather aggregated

form. Customer retention, for example, was reported simply as the percentage of retained

customers, but how could managers act on this? All the managers received was a bland

figure stating that they had retained 74% of their customers. To act, the branch managers

needed to know, by name, which customers they had retained and which they had lost.

They needed the contact details of the customers involved so they could make contact with

them and try to persuade them to come back to the business. Without this disaggregated

information, the headline figure is of little practical use.
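The point is easily illustrated with a minimal sketch (using invented customer records): the same raw data required to compute a retention percentage can also produce the named list of lost customers, with contact details, that a branch manager could actually act on.

```python
# Invented customer records for one branch: last year's customers with their contacts,
# and the set of customers who have bought again this year.
last_year = {
    'Acme Electrical': 'acme@example.com',
    'Borough Contractors': 'borough@example.com',
    'City Installations': 'city@example.com',
    'Delta Maintenance': 'delta@example.com',
}
this_year = {'Acme Electrical', 'Borough Contractors', 'City Installations'}

retained = [name for name in last_year if name in this_year]
lost = {name: contact for name, contact in last_year.items() if name not in this_year}

retention_rate = 100 * len(retained) / len(last_year)
print(f'Customer retention: {retention_rate:.0f}%')   # the bland aggregate figure
for name, contact in lost.items():                    # the actionable detail
    print(f'Lost customer to contact: {name} ({contact})')
```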

So what does this research mean for practitioners and researchers? First, the study

highlights the fact that further research is required into the performance impact of balanced

scorecards and the timescale over which this performance impact can be observed.

Certainly the data presented in this study suggest that the balanced scorecard implemented

in Electrical had no significant impact in terms of sales growth or gross profit growth over

a twelve month period. It may be that the balanced scorecard would have had a

performance impact had it been retained for a longer period, and the author is currently in negotiation with another organisation to access data that will allow this claim to be tested.

Similarly, if the findings of this study are replicated in other similar naturally occurring

experiments, then work is required to understand why balanced scorecards do not have the

impact one would expect. Intuitively, people accept that organisations need to keep track

of their performance so that they can identify how well they are doing and what they need

to improve. Well-designed balanced scorecards provide access to such measures, so one would assume that, correctly implemented, they should enable performance improvement. Yet in the case of Electrical this improvement does not appear to

have materialised. Why is this the case? Is it because the organisation did not give the balanced scorecard long enough to work? Is it that the organisation did not

supplement the balanced scorecard with an appropriate improvement methodology and/or

programme? Is it that the balanced scorecard suffers from the same criticisms that can be

made of many measurement systems – too much data arriving too late for managers to act

on them? These issues need to be explored and understood much more fully so that we can

advise practitioners not simply on how better to measure, but on how better to perform.


References

Bourne, M., Mills, J., Wilcox, M., Neely, A. and Platts, K. (2000) Designing,

Implementing and Updating Performance Measurement Systems. International

Journal of Operations & Production Management 20, 754-771.

Chenhall, R.H. (2005) Integrative Strategic Performance Measurement Systems, Strategic

Alignment of Manufacturing, Learning and Strategic Outcomes: An Exploratory Study.

Accounting Organizations and Society 30, 395-422.

Cook, T.D. and Campbell, D.T. (1979) Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin.

Davis, S. and Albright, T. (2004) An investigation of the effect of Balanced Scorecard

implementation on financial performance. Management Accounting Research 15,

135-153.

de Waal, A.A. (2003) Behavioral Factors Important for the Successful Implementation

and Use of Performance Management Systems. Management Decisions 41, 688-697.

Dumond, E.J. (1994) Making Best Use of Performance-Measures and Information.

International Journal of Operations & Production Management 14, 16-31.

Ferdows, K. and De Meyer, A. (1990) Lasting improvements in manufacturing

performance: In search of a new theory. Journal of Operations Management 9, 168-184.

Forza, C. and Salvador, F. (2000) Assessing Some Distinctive Dimensions of Performance

Feedback Information in High Performing Plants. International Journal of Operations

& Production Management 20, 359-385.

Forza, C. and Salvador, F. (2001) Information Flows for High-Performance

Manufacturing. International Journal of Production Economics 70, 21-36.

Franco-Santos, M., Kennerley, M., Micheli, P., Martinez, V., Mason, S., Marr, B., Gray,

D. and Neely, A. (2007) Towards a Definition of a Business Performance

Measurement System. International Journal of Operations and Production

Management, forthcoming.

Frigo, M.L. (2000) 2000 CMG Survey on Performance Measurement: The Evolution of

Performance Measurement Systems. Cost Management Update 105, 1-3.

Frigo, M.L. and Krumwiede, K.R. (1999) Balanced Scorecards: A Rising Trend in

Strategic Performance Measurement. Journal of Strategic Performance Measurement

3, 42-48.

Gates, S. (1999) Aligning Strategic Performance Measures and Results. New York: The

Conference Board.


Hayes, R.H. and Abernathy, W.J. (1980) Managing Our Way to Economic Decline.

Harvard Business Review 67-77.

Ittner, C.D. and Larcker, D.F. (2003) Coming up Short on Nonfinancial Performance

Measurement. Harvard Business Review 81, 88-95.

Ittner, C.D., Larcker, D.F. and Meyer, M.W. (2003a) Subjectivity and the Weighting of

Performance Measures: Evidence From a Balanced Scorecard. Accounting Review 78,

725-758.

Ittner, C.D., Larcker, D.F. and Randall, T. (2003b) Performance Implications of Strategic

Performance Measurement in Financial Services Firms. Accounting Organizations and

Society 28, 715-741.

Johnson, H.T. and Kaplan, R.S. (1988) Relevance Lost - The Rise and Fall of

Management Accounting, Boston, MA: Harvard Business School Press.

Kaplan, R.S. and Norton, D.P. (1992) The Balanced Scorecard - Measures That Drive

Performance. Harvard Business Review 70, 71-79.

Kaplan, R.S. and Norton, D.P. (2000) Having Trouble with your Strategy? Then Map It.

Harvard Business Review 167-176.

Kaplan, R.S. and Norton, D.P. (2001) The Strategy-Focused Organization: How Balanced

Scorecard Companies Thrive in the New Business Environment, Boston, MA.:

Harvard Business School Press.

Ketelhohn, W. (1998) What is a Key Success Factor? European Management Journal

16, 335-340.

Lawson, R., Stratton, W. and Hatch, T. (2003) The Benefits of a Scorecard System. CMA

Management 24-26.

Lingle, J.H. and Schiemann, W.A. (1996) From Balanced Scorecard to Strategy Gauges:

Is Measurement Worth It? Management Review 56-62.

Malina, M.A. and Selto, F.H. (2001) Communicating and controlling strategy: An

empirical study of the effectiveness of the balanced scorecard. Journal of Management

Accounting Research 13, 47-90.

Marr, B. and Neely, A.D. (2003) Balanced Scorecard Software Report. Stamford,

CT: Gartner.

Melnyk, S.A., Stewart, D.M. and Swink, M. (2004) Metrics and Performance

Measurement in Operations Management: Dealing With the Metrics Maze. Journal of

Operations Management 22, 209-217.

Neely, A. (1999) The Performance Measurement Revolution: Why Now and What Next?

International Journal of Operations & Production Management 19, 205-228.


Neely, A., Gregory, M. and Platts, K. (1995) Performance Measurement System Design -

A Literature Review and Research Agenda. International Journal of Operations &

Production Management 15, 80-116.

Neely, A., Mills, J., Platts, K., Richards, H., Gregory, M., Bourne, M. and Kennerley, M.

(2000) Performance Measurement System Design: Developing and Testing a Process-

Based Approach. International Journal of Operations & Production Management 20,

1119-1145.

Neely, A., Richards, H., Mills, J., Platts, K. and Bourne, M. (1997) Designing

Performance Measures: A Structured Approach. International Journal of Operations

& Production Management 17, 1131-1153.

Neely, A.D., Adams, C.A. and Kennerley, M. (2002a) The Performance Prism: The

Scorecard for Measuring and Managing Stakeholder Success. London:

Financial Times/Prentice Hall.

Neely, A.D., Bourne, M.C.S., Mills, J.F., Platts, K.W. and Richards, A.H. (2002b)

Getting the Measure of your Business. Cambridge: Cambridge University Press.

Neely, A.D., Kennerley, M. and Walters, A. (Eds.) (2004) Business Performance Measurement: What is the State of the Art in Large US Firms? Edinburgh, Scotland.

Norreklit, H. (2000) The Balance on the Balanced Scorecard: A Critical Analysis of Some

of Its Assumptions. Management Accounting Research 11, 65-88.

Norreklit, H. (2003) The Balanced Scorecard: What Is the Score? A Rhetorical Analysis

of the Balanced Scorecard. Accounting, Organisations and Society 28, 591-619.

Ridgway, V.F. (1956) Dysfunctional Consequences of Performance Measurements.

Administrative Science Quarterly 241-247.

Rigby, D. (2001) Management Tools and Techniques: A Survey. California Management

Review 43, 139-160.

Rigby, D. (2005) Management Tools 2005. Bain & Co.

Sandt, J., Schaeffer, U. and Weber, J. (2001) Balanced Performance Measurement

Systems and Manager Satisfaction - Empirical Evidence from a German Study. WHU

- Otto Beisheim Graduate School of Management.

Silk, S. (1998) Automating the Balanced Scorecard. Management Accounting 79, 38-44.

Speckbacher, G., Bischof, J. and Pfeiffer, T. (2003) A Descriptive Analysis on the

Implementation of Balanced Scorecards in German-Speaking Countries. Management

Accounting Research 14, 361-387.

Vasconcellos, J. (1988) The Impact of Key Success Factors on Company Performance.

Long Range Planning 21, 56-64.