
SPECTRUM
Volume 1, Issue 2: ROI and Market Research
2013
Putting Research in Context

ROI and Market Research

Stephen Phillips - Chief Happiness Officer and Founder, Spring Research
[email protected]
020 7428 7379

In our second issue of SPECTRUM - our quarterly research project on the client context - Spring founder Steve Phillips explores ROI and Market Research.

Welcome back to the second issue of SPECTRUM magazine! For those of you who weren't with us for issue 1, Spectrum is a series of quarterly research projects that we've begun in order to understand the client context. Our aim is to provide you with some insight on both what your peers in other categories are experiencing, and what's working well for them.

Issue 2 concerns ROI; a key aspiration for most of us, it seems, but not one which readily lends itself to measuring the service research provides to the business. The challenges faced by research departments across the range of clients we spoke to seemed fairly consistent, but actual solutions were far scarcer. However, as a result of this study, we've put together what we think constitutes a good base for measuring the success of research. Read on to see what this entails...

As always, your comments and opinions stimulate some of the best insights, so we'd love to hear from you. In fact, if you'd like to be interviewed for our next project, please get in touch. We'd love you to help us continue to make SPECTRUM a success!

Where to find things:

□ Introduction and Contact Details
□ The Holy Grail of Insight Departments
□ ROI in the Business
□ The Critical Question
□ Research as ROI Input
□ Picking a Number
□ Enter the Success Metric
□ So, That'll Have to Do?
□ Focusing our Gaze
□ Routes for Best Practice
□ A Means and a Motivation

The Holy Grail of Insight Departments

The initial enthusiasm from clients to take part in this issue of Spectrum was overwhelming. "Great", we thought, "We'll have umpteen research ROI tools lined up before we know it".

After the first few interviews, however, the real reason for our generous respondents' eagerness became obvious. Most people just wanted to know how everyone else did it. How were other research teams managing to draw the myriad outcomes of a research department into a clear measurement that offered some support in the regular need to justify budgets and build reputation in the corporate environment?

ROI in the Business

It was clear that within the wider business, ROI was being discussed and indeed focused on. But precise calculation seemed to be a fairly rare phenomenon, even for marketing in general.

Building a new factory naturally lends itself to a workable definition of ROI – 'it costs x, will increase capacity by y, and we estimate a sales uplift of z'. Like any modelling, it's full of assumptions, but there's enough clarity in the spend to keep the Finance department happy and confident. The question of 'how far we're prepared to put our finger in the air for a number' lies at the heart of ROI calculation, and the evidence suggests that other departments are far more confident about it than Research or even Marketing.
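To put some purely illustrative numbers on that factory logic (these figures are invented for the sake of example, not drawn from our interviews): if the factory costs £10m (x) and the extra capacity (y) is expected to deliver a sales uplift worth £2m a year in incremental profit (z), the simple annual ROI is £2m ÷ £10m = 20%, with payback in roughly five years – exactly the kind of back-of-envelope figure that keeps Finance comfortable.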

Incidentally, our favourite quote from the interviews? "Who measures the ROI of the Finance department?!".

Which leads us to...

The Critical Question

How much should the research budget be? Does the return on investment for our labours justify a large increase in the research budget? Many of us researchers would believe so.

At a recent ISBA conference, we presented the genesis of this article to a room of market research procurement experts. The questions raised by the room in the subsequent discussion probably show where the industry currently stands in terms of how big the market research budget should be:

Delegate 1: I've always used 5% of the marketing budget as a rule of thumb
Delegate 2: Really?! I thought it was supposed to be 1%?
Delegate 3: As low as 5%?
And so on, and so on...

Even the fiscal hawks of the procurement department don't have a reliable guide to how big the research budget should be. And the magnitude of the uncertainty was clearly not about fine-tuning budgets – not the difference between £100 and £105, but about whether it should be £100 or £200. The result was a consensus that the size of the research budget was, in most cases, a fairly arbitrary decision.

Actually, arbitrary is the wrong word, as what both procurement and our respondents concluded was that the size of the research budget was a political decision. The most cited cause of a significant change in research spend was the arrival of a new CEO and the different priorities – not to mention experience of (or even, whisper it, respect for) research – that this so often brought to a business.

This introduces our first requirement of our research ROI measure:

Objective #1
We need something based on provable success to use in discussions with Finance

Research as ROI Input

It's not as though research and ROI are never mentioned in the same breath. Indeed, research often provides key variables for calculating the return on marketing activity – for example brand awareness, or favourability. But before we think about research's contribution, we have to consider several reasons why measuring the ROI of marketing like this can run into significant difficulties.

To begin with, there's a huge discrepancy in the basic size of elements of the marketing budget; it's dominated by TV. This means that during econometric modelling most other fluctuations in spend don't register enough of an effect to assess their impact. So marketers, even those lucky enough to have great sales data, typically can't assess the ROI of much of their marketing spend with any accuracy.

Often, instead of these complex econometrics which look backwards (and from the sounds of it, mostly at TV), marketing departments rely on simpler research measures to determine effectiveness.

Picking a Number

Isolating the value of advertising testing can be complicated. When we think about a creative test, for example, consider the value of these 3 research outcomes:

1. The advert is great, don't change a thing
2. The advert is good, make some changes
3. The advert is awful, don't run it

In outcome 1, the campaign is a success but research has had no impact, so you could say there is no ROI. In outcome 2 you could argue that research has improved the effectiveness of the campaign, and so claim perhaps 15% of the campaign's positive brand impact – a reasonable potential ROI measure. But in outcome 3, you could say the research has saved the company the entire media spend, which is a lot of money and so a great ROI. One test, the same research, but 3 very different ROIs.
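To see how differently those three outcomes price the same piece of work, take a purely hypothetical illustration (none of these figures come from the study): suppose the creative test costs £40,000, the planned media spend is £2m, and the campaign's positive brand impact is judged to be worth £1m. In outcome 1 nothing changes, so ROI = £0 ÷ £40,000 = 0. In outcome 2, claiming 15% of the brand impact gives ROI = (15% x £1,000,000) ÷ £40,000 ≈ 3.75. In outcome 3 the entire media spend is saved, so ROI = £2,000,000 ÷ £40,000 = 50. Identical research, identical cost, and a return anywhere between zero and fifty times the fee.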

Of course, there are a number of caveats – not least the degree to which developers and creatives actually embrace the research. This highlights one of Spring's mottos: value is always co-created, it's never just supplied.

Enter... the Success Metric

That's not to say that measures don't exist for the success of research projects. Two types of success metrics emerged, one with a quantitative feel, the other somewhat qualitative. Simply put, you either focus on inputs or outputs.

Input is easy; it's about procurement. 'Have we saved money compared to the last time we conducted a study like this?', 'Have I driven the cost down through a tender process?', or 'Have I come in under budget?'.

Output is harder, but people were still attempting to judge projects a success or failure based on 'softer' questions like 'Is the marketing person happy with the results?', 'Did I get more insight than I expected?', or 'Did it feel like a high-quality project?'.

focus           INPUTS                       OUTPUTS
feel            hard                         soft
measures        • Cost vs. competitors       • Perceived quality
                • Cost vs. last time         • Stakeholder satisfaction
                • Cost vs. budget            • Insight vs. expectations
mindset         "We go out to pitch"         "Is the marketing person happy?"

So, That'll Have to Do?

Perhaps it's not perfect, then, but the impact of marketing can be measured in some way. What's more, the contribution of research to that success can clearly be felt. With experience, marketers can even begin to estimate what research is contributing to the success – maybe a 15% premium on the advertising spend, or 5% of the value of a product's eventual success. That, in turn, is a starting point for discussing the research budget.

These figures, however, remain anecdotal – speculative even – and limited to research which tests something with an identifiable short-term impact. They favour the testing of executions or prototypes.

That introduces the wider problem for measuring the contribution of research – it's not just about testing.

What about more strategic, development research? Admirably, Procter & Gamble CEO Robert McDonald regularly visits consumers' homes in emerging markets to get closer to his audience, but how would you even begin to attach a value and analyse the benefit of this type of research?

And that's for one of the purest forms of research – ethnographically connecting stakeholders to their consumers. Ever since Theodore Levitt coined the phrase 'marketing myopia', the industry has understood that there's huge value in getting mindsets out of the office and into the real world, but calculating ROI on it throws up a million and one problems – and usually no answers.

So where does that leave us? Well, at this point, it seems like ROI of testing research is flawed, but possible, and ROI of development research is nigh on impossible.

Fortunately, the input/output model does identify a space for development research.

Testing research has a very natural fit with inputs, and the buying decision usually comes down to choosing how much to spend. It also provides a fairly clear opportunity for measuring the success of a project – once the 'hygiene factors' of data delivery and clarity are met, the focus is on cost reduction.

Development research, however, is much harder. These projects tend to be high-profile, often with a senior internal stakeholder, which inevitably moves the focus from costs to impact.

type            testing                      development
focus           INPUTS                       OUTPUTS
implications    • 'Buy' focus                • High-level sponsor
                • Try to reduce budget       • Less input scrutiny –
                                               more output focus

What this model does clarify, though, is a second requirement of our research ROI measure:

Objective #2
It has to work for Development research as well as Testing research

Focusing our Gaze:
Departments, People, or Projects?

Success metrics are clearly not limited to a focus on projects; departments and people can also be subject to evaluation.

In fact, internal reputation was commonly cited as the best indication of the success of research. This was subject to both hard and soft measurement, from recording 'hits' on research documents to 'getting in the right rooms' as an internal consultant.

The success of people, of course, is one of the most routine processes of measurement in any organisation. Personal reviews for some researchers include a score for 'impact on decisions' – although, again, this is subjectively calculated. To complicate matters, it can actually be a better reflection of the strength of someone's ability to get their point across than (necessarily) the quality of the insight they've discovered.

So far, the ROI of research seems distant from the technical performance of the department, or the research projects themselves.

Altogether, this gives us our third and final requirement of a research ROI measure:

Objective #3
It must work as evidence of success at a project, person, and departmental level

Routes for Best Practice

Before we consider some solutions for some kind of ROI metric for research outcomes, let's recap what we need from it:

1. Something concrete to take to the Finance department
2. Something that works for development research as well as testing research
3. Something that reflects on the person, the project, and the department

A veneer of economic rigour should improve a research department's ability to make a financially viable argument in budget discussions. That means that whatever solution we adopt, it needs at least a quantitative element or component.

We don't expect to see this next idea happen in the near future, but at a macro level we'd love ESOMAR or the MRS to commit to a review of key clients across the industry. With a methodical assessment of sales growth by research spend (with all of the complexity of time-lag, defining growth etc. which that entails), the industry would at last have a normative base to begin conversations about the optimum size of the research department. What's more, it would give an evidence-based rule of thumb for research spend by marketing budget – whether that's 1%, 5%, or perhaps even higher...

Measuring the Success of Development Research

On a project-by-project basis, we propose a mixed methodology.

Firstly, there should always be a discussion on the value key stakeholders got from the project. This lets research teams learn not just from the research data, but from the entire process of internal data delivery. Just as importantly, since this in itself raises the profile of the research team, it shows a department keen to enhance and develop the value it delivers. This builds reputation within the business.

This can be done with some standardised metrics such as:

• How much the research added to their understanding of the consumer (from high to low)
• How much the research gave them confidence to make the next decision
• How valuable the research was compared to the cost

Our second suggestion is a little bolder: to capture a quantitative valuation from key marketing stakeholders. This needs to record what these stakeholders believe the insight impact was on the project. This can be assessed in relation to the overall project itself and measured as an improvement in the project:

Impact Evaluation
Q. To what extent did the research impact on the project?
a) Made no impact
b) Improved the project outcome by approx. 5%
c) Improved the project outcome by approx. 10%
d) Improved the project outcome by approx. 20%

ROI can then be assessed by calculating the role of research against the size of the project:

ROI = (% impact on project x value of project) / cost of research

Of course, you may not know the value of the project up front (it may not be a success, of course), but as a proxy you can use the cost of the project, which probably is known.
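As a purely illustrative worked example (figures invented, not taken from the study): if a stakeholder judges that the research improved the project outcome by approx. 10%, the project is valued – or, using the proxy above, costed – at £1,500,000, and the research cost £50,000, then ROI = (10% x £1,500,000) / £50,000 = 3, a threefold return on the research spend. Repeating the same sum across projects produces the comparable figures discussed below.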

This obviously gives us a financial metric to show during budget discussions, but it also allows us to compare across the research projects conducted, and hence better allocate budgets.

These metrics let us measure the outcomes of the research. However, one party has so far escaped scrutiny: the research supplier. Since responsibility for the technical expertise of the research lies with the agency, we believe both clients and suppliers would (and typically do) benefit from reviewing the two most important aspects: timeliness of delivery, and quality of data delivered.

Valuing Research:
A Means and a Motivation

We believe that this does provide a methodical way to prove the value of research to the business – and that means all research, not just research with an obvious short-term impact.

At a practical level, it provides a measure of impact on stakeholders outside of the research department, which lends it credibility for interrogation by Finance, and it also adds a measurement of output (impact, insight, or assisting decision-making) to research, relieving it of purely procurement-focused 'input' measurements like cost or savings.

Finally, through the focus on the impact of our research and the process of collecting the data from stakeholders, it's evidence of the importance research places on being involved in and impacting on decision-making.

This, of course, means these success metrics perform a dual function – they contribute to both a researcher's career, and the standing of research within the business and the wider corporate world.

Does it justify the investment? We think it's a good start.

www.springresearch.com

SPRING RESEARCH, BEDFORD HOUSE, 125-133 CAMDEN HIGH STREET, LONDON NW1 7JR, UNITED KINGDOM