
Learning from Obama: Redesigning Analytics

The fundraising challenge

In 2008, the Obama campaign raised $750 million. That would not be enough in 2012.

$750 million? Not impressed.

The fundraising challenge

But fundraising was proving more difficult in 2012 than in 2008:
- The President was less available for fundraising events
- Early in the campaign, the average online donation was half of what it had been in 2008

We had to be smarter, and more innovative.

Overview

A/B testing in Obama's digital department

Lessons learned:
- Don't trust your gut
- Foster a culture of testing
- Make it personal

Winning with A/B Testing

Example: Draft language

What impact can testing have?

Version  Subject line                                  Donors  Money
v1s1     Hey                                           263     $17,646
v1s2     Two things:                                   268     $18,830
v1s3     Your turn                                     276     $22,380
v2s1     Hey                                           300     $17,644
v2s2     My opponent                                   246     $13,795
v2s3     You decide                                    222     $27,185
v3s1     Hey                                           370     $29,976
v3s2     Last night                                    307     $16,945
v3s3     Stand with me today                           381     $25,881
v4s1     Hey                                           444     $25,643
v4s2     This is my last campaign                      369     $24,759
v4s3     [NAME]                                        514     $34,308
v5s1     Hey                                           353     $22,190
v5s2     There won't be many more of these deadlines   273     $22,405
v5s3     What you saw this week                        263     $21,014
v6s1     Hey                                           363     $25,689
v6s2     Let's win.                                    237     $17,154
v6s3     Midnight deadline                             352     $23,244

[Chart: full-send revenue in millions -- actual ($3.7m) vs. projected if sending the average draft vs. projected if sending the worst draft]

$2.2 million additional revenue from sending best draft vs. worst, or $1.5 million additional from sending best vs. average
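To make that arithmetic concrete, here is a minimal sketch (not the campaign's actual tooling) of ranking test-send variants and projecting the full-send gain. The per-variant figures come from the table above; the list-size multiplier and the equal-random-slice assumption are illustrative.

# Sketch: pick the winning draft from test sends and project the gain.
# Figures are from the table above; FULL_LIST_MULTIPLIER is an assumed
# ratio of the full list to one equal random test slice.

TEST_RESULTS = {
    # variant: (subject line, donors, dollars raised in the test send)
    "v4s3": ("[NAME]", 514, 34_308),
    "v3s1": ("Hey", 370, 29_976),
    "v2s3": ("You decide", 222, 27_185),
    "v2s2": ("My opponent", 246, 13_795),
    # ... remaining variants from the table omitted for brevity
}

FULL_LIST_MULTIPLIER = 100  # illustrative assumption

revenue = {v: dollars for v, (_subj, _donors, dollars) in TEST_RESULTS.items()}
best = max(revenue, key=revenue.get)
worst = min(revenue, key=revenue.get)
average = sum(revenue.values()) / len(revenue)

print(f"Best draft: {best} ({TEST_RESULTS[best][0]!r})")
print(f"Projected gain, best vs. worst:   "
      f"${(revenue[best] - revenue[worst]) * FULL_LIST_MULTIPLIER:,.0f}")
print(f"Projected gain, best vs. average: "
      f"${(revenue[best] - average) * FULL_LIST_MULTIPLIER:,.0f}")

With equal test slices, the projected gain is simply the test-send revenue gap scaled up to the full list.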

Test sends

Test every element

After testing drafts and subject lines, we would split the remaining list and run additional tests.

Example: Unsubscribe language

Variation  Recipients  Unsubs  Unsubs/recipient  Significant differences in unsubs/recipient
-          578,994     105     0.018%            None
-          578,814     79      0.014%            Smaller than D4
-          578,620     86      0.015%            Smaller than D4
-          580,507     115     0.020%            Larger than D3 and D4
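The "significant differences" column implies a statistical comparison of unsubscribe rates. The slides don't name the campaign's method, but as a sketch, a hand-rolled two-proportion z-test on the lowest- and highest-rate rows of the table looks like this:

from math import erf, sqrt

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for whether two unsubscribe rates differ."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Lowest vs. highest unsub rates from the table above
z, p = two_proportion_ztest(79, 578_814, 115, 580_507)
print(f"z = {z:.2f}, p = {p:.4f}")  # p ~ 0.01: a real difference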

No, really. Test every element.

We were also constantly running tests in the background via personalized content.

Then, keep testing

Example: how much email should we send? +6 emails per week
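One way to frame the volume question as an experiment is sketched below. The cohort arms, helper names, and numbers are assumptions for illustration, not the campaign's documented design.

import random

# Hypothetical design: randomize subscribers into weekly-frequency
# cohorts, then weigh revenue per subscriber against unsubscribes.
FREQUENCIES = [7, 10, 13]  # assumed arms, including a +6/week arm

def assign_frequency_cohorts(user_ids, seed=2012):
    """Randomize each subscriber into one weekly-frequency cohort."""
    rng = random.Random(seed)
    return {uid: rng.choice(FREQUENCIES) for uid in user_ids}

def cohort_report(results):
    """results maps frequency -> (revenue, subscribers, unsubs)."""
    for freq, (revenue, subs, unsubs) in sorted(results.items()):
        print(f"{freq:>2} emails/week: ${revenue / subs:.2f} per subscriber, "
              f"{100 * unsubs / subs:.3f}% unsubscribed")

# Example with made-up numbers:
cohort_report({7: (250_000, 100_000, 150),
               13: (310_000, 100_000, 420)})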

The results

- Campaign raised over one billion dollars
- Raised over half a billion dollars online
- Over 4 million Americans donated
- Recruited tens of thousands of volunteers, publicized thousands of events and rallies

Did I mention raising >$500 million online? Conservatively, testing probably resulted in ~$200 million in additional revenue.

Lessons


Lesson #1: Don’t Trust Your Gut

- We don’t have all the answers
- Conventional wisdom is often wrong
- Long-held best practices are often wrong
- You are not your audience

There was this thing called the Email Derby… If even the experts are bad at predicting a winning message, it shows just how important testing is.

Experiments: Ugly vs. Pretty

We tried making our emails prettier. That failed. So we asked: what about ugly? Ugly yellow highlighting got us better results.

Lesson #2: Foster a Culture of Testing

The culture of testing

- Check your ego at the door
- Use every opportunity to test something
- Compare against yourself, not against your competitors or “the industry”: Are you doing better this month than last month? Are you doing better than you would have otherwise?

When in doubt, test. In a culture of testing, all questions are answered empirically.

Example: With the ugly yellow highlighting, we worried about the novelty factor. Maybe highlighting would only work for a short time before people started ignoring it (or being irritated by it). We decided to do a multi-stage test across three consecutive emails.

The ugly highlighting experiment

Experimental design:

[Diagram: Groups 1-8 tracked across the first, second, and third emails]

Determined through this test that novelty was indeed a factor.
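Eight groups over three emails is consistent with a full factorial design: every highlighted/plain combination across the three sends (2 × 2 × 2 = 8 sequences). That reading is an assumption, since the slides don't spell the design out; the sketch below shows how such an assignment could be enumerated.

from itertools import product

# Assumed reading of the eight-group design: every highlighted/plain
# combination across three consecutive emails (2**3 = 8 sequences).
SEQUENCES = list(product([True, False], repeat=3))  # True = highlighted

for group, seq in enumerate(SEQUENCES, start=1):
    labels = ["highlight" if h else "plain" for h in seq]
    print(f"Group {group}: " + " -> ".join(labels))

# A novelty effect would show up as highlighting winning on a group's
# first highlighted email but fading once the group has seen it before.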

Lesson #3: Use Data to Make the User Experience More Personal

Big data ≠ big brother

- Testing allows you to listen to your user base
- Let them tell you what they like
- Whether through A/B testing or behavioral segmentation, optimization gives them a better experience
- Usually, the interactions that are the most human are the ones that win

Be human!

In general, we found shorter, less formal emails and subject lines did best. Classic example: “Hey”

When we dropped a mild curse word into a subject line, it usually won: “Hell yes, I like Obamacare” / “Let’s win the damn election” / “Pretty damn cool”

Good segmentation: behavioral

Behavioral segmentation was much more effective than demographic segmentation:
- Donor vs. non-donor
- High-dollar vs. low-dollar
- Volunteer status
- What issues do people say they care about?

After using A/B tests to create a winning message, we could tweak it slightly for various behavioral groups and get better results.
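A minimal sketch of that behavioral routing, with assumed field names and an assumed $200 high-dollar cutoff:

from dataclasses import dataclass, field

@dataclass
class Subscriber:
    # Field names and the $200 cutoff are illustrative assumptions.
    total_donated: float = 0.0
    is_volunteer: bool = False
    issues: set = field(default_factory=set)

def behavioral_segment(s: Subscriber) -> str:
    """Route a subscriber to a message tweak by past behavior."""
    if s.total_donated == 0:
        base = "non_donor"
    elif s.total_donated >= 200:
        base = "high_dollar"
    else:
        base = "low_dollar"
    return base + ("_volunteer" if s.is_volunteer else "")

# Start from the tested winning draft, then apply the small per-segment
# tweak rather than writing each group a message from scratch.
print(behavioral_segment(Subscriber(total_donated=50, is_volunteer=True)))
# -> low_dollar_volunteer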

Experiments: Personalization

Adding “drop-in sentences” that reference people’s past behavior can increase conversion rates.

Example: asking recent donors for more money. The added sentence significantly raised the donation rate, confirmed in several similar experiments.

With drop-in sentence:

…it's going to take a lot more of us to match them.

You stepped up recently to help out -- thank you. We all need to dig a little deeper if we're going to win, so I'm asking you to pitch in again. Will you donate $25 or more today?

Without:

…it's going to take a lot more of us to match them.

Will you donate $25 or more today?
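Mechanically, this reduces to conditionally prepending the tested drop-in sentence. A sketch, with a hypothetical helper name and recent-donor flag (the sentence text is from the example above):

# The drop-in text is the sentence from the winning variant above;
# render_ask and is_recent_donor are hypothetical names.
DROP_IN = ("You stepped up recently to help out -- thank you. "
           "We all need to dig a little deeper if we're going to win, "
           "so I'm asking you to pitch in again. ")
ASK = "Will you donate $25 or more today?"

def render_ask(is_recent_donor: bool) -> str:
    """Insert the behavioral drop-in sentence only for recent donors."""
    return (DROP_IN if is_recent_donor else "") + ASK

print(render_ask(True))   # personalized version (the tested winner)
print(render_ask(False))  # control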

Conclusions

- Test everything, especially your gut instinct
- Foster a culture of testing
- Use data to make it personal
