
FINDING EUREKA
The new science of online testing.

2016


TABLE OF CONTENTS

Educate your guess.

Take your ideas to the digital lab.

Put results under the microscope.

Embrace the science of failure.

Tell the world.



Isaac Newton and gravity. Marie Curie and radiation. Albert Einstein and relativity. The world’s greatest discoveries started with an inkling. A vague “what if” that sparked into a bright idea. A hypothesis that was nudged to life through a series of experiments.

Breakthroughs like these are almost always based on rigorous tests that follow the scientific method. As information, knowledge, and technology progress, the experiments build on one another, becoming even more sophisticated, inspiring new discoveries, and speeding scientific advancements.

You can apply a similar method to your online testing. A continuous stream of recent technological advancements has made it possible to run tests informed by real data. With the right blend of analytics, testing, and automation, your marketing teams can get to eureka faster than ever before. And you can do it in increasingly sophisticated ways, moving far beyond A/B testing.

The opportunities for marketing discovery are unlimited. With the right testing strategy, you can experiment with everything from site or app navigation and the overall experience flow to smaller, subtler elements, like headlines, calls to action, and images. Then, as you evaluate each of these components in your marketing mix, you can pinpoint exactly which factors and combinations are having the biggest impact—and for which audiences. Even better, you can automate much of the process.

All of this means that you can continually personalize and fine-tune your interactions based on what you learn from your data and test results, so you’ll keep getting better at serving the right experience to the right person.

Just as scientists rely on the scientific method to run the most effective and informative tests, you can employ a step-by-step, data-driven method to get the best possible results from your online testing. Here are five steps that scientists use to reach new discoveries—and that you should build into your online testing practice, too.


1. EDUCATE your guess.

Companies often fall into the trap of letting their “HiPPOs” run the show. This term, coined at Amazon, stands for the “highest paid person’s opinion.” Maybe your HiPPO tries to make the call on what should be changed on your company’s website, based on little more than a gut feeling. Even when office politics don’t influence the decision-making process, online testing doesn’t always hold itself to basic standards for valid experimentation. Instead, most marketers have limited themselves by basing their testing on hunches, basic background knowledge, or personal experiences.

But science proves there’s a better way. From Newton’s Law of Gravitation to Darwin’s Theory of Evolution, every revolutionary scientific discovery has started with an educated guess based on evidence—a hypothesis that can be tested. Online testing should follow the same process, and thanks to modern technology, it can. Using strong data to back up your hypotheses and subsequent tests can help you break out of that “decision-making by HiPPO” mentality.


Collect your data. “All testing starts with a bright idea, but you need to validate those ideas with data,” says Ryan Pizzuto, manager of web testing and optimization for T-Mobile.1 The more data you have about your audiences, the better you’ll be able to plan your test. Using advanced analytics techniques, you can quickly drill down into detailed, real-time data to better understand your customers’ needs. And with algorithms and machine learning, you can further refine your testing criteria, so your hypotheses are based on solid data, not hunches.

Take Symantec. Using integrated reporting and analysis, the company is able to form solid hypotheses about which services and products customers want and which web navigation, content, and offers will drive higher conversion and sales. “By leveraging analytics in conjunction with testing, we can formulate and validate data-driven hypotheses and deliver the best possible customer experience,” says Peter McRae, team manager of optimization for Symantec.2 The team used this approach to test layouts for the company website, resulting in a new landing page design that garnered a 2X lift in upsells.

Similarly, your data can offer insight into those areas of the customer experience that you can improve. A good starting place is to look at the high-traffic areas of your website or app, since these will allow you to gather data faster. Watch for pages with low conversion rates or high drop-off rates that can be improved. Once you know exactly who your visitors are, you’ll also have a better sense of where your greatest opportunities lie—and you can build hypotheses around that data.



Identify your goals. Your key performance indicators, which can help inform your conversion goals, are the metrics you’ll use to decide whether or not your testing variation is more successful than the original version. These metrics can include a wide range of data points, including click-throughs and traffic, app performance, or the lifetime value of a customer.

“It’s never about a hunch. It’s about looking at the key performance indicators that you care about as a business, and the analytics and data that you have available,” explains Jeff Fuhriman, a former senior manager of demand and web strategy and conversion optimization for Adobe. “Focus on answering questions about these indicators. Once you dig into that, you’re likely to find lots of opportunities for improvement.”3

Generate your hypothesis. Once you’ve identified a goal, it’s time to come up with a hypothesis. Typically, this involves predicting why you think a new solution will perform better than the current version. For example, say you have a landing page where customers can order your product. There’s a form to capture names, email addresses, and other information. When you look at the data, you see that for some reason, potential customers are clicking through to learn more, but leaving before they buy anything.

Equipped with this data-driven insight about your visitors’ behaviors, you might wonder if your form needs more information on it—so customers don’t have to click away from the page to learn more. Your hypothesis could be, “If we add more details on the landing page and tweak the content to be more benefits-oriented, the conversion rate will go up.” With a hypothesis based on solid data, you’re now ready to build your experiment.


2. TAKE YOUR IDEAS to the digital lab.

Equipped with a testable hypothesis based on evidence, you’re ready for the fun part—testing it. Although legend has it that Galileo dropped two balls from the Leaning Tower of Pisa to test how gravity worked, he actually had a much more refined way to test objects in motion—and it was safer for bystanders below! Instead of merely dropping the balls, he designed a special plank with a groove etched in the center and rolled balls along it. By timing how quickly the balls traveled down the board, he created a more precise way to test his hypothesis.

Having a well-designed experiment right from the start will make a huge difference in your own testing success. Here are some core elements you should consider when choosing and developing the ideal testing plan for your company.


Map your goals and your ability to test them. Part of designing any valid experiment is planning. Now that you have a list of testing ideas and desired outcomes, prioritize them in order of the expected impact and the difficulty of implementation. This is also a good time to evaluate your ability to carry out your testing efforts internally.

Start by asking yourself what systems currently exist within your organization. Who will manage the testing process from your end? Does your organization have the know-how, or is training required? How will you integrate your analytics and testing capabilities with your marketing tools? Last but not least, is your organization ready for these shifts? If your organization already puts an emphasis on continual improvement, your testing efforts are much more likely to be supported.

In addition to the required technology, any successful optimization and testing plan requires executive-level sponsorship as well as enterprise-wide buy-in and participation. That means seamless collaboration among multiple stakeholders—content creators, customer experience designers, developers, test analysts, and your marketing teams.


Understand your testing options. To create a valid, data-driven test, start by considering the types of tests available, from basic to the most sophisticated.

Know your tests.

A/B
Use: Test two versions of a single digital experience.
Advantages: Simple to design and implement. Results are easy to analyze.
Challenges: Ideas must be tested one at a time.

A/Bn
Use: Test unlimited versions of a single digital experience.
Advantages: Provides more options than standard A/B tests, but the results are still fairly easy to analyze.
Challenges: Cannot test multiple elements on a single digital property, such as variable headlines, images, and logos.

Multivariate
Use: Test multiple elements of a digital experience at the same time to discover the best possible combination.
Advantages: Makes it easy to create unlimited “recipes” of combined variants within a single digital experience.
Challenges: Must have a statistically valid sample of visitors to run the tests.

Multi-channel
Use: Test different aspects of the customer journey across channels.
Advantages: Provides a clear view of customer behaviors and preferences across every digital experience.
Challenges: More challenging to design tests and analyze the results.

Targeted
Use: Test various audience segments to see how they respond to different versions.
Advantages: Allows you to precisely pinpoint what specific visitors want, so you can deliver more personalized experiences.
Challenges: Smaller sample sizes make it difficult to statistically validate results.


A/B testing. This is a good place to start, since it’s easy to quickly gather results. This type of test splits traffic between two different versions of a digital experience, such as a webpage or an app, to determine which one performs better. For example, you can test a chart to see if visitors stay longer on the page, or change your call to action copy to see if one version gets more clicks than the other.

These small adjustments can have a big impact on your ability to detect and deliver the type of content and experiences your customers really want.

Of course, like any good scientist conducting an experiment, you’ll need a control to accurately measure your results as you test various elements of your page. Be sure you’re testing the changes against a static version of the page so you can best measure the effect of the changes.
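In practice, the split is usually deterministic: each visitor is hashed into a bucket so a returning visitor always sees the same version, which keeps the control comparison clean. Here is a minimal sketch of that idea in Python; the test name and visitor IDs are hypothetical.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "cta_copy_test") -> str:
    """Deterministically split traffic 50/50 between the control page
    and one challenger, so each visitor always sees the same version."""
    # Hash the visitor ID together with the test name so different
    # tests bucket the same visitor independently.
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number from 0 to 99
    return "control" if bucket < 50 else "variant_b"

print(assign_variant("visitor-123"))  # same answer on every call
print(assign_variant("visitor-456"))
```

Because the bucket is derived from the visitor ID itself, no per-visitor state has to be stored to keep the experience consistent across visits.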

“This type of testing is useless without analytics that provide detailed results,” says Gina Casagrande, personalization evangelist at Adobe. “Optimal A/B testing involves analyzing your results as far down your sales funnel as possible—from visits to actual sales.”4

Example: Let the tests begin.
This chart illustrates just a few of the many marketing experiments you can conduct.

Layout: Site UX • Templates • Charts & tabs • Element existence • Hyperlinks vs. buttons • Length of web pages

Content: Messaging • Call to action • Benefits vs. features vs. branding • Gated content vs. non-gated content • Email subject lines • Use of special offers and discounts

Creative: Look & feel • Imagery • Photography vs. videos • Images of people vs. images of products • Static images vs. videos

Functionality: Experience flow • Navigation • Sign in & checkout • Landing pages • Forms


A/Bn testing. As another form of A/B testing, this approach compares multiple versions of a webpage to identify the best performing variation. The “n” refers to the unknown number of variations being tested.

Because the original web page is the control, it doesn’t change. But you may, for instance, have up to three additional variations of your website that do change—whether it’s three different font choices, three different placements for your call to action, or three different layouts for your home page content. Traffic is split in percentages among the three variations and the control; in this case, 25 percent of traffic would be sent to each. Then at the end of the test, you should be able to see the number of conversions for each page. From there, you can determine the best variation. Keep in mind that because your traffic is split, with smaller test groups receiving the variations, you’ll have to wait longer to get statistically valid results.
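One common way to check that a variation’s lead over the control is more than chance is a two-proportion z-test. Below is a minimal sketch using only the Python standard library; the conversion counts are made up.

```python
from math import erfc, sqrt

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return z, p_value

# Hypothetical counts: control converts 120 of 2,500 visitors,
# the variation 158 of 2,500.
z, p = conversion_z_test(120, 2500, 158, 2500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real lift
```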

A/B and A/Bn testing are typically preliminary tests to conduct before moving on to more sophisticated options, like multivariate testing. As you mature your testing practices, you’ll be able to drill your testing down even further to extract insights that will help you deliver more targeted experiences to your customers.


Multivariate testing (MVT). When you’re ready to move beyond the testing basics, multivariate testing allows you to throw a mixture of different elements into your virtual test tube to come up with the winning formula. With multivariate testing, you can change multiple elements at the same time to determine which combination of variations performs best. Multivariate looks at all the elements that matter for conversion—size of logo, navigation, header image, copy, footer, and so on—and tests the combined variants on that page. It takes all of those elements and puts them into recipes that change the whole experience in as many ways as possible. Then it spits out the best possible combination.

“MVT takes the black and white comparisons of the A/B test and complicates them by mixing and matching variables,” says Jamie Brighton, Adobe product marketing manager. “By increasing the possible variations of any given page, you can get more accurate information about what works best.”5 In other words, while A/B tests are usually performed to find out the better of two content variations, multivariate tests the effectiveness of infinite combinations—so you can really zero in on what’s moving the needle.

The only limit to the number of variables and combinations you test is the amount of time it takes to get a statistically valid sample of visitors. In other words, to conduct multivariate tests, you need to have enough traffic. Traffic estimation tools can help you determine if your site is getting enough hits to run your multivariate tests, and these tools can even help you adjust your tests based on your traffic level.
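To see how quickly recipes multiply, here is a short sketch that enumerates every combination of a few hypothetical page elements; the element names and variants are stand-ins for your own.

```python
from itertools import product

# Hypothetical page elements and their variants.
elements = {
    "headline": ["Save big today", "New for 2016", "Limited offer"],
    "hero_image": ["people", "product"],
    "cta_color": ["orange", "blue"],
}

# Full-factorial multivariate testing: every combination is a recipe.
recipes = [dict(zip(elements, combo)) for combo in product(*elements.values())]

print(len(recipes))  # 3 x 2 x 2 = 12 recipes to serve and compare
print(recipes[0])    # the first recipe: headline 1 + people + orange
```

Each added element or variant multiplies the recipe count, which is exactly why the traffic requirement grows so fast.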

Take the shortcut to statistically valid results.

The Taguchi method, developed by well-known statistician Dr. Genichi Taguchi, offers a shortcut to multivariate testing that allows you to boil down all the possible combinations into the recipes that work best together. Basically, you can get the accurate results you’re looking for with far lower traffic requirements.

Rather than having to test all possible combinations, this method tests pairs of combinations to identify which groupings work best together, saving time and resources. The Taguchi method is best used when there’s an intermediate number of variables—anywhere from three to 50.
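The Taguchi method proper is built on orthogonal arrays, but a greedy pairwise (all-pairs) plan gives a feel for the same saving: it keeps only the recipes that cover a not-yet-tested pair of variant values. The sketch below assumes two-way interactions are what matter; it illustrates the idea rather than reproducing Taguchi’s actual procedure.

```python
from itertools import combinations, product

def pairwise_plan(elements: dict) -> list:
    """Greedily pick recipes until every pair of variant values across
    every pair of elements appears in at least one chosen recipe."""
    names = list(elements)
    # All (element, value) pairings that must co-occur in some recipe.
    uncovered = {
        ((f1, v1), (f2, v2))
        for f1, f2 in combinations(names, 2)
        for v1 in elements[f1]
        for v2 in elements[f2]
    }
    plan = []
    for combo in product(*elements.values()):  # walk the full factorial
        recipe = dict(zip(names, combo))
        covers = {
            ((f1, recipe[f1]), (f2, recipe[f2]))
            for f1, f2 in combinations(names, 2)
        }
        if covers & uncovered:  # keep only recipes that add new coverage
            plan.append(recipe)
            uncovered -= covers
        if not uncovered:
            break
    return plan

# Three hypothetical elements: the 3 x 2 x 2 = 12 full-factorial recipes
# collapse to a handful, yet every two-way pairing is still tested.
plan = pairwise_plan({
    "headline": ["A", "B", "C"],
    "hero_image": ["people", "product"],
    "cta_color": ["orange", "blue"],
})
print(len(plan))
```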


For example, if you’re testing a large number of combinations and your conversion rate and impressions are too low, a traffic estimation tool can show you exactly how long the test will need to run to be successful. The tool can also suggest a lower number of combinations, so you can run the test for fewer days. The more combinations you test, the higher your traffic requirements will be. So be selective about the combinations you test.
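A rough version of what a traffic estimation tool computes can be written down directly, using a common sample-size rule of thumb (about 80% power at a 5% significance level); all the inputs below are hypothetical.

```python
from math import ceil

def days_to_run(baseline_rate: float, min_relative_lift: float,
                num_recipes: int, daily_visitors: int) -> int:
    """Rough test-duration estimate using the approximation
    n ~ 16 * p * (1 - p) / delta^2 visitors per variant."""
    delta = baseline_rate * min_relative_lift   # absolute lift to detect
    n_per_recipe = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    return ceil(n_per_recipe * num_recipes / daily_visitors)

# Hypothetical: 3% baseline conversion, want to detect a 10% relative
# lift, 8 recipes in the test, 5,000 visitors a day.
print(days_to_run(0.03, 0.10, 8, 5_000), "days")  # roughly 83 days
print(days_to_run(0.03, 0.10, 4, 5_000), "days")  # halve the recipes,
                                                  # roughly halve the wait
```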

It’s also good to keep in mind that multivariate and A/B testing aren’t mutually exclusive. Once your multivariate testing has suggested which page variations are having the biggest impact, you can follow up with A/B testing to further refine your winning combination. For instance, if a page combination with a punchy headline and prominent video rises to the top of your multivariate heap, you can follow up by A/B testing more punchy headlines to determine which one works best.

Examples of multivariate tests include the following:

• Testing text and visual elements on a webpage together
• Testing text and color of a call to action button together
• Testing the number of form fields and call to action text together

Lift conversions by 20% with multivariate testing.

Marketers at the online investing firm Scottrade used multivariate testing to test several landing page themes during a three-month period. The goal was to urge prospects to open new accounts online. So the firm executed a number of multivariate tests on its landing page that used different design elements.

The benefits were immediately evident. By attaching a select set of product-based keywords to a variety of landing page themes, Scottrade was able to quickly see the winning landing page combinations broken down by keyword. “Thanks to the changes we made to our landing page, we saw a 20% lift in conversions,” says Bill Dehlendorf, interactive advertising analyst at Scottrade.6


Multi-channel testing. A mature enterprise-class online testing platform must be able to support A/B and multivariate tests in addition to collecting and analyzing data across new and emerging channels. Multi-channel testing makes this possible.

One way to adopt a multi-channel testing approach is to simply expand your testing beyond your website. While your site offers plenty of opportunities to learn what visitors respond to most, there’s even more value in running tests across additional channels, so you can create truly engaging experiences from search to email to mobile and beyond.

To take multi-channel testing even further, you can tie together an array of digital experiences and test them across the whole customer journey. For example, you may want to look at how your customers move from display ads to your web and mobile channels, and even how their behaviors differ from mobile websites to mobile apps. Multi-channel testing enables this deep examination of your customers’ complex digital actions—actually allowing you to test whole conversion paths—but it takes multi-channel analytics to pull it off.

“Integrated data is key to making sure you have a 360-degree view of the customer journey,” says Bridgette Darling, product marketing manager for Adobe. “If you have your data in silos, it’s tough to know what the customer is doing across all channels.”7

With data in silos, it’s also nearly impossible to run multi-channel tests. But once you’re equipped with the ability to gather and analyze all your disparate sources of data, you can begin to understand how your customers use various channels. And you can test different ways to reach them wherever they are. Perhaps you want to look at the best moment in the customer journey to offer a discount. Or when it makes the most sense to throw in an upsell. When you test across channels, you can discover the best ways to encourage customers to take the next step.

For a relatively simple way to try this type of multi-channel testing, consider testing an email-to-web experience. What inspires customers to open your email? And to click through to your website? Once they’re on your site, what’s the fastest path to conversion? By testing within the email and then tying that test to a web test, you can understand visitor paths across both channels, and create orchestrated experiences that inspire customers to take action.
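Here is a minimal sketch of that tie-in: joining email-test events to web-test events on a shared customer ID so you can follow each path across both channels. All field names and records are hypothetical.

```python
# Hypothetical event records from two channels, keyed by customer ID.
email_events = [
    {"customer_id": "c1", "subject_variant": "A", "opened": True},
    {"customer_id": "c2", "subject_variant": "B", "opened": True},
    {"customer_id": "c3", "subject_variant": "B", "opened": False},
]
web_events = [
    {"customer_id": "c1", "landing_variant": "long_form", "converted": True},
    {"customer_id": "c2", "landing_variant": "short_form", "converted": False},
]

# Join on customer ID to reconstruct full email-to-web paths.
web_by_customer = {e["customer_id"]: e for e in web_events}
paths = [
    {**email, **web_by_customer[email["customer_id"]]}
    for email in email_events
    if email["opened"] and email["customer_id"] in web_by_customer
]
for p in paths:
    print(p["subject_variant"], "->", p["landing_variant"],
          "converted:", p["converted"])
```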


Test beyond the website.

Email. You know that you just can’t keep sending irrelevant, unpersonalized messages and expect loyal customers. Testing is key to finding out what’s resonating with customers and when the best time is to send an email.

Mobile. Consider testing the layout, navigational elements, and imagery of your mobile experiences. Also think about taking test lessons from your desktop site and applying them to your mobile site as appropriate.

Display. Test within your display ads based on data collected from your website. You can even use traffic from your display ads as a key segment for your website testing.

Search. Leverage the rich data you have available to test search algorithms, so you can present search result pages and recommendations that drive traffic.


Targeted testing. To take your online testing practices even further, try designing experiments that target specific segments of your audience. This takes a little help from advanced segmentation tools, which allow you to instantly identify differences between visitors and create audience segments for testing. For instance, if you’re running a test on your home page, you could define segments such as repeat or direct visitors, or people coming to you from email. You can then use these groups to better understand how each audience responds to the different variants of your tests.
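A sketch of how those segment definitions might look in code, using hypothetical visit attributes:

```python
def classify_visitor(visit: dict) -> str:
    """Assign a visitor to one audience segment for targeted testing."""
    if visit.get("referrer", "").startswith("email:"):
        return "from_email"
    if visit.get("prior_visits", 0) > 0:
        return "repeat"
    if not visit.get("referrer"):
        return "direct"
    return "other"

print(classify_visitor({"referrer": "email:spring_promo"}))         # from_email
print(classify_visitor({"referrer": "google", "prior_visits": 3}))  # repeat
print(classify_visitor({}))                                         # direct
```

Each test impression can then be logged with its segment label, so results can later be broken out per segment.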

These types of targeted tests lay the foundation for personalization, opening new pathways of discovery as you learn what your customers and prospects really want. “When you’re able to test targeted experiences, you see that those interactions convert better than general, non-targeted experiences,” says Fuhriman.8 And with the results from your targeted tests, you’ll be better able to match the right visitor segments with the right experiences.

You’ll also get better results sooner, enabling deeper insight into areas where you can personalize the customer experience even more. Advanced targeting options include geographic variables, real-time behavioral targeting, and even the ability to integrate offline or internal CRM data for test segmenting.

Keep your experiments from overlapping.

Just as a good scientist knows it’s essential to keep experiments separate from each other for the most accurate results, it’s important that you keep your online tests from overlapping with each other. As your testing program grows, you may easily get to the point where you’re running more than 10 tests at any given point in time. You’ll need to make sure that visitors don’t cross into different tests.

Mutual exclusivity ensures that each visitor participates in only one test so you get the cleanest, most accurate test results possible. This will help you more easily determine which visitor experience was most successful. “Not having the ability to keep visitors in mutually exclusive campaigns can create heartache for analysts as they try to interpret test results,” says Brian Hawkins, a partner at Web Analytics Demystified.9

Organizations often overlook this key piece when evaluating testing solutions, but if they run a lot of tests, they hit this issue within three to six months. Consider implementing mutual exclusivity safeguards into your plan right from the start to avoid this headache as your testing process evolves.
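One simple way to enforce mutual exclusivity is to give each running test its own slice of a shared hash-bucket range, so any visitor can fall into at most one test. A minimal sketch; the test names and allocations are hypothetical.

```python
import hashlib

# Each running test owns a non-overlapping slice of buckets 0-99.
TEST_SLICES = {
    "homepage_hero": range(0, 30),
    "checkout_flow": range(30, 60),
    "pricing_page": range(60, 80),
    # Buckets 80-99 are deliberately held out of all tests.
}

def tests_for_visitor(visitor_id: str) -> list:
    """Return the single test (if any) this visitor participates in."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return [name for name, s in TEST_SLICES.items() if bucket in s]

# Because the slices never overlap, no visitor lands in two tests.
assert all(len(tests_for_visitor(f"v{i}")) <= 1 for i in range(1000))
print(tests_for_visitor("visitor-123"))
```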


Test like a robot. Think like a human.

A mature online testing program typically runs hundreds of tests per month. But you can’t get to that level of maturity without automation. When your time and resources are already limited, having to run multiple testing cycles manually severely curbs the number and types of tests you can run at any one time. Instead, your online testing platform must be able to support a testing process that’s simple, quick, and able to adapt to increased demands. To achieve this, Forrester recommends automation across the entire test and learn cycle, from development and execution, all the way to data analysis and acting on your test results.10

Even more important than increasing your testing capacity and efficiency, automation allows you to improve the accuracy of your tests at every step in the process. With automated testing, you can run algorithms beneath a test to validate the assumptions you made in your hypothesis, or even to uncover audiences you may have overlooked. The algorithms can then quickly pinpoint associations between predicted and observed response behavior, so you can see instantly how your hypothesis holds up under testing.

These algorithms can even self-optimize, adjusting how content, design, or entire experiences are displayed to customers based on up-to-the-minute analyses of testing data. With automation sweating all these details, you’re free to focus your time on designing interesting and valuable experiences for customers—and developing the big-picture strategy behind them.

The algorithmic advantage.

With automated testing and self-learning algorithms, you can do the following:

1. Run hundreds of tests simultaneously while using far fewer resources.

2. Predict which experiences, elements, and offers will appeal to which audience segments.

3. Adapt to changing trends—whether seasonal changes, shifting public sentiment, or new campaigns—and take immediate action via self-optimization.

According to the 2015 Adobe Digital Marketing Survey Report, automation of the testing process alone increased conversion by 15% for all respondents.11


3. PUT RESULTS under the microscope.

Louis Pasteur didn’t make his pasteurization discovery after performing a single experiment. Instead, he systematically analyzed results from multiple experiments, reformulated his hypothesis based on this painstaking analysis, and eventually gave humanity one of its greatest scientific gifts: the establishment of a germ theory that led to numerous vaccines for diseases like cholera and anthrax.

The results from your tests may not save the world, but they can save your digital marketing. Use your test data to determine if your earlier hypothesis was correct and whether your outcomes link back to your original goals. If not, re-tool your hypothesis or throw it out altogether.

Set up your test to demonstrate how different test versions affect metrics such as click-through rates, sign-ups, traffic-to-lead conversions, demo requests, contacts for more information, and ultimately, sales. You’ll want to be able to filter results based on any specific metric or target audience.

By tracking a wide variety of measurements—even beyond the specific goals you’re testing against—you can make sure that you don’t get “false positive” test results that end up negatively impacting your business. For instance, if you’ve developed a test to increase engagement rates, and results show that engagement rates did increase significantly, you’ll want to also measure your conversion rates before you start celebrating. While engaged visitors spend more time on your site and visit more pages, if the experiences take them further away from opportunities to convert—like filling out a lead generation form—then they’re not actually effective. By tracking different types of metrics in every test you conduct, you’ll be able to see the big picture and determine just how winning your test actually is.

Examine your results by segment. Whether you’re running a simple A/B test or an ongoing series of multi-channel tests, you can use segmentation analysis to take a closer look at your test results, so you can understand how each campaign is performing for various audience segments. For example, in the hypothetical example in Figure 1, a marketer tests four different experiences on her website. When looking at the combined results, she sees that overall, experience C is the most effective, with a 4.65% lift in revenue per visitor. But when she digs a bit deeper through segmentation analysis, as shown in Figure 2, she sees that the group of visitors who came to the website via Google actually responded better to experience D, with a 32.06% increase in revenue per visitor. Equipped with this insight, she decides to push experience C to all traffic, except for those visitors coming from Google, who would instead be given experience D.12
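A sketch of this kind of segmentation analysis, mirroring the breakdown shown in Figures 1 and 2 below; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical per-visitor test log: experience served, traffic source,
# and revenue generated.
df = pd.DataFrame({
    "experience": ["A", "B", "C", "D"] * 2,
    "source":     ["google"] * 4 + ["direct"] * 4,
    "revenue":    [55.0, 56.0, 57.0, 72.0, 62.0, 60.0, 64.0, 61.0],
})

# Revenue per visitor by experience, broken down by traffic source.
rpv = df.groupby(["source", "experience"])["revenue"].mean().unstack()
print(rpv)

# Lift of each experience over the control (A) within each segment.
lift = (rpv.div(rpv["A"], axis=0) - 1) * 100
print(lift.round(2))
```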


Figure 1. Visualization of segment performance shows high-value customer segments.

Average Revenue per Visitor. Audience: All Qualified Visitors (default).

Experience A [Control]: 25.12% of traffic, 2,665 visitors, $61.02 RPV (baseline)
Experience B: 24.56% of traffic, 2,606 visitors, $60.40 RPV, -1.01% lift
Experience C: 25.19% of traffic, 2,673 visitors, $63.85 RPV, +4.65% lift
Experience D: 25.13% of traffic, 2,666 visitors, $62.84 RPV, +3.00% lift


Figure 2. Segmentation report shows that Google traffic responded better to Experience D.

Average Revenue per Visitor. Audience: From Google.

Experience A [Control]: 26.15% of traffic, 596 visitors, $55.08 RPV (baseline)
Experience B: 24.48% of traffic, 558 visitors, $56.29 RPV, +2.21% lift
Experience C: 26.02% of traffic, 593 visitors, $56.48 RPV, +2.54% lift
Experience D: 23.34% of traffic, 532 visitors, $72.73 RPV, +32.06% lift


Document your results. Test results lose their meaning if they stay locked in the lab. Report your results to your colleagues across departments so that they can collaborate with you on the results and insights before you begin your next round of experimentation. The reports you create at this point in the process should offer an easy, at-a-glance visual representation of how audiences responded to the different test variations.

Use your reports to analyze content performance from many angles, including specific devices, audiences, and time ranges. Interactive graphs and tables are good ways to illustrate your findings, enabling you to filter data, isolate experiences, and call out audience segments for clear and easy analysis.

Report like a pro.

Your test reports are an opportunity to deepen your data analysis and insights. To make the most of every report you share, make sure they all include the following:

• Graphs and charts with segmentation breakdowns, hour-by-hour results, and multiple success metrics

• High-level overviews, as well as drill-down advanced analysis

• Tools for easily sharing and collaborating on test results


4. EMBRACE the science of failure.

Some of the biggest breakthroughs only came about after multiple attempts and do-overs. Just look at Thomas Edison, who finally got the commercial light bulb right after building over ten thousand prototypes. The scientific method is an ongoing process, and at this phase in your testing cycle, it’s time to circle back to your original hypothesis. If your hypothesis did not prove to be true, repeat the experiment or think of ways to improve your test. Remember that even if none of the tested experiences performed better, your experiment wasn’t a waste of time. A successful test doesn’t always mean a boost in revenue or conversions. “Even if conversion is down, you learn something,” says Pizzuto. “The beauty and curse of optimization is that you’re never done.”13

Kevin Lindsay, director of product marketing for Adobe, agrees. “It’s all about test and learn,” he says. “If the original hypothesis that led us to perform a particular test proved inaccurate, we might say the test failed. But there are always interesting insights or unexpected results that allow us to drill down and go a little deeper.”14

If the test actually supported your hypothesis, then you’re on your way to developing a general theory. But even in those cases when you do have a clear winning result, you can’t stop there. Too often, marketers make the mistake of testing once and then banking on those results, without really understanding what led to the success. Instead, plan to iterate on those results to figure out why the front-runner was ahead. This will lead you to deeper insights that you can use in future campaigns.

T-Mobile is one company that takes advantage of iterative testing to refine online customer experiences and boost conversion. In one simple test, the team learned that users were logging into the website, navigating to the “Contact Us” page, and calling support within the hour. Just by adding two links to the contact page, T-Mobile saw a double-digit increase in self-help and six-figure savings on support calls. “This test was the first of six iterations where we kept finding more and more,” Pizzuto explains.15

Through the practice of iterative testing, you won’t just learn how to improve the customer experience. You’ll also learn how to perfect your testing process. As you get better at testing, you’ll figure out when and how to create tests that answer a specific question or address a particular issue, and when more exploratory testing is needed to discover something you may not already know.


Celebrate the winners. Push out the losers.

If you want to take your refining process a step further, consider auto-optimization. This approach is similar to an A/B test in that you set up a set of distinct experiences. But rather than distributing traffic equally across the various tests, traffic is automatically distributed based on how the varying test experiences are doing. For example, if you apply auto-optimization to three separate experiences and one is doing better than the other two, more traffic in the future will be directed to the better performing experience.

Say you’re running a test on an email that you’re sending to 100,000 subscribers. Results from the first 5,000 to 10,000 people who open it can shift the traffic, so subscribers who haven’t yet opened the email are served the top-performing variation.

“A/B testing gives you a black and white answer—this works better than that. But what can we do to speak more relevantly to our audience?” says Fuhriman. “We auto-optimize to know the winner and push out the losers.”16 Based on what you learn, you can easily identify the best experiences and use that knowledge to give your audiences exactly what they want.
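A common way to implement this kind of auto-optimization is a multi-armed bandit. The sketch below uses Thompson sampling over three hypothetical experiences, shifting traffic toward whichever one is converting best as evidence accumulates; it illustrates the general technique, not any particular product’s algorithm.

```python
import random

# Conversion counts observed so far for three hypothetical experiences.
stats = {"exp_a": {"wins": 30, "losses": 970},
         "exp_b": {"wins": 42, "losses": 958},
         "exp_c": {"wins": 25, "losses": 975}}

def choose_experience() -> str:
    """Thompson sampling: draw a plausible conversion rate for each
    experience from its Beta posterior and serve the highest draw.
    Stronger performers win the draw, and the traffic, more often."""
    draws = {
        name: random.betavariate(s["wins"] + 1, s["losses"] + 1)
        for name, s in stats.items()
    }
    return max(draws, key=draws.get)

def record_result(name: str, converted: bool) -> None:
    """Feed each outcome back so the allocation keeps adapting."""
    stats[name]["wins" if converted else "losses"] += 1

# Over many visitors, the current leader (exp_b) wins most of the traffic
# while the weaker experiences still get occasional exploratory serves.
served = [choose_experience() for _ in range(10_000)]
print({name: served.count(name) for name in stats})
```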

Seek the unexpected.

Galileo kept pushing the boundaries of accepted knowledge by questioning and testing long-established theories. When he turned his telescope to the heavens, he discovered that the moon wasn’t flat and smooth. He also found evidence that the earth and other planets revolve around the sun, just as Copernicus had hypothesized nearly a century earlier.

Your marketing is the same. Looking beyond what most of us would think of as testing opens up a galaxy of infinite possibilities. Daniel Hughes, head of data science at Digitas, explains, “I think clients are unnecessarily limiting themselves in the way they think of testing. When you restrict the range of offers, features, or copy you’re willing to test, you’re less likely to be surprised, and your outcome is predetermined.”17

Hughes gives an example of a telecommunications client who delved further into the data to discover something totally unanticipated. The client noticed that interest and conversions would spike whenever there was heavy snow or wind. Data showed that when bad weather disrupted their satellite signals, customers would check out other providers out of frustration because the movies or games they were streaming kept getting interrupted. “We discovered this [phenomenon] by overlaying different data sets,” explains Hughes, “which led to another set of tests that we wouldn’t have otherwise thought of.”18

By looking for unexpected patterns in your data, you can formulate other hypotheses that lead to new data collection and experiments—and possibly exciting discoveries.


5. TELL the world.

What if Nicolaus Copernicus had never published his theory that the planets revolve around the sun? Or if we had never heard about Stephen Hawking’s black hole theory? The final step in your data-driven method is to share your online testing discoveries far and wide, much like a scientist publishes new theoretical work for the entire scientific community.

When your testing uncovers essential truths, it’s important that everyone in your organization knows about them—including the people who will succeed you, and whom you may not have even met yet. This way, your organization can avoid retesting a hypothesis for which you already know the answers, and your business will benefit as everyone learns from and acts on what you’ve found.

Just as scientific literature follows a standardized structure, it helps to prepare a template for sharing your findings and key learnings.

Consider capturing these elements when documenting your discoveries:

• Hypothesis that was tested
• Description of what was tested
• Visitor segments tested
• Before images
• Images of new content and experiences tested
• KPIs used to measure success
• KPIs for the winning experience
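One lightweight way to standardize those write-ups is a shared record structure; here is a sketch using a Python dataclass, where the field names are just one possible mapping of the list above.

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    """Standard write-up for a completed test, so future teams can see
    what was tried, on whom, and what won."""
    hypothesis: str
    description: str
    segments: list[str]
    before_images: list[str]        # paths to "before" screenshots
    variant_images: list[str]       # screenshots of tested experiences
    success_kpis: dict[str, str]    # KPI name -> how it is measured
    winning_kpis: dict[str, float] = field(default_factory=dict)

record = TestRecord(
    hypothesis="More detail on the landing page will raise conversion.",
    description="Added benefit-oriented copy above the order form.",
    segments=["all qualified visitors", "from Google"],
    before_images=["img/landing_v0.png"],
    variant_images=["img/landing_v1.png"],
    success_kpis={"conversion_rate": "orders / unique visitors"},
    winning_kpis={"conversion_rate_lift_pct": 12.0},
)
print(record.hypothesis)
```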

Over time these findings may bring important discoveries to light, like the ideal number of fields to include in a web-based form for the best submission rates. You can then apply these practices without further testing. But keep in mind that these assumptions should be revisited every so often as the digital world evolves.

To keep moving forward and create a culture of continual testing, make frequent communication a priority. For example, hold a quarterly optimization meeting. Or send a weekly email with the latest testing and targeting updates. A newsletter, a wiki, or a blog are also ways to make test results accessible across your organization. Finally, be sure to invite executives to come and see some of the recent testing that’s been going on. It’s vital that they continue to buy into what you’re doing. And it’s great to have company leaders promoting and celebrating your team’s success when you reach your goals.


Make your breakthrough marketing discovery.

To become a true online testing innovator, you need the right mixture of tools and strategies in your marketing laboratory. As you hone your data-driven method, here’s what you’ll want to keep on the shelf for every single experiment.

1. Integration. You can’t keep your marketing tools in separate boxes. By ensuring your various technologies are integrated, you can create informed hypotheses based on actual data, quickly design and conduct sophisticated tests, and use the results to build personalized campaigns that meet your marketing goals.

2. Automation. It’s time to stop doing what machines can do better. With automated testing and optimization tools, you’re not guessing about what works best for your customers, and you’re not dependent on tedious manual processes. Instead, you’re running multiple tests with ease, adapting to changing variables in real-time, analyzing and reporting results instantaneously, and targeting campaigns based on the best possible experiences for visitors.

3. Collaboration. Even the best optimization and testing tools fall flat if they don’t give marketers the ability to easily share all the knowledge they’ve gained. With integrated reporting and sharing capabilities, you can quickly and effectively communicate your findings to all team members, work across departments to build operational best practices, and set up your organization for long-term success.

Find eureka at the intersection of science and art.

Just as science has done for centuries, your marketing discoveries will give you empirical facts—and perhaps even a proven theory that becomes a best practice. But these tests and data can only take you so far. As you continue to delve deeper into testing strategies and tools, keep in mind that the real answers almost always live at the intersection of human and machine. As James McCormick, principal analyst at Forrester, points out, “You can use data to treat a customer like a customer, not just an interaction in one of your channels.”19 By connecting the science of online testing to the art of human emotion, you’ll ensure that your marketing campaigns soar—and your customers’ hearts sing.


ADOBE can help.

With Adobe Marketing Cloud, you can easily establish a data-driven method for online testing. Equipped with this integrated marketing solution, you’ll be able to design and implement tests that improve the customer experience and boost your bottom-line success.

Using deep, real-time data integrated from Adobe Analytics, you’ll have the building blocks you need to develop an informed hypothesis and predict testing results. Integrate this comprehensive data with Adobe Target, and it’s easy to set up, test, and deliver personalized experiences to your customer segments.

With the testing capabilities of Adobe Target, you’ll be able to run everything from simple A/B tests to complex multi-channel tests—all without the usual coding and setup hassles. When the results come in, the customizable interface makes it easy to see your visitors’ responses to content variations in real time. Then, with one-click optimized content delivery, you can push out winning test content instantaneously or set a test to self-optimize and automatically deliver the best performing content in real time to the right segment. It’s all part of our mission to help you adapt at the speed of business, so you’re always ready to meet your customers' needs.

To learn more about how data can transform your business, visit www.adobe.com/go/data-driven-marketing.


1. “Strength in Numbers: Best practices in data-driven marketing,” Adobe webinar, https://seminars.adobeconnect.com/_a227210/p77ute27cf2/.

2. “Symantec: Achieving more targeted e-commerce,” Adobe Customer Story, February 2014, http://wwwimages.adobe.com/content/dam/Adobe/en/customer-success/pdfs/symantec-case-study.pdf.

3. Jeff Fuhriman, personal interview, December 7, 2015.

4. Gina Casagrande, “What to Consider When A/B Testing a Landing Page,” Adobe Digital Marketing Blog, May 27, 2014, http://blogs.adobe.com/digitalmarketing/personalization/consider-ab-testing-landing-page/.

5. Jamie Brighton, “Personalisation Technology: Testing & Optimisation,” Adobe Digital Marketing Blog, June 29, 2015.

6. “Scottrade attracts customers across channels,” Adobe Customer Story, July 2014, https://www.adobe.com/content/dam/Adobe/en/customer-success/pdfs/scottrade-case-study.pdf.

7. Bridgette Darling, personal interview, December 3, 2015.

8. Jeff Fuhriman, personal interview.

9. “Choosing an online testing and optimization solution,” page 4, Web Analytics Demystified, November 2013.

10. James McCormick, “The Forrester Wave™: Online Testing Platforms, Q3 2015,” Forrester Research, September 22, 2015.

11. “Four Advantages of a Planned Approach To Digital Maturity: 2015 Digital Marketing Survey Results,” July 2015, http://landing.adobe.com/dam/downloads/whitepapers/192320.en.digital-marketing-survey-report.pdf.

12. “Choosing an online testing and optimization solution,” pages 7-8.

13. “Strength in Numbers: Best practices in data-driven marketing.”

14. Ibid.

15. Ibid.

16. Jeff Fuhriman, personal interview.

17. Daniel Hughes, Digitas, personal interview, November 3, 2015.

18. Ibid.

19. “Strength in Numbers: Best practices in data-driven marketing.”

Adobe Marketing Cloud empowers companies to use data to reach and engage customers and prospects with highly personalized marketing content across devices and digital touch points. Eight tightly integrated solutions offer marketers a complete set of marketing technologies that focus on analytics, web and app experience management, testing and targeting, advertising, audience management, video, social engagement, and campaign orchestration. The tie-in with Adobe Creative Cloud makes it easy to quickly activate creative assets across all marketing channels. Thousands of brands worldwide, including two-thirds of Fortune 50 companies, rely on Adobe Marketing Cloud with over 30.4 trillion transactions a year.