Jason Puckett

May 9, 2017

Optimize Your Ad Testing Schedule and Structure — Six Best Practices

There comes a time in every advertiser's life when they're executing their ad copy testing plan and aren't seeing the results they want. There is a BIG difference between running a test in your college stats class and implementing an experiment in a high-performing AdWords account. No longer can you simply flip to the back of a textbook to find the answer.

Search engine marketers don't live in a dream world; we have to operate within the confines of performance marketing. This article won't discuss what to test (although we do talk a lot about that elsewhere), but rather how to test.

Whether you're running your initial tests or are several rounds in, here are six best practices that will maximize your returns.

1. Don’t Get Discouraged Early On

Ad copy testing is a process with lots of failure. If your initial round of tests doesn't yield any winners, don’t get discouraged. Create a plan of 5-10 testing iterations before you begin and then execute them all.

If you’ve uncovered a winner and are not seeing a positive impact on your AdWords account, you will probably benefit from the structural changes discussed later on.

2. Define Your Testing Cycles

Ad copy testing should not happen in all campaigns, at all times. Define how often each campaign will receive a test based on the number of campaigns and ad groups in your account. One helpful input is the length of time required to determine a winner for a given campaign: if that time frame is one week, consider a three-week testing cycle.

3. Define Your Testing Structure

Running a separate ad copy test in every ad group is a big no-no. It fragments creative across the account, makes ad units difficult to manage, and after a few iterations it becomes hard to know which creatives are running in your account, and where.

Understand which campaigns and ad groups can be grouped together for a single test. Create testing categories of similarly behaving ad groups where ad copy is also similar.

In the example below, this eCommerce advertiser is able to use AdBasis Custom Parameters to dynamically insert varying ad copy for individual elements within the ad itself.

Think of the highlighted portions below as individual feeds: each value in a feed is mapped to specific campaigns or ad groups. This lets the advertiser test across multiple audiences and multiple product categories.

It is not uncommon to have hundreds of thousands of ad groups in a single ad test.
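As a rough sketch of the idea (the template, parameter names, and mapping below are hypothetical illustrations, not the actual AdBasis feature), think of one ad template per testing category, with feed values expanded per ad group:

```python
# Sketch of the custom-parameter concept: one ad template per testing
# category, with placeholder values mapped to individual ad groups.
# Template, names, and values are made up for illustration.

TEMPLATE = "Shop {category} - {offer} | Free Shipping Over $50"

# Each ad group maps to the feed values inserted into the template.
PARAM_FEED = {
    "running-shoes": {"category": "Running Shoes", "offer": "20% Off"},
    "hiking-boots":  {"category": "Hiking Boots",  "offer": "New Arrivals"},
}

def render_ads(template, feed):
    """Expand one template into a distinct headline per ad group."""
    return {group: template.format(**values) for group, values in feed.items()}

for group, headline in render_ads(TEMPLATE, PARAM_FEED).items():
    print(group, "->", headline)
```

One template plus a value feed is what keeps a test with enormous ad-group counts manageable: you change the template once, not hundreds of thousands of ads.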

[Image: Testing Schedule]

4. Stagger Your Ad Copy Testing Schedule

You’ve defined your testing categories — nice work! But where do you begin? A good first step would be to understand the volume within each category and how the testing categories rank against each other. If possible, establish category tiers based on volume.

[Image: Testing Schedule]

In order to mitigate risk, decide how many categories, per tier, will receive a test during implementation. This ensures the account runs optimally during your tests. Never run a test in all high-priority tiers at once. If your tests all lose, your account performance will drop. As you can see in the example above, we would select one Tier 1, one Tier 2 and one Tier 3 category to implement during each test phase.
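The staggering above can be sketched in code. The tier contents here are invented for illustration; the point is simply to walk the tiers in parallel so each phase tests at most one category per tier:

```python
from itertools import zip_longest

# Sketch of a staggered schedule: testing categories grouped into volume
# tiers, with at most one category per tier tested in each phase.
# Tier contents are hypothetical.
TIERS = {
    1: ["brand", "best-sellers", "remarketing"],
    2: ["category-generic", "competitors"],
    3: ["long-tail", "display-prospecting"],
}

def build_phases(tiers):
    """Phase i tests the i-th category of each tier (skipping empty slots)."""
    columns = zip_longest(*tiers.values())
    return [[c for c in phase if c is not None] for phase in columns]

for i, phase in enumerate(build_phases(TIERS), 1):
    print(f"Phase {i}: {phase}")
```

Because no phase ever holds two Tier 1 categories at once, a round of losing variations can only dent a slice of your high-volume traffic, not all of it.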

5. Eliminate The Laggards As Early As Possible

This is easy to figure out, but hard to execute. If you've written a creative you love, it's tempting to let it run to the end of the test just to see how it performs. However, this isn't an AdWords best practice.

In an ideal testing world, we would wait until every variation has been proven a "Winner" or "Non-Winner". Unfortunately, we operate within the confines of reality. If you're halfway through your test duration and a bottom-performing variation is pulling down your primary KPI by more than 15% (or sits at a confidence level under 50%), eliminate that variation.

This is a case-by-case decision, so don't act rashly. Just remember that under-performing variations within a test can pull down your account's overall performance. Cutting these variations early speeds up your test results and makes them more reliable, because their traffic is reallocated to variations that have a real chance of winning.
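A minimal sketch of that mid-test check, assuming a CTR-style KPI and using the thresholds above; the click and impression figures are illustrative, and the confidence check is a standard two-proportion normal approximation rather than anything AdWords-specific:

```python
from math import sqrt, erf

def ctr_drop(var_clicks, var_impr, avg_ctr):
    """Relative shortfall of a variation's CTR versus the test average."""
    return 1 - (var_clicks / var_impr) / avg_ctr

def prob_beats(c_a, n_a, c_b, n_b):
    """Normal-approximation probability that variation A's true CTR
    exceeds variation B's (two-proportion comparison)."""
    p_a, p_b = c_a / n_a, c_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_a - p_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))

# A variation at 2.3% CTR against a 2.8% test average trails by roughly
# 18%, so under the 15% rule above it would be cut mid-test.
drop = ctr_drop(230, 10_000, 0.028)
print(f"relative drop: {drop:.1%}")
print(f"chance of beating the leader: {prob_beats(230, 10_000, 280, 10_000):.0%}")
```

Either trigger alone (a drop over 15%, or a win probability under 50%) is grounds to cut the variation; in practice a laggard usually trips both.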

6. Find Winners As Fast As Statistically Possible Then Invest in Them

You can think of all the ad units you'll test as a normal distribution: some will win and some will lose, some were born to sing the blues. Many ads will perform similarly. Because of this distribution, unless you eliminate the losers and invest in the winners, you won't realize a gain in your key metrics.

[Image: Testing Schedule]

Say your normal CTR is 3%. You run your test for two weeks at an average CTR of 2.8%, and you uncover a winner with a CTR of 3.2%. You will need to run that winner for at least two weeks to recover the cost of the test.

In other words, think of your losing ads as a “cost of testing” and the winners as the “ROI” from testing. The goal is to stop that cost as soon as possible and invest in your winner. Invest in winners as quickly and for as long as possible.
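The break-even arithmetic behind the example above can be written out explicitly. The weekly impression volume is an assumed figure; the CTRs are the ones from the example:

```python
# Break-even sketch: the CTR shortfall during the test is the "cost of
# testing"; the winner's lift is the "ROI" that repays it.
IMPR_PER_WEEK = 100_000   # assumed weekly impression volume
BASELINE_CTR = 0.030      # normal account CTR
TEST_CTR     = 0.028      # average CTR while the test ran
WINNER_CTR   = 0.032      # CTR of the uncovered winner
TEST_WEEKS   = 2

cost_clicks = (BASELINE_CTR - TEST_CTR) * IMPR_PER_WEEK * TEST_WEEKS
gain_per_week = (WINNER_CTR - BASELINE_CTR) * IMPR_PER_WEEK
weeks_to_recover = cost_clicks / gain_per_week

print(f"clicks forgone during the test: {cost_clicks:.0f}")
print(f"weeks of winner traffic to break even: {weeks_to_recover:.1f}")
```

With these numbers the winner must run for two weeks just to break even, which is why cutting the test short and shifting spend to the winner as early as possible matters so much: every extra week of testing adds to the cost side of the ledger.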

Have a question for us? We'd love to hear from you!
