The Importance Of A/B Ad Testing


Whenever I’m asked what it is I do for a living I usually say “In simple terms, I write the ads on Google”. After all, that’s essentially what working in PPC comes down to – ads. Those beautifully crafted, four-line adverts whose main purpose is to grab the attention of the Googler and entice them into visiting your website and then making a purchase/completing a contact form/signing up to your mailing list or whatever else you may be counting as a conversion.

So if ads are the essence of PPC, clearly it’s important to ensure that the ad copy you use is the very best, right? Yes, but there are ways and means of achieving this and it’s your job to ensure that it works and works well… which brings us nicely to A/B ad testing.

So how do you carry out A/B ad testing, and how do you ensure that the test is as fair as possible? In part 1 I’ll be discussing how to set a test up and what to test, and in part 2 I’ll build on that and explain how and when to analyse the results of your test.

Setting up A/B testing

In simple terms, A/B ad testing – or split testing – is a controlled experiment where two variations of an ad are run concurrently over a certain period of time in order to determine which version delivers the best results – super simple stuff.

Setting up A/B ad testing is a relatively simple process: the first step is to select the ad group in which you want to run the test and then within that ad group you need to create two ads which contain different copy.  But what copy should you test?  I would always recommend that when testing ad copy you should generally focus on one particular aspect of the ad in order to ensure that the test is as fair as possible and the results are easily understood.  For example, you may have two near identical ads, but each containing a different call to action e.g. “buy now” or “shop now”.

The reason why I recommend testing just one aspect of the ad is because if you were to run two completely different ads in the test, you would have no way of determining exactly what it is about each ad which is causing it to perform the way that it is. By isolating one aspect and running a test on its variations, it becomes much easier to determine what is causing the results which you see in terms of click through rate, conversion rate and so forth.
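If it helps to picture what a properly isolated test pair looks like, here is a minimal sketch in plain Python (with invented ad copy): the two variants share every element except the call to action, and a quick check confirms that exactly one field differs before the test goes live.

```python
# Two ad variants for an A/B test: identical in every respect except the
# call to action. The copy below is purely illustrative.
variant_a = {
    "headline_1": "Handmade Leather Bags",
    "headline_2": "Free UK Delivery",
    "description": "Crafted in Britain from full-grain leather.",
    "call_to_action": "Buy now",
    "display_url": "example.com/bags",
}

# Variant B copies A exactly and changes only the call to action.
variant_b = {**variant_a, "call_to_action": "Shop now"}

# Sanity check: exactly one element should differ, so that any difference
# in performance can be attributed to that change alone.
changed = [field for field in variant_a if variant_a[field] != variant_b[field]]
print("Fields that differ:", changed)
assert changed == ["call_to_action"], "More than one element changed – no longer a clean A/B test"
```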

One thing which I would suggest is that you only run two variations of the same ad at the same time, rather than the four, five or sometimes even six versions we see in some accounts. This is because it can take a long time to collect sufficient results, and if an ad is underperforming, you want to be able to identify that as soon as possible.

Ad rotation

Finally, before you begin your test, make sure that within the campaign settings you set ad rotation to rotate indefinitely – that is, ‘Do not optimise’, so that lower-performing ads are shown as evenly as possible alongside higher-performing ads rather than Google favouring whichever ad it predicts will win. This will ensure that the ads are shown evenly, meaning that the test results will be as fair as possible.
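If you manage campaigns through the Google Ads API rather than the web interface, the same setting can be changed programmatically. The sketch below uses the official google-ads Python client to switch a campaign’s ad serving optimisation status to rotate indefinitely; treat it as an illustration only, since field and enum names can vary between API versions, and the customer_id and campaign_id values are placeholders you would supply yourself.

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

# Placeholder identifiers – substitute your own account and campaign IDs.
customer_id = "1234567890"
campaign_id = "9876543210"

# Credentials are assumed to live in the standard google-ads.yaml file.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
campaign_service = client.get_service("CampaignService")

# Build an update operation that switches the campaign to even rotation
# ("Do not optimise: rotate ads indefinitely" in the web interface).
operation = client.get_type("CampaignOperation")
campaign = operation.update
campaign.resource_name = campaign_service.campaign_path(customer_id, campaign_id)
campaign.ad_serving_optimization_status = (
    client.enums.AdServingOptimizationStatusEnum.ROTATE_INDEFINITELY
)
client.copy_from(operation.update_mask, protobuf_helpers.field_mask(None, campaign._pb))

response = campaign_service.mutate_campaigns(customer_id=customer_id, operations=[operation])
print("Updated:", response.results[0].resource_name)
```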

Of course, it isn’t just the call to action that you can test, but every single aspect of the ad, from the headline to the USP to the display URL – it doesn’t even have to be the ad copy that you test. Another great test is to run two ads which have exactly the same copy but lead to different landing pages on the site. This is a great way of assessing the effectiveness of various pages on your site. Get creative, think of as many different ways you can test the ads, and then get experimenting!

In the first part of this series I discussed how to set up A/B ad testing and what exactly you should be testing. Moving on from that, I’ll explain why we run A/B tests, how long they should run for, and how to determine when you have enough results to decide which ad works best.

Yes people, I’m talking about everybody’s favourite topic, statistics!

A/B ad testing – what is it?

Just in case you forgot, I’d like to remind you of what A/B ad testing is again (and no, it’s not because I’m a lazy blogger trying to bump up my word count – it’s for your benefit!)

In simple terms, A/B ad testing, or split testing, is a controlled experiment where two variations of an ad are run concurrently over a certain period of time in order to determine which version delivers the best results.

The important point to note here is that the ads are running concurrently (and for the same period of time), the reason being to ensure that the test is run as fairly as possible.

Ad performance – what are the variables?

There are so many factors which can affect an ad’s performance that it is essential the ads are run at the same time, and for a long enough period, for these external factors to be taken into account.

Here is a list of just some of the variables which can result in variations in the performance of an ad:

  • The keyword which triggered the ad (each ad group contains numerous keywords, each of which will perform differently)
  • The search term which triggered a particular keyword within an ad group (each keyword has the potential to be triggered by numerous different search terms)
  • Whether the ad appeared on Google search or on the Search Partner Network
  • The average position that the ads appear in (ad position will vary each auction)
  • Where the ads actually appear on the page, depending on how the ads are displayed:
      • Position one does not always mean the ad has appeared at the top of the page above the organic results
      • If the only ads displayed are at the side of the results, position one is actually the top ad at the side
      • If the only ads shown are below the organic search results, position one becomes the top ad below the search results, i.e. at the bottom of the page
  • Time of day the ads are shown
  • Day of the week
  • Time of the month
  • The device on which the user is conducting their search (mobile, computer, tablet)
  • The location from which people are searching
  • Which competitors’ ads are being shown at the same time and how often they are being shown
  • Any external factors such as media coverage which may cause a spike in interest.

As mentioned previously, these are just some of the factors which can affect the test so it’s important that the results which are used to compare performance are statistically significant.

But ‘what do I class as statistically significant?’ I hear you ask…

The answer to that question will vary depending on your account. Some of our clients have massive budgets with thousands of clicks per day, while others have to make do with smaller budgets that receive fewer than 10 clicks per day. As a rule, I like each ad within the test to accrue at least 100 clicks, but if that happens within a couple of days because of the sheer volume of clicks, then we aren’t really taking into account all of the above variables, so in a situation like that I would suggest running the test for at least one week.
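For those who like to put a number on ‘statistically significant’, here is a minimal sketch of the kind of check you can run once the clicks are in. It compares the two click-through rates with a chi-squared test using scipy’s chi2_contingency; the click and impression figures are invented for illustration, and the usual 0.05 threshold is a convention rather than a rule.

```python
from scipy.stats import chi2_contingency

def ctr_significance(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return the p-value for the difference in CTR between two ads."""
    # 2x2 contingency table: clicks vs. impressions that did not result in a click.
    table = [
        [clicks_a, impressions_a - clicks_a],
        [clicks_b, impressions_b - clicks_b],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Illustrative figures only – substitute your own from the Google Ads interface.
p = ctr_significance(clicks_a=120, impressions_a=2400, clicks_b=80, impressions_b=2500)
print(f"p-value: {p:.3f}")
if p < 0.05:
    print("The difference in CTR is unlikely to be down to chance.")
else:
    print("Not enough evidence yet – keep the test running.")
```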

Gut instinct

In cases where the account is accruing only a few clicks a day, you may find that your test is running for a very long time while you wait for each ad to be clicked 100 times. So, in spite of everything I’ve said above about statistics, and variants, and external factors, sometimes it just comes down to gut instinct.

If you’ve been running the test for a month and each ad has had 40 clicks, with one having a click through rate of 4% and the other a click through rate of 8%, chances are you can make a reasonably educated guess as to which is performing better. As PPC experts we’re trained to work with statistics, but I am also a firm believer in sometimes trusting your gut instinct; of course, that is an individual choice and is down to you as the account manager to decide.
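If you want to sanity-check that gut call, the same chi-squared approach can be applied to the scenario above. The snippet below plugs in the example figures (40 clicks per ad at 4% and 8% CTR, which works out at roughly 1,000 and 500 impressions respectively); run it and you should find that even at these low click volumes the gap comfortably clears the conventional 0.05 threshold, so the gut and the statistics agree.

```python
from scipy.stats import chi2_contingency

# Ad A: 40 clicks at a 4% CTR  -> roughly 1,000 impressions.
# Ad B: 40 clicks at an 8% CTR -> roughly 500 impressions.
clicks_a, impressions_a = 40, 1000
clicks_b, impressions_b = 40, 500

table = [
    [clicks_a, impressions_a - clicks_a],
    [clicks_b, impressions_b - clicks_b],
]
_, p_value, _, _ = chi2_contingency(table)

print(f"p-value: {p_value:.4f}")
print("Significant at the 5% level." if p_value < 0.05 else "Keep waiting for more data.")
```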

Set goals and measure them

As to what you are measuring, again this will be down to you and your goals. Are you happy with a higher click through rate, which will lead to an increase in traffic to your site and an improvement in Quality Score? Or are you looking at the whole conversion process and judging each ad on the conversions it ultimately delivers? My advice would be that in most situations you should be looking at the effect an ad has on the whole process, because at the end of the day, in most situations conversions are the end goal!

So now you know how to set up and run your A/B ad tests and how to analyse the results, what are you waiting for? And don’t think that once you’ve run one test you should be happy with the results. Keep testing and testing, as there will always be some improvement that can be made to your ads.
