With all the different ways to run tests on Facebook Ads, how do you decide between them?
For tests where you're looking to produce the most reliable and scientific results, split testing is often your best option. Split testing relies on dividing your target audience into separate groups, and showing each variant that you're testing to only a single group.
The benefit of split testing is that it lets you isolate a single key difference between the ads that each group sees. Because there's only one difference, you can attribute any gap in performance between the groups to the variable you're testing.
For example, let's say that you're looking to test two different creative approaches, and want to understand which resonates best with your audience.
You could simply set up two ads in the same ad set, one for each creative, and see which performs best. The downside of this is that Facebook will show both ads to the same people. If one creative appears to perform better, you can't tell whether this is because that creative resonates with people on its own, or whether it's the combination of both creatives that's helping performance.
By running both ads simultaneously, you also rely on Facebook's ad rotation algorithms. These aim to help you by shifting spend towards the best ad, but spend is often split so unevenly between creatives that it's hardly a fair way of running a test. Facebook also tends to bias spend towards existing ads, since it already understands how well they perform, making it difficult to test new ads fairly.
Split testing solves these troubles by dividing your audience into two evenly sized groups (called cells), and showing each cell only one of the two ads. After running your ads amongst the two cells, you can use performance data to decide which creative to proceed with.
A crucial aspect of split testing is that the two cells are created in an unbiased manner; no types of users are more likely to be in one group than another. This is important, because if higher quality users tended to occupy one cell more than another, then this would bias your test results.
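Facebook handles this randomisation for you when it creates the cells, but as an illustration of the principle, here's a minimal sketch of one way an unbiased split can be produced: assign each user to a cell by hashing a stable user ID together with a test name, so assignment is effectively random with respect to user quality, yet stable for any given user. The function and test names here are purely illustrative, not part of any Facebook tooling.

```python
import hashlib

def assign_cell(user_id: str, test_name: str, n_cells: int = 2) -> int:
    """Deterministically assign a user to one of n_cells buckets.

    Hashing (test_name + user_id) gives a stable, effectively random
    assignment, so no "type" of user is more likely to land in one cell.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_cells

# Example: split users between a video cell (0) and a static-image cell (1)
print(assign_cell("user_12345", "creative_test_q3"))
```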
Split testing also solves some tricky issues to do with attribution: the question of which ads get credit for a user's conversion. Let's say you want to run a test to decide whether to run video ads or static images. If you put these all into the same ad set and let Facebook optimise for conversions, it will shift spend towards whichever ads are generating the most last-touch conversions; that is, it will bias delivery towards whichever ads users tend to see just before converting.
This might sound ideal, but bear in mind that users don't typically go straight from seeing an ad to converting. Users often need to see ads from a brand several times before deciding to convert.
Let's say a particular user watches a video ad for a brand but doesn't click on it, and later sees a static image ad which they do click (and convert) on.
Intuitively, it seems likely that the video ad played some role in getting the user to eventually convert. The static ad may just have been in the right place, at the right time, to capture the final click, without actually contributing much to the conversion.
Facebook Ads will give all of the credit to the static image ad, though, and shift spend towards it instead of the video ad. This is a problem, as it undervalues the role the video ad played in getting the user to convert. If, at the end of the test, we decided to pause our video ads on this basis, we could be making a costly error.
Instead of running this test by simply putting both the video and static image ads in the same ad set, let's look at running it as a split test. We divide our target audience into two (randomly chosen) cells, showing videos to one half and static images to the other.
Whenever a user in either cell converts, we know that it's entirely because of the type of ads being shown to that cell. If a user in the video cell converts, it's because they've seen our videos. They aren't eligible to see any static image ads, so there's no question of whether other ads deserve credit for the conversion.
When we conclude the split test and look back at our results, we have a very clean view of the data. We can see exactly how many conversions videos were responsible for, and exactly how many static images were responsible for.
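As a rough illustration of how you might compare the two cells once the test ends, here's a minimal sketch of a two-proportion z-test on each cell's conversions and reach. The figures and the function name are hypothetical; in practice you'd pull the real numbers from Ads Manager or the API, and you may prefer whatever significance test your team already uses.

```python
from statistics import NormalDist

def compare_cells(conversions_a: int, reach_a: int,
                  conversions_b: int, reach_b: int):
    """Two-proportion z-test: is cell A's conversion rate
    meaningfully different from cell B's?"""
    rate_a = conversions_a / reach_a
    rate_b = conversions_b / reach_b
    pooled = (conversions_a + conversions_b) / (reach_a + reach_b)
    std_err = (pooled * (1 - pooled) * (1 / reach_a + 1 / reach_b)) ** 0.5
    z = (rate_a - rate_b) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, p_value

# Hypothetical results: videos vs static images
video_rate, static_rate, p = compare_cells(480, 100_000, 410, 100_000)
print(f"video: {video_rate:.2%}, static: {static_rate:.2%}, p-value: {p:.3f}")
```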
One thing you might be thinking, having read the above, is that perhaps there's some beneficial effect to showing both videos and static images. Maybe the two work together in encouraging users to convert; the whole is greater than the sum of its parts.
If we wanted to test the combined effect of both videos and static images, we can modify our earlier split test. Instead of just having two cells (one for videos and one for static images) we could include a third cell, in which users will see both videos and static images. Instead of splitting our audience into halves, we now split it into thirds.
Once we set up and run the test, we'll collect three sets of results: one for users who've seen just videos, one for those who've seen just static images, and one for those who've seen both.
By comparing the three sets of results we can understand, in a clean and unbiased way, whether we should pause our videos, static images, or neither (if the combination of the two performs best).
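Extending the earlier sketch with hypothetical numbers, comparing the three cells might look something like this; the two-proportion test above can then be reused pairwise to check whether the gaps between cells are meaningful.

```python
# Hypothetical three-cell results: (conversions, reach) per cell
cells = {
    "video_only":  (480, 100_000),
    "static_only": (410, 100_000),
    "both":        (530, 100_000),
}

# Conversion rate per cell, best first
rates = {name: conv / reach for name, (conv, reach) in cells.items()}
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {rate:.2%}")
```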
Note that it's impossible to get these sorts of results without split testing. There's no way that, by running static images and videos in the same ad set, we could understand whether we should use just one of the two creative types, or if we were best off using both.
Fortunately, Facebook has made it fairly easy to set up most types of split test through its Test & Learn interface. Within Test & Learn you'll see a test option titled "Which campaign gives me a lower cost per result?". This allows you to choose two campaigns and split your audience evenly between them.
There are a couple of limitations to note:
If the limitations above don't pose any problems, then Test & Learn is the ideal way to set up split tests. If you want to get around any of these limitations, though, there is an alternative: setting up split tests through the API.
Using the API requires some technical familiarity. If you're new to the Facebook Ads API, I'd recommend checking out Facebook's guide to using the API. If you've used APIs before, you can head straight to the page on split testing.
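To give a flavour of what this involves, here's a rough sketch of creating a split test (an "ad study") by posting to the Marketing API's ad_studies edge with Python's requests library. The exact endpoint, field names, and API version are assumptions based on Facebook's split testing documentation and may change between versions, so treat this as an outline and check the current API reference before relying on it. The IDs and token are placeholders.

```python
import json
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
BUSINESS_ID = "YOUR_BUSINESS_ID"     # placeholder
API_VERSION = "v19.0"                # check the current Marketing API version

# Two cells, each holding one campaign and half of the audience
cells = [
    {"name": "Videos", "treatment_percentage": 50,
     "campaigns": ["VIDEO_CAMPAIGN_ID"]},
    {"name": "Static images", "treatment_percentage": 50,
     "campaigns": ["STATIC_CAMPAIGN_ID"]},
]

response = requests.post(
    f"https://graph.facebook.com/{API_VERSION}/{BUSINESS_ID}/ad_studies",
    data={
        "access_token": ACCESS_TOKEN,
        "name": "Video vs static image split test",
        "type": "SPLIT_TEST",
        "start_time": 1735689600,   # unix timestamps for the test window
        "end_time": 1736294400,
        "cells": json.dumps(cells),
    },
)
print(response.json())
```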