Conversion lift tests are the gold standard for answering the question: "what impact are my ads having on my business?"
The conversions or purchases that you see reported in a platform like Facebook Ads Manager will give you some clue as to the answer to this question, but they won't tell you the full story.
Two significant problems with these numbers are:
- Limited by attribution windows
The conversions you see in Ads Manager will always be limited by the attribution window that you're viewing them through, which by default is 28-day click and 1-day view. This means that conversions are only assigned to your ads if they happen within 28 days of someone clicking your ad, or 1 day of seeing your ad. Any conversions which your ads cause, but fall outside of this window, won't be counted.
- Double counting
Platforms like Ads Manager aren't well equipped to handle complex user journeys. For example, imagine you're advertising on Facebook Ads and Google Ads, and a user clicks on your Facebook ad, then clicks on one of your Google ads, and then converts. Facebook Ads and Google Ads won't be aware of the click that the other platform received (Facebook won't know that the user went on to click a Google ad), so both will claim credit for the conversion. This is known as double counting, or duplicate conversions, and it can cause the sum of conversions reported by your advertising platforms to exceed the actual number of conversions your business recorded.
One solution which solves both of these problems is called conversion lift testing.
What is conversion lift testing?
Conversion lift testing works in a somewhat similar way to split testing: users are split into different audience cells, and each cell goes on to see a different variant of your ads.
In conversion lift testing though, we're not interested in seeing how different ad variants perform, rather we're interested in understanding what impact our ads actually have.
Instead of having different variants of our ads in different cells, we're going to have one cell where users are chosen to see our ads (the experiment cell), and one cell where users don't see our ads at all (the control cell).
Once we have this cell structure in place we can show our ads to the experiment cell for a period of time, usually somewhere between 1 to 3 months, and make sure that users in the control cell don't see any of our ads for this period.
When the conversion lift test has come to an end, we can compare conversion numbers between the two cells. The difference in conversions between the two cells (after adjusting for their relative sizes) is what's called the number of incremental conversions.
There are two possible outcomes to be aware of:
- High incrementality
If there are many incremental conversions, then your ads are having a large effect: they're leading people to convert who wouldn't have converted if they hadn't seen your ads.
- Low incrementality
If the number of incremental conversions is low, then this indicates that you're mostly showing ads to people who would've converted anyway, and that your ads aren't having much real impact.
If you divide the amount that you spent during the conversion lift test by your number of incremental conversions, you can create a metric called incremental cost per conversion. This is the truest answer you can get as to how much you're having to spend on advertising to generate a conversion.
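To make the arithmetic concrete, here's a small sketch with illustrative numbers. Because the control and experiment cells are usually different sizes (e.g. a 10/90 split), the comparison is between conversion rates, scaled up to the size of the experiment cell:

```python
def incremental_results(exp_users, exp_conversions, exp_spend,
                        ctrl_users, ctrl_conversions):
    # Conversion rate in each cell (cells can be different sizes,
    # so we compare rates rather than raw counts)
    exp_rate = exp_conversions / exp_users
    ctrl_rate = ctrl_conversions / ctrl_users
    # Incremental conversions: conversions in the experiment cell beyond
    # what the control cell's rate says would have happened anyway
    incremental = exp_conversions - ctrl_rate * exp_users
    lift = (exp_rate - ctrl_rate) / ctrl_rate  # relative lift
    cost_per_incremental = exp_spend / incremental
    return incremental, lift, cost_per_incremental

# Illustrative numbers only: a 90/10 split, $60,000 of spend
inc, lift, cpi = incremental_results(
    exp_users=900_000, exp_conversions=12_000, exp_spend=60_000,
    ctrl_users=100_000, ctrl_conversions=1_000)
# inc ≈ 3,000 incremental conversions, lift ≈ 33%, cpi ≈ $20
```

Here 9,000 of the experiment cell's 12,000 conversions would have happened anyway (matching the control cell's 1% rate), so the ads drove roughly 3,000 incremental conversions at an incremental cost per conversion of about $20.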
Creating control and experiment cells
A question that people sometimes ask when they first come across the idea of conversion lift testing is:
- "Why do I need to split users into control and experiment cells? Why can't I just compare the number of conversions from people that see my ads to the number of conversions from people who don't see my ads?"
To understand how to respond to the question above, we need to remind ourselves that Facebook is incredibly good at helping us to show ads to the people who are most likely to convert. If we show an ad to someone, it's because Facebook thinks they're at least somewhat likely to convert.
This introduces a bias. It means that people who are likely to convert on our ads are more likely to see our ads, and that people less likely to convert are less likely to see our ads. If we compare the groups of people who do and don't see our ads without running a conversion lift test, then we're comparing two very different groups against each other, and won't receive a fair result.
Conversion lift testing fixes this by creating the control and experiment cells in an unbiased manner. When you're running a conversion lift test, users are added to the control and experiment cells just before they see your ad.
Just before your ad is shown to them for the first time, a random number is generated. Depending on the random number, they're put into either your control cell or your experiment cell, and so respectively will either not see your ads (they'll see a different ad instead) or will be eligible to see your ads as normal.
Because there's no bias in terms of which users do and don't see your ads (the selection is handled randomly by a random number generator), comparisons between your control and experiment cells are fair.
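A minimal sketch of this kind of randomized assignment might look like the following. This is an illustration of the idea (a salted-hash approach, not Facebook's actual implementation): each user gets a stable, uniformly random bucket, and the bucket decides their cell.

```python
import hashlib

def assign_cell(user_id: str, control_fraction: float = 0.10,
                salt: str = "lift-test-1") -> str:
    # Hash the user id with a test-specific salt so assignment is random
    # across users but stable: the same user always lands in the same
    # cell for the duration of this test.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # uniform in [0, 1)
    return "control" if bucket < control_fraction else "experiment"
```

Because the hash is effectively random with respect to anything that predicts conversion, the two cells are statistically comparable, which is exactly what makes the eventual comparison fair.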
How do conversion lift tests help?
To understand exactly what the benefit is of running conversion lift tests, let's go back to the two issues with traditional Facebook attribution methods that we looked at near the start of this article. Firstly:
- Limited by attribution windows
The conversions you see in Ads Manager will always be limited by the attribution window that you're viewing them through, which by default is 28-day click and 1-day view...
To recap the issue, you might show an ad to someone who decides to convert a few days after seeing your ad. Intuitively it seems like your ad played some part in getting that person to convert, but because it happened outside of the 1-day view window for Facebook attribution, Facebook Ads won't recognize this conversion as having been caused by your ads.
Conversion lift tests aren't limited to an attribution window. You can run them for as long or as short as you like. Any conversions that happen within either your control or experiment cells during the conversion lift test will be counted, regardless of how long it's been since the converter clicked or saw one of your ads.
This is a big plus for conversion lift tests, because it recognizes that conversions don't happen instantly. It can take time for someone to convert, particularly if you're advertising high-consideration products like holidays abroad or expensive electronics.
By not setting a time limit on conversions, conversion lift tests are able to get a better picture of exactly what impact your ads are having.
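To illustrate the difference, here's a small sketch (with made-up journey timings) comparing window-based counting to the count-everything approach of a lift test:

```python
from datetime import datetime, timedelta

# Hypothetical (ad exposure time, conversion time) pairs for three users
journeys = [
    (datetime(2021, 3, 1), datetime(2021, 3, 1, 12)),  # converts same day
    (datetime(2021, 3, 1), datetime(2021, 3, 4)),      # converts 3 days later
    (datetime(2021, 3, 1), datetime(2021, 4, 15)),     # converts 45 days later
]

def attributed(journeys, window_days):
    # Window-based attribution only counts conversions that land
    # inside the window after the ad exposure
    return sum(1 for seen, converted in journeys
               if converted - seen <= timedelta(days=window_days))

attributed(journeys, 1)   # 1-day view window counts only the first user
attributed(journeys, 28)  # 28-day click window misses the 45-day converter
len(journeys)             # a lift test counts every conversion in the cell
```

The lift test sees all three conversions during the test period, whereas windowed attribution discards the slower journeys entirely.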
Now let's look at our next problem:
- Double counting
Platforms like Ads Manager aren't well equipped to handle complex user journeys. Conversions coming from journeys that involve two paid touchpoints (e.g. Facebook and Google Ads) will be double counted.
The issue here, as we looked at earlier, is that Ads Manager will by default take credit for conversions that involve other channels. Facebook has no way of knowing if a user that clicked your ad has also clicked on your ads from other channels (e.g. Google Ads) before converting, and vice versa. This can lead to multiple channels claiming credit for a single conversion (double counting), which can overestimate the impact of your ads.
Conversion lift testing fixes this by controlling for a single variable: whether users are eligible to see your Facebook ads or not. In doing so, it's able to answer the question of what incremental impact your Facebook ads are having on top of the rest of your advertising.
Conversion lift testing doesn't run cross-channel, i.e. it can't tell you the combined impact of all of your ads together. By controlling for whether users can see your Facebook ads, though, it can tell you whether it's worth running Facebook Ads in addition to your other channels.
How does this help us understand complex user journeys? Well, conversion lift testing isn't going to tell us the impact that each channel is having. What it will do though is tell us the impact that Facebook Ads is having, and whether it's actually causing conversions or just taking credit for conversions which would've happened anyway via other channels.
How do you run conversion lift tests?
As we saw with split testing, the easiest way to run conversion lift tests is via Test & Learn. There's a conversion lift test that you're able to set up there, which will run across the whole of your Facebook Ads account.
Some things to note about this are that:
- Account wide testing
Running this will help you understand the impact that all of your campaigns are having together. It won't let you break down the results by campaign to understand which campaigns are having the most individual impact.
- 10/90 split
Running a conversion lift test via Test & Learn will automatically set your control/experiment split at 10/90. That is, 10% of the people you would normally have shown ads to will be put into your control cell, and so won't see any of your Facebook ads.
Creating in the API
As with split testing though, there is another way to set up conversion lift tests: through the Facebook Ads API.
Creating conversion lift tests through the API is a slightly more complex procedure, but it comes with significant benefits:
- Create at the campaign level
You can create conversion lift tests that run on a single campaign, or on a select group of campaigns. This is vital for understanding the impact that individual campaigns, or particular groups of campaigns, are having.
- Variable splits
You can alter the control/experiment split from the default 10/90. If you're worried about losing volume while running your conversion lift test, you can set it to be more imbalanced, e.g. 5/95. If you don't have much data coming through, though, you can set it to be more equal, e.g. 50/50. The benefit of increasing the size of your control is that it makes your results more reliable. If only 10% of your audience is in your control group and you only get a couple of conversions over the duration of your conversion lift test, then your results are going to be extremely unreliable. The smaller the volume of conversion data during the test, the larger your control needs to be.
- Advanced reporting
When you set up a conversion lift test through the API, you get better reporting. A report for your lift test will still be generated in the Test & Learn interface, and it will include many extra details, such as the size of your cells and results split by demographics.
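The reliability point about variable splits can be made concrete with a little statistics. As a rough sketch (assuming, for simplicity, that both cells convert at roughly the same underlying rate), the standard error of the measured difference in conversion rates shrinks as the control cell grows:

```python
import math

def lift_standard_error(total_users, control_fraction, true_rate):
    # Standard error of the difference in conversion rates between the
    # experiment and control cells, assuming both convert at true_rate.
    # Smaller standard error = a more reliable lift estimate.
    n_ctrl = total_users * control_fraction
    n_exp = total_users * (1 - control_fraction)
    var = true_rate * (1 - true_rate) * (1 / n_ctrl + 1 / n_exp)
    return math.sqrt(var)

# With a low conversion rate, a 10% control gives a noticeably noisier
# read than a 50/50 split over the same audience (illustrative numbers)
se_10 = lift_standard_error(100_000, 0.10, 0.001)
se_50 = lift_standard_error(100_000, 0.50, 0.001)
```

The noise is dominated by the smaller cell (the `1 / n_ctrl` term), which is why a sparse-conversion test benefits from a larger control, at the cost of showing ads to fewer people during the test.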
To set up a conversion lift test this way, you'll want to have some existing familiarity with APIs. If you haven't used the Facebook API before, I'd recommend checking out Facebook's guide to using the API. If you're already familiar with the Facebook API, then you can head straight to the page on setting up conversion lift tests.
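As a rough sketch of what the API setup involves: Facebook's Marketing API exposes lift studies through an ad studies endpoint, where you define your cells and their split percentages. The field names below follow Facebook's ad study documentation at the time of writing, but they may change between API versions, so treat this as an illustration and check the current docs before using it:

```python
import json

def build_lift_study(name, start_time, end_time, campaign_id,
                     control_percentage=10):
    # Assumed field names from Facebook's ad study docs ("type", "cells",
    # "treatment_percentage", "control_percentage") -- verify against the
    # API version you're targeting before relying on them.
    return {
        "name": name,
        "type": "LIFT",
        "start_time": start_time,  # unix timestamps
        "end_time": end_time,
        "cells": json.dumps([{
            "name": "test-cell",
            "treatment_percentage": 100 - control_percentage,
            "control_percentage": control_percentage,
            "campaigns": ["<CAMPAIGN_ID>"],  # placeholder, not a real ID
        }]),
    }

payload = build_lift_study("Q2 lift test", 1625097600, 1632960000,
                           "<CAMPAIGN_ID>", control_percentage=10)
# This payload would then be POSTed to the ad_studies edge of your
# business on the Graph API, authenticated with your access token.
```

Note how the two API benefits discussed above show up directly in the payload: the campaign list scopes the test to specific campaigns, and `control_percentage` lets you move away from the default 10/90 split.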