Conversion lift tests are the gold standard for answering the question: "what impact are my ads having on my business?"
The conversions or purchases that you see reported in a platform like Facebook Ads Manager will give you some indication of the answer, but they won't tell you the full story.
Two significant problems with these numbers are:

1. Conversions that happen outside the attribution window aren't counted, even when your ads played a part in causing them.
2. Ads Manager will by default take credit for conversions that also involved other channels, so multiple platforms can each claim the same conversion.
One solution which solves both of these problems is called conversion lift testing.
Conversion lift testing works in a somewhat similar way to split testing: users are split into different audience cells, and each cell goes on to see a different variant of your ads.
In conversion lift testing though, we're not interested in seeing how different ad variants perform, rather we're interested in understanding what impact our ads actually have.
Instead of having different variants of our ads in different cells, we're going to have one cell where users are chosen to see our ads (the experiment cell), and one cell where users don't see our ads at all (the control cell).
Once we have this cell structure in place we can show our ads to the experiment cell for a period of time, usually somewhere between 1 to 3 months, and make sure that users in the control cell don't see any of our ads for this period.
When the conversion lift test has come to an end, we can compare conversion numbers between the two cells. The difference in conversions between the two cells is what's called the number of incremental conversions.
There are two possible outcomes to be aware of:

1. The experiment cell generates noticeably more conversions than the control cell. The difference between the two is the number of conversions your ads actually caused.
2. The experiment cell generates a similar number of conversions to the control cell (or fewer). In this case your ads aren't driving incremental conversions, and the conversions reported in Ads Manager would most likely have happened anyway.
If you divide the amount that you spent during the conversion lift test by your number of incremental conversions, you can create a metric called incremental cost per conversion. This is the truest answer you can get as to how much you're having to spend on advertising to generate a conversion.
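The arithmetic above can be sketched in a few lines. The numbers here are hypothetical, and the calculation assumes your two cells are the same size (if they aren't, normalize the control cell's conversions first):

```python
# Hypothetical results from a conversion lift test.
# Assumes equal-sized control and experiment cells.
control_conversions = 1_000      # conversions in the control cell
experiment_conversions = 1_250   # conversions in the experiment cell
ad_spend = 5_000.00              # spend during the test period

# Incremental conversions: the extra conversions your ads caused.
incremental_conversions = experiment_conversions - control_conversions

# Incremental cost per conversion: what you actually paid per
# conversion that wouldn't have happened without your ads.
incremental_cpa = ad_spend / incremental_conversions

print(incremental_conversions)       # 250
print(round(incremental_cpa, 2))     # 20.0
```

Note that the incremental cost per conversion here ($20) can be much higher than the cost per conversion Ads Manager reports, because Ads Manager divides spend by all attributed conversions, not just the incremental ones.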
A question that people sometimes ask when they first come across the idea of conversion lift testing is: "Why do I need a test at all? Can't I just compare the people who saw my ads against the people who didn't?"
To understand how to respond to the question above, we need to remind ourselves that Facebook is incredibly good at helping us to show ads to the people who are most likely to convert. If we show an ad to someone, it's because Facebook thinks they're at least somewhat likely to convert.
This introduces a bias. It means that people who are likely to convert on our ads are more likely to see our ads, and that people less likely to convert are less likely to see our ads. If we compare the groups of people who do and don't see our ads without running a conversion lift test, then we're comparing two very different groups against each other, and won't receive a fair result.
Conversion lift testing fixes this by creating the control and experiment cells in an unbiased manner. When you're running a conversion lift test, users are added to control and experiment cells just before they see your ad.
Just before your ad is shown to them for the first time, a random number is generated. Depending on the random number, they're put into either your control cell or your experiment cell, and so respectively will either not see your ads (they'll see a different ad instead) or will be eligible to see your ads as normal.
Because there's no bias in terms of which users do and don't see your ads (the selection is handled randomly by a random number generator), comparisons between your control and experiment cells are fair.
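The assignment mechanism described above can be sketched as follows. This is a simplified illustration, not Facebook's actual implementation; the function name and the 10% holdout fraction are assumptions for the example. Seeding the draw with the user's ID keeps the assignment stable, so a user lands in the same cell on every subsequent impression:

```python
import random

def assign_cell(user_id: str, holdout_fraction: float = 0.1,
                salt: str = "lift-test-1") -> str:
    """Assign a user to a cell just before they would first see an ad.

    Seeding the random draw with the user ID makes the assignment
    deterministic per user, so repeat impressions don't reshuffle them.
    """
    draw = random.Random(f"{salt}:{user_id}").random()
    return "control" if draw < holdout_fraction else "experiment"

# Control-cell users are served a different ad instead; experiment-cell
# users are eligible to see your ads as normal.
cell = assign_cell("user-42")
```

Because assignment happens only at the moment a user would have seen an ad, both cells are drawn from the same pool of people Facebook considered likely to convert, which is exactly what makes the comparison fair.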
To understand exactly what the benefit is of running conversion lift tests, let's go back to the two issues with traditional Facebook attribution methods that we looked at near the start of this article. Firstly: conversions that happen outside the attribution window aren't counted.
To recap the issue, you might show an ad to someone who decides to convert a few days after seeing your ad. Intuitively it seems like your ad played some part in getting that person to convert, but because it happened outside of the 1-day view window for Facebook attribution, Facebook Ads won't recognize this conversion as having been caused by your ads.
Conversion lift tests aren't limited to an attribution window. You can run them for as long or as short as you like. Any conversions that happen within either your control or experiment cells during the conversion lift test will be counted, regardless of how long it's been since the converter clicked or saw one of your ads.
This is a big plus for conversion lift tests, because it recognizes that conversions don't happen instantly. It can take time for someone to convert, particularly if you're advertising high-consideration products like holidays abroad or expensive electronics.
By not setting a time limit on conversions, conversion lift tests are able to get a better picture of exactly what impact your ads are having.
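To make the difference concrete, here's a minimal sketch of how a lift test counts conversions. The dates and events are invented for illustration; the only rule is whether the conversion falls inside the test window, with no view or click cutoff applied:

```python
from datetime import date

# Hypothetical conversion events for experiment-cell users.
# "seen_ad" is when the user last saw an ad before converting.
events = [
    {"converted": date(2024, 3, 5),  "seen_ad": date(2024, 3, 4)},
    # 27 days after the ad view -- far outside a 1-day view window,
    # but still counted by a lift test:
    {"converted": date(2024, 3, 28), "seen_ad": date(2024, 3, 1)},
    # Outside the test period entirely, so not counted:
    {"converted": date(2024, 4, 20), "seen_ad": date(2024, 3, 10)},
]

def conversions_during_test(events, start, end):
    """Count every conversion inside the test window, regardless of
    how long it's been since the converter saw or clicked an ad."""
    return sum(1 for e in events if start <= e["converted"] <= end)

total = conversions_during_test(events, date(2024, 3, 1), date(2024, 3, 31))
```

Under a 1-day view attribution window, only the first event would be credited; the lift test counts both of the first two.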
Now let's look at our next problem: multiple channels claiming credit for the same conversion.
The issue here, as we looked at earlier, is that Ads Manager will by default take credit for conversions that involve other channels. Facebook has no way of knowing if a user that clicked your ad has also clicked on your ads from other channels (e.g. Google Ads) before converting, and vice versa. This can lead to multiple channels claiming credit for a single conversion (double counting), which can overestimate the impact of your ads.
Conversion lift testing fixes this by controlling for a single variable: whether or not users are eligible to see your Facebook ads. In doing so, it's able to answer the question of what incremental impact your Facebook ads are having on top of the rest of your advertising.
Conversion lift testing doesn't run cross-channel, i.e. it can't tell you the combined impact of all of your ads together. By controlling for whether users can see your Facebook ads or not, though, it is able to tell you whether it's worth running Facebook Ads in addition to your other channels.
How does this help us understand complex user journeys? Well, conversion lift testing isn't going to tell us the impact that each channel is having. What it will do though is tell us the impact that Facebook Ads is having, and whether it's actually causing conversions or just taking credit for conversions which would've happened anyway via other channels.
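A small numeric sketch shows why per-platform reporting overstates impact. The conversion IDs here are hypothetical; each platform claims every conversion its own attribution model touches, so conversions involving both channels get counted twice:

```python
# Hypothetical conversion IDs each platform claims under its own
# attribution model. c3 and c4 involved ads on both channels.
facebook_claims = {"c1", "c2", "c3", "c4"}
google_claims = {"c3", "c4", "c5"}

total_claimed = len(facebook_claims) + len(google_claims)    # 7
actual_conversions = len(facebook_claims | google_claims)    # 5
double_counted = total_claimed - actual_conversions          # 2
```

Summing each platform's reported conversions gives 7, when only 5 conversions actually happened. A lift test sidesteps this entirely: it never asks which channel "deserves" c3 and c4, only how many conversions occur with and without Facebook ads in the mix.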
As we saw with split testing, the easiest way to run conversion lift tests is via Test & Learn. There's a test you can set up there for measuring conversion lift, which will allow you to run a single conversion lift test across the whole of your Facebook Ads account.
Some things to note about this are that:
As with split testing though, there is another way to set up conversion lift tests: through the Facebook Ads API.
Creating conversion lift tests through the API is a slightly more complex procedure, but it does come with significant benefits. These are that:
To set up a conversion lift test this way, you'll want to have some existing familiarity with APIs. If you haven't used the Facebook API before, I'd recommend checking out Facebook's guide to using the API. If you're already familiar with the Facebook API, then you can head straight to the page on setting up conversion lift tests.
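As a rough sketch of what an API-based setup involves, the function below builds a request body for creating a lift study. In the Marketing API, lift tests are created as "ad study" objects; the specific field names here ("type", "cells", "treatment_percentage", and so on) are assumptions based on my reading of the documentation, so verify them against Facebook's current API reference before sending anything:

```python
def build_lift_study_payload(name: str, ad_account_id: str,
                             start_ts: int, end_ts: int,
                             holdout_pct: int = 10) -> dict:
    """Build a candidate request body for creating a lift study.

    Field names are assumptions based on the Marketing API's ad-study
    docs -- check Facebook's current reference before use. Times are
    Unix timestamps; holdout_pct is the share of users held back as
    the control cell.
    """
    return {
        "name": name,
        "type": "LIFT",
        "start_time": start_ts,
        "end_time": end_ts,
        # A single cell covering the whole ad account, with part of its
        # audience held out as the control group. Serialize this list
        # (e.g. with json.dumps) if sending as form data.
        "cells": [{
            "name": "Account-wide lift cell",
            "treatment_percentage": 100 - holdout_pct,
            "control_percentage": holdout_pct,
            "adaccounts": [ad_account_id],
        }],
    }

payload = build_lift_study_payload(
    "Q3 conversion lift test", "act_123", 1_719_792_000, 1_727_740_800
)
```

The payoff of the API route is exactly this kind of control: the holdout percentage and the scope of each cell become parameters you choose, rather than defaults picked by the UI.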