Testing Inspiration

In the last few articles, we looked at the different types of tests you can run in Facebook Ads. If you haven't read those yet, I'd recommend checking them out before continuing with this one.

In this article, we're going to walk through two examples of the sorts of tests you might run: one using a split test and one using a conversion lift test.

Split testing campaign objectives

When creating Facebook campaigns, you're given a number of different campaign objectives to choose from, depending on your goal. The trouble is, even if you know what goal you want to achieve, it's not always clear what your objective should be.

For example, let's say you're running an ecommerce store. Naturally, you'll gravitate toward the conversions objective, as your goal is likely going to be to drive sales.

This is what advertisers call a lower-funnel approach. Using the conversions objective will ensure your ads are shown to people most likely to already be interested in your products, and so these users are said to be lower down in the sales funnel.

Lower-funnel approaches can provide the best immediate ROIs, as they target people with a high likelihood of converting, but they're often difficult to scale. This is because there is a limited pool of people who are likely to be already interested in your products, and running a conversions campaign isn't going to bring anyone into that pool.

One way of scaling your activity might be by running a traffic campaign in addition to your conversions campaign. Traffic campaigns will bring in people who aren't as likely to immediately convert, but will allow you to bring many more users to your site.
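As an aside, if you build campaigns through the Marketing API rather than Ads Manager, the "Traffic" objective in the UI corresponds (as far as I'm aware) to the LINK_CLICKS objective in the API. Here's a minimal sketch of creating such a campaign, using [AD_ACCOUNT_ID] and [ACCESS_TOKEN] as placeholders for your own values and a purely illustrative campaign name:

curl \
  -F 'name=Traffic - Prospecting' \
  -F 'objective=LINK_CLICKS' \
  -F 'status=PAUSED' \
  -F 'access_token=[ACCESS_TOKEN]' \
  https://graph.facebook.com/v4.0/act_[AD_ACCOUNT_ID]/campaigns

The campaign is created paused here so you can add ad sets and ads to it before it spends anything.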

You could test the benefit of running traffic campaigns by using a split test with two cells. Note that you don't want to test a traffic campaign directly against a conversion campaign, as a conversion campaign will always perform better in terms of conversion metrics. Rather, you want to use split testing to test the benefit of having a traffic campaign in addition to your regular campaigns.

With this in mind, you can create your split test to feature two cells, each of which contains a copy of your existing account, but where one cell also contains your traffic campaign.

Splitting traffic between these two cells and running them against each other will allow you to understand the impact of having an additional traffic campaign.

Because we're not interested in the immediate impact of the traffic campaign, we shouldn't measure the results of the test by looking at the traffic campaign itself. Instead, we need to look at the results at cell level and decide the test based on how each cell performs overall.

If the cell with the traffic campaign performs better than the cell without it, that tells us we should conclude the test and continue running the traffic campaign.

The benefit of measuring in this way is that it allows us to measure the impact that the traffic ads have on the rest of the campaigns in their cell.

One obvious impact they might have is that they'll increase the size of our retargeting pools. If we have retargeting campaigns in our cells, then the retargeting campaign in the cell with the traffic campaign should benefit from the increased audience size.

How do we set this test up?

To set up this test, we want to create two cells, each with its own copy of all campaigns in the account. One of the cells (the experiment cell) will also contain the traffic campaign which we want to test the impact of.

To make the cells, we'll first duplicate all our existing campaigns and rename the duplicates so that their names end in "- Experiment". For example, if we only have one existing campaign called Prospecting, we want to end up with that campaign plus another called Prospecting - Experiment. We then create the traffic campaign and call it whatever we like.
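If you'd rather not duplicate everything by hand in Ads Manager, the same thing can be done through the API. This is a minimal sketch, assuming your API version supports the campaign /copies edge, with [CAMPAIGN_ID], [COPIED_CAMPAIGN_ID] and [ACCESS_TOKEN] as placeholders:

# Deep-copy an existing campaign (its ad sets and ads come with it), paused so it doesn't spend yet
curl \
  -F 'deep_copy=true' \
  -F 'status_option=PAUSED' \
  -F 'access_token=[ACCESS_TOKEN]' \
  https://graph.facebook.com/v4.0/[CAMPAIGN_ID]/copies

# Rename the copy returned by the call above, appending "- Experiment"
curl \
  -F 'name=Prospecting - Experiment' \
  -F 'access_token=[ACCESS_TOKEN]' \
  https://graph.facebook.com/v4.0/[COPIED_CAMPAIGN_ID]

Repeat this for each campaign in the account.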

Because we want to test cells with multiple campaigns against each other, we'll want to create the split test via the API.

The correct call for this will be:

curl \
  -F 'name="Traffic Study"' \
  -F 'description="Testing the impact of adding a traffic campaign to our account"' \
  -F 'start_time=[insert start time]' \
  -F 'end_time=[insert end time]' \
  -F 'type=SPLIT_TEST' \
  -F 'cells=[{name:"Control", treatment_percentage:50, campaigns:[control campaign IDs]}, {name:"Experiment", treatment_percentage:50, campaigns:[experiment campaign + traffic campaign IDs]}]' \
  -F 'access_token=[ACCESS_TOKEN]' \
  https://graph.facebook.com/v4.0/[BUSINESS_ID]/ad_studies

The parts in square brackets are the ones you'll have to fill in yourself. Note that the campaign IDs should be comma-separated lists (e.g. 100, 101, 102...) and that the number of experiment IDs should be one greater than the number of control IDs (as the experiment cell also includes the traffic campaign).
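To make that concrete, here's what the cells parameter might look like with purely hypothetical IDs: two control campaigns (100 and 101), their two experiment copies (200 and 201), and the new traffic campaign (202).

-F 'cells=[{name:"Control", treatment_percentage:50, campaigns:[100, 101]}, {name:"Experiment", treatment_percentage:50, campaigns:[200, 201, 202]}]'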

Once you've set up the test, you'll want to run it for some time. Exactly how long will depend on how much volume your Facebook account is generating, but 4 weeks is a relatively safe bet.

The above is one test we can run using split testing. The next section looks at an example test we could set up using a conversion lift test.

Using conversion lift to understand the impact of prospecting

To recap, prospecting is the act of showing ads to people who haven't interacted with your brand before (cold traffic). It stands in contrast to retargeting, where ads are shown to those who have interacted with your brand, perhaps by visiting your site.

Because retargeting involves showing ads to people who've already expressed some level of interest in your brand, it often produces better returns. This sometimes convinces advertisers not to run prospecting activity on Facebook at all, and just to run retargeting.

What this line of thinking fails to capture is that retargeting is simply good at producing last touch returns. If you use a last touch attribution model which gives all conversion credit to the last ad a user saw before converting, as Facebook does by default, then retargeting will always come out looking best.

If you're retargeting people who've been to your site, last touch attribution models don't give any credit to the prospecting campaigns which brought those users to your site in the first place, making them look inefficient.

One way of measuring the effectiveness of your prospecting campaigns is to run a conversion lift test on them. For simplicity, let's assume that you only have one prospecting campaign.

We'll run a conversion lift test on that single prospecting campaign. To get accurate results as quickly as possible, we'll run a 50/50 split, meaning that 50% of users targeted by your prospecting campaign will see your ads, and 50% will be prevented from seeing them.

We can set this lift test up in the API using the following call:

curl \
  -F 'name="Prospecting Lift Test"' \
  -F 'description="Test to understand the impact of prospecting"' \
  -F 'start_time=[insert start time]' \
  -F 'end_time=[insert end time]' \
  -F 'cooldown_start_time=[insert start time]' \
  -F 'observation_end_time=[insert end time]' \
  -F 'type=LIFT' \
  -F 'cells=[{name:"Prospecting Campaign", description:"Group for people who will see our prospecting campaign", treatment_percentage:50, control_percentage:50, campaigns:[[PROSPECTING_CAMPAIGN_ID]]}]' \
  -F 'objectives=[{name:"[OBJECTIVE_NAME]", is_primary:true, type:"SALES", adspixels:[{id:[FB_PIXEL_ID], event_names:["[fb_pixel_purchase]"]}]}]' \
  -F 'access_token=[ACCESS_TOKEN]' \
  https://graph.facebook.com/v4.0/[BUSINESS_ID]/ad_studies

The sections in square brackets above are the parts you should replace (including the brackets themselves) with the appropriate values for your test. Note that if you have multiple prospecting campaigns, [PROSPECTING_CAMPAIGN_ID] should be a comma-separated list of campaign IDs.
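For example, with two hypothetical prospecting campaigns whose IDs are 300 and 301, the cells parameter would become:

-F 'cells=[{name:"Prospecting Campaigns", description:"Group for people who will see our prospecting campaigns", treatment_percentage:50, control_percentage:50, campaigns:[300, 301]}]'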

cooldown_start_time and observation_end_time should be set to your start and end times respectively, unless you're already familiar with lift testing and confident changing these values.
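For reference, I tend to pass these time fields as Unix timestamps (the Graph API generally accepts these for time fields; treat the exact format as something to double-check for your API version). A four-week lift test starting on 1st January 2020 would look something like:

-F 'start_time=1577836800'           # 00:00 UTC, 1st January 2020
-F 'end_time=1580256000'             # four weeks later, 29th January 2020
-F 'cooldown_start_time=1577836800'  # same as start_time
-F 'observation_end_time=1580256000' # same as end_time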

Once you have your lift test set up, you'll be able to view its report in Test & Learn. Numbers won't populate in the report though until you have 100 conversions (as defined by the pixel objective selected above) across your control and experiment cells. This is to ensure some level of statistical significance is present before you can view your results.

There's no fixed amount of time to run a conversion lift study for. One idea is to run it until you have a greater than 99% confidence that your campaign is causing conversion lift, i.e. users who are exposed to your prospecting campaign are more likely to convert than those who aren't. If you don't see any significant results after several weeks though, then feel free to cut the lift test short.

If you've set the lift test up as outlined above, then 50% of your target audience will be prevented from seeing your ads. As this is a large group of people who you potentially want to reach, cutting the test short if you don't see results will stop you from missing out on sales to this group.

That's all on testing

If you want to be notified the next time I write something, leave your email here. No spam. No sales pitches. Just good advertising stuff every couple of weeks.
