Within conversion campaigns on Facebook, you have the ability to select an optimisation event for each ad set. The optimisation event defines what Facebook will optimise towards.
If you set it to clicks, it'll show your ads to whoever is likely to bring in the lowest cost clicks; if you set it to purchases, it'll aim to bring in the lowest cost purchases.
So which event should you tell Facebook to optimise for? This might seem like a flippant question; surely you should just pick the event you actually care about.
There is some logic to this line of thought: if you want purchases, be up front with Facebook and tell it that you want purchases by setting your ad sets to optimise for this event.
The issue, though, is that Facebook, like any ads platform, needs data to learn from. While purchases might sit at the bottom of your funnel, and therefore be the most valuable event to you, you might not have enough of them for Facebook to learn from effectively. If your volume of purchases is too low, Facebook might not be able to build a good enough picture of what purchasers look like to optimise for them effectively.
In cases like this, there's often value in moving further up the funnel. By this I mean optimising towards upper funnel events, events that happen before a user purchases. In this example, this might be someone adding items to their basket, or clicking on the checkout button.
Because both of these events happen before a user has actually purchased, some users will drop out of the purchase flow after completing these events but before completing their purchase. This means that the number of people who've, say, added items to their basket will be higher than the number of purchasers.
The fact that there are more of these people means that if we choose to optimise for this event, Facebook would have more data to learn from. It would be able to get a much better idea of what a basket adder looks like than it could of what a purchaser looks like.
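To make that concrete: at the time of writing, Facebook's general guidance is that an ad set needs roughly 50 optimisation events within a 7-day window to exit the learning phase. The sketch below (Python, with made-up weekly volumes) shows how a low-volume purchase event can fall short of that threshold while events further up the funnel comfortably clear it.

```python
# Rough sketch: check which funnel events give an ad set enough weekly volume
# to learn from. The 50-events-per-week figure is Facebook's rough guidance
# for exiting the learning phase; the event volumes below are made up.

WEEKLY_EVENTS_NEEDED = 50  # rough guidance, per ad set, per 7 days

# Hypothetical weekly volumes for one ad set, from top to bottom of the funnel
weekly_events = {
    "LinkClick": 2400,
    "AddToCart": 180,
    "InitiateCheckout": 90,
    "Purchase": 25,
}

for event, volume in weekly_events.items():
    if volume >= WEEKLY_EVENTS_NEEDED:
        status = "enough data to learn from"
    else:
        status = "likely stuck in the learning phase"
    print(f"{event:>17}: {volume:>5}/week -> {status}")
```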
Sure, we don't want to target people who are just going to add something to their basket and never purchase. But by optimising towards basket adds we're training Facebook to understand what all basket adders look like, including those who go on to purchase.
The crucial question here is whether Facebook's ability to understand basket adders better than purchasers makes up for the fact that we're no longer optimising for the bottom-of-funnel event: purchases.
If we assume that basket adders look just like purchasers (i.e. they don't have any majorly different characteristics) then optimising for basket adds is a no-brainer: we're effectively doing the same thing as optimising for purchasers, but with much more data for Facebook to learn from.
This can sometimes be a contentious assumption, and one that's not always true. If your data shows that certain sorts of people are more likely to add items to their basket without completing a purchase (e.g. males, or 18-24 year olds), then the assumption above is incorrect; basket adders don't look just like purchasers.
In this case you have to make a choice. Do you optimise for events higher up the funnel, like basket adds, because they give you more data to learn from? Or do you bite the bullet, accept that you'll have less data to learn from, and optimise for purchases?
Fortunately this isn't a question you have to answer yourself: you can run a test to help inform the answer.
The best way to run this test is to create a campaign for each optimisation event you want to assess, and compare them via a split test. Running it as a split test means the campaigns optimising for each event won't compete with one another. They'll each receive their own fair share of your target audience to serve their ads to and learn from.
If you're testing which optimisation event to use for a single campaign, this can be set up easily through Test & Learn. Simply duplicate your existing campaign for each additional optimisation event you want to test, change over the optimisation events in the ad set for each new campaign, and publish your changes.
You can then head over to Test & Learn and set up a campaign-level split test amongst all of the campaigns you've just created, and your original campaign.
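If you'd rather make the ad set changes programmatically than in Ads Manager, something along the lines of the sketch below, using the facebook_business Python SDK, should work. The token, ad set IDs and pixel ID are placeholders, and exact field names and enum values can vary between Marketing API versions, so treat this as illustrative rather than a definitive recipe.

```python
# Illustrative sketch using the facebook_business Python SDK (Marketing API).
# Assumes you've already duplicated the campaign in Ads Manager and just need
# to switch each copy's ad set over to a different optimisation event.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="<ACCESS_TOKEN>")  # placeholder token

PIXEL_ID = "<PIXEL_ID>"  # placeholder pixel ID

# Hypothetical mapping: each duplicated campaign's ad set -> the pixel event
# it should optimise for.
adset_events = {
    "<ADSET_ID_COPY_1>": "ADD_TO_CART",
    "<ADSET_ID_COPY_2>": "INITIATED_CHECKOUT",
}

for adset_id, event in adset_events.items():
    AdSet(adset_id).api_update(params={
        AdSet.Field.optimization_goal: "OFFSITE_CONVERSIONS",
        AdSet.Field.promoted_object: {
            "pixel_id": PIXEL_ID,
            "custom_event_type": event,
        },
    })
```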
Once you've got results from your split test, you can start to analyse them to see which optimisation event is best for you. First of all, provided you've run the test with enough data, you should see that each campaign performs particularly well at delivering the event it's optimising for.
For example, if one of your campaigns is optimising for basket adds, it should do particularly well at generating basket adds. It's good to check this is the case, as it validates Facebook's ability to optimise well for whatever event you choose.
In deciding the winner of the test though, you should always be looking at your bottom of funnel event, e.g. purchases. Don't judge each campaign according to the event that it's optimising for; the point of running this test is to see whether optimising for a higher funnel event can bring in more lower funnel events due to Facebook being able to learn more about your target customer.
To simplify: if your main marketing objective is to drive purchases, then assess how each of the campaigns performs at driving purchases. If the campaign optimising for purchases drove the most purchases, this indicates that purchasers do look distinct, and that optimising for upper funnel events isn't a good proxy for optimising for purchases.
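Whichever way the result goes, the comparison itself is simple: judge every leg on the same bottom-of-funnel yardstick. A trivial sketch of that comparison, with made-up numbers:

```python
# Sketch: judge every leg of the split test on the same bottom-of-funnel
# metric (purchases), regardless of what each leg was optimising for.
# All figures below are made up for illustration.
results = [
    # (optimisation event, spend, purchases attributed to the leg)
    ("Purchase",          5000.0, 100),
    ("AddToCart",         5000.0, 120),
    ("InitiatedCheckout", 5000.0,  95),
]

for event, spend, purchases in results:
    cpa = spend / purchases  # cost per purchase: the shared yardstick
    print(f"Optimised for {event:<18} -> {purchases} purchases, "
          f"cost per purchase £{cpa:.2f}")

# Winner = the leg with the lowest cost per purchase
winner = min(results, key=lambda r: r[1] / r[2])
print(f"\nBest-performing optimisation event: {winner[0]}")
```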
Alternatively, it's very possible that you see some campaigns have better performance when optimising for upper funnel events. This will be because the greater volume of conversion events has helped Facebook to learn how to serve your ads more effectively to your target customers.
If you do see the latter scenario occurring, i.e. you're able to drive more volume by optimising higher up the funnel, you should continue to optimise for that upper funnel event, for that campaign.
Note that your results won't necessarily be applicable to all other campaigns though, as campaigns with more volume flowing through them might benefit more from optimising to bottom funnel events like purchases, and vice versa. It's always good to run tests like these on a variety of campaigns, with different levels of volume, so that you know which campaigns should be optimising to which events.
By doing this methodically across a number of campaigns, you can work out what level of volume you need in a campaign before it becomes beneficial to optimise for bottom funnel events like purchases. Similarly, you can work out the level of volume below which you get better performance by moving your optimisation event up the funnel.
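One way to do this is simply to line up each campaign's usual purchase volume against which optimisation event won its test, and eyeball where the crossover sits. The campaign names, volumes and winners below are all hypothetical; the shape of the analysis is the point.

```python
# Sketch: across several campaigns, line up weekly purchase volume against
# which optimisation event won that campaign's split test, to estimate the
# volume above which optimising directly for purchases pays off.
test_outcomes = [
    # (campaign, usual weekly purchases, winning optimisation event)
    ("Prospecting - broad",      12, "AddToCart"),
    ("Prospecting - lookalike",  30, "AddToCart"),
    ("Retargeting - visitors",   70, "Purchase"),
    ("Retargeting - basket",    140, "Purchase"),
]

for campaign, weekly_purchases, winner in sorted(test_outcomes, key=lambda t: t[1]):
    print(f"{campaign:<28} {weekly_purchases:>4} purchases/week -> optimise for {winner}")

# Crude threshold estimate: the smallest weekly purchase volume at which
# optimising directly for purchases won the test.
purchase_wins = [volume for _, volume, winner in test_outcomes if winner == "Purchase"]
if purchase_wins:
    print(f"\nRough threshold: ~{min(purchase_wins)} purchases/week")
```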
Something which often gets pointed out when running the above test is that it's unfair. The point of the test is to assess which event it's most effective to optimise towards at each level of volume, but by running a split test you're artificially reducing the volume of each leg of the test.
By running the test as described above, you're effectively testing each optimisation event at a different volume to the volume it would usually run at. To be more precise, because you're dividing reach and spend amongst all legs of the test, you're running each optimisation event at an artificially lower volume for the duration of the test.
The danger is that this biases results in favour of using upper funnel optimisation events, like basket adds. This is because, when data is scarce, having some level of data to optimise from plays a huge role. If the leg of the test in which you're optimising for purchases has much lower volume than it usually does, then it will perform artificially badly.
This is a fair criticism of this testing approach, but sadly there isn't an obviously better alternative way to test. My advice would be to bear this limitation in mind when interpreting the results. If two optimisation events have performed equally during the split test, always decide in favour of the lower-funnel event (e.g. purchases, instead of basket adds).
This is because the ad sets which have been optimising to purchases will benefit more from the increased volume they'll see when the split test ends. The extra data they'll have to learn from will help them deliver ads more effectively than ad sets optimising to upper funnel events like basket adds.
The reason for this is slightly technical, and comes from the idea of diminishing marginal returns. The number of basket adds in the ad sets optimising for basket adds will be higher than the number of purchases in the ad sets optimising for purchases. Because of this, doubling the number of events each leg of the test receives will make a smaller difference to performance in the ad sets optimising for basket adds than it will in the ad sets optimising for purchases.
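To illustrate this with a toy model (and it is only a toy model, not how Facebook's delivery system actually works): suppose the benefit an ad set gets from its optimisation events saturates as volume grows. Doubling a small number of purchases then moves the needle more than doubling an already-large number of basket adds.

```python
# Toy illustration of diminishing marginal returns - not Facebook's actual
# model. Assume the benefit an ad set gets from its optimisation events
# saturates as event volume grows (a made-up curve, for illustration only).
def learning_benefit(events: float, k: float = 100.0) -> float:
    """Saturating curve: benefit approaches 1 as event volume grows."""
    return events / (events + k)

# Hypothetical weekly event counts during the split test, which roughly
# double once the test ends and each leg gets the full audience back.
scenarios = [
    ("Purchase leg ",  50, 100),
    ("AddToCart leg", 400, 800),
]

for label, during_test, after_test in scenarios:
    gain = learning_benefit(after_test) - learning_benefit(during_test)
    print(f"{label}: extra benefit once the split test ends = {gain:.3f}")

# The purchase-optimised leg, starting from far fewer events, gains noticeably
# more from the post-test volume increase than the basket-add leg does.
```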
Even if optimising higher up the funnel, say to basket adds, doesn't bring in more purchases than optimising for purchases, there can still be value in it. This is because Facebook lets you retarget people who've reached certain stages of your funnel.
If optimising for basket adds lets you drive significantly more people to add products to their basket, then this increases the size of the audience you can retarget.
Because a basket add is quite a high-intent action, this retargeting audience is likely to be fairly profitable; people who've already added items to their basket are comparatively likely to return to your site and complete their purchase.
If you are optimising higher up the funnel, definitely take advantage of this. Set up ad sets to retarget people who've reached each of the funnel stages you're optimising for, so you can push them further down the funnel and towards your final conversion.
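As a rough gauge of the size of that opportunity, you can estimate the pool of basket adders who never purchased from an export of your pixel or analytics events. The data structure below is hypothetical, but the calculation is just a set difference.

```python
# Sketch: size the retargeting pool of people who added to basket but didn't
# purchase, from a hypothetical export of pixel/analytics events.
# Each record is (user_id, event_name).
events = [
    ("u1", "AddToCart"), ("u1", "Purchase"),
    ("u2", "AddToCart"),
    ("u3", "AddToCart"), ("u3", "InitiateCheckout"),
    ("u4", "ViewContent"),
]

added = {user for user, event in events if event == "AddToCart"}
purchased = {user for user, event in events if event == "Purchase"}

retarget_pool = added - purchased  # basket adders who never purchased
print(f"Basket adders: {len(added)}, purchasers: {len(purchased)}, "
      f"retargeting pool: {len(retarget_pool)}")
```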