

Attribution Modeling: Last Touch, Multi-Touch & Revenue Lift

Posted by Polly Flinch on Apr 9, 2018 10:59:51 AM

One of the biggest barriers to marketing success is quantifying the value of your marketing campaigns. At the end of the day, marketers are accountable for ROI and for ensuring that the tech stack they have in place produces accurate numbers that keep the C-suite happy. This article looks at three attribution models - Last Touch Attribution, Multi-Touch Attribution, and Randomized Control Trials (Revenue Lift) - and their strengths and weaknesses.

Last Touch Attribution

Last touch attribution is a relatively simple model; its appeal is that it's easy to implement and easy to use to measure return on investment. It simply gives 100% of the credit for a sale to whichever channel the customer engaged with last. While this may make your life as a marketer easier, it doesn't really give you the whole picture. A customer's journey is generally pretty complex, and customers tend to interact with multiple channels before making a purchase; a last touch attribution model, however, takes nothing but that last interaction into account, skewing your attribution data, which in turn can affect key marketing activities such as budget allocation.
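To make the "100% of the credit to the last touch" rule concrete, here's a minimal sketch (the channel names and journeys are hypothetical, not Windsor Circle code):

```python
from collections import Counter

def last_touch_attribution(journeys):
    """Assign 100% of each conversion's credit to the final touchpoint."""
    credit = Counter()
    for touchpoints in journeys:
        if touchpoints:  # only count journeys that ended in a purchase
            credit[touchpoints[-1]] += 1
    return dict(credit)

journeys = [
    ["social", "blog", "email"],  # email gets all the credit
    ["search", "email"],
    ["blog", "social"],
]
print(last_touch_attribution(journeys))
# {'email': 2, 'social': 1}
```

Note that the blog and search touches earn nothing at all, no matter how much they influenced the sale - that is exactly the blind spot described above.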

Here’s an example. Say you have 1,000 best customers. Let’s assume that, left alone - receiving no marketing campaigns from you at all - 10 of those customers would make a purchase in a given month. Now say you deploy Windsor Circle triggered emails to the same 1,000 best customers, and 10 customers make a purchase during that month. Using last touch attribution, Windsor Circle would get 100% of the credit for those sales; however, we know from the baseline above that those 10 people were going to purchase anyway - you can see the problem.

While last touch attribution may be attractive because it’s the simplest and easiest form of attribution modeling, there is clearly room for improvement. It leaves the door wide open to inaccurate reporting, misallocated budget, and strategy planning based on a skewed picture of what is and isn’t working.

Multi-Touch Attribution

A little more complex and a lot more involved, multi-touch attribution allows a marketer to assign a percentage of the total conversion to every touchpoint in a customer’s journey toward a purchase. Marketers can opt to create a multi-touch attribution model manually or use predictive software; either option can work well. While multi-touch attribution starts down the path of properly allocating a portion of the sale to each channel involved, there are still some pitfalls:

  • Allocation bias (if done manually): a content marketer may put more weight on interaction with a blog article, while a social media marketer may fight to allocate a higher percentage to a like on Instagram or a follow on Twitter.
  • Use of inaccurate data (if using software): attribution software can be tough to trust because it all happens inside a black box. You need to know that the data being pulled into the model is accurate - which means making sure all channels are reported, including offline data, and that can be a bit tricky.
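As a rough illustration of how multi-touch splits credit, here is a sketch of a simple linear model, which divides the conversion value equally across every touchpoint (the channel names are hypothetical; real models often use position- or decay-based weights instead):

```python
def linear_multi_touch(journey, conversion_value):
    """Split a conversion's value equally across all touchpoints (linear model)."""
    share = conversion_value / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# A $100 sale touched social once, blog once, and email twice
print(linear_multi_touch(["social", "blog", "email", "email"], 100.0))
# {'social': 25.0, 'blog': 25.0, 'email': 50.0}
```

The allocation-bias pitfall above amounts to replacing that equal `share` with hand-picked per-channel weights - which is precisely where a content marketer and a social media marketer can end up arguing.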

Randomized Control Trials (Revenue Lift)

The most effective way to truly understand the ROI your campaigns are driving is through randomized control trials, also known as control group testing. To do this, you treat a certain percentage of your customers and measure the lift in revenue from that subset compared to your non-treated (control) group. In its simplest form, there are six steps. This is the attribution model - we call it Revenue Lift - that our clients subscribe to when using Windsor Circle. Here’s how it works:

  1. Set Up Your Cohort - First things first, you need to identify your cohort: the campaign you want to test, the segment associated with the campaign, and the percentage split between your treated and non-treated groups (we usually recommend starting with a 90/10 split, but you can go as high as 50/50 for faster results). At Windsor Circle, we use a Universal Control Group, so if you select a 10% holdout, that 10% will not receive any campaigns. Once that 10% is removed, we then randomly assign a percentage of the treatment group to a control group for the campaign in question, giving us an even more granular look at what lift, if any, our campaigns are driving. Having control groups at both the universal and individual campaign level allows you, as the marketer, to view results on a macro and micro scale: you can understand how the platform as a whole is working for you, and how specific campaigns may or may not be working for you.
  2. Randomize Assignment to Your Cohorts - Random assignment of customers to your cohorts is integral to creating an unbiased test. For each customer segment, you will need to randomize who is added to the treatment group and who is added to the control group (if you use Windsor Circle software, we take care of this for you automatically). As stated above, we randomly assign 10% of customers to your non-treated cohort and 90% to your treated cohort, and as new customers come in through ongoing purchases, we continue the random assignment - this maintains your Universal Control Group. We apply the same logic to every automated campaign you have running: a customer assigned to the universal control group will never receive a campaign, while customers in the universal treatment group who are eligible for the campaign in question may be assigned to its treated or non-treated cohort on a per-campaign basis. For example, it is completely plausible that a customer is assigned to the non-treated group for an Automated Cart Recovery campaign while also being assigned to the treatment group for Predictive Product Replenishment. This randomization helps eliminate noise in the data.
  3. Treat the Treatment Group (Don’t Treat the Control Group) - Once the assignments are complete, it’s time to run the test. Depending on your marketing stack, this may be as simple as turning the campaign on (it is if you’re using Windsor Circle).
  4. Measure the Results - As long as you’re measuring transactional purchase data (we do this), you can bypass measuring attribution via clicks, coupons, or other proxies used to assign value to marketing treatments. Instead, you can simply compare the randomized control and treatment groups to assess where there are differences in spending patterns. Note: in our calculations we use a method called “winsorizing” (no, we didn’t come up with it) to deal with the outliers that will inevitably show up in the data.
  5. Analyze for Statistical Significance - Like all good things in life, getting a true measure of impact takes time. If you’re looking for a quick number, control group testing isn’t for you; however, if you’re looking for a statistically significant number that will help you and your executive team truly understand the impact of your marketing campaigns, stick with this model. The goal is to gather enough data points to achieve statistical significance. There are ways to shorten that time, most notably increasing the size of your control group; however, you give up potential revenue lift in exchange for knowing sooner whether you can rely on the campaign in question to drive growth.
  6. Infer Success (or Not) from the Data - Once your results reach statistical significance, what did you find? If you’re seeing incremental lift, the tool you’re using (macro results) or the campaign you’re running (micro results) can be deemed a success. On average, our clients see a 20% increase in revenue from treated customers using Windsor Circle’s Predictive Marketing Platform.
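The measurement steps above can be sketched in a few lines. This is a simplified, hypothetical illustration - not Windsor Circle's actual calculation - that winsorizes both groups, computes percent lift in mean revenue, and uses Welch's t statistic as a rough significance check:

```python
import random
import statistics

def winsorize(values, pct=0.05):
    """Clamp the top and bottom pct of values to dampen outliers."""
    s = sorted(values)
    lo = s[int(pct * len(s))]
    hi = s[int((1 - pct) * len(s)) - 1]
    return [min(max(v, lo), hi) for v in values]

def revenue_lift(treated, control, winsor_pct=0.05):
    """Percent lift in mean revenue of treated vs. control, after winsorizing."""
    t = winsorize(treated, winsor_pct)
    c = winsorize(control, winsor_pct)
    t_mean, c_mean = statistics.mean(t), statistics.mean(c)
    lift = (t_mean - c_mean) / c_mean
    # Welch's t statistic; |t| above ~1.96 roughly corresponds to p < 0.05
    se = (statistics.variance(t) / len(t) + statistics.variance(c) / len(c)) ** 0.5
    return lift, (t_mean - c_mean) / se

# Simulated per-customer revenue for a 90/10 split (illustrative numbers only)
random.seed(42)
control = [random.gauss(50, 15) for _ in range(100)]
treated = [random.gauss(60, 15) for _ in range(900)]
lift, t_stat = revenue_lift(treated, control)
print(f"lift = {lift:.1%}, t = {t_stat:.2f}")
```

The trade-off in step 5 shows up directly here: a larger control group shrinks the standard error (so significance arrives sooner), but every customer held out of treatment is revenue lift you forgo while the test runs.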

Randomized Control Trials are the most effective way to get a true measure of impact from your campaigns; however, you need to be willing (or able) to put in the time to gather the data necessary to deem a tool or a campaign a success using this method.


While there are many different attribution models a marketer can use to understand what success looks like, it’s important to know the strengths and weaknesses of the models you’re considering. Here’s your cheat sheet:


  Last Touch Attribution
    Strengths:
    • Simplest model
    • Easy to implement
    • Easy to measure
    Weaknesses:
    • Data can be inaccurate and misleading
    • Doesn't accurately portray the whole picture
    • Gaps in channel reporting

  Multi-Touch Attribution
    Strengths:
    • Gives a better idea of the customer journey
    • Takes multiple channels into account
    • Predictive model
    Weaknesses:
    • Data can be skewed if manually assigning percentages
    • Can be clunky to implement
    • Gaps in channel reporting (offline data)

  RCTs (Revenue Lift)
    Strengths:
    • Statistically significant method for determining ROI
    • Macro and micro measurements to understand overall tool impact and per-campaign impact
    Weaknesses:
    • It can take a while to gather enough data points to reach statistical significance
    • Implementation can be tricky if not working with a vendor, such as Windsor Circle



Topics: Best Practices, Data Science
