
Split test and let Cortex do the rest

Testing is part of a marketer’s life. When we hear of split testing or A/B testing in the realm of email, we typically think of having two or more variations of subject lines or email creatives to test against each other. Marketers must constantly optimize to achieve the best performance. However, creating content variations and tracking their performance can be time-consuming.

ReSci takes split testing to another level of performance optimization, increasing your odds of stronger campaign performance. With our multi-armed bandit A/B testing, you’ll be able to test faster and more efficiently. This post takes you behind the scenes to illustrate exactly how ReSci’s split testing functionality works, and how you can use it for better results.

 

How split testing works

Let’s use a Cart Abandonment email as an example of how traditional split testing is typically executed. Say you have three subject line variations:

  • “You left something behind”
  • “We’ve saved your cart for you!”
  • “Did you forget something?”

or three unique email creative variations:

  • Image + details of a cart abandoned item
  • Image + details of a cart abandoned item + “free shipping” content slice
  • Image + details of a cart abandoned item + additional item recommendations

Testing three different variations

 

These variations deploy evenly across a subset of subscribers (e.g., a 33%/33%/33% split), and the winner is simply whichever variation yields the strongest engagement. This approach is straightforward and can be effective for driving stronger overall campaign performance.
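
To make the mechanics concrete, here is a minimal Python sketch of a traditional even split across the three subject lines above. It is an illustration only, not how any particular platform implements it:

```python
import random

# The three hypothetical subject lines from the example above.
VARIATIONS = [
    "You left something behind",
    "We've saved your cart for you!",
    "Did you forget something?",
]

def assign_variation(subscriber_id: str) -> str:
    """Traditional split test: each subscriber gets one variation,
    uniformly at random (~33% each). The ID is unused here; a real
    system would hash it so assignments stay stable across sends."""
    return random.choice(VARIATIONS)

# Count how a test audience of 9,000 subscribers lands.
counts = {v: 0 for v in VARIATIONS}
for i in range(9_000):
    counts[assign_variation(f"subscriber_{i}")] += 1
print(counts)  # roughly 3,000 subscribers per variation
```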

 

How it works in Cortex

Take the same scenario as above: prepare and finalize the subject line or email creative variations you’d like to test, then activate the respective email templates (variations) to begin the competition. There is no need to manually carve out a test subset of subscribers, because Cortex automatically begins testing on a subset as soon as you activate the email templates.

It’s important to note that split testing works differently for emails in Automated Email stages versus emails in the Promo Blast stage. Emails in Automated Email stages are typically switched on indefinitely and are conditionally triggered over time, while emails in the Promo Blast stage complete deployment as soon as you launch them. Because of this key difference, split testing functions differently depending on the type of stage.

 

Split testing for Automated Emails

Automated Emails, such as Cart Abandonment, are great for scenarios where an email should be triggered continually, whenever specific conditions based on subscriber activity are met.

As the email templates remain active over time, Cortex optimizes toward the stronger-performing template by awarding it more send volume relative to the other active templates in the competition. Using machine learning, Cortex’s back-end algorithms track opens, clicks, and click-to-open rates (CTOR) to identify the true winner of the competition.
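
ReSci hasn’t published Cortex’s exact back-end algorithm, so purely as an illustration, here is a minimal Python sketch of one common multi-armed bandit approach, Thompson sampling, using clicks as the reward signal. The variation names and click rates below are made up:

```python
import random

class Variation:
    """One email template competing in the bandit test."""
    def __init__(self, name: str):
        self.name = name
        self.sends = 0
        self.clicks = 0  # reward signal; opens or CTOR could be used instead

def choose_variation(variations):
    """Thompson sampling: draw from each variation's Beta posterior over
    its click rate and send whichever draw is highest. Variations that
    keep winning accumulate more send volume automatically."""
    def sample(v):
        return random.betavariate(1 + v.clicks, 1 + v.sends - v.clicks)
    return max(variations, key=sample)

# Simulate: variation B has a genuinely higher click rate (made-up numbers).
true_rates = {"A": 0.02, "B": 0.04, "C": 0.025}
variations = [Variation(n) for n in true_rates]

for _ in range(20_000):  # each iteration is one email send
    v = choose_variation(variations)
    v.sends += 1
    if random.random() < true_rates[v.name]:
        v.clicks += 1

for v in variations:
    print(f"{v.name}: {v.sends} sends, {v.clicks} clicks")
# Most send volume ends up flowing to B, mirroring how Cortex awards
# more volume to the stronger-performing template.
```

Unlike a fixed 33/33/33 split, a bandit keeps learning while the test runs, so weaker variations waste fewer sends.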

When conducting a multi-armed bandit A/B test like this, it’s best practice to give Cortex at least 3-4 weeks to determine a winning variation. You can identify the winning variation in the performance statistics by looking at the send volume distribution (emails sent) across the active email templates (variations). Below is an example of split testing several email creative variations in the Cart Abandon stage over the course of ~4 weeks:

From the exhibit above, it is clear that Cortex has reduced send volume (emails sent and deliveries) for the email variations highlighted in red as a result of their lower overall engagement metrics (opens/open rate, clicks/click rate, and CTOR). With this data-driven insight, you can confidently decide which variations to deactivate, allowing Cortex to redistribute more send volume to the stronger-performing variations and, in turn, yield stronger overall campaign results.
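
If you want to sanity-check the winner yourself, the engagement metrics above are simple ratios. A minimal sketch, with made-up numbers purely for illustration:

```python
# Illustrative, made-up counts for three competing templates.
stats = {
    "Variation A": {"sent": 12_400, "opens": 2_480, "clicks": 310},
    "Variation B": {"sent": 5_100,  "opens": 820,   "clicks": 70},
    "Variation C": {"sent": 4_900,  "opens": 735,   "clicks": 55},
}

for name, s in stats.items():
    open_rate = s["opens"] / s["sent"]
    click_rate = s["clicks"] / s["sent"]
    ctor = s["clicks"] / s["opens"]  # click-to-open rate
    print(f"{name}: open rate {open_rate:.1%}, "
          f"click rate {click_rate:.1%}, CTOR {ctor:.1%}")
```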

 

Best practices:

  • When activating a new variation in an Automated Email stage that already has long-active template variations, note the date the new variation went live. That is the start date from which you should track performance across all variations in the stage for the next several weeks to determine the winning (or weakest) variation(s).
  • Allow Cortex at least 3-4 weeks to optimize and determine the winning (or weakest) variation(s).
  • Continuously test at least 2-4 variations at a time in every Automated Email stage you use.

 

Split testing for Promo Blasts (one-time blasts)

Promo Blasts (one-time blasts) are great for scenarios where a special incentive or a big sale announcement warrants communication to your larger subscriber audience. A Promo Blast takes only a few minutes to deploy across its audience. Because of this one-time design, ReSci’s bandit-style split testing for Automated Email stages cannot be used for Promo Blasts. Instead, the traditional split testing described earlier in this article applies here.

You can still test effectively with Cortex’s Smart Segments functionality: create subsets of your larger subscriber audience for split testing purposes, and link these newly created segment audiences (subsets) to the email template variations you’d like to test.

Once the performance of the variations has been reviewed and the strongest performer has been determined, you can link the remaining larger subscriber segment to the final Promo Blast for deployment and, in turn, yield stronger overall campaign results.
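
Smart Segments are configured in the Cortex UI, but the underlying idea, splitting one audience into test subsets plus a holdout that later receives the winner, can be sketched in code. The `bucket` helper and its parameters below are hypothetical:

```python
import hashlib

def bucket(subscriber_id: str, test_share: float = 0.2, n_variations: int = 2):
    """Deterministically place a subscriber into a test variation or the
    holdout (the remaining larger audience that later receives the winner).

    Hashing the ID keeps assignments stable across runs, so the same
    subscriber never flips between variations mid-test.
    """
    h = int(hashlib.sha256(subscriber_id.encode()).hexdigest(), 16)
    position = (h % 10_000) / 10_000  # uniform in [0, 1)
    if position < test_share:
        # Within the test share, split evenly across the variations.
        return f"variation_{int(position / (test_share / n_variations))}"
    return "holdout"

# Example: 20% of the audience split across two variations; the other 80%
# is held back for the winning template.
print(bucket("subscriber_42"))
```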

 

Test for success

Testing is an essential task for all marketers. It’s well known that incremental improvements over time can lift engagement rates and, in turn, conversions. Fortunately, our multi-armed bandit A/B testing helps you test more efficiently and effectively. Paired with our AI and machine learning that predict the optimal send times for your audience, our split testing ensures your best-performing variations are consistently sent to your audience. Spend less time analyzing the performance metrics of A/B tests; let Cortex do that work for you so you can focus on creative and strategy.

 

Brought to you by Aaron Lee, Enterprise Client Success Manager

Aaron provides strategic product and marketing consultation to help ReSci’s enterprise clients achieve their performance goals.

 

Feeling confused or lost? Want more strategies? Contact the helpdesk, or reach out to your Client Success Manager and they’ll be happy to help!


ReSci is a team of marketers and data scientists on a mission to democratize AI. We make powerful recommendations and predictions accessible to brands.
