Retention Metrics Explained: Lead Scoring, Pt. 2 [RS Labs]

62% of customers churn immediately after signup: once these customers complete the registration process, they never make a purchase. The data science team at Retention Science uses Lead Scoring to help separate the good customers from the bad, so marketers can spend less effort courting bad customers and more time engaging the good ones. Part 1 of Retention Metrics Explained: Lead Scoring covered why we use Lead Scoring and how it works. In Part 2, we dive deeper into the model and walk through examples.

Examples of Lead Scoring in Action

It’s interesting to note the specific drivers of Lead Scoring: what information suggests that someone will become a purchaser? As we mentioned, this problem is challenging because only a limited amount of information is available when a user signs up, yet for some businesses we find significant predictive power even in that limited information. For example, one common input is the customer’s birth year, which allows us to estimate the customer’s age. Age turns out to be an interesting feature to examine across two different clients, which we’ll call Client A and Client B.

We can examine the learned weights in the model to get a sense of how age affects the predicted purchase likelihood; for convenience, we can interpret these weights roughly as probabilities. The figure below plots Lead Scoring’s weight against the age of registered customers, in years.

[Figure: The probabilistic score impact (y-axis) against age in years (x-axis) for Client A and Client B]
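
To make this concrete, here is a minimal sketch of how per-age weights can be read off a model of this kind, using a scikit-learn logistic regression over one-hot age buckets on synthetic data. It illustrates the general technique only; the data, bucketing, and model choice are assumptions for the sketch, not our production pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic signup data: age at registration and whether the user later purchased.
rng = np.random.default_rng(0)
ages = rng.integers(15, 90, size=5000)
purchased = (rng.random(5000) < 0.05 + 0.2 * (ages - 15) / 75).astype(int)

# One-hot encode 5-year age buckets so the model learns one weight per bucket.
buckets = (ages // 5) * 5
bucket_values = np.unique(buckets)
X = (buckets[:, None] == bucket_values[None, :]).astype(float)

model = LogisticRegression().fit(X, purchased)

# Each coefficient is the log-odds weight for its age bucket; passing it through
# the sigmoid along with the intercept gives a rough per-bucket purchase
# probability, which is the kind of curve the age plot above visualizes.
for bucket, weight in zip(bucket_values, model.coef_[0]):
    prob = 1.0 / (1.0 + np.exp(-(weight + model.intercept_[0])))
    print(f"ages {bucket}-{bucket + 4}: weight={weight:+.3f}, approx. p(purchase)={prob:.3f}")
```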

For Client A (orange), this information is slightly predictive: users between the ages of 15 and 30 are less likely to purchase than older users (30+). However, the results are noisy. For Client B (blue), on the other hand, there is a consistent relationship between age and purchase likelihood, with the best users falling in the 60-80 range.

There are a couple of other interesting points here. One relates to the distribution of feature values: strangely, there seem to be some very elderly users. We can get more context by looking at the distribution of user signups across age (the numbers have been altered, but not the shape of the distribution):

[Figure: Normalized count of users at each age implied by birth-year selection during registration]

As the graph shows, both distributions have a rather suspicious spike at the tail end (ages over 100). While this could be a data translation error, it more likely means users are lying: these extreme ages correspond to the earliest birth years users can select when they register (around 1900). Users might lie for a number of reasons, such as privacy concerns (“why does the retailer want to know my birthday?”) or to get past a legal age limit. Despite this data issue, Lead Score modeling revealed that age is a strong predictor of purchase probability, even without other features.
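
A simple guard against this kind of self-reported noise is to treat implausible birth years as missing before they reach the model. The sketch below shows the general idea; the specific bounds and the minimum signup age are illustrative assumptions, not our exact production rules.

```python
from datetime import date
from typing import Optional

# Illustrative bounds (assumptions for this sketch): birth years implying an age
# over 100, or under a minimum signup age of 13, are treated as missing rather
# than taken at face value.
CURRENT_YEAR = date.today().year
MIN_BIRTH_YEAR = CURRENT_YEAR - 100
MAX_BIRTH_YEAR = CURRENT_YEAR - 13

def age_from_birth_year(birth_year: int) -> Optional[int]:
    """Return age in years, or None if the self-reported birth year is implausible."""
    if not MIN_BIRTH_YEAR <= birth_year <= MAX_BIRTH_YEAR:
        return None  # e.g. the suspicious spike at the earliest selectable year (~1900)
    return CURRENT_YEAR - birth_year

print(age_from_birth_year(1900))  # None: likely a privacy lie or a default selection
print(age_from_birth_year(1985))  # a plausible age
```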

The most predictive features will likely be domain-specific. In the case of Client A, color choice is central to their most popular products, and it turns out to be the most predictive feature for identifying purchasers. That might not be surprising given the importance of color to their products, but it is striking that the most predictive color is more than three times as indicative of a potential purchaser as the least predictive one. This gives a very strong signal to use in welcome-series email campaigns. The figure below shows the product colors and their impact on purchase likelihood, ordered from most indicative (top) to least indicative (bottom).

[Figure: The impact color choice has on whether a registered user will purchase, ordered from most to least indicative]
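
For a flavor of how such a ranking can be surfaced, the sketch below sorts the learned weights of one-hot color features. The color names and weights are hypothetical placeholders, not Client A’s actual data.

```python
# Hypothetical learned weights for one-hot "favorite color" features; the color
# names and values are made up for illustration, not Client A's actual data.
color_weights = {
    "crimson": 0.92,
    "navy": 0.61,
    "forest green": 0.44,
    "charcoal": 0.33,
    "mustard": 0.27,
}

# Rank colors from most to least indicative of a future purchaser.
ranked = sorted(color_weights.items(), key=lambda kv: kv[1], reverse=True)
for color, weight in ranked:
    print(f"{color:>13s}: {weight:.2f}")

# The top-vs-bottom comparison from the post: how many times as indicative is
# the strongest color relative to the weakest?
top_color, top_weight = ranked[0]
bottom_color, bottom_weight = ranked[-1]
print(f"{top_color} is {top_weight / bottom_weight:.1f}x as indicative as {bottom_color}")
```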

Evaluating Performance

Of course, insights gained from a retrospective look at the data are one thing. How do the models actually perform in practice?

Using our Lead Scoring model, we get a fine-grained ranking of all users by their likelihood to convert. To illustrate, for our gambling client we scored about 8,000 users over Q1 and tracked their transactions over the following six months.

If we compare the model’s predicted top 10% of users with the predicted bottom 10%, we find that the top 10% spent almost 300% more and converted 40% more often.

[Figure: Spend and conversion comparison between the model’s predicted top 10% and bottom 10% of users]

If we go further and compare the model’s predicted top 1% of users with the predicted bottom 1%, the results are even more dramatic: the top 1% spent over 700% as much.

[Figure: Spend comparison between the model’s predicted top 1% and bottom 1% of users]
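
Mechanically, this evaluation amounts to ranking users by predicted score and comparing outcomes at the extremes. The sketch below runs that comparison on synthetic data; the scores, spend, and conversion numbers are made up, and only the method mirrors the analysis above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8000  # roughly the size of the Q1 cohort described above

# Synthetic stand-ins for the model output and tracked outcomes: `score` is the
# predicted lead score, `converted` and `spend` are six-month observations.
score = rng.random(n)
converted = rng.random(n) < 0.1 + 0.1 * score  # higher scores convert more often
spend = np.where(converted, rng.gamma(2.0, 20.0 * (0.5 + score)), 0.0)

def compare_extremes(frac: float) -> None:
    """Compare total spend and conversion rate of the top vs. bottom `frac` of users."""
    k = int(n * frac)
    order = np.argsort(score)  # ascending by predicted score
    bottom, top = order[:k], order[-k:]
    spend_lift = spend[top].sum() / spend[bottom].sum()
    conv_lift = converted[top].mean() / converted[bottom].mean()
    print(f"top vs. bottom {frac:.0%}: spend {spend_lift:.1f}x, conversion {conv_lift:.1f}x")

compare_extremes(0.10)
compare_extremes(0.01)
```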

In this post we outlined our Welcome Purchase Probability model, dug into some meatier technical detail, and showed how it can benefit your business. Other metrics we have covered include churn and CFV (Part 1 and Part 2). We hope you found this topic interesting, and if you have any questions or comments we would love to hear from you!

—————

About the Author

Eric Doi is a data scientist at Retention Science. His goal is to improve every day, just like gradient boosted learners. He studied Computer Science at UC San Diego and Harvey Mudd College.