Recommendation algorithms usually try to find the best set of items (products) to show a specific user. In certain situations, however, it is necessary to find the best set of users who might be interested in a specific item. Many digital marketers face the hard problem of targeting the right users not only for their promotional emails, but also for specific item triggers.
Triggers that Digital Marketers have difficulty targeting:
- Newly introduced item (New Arrivals)
- An item dropped in price (Price Drop)
- Item is back in stock (Back in Stock)
- Item is in limited quantity (Limited quantity)
This blog post gives a quick overview of our data science solution for the triggers listed above.
Traditionally, marketers use a batch-and-blast technique, sending the same message to all users. This approach is wasteful and non-personalized, and it under-performs for the substantial portion of the user base that may not be interested in a particular item. It has been shown to cause high unsubscribe rates (app and email) and to lower customer retention.
An alternative that may seem intuitively correct is to select only users who have had past interactions with the given item, but this poses two issues:
- This greatly limits the user base that can be selected
- It’s not applicable when the item is newly launched (the New Arrivals trigger above)
However, this approach could serve as a reasonable rule-based baseline.
The problem can be formulated as follows: given an item X, find a set of N users that maximizes the redemption rate on item X. (Other flavors of the formulation could maximize the click-to-open rate instead.) We avoid formal mathematical notation and instead describe the solution in terms of well-known linear algebra operations.
Features are derived for the specific item using the following metadata:
a) semantic descriptions of the item
b) categories it belongs to
c) users who have bought the item (usually not available for New Arrivals).
Consider this item feature vector as a single high-level representation of various attributes of the item.
The same feature-generating process is then applied to all other items. Stacking these vectors yields the Item Feature Matrix.
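A minimal sketch of this step in NumPy, using a hypothetical three-item catalog. The feature choices here (bag-of-words over descriptions, multi-hot categories) stand in for whatever "semantic descriptions" and category encodings are used in production; all names and values are illustrative.

```python
import numpy as np

# Hypothetical catalog: each item has a text description and categories.
items = [
    {"description": "red cotton summer dress", "categories": {"dresses", "summer"}},
    {"description": "blue denim jacket",       "categories": {"jackets", "denim"}},
    {"description": "red floral summer skirt", "categories": {"skirts", "summer"}},
]

# Vocabularies over descriptions and over categories.
words = sorted({w for it in items for w in it["description"].split()})
cats = sorted({c for it in items for c in it["categories"]})

def feature_vector(item):
    """Concatenate bag-of-words description features with
    multi-hot category features into one flat item representation."""
    desc = [1.0 if w in item["description"].split() else 0.0 for w in words]
    cat = [1.0 if c in item["categories"] else 0.0 for c in cats]
    return desc + cat

# Stack the per-item vectors into the Item Feature Matrix (one row per item).
item_feature_matrix = np.array([feature_vector(it) for it in items])
print(item_feature_matrix.shape)  # (3, 14): 9 description words + 5 categories
```

Each row is the single high-level representation of an item described above; purchase-history features would be appended as extra columns when available.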
Similarity scores between the new item and all other items are computed (since this is an expensive operation, we leverage DIMSUM). This square symmetric matrix is the Item Similarity Matrix. A similarity threshold is then applied, and items falling below it are discarded.
Items meeting the threshold are considered for the next step, let’s call it Similar Item Set. So now we’ve reached a point where we’ve figured out a few items that are very similar to Item X. Our next goal would be to connect them to users that have already shown interest or bought items in this Similar Item Set.
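The similarity-and-threshold step above can be sketched as follows. This small-scale version computes exact cosine similarities with NumPy; at catalog scale, the all-pairs computation is what DIMSUM approximates. The matrix values and the 0.8 threshold are illustrative, not the production settings.

```python
import numpy as np

# Toy Item Feature Matrix: one row per item (values are illustrative).
X = np.array([
    [1.0, 0.0, 1.0],   # Item X (row 0)
    [0.9, 0.1, 1.0],   # close to Item X
    [0.0, 1.0, 0.0],   # unrelated
])

# Item Similarity Matrix: cosine similarity between every pair of rows.
# Normalizing rows first turns the pairwise dot products into cosines.
normalized = X / np.linalg.norm(X, axis=1, keepdims=True)
similarity = normalized @ normalized.T      # square and symmetric

# Keep only items whose similarity to Item X clears the threshold:
# these form the Similar Item Set.
threshold = 0.8
scores = similarity[0]                      # similarities to Item X
similar_item_set = [i for i, s in enumerate(scores) if i != 0 and s >= threshold]
print(similar_item_set)  # -> [1]
```

Row 1 survives (cosine ≈ 0.996 with Item X) while row 2 is discarded (cosine 0), leaving a Similar Item Set of one item.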
Affinity scores between a user and the items in the Similar Item Set are generated by tracking users’ interactions and transactions with these items. Let’s call this the User Item Matrix. Multiplying a row vector of the User Item Matrix by the Similar Item Set vector gives us a scalar score.
The score represents a measure of the affinity that a user has for items that are similar to Item X. This provides a relative ranking of users who might have affinity to the item in question.
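The row-vector multiplication and ranking can be sketched as below. The interaction counts and the choice of items 0 and 2 as the Similar Item Set are hypothetical.

```python
import numpy as np

# Toy User Item Matrix: rows are users, columns are items; entries are
# interaction/transaction counts (values are illustrative).
user_item_matrix = np.array([
    [3, 0, 1, 0],   # user 0
    [0, 0, 0, 5],   # user 1
    [1, 2, 0, 0],   # user 2
])

# Indicator vector for the Similar Item Set (here, items 0 and 2).
similar_item_vector = np.array([1, 0, 1, 0])

# Each user's row times the Similar Item Set vector -> one affinity scalar,
# measuring how much that user has engaged with items similar to Item X.
affinity = user_item_matrix @ similar_item_vector   # [4, 0, 1]

# Rank users by affinity, highest first; a cutoff on this ranking
# selects the campaign audience.
ranked_users = np.argsort(-affinity)
print(ranked_users)  # -> [0 2 1]
```

User 0 ranks first (4 interactions with similar items), user 1 last (none), which is exactly the relative ordering the campaign threshold is applied to.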
This algorithm was A/B tested across two groups.
In the control group, all users were blasted with the specific item.
In the test group, users were ranked by their predicted preference for the specific item using the technique described in this post, and only users above a selected threshold were included in the campaign.
We performed this test multiple times and present the average results below.
It’s interesting to note that the targeted campaign achieved a 1.7X lift in click-to-open rate and almost a 3.4X lift in conversion rate. Revenue per email was almost 7X.
Want to know more? Shoot us a demo request.
About The Author
Vedant Dhandhania is a Data Scientist at ReSci. He joined our team from Apple in 2014. Vedant builds machine learning models that leverage customer behaviors and advanced machine learning algorithms to accurately predict customer lifetime value, churn and purchase likelihood. Vedant has interests in Signal Processing and Deep Learning.