In traditional binary classification, we attempt to predict a binary outcome $$y$$ from a set of features $$X$$, and success is measured by how well we predict $$y$$. In uplift modeling, we instead want to predict the change in $$y$$ caused by a treatment, but this quantity is never directly observed: any given individual is either treated or not, so we only ever see one of the two possible outcomes.
To understand the practical ramifications of this constraint, consider a simple decision tree model. In a decision tree, splits are chosen to maximize the homogeneity of nodes. In the context of advertising, for an outcome model, this would mean finding splits that separate those who would visit our website from those who wouldn't. In uplift modeling, the split instead needs to take the treatment into account: we want to separate those who would visit our website only when shown an ad (serving an ad is useful) from those who would visit our website regardless (serving an ad is useless).
The conceptually simplest way to do this is to change the splitting criterion, but because many of sklearn's built-in implementations do not allow for a custom split criterion or optimization function, this is not completely trivial. While we could recreate each method to allow for a customized uplift criterion (and others have done this), sklearn is blazingly fast, and a homebrew algorithm would sacrifice benefits that come with that speed, such as efficient hyperparameter tuning (the other uplift implementations we have encountered are, regrettably, slow). We therefore seek alternate methods that let us leverage sklearn (and other) modules. In particular, we implement methods that simply transform the data, encoding the treatment/control information in the transformed outcome.
Our package by default implements the Transformed Outcome (Athey 2016) method, in which the transformed outcome $$Y^{*}$$ is defined as:

$$ Y^{*} = Y \cdot \frac{W - p}{p(1-p)} $$

where $$Y$$ is the observed (binary) outcome, $$W$$ indicates the presence of a treatment ($$W \in \{0, 1\}$$), and $$p = P(W=1)$$ is the treatment policy. The beauty of this transformation is that, in expectation,

$$ E[Y^{*} \mid X] = P(Y=1 \mid X, W=1) - P(Y=1 \mid X, W=0), $$

or uplift. Any algorithm trained to predict $$Y^{*}$$ is therefore, in expectation, predicting uplift.
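To make this concrete, here is a minimal sketch of the transformation using numpy and an off-the-shelf sklearn regressor. The synthetic data, the choice of GradientBoostingRegressor, and the helper name transform_outcome are illustrative assumptions, not the package's own API.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def transform_outcome(y, w, p=None):
    """Transformed Outcome (Athey 2016): Y* = Y * (W - p) / (p * (1 - p))."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    if p is None:
        p = w.mean()  # estimate a constant treatment policy P(W=1) from the data
    return y * (w - p) / (p * (1 - p))

# Toy data: features X, random treatment assignment w, binary outcome y.
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 5))
w = rng.binomial(1, 0.5, size=10000)        # treatment indicator
lift = 0.1 * (X[:, 0] > 0)                  # true uplift depends on X[:, 0]
y = rng.binomial(1, 0.2 + lift * w)         # observed outcome

# Any off-the-shelf regressor trained on Y* predicts uplift in expectation.
y_star = transform_outcome(y, w)
model = GradientBoostingRegressor().fit(X, y_star)
predicted_uplift = model.predict(X)
```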
Once the model has been made, evaluation is traditionally accomplished using what is known as a Qini curve. The Qini curve is defined as

$$ Q(\phi) = n_{t,1}(\phi) - n_{c,1}(\phi)\frac{N_t}{N_c} $$

where $$\phi$$ is the fraction of the population targeted (ordered from highest to lowest predicted uplift), $$n_{t,1}(\phi)$$ and $$n_{c,1}(\phi)$$ are the number of positive outcomes in the treatment and control groups within that fraction, and $$N_t$$ and $$N_c$$ are the total number of individuals in the treatment and control groups. The Qini coefficient, $$Q$$, is then the area between the model's Qini curve and the random chance Qini curve. $$Q$$ has been used throughout the literature as a way of measuring how good a model is at separating positive and negative uplift values. However, a problem with this curve is that its absolute value depends on how people generally respond to your treatment. Consequently, it is not particularly useful in understanding how much of the potential uplift you have captured with your model. To this end, we generally normalize $$Q$$ in two different ways:
q1: $$Q$$ normalized by the theoretical maximal area.
q2: $$Q$$ normalized by the practical maximal area.
The theoretical maximal curve corresponds to a sorting in which we assume that an individual is persuadable (uplift = 1) if and only if they respond in the treatment group (and the same reasoning applies to the control group, for sleeping dogs). The practical max curve corresponds to a similar sorting, for which we also assume that all individuals that have a positive outcome in the treatment group must also have a counterpart (relative to the proportion of individuals in the treatment and control group) in the control group that did not respond. This is a more conservative, realistic curve. The former can only be attained through overfitting, while the latter can only be attained under very generous circumstances. Within the package, we also calculate the "no sleeping dogs" curve, which simply precludes the possibility of negative effects.
To evaluate $$Q$$, we predict the uplift for each row in our dataset. We then order the dataset from highest uplift to lowest uplift and evaluate the Qini curve as a function of the population targeted. The area between this curve and the x-axis can be approximated by a Riemann sum over the $$J$$ bins of targeted population fractions:

$$ \text{Qini curve area} = \sum_{j=1}^{J} \frac{1}{2}\left(Q(\phi_{j+1}) + Q(\phi_j)\right)(\phi_{j+1} - \phi_j) $$

where $$\phi_j$$ is the fraction of the population targeted in the $$j$$th bin. We then need to subtract off the area under the randomized (random chance) curve, which is given by:

$$ \text{Random curve area} = \frac{1}{2}\left(n_{t,1}(1) - n_{c,1}(1)\frac{N_t}{N_c}\right) $$

and so the Qini coefficient is:

$$ Q = \sum_{j=1}^{J} \frac{1}{2}\left(Q(\phi_{j+1}) + Q(\phi_j)\right)(\phi_{j+1} - \phi_j) - \frac{1}{2}\left(n_{t,1}(1) - n_{c,1}(1)\frac{N_t}{N_c}\right) $$
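To make the bookkeeping explicit, the following is a rough sketch of how the Qini curve and coefficient above could be computed from predicted uplift, observed outcomes, and treatment flags. The function name and binning choices are ours for illustration, not the package's internal implementation.

```python
import numpy as np

def qini_coefficient(uplift_pred, y, w, n_bins=20):
    """Approximate the Qini coefficient Q.

    Builds Q(phi) = n_t1(phi) - n_c1(phi) * N_t / N_c on a grid of targeted
    fractions phi, integrates it with a trapezoidal (Riemann) sum, and
    subtracts the area under the random-chance line.
    """
    order = np.argsort(-np.asarray(uplift_pred))   # highest predicted uplift first
    y, w = np.asarray(y)[order], np.asarray(w)[order]
    N_t, N_c = w.sum(), (1 - w).sum()

    phis = np.linspace(0, 1, n_bins + 1)
    Q = np.zeros_like(phis)
    for j, phi in enumerate(phis):
        k = int(round(phi * len(y)))               # number of individuals targeted
        n_t1 = (y[:k] * w[:k]).sum()               # treated responders within phi
        n_c1 = (y[:k] * (1 - w[:k])).sum()         # control responders within phi
        Q[j] = n_t1 - n_c1 * N_t / N_c

    curve_area = np.trapz(Q, phis)                 # Riemann/trapezoidal approximation
    random_area = 0.5 * Q[-1]                      # straight line from (0, 0) to (1, Q(1))
    return curve_area - random_area
```

With the toy data from the earlier sketch, for example, `qini_coefficient(predicted_uplift, y, w)` returns the unnormalized $$Q$$; dividing by the area of the corresponding maximal curve would yield q1 or q2.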
Unfortunately, with the Transformed Outcome method, there is a real risk of overfitting to the treatment label, $$W$$. In this case, the Qini curve as defined above could give values that are deceptively inflated. To correct for this, we implemented two alternate Qini-style curves. First, the Cumulative Gain Chart (Gutierrez 2017) finds the lift within the subset of the population up to $$\phi$$, then multiplies this by the total number of individuals in that subset:

$$ \text{Cumulative gain} = \left(\frac{n_{t,1}}{n_t} - \frac{n_{c,1}}{n_c}\right)\left(n_t + n_c\right) $$

Note we simplified the notation, replacing the explicit dependence on $$\phi$$ with a bare subscript: $$n_t$$ and $$n_c$$ denote the number of treatment and control individuals within the targeted fraction $$\phi$$, and $$n_{t,1}$$ and $$n_{c,1}$$ the corresponding positive outcomes.
Alternatively, we also implement what we call the Adjusted Qini curve, which we define as follows:

$$ \text{Adjusted Qini} = n_{t,1} - n_{c,1}\frac{n_t}{n_c} $$
We emphasize that the cumulative gains chart is less biased than the adjusted Qini curve, but the adjusted Qini can be useful when the percentage targeted is small and treatment group members are valued disproportionately higher. In such a case, the adjusted Qini overvalues treatment group information to prevent overspending.
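For completeness, here is a short sketch evaluating both alternate curves, as written above, at a single targeted fraction $$\phi$$. Again, this is illustrative code under our simplified notation, not the package's internal implementation, and it assumes $$\phi$$ is large enough that both groups appear in the targeted subset.

```python
import numpy as np

def alternate_curves(uplift_pred, y, w, phi):
    """Evaluate the cumulative gain and adjusted Qini curves at a fraction phi.

    Counts are taken within the top-phi fraction of the population ordered by
    predicted uplift: n_t, n_c are treatment/control sizes in that subset and
    n_t1, n_c1 are the corresponding positive outcomes.
    """
    order = np.argsort(-np.asarray(uplift_pred))
    y, w = np.asarray(y)[order], np.asarray(w)[order]
    k = int(round(phi * len(y)))
    n_t, n_c = w[:k].sum(), (1 - w[:k]).sum()
    n_t1 = (y[:k] * w[:k]).sum()
    n_c1 = (y[:k] * (1 - w[:k])).sum()

    cumulative_gain = (n_t1 / n_t - n_c1 / n_c) * (n_t + n_c)
    adjusted_qini = n_t1 - n_c1 * n_t / n_c
    return cumulative_gain, adjusted_qini
```

Note that the adjusted Qini is simply the lift within $$\phi$$ scaled by $$n_t$$ alone, whereas the cumulative gain scales it by $$n_t + n_c$$, which is why the former weights treatment group information more heavily.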
References
Athey, S., & Imbens, G. W. (2015). Machine learning methods for estimating heterogeneous causal effects. stat, 1050(5).
Gutierrez, P., & Gérardy, J. Y. (2017, July). Causal Inference and Uplift Modelling: A Review of the Literature. In International Conference on Predictive Applications and APIs (pp. 1-13).