3 simple rules to build machine learning models that add value
By Julien Kervizic

In any Machine Learning project, there are three simple rules to follow in order to properly add value:

1. Focus on Business impact

2. Focus on an MVP

3. Productionize Early

While these 3 simple rules are not unique to Machine Learning projects, how they should be applied to ML projects tends to differ from other types of projects.

Rule 1: Focus on Business impact

What should drive model selection is what you can influence to drive business performance. This means having an actionable metric to optimize for and a set of metrics on which we make deliberate tradeoffs. The DMAIC methodology provides a structure for handling the choice of metrics as part of its Define phase.

For instance, suppose we want to drive a revenue uplift by targeting customers least likely to convert, offering them a discount to get them to convert. What we should focus on is not the accuracy of finding customers less likely to convert who could be nudged into conversion, but rather the more open question of how to drive the maximum revenue uplift by targeting some of these customers with an offer. This could potentially be done in a multitude of ways, such as targeting, even imprecisely, the customers that are more likely to make large purchases.
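As a hedged illustration of this framing, the sketch below ranks customers by estimated incremental revenue rather than by predicted conversion probability. The column names and the two scores (with and without discount) are assumptions for illustration, not a prescribed implementation.

```python
import pandas as pd

# Hypothetical scored customer base: the two probabilities are assumed to come
# from models scoring conversion with and without a discount offer.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "p_convert_baseline": [0.02, 0.10, 0.30, 0.01],
    "p_convert_with_discount": [0.12, 0.15, 0.32, 0.03],
    "expected_basket_value": [250.0, 40.0, 60.0, 500.0],
    "discount_cost": [20.0, 20.0, 20.0, 20.0],
})

# Expected incremental revenue of targeting = uplift in conversion probability
# times basket value, net of the discount cost paid out on conversion.
uplift = customers["p_convert_with_discount"] - customers["p_convert_baseline"]
customers["expected_uplift_revenue"] = (
    uplift * customers["expected_basket_value"]
    - customers["p_convert_with_discount"] * customers["discount_cost"]
)

# Target the customers with the highest expected revenue uplift,
# not the ones the model classifies most accurately.
targets = customers.sort_values("expected_uplift_revenue", ascending=False)
print(targets[["customer_id", "expected_uplift_revenue"]])
```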

Focusing on optimizing metrics that matter for the business also has the additional advantage of handling the systemic issues around Machine Learning predictions. The prediction of a Machine Learning model is just one cog in a larger engine, and if the system does something unexpected or transforms your input in an unintended way, you might have given the system pristine input but still get garbage out.

This approach is in stark contrast with what traditionally happens within Software Engineering, where the focus tends to be on the inputs to the system.

Since we want to focus on the output of the system, it is really important to track the impact of our model on that output. Having a proper measurement and tracking process facilitates optimization towards the output and allows for deep dives into what drives the expected outcome.
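A minimal sketch of such tracking, assuming the model's predictions were used to target a treatment group while a random holdout was kept as control; the table and column names are hypothetical.

```python
import pandas as pd

# Hypothetical experiment log: each row is a customer, flagged as targeted
# (treatment) or held out (control), with the revenue observed afterwards.
results = pd.DataFrame({
    "group": ["treatment"] * 3 + ["control"] * 3,
    "revenue": [120.0, 0.0, 80.0, 60.0, 0.0, 30.0],
})

# Track the business output the model is meant to move: average revenue
# per customer in each group, and the uplift between them.
summary = results.groupby("group")["revenue"].agg(["mean", "count"])
uplift = summary.loc["treatment", "mean"] - summary.loc["control", "mean"]
print(summary)
print(f"Estimated revenue uplift per customer: {uplift:.2f}")
```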

Rule 2: Focus on an MVP

As for most Software projects, focusing on an MVP is really the way forward: an MVP in terms of model, features, and operations. Too often, people from Academia try to fine-tune a near perfect model before being able to push it out of the door. A focus on an MVP forces the model out of the door so you can see if it adds value.

It is not worth going directly to a Deep Learning or Boosted model from the get go. Starting with a simple and interpretable model is usually a better starting point than premature optimization with a more sophisticated model. Even simple models should be able to drive decent predictions provided there is signal in the data. Focusing on simple, interpretable models has the added advantage of providing some understanding of the datasets and allowing you to communicate these findings early to different stakeholders.
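As a sketch of that starting point, a plain logistic regression over a handful of features already yields both a baseline prediction and interpretable coefficients to discuss with stakeholders. The feature names and the synthetic data here are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical churn dataset with a few simple, explainable features.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "orders_last_6m": rng.poisson(3, n),
    "avg_days_between_orders": rng.uniform(5, 90, n),
    "had_support_ticket": rng.integers(0, 2, n),
})
# Synthetic label standing in for churn, just to make the sketch runnable.
y = (X["avg_days_between_orders"] + rng.normal(0, 20, n) > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The coefficients give an interpretable first read on which features
# move the churn prediction, which is easy to communicate early on.
print(pd.Series(model.coef_[0], index=X.columns))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```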

Likewise for features, it is important to have a clear idea of which features to consider and how much time should be invested in trying different features for model building. Investigating every dataset, feature, and transformation is a time consuming effort that is often prone to data quality issues.

For instance, let's consider an e-commerce company that is trying to model its churn rate. Looking at different transformations of the transactional data, such as re-ordering rate, average time between orders, or number of orders in the past n months, does not provide significant prediction performance. The reason is that replacement orders fall into the generic orders without any specific identification. These orders impact each of those metrics, shortening the average time between orders and inflating the re-ordering rate and the number of orders, essentially polluting the information these metrics would contain were those orders removed. Being able to identify these orders by talking to the business, on the other hand, would add additional signal, such as flagging customers who might have had a bad experience that necessitated a replacement order.
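A sketch of that kind of feature cleanup, assuming a transactional table where replacement orders can be flagged (the is_replacement column is a hypothetical outcome of that conversation with the business): excluding them before computing re-ordering metrics keeps the signal intact, while their count becomes a feature in its own right.

```python
import pandas as pd

# Hypothetical order history; is_replacement would come from working with the
# business to identify replacement orders in the source system.
orders = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-01-07", "2024-03-01", "2024-02-10", "2024-04-20"]
    ),
    "is_replacement": [False, True, False, False, False],
})

# Compute ordering features on genuine orders only, so replacements do not
# artificially shorten the time between orders or inflate order counts.
genuine = orders[~orders["is_replacement"]].sort_values("order_date")
features = genuine.groupby("customer_id").agg(
    n_orders=("order_date", "count"),
    avg_days_between_orders=("order_date", lambda d: d.diff().dt.days.mean()),
)

# The replacements themselves carry signal (a possibly bad experience),
# so keep them as a separate feature rather than mixing them in.
features["n_replacements"] = orders.groupby("customer_id")["is_replacement"].sum()
print(features)
```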

The need to focus on an MVP is also apparent in the way the model is operationalized. If the model needs to be tested in an ad-hoc manner, such as through a newsletter campaign push, there shouldn't be a need to fully integrate data flows end to end. Having a model that can be refreshed without too much manual input when needed might be sufficient to operationalize it. That is, running a script or a notebook from time to time to generate a CSV that can be used for segmentation, for instance, might be sufficient to get some first results from the model.
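A sketch of that light-touch operationalization, assuming a previously trained scikit-learn-style model saved to disk and a feature extract that can be refreshed on demand; the file names and segment thresholds are placeholders, not a prescribed setup.

```python
import joblib
import pandas as pd

# Hypothetical paths: a previously trained model and a refreshed feature extract.
MODEL_PATH = "churn_model.joblib"
FEATURES_PATH = "customer_features.csv"


def refresh_segments(output_path: str = "churn_segments.csv") -> None:
    """Score the latest customer features and write a CSV usable for
    newsletter segmentation, without any end-to-end pipeline integration."""
    model = joblib.load(MODEL_PATH)
    features = pd.read_csv(FEATURES_PATH)

    scored = features[["customer_id"]].copy()
    scored["churn_score"] = model.predict_proba(
        features.drop(columns=["customer_id"])
    )[:, 1]

    # A coarse segment label is often enough for a first campaign test.
    scored["segment"] = pd.cut(
        scored["churn_score"],
        bins=[0, 0.3, 0.7, 1.0],
        labels=["low", "medium", "high"],
        include_lowest=True,
    )
    scored.to_csv(output_path, index=False)


if __name__ == "__main__":
    refresh_segments()
```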

The data required to provide the desired uplift is quite likely to be unavailable in the right form, or of too poor quality to provide predictions with the right degree of accuracy.

If that is the case but the area or the model is of particular importance for the business, it is usually better to spend the time working with the business on a more performance-measurement focused type of work. Forcing the explanation of changes in metrics within a business process, along with the setup of planning cycles, allows the business to work with you to identify potential predictors and gives the business a stake in the game in enforcing a certain degree of data quality.

Rule 3: Productionize Early

Data Scientists should aim to productionize their models early if they want to derive value. Productionizing the model does not mean having it fully integrated in a data pipeline, or having all the processes for training and generating predictions fully automated; rather, it means having the model embedded as part of the decision process, where the predictions provided are used to take decisions and then actions. This is for a couple of reasons:

1. Our goal is to impact a business metric, and since the decisions and actions are what dictate how this metric will be impacted, the predictions should only be seen as one input to the process.

2. Embedding predictions as part of the decision process forces stakeholders to have some skin in the game: they should help improve the model by pointing out potential data-quality issues and suggesting potential predictors, anomalies, or other factors that might affect the data and the predictions.

It is important not to be too eager with full automation, however. If the predictions are only used on a monthly basis and can be imported through a file upload, for instance, it is most likely not wise to fully integrate them. One of the core priorities for automation should be creating re-usable components that allow for quicker iteration, be it through the development of standard data pipelines, tracking dashboards and frameworks, feature computation frameworks, feature stores, AutoML… Tools such as Airflow tend to simplify the development of these frameworks and allow a quick path to automation.
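As a hedged illustration of how an Airflow-style orchestrator turns such steps into reusable, scheduled components, the sketch below wires two placeholder tasks into a DAG; the DAG id, schedule, and task boundaries are assumptions for illustration, and the function bodies are left as stubs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def compute_features():
    # Placeholder: refresh the feature extract (assumed helper logic).
    ...


def score_customers():
    # Placeholder: load the model, score, and write the segmentation file.
    ...


# A weekly scoring DAG built from two reusable steps.
with DAG(
    dag_id="churn_scoring",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    features = PythonOperator(
        task_id="compute_features", python_callable=compute_features
    )
    scoring = PythonOperator(
        task_id="score_customers", python_callable=score_customers
    )
    features >> scoring
```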

The productionization process should happen through small increments that are pushed to production early, supported by a fast-paced iterative cycle backed by automation.

Wrapping up

These 3 simple rules provide a framework for driving business value from operating machine learning. They allow ML practitioners to benefit from a virtuous cycle and to operate within a complex system that consumes their predictions in order to achieve their goal, while gaining insight and support from different stakeholders: by focusing on an MVP approach, driving small iterative value quickly, showcasing results, and bringing stakeholders along on the same data driven journey.
