Detect & Overcome Model Drift in MLOps

Bhaskar Ammu

Machine learning (ML) is widely considered the cornerstone of digital transformation, yet it is also highly vulnerable to the changing dynamics of the digital landscape. An ML model is optimized for, and defined by, the parameters and variables available at the time it was created. Model drift (or model decay) is the loss of predictive power in an ML model, caused by changes in the digital environment and, consequently, in variables such as data. It is an inherent consequence of how machine learning models work.

It is easy to introduce model drift into MLOps if you assume that all future variables will behave as they did when the ML model was first built. A model run under static conditions on static data should not suffer from degraded performance, since the data it sees matches the data it was trained on. A model deployed in a dynamic, changing environment with many shifting variables, however, is likely to perform differently over time.

Types of Model Drift

Based on the changes in variables or predictors, model drift can be divided into two types:

  1. Concept drift: Concept drift occurs when the statistical properties of the target variable change, i.e., the relationship between the model’s inputs and what it is trying to predict shifts. When the very concept the model learned changes, the model stops functioning properly.
  2. Data drift: This is the most common form of model drift. It occurs when the statistical properties of certain predictors change. As the input distribution shifts, the model can fail: it may succeed in one environment but not in another, because it was never tailored to the changed variables.

Model drift is the combined result of data drift and concept drift. Upstream data changes also play an important role: problems in the data pipeline can lead to missing data and to features not being generated at all.
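Data drift of the kind described here can be checked statistically. As a minimal sketch (not from the article), the two-sample Kolmogorov-Smirnov test from SciPy can compare a feature's training-time distribution against its distribution in production; the significance level and the synthetic feature values below are illustrative assumptions.

```python
# Sketch: per-feature data drift check with a two-sample
# Kolmogorov-Smirnov test. Threshold and data are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def detect_data_drift(reference, current, alpha=0.05):
    """Return True if `current` differs significantly from the
    reference (training-time) distribution."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha

rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # training data
live_feature = rng.normal(loc=0.8, scale=1.0, size=1000)   # shifted in production

print(detect_data_drift(train_feature, train_feature))  # False: same distribution
print(detect_data_drift(train_feature, live_feature))   # True: drift detected
```

In practice a check like this would run per feature on each fresh batch, with any flagged feature triggering an alert or a retraining job.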

Consider an ML model built to flag spam emails, trained on the generic templates of spam circulating at the time. The model can identify and block those emails, preventing potential phishing attacks. But as the threat landscape evolves and cybercriminals grow more sophisticated, the emails seen in production no longer resemble the original templates. A detection system that relies on previous years’ variables will misclassify new hacking threats when confronted with them. This is just one example of model drift.

Addressing Model Drift

Detecting model drift early is essential for maintaining accuracy. Model accuracy drops over time as predicted values deviate further and further from actual values, and left unchecked this process can do irreparable damage to the entire model, so it is important to catch the problem early. A convenient early-warning signal is the F1 score, which combines the model’s precision and recall into a single metric: a falling F1 score is easy to spot.
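As a minimal sketch of this kind of monitoring (the threshold value is an illustrative assumption, not from the article), precision, recall, and F1 can be computed with scikit-learn once ground-truth labels become available, and compared against a floor:

```python
# Sketch: flag possible drift when the F1 score falls below a
# threshold. The threshold (0.8) is an illustrative assumption.
from sklearn.metrics import precision_score, recall_score, f1_score

def check_model_health(y_true, y_pred, f1_threshold=0.8):
    precision = precision_score(y_true, y_pred)
    recall = recall_score(y_true, y_pred)
    f1 = f1_score(y_true, y_pred)
    return {"precision": precision, "recall": recall,
            "f1": f1, "drift_suspected": f1 < f1_threshold}

# Ground-truth labels observed in production vs. model predictions
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
report = check_model_health(y_true, y_pred)
print(report)  # precision 0.8, recall ~0.667, drift_suspected True
```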

Depending on the purpose of the model and the situation, a variety of other metrics will also be relevant. An ML model intended for medical use will be judged by a different set of metrics than one designed for business operations. The principle, however, is the same: if a key metric falls below its threshold, there is a high likelihood that model drift is occurring.

In many cases, however, measuring a model’s accuracy is difficult, especially when the actual and predicted values are hard to obtain together; this is one of the challenges of scaling ML models. Experience with past drift can help you predict when drift is likely to occur again, and models can be redeveloped on a regular schedule to head off impending drift.

It is also possible to keep the original model intact and build new models that correct or improve the baseline model’s predictions. When data changes over time, it is important to weight it according to recency: an ML model that gives more weight to recent observations than to older ones will be more robust, and better able to handle future drift-related changes.
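One common way to implement this recency weighting (a sketch under assumptions; the decay rate and synthetic data are illustrative, and the article does not prescribe a specific scheme) is to pass exponentially decaying sample weights into `fit` via scikit-learn's `sample_weight` parameter:

```python
# Sketch: retrain with exponentially decaying sample weights so
# recent rows dominate. Decay rate 0.01 is an illustrative choice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 500
X = rng.normal(size=(n, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Row 0 is the oldest observation, row n-1 the newest.
age = np.arange(n)[::-1]          # newest sample has age 0
weights = np.exp(-0.01 * age)     # newest weight = 1.0, older decay toward 0

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
print(model.score(X, y))
```

Tuning the decay rate trades robustness to drift against stability: faster decay adapts quickly but effectively trains on less data.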

MLOps for Sustainable ML Models

There is no one-size-fits-all solution for identifying and dealing with model drift promptly. Whether you rely on scheduled model retraining or on real-time machine learning, building a sustainable ML model is not an easy task.

MLOps has made it easier to retrain models more frequently and at shorter intervals. Data teams can automate model retraining, and scheduling is the simplest way to start: companies can strengthen their existing data pipelines with automated retraining that requires no code changes and no rebuilding of the pipeline. If, however, a company discovers a new feature or algorithm, incorporating it into the retrained model can significantly improve accuracy.
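The scheduled-retraining idea can be sketched as a small policy object (a hypothetical helper, not a named tool from the article): retrain whenever a fixed interval has elapsed, or immediately whenever drift has been flagged. The seven-day interval is an illustrative assumption.

```python
# Sketch: decide when to retrain, on a schedule or on a drift flag.
# The interval length is an illustrative assumption.
from datetime import datetime, timedelta

class RetrainScheduler:
    def __init__(self, interval_days=7):
        self.interval = timedelta(days=interval_days)
        self.last_trained = None

    def should_retrain(self, now, drift_detected=False):
        if drift_detected:          # drift alert overrides the schedule
            return True
        if self.last_trained is None:
            return True             # never trained yet
        return now - self.last_trained >= self.interval

    def mark_trained(self, now):
        self.last_trained = now

scheduler = RetrainScheduler(interval_days=7)
t0 = datetime(2024, 1, 1)
print(scheduler.should_retrain(t0))                      # True: never trained
scheduler.mark_trained(t0)
print(scheduler.should_retrain(t0 + timedelta(days=3)))  # False: inside interval
print(scheduler.should_retrain(t0 + timedelta(days=3),
                               drift_detected=True))     # True: drift overrides
```

In a real pipeline, `should_retrain` would be polled by an orchestrator (e.g., a cron job), with the drift flag fed from monitoring.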

Monitor Your Models!

Many variables factor into how often models should be retrained. Sometimes waiting for a problem to appear is the best option, especially when there are no previous records to guide you. Models should also be retrained according to patterns tied to seasonal variations in the underlying variables. Above all, monitoring is crucial: no matter what business domain you work in, constant monitoring at regular intervals is the most reliable way to detect model drift.

Bhaskar Ammu - Senior Data Scientist, Sigmoid

Sigmoid is a Sequoia-backed, leading data analysis company with offices in India, the US, and Peru. Sigmoid leverages its deep expertise in Open-Source and Cloud technologies to build innovative analytics solutions for business problems. Sigmoid’s...