Overcoming Data Challenges in Machine Learning Model Monitoring

Machine learning models are becoming increasingly prevalent in today's data-driven world, used to solve a wide range of problems and make predictions based on historical data. However, even the most well-trained and accurate machine learning models can suffer from issues related to the data they rely on. Data quality is a critical factor in model performance, and any change to the underlying data can significantly affect a model's accuracy and effectiveness. In this article, we will discuss how to overcome data challenges in machine learning model monitoring.


There are two types of data issues that can occur while monitoring a machine learning model.

  1. The first type is when something goes wrong with the data itself. This can include issues such as missing data, incorrect data, or inconsistent data. These issues can occur for various reasons, including errors in data collection, data processing, or data storage.

  2. The second type of data issue is when the data changes because the environment does. This can happen when there are changes in the underlying data sources, changes in the business processes, or changes in the system infrastructure. These changes can affect the quality and relevance of the input data, which can in turn affect the output of the machine learning system.

To avoid these data issues, monitoring input data quality should be an ongoing process. By ensuring high-quality input data, machine learning systems can produce accurate and reliable output, leading to better decision-making and improved business outcomes.


Overcoming Data Challenges in Machine Learning Model Monitoring

Below are some of the data-related issues that can cause problems while monitoring a machine learning model:

1. Data processing issues

A machine learning system uses different data sources to make predictions, but sometimes there can be issues with the data that it's using.


For example, let's say a bank's data science team creates a machine learning system that sends personalized promo offers to clients. The system uses data from different sources like the customer database, clickstream logs from Internet banking, and call center logs. All the data is merged and stored in a data warehouse where the machine learning model calculates the necessary features on top of the joint table to rank the offers for each client.

There are many opportunities for things to go wrong in this process: using the wrong data source, losing access to the data, writing bad SQL code, making changes to the infrastructure, or having errors in the feature code. Any of these can cause the machine learning model to crash or make incorrect predictions.


When there are issues with the data processing, the model code can simply crash, and it's easy to catch the issue. However, in some cases, the code can still execute on incorrect and incomplete input, which can lead to serious consequences.


If the machine learning system uses batch inference, as in the example of the bank's promo offers, the situation is less dramatic. There is some room for error, and if a problem is caught in time, the model can simply be run again. But for high-load streaming models, such as those in e-commerce, gaming, or bank transaction processing, data processing problems can multiply, leading to far more significant issues.
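One practical safeguard against silent data processing failures is to validate each input batch before running inference. The sketch below shows a minimal version of such a check, assuming a pandas pipeline; the column names, row-count threshold, and null tolerance are illustrative placeholders, not from a real system.

```python
import pandas as pd

# Hypothetical required feature columns and row-count expectations for a
# promo-offer batch job; all names and thresholds are illustrative.
REQUIRED_COLUMNS = {"client_id", "avg_balance", "clicks_30d", "call_count_30d"}
MIN_EXPECTED_ROWS = 1000  # rough lower bound based on past batch sizes

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of problems found in the input batch (empty list = OK)."""
    problems = []
    # 1. All expected feature columns must be present.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    # 2. A suspiciously small batch often means an upstream join went wrong.
    if len(df) < MIN_EXPECTED_ROWS:
        problems.append(f"only {len(df)} rows, expected at least {MIN_EXPECTED_ROWS}")
    # 3. A spike in nulls hints at a broken source or bad SQL.
    null_share = df.reindex(columns=sorted(REQUIRED_COLUMNS)).isna().mean()
    for col, share in null_share.items():
        if share > 0.05:  # more than 5% nulls is suspicious
            problems.append(f"{col}: {share:.0%} null values")
    return problems
```

If `validate_batch` returns any problems, the batch job can halt and alert instead of silently scoring incomplete input.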

2. Data schema change

In some cases, data processing may function correctly, but a valid change occurs at the data source that causes issues for the model. New data formats, types, and schemas are rarely good news for models. Additionally, the individual who made the change is often unaware of the impact on the model or may not even know that the model exists.


Consider the example of a promotional campaign. One day, the operational team of a call center decides to organize and enrich the information they collect after each customer call. They may introduce new categories to classify calls more accurately and ask each client about their preferred communication channel, logging it in a new field. They may even change the order of fields and rename them to make it more intuitive.

However, these changes can cause problems for the model, resulting in lost signal. The model will not match new categories with the old ones or process new features unless explicitly told to do so. If there is no check for data completeness, the model will generate a response based on partial input it knows how to handle.
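A simple completeness check of this kind is to compare incoming categorical values against the set the model saw at training time. The sketch below illustrates the idea for the call-center example; the category names are hypothetical.

```python
# Hypothetical set of call categories seen at training time.
KNOWN_CALL_CATEGORIES = {"complaint", "card_issue", "loan_inquiry", "other"}

def unseen_category_share(values: list[str]) -> tuple[set, float]:
    """Return the categories the model has never seen, and the share of
    incoming rows that fall into them -- a rising share signals a schema
    or catalog change upstream."""
    unseen = {v for v in values if v not in KNOWN_CALL_CATEGORIES}
    share = sum(v in unseen for v in values) / len(values) if values else 0.0
    return unseen, share
```

When the unseen share crosses a threshold, the monitoring system can alert the team that the source schema has likely changed, rather than letting the model quietly drop the new signal.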


This is a common issue in demand forecasting or e-commerce recommendations, where complex features are based on category type. If someone reorganizes the catalog, the model will need to learn it all over again or wait until someone explains what happened.


Other examples of changes that can cause issues for models include updates in the original business system that change units of measurement or date formats, new product features in the application that add telemetry the model never trained on, and new third-party data providers or APIs.


The challenge is that domain experts may perceive these changes as operational improvements, but the model is trained on the aggregates and expects to calculate them the usual way. A lack of clear data ownership and documentation can make it harder to trace or inform individuals about upcoming data updates inside an organization. Therefore, data quality monitoring becomes critical to capture changes and ensure that the model can adapt to them.

3. Data loss at the source

Data loss is a common problem that can occur due to various reasons such as bugs, sensor failures, or unavailable external APIs. Detecting these issues early on is crucial since they can result in irreversible loss of future retraining data.


Sometimes, these outages may only affect a specific subset of data, such as users in a particular geography or operating system. This makes it even harder to detect them, and unless another system that relies on the same data source is properly monitored, the failure can go unnoticed.


What's worse is that a corrupted data source may still provide data that appears normal, such as a broken temperature sensor returning a constant value. This makes it difficult to spot unless you keep track of unusual numbers and patterns.
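One way to catch a "stuck" source like a broken sensor is to flag columns whose recent values are suspiciously constant. The sketch below, with an assumed window size and tolerance, illustrates the idea:

```python
from statistics import pstdev

def looks_stuck(readings: list[float], window: int = 20, tol: float = 1e-6) -> bool:
    """Flag a data source whose last `window` readings are (almost) constant --
    each value may look plausible on its own, but a real signal should vary.
    Window size and tolerance are illustrative and should be tuned per source."""
    recent = readings[-window:]
    if len(recent) < window:
        return False  # not enough data to judge
    return pstdev(recent) < tol
```

A check like this will not repair the broken source, but it turns an invisible failure into an alert, so the team can assess the damage early.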


Similar to physical failures, resolving data loss issues may not always be possible immediately. However, detecting them on time can help assess the damage quickly, and if necessary, update, replace, or pause the model.

4. Broken upstream models

In complex systems, there may be multiple models that rely on one another, with one model's output serving as another model's input. This can create a situation where a faulty prediction from one model can corrupt the features used by another model.


For instance, consider a content or product recommendation engine. The system may first predict the popularity of a given product or item, and then use this information to recommend items to different users. These are separate models that rely on each other. Once an item is recommended to a user, the user is more likely to click on it, making it more popular in the first model's prediction.


Another example is a car route navigation system. The system first generates possible routes, and then predicts the expected time of arrival for each route. Another model ranks the routes and selects the optimal one, which may influence traffic patterns. As drivers follow the suggested routes, this can create a new traffic situation.


Similar issues can arise in other models used in logistics, routing, and delivery.


The interconnected nature of these models poses a significant risk: if one model fails or produces faulty results, it can create a cascade of problems that affect the entire system.
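One mitigation is a sanity check on an upstream model's output before it is fed downstream. The minimal sketch below assumes the upstream model emits scores in a known range (e.g. popularity probabilities in [0, 1]); the range and the degeneracy rule are illustrative assumptions.

```python
def check_upstream_scores(scores: list[float], lo: float = 0.0, hi: float = 1.0) -> bool:
    """Return True if upstream predictions look usable. Scores outside the
    expected range, or collapsed to a single constant value, both hint that
    the upstream model is broken and should not feed the downstream one."""
    if not scores:
        return False
    in_range = all(lo <= s <= hi for s in scores)
    degenerate = len(set(scores)) == 1 and len(scores) > 1
    return in_range and not degenerate
```

If the check fails, the downstream model can fall back to a default (for example, last known good scores) instead of propagating the fault through the whole chain.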


Tips to Overcome These Issues

Here are some tips and strategies to overcome data issues while monitoring machine learning models:


1. Establish clear data ownership and documentation: It is essential to have a clear understanding of who owns the data and ensure that the data is properly documented, including any changes made to it. This helps in tracking and identifying any issues related to the data and facilitates faster resolution.


2. Monitor data quality: Regularly monitor the quality of the data and ensure that it meets the required standards. Establishing data quality monitoring procedures helps in identifying and addressing any data issues early on before they impact the performance of the machine learning model.


3. Build flexibility into the model: Consider building flexibility into the machine learning model by designing it to handle changes in the data. For example, if the data schema changes, the model can be modified to handle the changes rather than being completely retrained.


4. Retrain the model when necessary: If the data changes significantly or the model's performance declines, it may be necessary to retrain the model to ensure it continues to perform optimally. Keeping track of model performance over time and establishing thresholds for retraining can help identify when it's time for retraining.


5. Involve domain experts: Domain experts can provide valuable insights into any changes in the data and help identify potential issues. Involve them in the process of monitoring the machine learning model and keeping track of any changes in the data.
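The retraining threshold from tip 4 can be sketched as a simple rule: retrain when a tracked metric drops a given relative amount below the baseline measured at deployment time. The metric name and the 10% drop are illustrative assumptions.

```python
def needs_retraining(recent_metric: float, baseline_metric: float,
                     max_relative_drop: float = 0.10) -> bool:
    """Trigger retraining when a tracked metric (e.g. AUC on recent data)
    falls more than `max_relative_drop` below the deployment-time baseline."""
    return recent_metric < baseline_metric * (1 - max_relative_drop)
```

Evaluating this rule on every monitoring run gives the team an objective, pre-agreed trigger instead of an ad-hoc judgment about when performance has "declined enough".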


By implementing these tips and strategies, organizations can overcome data challenges and ensure the ongoing success of their machine learning models. Ultimately, this results in better decisions, improved efficiency, and increased business value.
