Most machine learning algorithms are designed to work with stationary data, yet real-life streaming data is rarely stationary. Machine-learned models built on data observed within a fixed time period usually suffer a loss of prediction quality due to what is known as concept drift. The most common way to deal with concept drift is to retrain the models periodically with new data. The retraining interval is usually chosen based on the cost of retraining alone: changes in the input data and in prediction quality are not monitored, and the cost of inaccurate predictions is left out of the calculation. A better alternative is to monitor model quality by testing the inputs and predictions for changes over time, and to use the detected change points to drive retraining decisions. There has been significant development in this area over the last two decades. In this webinar, Emre Velipasaoglu, Principal Data Scientist at Lightbend, Inc., will review the successful methods of machine-learned model quality monitoring.
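To make the change-point idea concrete, here is a minimal sketch of one classic sequential change detector, the Page-Hinkley test, applied to a stream of per-example model errors. This is an illustrative example only, not the specific methods covered in the webinar; the class name and parameter values are our own choices.

```python
class PageHinkley:
    """Page-Hinkley change detector over a stream of model error values.

    Signals when the running error level drifts upward by more than
    `delta` beyond its historical mean, with alarm threshold `threshold`.
    (Parameter values here are illustrative defaults, not prescriptions.)
    """

    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # magnitude of change we tolerate
        self.threshold = threshold  # alarm threshold (often called lambda)
        self.n = 0                  # number of observations seen
        self.mean = 0.0             # running mean of the error stream
        self.cum = 0.0              # cumulative deviation m_t
        self.min_cum = 0.0          # minimum of m_t seen so far

    def update(self, error):
        """Feed one per-example error (e.g. 0/1 misclassification loss).

        Returns True if a drift (upward change in error) is detected,
        which would be the cue to retrain the model.
        """
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold
```

In use, the detector is fed the model's errors as labeled examples arrive; a stable low-error stream raises no alarm, while a sustained jump in error (the symptom of concept drift) triggers one, replacing a fixed retraining schedule with a data-driven one.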