Machine learning models in production are susceptible to failures from data and concept drift, which can silently degrade model performance over time. Detecting these issues requires effective monitoring of both model health and data quality. Key signals to monitor include model accuracy, data schema changes, data distribution shifts, broken data pipelines, and the impact of concept drift. A pragmatic starting point is to add basic machine learning metrics to the service monitoring already in place for memory, latency, and uptime. The appropriate level of monitoring sophistication depends on factors such as available resources, the importance of the use case, and system complexity.
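One of the distribution-shift checks mentioned above can be sketched with the Population Stability Index (PSI), a common drift metric that compares the binned distribution of a feature in current traffic against a reference (e.g. training) sample. This is a minimal illustrative sketch, not the method prescribed by the source; the function name and thresholds are assumptions based on conventional PSI usage.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Measure distribution shift between two samples of one feature.

    Common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate
    shift, > 0.25 is a major shift worth alerting on. (Thresholds are
    conventional, not from the source text.)
    """
    # Bin edges are derived from the reference sample so both
    # distributions are compared on the same grid.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; epsilon avoids log(0) for empty bins.
    eps = 1e-6
    ref_pct = ref_counts / ref_counts.sum() + eps
    cur_pct = cur_counts / cur_counts.sum() + eps
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # stand-in for training data
stable = rng.normal(0.0, 1.0, 5000)      # live data, same distribution
shifted = rng.normal(0.8, 1.0, 5000)     # live data after a drift event

print(population_stability_index(reference, stable))   # small value
print(population_stability_index(reference, shifted))  # large value
```

In practice a check like this would run per feature on a schedule, with the PSI value exported to the same dashboard that already tracks memory, latency, and uptime, consistent with the incremental approach described above.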