Validating an ML model with train/test accuracy metrics offers an initial sense of viability, but producing inferences that consistently serve business goals requires understanding how the deployed model behaves on different kinds of data and how it will respond to soft data drift. In this talk, I will walk through different explainability methods, how to employ them, and how the choice of model type affects interpretability in production inferencing.