This document discusses approaches for adding trust, transparency, and accountability to AI models deployed with KFServing. It proposes integrating open-source toolkits for explainability, fairness, and adversarial robustness, such as AI Explainability 360 (AIX360), AI Fairness 360 (AIF360), and the Adversarial Robustness Toolbox (ART), to analyze model payloads and produce explanations. These tools would compute metrics over logged predictions to detect bias or anomalous inputs. Designs are presented for capturing inference request and response events from KFServing in message brokers such as Kafka for offline processing, making it possible to audit a model's behavior over time and verify that it continues to perform in a trustworthy way.
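
To make the offline metric computation concrete, the sketch below consumes logged prediction events from a Kafka topic and computes AIF360's disparate impact metric over them. It is only an illustration of the pattern, not part of the design itself: the topic name, broker address, and payload schema (a binary `gender` protected attribute and a binary `prediction` field) are assumptions made for this example.

```python
# Minimal sketch: offline bias check over logged KFServing predictions.
# Assumed setup (hypothetical): prediction events were logged to a Kafka
# topic "kfserving-payloads" as JSON records containing a "gender"
# protected attribute (0/1) and a binary "prediction" field.
import json

import pandas as pd
from kafka import KafkaConsumer  # pip install kafka-python
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Drain a batch of logged prediction events from the broker.
consumer = KafkaConsumer(
    "kfserving-payloads",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",   # hypothetical broker address
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,             # stop once the backlog is drained
)
records = [json.loads(msg.value) for msg in consumer]

# Flatten the logged payloads into a frame of the fields we audit.
df = pd.DataFrame(records)[["gender", "prediction"]]

# Wrap the logged predictions in an AIF360 dataset so standard group
# fairness metrics can be computed over them.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["prediction"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates between the
# unprivileged and privileged groups; values far below 1.0 flag
# potential bias worth auditing further.
print("Disparate impact:", metric.disparate_impact())
```

Run periodically (for example, as a scheduled batch job reading from the broker), a check like this turns the logged payload stream into a time series of fairness metrics that can be monitored for drift.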