The document presents research on out-of-distribution (OOD) detection for AI models. It discusses how most AI models are overconfident and lack robustness when faced with out-of-distribution data. Several discriminative and generative OOD-detection approaches from the literature are surveyed, including likelihood ratios, softmax scores, and conformal prediction. Experiments on a document classification task find that a likelihood-based approach performs best at distinguishing in-distribution documents from out-of-distribution inputs such as images, achieving a high true positive rate and a low false positive rate. Future work toward industrializing the model is also discussed.
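To illustrate the simplest of the surveyed families, the sketch below scores inputs by their maximum softmax probability: in-distribution inputs tend to receive confident (peaked) predictions, while OOD inputs tend to produce flatter ones. This is a minimal, self-contained sketch, not the document's method; the `threshold` value and the toy logits are assumptions for illustration, and in practice the threshold would be tuned on held-out data to trade off true- and false-positive rates.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_softmax_score(logits):
    # Confidence score: probability of the most likely class.
    # In-distribution inputs typically score higher than OOD inputs.
    return softmax(logits).max(axis=-1)

def is_in_distribution(logits, threshold=0.9):
    # Flag inputs whose confidence falls below the threshold as OOD.
    # threshold=0.9 is a hypothetical value chosen for illustration.
    return max_softmax_score(logits) >= threshold

# Toy usage: one confidently classified input vs. one near-uniform input.
logits = np.array([[6.0, 0.5, 0.2],    # peaked: likely in-distribution
                   [0.4, 0.5, 0.6]])   # flat: likely out-of-distribution
print(max_softmax_score(logits))       # approx. [0.99, 0.37]
print(is_in_distribution(logits))      # [ True, False]
```

A likelihood-based detector, like the one the document reports as best-performing, follows the same thresholding pattern but replaces the softmax confidence with a density estimate (or likelihood ratio) of the input under a generative model.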