Dictionary-assisted supervised contrastive learning (DASCL) leverages specialized dictionaries when fine-tuning pretrained language models. It combines the standard cross-entropy loss with a supervised contrastive learning objective to improve classification performance, particularly in few-shot settings. Evaluations on tasks such as sentiment analysis and abuse detection found that DASCL outperforms fine-tuning with cross-entropy loss alone and with supervised contrastive learning that does not use dictionaries. Separately, interpretability techniques such as contrastive explanations can shed light on why a model makes a particular prediction by contrasting it with alternative labels.
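To make the loss combination concrete, the sketch below pairs a cross-entropy term with a supervised contrastive term in PyTorch. It is an illustrative assumption rather than the paper's implementation: it omits the dictionary component entirely, and the function names (`supervised_contrastive_loss`, `dascl_style_loss`), the weighting parameter `lam`, and the temperature value are hypothetical choices.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive term: pulls together embeddings that share a
    label and pushes apart embeddings with different labels.
    (Temperature value is an assumed default, not from the DASCL paper.)"""
    z = F.normalize(embeddings, dim=1)                  # unit-normalize embeddings
    sim = torch.matmul(z, z.T) / temperature             # pairwise scaled cosine similarities
    # Exclude self-similarity on the diagonal with a large negative value.
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positive pairs share a label (the anchor itself is excluded).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)         # avoid division by zero
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()

def dascl_style_loss(logits, embeddings, labels, lam=0.5):
    """Weighted combination of cross-entropy and supervised contrastive loss.
    `lam` (assumed hyperparameter) trades off the two objectives."""
    ce = F.cross_entropy(logits, labels)
    scl = supervised_contrastive_loss(embeddings, labels)
    return (1 - lam) * ce + lam * scl
```

In practice the `embeddings` would come from the encoder's pooled representation of each input and the `logits` from the classification head; both losses are computed over the same mini-batch and backpropagated jointly.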