This document summarizes a paper that proposes InterD, a recommender system model designed to perform well on both biased and debiased test sets. InterD distills knowledge from two pre-trained teacher models, one biased and one unbiased, into a single student model. For each user-item pair, the student's training target is an interpolation of the two teachers' predictions, with weights that adapt to the evaluation environment (biased vs. debiased). InterD also incorporates unobserved user-item pairs during training to further improve performance in both settings. The experiments section evaluates InterD on benchmark datasets and demonstrates that it achieves strong performance on both biased and debiased recommendation.