TCAV (Testing with Concept Activation Vectors) is a method for interpreting machine learning models by quantitatively measuring how important user-chosen concepts are to a model's predictions, even when those concepts were not part of the model's training data or input features. It works by learning concept activation vectors (CAVs) that represent each concept in a layer's activation space, then using directional derivatives along a CAV to measure the model's sensitivity to that concept. TCAV has been shown to recover known ground truths in sanity-check experiments, to uncover geographical biases in widely used models, and to compare the concepts a diabetic retinopathy model relies on against those used by domain experts, helping ensure that a model's knowledge and values are properly aligned with human expectations.
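
To make the procedure concrete, below is a minimal sketch of the TCAV pipeline in Python. It uses synthetic activations and gradients in place of a real network, and the names `concept_acts`, `random_acts`, and `grads` are illustrative assumptions, not part of any official TCAV library API.

```python
# Minimal TCAV sketch with synthetic data standing in for a real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for a layer's activations on concept examples (e.g. "striped")
# and on random counterexamples, each of shape (n_examples, layer_dim).
layer_dim = 64
concept_acts = rng.normal(loc=0.5, size=(100, layer_dim))
random_acts = rng.normal(loc=0.0, size=(100, layer_dim))

# 1. Learn the CAV: train a linear classifier to separate concept
#    activations from random ones; the CAV is the unit vector normal
#    to the decision boundary, pointing toward the concept class.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. Conceptual sensitivity: for each input of the target class, take
#    the gradient of the class logit w.r.t. the layer activations
#    (synthetic here) and project it onto the CAV, giving the
#    directional derivative along the concept direction.
grads = rng.normal(size=(200, layer_dim))  # placeholder for real gradients
directional_derivs = grads @ cav

# 3. TCAV score: the fraction of class inputs whose prediction is
#    positively sensitive to the concept direction.
tcav_score = float(np.mean(directional_derivs > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

In practice, CAVs are retrained against several different random counterexample sets, and a statistical test over the resulting TCAV scores guards against spurious concept directions; the sketch above omits that step for brevity.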