This document presents methods for detecting emergent intersectional biases in contextualized word embeddings. It introduces the Contextualized Embedding Association Test (CEAT), which generates contextualized embeddings from a language model and computes bias effect sizes based on the Word Embedding Association Test (WEAT). CEAT was found to accurately detect emergent intersectional biases, i.e., biases associated with members of intersectional groups that do not overlap with the biases of their constituent minority identities. Evaluation showed that contextualized embeddings contain widely shared biases around gender, race, social groups, and intersections of these attributes. Moreover, the magnitude of bias generally decreased as models became more contextualized.
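
Since CEAT's effect sizes are based on WEAT, the core computation can be illustrated with a minimal sketch of the WEAT effect size: the standardized difference between how strongly two target sets (X, Y) associate with two attribute sets (A, B) in embedding space. The function names and toy vectors below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean cosine similarity of w to attribute set A
    minus mean cosine similarity of w to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d: difference between the mean associations of
    target sets X and Y, standardized by the pooled std of associations."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Toy example: X aligns with attribute A, Y aligns with attribute B,
# so the effect size is at its maximum magnitude of 2.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([1.0, 0.1])]
Y = [np.array([0.1, 1.0])]
print(weat_effect_size(X, Y, A, B))
```

In CEAT, an effect size like this would be computed over contextualized embeddings of the target and attribute words drawn from a language model in many different sentence contexts, rather than over a single static vector per word.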