The document discusses using machine learning to improve the sensitivity of A/B testing. It first reviews how A/B testing works: users are randomly split into two groups, A and B, each exposed to a different variant of the service; a key metric is computed for each user, and the two groups are then compared statistically to decide whether one variant outperforms the other.

It then turns to the central challenge: with limited user traffic, small treatment effects are hard to detect. Two machine-learning approaches are proposed to raise sensitivity: learning combinations of existing A/B metrics that are more sensitive than any single metric, and predicting metric values to reduce variance. Reported results show that these approaches can detect effects using less user data than traditional metrics require.
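As a concrete illustration of the basic comparison step, the sketch below runs a two-sample test on simulated per-user metric values. The specific metric, effect size, and choice of Welch's t-test are assumptions for illustration, not details taken from the document.

```python
# Minimal sketch of the basic A/B comparison: a per-user metric is
# computed for each group, then the group means are compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-user metric values for the two randomly split groups
# (the +2% treatment effect is a made-up example).
metric_a = rng.normal(loc=1.00, scale=0.5, size=10_000)  # control
metric_b = rng.normal(loc=1.02, scale=0.5, size=10_000)  # treatment

# Welch's two-sample t-test: is the difference in means significant?
t_stat, p_value = stats.ttest_ind(metric_b, metric_a, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```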
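The first proposed approach learns a combination of existing metrics that reacts to treatment effects more strongly than any individual metric. Below is a minimal sketch of one such scheme, assuming a linear combination weighted in the Fisher-discriminant direction w ∝ Σ⁻¹δ, which maximizes the squared t-statistic of the combined metric for a single experiment; the document's actual learning procedure may differ.

```python
# Hedged sketch: learn a linear combination of per-user metrics that
# maximizes the standardized A/B difference (a Fisher-discriminant-style
# weighting). One plausible instance, not the document's exact method.
import numpy as np
from scipy import stats

def sensitive_weights(x_a: np.ndarray, x_b: np.ndarray) -> np.ndarray:
    """x_a, x_b: (users, metrics) matrices of per-user metric values."""
    delta = x_b.mean(axis=0) - x_a.mean(axis=0)      # mean metric shift
    pooled_cov = (np.cov(x_a, rowvar=False) + np.cov(x_b, rowvar=False)) / 2
    # w ∝ Σ⁻¹ δ maximizes the squared t-statistic of the combined metric.
    w = np.linalg.solve(pooled_cov, delta)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
x_a = rng.normal(0.0, 1.0, size=(5_000, 3))              # control metrics
x_b = rng.normal([0.01, 0.02, 0.0], 1.0, size=(5_000, 3))  # treated metrics

w = sensitive_weights(x_a, x_b)
t_comb, _ = stats.ttest_ind(x_b @ w, x_a @ w, equal_var=False)
print("weights:", np.round(w, 3), " combined-metric t:", round(t_comb, 2))
```

In practice such weights would be learned on historical experiments and evaluated on held-out ones, so that the combination does not simply overfit the experiment it is tested on.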
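The second approach reduces metric variance using predictions. One common realization of this idea, assumed here rather than taken from the document, is CUPED-style regression adjustment: subtract from each user's metric the part explained by a pre-experiment covariate (for example, a prediction of the metric from past behaviour), which preserves the group means while shrinking variance.

```python
# Hedged sketch of variance reduction via a pre-experiment covariate,
# in the spirit of CUPED-style regression adjustment; the document's
# exact estimator may differ.
import numpy as np

def cuped_adjust(metric: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Remove the component of the metric explained by a pre-experiment
    prediction. The adjusted metric keeps the same mean but has lower
    variance when the prediction correlates with the metric."""
    theta = np.cov(metric, predicted)[0, 1] / np.var(predicted, ddof=1)
    return metric - theta * (predicted - predicted.mean())

rng = np.random.default_rng(2)
baseline = rng.normal(1.0, 0.5, size=10_000)             # past behaviour
metric = baseline + rng.normal(0.02, 0.3, size=10_000)   # in-experiment metric

adjusted = cuped_adjust(metric, predicted=baseline)
print("variance before:", round(metric.var(), 4),
      " after:", round(adjusted.var(), 4))
```

Because the adjusted metric has lower variance, the same effect size yields a larger test statistic, which is exactly the sensitivity gain the document describes: smaller effects become detectable with less user data.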