Sara Vafi and Shana Rusonis presented on statistical models for analyzing A/B test results. They discussed the differences between Bayesian and frequentist statistics, and between average error control and all error control. Bayesian A/B testing models results as probability distributions and provides average error control (errors are controlled on average across experiments), while frequentist A/B testing aims for all error control by bounding the false positive rate of each individual experiment. Small improvements are the hardest to detect, and realistic A/B tests with small effects may require more data than average error control alone can reliably handle. Optimizely's Stats Engine takes a blended Bayesian-frequentist approach to deliver results that are both accurate and timely.
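To make the contrast concrete, here is a minimal sketch of the two analyses applied to the same hypothetical experiment (the counts below are invented for illustration, not from the talk). The Bayesian side models each variant's conversion rate as a Beta posterior and reports the probability that B beats A; the frequentist side runs a two-proportion z-test and reports a p-value, which is what a fixed false-positive-rate guarantee is built on.

```python
import math

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical conversion data for two variants (assumed values, for illustration).
visitors_a, conversions_a = 1000, 100  # 10.0% observed conversion rate
visitors_b, conversions_b = 1000, 120  # 12.0% observed conversion rate

# --- Bayesian view: model each rate as a distribution ---
# With a uniform Beta(1, 1) prior, the posterior for a conversion rate is
# Beta(1 + conversions, 1 + non-conversions). Sample both posteriors and
# estimate P(rate_B > rate_A) by Monte Carlo.
post_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, size=100_000)
post_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, size=100_000)
prob_b_beats_a = float((post_b > post_a).mean())

# --- Frequentist view: control the false positive rate via a p-value ---
# Two-proportion z-test with a pooled standard error.
rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"P(B > A) = {prob_b_beats_a:.3f}")
print(f"two-sided p-value = {p_value:.3f}")
```

Note how the two summaries can point in different directions on the same data: the posterior probability that B wins is high, yet the p-value does not clear a conventional 0.05 threshold, illustrating why small improvements are the hardest to detect under strict per-experiment error control.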