This document contains a 12-point checklist for testing AI systems for bias. It asks questions covering:

- understanding biases in the data
- how data is split for training and testing
- biases in human labels or training data
- biases in feature selection and normalization
- how success is measured and how that relates to end users
- model stability over time
- biases introduced by data acquisition and labeling
- selecting unbiased algorithms
- testing for minority users
- ensuring experiments don't reflect the team's own biases
- checking models for overall goodness without success bias

The checklist aims to identify potential sources of bias at each stage of AI development and evaluation.
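One of the checklist's questions, how data is split for training and testing, can be checked mechanically. Below is a minimal sketch (not from the checklist itself; the function names and the example group labels are hypothetical) that compares the share of each demographic group in the train and test splits and reports the largest gap, which a biased split would inflate:

```python
from collections import Counter

def group_proportions(groups):
    # Fraction of each group among the given per-row group labels.
    counts = Counter(groups)
    total = len(groups)
    return {g: counts[g] / total for g in counts}

def split_skew(train_groups, test_groups):
    # Largest absolute difference in any group's share
    # between the train split and the test split.
    train_p = group_proportions(train_groups)
    test_p = group_proportions(test_groups)
    all_groups = set(train_p) | set(test_p)
    return max(abs(train_p.get(g, 0.0) - test_p.get(g, 0.0))
               for g in all_groups)

# Hypothetical example: group "b" is 20% of train but 50% of test,
# so the split under-represents it during training.
train = ["a"] * 80 + ["b"] * 20
test = ["a"] * 50 + ["b"] * 50
print(round(split_skew(train, test), 6))  # 0.3
```

A skew near zero suggests the split preserves group proportions; a large value is a prompt to re-split (for example with stratified sampling) before trusting per-group evaluation numbers.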