This document presents error bounds, based on graph cut size, for transductive learning algorithms. It defines the cut size of a vertex labeling as the number of edges connecting differently labeled vertices; many transductive learning algorithms aim to minimize this quantity. The document derives PAC-style error bounds relating the cut size of a labeling to its expected test error, and it improves prior bounds by relating cut size to graph-theoretic quantities such as the minimum k-cut size. The tightest bounds show that, with high probability, labelings with smaller relative cut size have lower test error.
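The cut-size quantity the bounds are stated in terms of is straightforward to compute. The sketch below is illustrative only; the edge-list representation and the example graph are assumptions, not taken from the source.

```python
# Hypothetical sketch: cut size of a vertex labeling, i.e. the number
# of edges whose two endpoints carry different labels.

def cut_size(edges, labels):
    """Count edges (u, v) with labels[u] != labels[v]."""
    return sum(1 for u, v in edges if labels[u] != labels[v])

# Example: a 4-vertex path graph 0-1-2-3 labeled (+1, +1, -1, -1).
edges = [(0, 1), (1, 2), (2, 3)]
labels = {0: 1, 1: 1, 2: -1, 3: -1}
print(cut_size(edges, labels))  # only edge (1, 2) crosses the cut, so 1
```

A labeling that agrees on densely connected regions of the graph yields a small cut size, which is exactly what the bounds reward.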