This document describes Pegasos, a stochastic subgradient descent method for efficiently training linear support vector machines (SVMs) on large datasets. Pegasos improves on traditional gradient descent methods by using a more aggressive, decaying step size of the form 1/(λt), which gives fast convergence to approximate solutions that nonetheless generalize well to new examples. Its key ingredients are: subgradients estimated from mini-batches of training examples, a projection of the weight vector after each update onto a ball of radius 1/√λ (so the iterates stay in a bounded region), and, as a result, convergence to accurate solutions far more quickly than traditional SVM solvers at comparable test error rates. Experiments on a large text dataset demonstrate that Pegasos reaches accurate solutions orders of magnitude faster than conventional solvers such as SVM Light.
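The update just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: on each iteration it samples a mini-batch, takes a subgradient step on the regularized hinge loss with step size 1/(λt), and projects the weights back onto the ball of radius 1/√λ. All parameter names (`lam`, `batch_size`, etc.) are illustrative choices.

```python
import numpy as np

def pegasos(X, y, lam=0.1, iterations=2000, batch_size=1, seed=0):
    """Mini-batch Pegasos for a linear SVM (hinge loss, no bias term).

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    lam: the regularization parameter lambda.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    radius = 1.0 / np.sqrt(lam)
    for t in range(1, iterations + 1):
        eta = 1.0 / (lam * t)                 # aggressive 1/(lambda*t) step size
        idx = rng.integers(0, n, size=batch_size)
        Xb, yb = X[idx], y[idx]
        violated = yb * (Xb @ w) < 1          # examples inside the margin
        # Subgradient of lambda/2 ||w||^2 + average hinge loss on the batch.
        grad = lam * w - (yb[violated] @ Xb[violated]) / batch_size
        w -= eta * grad
        norm = np.linalg.norm(w)
        if norm > radius:                     # project onto ball of radius 1/sqrt(lambda)
            w *= radius / norm
    return w
```

On a linearly separable toy problem (labels given by the sign of a linear function through the origin), a couple of thousand iterations are typically enough for the learned hyperplane to classify the training set almost perfectly.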