Regularization helps prevent overfitting in machine learning models. It can be viewed as adding extra constraints or penalty terms to the training objective. Common methods include L2 regularization, which adds a penalty proportional to the sum of the squared weights, and L1 regularization, which penalizes the sum of the absolute values of the weights. Both constrain model complexity, but they behave differently: L2 regularization shrinks all weights toward zero proportionally, while L1 regularization induces sparsity by driving small weights to exactly zero.
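As a minimal sketch of how these penalties enter the objective, the Python/NumPy snippet below adds L1 and L2 terms to a mean-squared-error loss; the function name `penalized_loss` and the coefficients `l1` and `l2` are illustrative placeholders, not taken from any particular library.

```python
import numpy as np

def penalized_loss(w, X, y, l1=0.0, l2=0.0):
    """Mean-squared-error loss with optional L1 and L2 penalty terms."""
    residual = X @ w - y
    mse = np.mean(residual ** 2)          # data-fit term
    l2_penalty = l2 * np.sum(w ** 2)      # sum of squared weights
    l1_penalty = l1 * np.sum(np.abs(w))   # sum of absolute weights
    return mse + l2_penalty + l1_penalty

# Illustrative usage on random data: larger l1/l2 values penalize
# large weights more heavily, pushing the optimum toward smaller
# (L2) or sparser (L1) weight vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=100)
w = rng.normal(size=5)
print(penalized_loss(w, X, y, l1=0.01, l2=0.1))
```

In practice these penalties are rarely coded by hand; they are typically exposed as options of the training routine (for example, ridge and lasso variants of linear regression), but the objective being minimized has the same structure as the sketch above.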