Gradient boosting is an ensemble technique that combines many weak learners, typically shallow decision trees, into a single strong learner. At each iteration it fits a new model to the negative gradient of the loss function evaluated at the current ensemble's predictions; for squared-error loss this negative gradient is exactly the residual left over from the previous iteration. Because each stage corrects the errors of the ensemble built so far, boosting primarily reduces bias, in contrast to bagging, which primarily reduces variance by averaging independently trained models. Overfitting is kept in check by restricting each stage to a weak learner and by regularization such as shrinkage (a small learning rate) and limiting the number of boosting iterations.
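
To make the residual-fitting loop concrete, here is a minimal sketch of gradient boosting for regression under squared-error loss, where the negative gradient coincides with the residual. The function names, hyperparameter values, and the choice of scikit-learn's DecisionTreeRegressor as the weak learner are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_estimators=100, learning_rate=0.1, max_depth=2):
    """Gradient boosting for regression with squared-error loss.

    With L(y, F) = (y - F)^2 / 2, the negative gradient at the current
    prediction F is simply the residual y - F, so each new tree is fit
    to the residuals of the ensemble built so far.
    """
    # Initialize with a constant prediction (the mean minimizes squared error).
    f0 = np.mean(y)
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        residuals = y - pred                                # negative gradient of squared-error loss
        tree = DecisionTreeRegressor(max_depth=max_depth)   # weak learner: a shallow tree
        tree.fit(X, residuals)
        pred += learning_rate * tree.predict(X)             # shrinkage regularizes each update
        trees.append(tree)
    return f0, trees

def predict_gradient_boosting(f0, trees, X, learning_rate=0.1):
    """Sum the initial constant and the shrunken contributions of each tree."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

The learning rate and the number of estimators trade off against each other: a smaller learning rate usually needs more trees but tends to generalize better, which is one way the regularization mentioned above shows up in practice.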