Decision trees are a supervised learning technique used for both classification and regression. They work by recursively splitting a dataset into increasingly pure subsets, choosing at each node the split that most reduces an impurity measure; the goal is to end up with leaves whose members all belong to a single class. Common splitting criteria include information gain (based on entropy) and the Gini index. Because a single deep tree tends to overfit the training data, ensemble techniques such as bagging and random forests combine many decision trees to reduce variance.
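As a minimal sketch of the splitting step, the snippet below computes Gini impurity and searches for the threshold on a single numeric feature that minimizes the weighted impurity of the two child subsets. The function names (`gini`, `best_split`) and the toy data are illustrative, not part of any library:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions.
    0.0 means the subset is pure (a single class)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Try each candidate threshold on one numeric feature and return
    (threshold, impurity) for the split with the lowest weighted
    impurity of the resulting child subsets."""
    n = len(labels)
    best = (None, gini(labels))  # start from the unsplit impurity
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
        if weighted < best[1]:
            best = (t, weighted)
    return best

# Toy data: splitting at x <= 2 separates the classes perfectly.
xs = [1, 2, 3, 4]
ys = ["a", "a", "b", "b"]
print(gini(ys))            # 0.5 for a 50/50 two-class subset
print(best_split(xs, ys))  # (2, 0.0): both children are pure
```

A full tree builder would apply `best_split` recursively to each child subset until the leaves are pure or some stopping criterion (e.g. maximum depth) is reached.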