This document summarizes research on making support vector machines (SVMs) robust to adversarial label noise. It discusses how an adversary can deliberately flip labels in the training data to degrade an SVM's performance. The researchers propose a label-noise-robust SVM that is trained on an expected kernel matrix, making it less sensitive to label flips. Experiments on several datasets show that this approach maintains higher accuracy than a standard SVM when the training data contains adversarial or random label noise. In their conclusions, the authors discuss further investigating the properties of the kernel correction and how its parameters should be selected.
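The expected-kernel idea can be sketched as follows. This is a minimal illustration, not the authors' exact method: it assumes labels are flipped independently with probability `mu`, so that the expectation of the label product `y_i * y_j` scales off-diagonal kernel-matrix entries by `(1 - 2*mu)**2` while leaving the diagonal unchanged (since `y_i**2 == 1`). The flip rate of 15% and the use of scikit-learn's precomputed-kernel SVC are illustrative choices.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def expected_kernel(K, mu):
    """Scale off-diagonal entries by (1 - 2*mu)**2, the expected value of
    y_i * y_j under independent label flips with probability mu.
    Diagonal entries are unchanged because y_i**2 == 1 regardless of flips."""
    s = (1.0 - 2.0 * mu) ** 2
    K_exp = s * K
    np.fill_diagonal(K_exp, np.diag(K))
    return K_exp

# Toy linearly separable data (hypothetical example, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

# Simulate label noise: flip 15% of the training labels.
noisy = y.copy()
flipped = rng.choice(80, size=12, replace=False)
noisy[flipped] *= -1

K = rbf_kernel(X)
K_exp = expected_kernel(K, mu=0.15)

# Train on the corrected (expected) kernel with the noisy labels,
# then evaluate against the clean labels.
clf = SVC(kernel="precomputed").fit(K_exp, noisy)
acc = (clf.predict(K_exp) == y).mean()
```

Because the correction shrinks off-diagonal entries relative to the diagonal, it acts like an extra diagonal regularizer on the kernel matrix, which is one intuition for why the resulting SVM is less sensitive to individual flipped labels.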