This document discusses adversarial classification under differential privacy. It proposes a new defense against adversarial attacks on differentially private data that aims to provide utility, privacy, and security simultaneously.
The defense frames the problem as a sequential game between a defender and an attacker: the defender first designs a classifier that minimizes classification errors subject to differential-privacy constraints, and the attacker then responds by poisoning the data to maximize damage.
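To make the sequential game concrete, here is a minimal illustrative sketch (not the document's actual formulation): the defender commits to a detection threshold on Laplace-noised residuals, anticipating that the attacker will then inject the bias that maximizes undetected damage. All parameter values, the threshold-based detector, and the utility functions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def dp_noise(scale, size):
    # Laplace noise, the standard mechanism for epsilon-differential privacy
    return rng.laplace(0.0, scale, size)


def rates(threshold, attack, epsilon=1.0, sensitivity=1.0, n=20000):
    """Defender flags a reading as attacked if |noisy residual| > threshold.

    Returns (false_alarm_rate, missed_damage); the defender wants both low.
    """
    scale = sensitivity / epsilon
    benign = dp_noise(scale, n)             # residuals with no attack
    attacked = attack + dp_noise(scale, n)  # attacker shifts readings by `attack`
    false_alarms = np.mean(np.abs(benign) > threshold)
    undetected = np.mean(np.abs(attacked) <= threshold)
    missed_damage = attack * undetected     # damage counts only while undetected
    return false_alarms, missed_damage


# Attacker's best response: the bias that maximizes undetected damage
# against a fixed threshold.
attacks = np.linspace(0.0, 6.0, 13)


def worst_case_damage(threshold):
    return max(rates(threshold, a)[1] for a in attacks)


# Defender moves first, anticipating that best response: pick the threshold
# minimizing worst-case damage plus a penalty on false alarms.
thresholds = np.linspace(0.5, 6.0, 12)
best_t = min(thresholds,
             key=lambda t: worst_case_damage(t) + 10.0 * rates(t, 0.0)[0])
```

Because the defender optimizes against the attacker's best response rather than a fixed attack, the resulting threshold carries a worst-case guarantee over the whole attack grid, which is the flavor of certification the text describes.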
Because the defender optimizes against the attacker's best response, the proposed defense can certify performance guarantees that hold regardless of the attacker's strategy, improving on classical defenses whose guarantees degrade once the attacker adapts. Examples applying the defense to traffic flow estimation and electricity consumption data are also discussed.