This document summarizes research on evasion attacks against machine learning systems at test time. The researchers propose a framework for evaluating the security of machine learning algorithms against evasion attacks: they model the adversary's goal, knowledge, and capabilities, and cast the resulting attack strategy as an optimization problem in which the attacker minimizes the classifier's discriminant function g(x) over candidate attack samples, subject to a bound d_max on how far an attack sample may deviate from the original malicious one (this bound models the adversary's limited capability to manipulate the data). Using this framework, they evaluate gradient-descent evasion attacks against practical systems such as spam filters and malware detectors, and show that these classifiers can be evaded even when the adversary has only limited knowledge of the target, for example by approximating it with a surrogate classifier trained on surrogate data. The researchers also explore adding a density-based "mimicry" component to the attack objective, which steers attack points toward regions densely populated by legitimate samples and improves evasion effectiveness when plain gradient descent stalls in flat regions of g(x) (see the sketch below).
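
As a concrete illustration of the attack strategy described above, here is a minimal Python sketch of gradient-descent evasion with a kernel-density mimicry term, assuming a differentiable discriminant function g(x) and an l2 capability bound. The function `evade` and all parameter names and default values (`lam`, `gamma`, `step`, `d_max`) are illustrative assumptions, not an implementation from the paper.

```python
import numpy as np

def evade(grad_g, x0, legit_samples, d_max=2.0, lam=0.5,
          gamma=0.1, step=0.05, n_iter=200):
    """Sketch of a gradient-descent evasion attack with a mimicry term.

    Minimizes  g(x) - (lam / n) * sum_i exp(-gamma * ||x - x_i||^2)
    over attack points x, subject to ||x - x0||_2 <= d_max, where the
    x_i are legitimate samples. All parameter names and default values
    here are illustrative assumptions, not taken from the paper.
    """
    x = x0.astype(float)
    X = np.asarray(legit_samples, dtype=float)  # legitimate samples, shape (n, d)
    n = len(X)
    for _ in range(n_iter):
        grad = grad_g(x)                        # gradient of the discriminant g
        # Gradient of the KDE-based mimicry term: pulls x toward regions
        # densely populated by legitimate samples.
        diffs = x - X                           # shape (n, d)
        k = np.exp(-gamma * np.sum(diffs**2, axis=1))
        grad = grad - (lam / n) * np.sum(-2.0 * gamma * diffs * k[:, None], axis=0)
        # Descent step, then project back into the feasible ball
        # ||x - x0|| <= d_max that models the adversary's capability bound.
        x = x - step * grad
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:
            x = x0 + delta * (d_max / norm)
    return x

# Toy usage with a linear discriminant g(x) = w.x + b (positive = malicious).
w, b = np.array([1.0, -2.0]), 0.5
x0 = np.array([2.0, 0.0])
x_adv = evade(lambda x: w, x0, legit_samples=np.random.randn(20, 2) - 1.0)
print(w @ x0 + b, "->", w @ x_adv + b)  # discriminant value should decrease
```

The projection step enforces the capability bound d_max, while the mimicry term supplies a useful descent direction even where the discriminant's own gradient is small, which is the failure mode it is meant to address.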