This presentation discusses robust filtering schemes for defending machine learning systems against adversarial attacks. It outlines three defenses: input filtering, output filtering, and an end-to-end protection scheme. The input filtering scheme uses a genetic algorithm to search for an optimal sequence of filters that exposes adversarial examples. The output filtering scheme formulates the detection of adversarial inputs as an outlier detection problem over the model's outputs. The end-to-end scheme integrates adversarial detection, filtering, and classification components into a unified protection framework. Experimental results show that the proposed approaches effectively detect a variety of adversarial attack types while maintaining high classification accuracy.
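
As a rough illustration of the input filtering idea, the sketch below evolves a fixed-length sequence of image filters with a simple genetic algorithm. The filter pool (`FILTER_POOL`), the fitness function, and the truncation-selection scheme are all assumptions made for illustration; the presentation does not specify the actual filters, sequence encoding, or GA operators used.

```python
import random

# Hypothetical filter pool: the presentation does not list the actual
# filters, so these labels are placeholders for illustration only.
FILTER_POOL = ["median_blur", "gaussian_blur", "bit_depth_reduce", "jpeg_compress"]

def fitness(seq, detect_rate):
    """Score a candidate: detection rate minus a small length penalty."""
    return detect_rate(seq) - 0.01 * len(seq)

def crossover(a, b):
    """One-point crossover of two equal-length filter sequences."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.1):
    """Replace each filter with a random one at the given mutation rate."""
    return [random.choice(FILTER_POOL) if random.random() < rate else f
            for f in seq]

def evolve(detect_rate, pop_size=20, seq_len=3, generations=50):
    """Evolve filter sequences toward high adversarial-detection rates."""
    pop = [[random.choice(FILTER_POOL) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, detect_rate), reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda s: fitness(s, detect_rate))

# Toy objective: pretend sequences with more median blurs detect more attacks.
best = evolve(lambda seq: seq.count("median_blur") / len(seq))
print(best)
```

In a real deployment, `detect_rate` would measure how often applying the filter sequence to a held-out set of adversarial and clean inputs separates the two classes; the toy lambda above only demonstrates that the search converges on sequences the objective rewards.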
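
The output filtering scheme can likewise be sketched as a standard outlier detection pipeline: fit a detector on the classifier's outputs for known-clean inputs, then flag inputs whose outputs fall outside that distribution. `IsolationForest`, the 5% contamination rate, and the synthetic softmax vectors below are all stand-in choices; the presentation only states that detection is framed as an outlier detection problem.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder "clean" softmax outputs; in practice these would come from
# running the protected classifier on trusted, non-adversarial inputs.
rng = np.random.default_rng(0)
clean_outputs = rng.dirichlet(np.ones(10) * 5, size=500)

# IsolationForest is one conventional outlier detector; the contamination
# rate is an assumed tuning parameter, not a value from the presentation.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(clean_outputs)

def is_adversarial(softmax_vector):
    """Flag an input whose output vector the detector scores as an outlier (-1)."""
    return detector.predict(softmax_vector.reshape(1, -1))[0] == -1

print(is_adversarial(clean_outputs[0]))  # expected: usually False on clean data
```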