This document discusses research into generating adversarial examples that attack the vision system of the iCub humanoid robot. The researchers crafted perturbed images that the robot misclassified despite being visually indistinguishable from the originals. They developed gradient-based optimization attacks in two flavors: error-specific (targeted) attacks that force a chosen wrong class, and error-generic (untargeted) attacks that induce any misclassification. A potential countermeasure is to reject inputs that fall in the "blind spots" far from the training data. However, this defense is undermined by the instability of deep learning features: small pixel-level changes can map to large displacements in the deep feature space. Future work aims to address this instability issue.
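To make the attack concrete, here is a minimal sketch of a gradient-based perturbation loop in the PGD style. It is not the paper's exact optimization over the iCub pipeline (which attacks a deep feature extractor coupled with an SVM); `model`, the labels, and the budget `eps` are illustrative assumptions. The `eps` constraint keeps the perturbed image visually indistinguishable from the original.

```python
# Illustrative PGD-style sketch of a gradient-based adversarial attack.
# Assumptions: `model` maps a [C, H, W] tensor in [0, 1] to class logits;
# eps/step/iters are hypothetical values, not the paper's settings.
import torch
import torch.nn.functional as F

def perturb(model, image, true_label, target_label=None,
            eps=4 / 255, step=1 / 255, iters=50):
    """Craft a small perturbation that changes the model's prediction.

    If target_label is given, push the input toward that class
    (error-specific attack); otherwise push it away from the true
    class (error-generic attack).
    """
    x_adv = image.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))
        if target_label is not None:
            # Targeted: descend the loss of the chosen target class.
            loss = F.cross_entropy(logits, torch.tensor([target_label]))
            direction = -1.0
        else:
            # Untargeted: ascend the loss of the true class.
            loss = F.cross_entropy(logits, torch.tensor([true_label]))
            direction = 1.0
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + direction * step * x_adv.grad.sign()
            # Project back into the eps-ball so the perturbation
            # stays visually imperceptible, then into valid pixels.
            x_adv = image + (x_adv - image).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

The rejection countermeasure can likewise be sketched as a distance-based reject option in deep feature space: inputs whose nearest training example is too far away are treated as lying in a "blind spot" and refused. The nearest-neighbor formulation and `threshold` here are illustrative assumptions, not the paper's specific detector.

```python
# Illustrative sketch of a reject-option classifier in feature space.
# Assumptions: `features` is the deep representation of one input,
# `train_features`/`train_labels` hold the training set, and
# `threshold` is a hypothetical rejection distance.
import torch

def classify_with_reject(features, train_features, train_labels, threshold):
    """Nearest-neighbor classification with a distance-based reject."""
    dists = torch.cdist(features.unsqueeze(0), train_features).squeeze(0)
    nearest = dists.argmin()
    if dists[nearest] > threshold:
        return None  # reject: input falls in a low-density "blind spot"
    return train_labels[nearest].item()
```

Note that the feature instability mentioned above cuts against this defense: if a tiny pixel perturbation can move the deep representation a long way, an attacker may land the adversarial input near legitimate training points and evade the distance check.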