The document discusses the vulnerability of deep learning models used for biomedical image segmentation to adversarial examples, highlighting the security implications for medical applications. A novel attack algorithm, the Adaptive Segmentation Mask Attack (ASMA), is introduced to craft targeted adversarial examples with minimal visible perturbation while achieving high intersection-over-union (IoU) rates between the model's prediction and the attacker's target mask. The findings stress that these vulnerabilities must be understood before deep learning can be safely deployed in clinical tasks.
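
To make the attack idea concrete, the sketch below shows a generic targeted iterative gradient attack on a segmentation model in PyTorch. It is not the authors' ASMA implementation: the function name, the per-pixel loss masking (computing the loss only on pixels that do not yet match the target mask), and the step size alpha are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def targeted_segmentation_attack(model, image, target_mask, steps=100, alpha=1e-3):
    """Hypothetical sketch of a targeted adversarial attack on a
    segmentation model, in the spirit of (but not identical to) ASMA.

    model: maps a (1, 3, H, W) image to (1, C, H, W) class logits.
    image: clean input in [0, 1].
    target_mask: (1, H, W) long tensor of attacker-chosen class labels.
    """
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(adv)                       # (1, C, H, W) class scores
        pred = logits.argmax(dim=1)               # current predicted mask
        mismatch = pred != target_mask            # pixels still differing from target
        if not mismatch.any():
            break                                 # target mask fully achieved
        # Assumed design choice: penalize only the still-mismatched pixels,
        # so already-converted pixels are left (mostly) undisturbed.
        loss = F.cross_entropy(
            logits.permute(0, 2, 3, 1)[mismatch],  # logits at mismatched pixels
            target_mask[mismatch],                 # desired labels there
        )
        loss.backward()
        with torch.no_grad():
            adv -= alpha * adv.grad.sign()        # small step toward the target mask
            adv.clamp_(0, 1)                      # keep a valid image
        adv.grad = None
    return adv.detach()
```

Success would then be reported as the IoU between the adversarial prediction and the target mask, alongside a perturbation norm (e.g. L2 or L-infinity distance to the clean image) to quantify how visible the attack is.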