This document presents a study on adversarial neural pruning (ANP) that focuses on suppressing latent vulnerabilities to improve both the adversarial robustness and the efficiency of neural networks. The authors introduce a notion of vulnerability for latent-feature representations and propose a Bayesian pruning framework that removes vulnerable features to improve robustness. Experimental results show that the proposed ANP with vulnerability suppression (ANP-VS) outperforms various baseline models across multiple benchmark datasets.
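The core idea, scoring latent features by how much adversarial perturbations distort them and then suppressing the most vulnerable ones, can be illustrated with a minimal NumPy sketch. The distortion-based vulnerability score and the hard top-k mask below are illustrative simplifications of my own; the actual ANP-VS method uses a Bayesian pruning framework rather than a fixed keep ratio.

```python
import numpy as np

def vulnerability_scores(clean_feats, adv_feats):
    """Score each latent feature by its mean absolute distortion
    under adversarial perturbation (an illustrative proxy)."""
    return np.mean(np.abs(adv_feats - clean_feats), axis=0)

def prune_vulnerable(features, scores, keep_ratio=0.8):
    """Zero out the most vulnerable features, keeping the
    keep_ratio fraction with the lowest vulnerability."""
    k = int(len(scores) * keep_ratio)
    keep_idx = np.argsort(scores)[:k]  # indices of least-vulnerable features
    mask = np.zeros_like(scores)
    mask[keep_idx] = 1.0
    return features * mask, mask

# Toy example: 32 samples, 10 latent features, where later features
# are perturbed more strongly and thus score as more vulnerable.
rng = np.random.default_rng(0)
clean = rng.normal(size=(32, 10))
adv = clean + rng.normal(scale=np.linspace(0.01, 1.0, 10), size=(32, 10))

scores = vulnerability_scores(clean, adv)
pruned, mask = prune_vulnerable(clean, scores, keep_ratio=0.8)
```

In this sketch the least-distorted features survive and the most-distorted ones are masked to zero, which mirrors the paper's intuition that robustness improves when the network cannot rely on features an attacker can easily manipulate.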