Deep Learning and Artificial Neural Networks achieve remarkable performance in a wide variety of tasks, which is why they are preferred in most Artificial Intelligence applications. However, it has been observed that very small perturbations of the original input can lead this category of algorithms to behave in an unpredictable manner. This raises several scientific questions regarding the security and reliability of the systems in which Deep Neural Networks (DNNs) are deployed, and the concern grows considerably when one considers how critical these systems are: self-driving cars, identification systems, and voice recognition are just a few examples of applications where security is vital. For that reason, the study of methods for attacking these systems through adversarial attacks has intensified, as has the study of methods for building models that are robust against such malicious initiatives. In this Master's Thesis, state-of-the-art attack methods are examined and the adversarial robustness of DNNs of varying complexity is evaluated. In this direction, a new alternative method is proposed, with which it is possible to achieve robustness against a category of attack methods that has not yet been confronted.
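To make the perturbation phenomenon concrete, the sketch below illustrates one canonical attack of this kind, the fast gradient sign method (FGSM); this is a minimal illustrative example only, and the model, input shapes, and `epsilon` value are placeholder assumptions, not the experimental setup of this thesis.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so that the model's loss on the true label y increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # keeping the perturbation bounded by epsilon in the L-infinity norm.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage with a toy classifier on 28x28 grayscale images:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)       # stand-in input image
y = torch.tensor([3])              # its (assumed) true label
x_adv = fgsm_attack(model, x, y)   # visually near-identical, yet often misclassified
```

The adversarial input `x_adv` differs from `x` by at most `epsilon` per pixel, so it is typically indistinguishable to a human observer while still changing the model's prediction; this small-perturbation behavior is precisely the unreliability discussed above.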