This document presents practical strategies for executing black-box attacks against deep neural networks (DNNs), in which the attacker can observe only the classifier's outputs and has no knowledge of its internal architecture. The attack trains a substitute model on a synthetic dataset labeled by querying the target DNN; because adversarial samples tend to transfer between models, samples crafted against the substitute can then cause the target to misclassify. The document also discusses specific crafting techniques, such as the Fast Gradient Sign Method and the Jacobian-based Saliency Map Attack, and addresses potential defense strategies.
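To make the crafting step concrete, the sketch below shows a minimal Fast Gradient Sign Method perturbation applied to a locally trained substitute model. The PyTorch framing, the `fgsm_attack` name, and the assumption that inputs lie in [0, 1] are illustrative choices, not details taken from the document.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    model   -- substitute classifier (torch.nn.Module returning logits)
    x       -- input batch, values assumed to lie in [0, 1]
    y       -- labels assigned to x (e.g., by querying the target DNN)
    epsilon -- maximum per-pixel perturbation magnitude
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient to increase the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input inside the valid input range.
    return x_adv.clamp(0.0, 1.0).detach()
```

In the black-box setting summarized above, `model` would be the substitute trained on synthetically labeled queries, and the resulting `x_adv` would be submitted to the remote classifier in the expectation that the perturbation transfers and induces a misclassification.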