This master's thesis from the Technical University of Munich presents a mathematical analysis of neural networks, surveying what is currently understood about them and the challenges posed by their opacity and behavior, particularly in the context of adversarial examples. It reviews key themes such as approximation theory, stability properties, and methods for identifying neural network parameters, and discusses how deep architectures can help overcome the curse of dimensionality. The work concludes with a detailed examination of scattering networks and their application to convolutional neural networks.