This document summarizes a report on day 1 of deep learning. It covers the key points of how inputs flow from the input layer through hidden layers, activation functions, output layers, gradient descent, and backpropagation. Inputs are transformed by activation functions such as ReLU and softmax. A loss is computed between the network's predicted output and the true output. Backpropagation then applies the chain rule to derive gradients, using the sigmoid derivative, and gradient descent adjusts the weights and biases accordingly. Code examples demonstrate the forward and backward passes.
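A minimal sketch of the forward and backward passes described above, for a hypothetical toy network with one hidden unit. It uses sigmoid activations throughout (rather than ReLU or softmax) so the sigmoid derivative mentioned in the summary appears explicitly; the weights, biases, and learning rate are illustrative values, not taken from the report.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(a):
    # Derivative of sigmoid expressed via its activation: a * (1 - a)
    return a * (1.0 - a)

# Hypothetical toy network: one input, one hidden unit, one output unit.
x, y_true = 0.5, 1.0          # single training example
w1, b1 = 0.4, 0.1             # hidden-layer weight and bias
w2, b2 = 0.3, -0.2            # output-layer weight and bias
lr = 0.5                      # learning rate for gradient descent

losses = []
for step in range(200):
    # Forward pass: input -> hidden (sigmoid) -> output (sigmoid)
    h = sigmoid(w1 * x + b1)
    y_pred = sigmoid(w2 * h + b2)

    # Loss: squared error between predicted and true output
    losses.append(0.5 * (y_pred - y_true) ** 2)

    # Backward pass: chain rule using the sigmoid derivative
    d_out = (y_pred - y_true) * sigmoid_derivative(y_pred)
    d_w2, d_b2 = d_out * h, d_out
    d_hidden = d_out * w2 * sigmoid_derivative(h)
    d_w1, d_b1 = d_hidden * x, d_hidden

    # Gradient-descent update of weights and biases
    w2 -= lr * d_w2; b2 -= lr * d_b2
    w1 -= lr * d_w1; b1 -= lr * d_b1
```

Running the loop drives the loss down at each step, illustrating how the gradients derived by backpropagation steer the weights and biases toward the true output.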