This document discusses speech enhancement with deep neural networks, which learn a regression mapping from noisy speech features to their clean counterparts. To handle a wide range of noise conditions, the networks are trained on a large dataset covering many combinations of speech and noise types. Techniques such as global variance equalization and dropout training are used to improve generalization to unseen conditions. Experimental results show improvements over MMSE-based techniques, including the ability to suppress nonstationary noise and avoid musical-noise artifacts. The introduction provides background on speech enhancement, speech recognition with HMMs and other models, and the role of recent advances in deep learning.
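To make the mapping idea concrete, the following is a minimal sketch of a feedforward regression network that maps noisy log-spectral frames to clean estimates, with inverted dropout on the hidden layer during training. All dimensions, layer sizes, and the dropout rate here are illustrative assumptions, not the paper's actual configuration, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for the sketch: 257-bin log-power spectra, one small hidden layer.
# A real system would stack several larger layers and train on many noisy/clean pairs.
D_IN, D_HID, D_OUT = 257, 128, 257

W1 = rng.normal(0.0, 0.01, (D_IN, D_HID))
b1 = np.zeros(D_HID)
W2 = rng.normal(0.0, 0.01, (D_HID, D_OUT))
b2 = np.zeros(D_OUT)

def enhance(noisy_logspec, train=False, drop_p=0.2):
    """Map a batch of noisy log-spectral frames to clean estimates.

    Inverted dropout is applied to the hidden layer only during training,
    so no rescaling is needed at inference time.
    """
    h = np.maximum(0.0, noisy_logspec @ W1 + b1)  # ReLU hidden activation
    if train:
        mask = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
        h = h * mask  # randomly zero hidden units, rescale survivors
    return h @ W2 + b2  # linear output, suited to regression targets

# One batch of 4 noisy frames (random stand-ins for real features)
noisy = rng.normal(size=(4, D_IN))
clean_est = enhance(noisy)
print(clean_est.shape)  # (4, 257)
```

Training would minimize the mean squared error between `clean_est` and the clean reference features over the large multi-condition dataset; dropout regularizes that fit so the network does not overtrain to the noise types seen in training.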