A 4-hour course given at the Deep learning 2019 summer school.
The topic is how to learn representations for machine learning when data is limited, for instance when the number of samples is small compared to the dimensionality of the problem, or when heavy noise makes learning difficult. This course bridges deep learning and more classic "shallow" learning techniques that work well in limited-data settings, with some theory and some practical recommendations.
1. Representations for machine learning: some learning theory results, some reflections on representations, and some simple models that extract representations.
2. Matrix factorizations: covering the wide spectrum from PCA to word2vec via dictionary learning and metric learning
3. Fisher kernels: building representations from likelihood models (slightly more academic)
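To make the matrix-factorization part concrete, here is a minimal sketch of PCA computed via the SVD: centering the data, factorizing it, and keeping the top components as a low-dimensional representation. The variable names and the choice of k are illustrative, not from the course material.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # 100 samples, 5 features (toy data)

Xc = X - X.mean(axis=0)             # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                               # number of principal components to keep
Z = Xc @ Vt[:k].T                   # low-dimensional representation (scores)
X_hat = Z @ Vt[:k]                  # best rank-k reconstruction of Xc
```

The rows of Vt are the principal directions; truncating the factorization gives the best rank-k approximation in the least-squares sense, which is the common thread running through the factorization methods covered.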
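As a small illustration of the Fisher-kernel idea, the sketch below uses an isotropic Gaussian likelihood model, for which the Fisher score (the gradient of the log-likelihood with respect to the mean) and the Fisher information take a simple closed form. This toy model and the function names are assumptions chosen for clarity, not part of the course material.

```python
import numpy as np

# Likelihood model: x ~ N(mu, I). The Fisher score w.r.t. mu is
#   grad_mu log p(x | mu) = x - mu,
# and the Fisher information matrix is the identity, so the
# Fisher kernel reduces to a plain inner product of scores.

def fisher_score(x, mu):
    """Gradient of the Gaussian log-likelihood w.r.t. the mean."""
    return x - mu

def fisher_kernel(x, y, mu):
    """K(x, y) = score(x)^T F^{-1} score(y), with F = I here."""
    return fisher_score(x, mu) @ fisher_score(y, mu)

mu = np.zeros(3)
x = np.array([1.0, 0.0, 2.0])
y = np.array([0.5, 1.0, 1.0])
k_xy = fisher_kernel(x, y, mu)
```

The point of the construction is that any likelihood model yields a fixed-length representation (the score vector) and hence a kernel, even for data without an obvious vector encoding.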