Deep Learning is for some the Holy Grail and for others the root of all (future) evil. Beyond the hype and buzzwords, it is a highly useful way of solving problems in many areas that were difficult or impossible to tackle in the past. Having originally gained traction mainly in computer vision, it is now gaining momentum in text and language processing as well as in structured-data problems.
We will take a look at the core principles of neural networks, important architectures and some tricks of the trade. We will answer questions such as how neural networks are trained and what makes them deep. After a brief look at current solutions and milestones, we will dive into the implementation of an image classifier using state-of-the-art tools.
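To give a flavor of the "how are neural networks trained" question, here is a toy sketch (not code from the talk): gradient descent on a single sigmoid neuron, using only the standard library. All names (`train_neuron`, `predict`) and the toy task are illustrative assumptions.

```python
import math

def train_neuron(data, epochs=200, lr=0.5):
    """Toy illustration: train one sigmoid neuron with gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w * x + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            grad = p - y                    # dLoss/dz for cross-entropy loss
            w -= lr * grad * x              # backpropagate to the weight
            b -= lr * grad                  # ... and to the bias
    return w, b

# Hypothetical toy task: classify whether x > 0.5
data = [(x / 10, 1 if x > 5 else 0) for x in range(11)]
w, b = train_neuron(data)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

A deep network repeats this idea across many layers of many such units, with the chain rule (backpropagation) distributing the gradient through them.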
The use of PyTorch in conjunction with a high-level library like fast.ai allows us to quickly get into deep learning, test ideas and implement prototypes. The common assumption that deep learning requires huge amounts of hardware and/or time is generally not true: modest resources are enough to see results within the talk.