This presentation was given on June 9th, 2020.
A video recording of the session is available here: https://youtu.be/OCB9sTUnUug
In this meetup, Sanyam Bhutani, Machine Learning Engineer at H2O.ai, gives a recap of the eighth annual ICLR (International Conference on Learning Representations) 2020, a deep learning conference focused on how to learn representations of data, which is, at its core, what deep learning does.
Sanyam walks through a few of his favorite papers from this year's ICLR. Note that a single session cannot capture the full richness of every paper or allow a detailed discussion of each.
You can find Sanyam in our community Slack (https://www.h2o.ai/slack-community/); please feel free to start a discussion with him there. Even a quick emoji greeting will get a response.
The following papers are covered:
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
Your classifier is secretly an energy based model and you should treat it like one
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Reformer: The Efficient Transformer
Generative Models for Effective ML on Private, Decentralized Datasets
Once for All: Train One Network and Specialize it for Efficient Deployment
Thieves on Sesame Street! Model Extraction of BERT-based APIs
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
Real or Not Real, that is the Question