When you yell “representative” at a customer service line and get directed to a live agent, you probably have natural language understanding (NLU) to thank. NLU is a crucial piece of conversational artificial intelligence (AI) that transforms human language, whether text or speech, into semantic information that machines can act on. Interactions, a leading provider of Intelligent Virtual Assistants (IVAs), leverages advanced NLU models to help some of the largest multinational brands understand customer speech and deliver an unparalleled user experience.

Today, the best NLU models rely on deep neural networks (DNNs). These highly accurate state-of-the-art models contain billions of parameters and are trained on enormous volumes of data to produce semantic outputs such as intent or sentiment. Over the years, Interactions has built DNN-based NLU technology on large volumes of contact-center-specific speech data, tagged with customized, enterprise-driven intents through a unique human-assisted understanding process. While these systems are incredibly effective, they require expensive, and often unsustainable, amounts of supervised data.

In contrast, a new generation of scalable machine learning methods, known as few-shot learning, produces NLU models of comparable quality without depending on large datasets. These methods train on just a handful of examples, broadening the use of NLU to applications where large collections of labeled data may not be available, and they can reach accuracy comparable to models trained on large supervised datasets in far less time. In the customer service industry, few-shot learning can be especially helpful for letting customers speak in their own words instead of navigating clunky predetermined menus or being repeatedly misunderstood.
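To make the idea concrete, here is a minimal sketch of few-shot intent classification using a nearest-centroid ("prototype") approach: the handful of labeled utterances for each intent are averaged into a prototype vector, and a new utterance is assigned the intent of the most similar prototype. The bag-of-words embedding, the similarity measure, and the example intents below are all illustrative assumptions, not Interactions' actual method; a production system would use pretrained sentence embeddings rather than word counts.

```python
# Toy few-shot intent classifier: nearest-centroid ("prototype") matching.
# Assumptions for illustration only: bag-of-words vectors, cosine similarity,
# and two hypothetical contact-center intents.
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words frequency vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def prototypes(support):
    # Sum the few labeled examples per intent into one prototype vector.
    protos = {}
    for intent, examples in support.items():
        total = Counter()
        for example in examples:
            total.update(embed(example))
        protos[intent] = total
    return protos

def classify(utterance, protos):
    # Pick the intent whose prototype is most similar to the utterance.
    vec = embed(utterance)
    return max(protos, key=lambda intent: cosine(vec, protos[intent]))

# Just a handful of labeled examples per intent: the "few shots".
support = {
    "billing": ["I have a question about my bill", "why was I charged twice"],
    "agent":   ["let me talk to a representative", "connect me to a live agent"],
}
protos = prototypes(support)
print(classify("I want to speak with a representative", protos))  # → agent
```

With richer embeddings in place of word counts, the same prototype scheme generalizes to utterances that share no vocabulary with the training examples, which is what makes the few-shot setting practical.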
Few-shot learning provides an opportunity to quickly bootstrap and customize NLU for specific applications and vertical-specific vocabulary. This unique capability helps deliver a superior user experience across industries such as retail, healthcare, and insurance. In this session, Mahnoosh will review existing methods for few-shot learning and highlight their potential applications for rapid NLU model development. She will also discuss the drawbacks of current methods and the additional research needed to ensure that training a large number of parameters on a small number of examples does not produce overfit models that struggle to generalize. Attendees will gain an understanding of the current landscape of few-shot learning in conversational AI, as well as the shortcomings of these techniques. As NLU models and their applications grow, few-shot learning is an indispensable part of rapidly delivering better experiences to conversational AI end users, and Mahnoosh is ready to unveil the technical details behind this emerging technology.