This is a deep dive into key design choices made in the NLP library for Apache Spark. The library natively extends the Spark ML pipeline APIs, enabling zero-copy, distributed, combined NLP, ML & DL pipelines that leverage all of Spark’s built-in optimizations. The library implements core NLP algorithms including lemmatization, part-of-speech tagging, dependency parsing, named entity recognition, spell checking and sentiment detection.
The library has the dual goal of delivering state-of-the-art performance as well as accuracy, and the primary design challenges we’ll cover are:
Using efficient caching, serialization & key-value stores to load large models (in particular very large neural networks) across many executors
Ensuring fast execution on both single machine and cluster environments (with benchmarks)
Providing simple, serializable, reproducible, optimized & unified NLP + ML + DL pipelines, since NLP pipelines are almost always part of a bigger machine learning or information retrieval workflow
Providing simple extensibility APIs for deep learning training pipelines, since most real-world NLP problems require domain-specific models.
This talk will be of practical use to people using the Spark NLP library to build production-grade apps, as well as to anyone extending Spark ML and looking to make the most of it.