Since the invention of SQL and relational databases, data production has centered on specifying how data should be transformed through queries. While Apache Spark can certainly be used as a general distributed SQL-like query engine, the power and granularity of Spark’s APIs allow for a fundamentally different, and far more productive, approach. This session will introduce the principles of goal-based data production with examples ranging from ETL to exploratory data analysis to feature engineering for machine learning. Goal-based data production concerns itself with specifying WHAT the desired result is, leaving the details of HOW that result is achieved to a smart data warehouse running on top of Spark. This not only substantially increases productivity but also significantly expands the audience that can work directly with Spark: from developers and data scientists to technical business users. Through specific data and architecture patterns and live demos, this session will demonstrate how easily any company can create its own smart data warehouse with Spark 2.x and gain the benefits of goal-based data production.
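The WHAT-versus-HOW distinction can be sketched in plain Python, independent of Spark itself. The `Goal` class and `plan_and_run` function below are purely illustrative inventions for this sketch, not part of any Spark API: the caller declares only the desired result (a dimension and a measure), and a toy "planner" derives and executes the actual transformation.

```python
# Illustrative sketch of goal-based data production: the user states
# WHAT result is wanted; a (toy) planner decides HOW to compute it.
# `Goal` and `plan_and_run` are hypothetical names, not Spark APIs.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Goal:
    group_by: str      # dimension the result should be keyed on
    measure: str       # column to aggregate
    agg: str = "sum"   # desired aggregate

def plan_and_run(rows, goal):
    """Toy 'smart warehouse': derives the HOW (here, a grouped
    aggregation) from the declarative goal specification."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row[goal.group_by]].append(row[goal.measure])
    ops = {"sum": sum, "max": max, "min": min}
    return {key: ops[goal.agg](vals) for key, vals in buckets.items()}

sales = [
    {"country": "US", "amount": 120},
    {"country": "DE", "amount": 80},
    {"country": "US", "amount": 40},
]
# The user only declares the goal; no transformation code is written.
result = plan_and_run(sales, Goal(group_by="country", measure="amount"))
# → {"US": 160, "DE": 80}
```

In a real system the planner would translate such a goal into Spark DataFrame operations or SQL and choose an efficient execution strategy, which is exactly the detail the user is freed from specifying.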