Apache Spark is a fast, general-purpose engine for big data processing, with built-in modules for streaming, SQL, machine learning, and graph processing. Spark SQL lets users run relational queries in Spark with distributed in-memory computation. While Spark gives us fast in-memory computation, Solr is blazing fast for certain analytic queries. In this talk, we will take a deep dive into how to optimize SQL queries from Spark to Solr by plugging into the Spark LogicalPlanner using pushdown strategies.

The key takeaways from the talk will be:

- How to perform Spark SQL queries with Apache Solr
- What happens inside a Spark SQL query
- How to plug into the Spark LogicalPlanner
- What types of pushdown strategies are optimal with Solr
- Examples of pushdown strategies

Presented at Lucene Revolution - http://sched.co/BAwV