Apache Spark is a powerful framework for distributed computing. You can use its API to create connectors for a variety of data formats, for both reading and writing. In this session I'll introduce you to the official Neo4j Connector for Apache Spark, which enables bi-directional communication between these two tools. We'll discuss the challenges we faced on the road to the first release of the connector, and we'll see how to leverage Neo4j alongside the computational power of Spark.