At a basic level, Apache Spark consists of two main components: a driver, which converts the user's code into multiple tasks that can be distributed across worker nodes, and executors, which run on those nodes and carry out the tasks assigned to them. Some form of cluster manager is needed to mediate between the two.
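A minimal sketch can make this division of labor concrete. In the Scala example below (an illustrative program, not from the original text), the `main` method runs inside the driver process; the `master` setting names the cluster manager (`local[*]` runs everything in a single JVM for testing, while a real deployment might use `yarn` or a standalone `spark://...` URL); and the `reduce` call is broken into tasks that the executors run in parallel, one per partition:

```scala
import org.apache.spark.sql.SparkSession

object DriverExample {
  def main(args: Array[String]): Unit = {
    // The driver process starts here: it builds a SparkSession, which
    // asks the cluster manager named in "master" for executor resources.
    // "local[*]" runs driver and executors in one JVM, handy for testing.
    val spark = SparkSession.builder()
      .appName("driver-example")
      .master("local[*]")
      .getOrCreate()

    // The driver turns this computation into a job, splits it into
    // tasks (one per partition, 8 here), and ships them to executors;
    // each executor sums its partitions and the driver combines the results.
    val total = spark.sparkContext
      .parallelize(1 to 1000000, numSlices = 8)
      .map(_.toLong)
      .reduce(_ + _)

    println(s"Sum computed across executors: $total")
    spark.stop()
  }
}
```

Swapping `local[*]` for a cluster manager URL changes where the executors run, but not the program itself, which is precisely the point of the driver/executor split: the driver's planning logic stays the same regardless of which manager schedules the work.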