1. Spark Streaming uses a StreamingContext to launch jobs across a cluster, processing the data from Kafka receivers in batches at a regular interval.
2. The receivers divide the Kafka streams into blocks, write the blocks to Spark's block manager, and report the received blocks to the StreamingContext.
3. At each batch interval, the StreamingContext combines all reported blocks into RDDs and launches jobs on those RDDs to process that batch of data from the Kafka receivers. A minimal sketch of this flow follows below.
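A minimal sketch of this receiver-based flow, assuming the spark-streaming-kafka (Kafka 0.8) integration; the ZooKeeper address, consumer group, topic name, and batch interval are placeholder values:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaReceiverSketch {
  def main(args: Array[String]): Unit = {
    // local[2]: at least one core for the receiver, one for processing
    val conf = new SparkConf().setAppName("KafkaReceiverSketch").setMaster("local[2]")

    // Batch interval: every 5 seconds the StreamingContext turns the
    // blocks reported so far into an RDD and launches jobs on it
    val ssc = new StreamingContext(conf, Seconds(5))

    // Receiver-based stream: the receiver consumes from Kafka, divides the
    // stream into blocks, and stores the blocks in Spark's block manager
    val kafkaStream = KafkaUtils.createStream(
      ssc,
      "zk-host:2181",           // ZooKeeper quorum (placeholder)
      "example-consumer-group", // consumer group id (placeholder)
      Map("events" -> 1)        // topic -> number of receiver threads (placeholder)
    )

    // Each batch is an RDD of (key, message) pairs; count the messages per batch
    kafkaStream.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that the batch interval passed to the StreamingContext is what drives step 3: it is the period at which received blocks are grouped into an RDD and a job is scheduled.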