Spark Streaming extends Spark to process real-time data streams. It receives data from sources such as Kafka, files, and sockets, divides the incoming stream into micro-batches, and processes each batch with Spark's RDD transformations and actions. This yields horizontally scalable, high-throughput, fault-tolerant stream processing, and streaming jobs can seamlessly reuse the machine learning algorithms in Spark MLlib.
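The micro-batch model can be illustrated with a minimal sketch in plain Python (not the Spark API itself, which would require a running Spark installation): an unbounded stream is sliced into small batches, and a batch-local transformation, here a word count mirroring the classic streaming example, is applied to each slice. Note that real Spark Streaming slices by a time interval; this sketch slices by record count for simplicity, and the function names `micro_batches` and `word_count` are illustrative, not Spark identifiers.

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    # Divide a (potentially unbounded) stream of records into small batches.
    # Spark Streaming slices by time interval; we slice by count here.
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def word_count(batch):
    # Per-batch transformation: split lines into words (flatMap-style),
    # then count occurrences (reduceByKey-style).
    return Counter(word for line in batch for word in line.split())

# Simulated input stream of text lines.
lines = ["spark streams data", "spark scales", "data data"]

# Apply the transformation independently to each micro-batch.
counts = [word_count(batch) for batch in micro_batches(lines, batch_size=2)]
```

In real Spark Streaming the per-batch logic is expressed once as DStream transformations, and the engine applies it to every micro-batch automatically, adding fault tolerance and parallelism on top of the same basic pattern.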