This document discusses processing time-series data with Hadoop. It describes analyzing high-density, large-volume time-series data from a single source using sliding windows, applying calculations such as the mean, variance, and fast Fourier transform at different timescales. A Hadoop MapReduce job performs the analysis: mappers run filters on individual windows and emit each window's midpoint together with the calculated values. Planned further development includes additional signal-processing filters, an interface to a database, and support for multiple correlated data sets.
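The windowed-filter idea can be sketched in pure Python. This is a minimal illustration, not the document's actual implementation: the function names (`map_window`, `sliding_windows`), the window size and step parameters, and the use of a naive DFT in place of a true FFT are all assumptions made for the example. In a real Hadoop job each mapper would apply `map_window` to the windows in its input split and emit the (midpoint, values) pairs as key-value records.

```python
import cmath
from statistics import mean, pvariance

def dft_magnitudes(window):
    """Naive O(N^2) discrete Fourier transform magnitudes.
    Stands in for an FFT so the sketch needs no third-party library."""
    n = len(window)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(window)))
            for k in range(n)]

def map_window(series, start, size):
    """Mapper-style filter (hypothetical name): compute statistics for
    one window and emit the window midpoint with the calculated values."""
    window = series[start:start + size]
    midpoint = start + size // 2
    return midpoint, {
        "mean": mean(window),
        "variance": pvariance(window),
        "dft": dft_magnitudes(window),
    }

def sliding_windows(series, size, step=1):
    """Yield (midpoint, values) for each sliding window, as the mappers
    would over their portions of the time series."""
    for start in range(0, len(series) - size + 1, step):
        yield map_window(series, start, size)
```

Varying `size` (and `step`) gives the different timescales mentioned above: the same filters run over short windows for fine-grained statistics and over long windows for coarse trends.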