The document examines the dynamics of micro-task crowdsourcing on Amazon Mechanical Turk (MTurk), presenting findings from a large-scale analysis of 2.5 million batches and 130 million HITs (Human Intelligence Tasks) collected over five years. Key insights include a classification of HIT types, the factors that affect batch throughput, and the elasticity of supply and demand in the marketplace, with a marked periodicity observed in task availability. The research identifies the features most significant for predicting batch performance, suggesting that micro-task crowdsourcing can be efficient when batches are designed and scheduled appropriately.
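To make the idea of feature-based throughput prediction concrete, here is a minimal sketch, not the paper's actual pipeline, of training a regression model on batch-level features and inspecting which ones matter most. The feature names (reward, batch size, allotted time, posting hour) and the synthetic data are illustrative assumptions, not the study's exact feature set or results.

```python
# Illustrative sketch: predict batch throughput from hypothetical
# batch-level features, then rank feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical batch features: reward per HIT (USD), batch size,
# allotted time (minutes), and hour of day the batch was posted.
X = np.column_stack([
    rng.uniform(0.01, 2.0, n),   # reward_usd
    rng.integers(1, 5_000, n),   # batch_size
    rng.integers(5, 120, n),     # allotted_minutes
    rng.integers(0, 24, n),      # posting_hour
])

# Synthetic target: HITs completed per hour (a stand-in for real
# marketplace logs, which the sketch does not have access to).
y = 50 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out batches:", r2_score(y_test, model.predict(X_test)))
print("Feature importances:", model.feature_importances_)
```

With real marketplace logs in place of the synthetic arrays, the importance scores would indicate which batch properties most strongly drive throughput, which is the kind of analysis the summarized research performs at scale.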