The document discusses the role of disagreement in crowdsourcing semantic annotation, arguing that disagreement can signal genuine ambiguity and variation in human interpretation rather than poor annotation quality. It introduces CrowdTruth, a method that accounts for this ambiguity and uses inter-annotator disagreement to improve the quality of labeled data, outperforming traditional majority-vote aggregation. Tasks in medical relation extraction, event extraction, and sound interpretation illustrate the practical application of the approach.
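To make the contrast with majority vote concrete, here is a minimal sketch of disagreement-aware aggregation in the spirit of CrowdTruth: instead of collapsing each unit to one winning label, every label keeps a score reflecting how many workers chose it, so ambiguous units remain visibly ambiguous. The data, function names, and scoring scheme are illustrative assumptions, not the actual CrowdTruth implementation (which additionally weights workers by quality, e.g. via cosine similarity between annotation vectors).

```python
from collections import Counter

# Hypothetical toy data: for each unit (e.g. a sentence in medical
# relation extraction), a list of label sets chosen by workers.
annotations = {
    "sent-1": [{"cause"}, {"cause", "treat"}, {"cause"}, {"treat"}],
    "sent-2": [{"treat"}, {"treat"}, {"treat"}, {"treat"}],
}

def majority_vote(worker_labels):
    """Traditional aggregation: the single most frequent label wins,
    discarding all information about disagreement."""
    counts = Counter(label for labels in worker_labels for label in labels)
    return counts.most_common(1)[0][0]

def label_scores(worker_labels):
    """Disagreement-aware aggregation: each label is scored by the
    fraction of workers who selected it, so ambiguity stays visible."""
    n_workers = len(worker_labels)
    counts = Counter(label for labels in worker_labels for label in labels)
    return {label: count / n_workers for label, count in counts.items()}

for unit, worker_labels in annotations.items():
    print(unit,
          "| majority:", majority_vote(worker_labels),
          "| scores:", label_scores(worker_labels))
```

Running this prints identical-looking majority labels for both units, while the score view separates the ambiguous unit (sent-1, where "cause" scores 0.75 and "treat" 0.5) from the clear one (sent-2, where "treat" scores 1.0), illustrating how disagreement carries signal that majority vote throws away.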