The document discusses harnessing disagreement in crowdsourcing for cognitive computing tasks such as relation extraction. Such tasks typically assume a single gold-standard answer, but the authors argue that annotator disagreement is not mere noise; it is a source of useful information. By capturing disagreement through annotation frequencies and similarity measures, machine learning models can be scored on how well their outputs fit within the space of possible human interpretations. This approach aims to adapt models to new annotation tasks more effectively by tolerating the vagueness and ambiguity inherent in human understanding.
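As a minimal sketch of this idea (not the authors' exact metric), one could build a per-item frequency vector over the candidate relations from the crowd's answers and score a model's predicted distribution by its cosine similarity to that vector; the relation names, annotation data, and the choice of cosine similarity below are illustrative assumptions, not taken from the source.

```python
from collections import Counter
import math

# Hypothetical crowd annotations for one sentence: each worker picked a relation.
# The relation labels and votes are illustrative, not from the source document.
crowd_annotations = ["born_in", "born_in", "lives_in", "born_in", "lives_in", "none"]
relations = ["born_in", "lives_in", "employed_by", "none"]

def unit_vector(annotations, labels):
    """Relative frequency of each candidate label among the crowd answers."""
    counts = Counter(annotations)
    total = len(annotations)
    return [counts.get(label, 0) / total for label in labels]

def cosine(u, v):
    """Cosine similarity between two annotation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# The crowd's interpretation space for this sentence, as label frequencies.
crowd_vec = unit_vector(crowd_annotations, relations)

# A model's output as a probability distribution over the same relations (assumed).
model_vec = [0.7, 0.2, 0.05, 0.05]

# Score: how well the model's output fits the space of human interpretations.
print(f"crowd vector: {crowd_vec}")
print(f"model fit score: {cosine(crowd_vec, model_vec):.3f}")
```

Under this sketch, a model that spreads probability across the relations the crowd found plausible scores higher than one that commits entirely to a label the crowd rarely chose, which captures the intuition of treating disagreement as signal rather than noise.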