The document discusses using crowdsourcing to collect semantic annotations, having multiple annotators label each example so that disagreement among them is captured rather than discarded. It proposes that collecting this disagreement data can address shortcomings of traditional methods that treat inter-annotator agreement as the measure of annotation quality. The approach involves developing a crowdsourcing methodology, an experimental plan for collecting annotations from crowd workers, and a task workflow for evaluating both the collected annotations and the performance of classifiers trained on them.
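As a loose illustration of how such multi-annotator data might be handled, the sketch below (not taken from the document; the item IDs, label set, and entropy-based score are assumptions) turns several crowd labels per example into a soft label distribution and a per-item disagreement score, the kind of signal that could feed the evaluation workflow described above.

```python
# Illustrative sketch only: derive a soft label distribution and a
# disagreement score (normalized entropy) from multiple crowd labels
# per example. All item IDs and labels here are hypothetical.
from collections import Counter
import math


def soft_label_and_disagreement(labels):
    """Return (label -> probability, normalized entropy) for one item's crowd labels."""
    counts = Counter(labels)
    total = len(labels)
    dist = {label: n / total for label, n in counts.items()}
    if len(counts) == 1:
        # All annotators agreed: zero disagreement.
        return dist, 0.0
    # Entropy of the label distribution, normalized by log of the number of
    # observed classes so the score lies in [0, 1]; 1 = maximal disagreement.
    entropy = -sum(p * math.log(p) for p in dist.values())
    return dist, entropy / math.log(len(counts))


# Hypothetical crowd annotations: each example labeled by several workers.
crowd_labels = {
    "item_01": ["positive", "positive", "negative", "positive", "neutral"],
    "item_02": ["negative", "negative", "negative", "negative", "negative"],
}

for item_id, labels in crowd_labels.items():
    dist, disagreement = soft_label_and_disagreement(labels)
    print(item_id, dist, f"disagreement={disagreement:.2f}")
```

In such a setup, the soft label distribution could serve as a training target or evaluation reference, while the disagreement score flags examples where a single "gold" label would be misleading; the specific scoring choice here is only one possibility.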