Big data is having a disruptive impact across the sciences.
Human annotation of semantic interpretation tasks is a critical
part of big data semantics, but it rests on an antiquated
ideal of a single correct truth that needs to be similarly
disrupted. We expose seven myths about human annotation,
most of which derive from that antiquated ideal of truth,
and dispel these myths with examples from our research. We
propose a new theory of truth, Crowd Truth, based on the
intuition that human interpretation is subjective, and that
measuring annotations of the same objects of interpretation (in our examples, sentences) across a crowd provides a useful representation of their subjectivity and of the range of reasonable interpretations.