From classifying images and texts to responding to surveys, tapping into the knowledge of crowds to complete complex tasks has become a common strategy in the social and information sciences. Although the timeliness and cost-effectiveness of crowdsourcing offer clear advantages to researchers, the data it generates may be of lower quality for some scientific purposes, and the quality control mechanisms offered by common crowdsourcing platforms, where they exist at all, may not provide robust measures of data quality. This study explores whether participants in research tasks engage in motivated misreporting, that is, cutting corners to reduce their workload while performing scientific tasks online. We conducted an experiment with three common crowdsourcing tasks: answering surveys, coding images, and classifying online social media content. The experiment recruited workers from three sources: a crowdsourcing platform for crowd workers, a commercial survey panel provider for online panelists, and a research volunteering website for citizen scientists. The analysis addresses two questions: (1) whether online panelists, crowd workers, and volunteers engage in motivated misreporting to different degrees, and (2) whether the patterns of misreporting vary across task types. We further examine the potential correlation between patterns of motivated misreporting and the data quality of complex scientific research tasks. The study closes with suggestions for quality assurance practices that incorporate collective intelligence to improve systems for massive online information analysis in social science research.