With Hadoop entering the mainstream, can -- and should -- it benefit from best practices from the world of data warehousing? Should the same ground rules developed for capacity-constrained internal enterprise data warehouses apply to Hadoop data stores designed for scale-out, or for harvesting data from the Internet? We will pinpoint three key areas: data quality, privacy & confidentiality, and lifecycle management, addressing questions such as:

1. Does it make sense to apply traditional data cleansing practices to Hadoop data, or will removing "errors" eliminate the possibility of discovering new insights?

2. Do different standards for privacy protection apply when harvesting sources such as social media that are already public? Should enterprises track their customers on Facebook or Twitter?

3. Will Hadoop make conventional data archiving practices obsolete? Is it cost-effective to move petabytes of data offline? Just because the Googles & Yahoos of the world retain all their data, should mainstream enterprises do the same? Should Hadoop be considered the new tape?