This document argues for greater transparency in data science and algorithmic decision-making. Decisions made by algorithms are currently "black boxes": the reasoning behind them cannot be inspected or understood. Transparency could be improved by exposing the data and assumptions a system relies on, its reasoning process, and the criteria by which it reaches decisions. Full transparency, however, may not be feasible: data-processing pipelines are complex, and models such as deep neural networks are not fully explainable. The document concludes that better technology, legislation, accountability mechanisms, market forces, or consumer advocacy could still increase transparency even where full transparency is out of reach.
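To make the idea of exposing data, assumptions, and decision criteria concrete, here is a minimal sketch of a transparent rule-based decision function that records every input, threshold, and reasoning step in a human-readable trace. The loan-approval scenario and all names and thresholds are hypothetical, chosen only to illustrate the contrast with a black-box model:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    trace: list = field(default_factory=list)  # human-readable reasoning log

def decide_loan(income: float, debt: float,
                min_income: float = 30_000,
                max_debt_ratio: float = 0.4) -> Decision:
    """Transparent decision: inputs, criteria, and each reasoning step
    are recorded so the outcome can be audited."""
    trace = [
        f"inputs: income={income}, debt={debt}",
        f"criteria: min_income={min_income}, max_debt_ratio={max_debt_ratio}",
    ]
    if income < min_income:
        trace.append(f"income {income} below minimum {min_income} -> reject")
        return Decision(False, trace)
    ratio = debt / income
    trace.append(f"debt ratio {ratio:.2f} vs limit {max_debt_ratio}")
    approved = ratio <= max_debt_ratio
    trace.append("approve" if approved else "reject")
    return Decision(approved, trace)

d = decide_loan(50_000, 10_000)
print(d.approved)
for step in d.trace:
    print(step)
```

A deep-learning model offers no analogous trace, which is exactly the gap the document describes; the sketch shows what "visualizing the reasoning process" can mean for simple rule-based systems.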