Provenance is broadly defined as the origin or source from which something comes, together with the history of its subsequent owners. In data-, process- and computation-intensive disciplines, provenance focuses on describing and understanding where and how data is produced, the actors involved in its production, and the processes applied to it. Provenance has been a hot topic in recent years across scientific disciplines, with a strong emphasis in eScience, where technologies and means for representing provenance have been proposed, ranging across different degrees of expressivity. As the amount of data involved has grown in these domains, provenance models have evolved into semantic overlays that describe provenance at different levels of granularity, facilitating user understanding. Nowadays, the need for provenance analysis has expanded beyond scientific domains into the Web of Data arena. The abundance of data is encouraging organizations and governments to publish and expose their data so that it can be made available to the public and reused for a number of purposes through the Linked Data initiative. However, while a significant number of large, interlinked data sets, such as those from the UK government and the BBC web sites, are now becoming publicly available, important challenges still need to be addressed before this vision can be achieved. Among them, provenance is one of the most outstanding issues for guaranteeing data quality, trustworthiness and reliability in the Web of Data. In this talk, we will provide an insight into provenance, from eScience to the Web of Data, describing old problems and new challenges that need to be addressed in the upcoming years.