This document summarizes the goals and steps taken to scrape data from PDF files published on the Banking Organization Systemic Risk Report website. The work involved scraping the site's HTML to locate the report PDFs, cleaning and formatting the raw data extracted from them, and performing exploratory data analysis in Tableau. Key steps included fetching the HTML with Python libraries, extracting tables and text from the PDFs using regular expressions, and building Tableau visualizations of indicators such as exposures, assets, liabilities, and scores to analyze systemic risk across banks. A minimal sketch of the scraping and extraction steps follows.
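The sketch below illustrates the general approach, not the exact pipeline used: it assumes `requests` and `BeautifulSoup` for fetching and parsing the HTML, `pdfplumber` for pulling text out of the PDFs, and a hypothetical regular expression (`ROW_PATTERN`) for matching indicator/value rows. The index URL is a placeholder, since the report site's address is not given here.

```python
import re
import requests
from bs4 import BeautifulSoup
import pdfplumber

# Placeholder URL; substitute the actual report index page.
INDEX_URL = "https://example.gov/systemic-risk-reports"

def find_pdf_links(index_url: str) -> list[str]:
    """Fetch the report index page and collect links that point to PDF files."""
    html = requests.get(index_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(".pdf")]

# Hypothetical pattern: an indicator label followed by a numeric value,
# e.g. "Total exposures 1,234,567". The real report layout may differ.
ROW_PATTERN = re.compile(r"^(?P<indicator>[A-Za-z][A-Za-z /()&-]+?)\s+(?P<value>[\d,]+)$")

def parse_pdf(path: str) -> list[dict]:
    """Extract indicator/value pairs from each page of a downloaded PDF."""
    rows = []
    with pdfplumber.open(path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""
            for line in text.splitlines():
                match = ROW_PATTERN.match(line.strip())
                if match:
                    rows.append({
                        "indicator": match.group("indicator").strip(),
                        "value": int(match.group("value").replace(",", "")),
                    })
    return rows
```

Rows parsed this way can then be written to a flat CSV file, which is the form Tableau consumes most easily for the exposure, asset, liability, and score visualizations described above.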