This document describes an advanced visualization tool for Spark and Flink jobs. The tool collects fine-grained data about task execution, including data characteristics and block-fetch information, and exposes it through a REST API. These metrics are used to visualize the physical execution plan, detect issues such as data skew, and help developers optimize their applications. The tool aims to make distributed data processing systems easier to understand and to guide the testing of adaptive partitioning techniques. Originally built for Spark, it has since been extended to visualize Flink jobs as well. Future plans include open-sourcing the framework and adding further visualizations and metrics.
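To illustrate the kind of analysis such per-task metrics enable, the sketch below shows one simple way to flag data skew from a list of per-task input sizes (as a REST API like the one described might return). The function name, metric choice, and threshold are assumptions for illustration, not the tool's actual API.

```python
from statistics import median

def skew_ratio(task_input_bytes):
    """Ratio of the largest task's input size to the median task's.

    A ratio well above 1 suggests one partition received a
    disproportionate share of the stage's data (data skew).
    task_input_bytes: per-task metric values, e.g. bytes read.
    """
    if not task_input_bytes:
        raise ValueError("no task metrics available")
    return max(task_input_bytes) / median(task_input_bytes)

# Balanced stage: every task reads roughly the same amount of data.
balanced = [100, 105, 98, 102]
# Skewed stage: one task reads roughly ten times the others' input.
skewed = [100, 105, 98, 1000]

print(round(skew_ratio(balanced), 2))  # close to 1.0
print(round(skew_ratio(skewed), 2))    # far above 1.0
```

A visualization layer could compute such a ratio per stage and highlight stages whose ratio exceeds a chosen threshold, pointing developers at candidates for repartitioning.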