The document analyzes the explainability of GraphSum, an abstractive multi-document summarization model, by examining its attention weights. It finds that attention weights from GraphSum's later decoding layers correlate more strongly with the relevance of input text segments, making them a better basis for explanations. It also finds that GraphSum performs better on news articles when its input is split into paragraphs rather than sentences, since in the news domain paragraph boundaries chiefly capture document structure rather than topic separation. The document concludes that attention weights combined with expert annotations may provide better insight into abstractive summarization quality than ROUGE scores alone.
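The kind of analysis described, relating a decoding layer's attention weights to the relevance of input segments, can be sketched as a simple correlation computation. The helper below is a hypothetical illustration, not GraphSum's actual code: `attention_relevance_correlation` and the toy attention matrices are assumptions for the example, and the document does not specify which correlation measure was used (plain Pearson correlation via NumPy is shown here).

```python
import numpy as np

def attention_relevance_correlation(attention, relevance):
    """Correlate a layer's attention over input segments with relevance scores.

    attention: (decoding_steps x input_segments) attention weights
    relevance: one relevance score per input segment (e.g. expert-annotated)
    Returns the Pearson correlation between the step-averaged attention
    and the relevance scores.
    """
    attention = np.asarray(attention, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    # Average over decoding steps -> one aggregate weight per input segment.
    seg_weights = attention.mean(axis=0)
    return float(np.corrcoef(seg_weights, relevance)[0, 1])

# Toy data: 2 decoding steps over 5 input segments (values are illustrative).
relevance = [3, 1, 0, 2, 0]                      # hypothetical annotations
early_attn = [[0.22, 0.18, 0.21, 0.19, 0.20]] * 2  # near-uniform attention
late_attn = [[0.50, 0.17, 0.00, 0.33, 0.00]] * 2   # tracks relevant segments

early_corr = attention_relevance_correlation(early_attn, relevance)
late_corr = attention_relevance_correlation(late_attn, relevance)
```

On this toy data the later layer's attention correlates far more strongly with relevance than the near-uniform early layer, mirroring the pattern the document reports for GraphSum's later decoding layers.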