This document summarizes a research paper that evaluated parameter-efficient learning methods (PERMs) for natural language generation tasks. The researchers compared PERMs such as adapter tuning, prefix tuning, and prompt tuning against full fine-tuning of large pre-trained language models across several evaluation metrics. Their results showed that PERMs can outperform fine-tuning when training samples are limited or when larger models are used, that adapter tuning generalizes best across domains, and that prefix tuning produces the most faithful generations. The study provides insights into how PERMs can help adapt models when labeled data is scarce.
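For context only (this is not the paper's own code), a minimal sketch of what parameter-efficient tuning looks like in practice, using the Hugging Face peft library and t5-small as illustrative, assumed choices: the pre-trained model's weights stay frozen and only a small set of prefix parameters is trained.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import get_peft_model, PrefixTuningConfig, TaskType

# Load a pre-trained seq2seq model (t5-small chosen only for illustration).
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Configure prefix tuning: learn 20 virtual prefix tokens per layer
# while keeping the original model parameters frozen.
peft_config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,
)

# Wrap the model so that only the prefix parameters require gradients.
model = get_peft_model(model, peft_config)

# Reports the small trainable-parameter count relative to the full model,
# which is the core idea behind parameter-efficient methods.
model.print_trainable_parameters()
```

Adapter tuning and prompt tuning follow the same pattern, differing only in which small module (inserted adapter layers, or learned input embeddings) is trained while the backbone remains fixed.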