The document proposes a hierarchical decoding model for natural language generation (NLG) that splits decoding into layers, each associated with a different linguistic pattern such as a part-of-speech class. It introduces inner-layer teacher forcing to encourage generation of important repeated tokens, inter-layer teacher forcing to provide supervision between layers, and curriculum learning to train the layers progressively. The model achieves significant improvements over a standard seq2seq baseline on an NLG dataset, demonstrating the benefit of leveraging linguistic knowledge for complex sentence generation.
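To make the layered training scheme concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: all layer names, functions, and the toy "model" are hypothetical. It shows the control flow of inter-layer teacher forcing (layer k+1 consumes the gold output of layer k during training) and a curriculum schedule that activates layers progressively.

```python
# Hypothetical coarse-to-fine linguistic layers (illustrative only).
LAYERS = ["verbs", "nouns", "modifiers"]

def decode_layer(layer, prev_tokens, gold_prev=None, teacher_forcing=False):
    """One decoding layer: extend the previous layer's sequence.
    With inter-layer teacher forcing, the gold sequence from the
    previous layer replaces the model's own prediction as input."""
    inputs = gold_prev if (teacher_forcing and gold_prev is not None) else prev_tokens
    # Toy stand-in for a real decoder: append this layer's token.
    return inputs + [f"<{layer}>"]

def curriculum_schedule(epoch, epochs_per_stage=2):
    """Curriculum learning: activate one additional layer every
    `epochs_per_stage` epochs, starting from the coarsest layer."""
    active = min(len(LAYERS), epoch // epochs_per_stage + 1)
    return LAYERS[:active]

def train_step(epoch, gold_by_layer):
    """One pass through the currently active layers, feeding each layer
    the gold output of the layer below it (inter-layer teacher forcing)."""
    outputs, prev = [], []
    for i, layer in enumerate(curriculum_schedule(epoch)):
        gold_prev = gold_by_layer[i - 1] if i > 0 else None
        prev = decode_layer(layer, prev, gold_prev=gold_prev, teacher_forcing=True)
        outputs.append(prev)
    return outputs
```

For example, at epoch 0 only the first layer is trained, while at epoch 2 the second layer is also active and is conditioned on the first layer's gold tokens rather than its predictions; this mirrors the progressive, supervised layer-by-layer regime the summary describes.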