This document proposes CALM (Concept-Aware Language Model), a method for improving commonsense reasoning in large language models. CALM introduces self-supervised objectives centered on concepts, including concept-to-sentence generation, concept order recovering, and generative question answering, and trains with a two-stage approach that pairs a generator with a discriminator. Experiments show that CALM outperforms baseline text-to-text models on commonsense reasoning tasks and generates sentences that stay focused on the input concepts. The aim is to strengthen text-to-text models' commonsense abilities through efficient self-supervised pre-training for concept representation and reasoning.
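To make the concept-centric objectives concrete, here is a minimal sketch of how self-supervised training pairs for concept-to-sentence generation and concept order recovering might be built from raw sentences. This is an illustration under simplifying assumptions, not the paper's implementation: the function names are hypothetical, and concepts are picked with a crude word-length heuristic rather than the POS-based extraction a real pipeline would use.

```python
import random

def extract_concepts(sentence, min_len=4):
    # Crude stand-in for real concept extraction (which would use
    # POS tagging to select nouns/verbs): keep longer words only.
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in words if len(w) >= min_len]

def make_c2s_example(sentence, seed=0):
    # Concept-to-sentence generation: the model sees shuffled
    # concepts as input and must reconstruct the original sentence.
    concepts = extract_concepts(sentence)
    shuffled = concepts[:]
    random.Random(seed).shuffle(shuffled)
    return {"input": " ".join(shuffled), "target": sentence}

def make_cor_example(sentence, seed=0):
    # Concept order recovering: concepts are permuted in place within
    # the sentence; the model must restore the original order.
    words = sentence.split()
    concept_set = set(extract_concepts(sentence))
    idxs = [i for i, w in enumerate(words)
            if w.strip(".,!?").lower() in concept_set]
    perm = idxs[:]
    random.Random(seed).shuffle(perm)
    corrupted = words[:]
    for src, dst in zip(idxs, perm):
        corrupted[dst] = words[src]
    return {"input": " ".join(corrupted), "target": sentence}

example = make_c2s_example("The chef cooked dinner in the kitchen.")
```

Both objectives fit a text-to-text model naturally: each produces an (input, target) string pair, so the same encoder-decoder can be trained on a mixture of them.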