This document summarizes a research paper on inverse constrained reinforcement learning (ICRL). The paper proposes a method that infers cost functions from expert demonstrations in continuous action spaces, so that an agent can act optimally while respecting the constraints the expert obeyed. Cost-function inference is formulated as a maximum entropy inverse reinforcement learning problem, with a neural network approximating the cost function. To improve learning efficiency, the method employs importance sampling and early stopping. Evaluation results show that the method outperforms alternative approaches in both cumulative reward and number of constraint violations, and that the learned cost functions transfer effectively to new tasks.
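The core loop described above (a learned cost, importance-weighted samples from a nominal policy, and an early-stopping safeguard) can be sketched in a toy form. This is a minimal illustration under stated assumptions, not the paper's algorithm: the linear cost `cost`, the feature dimension, the synthetic expert/nominal data, and the effective-sample-size stopping rule are all hypothetical stand-ins for the paper's neural-network cost and its actual criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: state-action pairs are encoded as feature
# vectors phi(s, a); a linear cost c_theta(phi) = theta . phi stands in
# for the paper's neural-network cost approximator.
dim = 4
theta = np.zeros(dim)

def cost(theta, phi):
    # Per-sample cost of each feature vector under the current theta.
    return phi @ theta

# Synthetic demonstrations (illustrative only): the expert avoids the
# first feature direction, the nominal (unconstrained) policy does not.
expert = rng.normal(size=(200, dim))
expert[:, 0] = -np.abs(expert[:, 0])
nominal = rng.normal(size=(200, dim))
nominal[:, 0] = np.abs(nominal[:, 0])

lr = 0.1
for step in range(500):
    # Importance sampling: reweight nominal samples so they approximate
    # draws from a constrained max-ent distribution proportional to
    # exp(-c_theta), without re-running policy optimization.
    w = np.exp(-cost(theta, nominal))
    w /= w.sum()
    # Early stopping: halt when the weights degenerate (low effective
    # sample size) -- a simple proxy for the paper's safeguard.
    ess = 1.0 / np.sum(w ** 2)
    if ess < 0.2 * len(nominal):
        break
    # Max-ent likelihood gradient: raise cost on the weighted nominal
    # samples, lower it on the expert samples.
    grad = (w[:, None] * nominal).sum(axis=0) - expert.mean(axis=0)
    theta = theta + lr * grad

# The learned cost ends up penalizing the feature direction the expert
# avoids (theta[0] > 0), i.e. it recovers the implicit constraint.
```
After training, `theta[0]` is positive, so the learned cost assigns high values exactly to the behavior the expert avoided; in the paper this role is played by a neural network over raw states and actions rather than fixed features.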