This document presents an emotional voice conversion framework for one-to-many emotional style transfer that does not require parallel training data. The framework addresses key challenges in emotional voice conversion, notably non-parallel training, by leveraging deep emotional features together with a newly introduced Emotional Speech Dataset (ESD). Experiments show competitive results for both seen and unseen emotions, supporting applications such as emotional text-to-speech and conversational agents.
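The core idea behind one-to-many transfer is that the conversion model is conditioned on an emotion embedding (the "deep emotional features"), so the same emotion-agnostic content can be rendered with any target emotion simply by swapping the conditioning vector. The toy sketch below illustrates this conditioning pattern only; the embedding values, dimensions, and the `convert` function are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-emotion embeddings, standing in for deep emotional
# features extracted by a pre-trained speech emotion recognizer.
EMOTION_EMBEDDINGS = {
    "neutral": rng.normal(size=8),
    "happy":   rng.normal(size=8),
    "sad":     rng.normal(size=8),
}

# Fixed random weights standing in for a trained decoder network.
W = rng.normal(size=(16 + 8, 16))

def convert(content_features: np.ndarray, emotion: str) -> np.ndarray:
    """Render emotion-agnostic content frames with a target emotion by
    concatenating each frame with the emotion's conditioning embedding."""
    emb = EMOTION_EMBEDDINGS[emotion]
    out = []
    for frame in content_features:
        x = np.concatenate([frame, emb])
        out.append(np.tanh(x @ W))
    return np.stack(out)

content = rng.normal(size=(5, 16))  # 5 frames of shared content features
happy = convert(content, "happy")
sad = convert(content, "sad")       # same content, different emotion
```

Because only the embedding changes between calls, a single model covers many target emotions; unseen emotions can in principle be handled by supplying a new embedding rather than retraining.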