This paper proposes an improved neural question generation model called answer-separated seq2seq. It treats the passage and the target answer separately by masking the answer span in the passage, and it uses a keyword-net to keep the model continuously aware of the target answer and to extract key information from it. A retrieval-style word generator is also employed to generate words based on their meanings. Experimental results show that the model produces fewer questions containing the target answer, better predicts the question type given the answer, and outperforms previous models on question generation tasks.
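The answer-masking step described above can be sketched as a simple preprocessing function. This is a minimal illustration, not the paper's implementation: the function name, the `<a>` mask token, and the example sentence are all assumptions made for clarity; the core idea is only that the answer span is replaced in the passage so the generator cannot trivially copy answer words into the question.

```python
def mask_answer(passage_tokens, answer_start, answer_len, mask_token="<a>"):
    """Replace the target-answer span in the passage with a single mask
    token, separating the answer from the passage fed to the encoder.
    (Hypothetical helper; token name and signature are illustrative.)"""
    return (passage_tokens[:answer_start]
            + [mask_token]
            + passage_tokens[answer_start + answer_len:])

# Hypothetical example: the answer "1889" spans one token at index 6.
passage = "the eiffel tower was built in 1889 in paris".split()
masked = mask_answer(passage, 6, 1)
print(" ".join(masked))  # → the eiffel tower was built in <a> in paris
```

After masking, the answer tokens themselves are supplied to the model through a separate pathway (the keyword-net in this paper), rather than through the passage encoder.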