Entity extraction, also known as named-entity recognition (NER), entity chunking, or entity identification, is a subtask of information extraction that detects phrases in text and classifies them into predefined categories. While finding entities automatically is useful on its own, it often serves as a preprocessing step for more complex tasks, such as relationship extraction. For example, biomedical entity extraction is a critical step in understanding interactions between entity types, such as the drug-disease relationship or the gene-protein relationship. Feature generation for such tasks is often complex and time consuming; neural networks, however, can obviate the need for manual feature engineering by learning features directly from raw text.

We will demonstrate how to build a domain-specific entity extraction system from unstructured text using deep learning. In this model, domain-specific word embedding vectors are trained with the word2vec learning algorithm on a Spark cluster using millions of Medline PubMed abstracts, and then used as features to train an LSTM recurrent neural network for entity extraction, using Keras with TensorFlow or CNTK on a GPU-enabled Azure Data Science Virtual Machine (DSVM). Results show that training a domain-specific word embedding model boosts performance compared to embeddings trained on generic data such as Google News. While we use biomedical data as an example, the pipeline is generic and can be applied to other domains.