This document describes a neural network approach to language identification. It extracts features from text, such as alphabet characters and character sequences (unigrams, bigrams, trigrams), whose frequencies differ across languages. Training data is prepared from texts in over 105 languages from the TED website, with out-of-vocabulary words removed. The neural network architecture consists of an input layer for the features, hidden layers, and an output layer that predicts the language. Alphabet features count characters by Unicode character class and are then binarized. Trigrams serve as the sequence features, supporting comparisons across languages.
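The two feature families above can be sketched as follows. This is a minimal illustration, not the document's actual implementation: it assumes Python's `unicodedata.category` as a stand-in for "Unicode character classes", and simple sliding-window extraction for the character n-grams.

```python
from collections import Counter
import unicodedata


def char_ngrams(text, n):
    """Extract overlapping character n-grams (n=1 unigrams, n=2 bigrams, n=3 trigrams)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]


def alphabet_counts(text):
    """Count characters by Unicode general category (e.g. 'Lu', 'Ll', 'Lo').

    Assumption: the document's "Unicode character classes" are approximated
    here by unicodedata's general categories.
    """
    return Counter(unicodedata.category(ch) for ch in text)


def binarize(counts):
    """Binarize counts: each class maps to 1 if it occurs at all, else it is absent."""
    return {key: 1 for key, value in counts.items() if value > 0}
```

For example, `char_ngrams("chat", 3)` yields `["cha", "hat"]`, and `binarize(alphabet_counts(...))` produces the 0/1 alphabet feature vector fed to the network's input layer.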