This document describes using a sequence-to-sequence (seq2seq) model to tokenize Chinese text, which is written without spaces between words. It explains how the seq2seq model works, reviews prior work applying the model to machine translation, and states the goal of testing whether the same architecture can also be used for Chinese tokenization. It then analyzes why applying the seq2seq model directly did not achieve ideal results for this task, and suggests directions for future work such as incorporating beam search and other decoding algorithms.
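To make the task framing concrete, here is a minimal sketch of how Chinese tokenization can be cast as a character-level seq2seq problem: the source is the raw character stream, and the target is the same characters with an explicit boundary token that the decoder must learn to emit. The helper names and the boundary token "|" are illustrative assumptions, not anything specified by the document; a real system would learn the source-to-target mapping with an encoder-decoder network.

```python
# Hedged sketch (assumed setup, not the document's actual implementation):
# word segmentation as character-level sequence transduction.

def make_seq2seq_pair(words):
    """Build (source, target) sequences from a pre-segmented sentence.

    Source: the unsegmented character stream.
    Target: the same characters with a boundary token "|" after each word,
    which the decoder would need to learn to emit.
    """
    source = [ch for w in words for ch in w]
    target = []
    for w in words:
        target.extend(list(w))
        target.append("|")
    return source, target


def decode_to_words(target):
    """Recover a segmentation from a decoder output sequence."""
    words, cur = [], []
    for tok in target:
        if tok == "|":
            if cur:
                words.append("".join(cur))
            cur = []
        else:
            cur.append(tok)
    if cur:
        words.append("".join(cur))
    return words


# Example with a hypothetical pre-segmented sentence.
src, tgt = make_seq2seq_pair(["我们", "喜欢", "学习"])
print(src)                    # unsegmented character stream
print(decode_to_words(tgt))   # round-trips back to the word list
```

Under this framing, a greedy decoder emits one token at a time, which is one reason beam search (mentioned as future work) can help: it keeps several candidate boundary placements alive instead of committing immediately.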