This document summarizes research on predicting political ideology from Twitter text. Its main findings:

1. Previous studies likely oversimplified the problem by relying on non-representative samples of highly politically engaged users, even though political ideology exists on a spectrum.
2. Predicting coarse positions (e.g. conservative vs. liberal) achieves high accuracy, but accuracy drops sharply for finer-grained distinctions (e.g. extreme vs. moderate).
3. Language analysis can distinguish neutral, moderate, and extreme users, suggesting that political engagement is a dimension separate from ideology.
4. A new dataset and modeling approach are needed to capture political ideology and engagement as continuous variables.
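The findings above can be illustrated with a minimal sketch. Assuming (hypothetically) that each user is scored on two continuous axes, ideology and engagement, both in [-1, 1], the coarse binary task and the harder fine-grained task correspond to binning the same continuous ideology score at different resolutions; the `bin_score` helper and the example users below are illustrative assumptions, not the paper's actual data or method.

```python
# Hypothetical sketch: ideology and engagement as two continuous axes.
# Binning a continuous ideology score coarsely yields the easier binary
# task (finding 2); binning it finely yields the harder fine-grained task.

def bin_score(score, n_bins):
    """Map a score in [-1.0, 1.0] to one of n_bins equal-width bins."""
    # Clamp, then scale into [0, n_bins); the top edge lands in the last bin.
    clamped = max(-1.0, min(1.0, score))
    idx = int((clamped + 1.0) / 2.0 * n_bins)
    return min(idx, n_bins - 1)

# Each user is a point in (ideology, engagement) space -- two separate
# dimensions, per finding 3. Values here are made up for illustration.
users = [
    {"ideology": -0.9, "engagement": 0.8},  # engaged, strongly liberal
    {"ideology":  0.1, "engagement": 0.1},  # disengaged moderate
    {"ideology":  0.7, "engagement": 0.9},  # engaged conservative
]

# Binary liberal/conservative labels (2 bins) vs. a finer 5-point scale.
coarse = [bin_score(u["ideology"], 2) for u in users]  # [0, 1, 1]
fine = [bin_score(u["ideology"], 5) for u in users]    # [0, 2, 4]
```

The point of the sketch is that both tasks derive from one continuous variable: the coarse labels collapse distinctions (the moderate and the strong conservative share label 1) that the fine-grained labels must separate, which is where reported accuracy drops.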