This document summarizes a project on detecting deceptive hotel reviews on TripAdvisor. The project used a dataset of 800 truthful and 800 deceptive hotel reviews, together with more than 800,000 genuine TripAdvisor reviews, to build and evaluate text classification models. Key findings: deceptive reviews were more often positive than negative; lower-star hotels showed higher rates of deceptive reviews; and certain word choices distinguished deceptive from truthful reviews. Combining text-analysis features with a bag-of-words representation improved the deception detection model's accuracy by 7%. A small student survey found that humans had difficulty identifying deceptive reviews, performing worse than the computer model.