This document discusses preventing private-information inference attacks on social networks. It examines how released social-network data can be used to predict undisclosed private attributes of individuals, such as political affiliation or sexual orientation, and it describes three sanitization techniques that reduce the effectiveness of such attacks. An experiment applying these techniques to a Facebook dataset attempts to recover sensitive attributes through collective inference, and shows that the sanitization methods decrease the effectiveness of both local and relational classification algorithms.
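To make the attack model concrete, the following is a minimal sketch of one common relational approach, a weighted-vote relational-neighbor classifier with iterative collective inference; the toy friendship graph, attribute name, and label values are invented for illustration and are not taken from the paper's dataset.

```python
# Hedged sketch: infer a hidden sensitive attribute (here, a hypothetical
# "political affiliation" label) from friends' labels via iterative
# majority voting over a friendship graph. This is the style of attack
# the sanitization techniques are meant to blunt.

# Toy friendship graph: node -> set of friends (all data is invented).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b", "e"},
    "e": {"d"},
}

# Publicly disclosed labels; the attribute is hidden for "b" and "d".
known = {"a": "liberal", "c": "liberal", "e": "conservative"}

def collective_inference(graph, known, iterations=10):
    """Iteratively label unknown nodes by majority vote of labeled neighbors."""
    labels = dict(known)
    for _ in range(iterations):
        updated = dict(labels)
        for node in graph:
            if node in known:
                continue  # never overwrite observed (public) labels
            votes = {}
            for friend in graph[node]:
                lbl = labels.get(friend)
                if lbl is not None:
                    votes[lbl] = votes.get(lbl, 0) + 1
            if votes:
                # Deterministic tie-break via sorted label order.
                updated[node] = max(sorted(votes), key=votes.get)
        if updated == labels:
            break  # labels stabilized; inference has converged
        labels = updated
    return labels

print(collective_inference(graph, known))
```

Sanitization along the lines the document describes (removing attribute details or friendship links) degrades exactly the inputs this sketch relies on: fewer known labels and fewer edges mean weaker, noisier votes.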