People who work on audience, social, or community teams (for media organizations, brands, politicians, public figures, etc.) are often responsible for some level of community moderation. While discourse can be productive, trolls and bad actors frequently weigh in with threatening or harmful language, graphic imagery, and other abusive content, all at the expense of the person on the other side of the screen. How can those working with digital communities protect their teams' mental health as audiences and conversations grow, especially when resources are thin?