Twitter has proven itself a rich and varied source of language data for linguistic analysis. Twitter is more than a popular new channel for social interaction through language; in many ways it constitutes a whole new genre of text, as users adapt to its limitations (140-character messages) and to its novel conventions, such as retweeting and hash-tagging. But Twitter also presents an opportunity of another kind to computationally-minded researchers of language: a generative opportunity to study how algorithmic systems might exploit linguistic tropes to compose novel, concise and retweetable texts of their own. This paper evaluates one such system, a Twitterbot named @MetaphorMagnet that packages its own metaphors and ironic observations as pithy tweets. Moreover, we use @MetaphorMagnet, and the idea of Twitterbots more generally, to explore the relationship of linguistic containers to their contents: to understand the extent to which human readers fill these containers with their own meanings, and to see meaning in the outputs of generative systems where none was ever intended. We evaluate this placebo effect by asking human raters to judge the comprehensibility, novelty and aptness of texts tweeted by simple and sophisticated Twitterbots.