This paper will address the tools employed and the ethical, practical and methodological challenges of my research, using Twitter data collected during the 2014 G20 summit in Brisbane. The new computational tools developed in digital methods are powerful instruments for mapping Twitter conversations around a particular topic. Nevertheless, several limitations remain in both the collection and the analysis of the data. These methodological challenges are addressed in my research project, which aims to explore the growth of Public Diplomacy activities on social media. Indeed, according to the Twiplomacy Study (2014), the vast majority of the 193 UN member countries have a presence on Twitter, and more than two-thirds of all heads of state and heads of government have personal accounts on the social network.
Yet the study of Public Diplomacy on social media is still struggling to find an appropriate research method capable of capturing the complexity of social media communication and of assessing engagement and participation. These tasks have become particularly challenging with the new phenomenon of cross-platform communication: users now share and comment on news via social media, while traditional media, such as TV channels and newspapers, often report back what has been discussed on social media.
By analysing the Twitter data collected during the 2014 G20 summit in Brisbane, I will describe some aspects of my methodological approach to the study of Public Diplomacy on Twitter. The paper will suggest that a mixed approach is needed to comprehend both the context and the cross-platform communication. To this end, tweets should be understood as units of information related to each other and visualised in the form of networks. Once key actors have been identified and the relations among nodes visualised, a count of the URLs in the dataset provides valuable information about the content shared in the conversation. By manually inspecting the information shared, the context of the conversation emerges. In this sense, large datasets should first be visualised as a whole and then analysed by zooming into the dataset and selecting particularly interesting units, which contain clues to understanding the context and the flow of information between different platforms and websites; a sketch of these two steps follows below.
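To make the pipeline concrete, the following is a minimal sketch of the network-building and URL-counting steps. It assumes the collected tweets are stored one JSON object per line in the Twitter REST API v1.1 format (with `user` and `entities` fields); the file name `g20_tweets.jsonl` and the choice of a mention network are illustrative assumptions, not the exact tooling used in the study.

```python
# Minimal sketch: build a mention network and count shared URLs from a
# set of collected tweets. Assumes one Twitter REST API v1.1 JSON object
# per line; the file name 'g20_tweets.jsonl' is hypothetical.
import json
from collections import Counter

import networkx as nx

mention_graph = nx.DiGraph()   # directed edges: author -> mentioned user
url_counts = Counter()         # how often each expanded URL was shared

with open("g20_tweets.jsonl", encoding="utf-8") as f:
    for line in f:
        tweet = json.loads(line)
        author = tweet["user"]["screen_name"]
        # One edge per @mention; repeated mentions accumulate as weight.
        for mention in tweet["entities"].get("user_mentions", []):
            target = mention["screen_name"]
            if mention_graph.has_edge(author, target):
                mention_graph[author][target]["weight"] += 1
            else:
                mention_graph.add_edge(author, target, weight=1)
        # Expanded URLs show which external content circulates in the talk.
        for url in tweet["entities"].get("urls", []):
            url_counts[url["expanded_url"]] += 1

# Key actors: the accounts mentioned most often (highest weighted in-degree).
top_actors = sorted(mention_graph.in_degree(weight="weight"),
                    key=lambda pair: pair[1], reverse=True)[:10]
print("Most mentioned accounts:", top_actors)
print("Most shared URLs:", url_counts.most_common(10))
```

Treating mentions as directed, weighted edges makes the most-mentioned accounts, the likely key actors, visible at a glance, while the URL tally surfaces the external content that feeds the cross-platform flow of information described above; the resulting graph can then be exported for visual exploration and manual zooming-in.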