
Artificial Intelligence, elections, media pluralism and media freedom


A presentation by Pier Luigi Parcu on Artificial Intelligence, elections, media pluralism and media freedom at the European Artificial Intelligence Observatory, April 2, 2019.


  1. AI, elections, media pluralism and media freedom. Pier Luigi Parcu, CMPF. Brussels, 2 April 2019. European Parliament, European Artificial Intelligence Observatory, 2nd meeting
  2. [Bar chart] Market capitalization of the biggest internet companies worldwide, as of 31 March 2019, in billions of U.S. dollars: Microsoft, Apple, Amazon, Alphabet, Facebook, Alibaba, Tencent, Netflix, Ant Financial, eBay+PayPal. Source: YCharts
  3. Technological challenges to the quality of information
     • The centrality of a few Internet giants, and the development of AI driven by their limited number and global reach in the key functions of mediating interpersonal communication and disseminating content, have serious implications for the quality of information
     • At the same time, big data has become the driver of the digital economy
     • The mix of AI and big data is accelerating the disruption of many industries, and the traditional information industry is among the most impacted. The problem is that media carry an externality of huge importance for the functioning of democracies, and particularly of electoral processes
  4. Technological challenges to the quality of information
     • A specific threat to democratic processes derives from voter profiling and micro-targeting. In the abstract, reaching the right target with specific information could be useful and truly informative. But when information is driven by automated and obscurely directed forces (bots), the existence of a serious problem is clear. Should social media platforms be allowed to self-regulate in their management of bots? Should bots be required to disclose that they are not human, and who is responsible if they do not? (A minimal disclosure sketch follows this slide.)
     • There is still limited scientific certainty about how widespread “information disorder” is (Wardle and Derakhshan 2017), and about the dimension of its real impact on individuals' choices and behaviours
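To make the bot-disclosure question concrete, the following is a minimal, hypothetical sketch (not part of the presentation) of an automated account that attaches an explicit disclosure and a responsible operator to every post it publishes; the class and field names are assumptions chosen for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BotPost:
    """A post published by an automated account, carrying an explicit
    disclosure that it is not authored by a human and naming the
    legally responsible operator."""
    text: str
    operator: str            # who answers for the bot's output
    automated: bool = True   # machine-readable "this is not a human" flag
    published_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def render(self) -> str:
        # Human-readable disclosure appended to the post body
        return f"{self.text}\n[Automated account, operated by {self.operator}]"

if __name__ == "__main__":
    post = BotPost(text="Polling stations open at 08:00 tomorrow.",
                   operator="Example Campaign Ltd.")
    print(post.render())
```

A rule of this kind would also address the responsibility question raised in the slide: the operator field identifies who is accountable if the disclosure is missing or false.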
  5. Technological challenges to the quality of information
     • Nonetheless, recent events have led many scholars to question the role of social media, seen as responsible for distributing disinformation (Allcott and Gentzkow 2017; Tambini 2017), using manipulative psychometric profiling (Cadwalladr 2017), undermining authoritative journalism (Bell 2018; Allcott and Gentzkow 2017) and, ultimately, jeopardizing the fairness and transparency of elections (Tambini 2018)
     • Even if scientific evidence is still incomplete (though it is accumulating), a principle of caution would require preventive intervention. Moreover, besides disinformation, freedom of expression and the plurality of information are also heavily impacted by data-driven algorithmic information and bots
  6. Technological challenges to the quality of information
     Summarizing, several new threats to information and fair electoral processes emerge:
     1) The presence of only a few gatekeepers, and the disappearance of many traditional and local media, may drive towards excessive standardization and homogeneity of the sources of news and qualified opinions, which negatively affects the quality and variety of information. The situation is exacerbated by a substantial absence of editorial control and editorial responsibility in the distribution model of information typical of the major internet players, and aggravated by the lack of transparency and the secrecy of the operating algorithms
  7. 2) The polarisation of opinions created by sociological and technological dynamics further impoverishes the democratic dialogue and causes the exclusion of middle ground and of conciliatory occasions for debate. These developments negatively influence how political consensus is reached and maintained, even in otherwise democratic environments
     3) In this “information disorder”, the impact of hate speech increases; it has become a common pattern of political propaganda, even in democracies, targeting minorities, women, migrants, and so on
     4) AI (in particular machine learning) cannot, at this stage of knowledge and control, be safely trusted to counter disinformation without serious risks for freedom of expression (a toy classifier sketch after this slide illustrates the risk)
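Point 4 can be illustrated with a deliberately naive toy example, not taken from the presentation: a tiny text classifier trained to flag "disinformation". The training sentences, labels and thresholds below are invented for illustration; the only point is that however the flagging threshold is set, lowering it to catch more disinformation also suppresses more legitimate speech, which is the freedom-of-expression risk the slide warns about.

```python
# A deliberately naive "disinformation filter" to show the trade-off:
# a lower flagging threshold catches more bad posts but also
# suppresses more legitimate speech (false positives).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-made training data (1 = disinformation, 0 = legitimate)
train_texts = [
    "miracle cure suppressed by the government",
    "secret ballot machines flip every vote",
    "polling stations open from 8am to 8pm",
    "turnout in the last election was 64 percent",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_posts = [
    "they are hiding the real vote count",      # likely disinformation
    "official results are published tonight",   # legitimate news
]
scores = model.predict_proba(new_posts)[:, 1]   # probability of class 1

# The same posts get flagged or not depending only on the chosen threshold
for threshold in (0.7, 0.5, 0.3):
    flagged = [post for post, s in zip(new_posts, scores) if s >= threshold]
    print(f"threshold {threshold}: flagged {flagged}")
```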
  8. Technological and political possible responses
     1) Recognition that quality information is a public good and may require public support
     2) Imposition of rules on editorial control and editorial responsibility for platforms
     3) Technical empowerment of professional journalism
     4) Support for media literacy, fact-checking and qualified actions to counter disinformation
     5) Imposition of transparency rules for any kind of online political advertising (a sketch of such a transparency record follows this slide)
     6) Imposition of disclosure rules for bots, of the type imposed for advertising in the media
     7) Strengthening of competition rules regarding the major internet players in the field of information pluralism
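As a concrete illustration of response 5, here is a hypothetical sketch, not drawn from the presentation, of the minimum machine-readable record a transparency rule might require for each online political ad; every field name is an assumption made for the example.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class PoliticalAdRecord:
    """Hypothetical minimum disclosure record for one online political ad."""
    ad_id: str
    sponsor: str                    # who paid for the ad
    amount_spent_eur: float         # declared spend
    start_date: str                 # ISO dates of the campaign run
    end_date: str
    targeting_criteria: List[str]   # audience segments the ad was aimed at
    impressions: int                # how many times it was shown

record = PoliticalAdRecord(
    ad_id="2019-EU-000123",
    sponsor="Example Party",
    amount_spent_eur=1500.0,
    start_date="2019-05-01",
    end_date="2019-05-20",
    targeting_criteria=["age 18-34", "region: Brussels", "interest: politics"],
    impressions=48000,
)

# Published as part of a public ad-transparency archive
print(json.dumps(asdict(record), indent=2))
```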
  9. Thank you for your attention! @cmpfeui @FCP_eui @FSRComsMedia
