Is the algorithm reliable? The collaboration between technology and humans in the fight against hate speech

Presentation by Federica Casarosa at the 2019 CMPF Summer School for Journalists and Media Practitioners - Covering Political Campaigns in the Age of Data, Algorithms & Artificial Intelligence

  1. When the algorithm is not fully reliable: the collaboration between technology and humans in the fight against hate speech
     Federica Casarosa, Research Fellow, CJC/RSCAS
     Firenze, 24 June 2019
  2. Between algorithms and soft computing
     • Algorithms – automated decision-making processes to be followed in calculations or other problem-solving operations, especially by a computer
       – varying levels of accuracy
     • Soft computing – algorithms that are tolerant of imprecision, uncertainty, partial truth, and approximation
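To make the distinction concrete, here is a minimal Python sketch; the placeholder terms and the crude character-overlap similarity are illustrative assumptions, not a real moderation rule. A conventional algorithm returns a hard yes/no, while a soft-computing approach returns a graded confidence that tolerates imprecision such as deliberately obfuscated spellings.

```python
# Minimal sketch: hard rule vs. "soft" score. The blocklist terms and the
# character-overlap similarity are invented for illustration only.
HARD_BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def hard_check(text: str) -> bool:
    """Exact-match rule: the text either contains a listed term or it does not."""
    return any(token in HARD_BLOCKLIST for token in text.lower().split())

def soft_check(text: str) -> float:
    """Graded rule: returns a confidence in [0, 1] instead of a yes/no,
    tolerating imprecision such as the obfuscated spelling 's1ur1'."""
    score = 0.0
    for token in text.lower().split():
        for term in HARD_BLOCKLIST:
            # crude similarity: shared characters in the same positions
            overlap = sum(a == b for a, b in zip(token, term))
            score = max(score, overlap / max(len(token), len(term)))
    return score

post = "you are a s1ur1"
print(hard_check(post))  # False: the exact match misses the obfuscation
print(soft_check(post))  # 0.8: the soft match still raises a strong signal
```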
  3. Algorithms and content control
     • Algorithms used for content moderation are now widespread, having the advantage of scalability
     • Detection and removal of:
       – illegal content
       – hate speech
       – fake news
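The scalability advantage comes from applying one automated check uniformly to an arbitrarily large stream of items. A toy sketch, with invented categories and a naive keyword matcher standing in for a real classifier:

```python
# Toy moderation pipeline: one scoring function applied uniformly to a
# stream of posts. Categories and keywords are invented for illustration.
from typing import Iterator, List, Tuple

CATEGORY_KEYWORDS = {
    "hate_speech": {"slur1", "slur2"},
    "fake_news": {"miracle", "hoax"},
}

def categorize(post: str) -> List[str]:
    tokens = set(post.lower().split())
    return [cat for cat, kws in CATEGORY_KEYWORDS.items() if tokens & kws]

def moderate(stream: Iterator[str]) -> Iterator[Tuple[str, List[str]]]:
    """Yield (post, matched categories); in a real system, flagged posts
    would be queued for removal or for human review."""
    for post in stream:
        categories = categorize(post)
        if categories:
            yield post, categories

posts = ["this miracle cure is a hoax", "nice weather today"]
for post, categories in moderate(posts):
    print(categories, "->", post)  # ['fake_news'] -> this miracle cure is a hoax
```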
  4. Hate speech
     • Hate speech is designed to promote hatred on the basis of race, religion, ethnicity, national origin or other specific group characteristics
     • A legal definition is still lacking at European level
       – Council Framework Decision 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law
  5. Code of conduct on online hate speech I
     • Code of conduct on countering illegal hate speech online
     • Signatories: Facebook, Google, Microsoft, Twitter, Instagram, Google+, Snapchat, Dailymotion and jeuxvideo.com
  6. Code of conduct on online hate speech II
     • IT companies should adapt their internal procedures to guarantee that "they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary".
     • IT companies should provide for a removal notification system which allows them to review removal requests "against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA".
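In engineering terms, the 24-hour commitment attaches a review deadline to every notification. A minimal sketch of such deadline tracking, with an invented data model (no signatory's actual system is described here):

```python
# Sketch of tracking the Code of conduct's 24-hour review window.
# The Notification model and the escalation rule are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(hours=24)

@dataclass
class Notification:
    content_id: str
    received_at: datetime
    reviewed: bool = False

    def overdue(self, now: datetime) -> bool:
        return not self.reviewed and now - self.received_at > REVIEW_WINDOW

queue = [
    Notification("post-1", datetime(2019, 6, 23, 9, 0)),
    Notification("post-2", datetime(2019, 6, 24, 8, 0)),
]

now = datetime(2019, 6, 24, 10, 0)
for n in queue:
    # Notifications past the window would be escalated for immediate review.
    print(n.content_id, "overdue" if n.overdue(now) else "within window")
```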
  7. Code of conduct on online hate speech III
     [figure-only slide]
  8. The human in the loop I
     • "Trusted flaggers" – an individual or entity considered to have particular expertise and responsibilities for the purposes of tackling hate speech
       – ranging from individuals or organised networks of private organisations, civil society organisations and semi-public bodies, to public authorities
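A common way to wire trusted flaggers into an otherwise automated pipeline is to let their reports jump the review queue. A sketch with assumed trust tiers and priorities:

```python
# Sketch of human-in-the-loop prioritisation: reports from trusted flaggers
# are reviewed first. The trust tiers and their priorities are assumptions.
import heapq

PRIORITY = {"public_authority": 0, "ngo_network": 1, "ordinary_user": 2}

def enqueue(queue: list, flagger_type: str, content_id: str) -> None:
    # Lower priority value = reviewed earlier; the counter breaks ties
    # in insertion order.
    heapq.heappush(queue, (PRIORITY[flagger_type], len(queue), content_id))

queue: list = []
enqueue(queue, "ordinary_user", "post-7")
enqueue(queue, "ngo_network", "post-3")
enqueue(queue, "public_authority", "post-9")

while queue:
    _, _, content_id = heapq.heappop(queue)
    print("review next:", content_id)  # post-9, then post-3, then post-7
```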
  9. The human in the loop II
     [figure-only slide]
  10. Open issues I – legal v community standards
      • Facebook: "What does Facebook consider to be hate speech? Content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease is not allowed. We do, however, allow clear attempts at humor or satire that might otherwise be considered a possible threat or attack. This includes content that many people may find to be in bad taste (example: jokes, stand-up comedy, popular song lyrics, etc.)."
      • YouTube: "Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: race or ethnic origin, religion, disability, gender, age, veteran status, sexual orientation/gender identity."
      • Twitter: "Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories."
      • Code of conduct: "Illegal hate speech, as defined by the Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law and national laws transposing it, means all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin."
  11. Open issues II – due process guarantees
      • Availability of internal mechanisms that allow users to be notified, to be heard, and to review or appeal the decision of the IT companies
        – No specific requirement, neither in terms of judicial procedures nor through alternative dispute resolution mechanisms
  12. Open issues III – selection of trusted flaggers
      • Platforms should "publish clear and objective conditions" for determining which individuals or entities they consider as trusted flaggers
        – Conditions include expertise and trustworthiness
      • Apart from YouTube's programme, none of the other companies provides a procedure to become a trusted flagger
  13. Open issues IV – liability regime
      • Do algorithms and trusted flaggers that proactively detect and remove content affect the exemption of liability for IT companies?
      • According to Art. 14(1)(b) of the e-Commerce Directive (2000/31/EC), if the hosting provider acts expeditiously to remove or to disable access to the content upon obtaining such knowledge or awareness, it continues to benefit from the liability exemption
  14. Open issues IV – liability regime (continued)
      • Do algorithms and trusted flaggers that proactively detect and remove content legitimise general monitoring obligations?
      • Suggestions from the CJEU in Case C-18/18, Eva Glawischnig-Piesczek v Facebook Ireland
  15. Comparison with the fake news approach
      • What is fake news? – A lie
      • Who is qualified to identify fake news?
      • Is fake news illegal?
        – No legal provision at European level
        – But intervention through the Code of Practice on Disinformation (October 2018)
  16. Fake news and social networks
      • Terms of service of Facebook (January 2019):
        – We block economic incentives for people, pages and domains that spread misinformation.
        – We use various signals, including feedback from the community, to build a machine learning model that can detect news that may be false.
        – We reduce the distribution of content deemed fake by independent third-party fact-checkers.
        – We allow people to decide for themselves which content to read, share and trust by providing them with more context and promoting the ability to evaluate news.
        – We work with academics and other organizations to address this complex issue.
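The "signals feed a model that down-ranks likely-false content" idea can be illustrated with a toy scoring function. The features, weights and threshold below are invented for illustration; a production system would learn them from labelled data rather than hard-code them:

```python
# Toy version of "signals -> score -> reduced distribution". All features,
# weights and the threshold are invented assumptions, not Facebook's system.

def falseness_score(user_reports: int, fact_check_rating: float,
                    domain_strikes: int) -> float:
    """Combine signals into a rough score in [0, 1]; higher = more likely false."""
    score = 0.02 * user_reports + 0.6 * fact_check_rating + 0.1 * domain_strikes
    return min(score, 1.0)

def distribution_multiplier(score: float, threshold: float = 0.5) -> float:
    """Content deemed likely false is down-ranked, not removed."""
    return 0.2 if score >= threshold else 1.0

score = falseness_score(user_reports=12, fact_check_rating=0.9, domain_strikes=1)
print(score, distribution_multiplier(score))  # 0.88 0.2 -> reach cut to 20%
```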
  17. An example: the case of Il Lercio
      [figure-only slide]
  18. Thanks for your attention
      federica.casarosa@eui.eu
