This document summarizes a presentation given by Prof. Chris Marsden on responsible content moderation and the law. It discusses issues around disinformation, including how to define it, and analyzes regulatory options for addressing it, from self-regulation to statutory regulation. Its recommendations include that governments should avoid internet shutdowns in response to disinformation alone. It also discusses the Christchurch Call, an agreement signed by many countries to address terrorist and violent extremist content online, including a focus on understanding recommendation algorithms; former New Zealand PM Jacinda Ardern now promotes the Christchurch Call as a special envoy.
Common Good Digital Framework Action Plan
PURPOSE
The Common Good Digital Framework (CGDF) will serve as a platform to bring
authoritative knowledge and raise awareness about violations of ethical values
and standards by governments and large organizations.
The platform will monitor and alert against the misuse of Artificial Intelligence
(AI), personal data, and neglect of cyber security. The objectives of the
campaign are to stimulate and galvanize civil society towards the need to create
new norms and regulations, and therein influence public and private AI and
cyber policy.
Presentation by Christian D'Cunha at the 2019 CMPF Summer School for Journalists and Media Practitioners - Covering Political Campaigns in the Age of Data, Algorithms & Artificial Intelligence
The Next Six Months in Myanmar: Stakeholder Risk in the Telecoms Sector
Vicky Bowman gave a keynote analysis at Myanmar Connect 2015 in Naypyidaw on 16 September. Her presentation focussed on the stakeholder risks for the ICT sector during the coming six months before and after Myanmar’s election on 8 November. She previewed the ICT Sector-Wide Impact Assessment (SWIA), recently completed by MCRB, which will be published on 24 September, and identified some of the main online and offline human rights impacts of the sector which will feature in the SWIA. She particularly highlighted the question of ‘network shutdown’ and the increased risk of this as a result of the election and its aftermath. She identified commitments which the government could make, as well as steps companies should take to prepare themselves for this risk.
Internet Governance & Digital Rights by Waihiga K. Muturi, Rtn.
This meeting is scheduled for Wednesday 26th October at 4 pm GMT. Yes, it's only in two days. During this session, we will cover the following topics:
The importance of privacy as culture and how (legal) regulation makes a difference by Eduarda Chacon Rosas (Brazil).
Internet governance and digital rights by Waihiga K. Muturi, Rtn. (Kenya).
Online Apps: Its Regulations and Governance by Antara Jha (India).
Maintaining data integrity through effective regulatory policies by Jonathan Agbo (Nigeria).
I hope you will enjoy those conversations and strongly advise you to mark the date on your calendar.
As a last reminder, here's the link of the event: https://us06web.zoom.us/meeting/register/tZArfuqspz8sHdYTLYvWQcU7GZgCsZrPUSsE
Presentation at COMPACT Project event in Riga - Disinformation, Media literac... – Oles Kulchytskyy
The symposium was organized by the University of Latvia Faculty of Social Sciences (FSS) on 10 December. Latvian researchers and opinion leaders, together with European partners, presented the latest findings in the disinformation and media literacy field and discussed the future challenges that the digital media landscape presents for scientists, decision-makers, and media users alike.
A look at why Caribbean cyber security is important, Caribbean experiences achieving cyber security, why an effective strategy is critical and the importance of an effective Information Governance strategy.
A presentation by Pier Luigi Parcu on Artificial Intelligence, elections, media pluralism and media freedom at the European Artificial Intelligence Observatory April 2, 2019
Presentation at Data protection in the Western Balkans and the Eastern Partnership Region. High-level exchange and learning week organised by SIGMA, GIZ, RCC and ReSPA.
From Freedom House's full 2018 report. Governments around the world are tightening control over citizens’ data and using claims of “fake news” to suppress dissent, eroding trust in the internet as well as the foundations of democracy.
Ethical Questions of Facial Recognition Technologies by Mika Nieminen – Mindtrek
SAFETY AND SECURITY track - Tuesday 28th
"While facial recognition technology is used increasingly across the globe, debate is growing over its ethical aspects and acceptability. Concerns include the technology's limited accuracy, its step-by-step advance towards an all-reaching “surveillance state”, challenges to individual privacy and data security, and potentially distorting effects on democratic processes. It is suggested, among other things, that facial recognition technology needs to be well regulated, that systems must be transparent and include “bias checks”, and that there must be an administrative procedure for correcting technological and social biases and faults in the system."
MIKA NIEMINEN, Principal Scientist, VTT, Technical Research Centre of Finland
Smart City Mindtrek 2020 – conference
28th-29th January
Tampere, Finland
www.mindtrek.org/2020/
ILS presentation on principles of fake news regulation – mrleiser
A short presentation for the ILS lunch series on how to regulate fake news, focusing on inter-agency cooperation while protecting free expression and ensuring financial and advertising transparency.
PRIVACY RIGHTS ARE HUMAN RIGHTS – Linda Gichohi
This is an article/blog on the Privacy Symposium Africa 2022 on privacy rights and digital rights as human rights. It also discusses online gender-based violence: gender-based violence that manifests in the digital space and online world, e.g. phishing, non-consensual sharing, and harassment. The article explains why privacy rights are essential in the modern world.
QUT Regulating Disinformation with AI, Marsden 2024 – Chris Marsden
“It is the ‘AI regulation moment’,” intoned the Secretaries-General of both the International Telecommunication Union (ITU) and the United Nations itself, before the UN General Assembly passed a unanimous resolution on AI safety, and the G7 Hiroshima Dialogue of AI codes of conduct moved industrialised nations beyond self-regulation. Academic analysts and policymakers need to challenge a reversion to broken models, to ethics washing and to what is now being termed ‘AI washing’. I set out a critical agenda for remembering lessons from the Internet past to assert an AI co-regulatory future.
Today, I will be presenting on the topic of "Generative AI, responsible innovation, and the law." Artificial Intelligence has been making rapid strides in recent years, and its applications are becoming increasingly diverse. Generative AI, in particular, has emerged as a promising area of innovation, with the potential to create highly realistic and compelling outputs.
Marsden CELPU 2021 platform law co-regulation – Chris Marsden
12 November 2021 20th Annual International Conference, Center for Law & Public Utilities, School of Law, Seoul National University: The Wave of Digital Economy and Exploration of the Direction of Online Platform Regulation
Professor Chris Marsden, Sussex Law @SussCIGR
Discussion: Dr Eun-Jung Kwon (KISDI)
Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ... – Chris Marsden
With the move to a more digital, mobile, and platform-dominated media environment, people increasingly find and access news and information via platforms like search engines and social media. These have empowered citizens in many ways and are important drivers of attention to established publishers, but have also enabled the distribution of disinformation from a range of different actors. In a context where citizens are often increasingly sceptical of platforms, publishers, and public authorities alike, what do we know about the scale and scope of disinformation problems, and what can different actors do to counter the problems we face?
https://www.scl.org/articles/10662-interoperability-an-answer-to-regulating-ai-and-social-media-platforms
Prosumer Law and Networked Platform Regulation: The Long View
Platform regulation has become the cause of technology regulation: a call to regulate the intermediaries who provide platforms for networked digital services. These include the GAFA giants: Google, Amazon, Facebook and Apple. Many policy entrepreneurs are peddling solutions as the policy cycle turns, in a classic Kingdon case of ‘solutions chasing a problem’. Yet networks are not new, and their platforms have been regulated for hundreds of years. In this paper, I take the long view, focussing on common carriage neutrality and the railways/telegraphy regulation of the 1840s in England. I offer some historical examples that may be highly relevant to ‘prosumer’ digital capitalism 180 years later.
Any Internet user who has posted content, from Facebook to Twitter to blog posts to podcasts, has become a prosumer – though there are very broad categories, ranging from the occasional tweeter to the fully developed hacker. Over two billion people now use Google to search for content, Facebook, Instagram and WhatsApp to share news, gossip and photos, YouTube to watch and upload videos, and Twitter/Snap and other sites to say just about anything. We are all becoming ‘prosumers’ sharing intimate details of our personal lives. But this ‘prosumer environment’ is currently either grossly unregulated, leaving users' data and content at the mercy of the multinationals who host it and sometimes claim to own it, or subject to knee-jerk over-regulation as with the current ‘fake news’ controversy in Germany. It is a new regulatory policy cycle in network regulation.
Regulatory responses are finally emerging, driven by both data protection and competition concerns, yet the over-arching need to ensure greater neutrality of intermediaries has largely been limited to last mile monopolists and mobile oligopolists: the legacy telecommunications companies who provide Internet access. What is needed is a comprehensive Prosumer Law solution that draws on fundamental human rights to privacy and free expression, competition, and technology regulation to ensure a fair and neutral deal for prosumers and citizens.
4. VOICE CONSTITUTIONAL
AMENDMENT
• Voting in Australia is mandatory (ballots can be spoilt)
• 90% turnout is normal
• Amendment requires a national majority plus a majority in 4 of 6 states (the ‘double majority’)
• The proposed Voice consultative body failed in all 6 states
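The amendment rule on this slide (which under s 128 of the Australian Constitution in full requires a national majority of voters plus majorities in 4 of 6 states) can be sketched as a simple check. This is a hypothetical illustration; the vote shares below are rounded approximations, not official figures.

```python
def referendum_passes(national_yes: float, state_yes: dict) -> bool:
    """Section 128 'double majority': a national majority of voters
    plus Yes majorities in at least 4 of the 6 states."""
    states_carried = sum(1 for share in state_yes.values() if share > 0.5)
    return national_yes > 0.5 and states_carried >= 4

# The 2023 Voice referendum failed both limbs: no national majority,
# and no state returned a Yes majority (shares are approximate).
voice_2023 = {"NSW": 0.41, "VIC": 0.46, "QLD": 0.32,
              "WA": 0.37, "SA": 0.36, "TAS": 0.41}
print(referendum_passes(0.40, voice_2023))  # False
```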
5. DISINFORMATION ENVIRONMENT
• Victoria (where I live in Melbourne) was closest to majority
• This was a traditional ‘big media’ campaign
• With some AI-generated deep fakes included….
12. REGULATORY CONCERNS INCREASED ABOUT:
• Fake news, hate speech spread by bots, and
• their impact on fair elections and civil society.
• But these 'Great Hack' concerns are much older,
• going back to the Trinidad (2010) and Ukraine (2011) elections.
• As with privacy regulation under the GDPR,
• Europe led the way in willingness to legislate:
• Germany (NetzDG, 2017) and France (2018) legislated against disinformation
13. DEFINING DISINFORMATION (‘FAKE NEWS’)
• “False, inaccurate or misleading information
• designed, presented and promoted to
• intentionally cause public harm or for profit”
• European Commission High Level Expert Group 2018
Distinguish disinformation from misinformation,
• which refers to unintentionally false or inaccurate information.
14. DISINFORMATION A RAPIDLY MOVING TARGET
• 2018 European Parliament study
• analysed 250 articles, papers and reports
• assessing strengths and weaknesses of AI disinformation solutions
• and their effects on freedom of expression, media pluralism & democracy
• Agree with other experts: evidence of harm inconclusive in 2018
• 2016 US Presidential election/UK ‘Brexit’ referendum
• Note this report was pre-Mueller (2018) and pre-fines in the UK (2019-20)
• US Department of Justice and UK Parliamentary Committee
15. CHRIS MARSDEN, TRISHA MEYER, IAN BROWN
● Platform values and democratic elections:
● How can the law regulate digital disinformation?
Computer Law & Security Review, Volume 36, 2020, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2019.105373
16. 2018 EUROPEAN PARLIAMENT REPORT:
REGULATING DISINFORMATION WITH ARTIFICIAL INTELLIGENCE
• Presented in Strasbourg 13 December
• https://www.europarl.europa.eu/cmsdata/161725/STOA%20Panel%20meeting%2013-12-2018%20-%20Minutes.pdf
• Two days after the terrorist attack on Strasbourg’s Christmas market
• Finally officially published 13 March 2019
17. INTERDISCIPLINARY STUDY ANALYSES IMPLICATIONS OF AI
DISINFORMATION INITIATIVES
Policy options based on literature, 10 expert interviews & mapping
We warn against technocentric optimism as a solution to disinformation,
• which proposes the use of automated detection, (de)prioritisation, blocking and
removal by online intermediaries without human intervention.
• More independent, transparent and effective appeal and oversight mechanisms
are necessary in order to minimise inevitable inaccuracies
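The oversight point above can be made concrete with a minimal sketch of a moderation pipeline: fully automated action is reserved for high-confidence cases, the uncertain middle band is routed to human review, and removals remain appealable. All names and thresholds here are hypothetical, not any platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str                # "keep", "human_review", or "remove"
    appealable: bool = False   # removals must always carry an appeal route

def moderate(score: float, remove_at: float = 0.95, review_at: float = 0.6) -> Decision:
    """Route by classifier confidence that a post is disinformation.
    Only very high confidence triggers removal, and even then the
    decision stays appealable; the uncertain middle band goes to a human."""
    if score >= remove_at:
        return Decision("remove", appealable=True)
    if score >= review_at:
        return Decision("human_review")
    return Decision("keep")

print(moderate(0.97).action)  # remove
print(moderate(0.7).action)   # human_review
print(moderate(0.2).action)   # keep
```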
18. WE STUDIED CAMBRIDGE ANALYTICA, RUSSIAN (AND
MANY OTHER ACTORS) HACKING OF ELECTIONS
• Wider issue of regulating disinformation and election cybersecurity
• Euronews (9 Jan 2019) How Can Europe Tackle Fake News in the Digital Age? 3-minute video
• https://www.euronews.com/2019/01/09/how-can-europe-tackle-fake-news-in-the-digital-age
• saves you reading the 100-page European Parliament report I co-authored.
• But you should read the 4-page regulatory annex –
• How to regulate AI for disinformation.
• Good luck employing anyone with human rights law qualifications in India
• for much less than $100 an hour... let alone a dollar!
19. 2021: LEAVE.EU LOST THEIR CASE, LONG AFTER BREXIT
• Leave.EU and Eldon v Information Commissioner [2021] UKUT 26 (AAC) (8 February 2021)
• https://www.bailii.org/uk/cases/UKUT/AAC/2021/26.pdf
• UK Information Commissioner issued Leave.EU and Eldon with
• monetary penalty notices and assessment notices (and an enforcement notice in the case of Eldon)
• under Data Protection Acts 1998 and 2018.
• First-tier Tribunal dismissed all appeals – appeal to the Upper Tribunal concerned the scope of Reg 22 PECR,
• the criteria for making an MPN (‘serious contravention’ and knowledge of risk of breach),
• the relevance of the Commissioner's regulatory action policy (RAP),
• proportionality and the criteria for an assessment notice, and unfair process –
• All five appeals dismissed by Upper Tribunal.
20. COMMONWEALTH STUDY:
CYBERSECURITY & DISINFORMATION
● Brown, Ian, Chris Marsden, James Lee, Michael Veale [2020]
● Electoral Cybersecurity in the Commonwealth: A Good Practice Guide
● London: Commonwealth Secretariat, 162pp. at
● https://thecommonwealth.org/sites/default/files/inline/Commonwealth%20cybersecurity%20for%20elections%20guide.pdf
○ based on an in-depth questionnaire sent to all Commonwealth election management bodies;
○ research missions in Ghana, Pakistan, Trinidad and the UK; and
○ regional training workshops in Africa (Jo’burg), Asia-Pacific (Sydney) and Caribbean (Trinidad)
○ It is the chief outcome of the Strengthening Election Cybersecurity project that is part of the Commonwealth Cyber Capability Programme.
○ It implements the Commonwealth Cyber Declaration agreed by heads of government at their 2018 meeting in London.
○ The declaration commits ‘to a cyberspace that supports economic and social development and rights online; to build the foundations of an effective national cybersecurity response; and to promote stability in cyberspace through international cooperation.’
21. LAUNCH FOR CWEALTH ELECTORAL CYBERSECURITY GUIDE
3 March 2020: technical systems, laws, policies,
● capabilities across the whole electoral cycle
● recommendations to national contexts
● to help professionals who run elections
Lead author Dr Ian Brown said:
○ “It’s really important electoral authorities build up their links with government agencies dealing with
cyber security, data protection, public procurement, to respond more effectively together.
○ “especially in the Caribbean and Pacific there are a number of small Commonwealth countries,
○ really helpful if electoral authorities can co-operate in terms of sharing training and learning,
○ thinking about collaborative procurement and sharing information about specific attacks on their
election infrastructures
○ because that will make the response of each country together much stronger.”
22. DID COMMONWEALTH MEMBERS ACT ON
RECOMMENDATIONS FOR CYBERSECURITY EXPERTISE?
● Kenya election declared 16 August 2022
● 4 out of 7 Electoral Commissioners refused to certify the incumbent ‘victory’
○ Commonwealth: problems “use of the Kenya Integrated Elections Management System
(KIEMS) kit, which is used to register and identify voters using biometrics”
https://thecommonwealth.org/news/kenya-elections-largely-peaceful-and-transparent-say-commonwealth-observers
● Official observers:
○ Commonwealth Observer Group – Bruce Golding, Former Prime Minister of Jamaica
○ EISA Election Observation Mission – Goodluck Ebele Jonathan, Former President of Nigeria
○ Joint AU-COMESA Election Mission – Dr. Ernest Bai Koroma, Former President of Sierra Leone
○ IGAD Election Observation Mission – Dr. Mulatu Teshome, Former President of Ethiopia
● Ethiopian People's Revolutionary Democratic Front – not elected, nominated
○ IRI/NDI Election Observation Mission – Joaquim Chissano, Former President of Mozambique
○ Carter Center Election Expert Mission – Ben Graham Jones, Team Leader
○ EU Election Observation Mission – Ivan Stefanec, Member of the European Parliament
23. COMMONWEALTH FINAL REPORT – AUGUST 2023
● Report officially conveyed to the Government of Kenya and relevant stakeholders,
○ builds on the findings of the interim statement
○ issued shortly after the elections on 11 August 2022
● The Group recommended implementation of key legislative provisions,
● including the 2013 Elections Campaign Financing Act,
● and noted the need for the IEBC to receive its funding earlier in the electoral cycle,
● to allow it to adequately plan and undertake its electoral activities,
● including boundary delimitation, in accordance with its own desired timeframe
● https://production-new-commonwealth-files.s3.eu-west-2.amazonaws.com/s3fs-public/2023-08/Kenya%20COG%20Report%20Final%20(2).pdf?VersionId=yj2INsQTT0LrdfixdctCsAgJw4yt3cpg
24. RECOMMENDS THAT CMCA SS 22-23 BE REPEALED AND REPLACED WITH LAWS ADDRESSING DISINFORMATION THAT DO NOT UNDULY CURTAIL OTHER CONSTITUTIONAL FREEDOMS
Section 22 of the CMCA 2018 places limitations on Article 33 of the Constitution (freedom of expression)
● ‘in respect of the publication of false, misleading or fictitious data or information that is likely to propagate war; incite persons to violence; constitutes hate speech; advocates hatred that constitutes ethnic incitement, vilification of others or incitement to cause harm.’
In 2021, a bill was tabled in the National Assembly that sought to amend the CMCA
● to provide the National Computer and Cybercrimes Coordination Committee (NC4) with greater powers to
recommend websites that should be made inaccessible,
● and to prohibit the use of electronic mediums to promote terrorism, extreme religious or cult activities.
Media stakeholders’ concern with this law extended beyond freedom of expression,
● with some noting an increased tendency by the Government to use ‘security concerns’ – in this case, terrorism – as a pretext for such media freedom violations.
● Stakeholders also raised concerns that efforts to amend the CMCA so close to the election represented an effort to stifle freedom of expression in the election year.
25. ALWAYS READ FOOTNOTE 58!
● “As noted, only around 12 million people (21%) of Kenyan citizens use social media.”
● www.statista.com/statistics/1029198/facebook-user-share-in-kenya-by-age/
Well, that’s just nonsense. Why did the Observer Group write that claim?
Freedom House (2023) Freedom on the Net 2023: The Repressive Power of Artificial Intelligence
● “During Kenya’s August 2022 election,
● influencers gamed social media platforms’ trending functions to boost misleading political hashtags.
● “For instance, the hashtag #ChebukatiCannotBeTrusted sought to undermine the country’s independent electoral
authority by suggesting that its leader supported one presidential candidate over the others.
● “Similar networks of influencers were found to have coordinated disinformation campaigns against Kenyan
journalists, judges, and members of civil society.”
● https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
26. REAL PICTURE OF SOCIAL MEDIA USE IN KENYA
● Total Internet users were almost 50 million in 2022!
● Communications Authority (CA) data for October-December 2022:
○ mobile data/internet subscriptions: 47.76 million
○ mobile broadband: 66.8% of subscriptions
○ smartphone penetration: 60.2%
● Total population of 49.4 million at the end of 2022
● https://www.kictanet.or.ke/state-of-internet-penetration-in-kenya/
● Social media use therefore closer to 40 million?
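The Footnote 58 dispute is simple arithmetic. A minimal sanity check, assuming only the CA figures quoted above (the 40 million figure is an inference from those statistics, not an official count):

```python
# Sanity-check Footnote 58 ("only 12 million / 21% of Kenyans use social
# media") against the Communications Authority's Q4 2022 statistics.
population = 49.4e6           # total population, end of 2022
data_subscriptions = 47.76e6  # mobile data/internet subscriptions
broadband_share = 0.668       # share of subscriptions on mobile broadband

# Subscriptions capable of meaningful social media use:
broadband_subs = data_subscriptions * broadband_share
print(f"Mobile broadband subscriptions: {broadband_subs / 1e6:.1f} million")
# -> roughly 31.9 million, far above the 12 million claimed

claimed = 12e6
print(f"Claimed users as share of population: {claimed / population:.0%}")
# -> 24%, so even the quoted '21%' does not match its own numerator
```

Even ignoring multi-SIM ownership (which cuts the other way), the claimed 12 million sits implausibly far below the broadband subscriber base.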
27. KENYA ENVIRONMENT VERY DIFFICULT BY JULY 2023
● “While economic concerns are the root of the conflict,
● it is troubling that the protests have seen an increase in the use of hate speech and
● we call on leaders of all sides of the political divide to deescalate the situation.”
● https://thecommonwealth.org/news/commonwealth-statement-recent-developments-kenya
29. NOTE: EU COMMISSIONER TO COMBAT
FOREIGN DISINFORMATION IS CZECH
• Prague Spring 1968 – she remembers Russian tanks occupying her country
30. 2018: FIVE RULE OF LAW
RECOMMENDATIONS
1. Media literacy and user choice
2. Strong human review and appeal processes where
AI is used
3. Independent appeal and audit of platforms
4. Standardizing notice and appeal procedures; creating a multistakeholder body for appeals
5. Transparency in AI disinformation techniques
31. TYPOLOGY OF REGULATION (Option and form | Typology of regulation | Implications/Notes)
• 0 Status quo | Corporate Social Responsibility, single-company initiatives | Note that enforcement of the General Data Protection Regulation and the proposed revised ePrivacy Regulation, plus the agreed text for the new AVMS Directive, would all continue and likely expand
• 1 Non-audited self-regulation | Industry code of practice, transparency reports, self-reporting | Corporate agreement on principles for common technical solutions and the Santa Clara Principles
• 2 Audited self-regulation | European Code of Practice of Sept 2018; Global Network Initiative published audit reports | Open, interoperable, publicly available standard, e.g. a commonly engineered/designed standard for content removal to which platforms could certify compliance
• 3 Formal self-regulator | Powers to expel non-performing members; Dispute Resolution ruling/arbitration on cases | Commonly engineered standard for content filtering or algorithmic moderation. Requirement for members of the self-regulatory body to conform to the standard or prove equivalence. Particular focus on content ‘Put Back’ metrics and efficiency/effectiveness of the appeal process
• 4 Co-regulation | Industry code approved by Parliament or regulator(s) with statutory powers to supplant | Government-approved technical standard – for filtering or other forms of moderation. Examples from broadcast and advertising regulation
• 5 Statutory regulation | Formal regulation – tribunal with judicial review | National Regulatory Agencies – though note many overlapping powers between agencies on e.g. freedom of expression, electoral advertising and…
32. 2021: TEN RECOMMENDATIONS
• 1. Electoral management boards (EMBs) should not request the operation of Internet shutdowns during election periods, unless a national emergency has been objectively assessed and sanctioned by a superior court. Such an injunction may be achieved with great speed, and the need for procedural legitimacy before such an extreme response is essential.
• 2. Governments should avoid shutdowns in response to disinformation concerns alone, while ensuring false announcements are responded to where defamatory, fraudulent, or unjustifiably casting doubt on official EMB results; such action may not appear disinterested or legitimate prior to a court decision.
33. CHRISTCHURCH CALL 2ND ANNIVERSARY: UNITED
STATES JOINED, BRINGING THE TOTAL TO 55 NATIONS
• 'Christchurch Call' response signatories issued this message:
• "We will improve transparency from Governments and companies on terrorist and violent extremist content, including
Government information on flagging and content removal requests; and increase the number and variety of companies
providing transparency reporting.
• We will work to improve the quality and content of that reporting over time;
• We will establish a multi-stakeholder process to ensure that transparency reporting across Government and industry is
responsive to the concerns of civil society participants and informative in demonstrating progress on the Call’s commitments;
• As a Community, we will devote resources towards building understanding of recommendation algorithms and user journeys,
including the role they may play in radicalisation or amplification of terrorist and violent extremist content;
• We have agreed to work together this year to design methods that can safely be used to build a better understanding of
algorithmic outcomes.
• This will help us address the question of amplification and identify more effective intervention points".
• Algorithmic analysis is thus now at the centre of the response.
34. “WE WILL BUILD UNDERSTANDING OF
RECOMMENDATION ALGORITHMS AND USER JOURNEYS”
• “including the role they may play in radicalisation or amplification of terrorist and violent extremist content;
• We have agreed to work together this year to design methods that can safely be used to build a better understanding of algorithmic outcomes.
• This will help us address the question of amplification and identify more effective intervention points"
35. ALGORITHMIC ANALYSIS IS AT THE CENTRE OF THE RESPONSE
• Amplification?
• Should social media, by default, prevent live video from reaching a wide audience without prior approval?
• Does that remove the immediacy of e.g. TikTok?
• Isn’t that what broadcast rules are in place to do?
• 15 second delay for profanity/violence/banned speech
36. FORMER PM JACINDA ARDERN, NOW A SPECIAL ENVOY,
RECENTLY AT STANFORD ON THE CHRISTCHURCH CALL
• Jacinda Ardern Assembles Stanford Scholars for Discussion on Technology Governance and Regulation
• Led by former Prime Minister of New Zealand Rt. Hon. Dame Jacinda Ardern,
• a delegation from the Christchurch Call joined Stanford scholars
• to discuss how to address the challenges posed by emerging technologies.
• https://fsi.stanford.edu/news/special-envoy-jacinda-ardern-assembles-stanford-scholars-discussion-technology-governance-and
37. BUT FOR NON-ENGLISH LANGUAGE CONTENT….
• Almost non-existent moderation – and none at all on WhatsApp,
• which is where ‘angry uncle’ content is more likely to be spread in family groups
• Brazil 2018 was bad… 2022 may be worse
• https://blogs.lse.ac.uk/polis/2018/10/27/2018-brazil-elections-the-power-of-social-media-and-the-threat-to-journalism/
38. NEW EU LEGISLATION: 2022, ENFORCED AUGUST 2023
• 22 April 2022: European policymakers reached agreement on the Digital Services Act (DSA).
• The European Parliament approved the DSA along with the Digital Markets Act on 5 July 2022.
• In September, the Council of the European Union (member states) is expected to formally adopt the DSA.
• Once published in the Official Journal of the European Union, it comes into force 20 days after publication,
• and will apply fifteen months after coming into force or on 1 January 2024, whichever is later.
• Very Large Online Platforms (VLOPs) and search engines will need to comply with their obligations four months after they have been designated as such by the EU Commission.
40. DSA ALREADY BEING FLOUTED BY X?
• European Commission, Directorate-General for Communications Networks, Content and Technology,
• Digital Services Act – Application of the risk management framework to Russian disinformation campaigns,
• Publications Office of the European Union, 2023
• https://data.europa.eu/doi/10.2759/764631
41. DIGITAL SERVICES ACT, RECITAL 68:
“IT IS APPROPRIATE THAT THIS REGULATION IDENTIFY CERTAIN
AREAS OF CONSIDERATION FOR SUCH CODES OF CONDUCT”
• In particular, risk mitigation measures concerning
• specific types of illegal content should be explored via self- and co-regulatory agreements.
• Another area for consideration is the possible negative impacts of systemic risks on society and democracy,
• such as disinformation or manipulative and abusive activities.
• This includes coordinated operations aimed at amplifying information, including disinformation,
• such as the use of bots or fake accounts for the creation of fake or misleading information,
• sometimes with a purpose of obtaining economic gain,
• which are particularly harmful for vulnerable recipients of the service, such as children.
42. HOW AUSTRALIA COPED WITH
DISINFORMATION IN THE 2022 ELECTION
• Tom Rogers (Australian Electoral Commissioner): official agreement with social media platforms
• ACMA (2023) 2nd report on the Code of Practice on Disinformation
• https://www.acma.gov.au/sites/default/files/2023-07/Digital%20platforms%20efforts%20under%20Code%20of%20Practice%20on%20Disinformation%20and%20Misinformation.pdf
• eSafety Commissioner has a guide on disinformation online.
• Marsden, C. and T. Meyer [2019] How can the law regulate removal of fake news?
• https://www.scl.org/articles/10425-how-can-the-law-regulate-removal-of-fake-news
43. 2022 AUSTRALIAN ELECTION WAS FAIR?
• Interference with the Trump (2016) election
• Mueller Report to Department of Justice (2019) heavily redacted by Trump’s Attorney General Bill Barr – we’re not sure what was in it
• Report On The Investigation Into Russian Interference In The 2016 Presidential Election
• 2020 Report more extensive SELECT COMMITTEE ON INTELLIGENCE ON RUSSIAN ACTIVE MEASURES
CAMPAIGNS AND INTERFERENCE IN THE 2016 U.S. ELECTION
• VOLUME 5: COUNTERINTELLIGENCE THREATS AND VULNERABILITIES
• https://www.intelligence.senate.gov/sites/default/files/documents/report_volume5.pdf
• 2016 Brexit referendum in the UK
• No Commons ‘pandemic of disinformation’ inquiry – left to Lord Puttnam in the Lords
• https://committees.parliament.uk/committee/407/democracy-and-digital-technologies-committee/
44. CO-REGULATION OF AUSTRALIAN CODE OF PRACTICE
ON DISINFORMATION
• 2021 voluntary Code on Disinformation
• https://www.oecd.org/stories/dis-misinformation-hub/voluntary-code-of-practice-on-misinformation-and-disinformation-1fe0be59/
• 2023 amended law to make ACMA co-regulator of the Code?
• Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023
• Department of Infrastructure, Transport, Regional Development, Communications and the Arts
• Consultation June-August. Bill in spring?
• Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023—Guidance Note (infrastructure.gov.au)
45. OFFICE OF THE E-SAFETY COMMISSIONER ESTABLISHED 2015
• Established to promote safety for children only
• The problem grew during the pandemic
• Affects all children
• Sextortion of boys a growing problem
46. ONLINE SAFETY ACT 2021 (CTH) TO COVER ADULTS:
• promoting online safety for all Australians
• complaints system for cyber bullying material targeted at an Australian child [s 30]
• complaints system for cyber-abuse material targeted at an Australian adult (that meets the threshold of
serious harm) [s 36]
• complaints and objections system for non-consensual sharing of intimate images [s 32]
• online content scheme (for illegal and restricted online content, including a complaints system) [ss 38-40]
• requiring internet service providers to block access to material showing abhorrent violent conduct (e.g.
terrorist acts) [s 39]
• issue civil penalties, enforceable undertakings, injunctions for breaches of the Act [Part 10, OSA 2021]
47. E-SAFETY COMMISSIONER CAN APPLY TO FEDERAL COURT
• A person may be ordered to cease providing a social media service
• if civil penalty provisions of the Act continue to be breached, and
• continued operation of the social media service represents a significant community safety risk [s 156 OSA 2021]
48. ART. 34 DSA COMPARABLE TO S.156 ONLINE SAFETY ACT?
• When does content or conduct cross the threshold, resulting in an “actual or foreseeable negative effect”
• severe enough to be considered systemic in relation to the risk factors specified in the DSA?