This document summarizes global trends in AI regulation. It discusses how many nations are developing national AI strategies and legislation to regulate high-risk AI applications. It also outlines some multi-national initiatives developing regulatory guidelines for AI through organizations like the OECD, G7, and UN. Common principles reflected in proposed regulation include transparency, fairness, safety, accountability, and privacy. The EU is proposing comprehensive AI legislation while other regions take different approaches to AI governance.
Rethinking Regulation in the Age of AI, by Lofred Madzou
This is a presentation of the keynote that Lofred Madzou (AI Project Lead at the World Economic Forum) gave on October 14th at the Instituto Nacional de Defensa de la Competencia y la Propiedad Intelectual (INDECOPI) in Lima. It presents some of the most important policy challenges associated with the development of AI and the means to address them.
Regulating Artificial Intelligence (AI)
Ethical Principles
Legal Frameworks
Transparency and Accountability
Risk Assessment and Mitigation
Data Governance and Privacy
Interdisciplinary Collaboration
International Cooperation
* "Responsible AI Leadership: A Global Summit on Generative AI"
*April 2023 guide for experts and policymakers
* Developing and governing generative AI systems
* + 100 thought leaders and practitioners participated
* Recommendations for responsible development, open innovation & social progress
* 30 action-oriented recommendations aim
* Navigate AI complexities
EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability.
As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.
Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:
1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.
IT Conferences 2024 To Navigate The Moral Landscape Of Artificial Intelligenc..., by Internet 2Conf
This insightful presentation delves into the key areas of AI ethics, examining moral trade-offs, and implementing ethical AI frameworks. It highlights the evolving nature of AI ethics debates, especially relevant in 2024's IT conferences like Internet 2.0 Conference. The talk aims to guide AI's future responsibly, emphasizing the importance of humane and ethical considerations in the rapidly advancing field of artificial intelligence.
The article, started one year ago, has become far more relevant these days. Its message stays the same, however: "Without laws and regulations there would be chaos, affecting our freedom and human nature."
Ethical Dimensions of Artificial Intelligence (AI), by Rinshad Choorappara
Explore the ethical landscape of Artificial Intelligence (AI) through our insightful PowerPoint presentation. Delve into crucial considerations that shape the responsible development and deployment of AI technologies. From privacy concerns and bias mitigation to transparency and accountability, this presentation covers the key ethical dimensions of AI. Gain a comprehensive understanding of the ethical challenges and solutions in the rapidly evolving world of artificial intelligence. Stay informed and empower your audience with the knowledge needed to navigate the ethical intricacies of AI responsibly.
Let us look at the good and bad effects of Artificial Intelligence and emerging technologies!
Artificial Intelligence (AI)
Ethics
Transparency
Explainability
Privacy and Data Protection
Accountability and Responsibility
Robustness and Safety
Collaboration and Interdisciplinary Approaches
Bias Mitigation and Diversity
Global Standards and Regulation
Role of AI Safety Institutes in Trustworthy AI.pdf, by Bob Marcus
Describes the possible role of AI Safety Institutes collaborating to enable trustworthy AI. The key areas are external red-team testing and incident-tracking databases.
Steve Wood: Generative AI and Data Protection, Asia Privacy Bridge October 202..., by stevewood900540
A presentation given by Steve Wood, former UK Deputy Information Commissioner and Director of Privacyx Consulting, to the 2023 Asia Privacy Bridge Conference in Seoul on October 12, 2023.
[DSC Europe 23] Bunmi Akinremi - Ethical Considerations in Predictive Analytics, by DataScienceConferenc1
As the data-driven landscape rapidly evolves, predictive analytics holds tremendous potential for transformative insights, with predictive models becoming integral to decision-making. However, this immense power demands an equally profound responsibility towards ethical considerations. In this talk, we delve into the crucial interplay between predictive analytics and three paramount ethical aspects: data privacy, bias mitigation, and accountability. We will explore strategies for safeguarding sensitive information, mitigating bias in algorithmic decision-making, and fostering transparency to ensure accountability. Join us to delve into the ethical dimensions of predictive analytics.
EU's Ethics Guidelines for Trustworthy AI (2019), by ELSE CORP
Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society.
This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI), of which a final version is due in March 2019. Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an "ethical purpose"; and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.
Extracted from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai as Follow Up for first power hour session with Mikael Eriksson on AI, October 30th in Stockholm
Richard van der Velde, Technical Support Lead for Cookiebot @CMP – “Artificia..., by Associazione Digital Days
The training of artificial intelligence systems is just the latest use of users’ personal data that companies collect online. But the information on how the data is used, what consent is needed or how it will be regulated is not always clear. Strong concerns have already been raised about data privacy and consent.
A Research Project Presentation: Online Policies for Enabling Fi....docx, by makdul
A Research Project Presentation
Online Policies for Enabling Financial Companies to Manage Privacy Issues
NAME:
Course:
1
Introduction
Companies in the financial sector handle data that are a priority target for hackers.
Organizations invest in vast technologies for protecting the data from unauthorized access.
However, they do not adequately invest in behavioral measures for safeguarding the data.
Companies in the financial sector face numerous attempts by cybercriminals who target the data stored in their systems. These corporations handle confidential data that could be used for committing crimes such as impersonation and illegal transfer of money (Noor & Hassan, 2019). It is a major concern whether financial institutions have effective policies that ensure the data are properly secured from both internal and external threats. Financial companies, especially those that operate across the country, have always focused on investing in technologies that promote the privacy of the data and the systems. They are deploying technologies, such as cloud computing, that promote the privacy of the data. They also use bcrypt, a password-hashing scheme whose deliberately slow, salted algorithm would take hackers decades to crack a single password. Though they invest in such technologies that cost millions of dollars, there are questions whether they invest in behavioral measures to protect the data systems (Noor & Hassan, 2019). Such measures require the use of online policies that ensure internal and external users adhere to best practices that make them less vulnerable to attacks, especially the social engineering attacks that target unsuspecting users.
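The slow, salted password hashing the paragraph attributes to bcrypt can be illustrated with Python's standard library. This is a minimal sketch, not any bank's actual implementation: it substitutes PBKDF2 (which ships in `hashlib`; bcrypt itself requires a third-party package), and the iteration count and helper names are illustrative assumptions. The key ideas are the same: a random per-password salt and a deliberately expensive hash.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    # A random salt ensures identical passwords produce different digests,
    # defeating precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    # Re-derive the digest with the stored salt; the high iteration count is
    # what makes brute-forcing each guess expensive for an attacker.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

Only the salt and digest are stored; the password itself is never written down, which is why a database breach does not immediately expose credentials.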
2
Literature Review
Financial companies have implemented policies for promoting desirable user behaviors.
They provide guidelines on how to use the networks.
They do not require users to follow strict rules, which suggests the policies are ineffective.
Financial companies have implemented policies on how customers access their data remotely. Such policies outline the standards that customers must follow, such as multi-factor authentication, which aims to ensure that no unauthorized users access the data (Suchitra & Vandana, 2016). The policies are communicated to the customers when they provide their data. It is an effective approach that ensures customers follow certain guidelines that promote the overall security of the data. However, Timothy Toohey (2014) questions whether the policies apply on the users' side, where behaviors that expose data to threats are more likely. For instance, customers may use devices that have weak anti-malware tools. Such devices create an avenue that a hacker can use to access the system.
3
Research Method
The researcher will employ a case-study design.
It means that the researcher will focus on individual cases and analyze them.
Interviews and observation will be the primary tools of data collection.
Generative AI: Responsible Path Forward, a presentation conducted during the DataHour webinar series by Analytics Vidhya and attended by more than a hundred data scientists and AI experts from around the world. The presentation addresses the importance of AI ethics and the development of responsible AI governance at tech firms to help mitigate AI risks and ethical issues.
Regulating Generative AI: A Pathway to Ethical and Responsible Implementation, by IJCI JOURNAL
Artificial intelligence (AI) is becoming more and more prevalent in our daily lives, and its potential applications are practically limitless. However, as with any technology, there are concerns about how AI could be misused or abused. One of the most serious concerns is the potential for discrimination, particularly against women or minorities, when AI systems are used for tasks like job hiring. Additionally, there are concerns about privacy and security, as AI could be used to monitor people's movements or launch cyberattacks. To address these concerns, regulations must be developed to ensure that AI is developed and used ethically and responsibly. These regulations should address issues like safety, privacy, security, and discrimination. Finally, it is important to educate the public about AI and how to use it safely and responsibly. In this paper, I will examine the AI regulations and challenges that exist today, particularly in the United States. Two regulations I will focus on are the AI in Government Act of 2020 and the National Artificial Intelligence Initiative Act of 2020. Additionally, I will examine two Executive Orders that have addressed the issue of AI in the federal government. Finally, I will conclude with some policy considerations and recommendations for federal agencies.
Responsible AI: An Example AI Development Process with Focus on Risks and Con..., by Patrick Van Renterghem
Organisations need to make sure that they use AI in an appropriate way. Martijn and Hugo explain how to ensure that the developments are ethically sound and comply with regulations, how to have end-to-end governance, and how to address bias and fairness, interpretability and explainability, and robustness and security.
During the conference, we looked at an example AI development process, focusing on the risks to be managed and the controls that can be established.
Global trends in AI regulation and associated technology standards
1. Global trends in AI regulation and associated technology standards
Dr. Ansgar Koene
Global AI Ethics and Regulatory Leader, EY
4 September 2023
2. Development of policy debate on AI (deliberative approach)
Fact finding and expert consultation
AI Principles
National AI Strategies
Legislative gap analysis
New or amended legislation to regulate AI
Overview of National AI Strategies (2020): 53 nations
Source: OECD (Organisation for Economic Co-operation and Development)
https://www.aisoma.de/useful-resources-on-artificial-intelligence/
3. Reactive policy development
• (Perceived) abuse of power through the use of AI triggers reactive policy responses
• E.g. due to protests over bias in accuracy and deployment, various US cities and states have moved to ban the use of face recognition AI by police or the public sector
• IBM, Microsoft and Amazon were prompted to back away from selling facial recognition tech to law enforcement
4. Multi-national initiatives (illustrative examples)
OECD: Actively developing regulatory guidelines, thought leadership and tracking tools through multilateral dialogue to support coordinated approaches to the responsible use of AI. The OECD also acts as secretariat for other multi-national initiatives such as the Global Partnership on AI (GPAI).
G7: Under Japan’s chairmanship, the G7 has established the “Hiroshima Process” for international discussion and harmonization of rules for the use and development of AI.
Council of Europe: Developing a Framework Convention on AI, Human Rights, Democracy and the Rule of Law.
UN: UNESCO developed the Recommendation on the Ethics of AI, which was adopted by all 193 member states. This will be a contributing part of the UN Global Digital Compact that is being drafted for ratification in 2024.
ASEAN: Currently assessing a set of AI Principles and Guidelines for the region.
5. Possible AI Governance Initiatives
Organisation and application:
• Prepare the workforce and increase awareness
• Create an AI governance framework
• Set up AI principles and governance programs
• Create a trusted AI framework
• Manage AI risks and implement appropriate controls
• Monitor and keep humans in the loop
Society:
• Funding of innovation/research
• Promote R&D programs
6. Inter-Governmental Principles for the Use and Development of AI (G20/OECD, UNESCO)
• Proportionality and Do No Harm
• Safety and Security
• Right to Privacy and Data Protection
• Multi-stakeholder and Adaptive Governance & Collaboration
• Responsibility and Accountability
• Transparency and Explainability
• Human Oversight and Determination
• Sustainability
• Awareness & Literacy
• Fairness and Non-Discrimination
7. Principles of AI Governance reflected in proposed regulation & guidance
Transparency: Responsible disclosure regarding AI systems; stakeholders should be informed if AI systems are used.
Fairness: Avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
Security and Safety: AI systems should be secure and function appropriately, such that they do not pose unreasonable safety risks.
Accountability: Owners, developers, providers and users of AI systems should be responsible for the proper functioning of the systems.
Privacy and Data Governance: Respect for privacy, quality and integrity of data, and access to data.
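These principles can be made concrete in engineering practice. As a minimal sketch, a disclosure record for an AI system might carry the metadata that transparency, accountability, privacy, safety and fairness each call for; all field names here are assumptions for illustration, not drawn from any regulation or standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names and structure are assumptions,
# not taken from any regulation or standard.
@dataclass
class AISystemDisclosure:
    """Minimal metadata record supporting the five governance principles."""
    system_name: str
    provider: str                    # accountable party (accountability)
    intended_purpose: str            # responsible disclosure (transparency)
    uses_personal_data: bool         # privacy and data governance
    known_limitations: list = field(default_factory=list)  # security and safety
    bias_evaluations: list = field(default_factory=list)   # fairness

    def user_notice(self) -> str:
        """Text shown so stakeholders know an AI system is in use."""
        return (f"You are interacting with '{self.system_name}', an AI system "
                f"operated by {self.provider} for: {self.intended_purpose}.")

notice = AISystemDisclosure(
    system_name="CV Screener",
    provider="Example Corp",
    intended_purpose="initial ranking of job applications",
    uses_personal_data=True,
).user_notice()
print(notice)
```

A record like this could be published alongside the system so that the transparency obligation (informing stakeholders that AI is in use) is met mechanically rather than ad hoc.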
8. Regulatory approaches to AI
United States: Emphasis on applying pre-existing laws
(e.g. Anti-Discrimination laws) and sector specific
regulations (e.g. Medical Devices), paired with voluntary
guidelines (e.g. NIST AI Risk Management framework) and
public commitments from industry.
China: Combines general guidelines with focused regulation to address specific areas of concern. Has passed new legislation specifically requiring AI-generated media content (text, video and audio) to be labelled as synthetic, and requiring providers to ensure that training data and content are “true and accurate”.
EU: Focus on harmonizing regulatory approaches to AI across the 27 member states by proposing overarching legislation to ensure that ‘high-risk’ AI applications don’t violate the safety, security and fundamental rights of persons (the EU AI Act).
Canada: Seeking to establish comprehensive legislation for AI through the AI and Data Act (C-27 AIDA). It builds on existing legislation mandating Algorithmic Impact Assessments for federal AI tools, expanding the scope to include the private sector and adding risk mitigation and management obligations for high-impact uses of AI.
UK: Proposing a framework of responsible AI
principles for existing domain-specific regulators
to apply when assessing uses of AI in the context of
their domain of competence.
Japan, South Korea, Singapore and many other
technologically developed states: Currently focused on
voluntary guidelines.
10. Five regulatory trends for AI
1. Regulation and guidance are consistent with the G20/OECD AI Principles
2. Proportionality of regulatory obligations based on risk/impact of the AI application
3. Combination of sector-specific and sector-agnostic requirements to meet the broad application domains of AI
4. AI-related policies are developed in the context of other digital policy priorities
5. Use of regulatory sandboxes and similar tools for agile learning and refining of policy implementation
11. The EU Digital Strategy: 4 Pillars, and potentially global implications
12. AI is a core piece of the EU’s digital strategy and regulatory tapestry
• AI Act (draft)
• GDPR
• DMA
• DSA
• Data Act
• Updated product liability
• AI Liability Directive (draft)
• …
13. AI - EU Policy drivers
Challenges – balancing between:
• Ecosystem of Trust & Ecosystem of Excellence
• AI Risk Assessment
• Access to Data without sacrificing Rights
• Coherent regulation across the EU27

And:
• Correct previous Digital Economy ‘failures’
• Continue to champion Fundamental Rights (e.g. GDPR)
• Where possible, increase regulatory convergence with ‘partner countries’ (e.g. the US, via the TTC process)
• Develop global alliances (e.g. Canada, Japan)

EU’s aim: to establish a legal framework for AI across Europe which will set requirements for high-risk applications of AI, from making sure that they use high-quality data to ensuring human oversight.

EU’s desired geopolitical positioning: between China and the US, with a focus on ‘Human-Centred, Trustworthy AI’.
14. EU: A risk-based approach to regulation
Risk assessment on the basis of the risk to the safety, security and fundamental rights of natural persons posed by the intended use of AI:

No risk or minimal risk
PERMITTED with no restrictions

Non-high risk
PERMITTED but subject to information/transparency obligations (e.g. bots that could be mistaken for humans)

High risk
PERMITTED SUBJECT TO COMPLIANCE with AI requirements and ex ante conformity assessment (e.g. recruitment, medical devices)

Unacceptable risk
PROHIBITED (e.g. social scoring)
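The four-tier scheme can be sketched as a simple lookup. The tier names follow the slide, but the example use-case lists below are illustrative assumptions, not the AI Act's actual annex enumerations, and a real classification depends on legal analysis of the intended use.

```python
# Hedged sketch of a four-tier risk classification. The use-case lists are
# illustrative assumptions, not the EU AI Act's actual annexes.
PROHIBITED = {"social scoring"}
HIGH_RISK = {"recruitment", "medical device"}
LIMITED_RISK = {"chatbot"}  # transparency/information obligations only

def risk_tier(intended_use: str) -> str:
    """Map an intended use onto one of the four risk tiers on the slide."""
    if intended_use in PROHIBITED:
        return "unacceptable risk: prohibited"
    if intended_use in HIGH_RISK:
        return "high risk: permitted subject to conformity assessment"
    if intended_use in LIMITED_RISK:
        return "limited risk: permitted with transparency obligations"
    return "minimal risk: permitted with no restrictions"

print(risk_tier("recruitment"))  # high risk: permitted subject to conformity assessment
print(risk_tier("spam filter"))  # minimal risk: permitted with no restrictions
```

The design point the slide makes is proportionality: obligations attach to the intended use, not to the underlying model, so the same model can fall into different tiers in different deployments.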
15. ASEAN Digital Masterplan – the need for an ecosystem of trust for AI
• “As ASEAN moves towards developing its digital economy, a trusted ecosystem is key - one
where businesses can benefit from digital innovations while consumers are confident to use AI.”
- ASEAN Digital Masterplan 2025
16. Key commonalities in current AI initiatives within ASEAN
Strategic Initiatives
AI talent and manpower: Developing and building a pool of skilled workers to support AI development.
Innovation and Research: Encouraging investments in AI-related research and innovation.
Governance: Implementing legal and regulatory frameworks for the development and application of AI systems.
Infrastructure: Setting up ICT infrastructure and systems to support data sharing and the development of AI systems.
Priority Areas
Many of the AI Governance guidelines identify priority areas for the development and deployment of AI solutions. Some common priority areas are: Healthcare, Education, Smart Cities, Finance, Manufacturing/Logistics, and Public Service.
17. 5 major areas of concern for AI systems
From the landscape study conducted, it was noted that many of the existing frameworks based on international AI ethics principles seek to address 5 major areas of concern for AI systems:
1. Know when one is using AI and AI systems
2. Understand how an AI model makes a decision
3. Ensure AI systems are reliable and safe
4. Lead to fair decisions / no unintended bias
5. Define oversight for AI systems
18. Draft ASEAN AI Principles
Transparency and Explainability
Fairness and Equity
Security and Safety
Human-centric
Privacy and Data Governance
Accountability and Integrity
Robustness and Reliability
19. Further considerations for policymakers
Other factors to consider in AI policy development include:
1. Ensuring regulators have access to sufficient subject matter expertise to successfully implement, monitor and enforce these policies.
2. Ensuring clarity on whether the intent is to regulate risks arising from the technology itself, from the way it is used, or both.
3. The extent to which risk management policies and procedures, as well as the responsibility for compliance, should apply to third-party vendors supplying AI-related products and services.
4. The importance of multi-lateral processes to make AI rules interoperable and comparable.
20. Standards and Regulation
• Standards establish technical detail, allowing legislation to
concentrate on policy objectives
• Standards can be one way to establish regulatory compliance
21. IEEE P70xx “Ethics AI Standards”
IEEE P2863 - Recommended Practice for Organizational Governance of AI
…
[national standards body]
23. IEEE 7000-2021: Model Process for Addressing Ethical Concerns During System Design
IEEE P7001: Transparency of Autonomous Systems
IEEE P7002: Data Privacy Process
IEEE P7003: Algorithmic Bias Considerations
IEEE P7004: Child and Student Data Governance
IEEE P7005: Employer Data Governance
IEEE P7006: Personal Data AI Agent Working Group
IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE 7010-2020: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being
IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources
IEEE P7013: Benchmarking of Automated Facial Analysis Technology
Legislation that is already in place
GDPR Article 22 on right to recourse in case of automated decision making with significant impact on individuals
DSA obligations on recommender systems and (automated) content moderation
DMA obligations on automated systems for product recommendations
Machinery Directive
Medical Devices
New legislation that will touch on AI (other than the AI Act)
Data Act
AI Liability directive
Cyber Resilience Act
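GDPR Article 22's restriction on solely automated decisions with significant effects on individuals can be illustrated with a hedged sketch: decisions above an impact threshold are escalated to a human review queue instead of being decided automatically. The threshold, field names and scoring rule here are assumptions for illustration, not anything the Regulation itself prescribes.

```python
# Illustrative Article 22-style safeguard: decisions with significant effect
# on a person are routed to human review rather than decided automatically.
# The threshold and scoring rule are assumptions, not legal requirements.
def decide_loan(score: float, amount: float, human_review_queue: list) -> str:
    SIGNIFICANT_AMOUNT = 10_000  # assumed threshold for "significant effect"
    if amount >= SIGNIFICANT_AMOUNT:
        # Significant impact: no solely automated decision; escalate to a human.
        human_review_queue.append({"score": score, "amount": amount})
        return "pending human review"
    # Low-impact cases may still be decided automatically.
    return "approved" if score >= 0.7 else "declined"

queue: list = []
print(decide_loan(0.9, 50_000, queue))  # pending human review
print(decide_loan(0.9, 2_000, queue))   # approved
```

The same keep-humans-in-the-loop pattern also anticipates the human-oversight obligations that the AI Act attaches to high-risk applications.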