The rapid development of Artificial Intelligence (AI) and its intensifying adoption in domains such as autonomous vehicles, lethal weapon systems, and robotics pose serious challenges to governments, which must manage the scale and speed of the socio-technical transitions under way. While a considerable literature is emerging on various aspects of AI, the governance of AI remains significantly underdeveloped. New applications of AI offer opportunities to increase economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risk that need to be addressed. To enhance the benefits of AI while minimising its adverse risks, governments worldwide need to better understand the scope and depth of the risks posed and to develop regulatory and governance processes and structures to address these challenges. This introductory article unpacks AI and explains why its governance deserves far more attention, given the myriad challenges it presents. It then summarises the special issue articles and highlights their key contributions. The special issue introduces the multifaceted challenges of AI governance, including emerging governance approaches, policy capacity building, the legal and regulatory challenges of AI and robotics, and outstanding issues and gaps that need attention. It showcases the state of the art in the governance of AI, aiming to help researchers and practitioners appreciate the challenges and complexities of AI governance and to highlight avenues for future exploration.
Keynote slides: Opportunities for Mutual Value Creation (Ross Dawson)
This document discusses opportunities for mutual value creation through collaboration between technology and society. It covers topics like enabling ecosystems through platforms, future leadership that taps human potential, and emerging opportunities to shape a post-capitalist world. The goal is collaborative value creation through open innovation and structures that benefit all stakeholders.
Keynote slides: Platform Strategy Creating Exponential Value in a Connected ... (Ross Dawson)
This document discusses platform strategy and how organizations can create exponential value in a connected world. The key points are:
- Many of the largest companies now generate over half their revenue from platform business models, which enable value-creating interactions between external producers and consumers.
- Strategies for platforms include focusing on external value creation, encouraging participation through positive feedback loops, and having governance structures to share value.
- As connectivity and participation increase, organizations must transition to platform-based structures and scale through external networks and APIs that enable participation.
1. The document discusses open government and how technology enables transparency, collaboration, and public participation in government.
2. It outlines three parts of open government: transparency through access to government data and information, collaboration between government agencies and jurisdictions, and participation through new online tools for public input.
3. Open government aims to increase trust in government through transparency, engage citizens through collaboration and participation, and take advantage of technology and networks to improve government services.
The World Wide Web Foundation created a "Web Index", which attempts to measure the growth, utility, and impact of the internet on people and countries. The study covered 61 countries, incorporating indicators on web-related policy, economics, and social impact, as well as connectivity and infrastructure.
Understanding Impact: mySociety's year in research (mysociety)
This document summarizes mySociety's research into the impacts of civic technology from 2016. It discusses framing the research around examining impacts before, during, and after using civic tech tools, on users, civic technologists, and the world. The research included surveys on demographics and attitudes in multiple countries, experiments on how people engage with civic sites, and case studies on government responses to civic tech. Key findings included that presenting information and messages in certain ways can encourage engagement, and ensuring user processes are clear and logical. The research aims to continue building partnerships to better understand civic tech's impacts and how to maximize positive effects.
Lee Rainie, Director of Internet and Technology Research at the Pew Research Center, presented this material on October 14, 2020 at a gathering sponsored by the International Institute of Communications. He described the most recent Center public opinion surveys since mid-March, covering the impact of the COVID-19 outbreak, racial justice protests that began in the summer, and the final stages of the 2020 presidential election campaign. He particularly examined how and why people are using the internet in the midst of multiple national crises and their concerns about digital divide and homework gap issues. And he covered how the Center has researched the impact of misinformation in recent years.
Startup.gov: Reworking Government Through Technical Innovation (Sarah Granger)
This document discusses how government agencies can stimulate innovation through a startup culture approach. It outlines some obstacles to innovation like bureaucracy and risk aversion, and contrasts that with the culture of startups which value opportunity, flexibility, risk tolerance, and collaboration. The document advocates for both internally and externally driven solutions to promote openness, sharing of information, and engagement between government and the public.
How open data and social media can work together to solve some of government's big problems. (Presented to the California Democratic Party Internet Caucus at Stanford University, Feb. 5, 2011.)
New meets old media: Civic Tech users in West Africa (mysociety)
This was presented by Nicole Anand from Reboot at the Impacts of Civic Technology Conference (TICTeC2016) in Barcelona on 28th April. You can find out more information about the conference here: https://www.mysociety.org/research/tictec-2016/
Computer-mediated communication (CMC) such as email has helped political communication in divided regions by allowing people to communicate across borders. While CMC gives people greater freedom to participate in political processes and express their views, it also enables politicians and governments to disseminate information and potentially exert control. CMC provides both opportunities and challenges for political participation and democracy.
The document discusses how open government through data sharing can transform democracy by making governments more open, innovative, responsive and smarter. It provides examples of open government initiatives around the world and argues that governments should make raw public data easily accessible, encourage public participation in designing applications, be responsive to social media, and make data analytics a core competency. The benefits of open government include cheaper and better services, increased transparency and accountability, and deeper civic engagement.
This research paper analyzes exposed cyber assets in critical sectors across US cities using data from the search engine Shodan. Some key findings include:
- Emergency services in Houston and Lafayette had many exposed assets.
- New York City, as a financial hub, had the most exposed assets in the financial sector.
- Exposed utilities assets were mostly in small towns, not large cities.
- Philadelphia alone had over 65,000 exposed devices in the education sector.
The paper also examines exposed industrial control systems and human machine interfaces that could impact critical infrastructure if accessed maliciously. It concludes by calling for improved awareness and security of these exposed city cyber assets.
Delivering value through data - future agenda 2019 (Future Agenda)
Delivering value through data - final report. Throughout 2018, Future Agenda canvassed the views of a wide range of 900 experts with different backgrounds and perspectives from around the world, to provide their insights on the future value of data. Supported by Facebook and many other organisations, we held 30 workshops across 24 countries in Africa, Asia, the Americas, and Europe. In them, we reviewed the data landscape across the globe, as it is now, and how experts think it will evolve over the next five to ten years.
The aim? To gain a better understanding of how perspectives and priorities differ across the world, and to use the diverse voices and viewpoints to help governments, organisations, and individuals to better understand what they need to do to realise data’s full potential.
We are not aware of any other exercise of this scale or scope. No other project we know of has carefully and methodically canvassed the views of such a wide range of experts from such a diverse range of backgrounds and geographical locations. The result, we hope, delivers a more comprehensive picture of the sheer variety of issues and views thrown up by a fast-evolving ‘data economy’ than can be found elsewhere. And, by providing this rich set of perspectives, we aim to help businesses and governments - to develop the policies, strategies, and innovations that realise the full potential of data (personal, social, economic, commercial), while addressing potential harms, both locally and globally.
For more details see the dedicated website www.deliveringvaluethroughdata.org
This document discusses government 2.0, which aims to make government more transparent, efficient and user-oriented through new technologies like blogs, wikis and social networking. It provides examples of government agencies using these tools, like a policy blog by the EU and a wiki created by US intelligence agencies. However, it notes that government 2.0 requires more than just tools - it must leverage drivers like reducing information asymmetries and changing citizen expectations to drive innovation without strict top-down control. The risks of too much transparency and ensuring participation remains balanced are also addressed.
This document discusses how big data and analytics will help the world of charities. It argues that the financial flows to charities, their operations, and government policy will all need to shift as data technology rapidly grows. Charities will need to address issues around who owns and manages the increasing data being collected. This data, from sources like charities, donors, news, and third parties, is currently being used by various stakeholders like funders, charities, and social enterprises to make funding decisions, identify opportunities, and drive impact. A case study is presented on a hackathon that used data and one on how data is influencing estate planning and charitable giving conversations.
The Impact of the Consumerization of IT on the Public Sector (GovLoop)
The document discusses how the consumerization of IT is transforming the public sector workforce by allowing government employees to work anywhere, anytime, and on any device. It highlights opportunities like improved employee morale and increased work flexibility. However, it also notes challenges in ensuring cybersecurity, data security, and addressing legal issues with policies that have not caught up with changing technologies. The document advocates that agencies build flexible infrastructures that can support new devices and workstyles while still maintaining proper governance over data and systems.
Government agencies are using the power of analytics to understand government performance as well as analyze key trends, catch fraud, and drive better citizen engagement. In this session, you will learn tips on using data to effectively do your job better. Learn key analytical strategies that will help you become an analytical star within your agency or organization.
Ten future global trends impacting the mining industry (Future Agenda)
Future of Mining
Technological, social and regulatory change all have major implications for the world's mining sector. Ahead of a speech next month in China, this is an overview of ten key global future trends that may well have significant impact. Drawn from projects on the future of data, work, land use and automation, it presents ten of the shifts under way and illustrates some of the associated areas of change.
If you have any comments or additions, do let us know and we can include in an updated view coming out later in the year.
Dr. Alan Borning (University of Washington Computer Science professor emeritus), presents and leads a discussion on the true costs of "free" services such as Facebook, Twitter, Google, etc.
Call for papers - ICPP6 T13P03 - GOVERNANCE AND POLICY DESIGN LESSONS FOR TRU... (Araz Taeihagh)
CALL FOR PAPERS
T13P03 - GOVERNANCE AND POLICY DESIGN LESSONS FOR TRUST BUILDING AND RESPONSIBLE USE OF AI, AUTONOMOUS SYSTEMS AND ROBOTICS
https://www.ippapublicpolicy.org/conference/icpp6-toronto-2023/panel-list/17/panel/governance-and-policy-design-lessons-for-trust-building-and-responsible-use-of-ai-autonomous-systems-and-robotics/1390
Abstract submission deadline: 31 January 2023
GENERAL OBJECTIVES, RESEARCH QUESTIONS AND SCIENTIFIC RELEVANCE
Artificial intelligence (AI), Autonomous Systems (AS) and Robotics are key features of the fourth industrial revolution; their applications are projected to add $15 trillion to the global economy by 2030 and to improve the efficiency and quality of public service delivery (Miller & Sterling, 2019). A McKinsey global survey found that over half of the organisations surveyed use AI in at least one function (McKinsey, 2020). The societal benefits of AI, AS, and Robotics have been widely acknowledged (Buchanan 2005; Taeihagh & Lim 2019; Ramchurn et al. 2012), and the acceleration of their deployment is a disruptive change affecting jobs, the economic and military power of countries, and the concentration of wealth in the hands of corporations (Pettigrew et al., 2018; Perry & Uuk, 2019).
However, the rapid adoption of these technologies threatens to outpace the regulatory responses of governments around the world, which must grapple with the increasing magnitude and speed of these transformations (Taeihagh 2021). Furthermore, concerns about these systems' deployment risks and unintended consequences are significant for citizens and policymakers. Potential risks include malfunctioning, malicious attacks, and objective mismatch due to software or hardware failures (Page et al., 2018; Lim and Taeihagh, 2019; Tan et al., 2022). There are also safety, liability, privacy, cybersecurity, and industry risks that are difficult to address (Taeihagh & Lim, 2019), and the opacity of AI operations has manifested in potential bias against certain groups of individuals, leading to unfair outcomes (Lim and Taeihagh 2019; Chesterman, 2021).
Mitigating these risks requires appropriate governance mechanisms, and traditional policy instruments may be ineffective due to insufficient information on industry developments, technological and regulatory uncertainty, coordination challenges between multiple regulatory bodies, and the opacity of the underlying technology (Scherer 2016; Guihot et al. 2017; Taeihagh et al. 2021), which necessitates more nuanced approaches to governing these systems. Consequently, demand for the governance of these systems has been increasing (Danks & London, 2017; Taeihagh, 2021).
THE TRANSFORMATION RISK-BENEFIT MODEL OF ARTIFICIAL INTELLIGENCE: BALANCING RI... (gerogepatton)
This paper summarizes the most cogent advantages and risks associated with Artificial Intelligence from an in-depth review of the literature. Then the authors synthesize the salient risk-related models currently being used in AI, technology and business-related scenarios. Next, in view of an updated context of AI along with the theories and models reviewed and expanded constructs, the writers propose a new framework called "The Transformation Risk-Benefit Model of Artificial Intelligence" to address the increasing fears and levels of AI risk. Using the model characteristics, the article emphasizes practical and innovative solutions where benefits outweigh risks, and three use cases in healthcare, climate change/environment and cyber security to illustrate the unique interplay of principles, dimensions and processes of this powerful AI transformational model.
1. The document discusses open government and how technology enables transparency, collaboration, and public participation in government.
2. It outlines three parts of open government: transparency through access to government data and information, collaboration between government agencies and jurisdictions, and participation through new online tools for public input.
3. Open government aims to increase trust in government through transparency, engage citizens through collaboration and participation, and take advantage of technology and networks to improve government services.
La World Wide Web Foundation creó un “índice de la web”, que intenta medir el crecimiento, utilidad e impacto de internet en las personas y los países. El estudio se desarrolló en 61 países, incorporando indicadores referidos a políticas, economía e impacto social de la web, como también conectividad e infraestructura.
Understanding Impact: mySociety's year in researchmysociety
This document summarizes mySociety's research into the impacts of civic technology from 2016. It discusses framing the research around examining impacts before, during, and after using civic tech tools, on users, civic technologists, and the world. The research included surveys on demographics and attitudes in multiple countries, experiments on how people engage with civic sites, and case studies on government responses to civic tech. Key findings included that presenting information and messages in certain ways can encourage engagement, and ensuring user processes are clear and logical. The research aims to continue building partnerships to better understand civic tech's impacts and how to maximize positive effects.
Lee Rainie, Director of Internet and Technology Research at the Pew Research Center, presented this material on October 14, 2020 at a gathering sponsored by the International Institute of Communications. He described the most recent Center public opinion surveys since mid-March, covering the impact of the COVID-19 outbreak, racial justice protests that began in the summer, and the final stages of the 2020 presidential election campaign. He particularly examined how and why people are using the internet in the midst of multiple national crises and their concerns about digital divide and homework gap issues. And he covered how the Center has researched the impact of misinformation in recent years.
Startup.gov: Reworking Government Through Technical InnovationSarah Granger
This document discusses how government agencies can stimulate innovation through a startup culture approach. It outlines some obstacles to innovation like bureaucracy and risk aversion, and contrasts that with the culture of startups which value opportunity, flexibility, risk tolerance, and collaboration. The document advocates for both internally and externally driven solutions to promote openness, sharing of information, and engagement between government and the public.
How open data and social media can work together to solve some of government's big problems. (Presented to the California Democratic Party Internet Caucus at Stanford University, Feb. 5, 2011.)
New meets old media: Civic Tech users in West Africamysociety
This was presented by Nicole Anand from Reboot at the Impacts of Civic Technology Conference (TICTeC2016) in Barcelona on 28th April. You can find out more information about the conference here: https://www.mysociety.org/research/tictec-2016/
Computer-mediated communication (CMC) such as email has helped political communication in divided regions by allowing people to communicate across borders. While CMC gives people greater freedom to participate in political processes and express their views, it also enables politicians and governments to disseminate information and potentially exert control. CMC provides both opportunities and challenges for political participation and democracy.
The document discusses how open government through data sharing can transform democracy by making governments more open, innovative, responsive and smarter. It provides examples of open government initiatives around the world and argues that governments should make raw public data easily accessible, encourage public participation in designing applications, be responsive to social media, and make data analytics a core competency. The benefits of open government include cheaper and better services, increased transparency and accountability, and deeper civic engagement.
This research paper analyzes exposed cyber assets in critical sectors across US cities using data from the search engine Shodan. Some key findings include:
- Emergency services in Houston and Lafayette had many exposed assets.
- New York City, as a financial hub, had the most exposed assets in the financial sector.
- Exposed utilities assets were mostly in small towns, not large cities.
- Philadelphia alone had over 65,000 exposed devices in the education sector.
The paper also examines exposed industrial control systems and human machine interfaces that could impact critical infrastructure if accessed maliciously. It concludes by calling for improved awareness and security of these exposed city cyber assets.
Delivering value through data future agenda 2019Future Agenda
Delivering value through data - final report. Throughout 2018, Future Agenda canvassed the views of a wide range of 900 experts with different backgrounds and perspectives from around the world, to provide their insights on the future value of data. Supported by Facebook and many other organisations, we held 30 workshops across 24 countries in Africa, Asia, the Americas, and Europe. In them, we reviewed the data landscape across the globe, as it is now, and how experts think it will evolve over the next five to ten years.
The aim? To gain a better understanding of how perspectives and priorities differ across the world, and to use the diverse voices and viewpoints to help governments, organisations, and individuals to better understand what they need to do to realise data’s full potential.
We are not aware of any other exercise of this scale or scope. No other project we know of has carefully and methodically canvassed the views of such a wide range of experts from such a diverse range of backgrounds and geographical locations. The result, we hope, delivers a more comprehensive picture of the sheer variety of issues and views thrown up by a fast-evolving ‘data economy’ than can be found elsewhere. And, by providing this rich set of perspectives, we aim to help businesses and governments - to develop the policies, strategies, and innovations that realise the full potential of data (personal, social, economic, commercial), while addressing potential harms, both locally and globally.
For more details see the dedicated website www.deliveringvaluethroughdata.org
This document discusses government 2.0, which aims to make government more transparent, efficient and user-oriented through new technologies like blogs, wikis and social networking. It provides examples of government agencies using these tools, like a policy blog by the EU and a wiki created by US intelligence agencies. However, it notes that government 2.0 requires more than just tools - it must leverage drivers like reducing information asymmetries and changing citizen expectations to drive innovation without strict top-down control. The risks of too much transparency and ensuring participation remains balanced are also addressed.
This document discusses how big data and analytics will help the world of charities. It argues that the financial flows to charities, their operations, and government policy will all need to shift as data technology rapidly grows. Charities will need to address issues around who owns and manages the increasing data being collected. This data, from sources like charities, donors, news, and third parties, is currently being used by various stakeholders like funders, charities, and social enterprises to make funding decisions, identify opportunities, and drive impact. A case study is presented on a hackathon that used data and one on how data is influencing estate planning and charitable giving conversations.
The Impact of the Consumerization of IT on the Public SectorGovLoop
The document discusses how the consumerization of IT is transforming the public sector workforce by allowing government employees to work anywhere, anytime, and on any device. It highlights opportunities like improved employee morale and increased work flexibility. However, it also notes challenges in ensuring cybersecurity, data security, and addressing legal issues with policies that have not caught up with changing technologies. The document advocates that agencies build flexible infrastructures that can support new devices and workstyles while still maintaining proper governance over data and systems.
Government agencies are using the power of analytics to understand government performance as well as analyze key trends, catch fraud, and drive better citizen engagement. In this session, you will learn tips on using data to effectively do your job better. Learn key analytical strategies that will help you become an analytical star within your agency or organization.
Ten future global trends impacting the mining industryFuture Agenda
Future of Mining
Technological, social and regulatory change are all have major implications for the world’s mining sector. Ahead of a speech next month in China, this an overview of ten key global future trends that may well have significant impact. Drawn from projects on the future of data, work, land use and automation, this presents ten of the shifts underway and illustrates some of the associated areas of change.
If you have any comments or additions, do let us know and we can include in an updated view coming out later in the year.
Dr. Alan Borning (University of Washington Computer Science professor emeritus), presents and leads a discussion on the true costs of "free" services such as Facebook, Twitter, Google, etc.
Call for papers - ICPP6 T13P03 - GOVERNANCE AND POLICY DESIGN LESSONS FOR TRU...Araz Taeihagh
CALL FOR PAPERS
T13P03 - GOVERNANCE AND POLICY DESIGN LESSONS FOR TRUST BUILDING AND RESPONSIBLE USE OF AI, AUTONOMOUS SYSTEMS AND ROBOTICS
https://www.ippapublicpolicy.org/conference/icpp6-toronto-2023/panel-list/17/panel/governance-and-policy-design-lessons-for-trust-building-and-responsible-use-of-ai-autonomous-systems-and-robotics/1390
Abstract submission deadline: 31 January 2023
GENERAL OBJECTIVES, RESEARCH QUESTIONS AND SCIENTIFIC RELEVANCE
Artificial intelligence (AI), Autonomous Systems (AS) and Robotics are key features of the fourth industrial revolution, and their applications are supposed to add $15 trillion to the global economy by 2030 and improve the efficiency and quality of public service delivery (Miller & Sterling, 2019). A McKinsey global survey found that over half of the organisations surveyed use AI in at least one function (McKinsey, 2020). The societal benefits of AI, AS, and Robotics have been widely acknowledged (Buchanan 2005; Taeihagh & Lim 2019; Ramchurn et al. 2012), and the acceleration of their deployment is a disruptive change impacting jobs, the economic and military power of countries, and wealth concentration in the hands of corporations (Pettigrew et al., 2018; Perry & Uuk, 2019).
However, the rapid adoption of these technologies threatens to outpace the regulatory responses of governments around the world, which must grapple with the increasing magnitude and speed of these transformations (Taeihagh 2021). Furthermore, concerns about these systems' deployment risks and unintended consequences are significant for citizens and policymakers. Potential risks include malfunctioning, malicious attacks, and objective mismatch due to software or hardware failures (Page et al., 2018; Lim and Taeihagh, 2019; Tan et al., 2022). There are also safety, liability, privacy, cybersecurity, and industry risks that are difficult to address (Taeihagh & Lim, 2019) and The opacity in AI operations has also manifested in potential bias against certain groups of individuals that lead to unfair outcomes (Lim and Taeihagh 2019; Chesterman, 2021).
These risks require appropriate governance mechanisms to be mitigated, and traditional policy instruments may be ineffective due to insufficient information on industry developments, technological and regulatory uncertainties, coordination challenges between multiple regulatory bodies and the opacity of the underlying technology (Scherer 2016; Guihot et al. 2017; Taeihagh et al. 2021), which necessitate the use of more nuanced approaches to govern these systems. Subsequently, the demand for the governance of these systems has been increasing (Danks & London, 2017; Taeihagh, 2021).
THE TRANSFORMATION RISK-BENEFIT MODEL OF ARTIFICIAL INTELLIGENCE: BALANCING RI...gerogepatton
This paper summarizes the most cogent advantages and risks associated with Artificial Intelligence from an in-depth review of the literature. Then the authors synthesize the salient risk-related models currently being used in AI, technology and business-related scenarios. Next, in view of an updated context of AI along with theories and models reviewed and expanded constructs, the writers propose a new framework called "The Transformation Risk-Benefit Model of Artificial Intelligence" to address the increasing fears and levels of AI risk. Using the model characteristics, the article emphasizes practical and innovative solutions where benefits outweigh risks and three use cases in healthcare, climate change/environment and cyber security to illustrate the unique interplay of principles, dimensions and processes of this powerful AI transformational model.
Why and how is the power of Big Tech increasing?Araz Taeihagh
Abstract: The growing digitalization of our society has led to a meteoric rise of large technology companies (Big Tech), which have amassed tremendous wealth and influence through their ownership of digital infrastructure and platforms. The recent launch of ChatGPT and the rapid popularization of generative artificial intelligence (GenAI) act as a focusing event that further accelerates the concentration of power in the hands of Big Tech. By using Kingdon’s multiple streams framework, this article investigates how Big Tech utilize their technological monopoly and political influence to reshape the policy landscape and establish themselves as key actors in the policy process. It explores the implications of the rise of Big Tech for policy theory in two ways. First, it develops the Big Tech-centric technology stream, highlighting its differing motivations and activities from the traditional innovation-centric technology stream. Second, it underscores the universality of Big Tech exerting ubiquitous influence within and across streams, primarily to serve their self-interests rather than to promote innovation. Our findings emphasize the need for a more critical exploration of the policy role of Big Tech to ensure balanced and effective policy outcomes in the age of AI.
Keywords: generative AI, governance, artificial intelligence, big tech, multiple streams framework
Regulating Generative AI: A Pathway to Ethical and Responsible ImplementationIJCI JOURNAL
The document discusses regulating artificial intelligence through ethical guidelines and policies. It examines existing regulations in the US and other countries that aim to ensure AI is developed and used safely, privately and without discrimination. Key initiatives discussed include the AI in Government Act, National Artificial Intelligence Initiative Act, and Executive Orders in the US, as well as the General Data Protection Regulation and upcoming AI Regulation in Europe. The document argues more regulation is still needed as AI becomes more advanced and integrated into daily life.
Data Sharing in Disruptive Technologies Lessons from Adoption of Autonomous S...Araz Taeihagh
Autonomous systems have been a key segment of disruptive technologies for which data are constantly collected, processed, and shared to enable their operations. The internet of things facilitates the storage and transmission of data and data sharing is vital to power their development. However, privacy, cybersecurity, and trust issues have ramifications that form distinct and unforeseen barriers to sharing data. This paper identifies six types of barriers to data sharing (technical, motivational, economic, political, legal, and ethical), examines strategies to overcome these barriers in different autonomous systems, and proposes recommendations to address them. We traced the steps the Singapore government has taken through regulations and frameworks for autonomous systems to overcome barriers to data sharing. The results suggest specific strategies for autonomous systems as well as generic strategies that apply to a broader set of disruptive technologies. To address technical barriers, data sharing within regulatory sandboxes should be promoted. Promoting public-private collaborations will help in overcoming motivational barriers. Resources and analytical capacity must be ramped up to overcome economic barriers. Advancing comprehensive data sharing guidelines and discretionary privacy laws will help overcome political and legal barriers. Further, enforcement of ethical analysis is necessary for overcoming ethical barriers in data sharing. Insights gained from this study will have implications for other jurisdictions keen to maximize data sharing to increase the potential of disruptive technologies such as autonomous systems in solving urban problems.
Organizational and social impact of Artificial IntelligenceAJHSSR Journal
ABSTRACT: Modern information technologies and the advent of machines powered by artificial intelligence (AI) have already strongly influenced the world of work in the 21st century. Computers, algorithms and software simplify everyday tasks, and it is impossible to imagine how most of our lives could be managed without them. However, the emergence of artificial intelligence (AI) has its own advantages and disadvantages. With the recent boom in big data and the continuous need for innovation, Artificial Intelligence is carving out a bigger place in our society. Through its computer-based capabilities, it brings new possibilities to tackle many issues within organizations. It also raises new challenges about its use and limits. Over the past few years, developments in artificial intelligence (AI) have captured the imagination of tens of millions of people around the world. Both the sophistication and the societal impact of intelligent technologies are set to increase substantially in the coming years and decades, as are the associated policy challenges. This includes how government agencies protect consumers and citizens from unethical, unsafe or unsound use of AI systems employed by companies or individuals in critical contexts such as health, finance, or employment. Some of the impacts of AI on organizations include power shifts; reassignment of decision-making responsibility; cost reduction and enhanced service; and personnel shifts and downsizing as some jobs are done by robots. This paper aims to provide a better understanding of the organizational and social impact of Artificial Intelligence in the organizational decision-making process. Unemployment is not the same as leisure, and there are deep links between unemployment and unhappiness, self-doubt, and isolation; understanding what policies and norms can break these links could significantly improve the median quality of life. Empirical and theoretical research on AI suggests that, over time, these negative perceptions will be addressed.
KEYWORDS: Artificial Intelligence, Algorithms, Decision making, Robots, Technologies.
Governmental Transparency in the Era of Artificial IntelligenceDennisdeVries21
The document discusses governmental transparency and the use of artificial intelligence. It notes that while governments have used AI since the 1980s, recently there has been a shift to more complex, data-driven AI models that are difficult to explain. As AI is increasingly used in governmental decision-making, transparency is important so that citizens understand decisions. The authors developed a framework to assess the quality of explanations for legal decisions provided by public administrations. Through a case study and survey in the Netherlands, they found communication could be improved by providing more interactive explanations of decisions, including details of calculations and laws used. A good explanation led to greater citizen understanding and acceptance of decisions.
Literature review on how Information Technology has impacted governing bodies’ ability to align public policy with stakeholder needs
Nowadays, governing bodies in both the public and private sectors deal with complex systems in their day-to-day operations. These systems are made up of different components with varying interactions and interrelationships with and/or among each other, making their management difficult and challenging. Indeed, Ruiz, Zabaleta & Elorza (2016) highlighted that public policymakers have to deal with complex systems involving heterogeneous agents that behave in non-linear ways, making their management difficult. Neziraj & Shaqiri (2018) also stated that policymakers are faced with problems that are complex and non-uniform due to many uncertainties and risk situ.
Artificial Intelligence In Information TechnologyKatie Naple
This document discusses artificial intelligence (AI) and its applications in information technology. It provides an overview of key AI technologies like machine learning, deep learning, natural language generation, speech recognition and biometrics. It then discusses how AI is applied in various sectors such as healthcare, business, manufacturing and automotive industries. The document also outlines some advantages of AI, such as improved performance and data security, as well as challenges like technical difficulties and data issues.
Digital Authoritarianism: Implications, Ethics, and SafeguardsAndy Aukerman
Artificial intelligence, or AI, constitutes a common motif in science fiction literature – the aspect of a “robot uprising”, where AI becomes sufficiently advanced such that it surpasses human intelligence and escapes human control. Common perceptions of AI focus on the ethical and human impacts of a malevolent, artificially intelligent agent itself. In this document, I wish to instead focus on an equally important, and, I will argue, more plausible case of sufficiently advanced AI which poses immense risk to human activity: the use of AI in conjunction with big data for authoritarian rule and population control. In these scenarios, AI has no agency, and instead serves as a sufficiently advanced and intelligent tool for human agents. Throughout this document, I summarize current and potential applications of this type of AI, explore the ethical ramifications, and last, propose and evaluate solutions and safeguards.
This document provides an overview of social innovation through connected devices and data analytics. It discusses how by 2020 there will be 28 billion connected devices generating vast amounts of data. It describes how companies like Hitachi are working to turn this machine data into intelligence through analytics to help address challenges in areas like transportation, public safety, energy and health. The document outlines the potential benefits of social innovation initiatives in smart cities, public safety, energy/water management, transportation and health. It emphasizes the importance of understanding where data comes from, managing and analyzing data securely, and applying industry expertise to focus on what information and applications can make the most meaningful impact.
The Future of Work: How AI is Shaping Employment and IndustriesKelvin Martins
"The Future of Work: How AI is Shaping Employment and Industries" is a concise eBook exploring AI's impact on workforce dynamics and industry landscapes. Through real-life examples and practical insights, readers gain an understanding of how AI is transforming job roles, organizational structures, and the overall landscape of work.
Artificial Intelligence (AI) has been a disruptive force within the communication industry. Regulation of this new technology has yet to keep pace with the technological development of generative AI. However, within the United States, the President, Congress, federal agencies, state legislatures, and municipal governments have attempted to provide a framework to regulate AI. These regulations attempt to strike a balance between allowing the technology to grow and guarding against issues of disinformation, discrimination, and privacy violations. This article examines the current trends in U.S. AI regulation, pointing out the legal and regulatory philosophies that guide early attempts to manage generative AI platforms. The article concludes with suggestions for PR practitioners to navigate the evolving parameters of AI regulation.
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains. Their potentially wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they are benefiting, and how they are structuring our social, economic, and interpersonal lives.
Governing the adoption of robotics and autonomous systems in long term care i...Araz Taeihagh
Robotics and autonomous systems have been dubbed viable technological solutions to address the incessant demand for long-term care (LTC) across the world, which is exacerbated by ageing populations. However, as with other emerging technologies, the adoption of robotics and autonomous systems in LTC poses risks and unintended consequences. In the health and LTC sectors, there are additional bioethics concerns associated with novel technology applications. Using an in-depth case study, we examined the adoption of novel technologies such as robotics and autonomous systems in LTC to meet the rising social care demand in Singapore consequent to its ageing population. We first described the LTC sector in Singapore and traced the development of robotics and autonomous systems deployed in the LTC setting. We then examined technological risks and ethical issues associated with their applications. In addressing these technological risks and ethical concerns, Singapore has adopted a regulatory sandbox approach that fosters experimentation through the creation of a robotics test-bed and the initiation of various robotics pilots in different health clusters. The stakeholders largely envision positive scenarios of human-robot coexistence in the LTC setting. When robots can take over routine and manual care duties in the future, human care workers can be freed up to provide more personalised care to care recipients. We also highlighted existing gaps in the governance of technological risks and ethical issues surrounding the deployment of robotics and autonomous systems in LTC that can be advanced as future research agendas.
Similar to Governance of artificial intelligence (20)
Unmasking deepfakes: A systematic review of deepfake detection and generation...Araz Taeihagh
Due to the fast spread of data through digital media, individuals and societies must assess the reliability of information. Deepfakes are not a novel idea, but they are now a widespread phenomenon. The impact of deepfakes and disinformation can range from infuriating individuals to affecting and misleading entire societies and even nations. There are several ways to detect and generate deepfakes online. By conducting a systematic literature analysis, in this study we explore key automatic detection and generation methods, frameworks, algorithms, and tools for identifying deepfakes (audio, images, and videos), and how these approaches can be employed in different situations to counter the spread of deepfakes and the generation of disinformation. Moreover, we explore state-of-the-art frameworks related to deepfakes to understand how emerging machine learning and deep learning approaches affect online disinformation. We also highlight practical challenges and trends in implementing policies to counter deepfakes. Finally, we provide policy recommendations based on analyzing how emerging artificial intelligence (AI) techniques can be employed to detect and generate deepfakes online. This study benefits the community and readers by providing a better understanding of recent developments in deepfake detection and generation frameworks. The study also sheds light on the potential of AI in relation to deepfakes.
A governance perspective on user acceptance of autonomous systems in SingaporeAraz Taeihagh
Autonomous systems that operate without human intervention by utilising artificial intelligence are a significant feature of the fourth industrial revolution. Various autonomous systems, such as driverless cars, unmanned drones and robots, are being tested in ongoing trials and have even been adopted in some countries. While there has been discussion of the benefits and risks of specific autonomous systems, more needs to be known about user acceptance of these systems. The reactions of the public, especially regarding novel technologies, can help policymakers better understand people's perspectives and needs, and involve them in decision-making for governance and regulation of autonomous systems. This study has examined the factors that influence the acceptance of autonomous systems by the public in Singapore, which is a forerunner in the adoption of autonomous systems. The Unified Theory of Acceptance and Use of Technology (UTAUT) is modified by introducing the role of government and perceived risk in using the systems. Using structural equation modelling to analyse data from an online survey (n = 500) in Singapore, we find that performance expectancy, effort expectancy, social influence, and trust in government to govern autonomous systems significantly and positively impact the behavioural intention to use autonomous systems. Perceived risk has a negative relationship with user acceptance of autonomous systems. This study contributes to the literature by identifying latent variables that affect behavioural intention to use autonomous systems, especially by introducing the factor of trust in government to manage risks from the use of these systems, and by filling a gap by studying the entire domain of autonomous systems instead of focusing narrowly on one application. The findings will enable policymakers to understand the perceptions of the public in regard to adoption and regulation, and designers and manufacturers to improve user experience.
The soft underbelly of complexity science adoption in policymakingAraz Taeihagh
The deepening integration of social-technical systems creates immensely complex environments, creating increasingly uncertain and unpredictable circumstances. Given this context, policymakers have been encouraged to draw on complexity science-informed approaches in policymaking to help grapple with and manage the mounting complexity of the world. For nearly eighty years, complexity-informed approaches have been promising to change how our complex systems are understood and managed, ultimately assisting in better policymaking. Despite the potential of complexity science, in practice, its use often remains limited to a few specialised domains and has not become part and parcel of the mainstream policy debate. To understand why this might be the case, we question why complexity science remains nascent and not integrated into the core of policymaking. Specifically, we ask what the non-technical challenges and barriers are preventing the adoption of complexity science into policymaking. To address this question, we conducted an extensive literature review. We collected the scattered fragments of text that discussed the non-technical challenges related to the use of complexity science in policymaking and stitched these fragments into a structured framework by synthesising our findings. Our framework consists of three thematic groupings of the non-technical challenges: (a) management, cost, and adoption challenges; (b) limited trust, communication, and acceptance; and (c) ethical barriers. For each broad challenge identified, we propose a mitigation strategy to facilitate the adoption of complexity science into policymaking. We conclude with a call for action to integrate complexity science into policymaking further.
Development of New Generation of Artificial Intelligence in ChinaAraz Taeihagh
How did China become one of the leaders in AI development, and will China prevail in the ongoing AI race with the US? Existing studies have focused on the Chinese central government’s role in promoting AI. Notwithstanding the importance of the central government, a significant portion of the responsibility for AI development falls on local governments’ shoulders. Local governments have diverging interests, capacities and, therefore, approaches to promoting AI. This poses an important question: How do local governments respond to the central government’s policies on emerging technologies, such as AI? This article answers this question by examining the convergence or divergence of central and local priorities related to AI development by analysing the central and local AI policy documents and the provincial variations by focusing on the diffusion of the New Generation Artificial Intelligence Development Plan (NGAIDP) in China. Using a unique dataset of China’s provincial AI-related policies that cite the NGAIDP, the nature of diffusion of the NGAIDP is examined by conducting content analysis and fuzzy-set Qualitative Comparative Analysis (fsQCA). This study highlights the important role of local governments in China’s AI development and emphasises examining policy diffusion as a political process.
Governing disruptive technologies for inclusive development in citiesAraz Taeihagh
Abstract
Cities are increasingly adopting advanced technologies to address complex challenges. Applying technologies such as information and communication technology, artificial intelligence, big data analytics, and autonomous systems in cities' design, planning, and management can cause disruptive changes in their social, economic, and environmental composition. Through a systematic literature review, this research develops a conceptual model linking (1) the dominant city labels relating to tech-driven urban development, (2) the characteristics and applications of disruptive technologies, and (3) the current understanding of inclusive urban development. We extend the discussion by identifying and incorporating the motivations behind adopting disruptive technologies and the challenges they present to inclusive development. We find that inclusive development in tech-driven cities can be realised if governments develop suitable adaptive regulatory frameworks for involving technology companies, build policy capacity, and adopt more adaptive models of governance. We also stress the importance of acknowledging the influence of digital literacy and smart citizenship, and exploring other dimensions of inclusivity, for governing disruptive technologies in inclusive smart cities.
Sustainable energy adoption in poor rural areasAraz Taeihagh
Abstract
A growing body of literature recognises the role of local participation by end users in the successful implementation of sustainable development projects. Such community-based initiatives are widely assumed to be beneficial in providing additional savings, increasing knowledge and skills, and improving social cohesion. However, there is a lack of empirical evidence regarding the success (or failure) of such projects, as well as a lack of formal impact assessment methodologies that can be used to assess their effectiveness in meeting the needs of communities. Using a case study approach, we investigate the effectiveness of community-based energy projects in regard to achieving long-term renewable energy technology (RET) adoption in energy-poor island communities in the Philippines. This paper provides an alternative analytical framework for assessing the impact of community-based energy projects by defining RET adoption as a continuous and relational process that co-evolves and co-produces over time, highlighting the role of social capital in the long-term RET adoption process. In addition, by using the Social Impact Assessment methodology, we study off-grid, disaster-vulnerable and energy-poor communities in the Philippines and we assess community renewable energy (RE) projects implemented in those communities. We analyse the nature of participation in the RET adoption process, the social relations and interactions formed between and among the different stakeholders, and the characteristics, patterns and challenges of the adoption process.
Highlights
• Community-based approaches aid state-led renewable energy in off-grid areas.
• Social capital in communities addresses immediate energy needs in affected areas.
• Change Mapping in Social Impact Assessment shows community-based RE project impacts.
• Long-term renewable energy adoption involves co-evolving hardware, software, orgware.
• Successful adoption relies on communal mechanisms to sustain renewable energy systems.
Smart cities as spatial manifestations of 21st century capitalismAraz Taeihagh
Globally, smart cities attract billions of dollars in investment annually, with related market opportunities forecast to grow year-on-year. The enormous resources poured into their development consist of financial capital, but also natural, human and social resources converted into infrastructure and real estate. The latter act as physical capital storage and sites for the creation of digital products and services expected to generate the highest value added. Smart cities serve as temporary spatial fixes until new and better investment opportunities emerge. Drawing from a comprehensive range of publications on capitalism, this article analyzes smart city developments as a typifier of 21st century capital accumulation, where the financialization of various capitals is the overarching driver and ecological overshoot and socio-economic undershoot are the main negative consequences. It closely examines six spatial manifestations of the smart city – science parks and smart campuses; innovation districts; smart neighborhoods; city-wide and city-regional smart initiatives; urban platforms; and alternative smart city spaces – as receptacles for the conversion of various capitals. It also considers the influence of different national regimes and institutional contexts on smart city developments. This is used, in the final part, to open a discussion about opportunities to temper the excesses of 21st century capitalism.
Highlights
• Recent academic literature on modern capitalism and smart city development are brought together
• Different interpretations and denominations of 21st century capitalism are mapped and synthesized into an overview box
• Six spatial manifestations of the smart city are identified and thoroughly described, with their major institutions, actors and resources
• Five different types of capital (natural, human, social, physical and financial) are mapped, along with an analysis of how further financialization affects conversion processes between them
• Options to mitigate exclusionary tendencies of capitalism in the digital age are explored, based on the varieties of capitalism literature
Digital Ethics for Biometric Applications in a Smart CityAraz Taeihagh
From border control using fingerprints to law enforcement with video surveillance to self-activating devices via voice identification, biometric data is used in many applications in the contemporary context of a Smart City. Biometric data consists of human characteristics that can identify one person from others. Given the advent of big data and the ability to collect large amounts of data about people, data sources ranging from fingerprints to typing patterns can build an identifying profile of a person. In this article, we examine different types of biometric data used in a smart city based on a framework that differentiates between profile initialization and identification processes. Then, we discuss digital ethics within the usage of biometric data along the lines of data permissibility and renewability. Finally, we provide suggestions for improving biometric data collection and processing in the modern smart city.
A realist synthesis to develop an explanatory model of how policy instruments... - Araz Taeihagh
Abstract
Background
Child and maternal health, a key marker of overall health system performance, is a policy priority area by the World Health Organization and the United Nations, including the Sustainable Development Goals. Previous realist work has linked child and maternal health outcomes to globalization, political tradition, and the welfare state. It is important to explore the role of other key policy-related factors. This paper presents a realist synthesis, categorising policy instruments according to the established NATO model, to develop an explanatory model of how policy instruments impact child and maternal health outcomes.
Methods
A systematic literature search was conducted to identify studies assessing the relationships between policy instruments and child and maternal health outcomes. Data were analysed using a realist framework. The first stage of the realist analysis process was to generate micro-theoretical initial programme theories for use in the theory adjudication process. Proposed theories were then adjudicated iteratively to produce a set of final programme theories.
Findings
From a total of 43,415 unique records, 632 records proceeded to full-text screening and 138 papers were included in the review. Evidence from 132 studies was available to address this research question. Studies were published from 1995 to 2021; 76% assessed a single country, and 81% analysed data at the ecological level. Eighty-eight initial candidate programme theories were generated. Following theory adjudication, five final programme theories were supported. According to the NATO model, these were related to treasure, organisation, authority-treasure, and treasure-organisation instrument types.
Conclusions
This paper presents a realist synthesis to develop an explanatory model of how policy instruments impact child and maternal health outcomes from a large, systematically identified international body of evidence. Five final programme theories were supported, showing how policy instruments play an important yet context-dependent role in influencing child and maternal health outcomes.
Addressing Policy Challenges of Disruptive Technologies - Araz Taeihagh
This special issue examines the policy challenges and government responses to disruptive technologies. It explores the risks, benefits, and trade-offs of deploying disruptive technologies, and examines the efficacy of traditional governance approaches and the need for new regulatory and governance frameworks. Key themes include the need for government stewardship, taking adaptive and proactive approaches, developing comprehensive policies accounting for technical, social, economic, and political dimensions, conducting interdisciplinary research, and addressing data management and privacy challenges. The findings enhance understanding of how governments can navigate the complexities of disruptive technologies and develop policies to maximize benefits and mitigate risks.
Navigating the governance challenges of disruptive technologies: insights from... - Araz Taeihagh
The proliferation of autonomous systems like unmanned aerial vehicles, autonomous vehicles and AI-powered industrial and social robots can benefit society significantly, but these systems also present significant governance challenges in operational, legal, economic, social, and ethical dimensions. Singapore’s role as a front-runner in the trial of autonomous systems presents an insightful case to study whether the current provisional regulations address the challenges. With multiple stakeholder involvement in setting provisional regulations, government stewardship is essential for coordinating robust regulation and helping to address complex issues such as ethical dilemmas and social connectedness in governing autonomous systems.
A scoping review of the impacts of COVID-19 physical distancing measures on v... - Araz Taeihagh
Most governments have enacted physical or social distancing measures to control COVID-19 transmission. Yet little is known about the socio-economic trade-offs of these measures, especially for vulnerable populations, who are exposed to increased risks and are susceptible to adverse health outcomes. To examine the impacts of physical distancing measures on the most vulnerable in society, this scoping review screened 39,816 records and synthesised results from 265 studies worldwide documenting the negative impacts of physical distancing on older people, children/students, low-income populations, migrant workers, people in prison, people with disabilities, sex workers, victims of domestic violence, refugees, ethnic minorities, and people from sexual and gender minorities. We show that prolonged loneliness, mental distress, unemployment, income loss, food insecurity, widened inequality and disruption of access to social support and health services were unintended consequences of physical distancing that impacted these vulnerable groups and highlight that physical distancing measures exacerbated the vulnerabilities of different vulnerable populations.
Call for papers - ICPP6 T13P05 - PLATFORM GOVERNANCE IN TURBULENT TIMES.docx - Araz Taeihagh
CALL FOR PAPERS
T13P05 - PLATFORM GOVERNANCE IN TURBULENT TIMES
https://www.ippapublicpolicy.org/conference/icpp6-toronto-2023/panel-list/17/panel/platform-governance-in-turbulent-times/1428
Abstract submission deadline: 31 January 2023
GENERAL OBJECTIVES, RESEARCH QUESTIONS AND SCIENTIFIC RELEVANCE
Platforms significantly increase the ease of interactions and transactions in our societies. Crowdsourcing and sharing economy platforms, for instance, enable interactions between various groups ranging from casual exchanges among friends and colleagues to the provision of goods, services, and employment opportunities (Taeihagh 2017a). Platforms can also facilitate civic engagements and allow public agencies to derive insights from a critical mass of citizens (Prpić et al. 2015; Taeihagh 2017b). More recently, governments have experimented with blockchain-enabled platforms in areas such as e-voting, digital identity and storing public records (Kshetri and Voas, 2018; Taş & Tanrıöver, 2020; Sullivan and Burger, 2019; Das et al., 2022).
How platforms are implemented and managed can introduce various risks. Platforms can diminish accountability, reduce individual job security, widen the digital divide and inequality, undermine privacy, and be manipulated (Taeihagh 2017a; Loukis et al. 2017; Hautamäki & Oksanen 2018; Ng and Taeihagh 2021). Data collected by platforms, how platforms conduct themselves, and the level of oversight they provide on the activities conducted within them by users, service providers, producers, employers, and advertisers have significant consequences ranging from privacy and ethical concerns to affecting the outcomes of elections. Fake news on social media platforms has become a contentious public issue, as social media platforms offer third parties various digital tools and strategies that allow them to spread disinformation to achieve self-serving economic and political interests and to distort and polarise public opinion (Ng and Taeihagh 2021). The risks and threats of AI-curated and AI-generated content, such as that produced by Generative Pre-trained Transformers (GPT-3) (Brown et al., 2020) and generative adversarial networks (GANs) (Goodfellow et al., 2014), are also on the rise, while new risks are emerging from the adoption of blockchain technology, such as security vulnerabilities and privacy concerns (Trump et al. 2018; Mattila & Seppälä 2018; Das et al. 2022).
The adoption of platforms was further accelerated by COVID-19, highlighting their governance challenges.
Call for papers - ICPP6 T07P01 - EXPLORING TECHNOLOGIES FOR POLICY ADVICE.docx - Araz Taeihagh
CALL FOR PAPERS
T07P01 - EXPLORING TECHNOLOGIES FOR POLICY ADVICE
https://www.ippapublicpolicy.org/conference/icpp6-toronto-2023/panel-list/17/panel/exploring-technologies-for-policy-advice/1295
Abstract submission deadline: 31 January 2023
GENERAL OBJECTIVES, RESEARCH QUESTIONS AND SCIENTIFIC RELEVANCE
Knowledge and expertise are key components of policy-making and policy design, and many institutions and processes exist – universities, professional policy analysts, think tanks, policy labs, etc. – to generate and mobilize knowledge for effective policies and policy-making. Despite many years of research, however, many critical issues remain unexplored, including the nature of knowledge and non-knowledge, how policy advice is organized into advisory systems or regimes, and when and how specific types of knowledge or evidence are transmitted and influence policy development and implementation. These long-standing issues have recently been joined by the use of Artificial Intelligence and Big Data, and other kinds of technological developments – such as crowdsourcing through open collaboration platforms, virtual labour markets, and tournaments – which hold out the promise of automating, enhancing, or expanding policy advisory activities in government. This panel seeks to explore all aspects of the application of current and future technologies to policy advice, including case studies of its deployment as well as theoretical and conceptual studies dealing with moral, epistemological and other issues surrounding its use.
What factors drive policy transfer in smart city development - Araz Taeihagh
Abstract
Smart city initiatives are viewed as an input to existing urban systems to solve various problems faced by modern cities. Making cities smarter implies not only technological innovation and deployment, but also having smart people and effective policies. Cities can acquire knowledge and incorporate governance lessons from other jurisdictions to develop smart city initiatives that are unique to the local contexts. We conducted two rounds of surveys involving 23 experts on an e-Delphi platform to consolidate their opinion on factors that facilitate policy transfer among smart cities. Findings show a consensus on the importance of six factors: having a policy entrepreneur; financial instruments; cities’ enthusiasm for policy learning; capacity building; explicit regulatory mechanisms; and policy adaptation to local contexts. Correspondingly, three policy recommendations were drawn. Formalizing collaborative mechanisms and joint partnerships between cities, setting up regional or international networks of smart cities, and establishing smart city repositories to collect useful case studies for urban planning and governance lessons will accelerate policy transfer for smart city development. This study sheds light on effective ways policymakers can foster policy learning and transfer, especially when a jurisdiction's capacity is insufficient to deal with the uncertainties and challenges ahead.
Perspective on research–policy interface as a partnership: The study of best ... - Araz Taeihagh
This article serves as a blueprint and proof-of-concept of Singapore’s Campus for Research Excellence and Technological Enterprise (CREATE) programmes in establishing effective collaborations with governmental partners. CREATE is a research consortium between Singapore’s public universities and international research institutions. The effective partnership of CREATE partners with government stakeholders is part of its mission to help government agencies solve complex issues in areas that reflect Singapore’s national interest. Projects are developed in consultation with stakeholders, and challenges are addressed on a scale that enables significant impact and provides solutions for Singapore and internationally. The article discusses the lessons learnt, highlighting that while research–policy partnerships are widespread, they are seldom documented. Moreover, effective communication proved to be a foundation for an effective partnership where policy and research partners were more likely to provide formal and informal feedback. Engaging policy partners early in the research co-development process was beneficial in establishing effective partnerships.
Whither policy innovation? Mapping conceptual engagement with public policy i... - Araz Taeihagh
Abstract
A transition to sustainable energy will require not only technological diffusion and behavioral change, but also policy innovation. While research on energy transitions has generated an extensive literature, the extent to which it has used the policy innovation perspective – entailing policy entrepreneurship or invention, policy diffusion, and policy success – remains unclear. This study analyzes over 8000 publications on energy transitions through a bibliometric review and computational text analysis to create an overview of the scholarship, map conceptual engagement with public policy, and identify the use of the policy innovation lens in the literature. We find that: (i) though the importance of public policy is frequently highlighted in the research, the public policy itself is analyzed only occasionally; (ii) studies focusing on public policy have primarily engaged with the concepts of policy mixes, policy change, and policy process; and (iii) the notions of policy entrepreneurship or invention, policy diffusion, and policy success are hardly employed to understand the sources, speed, spread, or successes of energy transitions. We conclude that the value of the policy innovation lens for energy transitions research remains untapped and propose avenues for scholars to harness this potential.
How transboundary learning occurs: Case Study of the ASEAN Smart Cities Netwo... - Araz Taeihagh
While policy study of smart city developments is gaining traction, it falls short of understanding and explaining knowledge transfers across national borders and cities. This article investigates how transboundary learning occurs through the initiation and development of a regional smart cities network: the ASEAN Smart Cities Network (ASCN). The article conducts an in-depth case study from data collected through key informant interviews and document analysis. Spearheaded by Singapore in 2017, ASCN is seen as a soft power extension for Singapore, a branding tool for ASEAN, and a symbiotic platform between the private sector and governments in the region. Most transboundary knowledge transfers within the ASCN are voluntary transfers of policy ideas. Effective branding, demand for knowledge, availability of alternative funding options, enthusiasm from the private actors, and heightened interest from other major economies are highlighted as facilitators of knowledge transfer. However, the complexity of governance structures, lack of political will and resources, limited policy capacity, and lack of explicit operational and regulatory mechanisms hinder transboundary learning. The article concludes that transboundary learning should go beyond exchanges of ideas and recommends promoting facilitators of knowledge transfer, building local policy capacity, encouraging collaborative policy transfer, and transiting from an information-sharing platform to tool/instrument-based transfer.
The governance of the risks of ridesharing in Southeast Asia - Araz Taeihagh
This document provides an in-depth analysis of the governance strategies used to regulate ridesharing risks in five Southeast Asian countries: the Philippines, Singapore, Indonesia, Malaysia, and Vietnam. It conducted interviews and collected secondary data on regulatory events from 2013 to 2021. The document finds that countries employed different governance strategies, including no-response, prevention-oriented, control-oriented, toleration-oriented, and adaptation-oriented strategies. It provides a timeline of major regulatory actions and examines the risks identified and approaches taken in each country.
How does fake news spread? Understanding pathways of disinformation spread thro... - Araz Taeihagh
What are the pathways for spreading disinformation on social media platforms? This article addresses this question by collecting, categorising, and situating an extensive body of research on how application programming interfaces (APIs) provided by social media platforms facilitate the spread of disinformation. We first examine the landscape of official social media APIs, then perform quantitative research on the open-source code repositories GitHub and GitLab to understand the usage patterns of these APIs. By inspecting the code repositories, we classify developers' usage of the APIs as official and unofficial, and further develop a four-stage framework characterising pathways for spreading disinformation on social media platforms. We further highlight how the stages in the framework were activated during the 2016 US Presidential Elections, before providing policy recommendations for issues relating to access to APIs, algorithmic content, advertisements, and suggest rapid response to coordinate campaigns, development of collaborative, and participatory approaches as well as government stewardship in the regulation of social media platforms.
traditionally performed by humans can be replaced and outperformed by machines (Bathaee, 2018; Osoba & Welser, 2017; Sætra, 2020).
While the technology can yield positive impacts for humanity, AI applications can
also generate unexpected and unintended consequences and pose new forms of risks
that need to be effectively managed by governments. As AI systems learn from data in
addition to programmed rules, unanticipated situations that the system has not been
trained to handle and uncertainties in human-machine interactions can lead AI
systems to display unexpected behaviours that pose safety hazards for their users (He
et al., 2019; Helbing, 2019; Knudson & Tumer, 2011; Lim & Taeihagh, 2019). In many
AI systems, biases in the data and algorithm have been shown to yield discriminatory
and unethical outcomes for different individuals in various domains, such as credit
scoring and criminal sentencing (Huq, 2019; Kleinberg, Ludwig, Mullainathan, &
Sunstein, 2018). The autonomous nature of AI systems presents issues around the
potential loss of human autonomy and control over decision-making, which can yield
ethically questionable outcomes in multiple applications such as caregiving and military combat (Firlej & Taeihagh, 2021; Leenes et al., 2017; Solovyeva & Hynek, 2018).
Responsibility and liability for harms resulting from the use of AI applications remain
ambiguous under many legal frameworks (Leenes et al., 2017; Xu & Borson, 2018) and
the automation of routine and manual tasks in domains such as data analysis, service, manufacturing and driving, enabled by machine-learning algorithms, chatbots and driverless vehicles, is expected to displace millions of jobs, with losses that will not be evenly distributed within and across countries (Linkov, Trump, Poinsatte-Jones, & Florin, 2018; Taeihagh & Lim, 2019). Managing the scale and speed of AI adoption and its attendant risks is becoming an increasingly central task for governments. However, in
many instances, the beneficiaries of these technologies do not bear the costs of their
risks, and these risks are transferred to society or to governments (Leenes et al., 2017;
Soteropoulos, Berger, & Ciari, 2018).
While there is considerable literature emerging on various aspects of AI, governance of AI remains a significantly underdeveloped area. To enhance the benefits of AI while minimising the adverse risks it poses, governments worldwide need to better understand the scope and depth of the risks posed. There is a need to reassess the efficacy
of traditional governance approaches such as the use of regulations, taxes, and subsidies,
which may be insufficient due to the lack of information and constant changes (Guihot, Matthew, & Suzor, 2017), and the speed and scale of AI adoption threaten to outpace the regulatory responses to address the concerns raised (Taeihagh, Ramesh, & Howlett,
2021). As such, governments face mounting pressures to design and establish new
regulatory and governance structures to deal with these challenges effectively. The
increasing recognition of AI governance across government, the public (Chen, Kuo, &
Lee, 2020; Zhang & Dafoe, 2019, 2020) and industry is evident from the emergence of
new governance frameworks in the meta-discourse on AI such as adaptive and hybrid
governance (Leiser & Murray 2016; Linkov et al., 2018; Tan & Taeihagh, 2021b), and self-regulatory initiatives such as standards and voluntary codes of conduct to guide AI design (Guihot et al., 2017; IEEE 2019). The first half of 2018 saw the release of new AI strategies from over a dozen countries, significant boosts in pledged financial support by governments for AI, and the heightened involvement of industry bodies in AI regulatory
development (Cath, 2018), raising further questions regarding what ideas and interests
2 A. TAEIHAGH
should shape AI governance to ensure inclusion and diverse representation of all
members of society (Hemphill, 2016; Jobin, Ienca, & Vayena, 2019).
This special issue introduces the multifaceted challenges of governance of Artificial
Intelligence, including emerging governance approaches to AI, policy capacity building,
and exploring legal and regulatory challenges of AI and Robotics. This introduction
unpacks AI and describes why the Governance of AI should be gaining far more attention
given the myriad of challenges it presents. The introduction then summarises the special issue articles and highlights their key contributions. Thanks to the diverse set of articles comprising this special issue, it showcases the state-of-the-art in the governance of AI and discusses the outstanding issues and gaps that need attention, aiming to enable researchers and practitioners to better appreciate the challenges that AI brings, understand the complexities of governance of AI, and identify future avenues for exploration.
2. AI – background and recent trends
Conceptions of AI date back to earlier efforts in developing artificial neural networks to
replicate human intelligence, which can be referred to as the ability to interpret and learn from information. Originally designed to understand neuron activity in the human
brain, more sophisticated neural networks were developed in the late 20th century with the aid of advancements in processing power to solve problems such as image and speech
recognition (Izenman 2008). These efforts led to the introduction of the concept of AI as
computer programs (or machines) that can perform predefined tasks at much higher
speeds and accuracy. In the most recent wave of AI developments facilitated by advancements in big data analytics, AI capabilities have expanded to include computer programs
that can learn from vast amounts of data and make decisions without human guidance,
commonly referred to as Machine-learning (ML) algorithms (Izenman 2008). Unlike
earlier algorithms that rely on pre-programmed rules to execute repetitive tasks, ML
algorithms are designed with rules about how to learn from data that involves ‘inferential
reasoning’, ‘perception’, ‘classification’, and ‘optimisation’ to replicate human decision-
making (Bathaee, 2018; Linkov et al., 2018). The learning process involves feeding these
algorithms with large data sets, from which they seek and test complex mathematical
correlations between candidate variables to maximise predictions of a specified outcome
(Kleinberg et al. 2018; Brauneis & Goodman, 2018). As these algorithms adapt their
decision-making rules with more experience, ML-driven decisions are primarily dependent on the data rather than on pre-programmed rules and, thus, typically cannot be
predicted well in advance (Mittelstadt et al., 2016).
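The data-driven learning process described above can be illustrated with a minimal, self-contained sketch (the data, learning rate, and labelling rule here are hypothetical choices for illustration, not from the article): a tiny logistic-regression classifier whose decision rule is fit entirely from examples rather than pre-programmed.

```python
# Minimal illustration: a decision rule learned from data rather than
# hand-programmed. A tiny logistic-regression classifier is fit by
# per-example gradient descent on synthetic data; its weights (the
# "rule") emerge entirely from the examples it sees.
import math
import random

random.seed(0)

# Synthetic training data: the true rule, unknown to the model,
# labels a point 1 when x1 + x2 > 1.
examples = []
for _ in range(200):
    x = (random.random(), random.random())
    examples.append((x, 1 if x[0] + x[1] > 1 else 0))

w = [0.0, 0.0]  # weights learned from data, not hand-coded
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # probability that x belongs to class 1

# Gradient-descent training: the decision rule adapts with experience.
for _ in range(500):
    for x, y in examples:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in examples) / len(examples)
print(round(accuracy, 2))
```

Because the weights emerge from whatever data is supplied, a different training set yields a different rule, which is precisely the data dependence and limited advance predictability the paragraph describes.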
Among AI experts and researchers, there is a broad consensus that AI still ‘falls
short’ of human cognitive abilities, and most AI applications that have been successful
to date stem from ‘narrow AI’ or ‘weak AI’, which refer to AI applications that can
perform tasks in specific and restricted domains, such as chess, image, and speech
recognition (Bostrom & Yudkowsky 2014; Lele, 2019b). Narrow AI is expected to
automate and replace many mid-skill professions due to their ability to execute routine,
cognitive tasks at much higher speeds and accuracy than their human counterparts
(Lele, 2019b; Linkov et al., 2018). In the future, it is expected that this form of AI will
eventually achieve ‘General AI’ or ‘artificial general intelligence’, a level of intelligence
POLICY AND SOCIETY 3
comparable to or surpassing humans due to the ability to generalise across different contexts that cannot be programmed in advance (Bostrom & Yudkowsky 2014; Wang
& Siau, 2019). This introduction and the articles comprising this special issue focus on
applications of narrow AI.
Both industry and governments worldwide have enthused over the potential societal
benefits arising from AI and thus, have accelerated the technology’s development and
deployment across various domains. Some of the impetuses for deploying AI include
increasing economic efficiency and quality of life, meeting labour shortages, tackling
ageing populations and strengthening national defence, and they vary between governments according to each nation’s unique strategic concerns (Lele, 2019; Taeihagh & Lim,
2019). For instance, governments in Japan and Singapore have supported the use of
assistive and surgical robots in healthcare and autonomous vehicles for public transportation to meet labour shortages and tackle ageing populations (Inagaki, 2019; SNDGO
2019; Taeihagh & Lim, 2019; Tan & Taeihagh, 2021, 2021b). Cost-savings and increased
productivity are the main motivations for AI adoption in various sectors, which is already
transforming the manufacturing, logistics, service, and maritime industries (World
Economic Forum, 2018). AI-based technologies are also a strategic military asset for
countries such as China, US, and Russia, whose governments have made significant
investments in robots, drones and fully autonomous weapon systems for national
defence and geopolitical influence (Allen, 2019; Lele, 2019).
3. Understanding the risks of AI
Many scholars highlight the safety issues that can arise from deploying AI in various
domains. A major challenge faced by most AI applications to date stems from their lack
of generalizability to different contexts, in which they can face unexpected situations
widely referred to as ‘corner cases’ that the system had not been trained to handle
(Bostrom & Yudkowsky 2014; Lim & Taeihagh, 2019; Pei, Cao, Yang, & Jana, 2017).
For instance, fatal crashes have already resulted from trials of Tesla’s partially autonomous vehicles due to the system’s misinterpretation of unique environmental conditions
that it had not previously experienced during testing. While various means of detecting
these corner cases in advance have been devised, such as simulating data on many
possible driving situations for autonomous vehicles, not all scenarios can be covered or
even envisioned by the human designers (Bolte, Bar, Lipinski, & Fingscheidt, 2019; Pei
et al., 2017). Due to the complexity and adaptive nature of ML processes, it is difficult for
humans to articulate or understand why and how a decision was made, which hinders the
identification of corner case behaviours in advance (Mittelstadt et al., 2016). As ML
decisions are highly data-driven and unpredictable, the system can exhibit vastly different
behaviours in response to almost identical inputs, which makes it difficult to specify ‘correct’
behaviours and verify their safety in advance (Koopman & Wagner, 2016). In particular,
scholars point out potential safety hazards that can also arise from the interaction
between AI systems and their users due to the problem of automation bias, where
humans afford more credibility to automated decisions due to the latter’s seemingly
objective nature and, thus, grow complacent and display less cautious behaviour while
using AI systems (Osoba & Welser, 2017; Taeihagh & Lim, 2019). Thus, human-machine
interfaces significantly shape the degree of safety, particularly in social settings that
involve frequent interactions with users such as robots for personal care, autonomous
vehicles, and service providers.
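A toy sketch of the corner-case problem discussed above (the features, training points, and threshold below are hypothetical assumptions, not from the article): a simple distance-to-nearest-training-example heuristic can flag inputs unlike anything the system has seen, though, as noted above, no such check can cover scenarios the designers never envisioned.

```python
# Toy illustration of "corner cases": inputs far from anything in the
# training data. A simple heuristic flags them by distance to the
# nearest training example; the threshold here is an arbitrary
# assumption for illustration.
import math

# Hypothetical 2-D training inputs the system has experienced.
training_inputs = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (0.4, 0.3)]

def nearest_distance(x):
    """Distance from x to the closest training example."""
    return min(math.dist(x, t) for t in training_inputs)

def is_corner_case(x, threshold=0.5):
    # Inputs unlike any training example fall into unvalidated
    # territory where the system's behaviour cannot be trusted.
    return nearest_distance(x) > threshold

print(is_corner_case((0.25, 0.25)))  # similar to training data -> False
print(is_corner_case((5.0, 5.0)))    # unlike anything seen -> True
```

Real systems use far richer novelty and out-of-distribution detectors, but the limitation is the same: the check only knows what the training data contained, so genuinely unforeseen situations can still slip through.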
The decision-making autonomy of AI significantly reduces human control over their
decisions, creating new challenges for ascribing responsibility and legal liability for the
harms imposed by AI on others. Existing legal frameworks for ascribing responsibility and liability for machine operation treat machines as tools that are controlled by
their human operator based on the assumption that humans have a certain degree of
control over the machine’s specification (Matthias 2004; Leenes & Lucivero, 2014).
However, as AI relies largely on ML processes that learn and adapt their own rules,
humans are no longer in control and, thus, cannot always be expected to bear responsibility for AI's behaviour. Under strict product liability, manufacturers and software
designers could be subject to liability for manufacturing defects and design defects, but
the unpredictability of ML decisions implies that many erroneous decisions made by AI
are beyond the control of and cannot be anticipated by these parties (Butcher & Beridze,
2019; Kim et al., 2017; Lim & Taeihagh, 2019). This raises critical questions regarding the
extent to which different parties in the AI supply chain will be held liable in different
accident scenarios and the degree of autonomy that is sufficient to 'limit' the responsibility of these parties for such unanticipated accidents (Osoba & Welser, 2017; Wirtz,
Weyerer, & Sturm, 2020). It is also widely recognised that excessive liability risks can
hinder long-run innovation and improvements to the technology, which highlights
a major issue regarding how governments can structure new liability frameworks that
balance the benefits of promoting innovation with the moral imperative of protecting
society from the risks of emerging technologies (Leenes et al., 2017).
Given the value-laden nature of the decisions automated by algorithms in various
aspects of society, AI systems can potentially exhibit behaviours that conflict with societal
values and norms, prompting concerns regarding the ethical issues that can arise from
AI’s rapid adoption. One of the most intensively discussed issues across industry and
academia is the potential for algorithmic decisions to be biased and discriminatory. As
ML algorithms can learn from data gathered from society to make decisions, they could
not only conflict with the original ethical rules they were programmed with but also
reproduce the inequalities and discriminatory patterns of society contained in such
data (Goodman & Flaxman, 2017; Osoba & Welser, 2017; Piano, 2020). If sensitive
personal characteristics such as gender or race in the data are used to classify individuals,
and some characteristics are found to negatively correlate with the outcome that the
algorithm is designed to optimise, the individuals categorised with these traits will be
penalised over others with different group characteristics (Liu, 2018). This could yield
disparate outcomes in terms of risk exposure and access to social and economic benefits.
Bias can also be introduced through the human designer in constructing the algorithm,
and even if sensitive attributes are removed from the data, there are techniques for ML
algorithms to use ‘probabilistically inferred’ variables as a proxy for sensitive attributes,
which is much harder to regulate (Kroll et al., 2016; Osoba & Welser, 2017). The risk of
bias and discrimination stemming from the optimisation process in AI algorithms
reflects a dominant concern in discussions of fairness in AI governance, namely the trade-off between equity and efficiency in algorithmic decision-making (Sætra, 2020); how a balance can be struck to produce socially desirable outcomes catering to different groups' ethical preferences remains subject to debate.
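A minimal, purely illustrative sketch of the proxy-variable problem discussed above. All names, rates, and the 90% proxy correlation below are hypothetical assumptions, not data from the article: the point is only that removing a sensitive attribute does not remove the bias carried by a correlated proxy.

```python
# Illustrative sketch (hypothetical data): a model that never sees the
# sensitive attribute 'group' can still reproduce historical bias through
# the correlated proxy 'area'.
import random

random.seed(0)

def make_population(n=2000):
    """Hypothetical records: 'group' is sensitive, 'area' is a correlated proxy."""
    people = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        # The proxy agrees with the sensitive attribute 90% of the time.
        area = group if random.random() < 0.9 else ("B" if group == "A" else "A")
        # Historical outcomes were biased in favour of group A.
        outcome = random.random() < (0.7 if group == "A" else 0.3)
        people.append({"group": group, "area": area, "outcome": outcome})
    return people

def favourable_rate(people, pred):
    selected = [p for p in people if pred(p)]
    return sum(p["outcome"] for p in selected) / len(selected)

train = make_population()

# The "debiased" model only learns the historical favourable-outcome rate
# per area -- that is, per value of the proxy; 'group' is never used.
learned = {a: favourable_rate(train, lambda p, a=a: p["area"] == a)
           for a in ("A", "B")}

# Scoring a fresh population with the proxy alone still disadvantages group B.
test = make_population()
score = {}
for g in ("A", "B"):
    members = [p for p in test if p["group"] == g]
    score[g] = sum(learned[p["area"]] for p in members) / len(members)

print(score)  # group A receives systematically higher scores than group B
```

Detecting such proxies in practice is far harder than in this toy case, since real datasets contain many variables whose correlation with sensitive attributes is only probabilistically inferable.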
POLICY AND SOCIETY 5
A vast body of literature and government reports have highlighted issues of data
privacy and surveillance that can arise from AI applications. As algorithms in AI systems
utilise sensors to collect data and big data technologies to store, process and transmit data
through external communication networks, there have been concerns regarding the
potential misuse of personal data by third parties and increasing calls for more holistic
data governance frameworks to ensure reliable sharing of data within and between
organisations (Gasser & Almeida, 2017; Janssen, Brous, Estevez, Barbosa, & Janowski,
2020). AI systems store extensive personal information about their users that can be
transmitted to third parties to profile individuals’ preferences, such as using past travel
data collected in autonomous vehicles to tailor advertisements to passengers (Chen et al.,
2020; Lim & Taeihagh, 2018), or using personal and medical information collected by
personal care robots and networked medical devices for the surveillance of individuals
(Guihot et al., 2017; Leenes et al., 2017; Tan, Taeihagh, & Tripathi, 2021). The ownership
of such data and how AI system developers should design these robots to adhere to
privacy laws are key concerns that remain to be addressed (Chen et al., 2020; Leenes et al.,
2017). Surveillance is also a key concern over the use of AI in many domains, such as
surveillance robots in the workplace that monitor employee performance and government agencies potentially using autonomous vehicles to track passenger movements, with
negative implications for democratic freedoms and personal autonomy (Leenes et al.,
2017; Lim & Taeihagh, 2018).
The autonomy assumed by AI systems to make decisions in place of humans can
introduce ethical concerns in their application across various sectors. Studies have
underlined the potential for personalisation algorithms used by digital platforms to
undermine the decision-making autonomy of data subjects by filtering information
presented to users based on their preferences and influencing their choices. By exerting
control over an individual's decision and reducing the 'diversity of information' provided, personalisation algorithms can reduce personal autonomy and, thus, be construed
as unethical (Mittelstadt et al., 2016). In healthcare, the use of robots to provide personal
care services has prompted concerns over the potential loss of autonomy and dignity of
care recipients if robots excessively restrict patients' mobility to avoid dangerous situations (Leenes et al., 2017; Tan et al., 2021). Studies have yet to examine how these risks
can be balanced against their potential benefits for autonomy in other scenarios, such as
autonomous vehicles increasing mobility for the disabled and elderly (Lim & Taeihagh,
2018), and personal care robots offering patients greater freedom of movement with the
assurance of being monitored (Leenes et al., 2017). In the military, autonomous weapon
systems such as drones and unmanned aerial vehicles have been developed to improve
the precision and reliability of military combat, planning and strategy, but there has been
increasing momentum across industry and academia, including prominent figures, highlighting their ethical and legal unacceptability (Lele, 2019; Roff, 2014). Central to these
concerns is the delegation of authority to a machine to exert lethal force ‘independently
of human determinations of its moral and legal legitimacy’ and the lack of controllability
over these adaptive systems that could amplify the consequences of failure, prompting
fears of a dystopian future where such weapons inflict casualties and escalate crises at
a much larger scale (Firlej & Taeihagh, 2021; Scharre, 2016; Solovyeva & Hynek, 2018).
Unemployment and social instability resulting from the automation of routine cognitive tasks remain among the most publicly debated issues concerning AI adoption
(Frey & Osborne, 2017; Linkov et al., 2018). The effects of automation are already felt in
sectors such as manufacturing, entertainment, healthcare, finance, and transport, as companies increasingly invest in AI to reduce labour costs and boost efficiency
(Linkov et al., 2018). While technological advancements have historically created new
jobs as well, there are concerns that the distribution of employment opportunities is
uneven across sectors and skill levels. Studies show that highly routine and cognitive
tasks that characterise many middle-skilled jobs are at a high risk of automation. In
contrast, tasks with relatively lower risks of automation are those that machines cannot
easily replicate – this includes manual tasks in low-skilled, service occupations that
require flexibility and ‘physical adaptability’, as well as high-skilled occupations in
engineering and science that require creative intelligence (Frey & Osborne, 2017; World Economic Forum, 2018). As high- and low-skilled occupations benefit from
increased wage premiums and middle-skilled jobs are being phased out, automation
could exacerbate income and social inequalities (Alonso et al., 2018).
4. Governing AI
4.1 Why AI governance is important
Understanding and managing the risks posed by AI is crucial to realise the benefits of the
technology. Increased efficiency and quality in the delivery of goods and services, greater
autonomy and mobility for the elderly and disabled, and improved safety from using AI
in safety-critical operations such as healthcare, transport and emergency response are among the many socio-economic benefits arising from AI that can propel smart and sustainable
development (Agarwal, Gurjar, Agarwal, & Birla, 2015; Lim & Taeihagh, 2018;
Yigitcanlar et al., 2018). However, as AI systems develop and increase in complexity, their risks and interconnectivity with other smart devices and systems will also increase,
necessitating the creation of both specific governance mechanisms, such as for healthcare, transport and autonomous weapons, and a broader global governance framework for AI (Butcher & Beridze, 2019).
4.2 Challenges to AI governance
The high degree of uncertainty and complexity of the AI landscape imposes many
challenges for governments in designing and implementing effective policies to govern
AI. Many challenges posed by AI stem from the nature of the problems, which are highly unpredictable, intractable and nonlinear, making it difficult for governments to formulate concrete objectives in their policies (Gasser & Almeida, 2017; Perry & Uuk, 2019).
ML systems’ inherent opacity and unpredictability pose technical challenges for
governments in ensuring the accountability of AI. Firstly, the opacity of complex ML
algorithms remains a major barrier to AI governance as it limits the extent of
transparency, explainability and accountability that can be achieved in AI systems
(Lim & Taeihagh, 2019; Mittelstadt et al., 2016). Even with mandated levels of
transparency and explainability of algorithms, even experts themselves may be unable to interpret how certain algorithmic outputs are derived from their inputs, and designing algorithms to be more explainable reduces their complexity, which has
been shown to undermine accuracy and performance (Felzmann, Villaronga, Lutz, &
Tamò-Larrieux, 2019; Piano, 2020). This issue is highlighted as a severe limitation of
the EU General Data Protection Regulation in increasing algorithmic transparency to
tackle discrimination (Goodman & Flaxman, 2017). Algorithms are often kept intentionally opaque by their developers to prevent cyber-attacks and to safeguard trade
secrets, which is legally justified by intellectual property rights, and the complexity of
extensive datasets used by ML algorithms makes it nearly impossible to identify and
remove all variables that are correlated with sensitive categories of personal data
(Carabantes, 2020; Goodman & Flaxman, 2016; Kroll et al., 2016). As most individuals
lack sufficient technical literacy or the willingness to pay for accessing such expertise
to help them interpret these explanations, requirements such as those in the GDPR for
mandated explanations are unlikely to inform and/or empower them (Carabantes,
2020; Felzmann et al., 2019). Secondly, ML decisions are unpredictable as they are
derived from the data and can differ significantly with slight changes in inputs. The
lack of human controllability over the behaviour of AI systems suggests the difficulty
of assigning liability and accountability for harms resulting from software defects, as
manufacturers and programmers often cannot predict the inputs and design rules
that could yield unsafe or discriminatory outcomes (Kim et al., 2017; Kroll et al.,
2016).
Data governance is a key issue surrounding the debate on AI governance, as multiple
organisational and technological challenges exist that impede effective control over data
and attribution of responsibility for data-driven decisions made by AI systems (Janssen
et al., 2020). Data fragmentation and lack of interoperability between systems limit an organisation's control over data flows throughout their entire life cycle, and shared roles between different parties in data sharing cloud the chain of accountability and causation between AI-driven decisions/events and the parties involved in facilitating those decisions
(Janssen et al., 2020).
Existing governance and regulatory frameworks are also ill-equipped to manage the
societal problems introduced by AI due to insufficient information for understanding the technology and the lag of regulation behind AI developments. Major
technology companies and AI developers such as Google, Facebook, Microsoft, and
Apple possess huge informational and resource advantages over governments in regulating AI, which significantly supersedes governments' traditional role in distributing and
controlling resources in society (Guihot et al., 2017). The information asymmetries
between technology companies and regulators compound the difficulty for the latter in
understanding and applying new or existing legislation to AI applications (Taeihagh
et al., 2021). Regulators are struggling to close these informational gaps and are lagging behind rapid developments in the technology, which in turn leads to the formulation of laws that are 'too general' or 'vague' and lack the specificity to effectively regulate the
technology (Guihot et al., 2017; Larsson, 2020). In particular, lawmakers can be deterred
from outlining specific rules and duties for algorithm programmers to allow for future
experimentation and modifications to code to improve the software, but doing so
provides room for programmers to evade responsibility and accountability for the
system’s resulting behaviour in society (Kroll et al., 2016). These challenges demonstrate
how the four governing resources traditionally used by governments for regulation are
insufficient to manage the risks arising from AI and the need for governments to find new
ways of acquiring information and devising effective policies that can adapt to the
evolving AI landscape (Guihot et al., 2017; Wirtz et al., 2020).
Amidst the issues with 'hard' regulatory frameworks, industry bodies and governments have increasingly adopted self-regulatory or 'soft law' approaches to govern AI
design, but they remain limited in their effectiveness. Soft law approaches refer to
‘nonbinding norms and techniques’ that create ‘substantive expectations that are not
directly enforceable’. Industry bodies have released voluntary standards, guidelines,
and codes of conduct (IEEE, 2019), and governments alike have formed expert committees to devise AI strategies and released multiple AI ethics guidelines and governance frameworks (PDPC et al., 2020; AI HLEG, 2019; Jobin et al., 2019). Guidelines can
be amended and adapted more rapidly than traditional regulation and are thus advantageous in keeping pace with technological developments (Hemphill, 2020; Larsson,
2020). This approach has been adopted in response to previous emerging technologies
and can promote ethical, fair, and non-discriminatory practices in AI design, but
many issues exist regarding their efficacy. Firstly, the voluntary nature of self-regulatory initiatives cannot assure that the outlined principles will always be adhered
to, particularly as they are often not subject to uniform enforcement standards
(Butcher & Beridze, 2019; Hemphill, 2016). Secondly, governments will face the
challenge of ensuring consistent application of these guidelines in designing the
same AI technology across different sectors if the principles differ across multiple
guidelines and are not well-coordinated with regulations (Cath et al., 2018; Guihot et al., 2017).
In addition, self-regulation alone could be insufficient and even undesirable for AI governance due to its inability to ensure inclusivity and representation of diverse
stakeholders. The deep involvement of industry stakeholders in developing ethical
principles and regulations for AI raises concerns that corporate interests dominate
AI regulations, which has been an ongoing critique of self-regulatory initiatives to
govern emerging technologies in general (Hemphill, 2016). For instance, major technology companies have exerted significant influence over the framing of AI issues and
the formulation of AI policy, such as through lobbying efforts and their inclusion in AI
expert groups formed by governments (Cath et al., 2018; Jobin et al., 2019). Studies
have highlighted the risks of regulatory capture by AI industries due to the significant informational advantages of AI developers, which make the latter's technological expertise 'particularly valuable' for regulators (Guihot et al., 2017). Furthermore, the
‘inscrutable’ and highly opaque nature of ML algorithms could be used by corporations
to legitimise deep industry involvement in AI regulation, and the choices behind these regulations are often made away from public scrutiny. The unbridled influence of
corporations, as well as key political figures in the AI landscape, could exacerbate
power imbalances and social inequalities, as the ideologies and interests of an elite few
could manifest themselves through the design of AI and the decisions that they make
in society (Jobin et al., 2019). To ensure greater inclusivity and diversity in AI
governance, more research is required to examine the key actors, their roles, the dominant ideas and values promoted in AI policies, whether these values are converging globally across different countries, and the degree to which they reflect society's interests or are politically motivated (Jobin et al., 2019; Mulligan &
Bamberger, 2018).
4.3 Steps forward for AI governance
The conceptual framing of AI is a crucial determinant of how the problems introduced by
AI are understood, whether they are included in policies and the degree of priority
afforded to the problem in public policy formulation (Perry & Uuk, 2019), but the issue
of framing has yet to be extensively discussed as a key component of AI governance. As
AI is still developing with the potential to grow more salient and diverse, the complexity
of its challenges suggests that decision-making in AI systems needs to be carefully
conceptualised according to their context of application, and these framing processes
should be subject to public debate (Cunneen, Mullins, & Murphy, 2019). For instance,
how policymakers frame the relative importance of different ethical principles arising
from a particular AI application, conflicts between ethical principles, and the justification
of their importance have critical implications on the resulting trade-offs from designing
AI systems and the public’s compliance with different ethical guidelines (Piano, 2020).
The initial framing of AI is also critical to avoid amplifying perceived risks and fears
surrounding new technologies and instead promote a more balanced discourse on what
AI is, the goals and norms that AI should reinforce in society, and the design requirements to maximise its benefits while minimising its risks (Cath et al., 2018; Cunneen et al., 2019).
To address the governance challenges posed by the uncertainty and complexity of AI
developments, there are increasing calls for the adoption of innovative governance
approaches such as adaptive governance and hybrid or ‘de-centred’ governance (Dafoe,
2018; Linkov et al., 2018b; Pagallo, Casanovas, & Madelin, 2019; Tan & Taeihagh, 2021b).
Characteristic of adaptive and hybrid governance is the diminished role of the government in controlling the distribution of resources in society. Hybrid governance is defined as a combination of state and non-state actors, or a blurring of the public/private distinction to different degrees, where regulation exists as a combination of industry standards and 'public regulatory oversight' (Guihot et al., 2017; Hemphill, 2016). Hybrid
governance can exist in the forms of co-regulation, enforced self-regulation, and meta-regulation (Hemphill, 2016), all of which emphasise the increasing role played by non-state actors and the need for 'ongoing assessment of the balance of power' between
private and public actors (Leiser & Murray, 2016). Similarly, adaptive governance
emphasises the need to shift away from ‘command and control’ measures (Gasser &
Almeida, 2017) towards more flexible approaches characterised by the iterative adjustment and improvement of regulations and policies as new information is gathered (Li,
Taeihagh, De Jong, & Klinke, 2021; Linkov et al., 2018; Tan & Taeihagh, 2021b). Adaptive
approaches are purported to be advantageous for proactively identifying and addressing
the risks introduced from ML systems that are expected to change over time, as well as
raising the public’s awareness of AI and engaging with the public to identify new issues
that have not yet entered the government’s agenda (Cihon, Maas, & Kemp, 2020; Linkov
et al., 2018; Pagallo et al., 2019). Flexibility is critical to enable diverse groups of
stakeholders to build consensus around the norms and trade-offs in designing AI
systems, as well as for global AI governance to be applicable across different geographical,
cultural, and legal contexts and aligned with existing standards of democracy and human
rights (Gasser & Almeida, 2017; Wirtz, Weyerer, & Geyer, 2019). Examples of adaptive
governance include laws that require regular risk assessments of the regulated activity,
soft law approaches that involve collaboration with the affected stakeholders to develop
guidelines, and legal experimentation and regulatory sandboxes to test innovative frameworks for liability and accountability for AI that will be adapted in iterative phases (Cath et al., 2018; Hemphill, 2020; Linkov et al., 2018; Philipsen, Stamhuis, & De Jong, 2021).
New governance frameworks can also be adapted from the approaches taken to
regulate previous emerging technologies (Gasser & Almeida, 2017). Studies have analysed hybrid governance approaches to regulating the Internet and emerging digital technologies. For
instance, Leiser and Murray (2016) highlight the need to account for the increasing role
of non-state actors, particularly private actors such as technology companies that control
the exchange of information across the globe, transnational private actors that are
developing design principles, as well as the role of civil society groups in ensuring
accountability of the former two. Lessons could be drawn from the experiences of
governing previous emerging technologies such as the Internet, nanotechnology, aviation
safety and space law (Butcher & Beridze, 2019; Snir, 2014) to inform the governance of AI
and other new emerging technologies. In addition, a key research agenda for future
studies on AI governance would be to analyse the distinctive features of AI technology that warrant approaches different from those for previous technologies.
An emerging body of literature has proposed governing AI systems through their
design, where social, legal, and ethical rules can be enforced through code to regulate the
behaviour of AI systems (Leenes & Lucivero, 2014). For instance, provisions in data
protection laws can be translated into technical specifications for robots equipped with
cameras to automatically blur faces to prevent categorisation of individuals based on
sensitive characteristics such as ethnicity, as well as to remove data after it has exceeded
a specified storage timeframe or to restrict third party access to particular categories of
data (Leenes et al., 2017). However, several implementation challenges must also be
tackled to govern AI systems through code effectively. Many legal and ethical rules
cannot be translated into explicit code, including 'subtle exceptions' that often require value judgements and consideration of contextual factors for interpretation, and
the resulting machine interpretation also depends on how it was initially designed
(Leenes & Lucivero, 2014; Mulligan & Bamberger, 2018). Other risks that governments
need to manage from such an approach are potential manipulations of encoded rules to
‘subvert regulatory aims’, which is easily masked by the opacity of ML processes
(Mulligan & Bamberger, 2018).
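As a deliberately simplified illustration of this governance-through-design idea (not drawn from the article; the class name, method names, and the 30-day limit are all hypothetical assumptions), a data-retention rule could be encoded directly into a system's data store so that expired records are deleted rather than returned:

```python
# Illustrative sketch (hypothetical design): a legal retention limit enforced
# in code, so expired personal data cannot be read or passed to third parties.
import time

class RetentionStore:
    """Toy data store that refuses to return records past a retention limit."""

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds
        self._records = {}  # key -> (stored_at, value)

    def put(self, key, value, now=None):
        self._records[key] = (time.time() if now is None else now, value)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if now - stored_at > self.retention_seconds:
            # The rule is enforced in code: expired data is deleted outright,
            # not merely hidden from the caller.
            del self._records[key]
            return None
        return value

# Hypothetical 30-day retention limit, exercised with simulated timestamps.
store = RetentionStore(retention_seconds=30 * 24 * 3600)
store.put("trip-42", {"route": "home-to-work"}, now=0)
print(store.get("trip-42", now=10))              # still within the window
print(store.get("trip-42", now=31 * 24 * 3600))  # expired, returns None
```

The sketch also hints at the limitation noted above: a rule this mechanical is easy to encode, whereas provisions involving 'subtle exceptions' and contextual judgement resist translation into code.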
Common to recent studies in their proposed frameworks for AI governance is the
emphasis on building broad societal consensus around AI ethical principles and ensuring
accountability, but there is a need for studies examining how these frameworks can be
implemented in practice. Gasser and Almeida’s (2017) three-tiered framework comprises
a technical layer involving the AI system processes and data structures, a layer for the
ethical design of AI, and a third layer encompassing AI's societal implications and the
role of regulation and legislation. Rahwan (2018) proposes extending the ‘human-in-the-
loop’ approach to a broader ‘society-in-the-loop’ approach where society is first respon
sible for finding consensus on the values that should shape AI and the distribution of
benefits and costs among different stakeholders. Other proposals centre on the need for
greater centralisation and cross-cultural cooperation to improve coordination among
national approaches (Cihon et al., 2020; ÓhÉigeartaigh, Whittlestone, Liu, Zeng, & Liu,
2020). Cihon et al. (2020) propose a framework to centralise the current fragmented state
of international AI governance, outlining methods to monitor and coordinate national
approaches and examining the disadvantages of centralisation relative to decentralisation, such as regulatory lags and limited adaptability to rapidly evolving risks. Among
these approaches, there are increasing calls to produce more concrete specifications on
implementing these governance frameworks in practice and identifying the parties in
government that are responsible for leading different aspects of AI governance (Wirtz
et al., 2020).
5. Overview of the special issue articles
The articles in the special issue tackle the governance of AI and robotics through case studies and comparative analyses. They address gaps in the literature by examining the risks and benefits of deploying AI in different applications and sectors and by identifying the dominant ideals envisioned in the meta-discourse on AI governance and their implications. They also explore new legal, regulatory, and governance approaches taken by various countries and governments, examine approaches that can enhance the implementation of AI, and conduct cross-country comparisons of AI governance approaches. The authors explore the limitations and effectiveness of different AI governance approaches through case studies and derive key lessons to facilitate policy learning. A brief summary of the articles in the special issue on the Governance of AI and Robotics is presented below.
5.1 Framing governance for a contested emerging technology: insights from AI
policy (Ulnicane, Knight, Leach, Stahl, & Wanjiku, 2021)
Whether AI develops in socially beneficial or problematic ways largely depends on public
policies, governance arrangements and regulation. In recent years, many national governments, international organisations, think tanks, consultancies, and civil society organisations have launched AI strategies outlining benefits and risks as well as initial
governance and regulatory frameworks. There are many questions regarding the regulation and governance of emerging disruptive technologies (Taeihagh et al., 2021). This article addresses the following questions regarding the governance of AI: What governance and regulatory models are emerging to facilitate benefits and avoid risks of AI? What role
do different actors – governments, international organisations, business, academia, and
civil society – play in the emerging governance of AI? Do radical technological innovations in AI require radical or rather incremental innovations in governance? Are emerging governance and technology frameworks in different countries, regions and organisations converging or diverging, and why?
To study these questions, Ulnicane et al. (2021) draw on a dataset of more than 60 AI
strategies from ‘national governments, international organisations, consultancies, think
tanks and civil society organisations' worldwide. The interdisciplinary research framework of the article draws on concepts and approaches from Science, Technology and
Innovation Studies, Policy Analysis and Political Science. It consists of two main pillars.
First, to study the risks, uncertainties, and unintended consequences of AI in policy documents, the concepts of policy framing and positive and negative expectations are used.
Second, to study emerging governance and regulation frameworks outlined in the policy
documents, concepts of governance and governance models are used. AI policy documents are analysed to establish how they frame AI. The emerging governance of AI is
analysed according to diverse governance models such as market, participatory, flexible,
and deregulated (Peters, 2001). This article contributes to the studies of emerging
disruptive technologies by analysing how the framing of the risks and uncertainties of AI leads to the development of specific governance and regulatory arrangements, mapping similarities and differences across countries, regions, and organisations.
5.2 Steering the governance of artificial intelligence: national strategies in
perspective (Radu, 2021)
The latest wave of AI developments is built around deep learning, speech and image recognition, and data analytics tools deployed daily in domains such as banking, e-learning, medical diagnosis and, more recently, smart vehicles (Radu, 2021). The article empirically investigates governance responses to AI developments worldwide, examining recent national AI strategies to uncover the modalities through which regulation is articulated at the state level.
The article highlights the key elements of the dominant regulatory discourses and
compares AI national projects. Using content analysis and comparative methods, Radu
(2021) examines the strategies of numerous countries to determine the articulation of
AI governance and the co-production of political and socio-technological constructs.
Building on the empirical evidence, the article highlights how the collective representation of a technical project is predefined politically and is encapsulated into a vision that is integrated into regulating the relevant sectors of society (Flichy, 2008). As such, the article contributes to the policy discourse around AI by conceptualising the debates and critically examining the governance of AI and its complex effects in the future.
5.3 The Governance of Artificial Agency (Gahnberg, 2021)
Gahnberg argues for the need for new regulatory strategies to conceptualise the challenges of governing artificial agency. He argues that the notion of 'intelligence' is a vague metric of a system's success and that the key characteristic of an AI system is its overall ability to act as an autonomous agent within a specific environment (Gahnberg, 2021). He posits that while AI applications range from filtering spam to killer robots, they share the common characteristic of being delegated authority to perform tasks in pursuit of specific objectives.
The article underlines the social and legal constraints on delegating this authority to
an artificial agent, including legal and ethical provisions that may prohibit tasking the
agent to pursue certain objectives (e.g. using lethal force). Gahnberg (2021) further
highlights the role of non-state actors in developing governance mechanisms, such as
the standards and best practices developed by the technical community for fail-safe
mechanisms, or initiatives such as the Algorithmic Justice League, which aims to
mitigate the risks of algorithmic bias by creating inclusive training data. By examining
the challenges of governing artificial agency, the article provides insights into the
debates about AI governance.
POLICY AND SOCIETY 13
5.4 Governing the adoption of robotics and autonomous systems in long-term
care in Singapore (Tan & Taeihagh, 2021)
Autonomous systems and robotics have been put forward as a viable technological solution
to the ever-increasing global demand for long-term care, which is exacerbated by
ageing populations (Robinson, MacDonald, & Broadbent, 2014; Tan et al., 2021).
However, adopting new technologies in long-term care, like all other emerging technologies,
involves unintended consequences and poses risks (Taeihagh et al., 2021). For
instance, concerns about safety, privacy, cybersecurity, liability, effects on the existing
health workforce, patient autonomy and dignity, moral agency, social justice, and trust
need to be addressed by the government in the process of adopting autonomous systems
(Sharkey & Sharkey, 2012; Stahl & Coeckelbergh, 2016; Tan et al., 2021). Addressing
these risks and ethical issues warrants assessing the government's policy capacity
(Wu, Ramesh, & Howlett, 2015) and the governance strategies the government chooses
to deploy (no response, prevention-oriented, precaution-oriented, control-oriented,
toleration-oriented, and adaptation-oriented) in implementing these autonomous systems
(Li, Taeihagh, & De Jong, 2018; Li et al., 2021; Taeihagh & Lim, 2019).
The article is a theoretically informed empirical study that examines the adoption of
autonomous systems in long-term care as a policy measure to address rising social
care demand due to the ageing population, using Singapore as a case study. It reviews
various existing and potential applications of autonomous systems in social care and
long-term care for older people in Singapore. It then discusses the technological risks
and ethical implications of deploying these systems for users, society, and the economy
at large. Finally, it assesses the extent to which governance strategies influence the
policy response adopted by the Singapore government, specifically in balancing the need
for innovation against the uncertainties arising from deploying emerging technologies
in healthcare. The insights gathered are important for policy learning, especially for
understanding the governance of novel and emerging technologies as possible solutions
to the demand-supply mismatch in the health sector. The analysis showcases the
interactive dynamics of policy capacity, governance strategies, and policy response as
three major ingredients in the governance of emerging technologies, and shows how
calibrating their respective proportions can facilitate the effective implementation of
novel technologies in healthcare (Tan & Taeihagh, 2021).
5.5 Exploring governance dilemmas of disruptive technologies: the case of
care robots in Australia and New Zealand (Dickinson, Smith, Carey, & Carey,
2021)
While a range of experiments applying AI and robotics in various care settings is
ongoing worldwide (Tan et al., 2021), with high expectations for positive outcomes,
Dickinson et al. (2021) are concerned with the unexpected consequences and risks, and
their article explores the use of robots in care services in Australia and New Zealand.
They note that while there is a burgeoning literature on robots, much of it concerns
technical efficacy, acceptability, or legal ramifications, and they point out the lack of
attention to the implementation of robots in care settings and its implications for
policymaking.
Informed by semi-structured interviews with policymakers, care providers, suppliers
of robots, and technology experts, Dickinson et al. (2021) elicit a series of 'dilemmas' of
governance faced in this space. They identify three dilemmas, relating to independence
and surveillance, the re-shaping of human interactions, and the dynamics of giving and
receiving care, which illustrate some of the tensions involved in the governance of
robotics in care services, and they draw attention to the issues that governments need
to address for the effective adoption of these technologies.
5.6 Law and tech collide: foreseeability, reasonableness, and advanced driver
assistance systems (Leiman, 2021)
There have been a number of highly publicised fatalities involving automated vehicles
since May 2016. At the same time, millions of serious injuries and fatalities have
occurred in collisions involving human-operated vehicles. In jurisdictions where
compensation depends on the establishment of fault, many of the injured or their
dependents will not recover any compensation (Leiman, 2021). In many Australian
jurisdictions, the standard of care required presents significant challenges when applied
to partly, highly, and fully automated vehicles (ibid.).
Leiman (2021) explores how the existing regulatory framework in Australia considers
perceptions of risk in establishing legal liability for motor vehicles and whether this
approach can be applied to automated vehicles. The article examines whether the law itself
may affect perceptions of risk in view of the data generated by automated vehicles,
considers the efficacy of the no-fault and hybrid liability schemes in existence in Australia,
and compares them with the alternative legislation passed by the UK Parliament that
imposes responsibility on insurers. Leiman also discusses proposals concerning the role
of government in assuring safety and determining fault in the case of automated vehicles.
5.7 Co-regulating algorithmic disclosure for digital platforms (Di Porto &
Zuppetta, 2021)
Di Porto and Zuppetta (2021) explore the increasing ability of IT-mediated platforms to
adopt algorithmic decision-making and ask whether disclosure self-regulation, as it
currently stands at the EU level, is an appropriate strategy for addressing the risks posed
by the growing adoption of algorithmic decision-making by private entities. While the
General Data Protection Regulation (GDPR) requires platforms to provide information
about the treatment of personal data by AI systems and to self-assess the risks of data
breaches, it makes no reference to the differing abilities of the receiving entities to
understand the meaning and consequences of algorithmic decisions.
The article argues that disclosure self-regulation should be rethought in light of new AI
developments and points out that, since final consumers and SMEs are unaware of the
technical underpinnings of these platforms and of the value of personal data, regulators
should step in and address this issue. Furthermore, as platforms have increasingly
deployed AI and big data, they have gained the ability to manipulate the information
they produce, thus weakening the validity of consumers' and small businesses' choices
(Di Porto & Zuppetta, 2021). The authors advocate that regulators should tackle
information asymmetry, low bargaining power, and incorrect information, and endorse
an enforced co-regulatory approach that allows the participation of platforms and
consumers and the development of personalised disclosures, based on consumers'
needs, to empower them.
Acknowledgments
Araz Taeihagh is grateful for the funding support provided by the Lee Kuan Yew School of Public
Policy, National University of Singapore. The special issue editor would like to thank the editors of
Policy and Society Journal and all the anonymous reviewers for their constructive feedback and
support for this and other articles in the special issue on the Governance of AI and Robotics.
A special thanks to Hazel Lim, Devyani Pande, and Siying Tan of the Policy Systems Group and
the events team at the Lee Kuan Yew School of Public Policy for their support in various aspects of
organising this special issue and the accompanying workshop held on August 30-31, 2019.
Disclosure of potential conflicts of interest
No potential conflict of interest was reported by the author.
Funding
This work was supported by the National University of Singapore.
Notes on contributor
Araz Taeihagh (DPhil, Oxon) is the Head of the Policy Systems Group at the Lee Kuan Yew
School of Public Policy, Principal Investigator at the Centre for Trusted Internet and
Community at the National University of Singapore, and Visiting Associate Professor at
the Dynamics of Inclusive Prosperity Initiative at the Rotterdam School of Management,
Erasmus University. He has lived and conducted research on four continents, and since
2007 his research interest has been the interface of technology and society. Taeihagh is
interested in socio-technical systems and focuses on the unique challenges that arise from
introducing new technologies to society (e.g., crowdsourcing, the sharing economy,
autonomous vehicles, AI, MOOCs, ridesharing). Taeihagh focuses on: a) how to shape
policies to accommodate new technologies and facilitate sustainable transitions; b) the
effects of these new technologies on the policy process; and c) changing the way we design
and analyse policies by developing innovative practical approaches to address the growth
in the interdependence and complexity of socio-technical systems. Araz has more than
two decades of experience working with firms on PMC and Design issues relating to
chemical, petroleum and construction projects and provides technical and policy
consultations on environmental, transportation, energy, and technology-related issues.
ORCID
Araz Taeihagh http://orcid.org/0000-0002-4812-4745
References
Agarwal, P. K., Gurjar, J., Agarwal, A. K., & Birla, R. (2015). Application of artificial intelligence for
development of intelligent transport system in smart cities. Journal of Traffic and
Transportation Engineering, 1(1), 20–30.
Allen, G. C. (2019). Understanding China’s AI Strategy: Clues to Chinese strategic thinking on
artificial intelligence and national security. Washington, DC: Center for a New American
Security.
Alonso Raposo, M., Grosso, M., Després, J., Fernández-Macías, E., Galassi, C., Krasenbrink, A.,
Krause, J., Levati, L., Mourtzouchou, A., Saveyn, B., Thiel, C., & Ciuffo, B. (2018). An analysis
of possible socio-economic effects of a Cooperative, Connected and Automated Mobility (CCAM)
in Europe. Luxembourg: Publications Office of the European Union.
Bathaee, Y. (2018). The artificial intelligence black box and the failure of intent and causation.
Harvard Journal of Law & Technology, 31(2), 889.
Bolte, J. A., Bar, A., Lipinski, D., & Fingscheidt, T. (2019). Towards corner case detection for
autonomous driving. In 2019 IEEE Intelligent Vehicles Symposium, pp.438–445. Paris, France.
doi: 10.1109/IVS.2019.8813817
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish &
W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334).
Cambridge, UK: Cambridge University Press.
Brauneis, R., & Goodman, E. P. (2018). Algorithmic transparency for the smart city. Yale Journal
of Law and Technology, 20, 103.
Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The
RUSI Journal, 164(5–6), 88–96.
Carabantes, M. (2020). Black-box artificial intelligence: An epistemological and critical analysis.
AI & Society, 35, 309–317. https://doi.org/10.1007/s00146-019-00888-w
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and
challenges. Philosophical Transactions of the Royal Society A, 376(2133), 20180080.
Chen, S. Y., Kuo, H. Y., & Lee, C. (2020). Preparing society for automated vehicles: Perceptions of
the importance and urgency of emerging issues of governance, regulations, and wider impacts.
Sustainability, 12(19), 7844.
Cihon, P., Maas, M. M., & Kemp, L. (2020). Fragmentation and the Future: Investigating
Architectures for International AI Governance. Global Policy, 11(5), 545–556.
Cunneen, M., Mullins, M., & Murphy, F. (2019). Autonomous vehicles and embedded artificial
intelligence: The challenges of framing machine driving decisions. Applied Artificial Intelligence,
33(8), 706–731.
Dafoe, A. (2018). AI governance: A research agenda. Oxford, UK: Future of Humanity Institute,
University of Oxford.
Di Porto, F., & Zuppetta, M. (2021). Co-regulating algorithmic disclosure for digital platforms.
Policy and Society, This Issue.
Dickinson, H., Smith, C., Carey, N., & Carey, G. (2021). Exploring governance dilemmas of
disruptive technologies: The case of care robots in Australia and New Zealand. Policy and
Society, This Issue.
Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust:
Transparency requirements for artificial intelligence between legal norms and contextual
concerns. Big Data & Society, 6(1), 2053951719860542.
Firlej, M., & Taeihagh, A. (2021). Regulating human control over autonomous systems. Regulation
& Governance. https://doi.org/10.1111/rego.12344
Flichy, P. (2008). Understanding technological innovation: A socio-technical approach. Cheltenham,
UK: Edward Elgar Publishing.
World Economic Forum. (2018). The future of jobs report 2018. Geneva, Switzerland: World
Economic Forum.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to
computerisation? Technological Forecasting and Social Change, 114, 254–280.
Gahnberg, C. (2021). The governance of artificial agency. Policy and Society, This Issue.
Gasser, U., & Almeida, V. A. (2017). A layered model for AI governance. IEEE Internet Computing,
21(6), 58–62.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making
and a “right to explanation”. AI Magazine, 38(3), 50–57.
Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to
regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 20, 385.
He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of
artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36.
Helbing, D. (2019). Societal, economic, ethical and legal challenges of the digital revolution: From
big data to deep learning, artificial intelligence, and manipulative technologies. In Helbing D.
(ed) Towards digital enlightenment (pp. 47–72). Cham: Springer. https://doi.org/10.1007/978-3-
319-90869-4_6
Hemphill, T. A. (2016). Regulating nanomaterials: A case for hybrid governance. Bulletin of
Science, Technology & Society, 36(4), 219–228.
Hemphill, T. A. (2020). The innovation governance dilemma: Alternatives to the precautionary
principle. Technology in Society, 63, 101381.
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for
trustworthy AI. Brussels, Belgium: European Commission. https://ec.europa.eu/newsroom/dae/
document.cfm?doc_id=60419
Huq, A. Z. (2019). Racial equity in algorithmic criminal justice. Duke Law Journal, 68(6),
1043–1134.
IEEE. (2019). The IEEE global initiative on ethics of autonomous and intelligent systems.
Ethically aligned design: A vision for prioritising human well-being with autonomous and
intelligent systems (1st ed.). IEEE. https://standards.ieee.org/content/ieee-standards/en/
industry-connections/ec/autonomous-systems.html
Inagaki, K. (2019). Japan’s demographics make good case for self-driving cars. Financial Times.
https://www.ft.com/content/382aef5c-a7d7-11e9-984c-fac8325aaa04
Izenman, A. J. (2008). Modern multivariate statistical techniques: Regression, classification, and
manifold learning. New York, NY: Springer.
Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance:
Organising data for trustworthy artificial intelligence. Government Information Quarterly, 37
(3), 101493.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence, 1(9), 389–399.
Kim, S. (2017). Crashed software: Assessing product liability for software defects in automated
vehicles. Duke Law & Technology Review, 16, 300.
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of
algorithms. Journal of Legal Analysis, 10, 113–174.
Knudson, M., & Tumer, K. (2011). Adaptive navigation for autonomous robots. Robotics and
Autonomous Systems, 59(6), 410–420.
Koopman, P., & Wagner, M. (2016). Challenges in autonomous vehicle testing and validation. SAE
International Journal of Transportation Safety, 4(1), 15–24.
Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016).
Accountable algorithms. University of Pennsylvania Law Review, 165, 633.
Larsson, S. (2020). On the governance of artificial intelligence through ethics guidelines. Asian
Journal of Law and Society, 7(3), 437–451.
Leenes, R., & Lucivero, F. (2014). Laws on robots, laws by robots, laws in robots: Regulating robot
behaviour by design. Law, Innovation and Technology, 6(2), 193–220.
Leenes, R., Palmerini, E., Koops, B. J., Bertolini, A., Salvini, P., & Lucivero, F. (2017). Regulatory
challenges of robotics: Some guidelines for addressing legal and ethical issues. Law, Innovation
and Technology, 9(1), 1–44.
Leiman, T. (2021). Law and tech collide: Foreseeability, reasonableness and advanced driver
assistance systems. Policy and Society, This Issue.
Leiser, M., & Murray, A. (2016). The role of non-state actors and institutions in the governance of
new and emerging digital technologies. In R. Brownsword, E. Scotford, & K. Yeung (Eds.),
The Oxford handbook of law, regulation and technology (pp. 670–704). Oxford, UK: Oxford
University Press.
Lele, A. (2019a). Disarmament, arms control and arms race. In Disruptive technologies for the
militaries and security (pp. 217–229). Singapore: Springer.
Lele, A. (2019b). Artificial intelligence. In Disruptive technologies for the militaries and security (pp.
139–154). Singapore: Springer.
Li, Y., Taeihagh, A., & De Jong, M. (2018). The governance of risks in ridesharing: A revelatory
case from Singapore. Energies, 11(5), 1277.
Li, Y., Taeihagh, A., De Jong, M., & Klinke, A. (2021). Toward a commonly shared public policy
perspective for analysing risk coping strategies. Risk analysis, 41(3), 519–532. https://doi.org/10.
1111/risa.13505
Lim, H. S. M., & Taeihagh, A. (2018). Autonomous vehicles for smart and sustainable cities: An
in-depth exploration of privacy and cybersecurity implications. Energies, 11(5), 1062.
Lim, H. S. M., & Taeihagh, A. (2019). Algorithmic decision-making in AVs: Understanding ethical
and technical concerns for smart cities. Sustainability, 11(20), 5791.
Linkov, I., Trump, B., Poinsatte-Jones, K., & Florin, M. V. (2018). Governance strategies for
a sustainable digital world. Sustainability, 10(2), 440.
Linkov, I., Trump, B. D., Anklam, E., Berube, D., Boisseasu, P., Cummings, C., Ferson, S., Florin,
M., Goldstein, B., Hristozov, D., Jensen, K.A., Katalagarianakis, G., Kuzma, J., Lambert, J.H.,
Malloy, T., Malsch, I., Marcomini, A., Merad, M., Palma-Oliveira, J., Perkins, E., Renn, O.,
Seager, T., Stone, V., Vallero, D., Vermeire, T. (2018b). Comparative, collaborative, and
integrative risk governance for emerging technologies. Environment Systems and Decisions, 38
(2), 170–176. https://doi.org/10.1007/s10669-018-9686-5
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning
automata. Ethics and Information Technology, 6(3), 175–183.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms:
Mapping the debate. Big Data & Society, 3(2), 2053951716679679.
Mulligan, D. K., & Bamberger, K. A. (2018). Saving governance-by-design. California Law Review,
106, 697.
ÓhÉigeartaigh, S. S., Whittlestone, J., Liu, Y., Zeng, Y., & Liu, Z. (2020). Overcoming barriers to
cross-cultural cooperation in AI ethics and governance. Philosophy & Technology, 33(4),
571–593.
Osoba, O. A., & Welser, W. (2017). An intelligence in our image: The risks of bias and errors in
artificial intelligence. Santa Monica, CA: RAND Corporation.
Pagallo, U., Casanovas, P., & Madelin, R. (2019). The middle-out approach: Assessing models of
legal governance in data protection, artificial intelligence, and the Web of Data. The Theory and
Practice of Legislation, 7(1), 1–25.
PDPC (Personal Data Protection Commission of Singapore), Infocomm Media Development
Authority (IMDA), & Singapore Digital (SG:D). (2020). Model artificial intelligence governance
framework (2nd ed.). Singapore: PDPC. https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/
SGModelAIGovFramework2.pdf
Pei, K., Cao, Y., Yang, J., & Jana, S. (2017). Towards practical verification of machine learning: The
case of computer vision systems. arXiv Preprint, arXiv:1712.01785.
Perry, B., & Uuk, R. (2019). AI governance and the policymaking process: Key considerations for
reducing AI risk. Big Data and Cognitive Computing, 3(2), 26.
Peters, B. G. (2001). The future of governing: Four emerging models. Lawrence, KS: University
Press of Kansas.
Philipsen, S., Stamhuis, E. F., & De Jong, M. (2021). Legal enclaves as a test environment for
innovative products: Toward legally resilient experimentation policies. Regulation &
Governance. https://doi.org/10.1111/rego.12375
Piano, S. L. (2020). Ethical principles in machine learning and artificial intelligence: Cases from the
field and possible ways forward. Humanities and Social Sciences Communications, 7(1), 1–7.
Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in
perspective. Policy and Society, This Issue.
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and
Information Technology, 20(1), 5–14.
Robinson, H., MacDonald, B., & Broadbent, E. (2014). The role of healthcare robots for older
people at home: A review. International Journal of Social Robotics, 6(4), 575–591.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of
Military Ethics, 13(3), 211–227.
Sætra, H. S. (2020). A shallow defence of a technocracy of artificial intelligence: Examining the
political harms of algorithmic governance in the domain of government. Technology in Society,
62, 101283.
Scharre, P. (2016). Autonomous weapons and operational risk. Washington, DC: Center for a New
American Security.
Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the
elderly. Ethics and Information Technology, 14(1), 27–40.
SNDGO (Smart Nation and Digital Government Office) (2019). Assistive technology and robotics
in healthcare. Smart Nation Singapore. https://www.smartnation.sg/what-is-smart-nation/initia
tives/Health/assistive-technology-and-robotics-in-healthcare
Snir, R. (2014). Trends in global nanotechnology regulation: The public-private interplay.
Vanderbilt Journal of Entertainment & Technology Law, 17, 107.
Solovyeva, A., & Hynek, N. (2018). Going beyond the 'killer robots' debate: Six dilemmas
autonomous weapon systems raise. Central European Journal of International & Security
Studies, 12(3).
Soteropoulos, A., Berger, M., & Ciari, F. (2018). Impacts of automated vehicles on travel behaviour
and land use: An international review of modelling studies. Transport Reviews, 39(1), 29–49.
Stahl, B. C., & Coeckelbergh, M. (2016). Ethics of healthcare robotics: Towards responsible
research and innovation. Robotics and Autonomous Systems, 86, 152–161.
Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: Emerging responses for
safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103–128.
Taeihagh, A., Ramesh, M., & Howlett, M. (2021). Assessing the regulatory challenges of emerging
disruptive technologies. Regulation & Governance. https://doi.org/10.1111/rego.12392
Tan, S. Y., & Taeihagh, A. (2021). Governing the adoption of robotics and autonomous systems in
long-term care in Singapore. Policy and Society, This Issue.
Tan, S. Y., & Taeihagh, A. (2021b). Adaptive governance of autonomous vehicles: Accelerating the
adoption of disruptive technologies in Singapore. Government Information Quarterly, 38(2),
101546.
Tan, S. Y., Taeihagh, A., & Tripathi, A. (2021). Tensions and antagonistic interactions of risks and
ethics of using robotics and autonomous systems in long-term care. Technological Forecasting
and Social Change, 167, 120686.
Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W. G. (2021). Framing governance for
a contested emerging technology: Insights from AI policy. Policy and Society, This Issue.
Wang, W., & Siau, K. (2019). Artificial intelligence, machine learning, automation, robotics, future
of work and future of humanity: A review and research agenda. Journal of Database
Management (JDM), 30(1), 61–79.
Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—
applications and challenges. International Journal of Public Administration, 42(7), 596–615.
Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The dark sides of artificial intelligence: An
integrated AI governance framework for public administration. International Journal of Public
Administration, 43(9), 818–829.
Wu, X., Ramesh, M., & Howlett, M. (2015). Policy capacity: A conceptual framework for
understanding policy competences and capabilities. Policy and Society, 34(3–4), 165–171.
Xu, H., & Borson, J. E. (2018). The future of legal and ethical regulations for autonomous robotics.
In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.2362–
2366. IEEE. Madrid, Spain, Madrid Municipal Conference Centre
Yigitcanlar, T., Kamruzzaman, M., Foth, M., Sabatini, J., Da Costa, E., & Ioppolo, G. (2018). Can
cities become smart without being sustainable? A systematic review of the literature.
Sustainable Cities and Society, 45, 348–365.
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Oxford, UK:
University of Oxford.
Zhang, B., & Dafoe, A. (2020). US public opinion on the governance of artificial intelligence. In
Proceedings of the AAAI/ACM Conference on AI, Ethics and Society (pp. 187–193). New York,
NY, USA.