This slide deck covers (1) AI and Accountability, (2) AI Ethics, and (3) Privacy Protection. Several AI ethics documents, such as the IEEE EAD, the EC HLEG Ethics Guidelines for Trustworthy AI, and the Social Principles of Human-Centric AI (Japan), focus on AI's transparency, accountability and trust. We follow the discussions of these documents around topics (1), (2) and (3) above.
Introduction to the ethics of machine learning (Daniel Wilson)
A brief introduction to the domain that is variously described as the ethics of machine learning, data science ethics, AI ethics and the ethics of big data. (Delivered as a guest lecture for COMPSCI 361 at the University of Auckland on May 29, 2019)
A journey into the business world of artificial intelligence. Explore at a high-level ongoing business experiments in creating new value.
* Review AI as a priority for value generation
* Explore ongoing experimentation
* Touch on how businesses are monetising AI
* Understand the intent of adoption by industries
* Discuss the state of customer trust in AI
Part 1 of a nine-part research series, "What matters in AI", published on https://www.andremuscat.com
Responsible AI & Cybersecurity: A tale of two technology risks (Liming Zhu)
With the broader adoption of digital technologies and AI, organisations face the emerging risks of AI, the unfamiliar, and the intensified risk of cybersecurity, the familiar. AI and cybersecurity are intertwined, but risk silos are often created when they are dealt with at the technology and governance levels. This talk will explore the interactions between responsible AI and cybersecurity risks via industry case studies. It will show how we can break down the risk silos and use emerging trust-enhancing technologies, architecture and end-to-end software engineering/DevOps practices to connect the two worlds and uplift the risk management posture for both.
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, in particular the challenges around bias and fairness. I have also included studies on how we as humans perceive AI's influence in our private as well as working lives.
“AI is the new electricity” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries and so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
Nick Schmidt of BLDS, LLC to the Maryland AI meetup, June 4, 2019 (https://www.meetup.com/Maryland-AI). Nick discusses ideas of fairness and how they apply to machine learning. He explores recent academic work on identifying and mitigating bias, and how his work in lending and employment can be applied to other industries. Nick explains how to measure whether an algorithm is fair and also demonstrate the techniques that model builders can use to ameliorate bias when it is found.
Keynote from Intellifest 2012 addressing the differences between narrow (classical) Artificial Intelligence and Artificial General Intelligence. Implications of cloud computing for AGI are also discussed.
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) (Krishnaram Kenthapadi)
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
The impact of AI on society keeps growing - and it is not all good. We as data scientists have to put in real work to avoid ending up in ML hell.
This presentation was given at the Dutch Data Science Week.
How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine-learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
Explore the risks and concerns surrounding generative AI in this informative SlideShare presentation. Delve into the key areas of concern, including bias, misinformation, job loss, privacy, control, overreliance, unintended consequences, and environmental impact. Gain valuable insights and examples that highlight the potential challenges associated with generative AI. Discover the importance of responsible use and the need for ethical considerations to navigate the complex landscape of this transformative technology. Expand your understanding of generative AI risks and concerns with this engaging SlideShare presentation.
Responsible AI in Industry: Practical Challenges and Lessons Learned (Krishnaram Kenthapadi)
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Today, I will be presenting on the topic of "Generative AI, responsible innovation, and the law." Artificial Intelligence has been making rapid strides in recent years, and its applications are becoming increasingly diverse. Generative AI, in particular, has emerged as a promising area of innovation, with the potential to create highly realistic and compelling outputs.
Automate your business operations by incorporating these Artificial Intelligence Overview PowerPoint Presentation Slides. The scope of machine learning is increasing day by day as it is much more convenient and efficient. Facilitate business transformation using this machine learning PowerPoint presentation. With the advent of new and improved technology, it is important to complement human intelligence with robotic process automation. Showcase the simulation of human intelligence and how applying artificial intelligence can help the organization grow using this computer science PowerPoint slideshow. You can also present a detailed analysis of AI along with its components, objectives, key statistics, reasons and many other points with the help of this machine intelligence PowerPoint visual. Some problems are beyond the control of a human and require cognitive intelligence; utilize this problem-solving PowerPoint graphic in those situations to find apt solutions to your organizational problems. Download this learning algorithm complete deck now: https://bit.ly/3xH1aFf
GPT-4 can pass the American state bar exam, but before you go expecting to see robot lawyers taking over the courtroom, hold your horses, cowboys: we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology is going to affect the way we build and run businesses. What do we need to do differently? How can we make sure that our investment strategies reflect these changes? It's a brave new world out there, and we've got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
AI Governance and Ethics - Industry Standards (Ansgar Koene)
Presentation on the potential for ethics-based industry standards to function as a vehicle to address socio-technical challenges from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".
Impact of Generative AI in Cybersecurity - How can ISO/IEC 27032 help? (PECB)
Generative AI offers great opportunities for innovation in various industries. By adopting ISO/IEC 27032, you can enhance your cybersecurity resilience and efficiently address the risks associated with generative AI.
Amongst others, the webinar covers:
• AI & Privacy
• Generative AI, Models & Cybersecurity
• AI & ISO/IEC 27032
Presenters:
Christian Grafenauer
Anonymization expert, privacy engineer, data protection officer, LegalTech researcher (GDPR, Blockchain, AI). Christian Grafenauer is an accomplished privacy engineer, anonymization expert, and computer science specialist, currently serving as the project lead for anonymity assessments at techgdpr. With an extensive background as a senior architect in Blockchain for IBM and years of research in the field since 2013, Christian co-founded privacy by Blockchain design to explore the potential of Blockchain technology in revolutionizing privacy and internet infrastructure. As a dedicated advocate for integrating legal and computer science disciplines, Christian's expertise in anonymization and GDPR compliance enables innovative AI applications, ensuring a seamless fusion of technology and governance, particularly in the realm of smart contracts. In his role at techgdpr, he supports technical compliance, Blockchain, and AI initiatives, along with anonymity assessments. Christian also represents consumer interests as a member of the national Blockchain and DLT standardization committee at DIN (the German standardization institute) in ISO/TC 307.
Akin Johnson
Akin J. Johnson is a renowned Cybersecurity Expert, known for his expertise in protecting digital systems from potential threats. With over a decade of experience in the field, Akin has developed a deep understanding of the ever-evolving cyber landscape.
Akin is an advocate for cybersecurity awareness and frequently shares his knowledge through speaking engagements, workshops, and publications. He firmly believes in the importance of educating individuals and organizations on the best practices for safeguarding their digital assets.
Lucas Falivene
Lucas is a highly experienced cybersecurity professional with a solid base in business, information systems, information security, and cybersecurity policy-making. A former Fulbright scholar with a Master of Science degree in Information Security Policy and Management at Carnegie Mellon University (Highest distinction) and a Master's degree in Information Security at the University of Buenos Aires (Class rank 1st). Lucas has participated in several trainings conducted by the FBI, INTERPOL, OAS, and SEI/CERT as well as in the development of 4 cyber ISO national standards.
Date: July 26, 2023
YouTube Link: https://youtu.be/QPDcROniUcc
Overview of Artificial Intelligence in Cybersecurity (Olivier Busolini)
If you are interested in understanding a bit more of the potential of Artificial Intelligence in Cybersecurity, you might want to have a look at this overview.
Written from my CISO (and non-AI expert) point of view, for fellow security professionals to navigate the AI hype and (hopefully!) make better, informed decisions :-)
All feedback welcome!
What is accountability of AI? We answer this question by clarifying the responsibility, explainability and liability of limited autonomous AI, with several bright and dark real-world examples.
We then move to the concept of "trust", which is not limited to a single AI system but extends to the behavior of groups of AI systems.
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad Choorappara
Explore the ethical landscape of Artificial Intelligence (AI) through our insightful PowerPoint presentation. Delve into crucial considerations that shape the responsible development and deployment of AI technologies. From privacy concerns and bias mitigation to transparency and accountability, this presentation covers the key ethical dimensions of AI. Gain a comprehensive understanding of the ethical challenges and solutions in the rapidly evolving world of artificial intelligence. Stay informed and empower your audience with the knowledge needed to navigate the ethical intricacies of AI responsibly.
Let us look at the good and bad effects of Artificial Intelligence and other emerging technologies!
Artificial intelligence, Technological Singularity & the Law (Florian Ducommun)
This presentation was given at the Empowerment Summit. It covers the current state of regulation with respect to Artificial Intelligence (AI) and contains some prospective insights about the elements that AI regulation should take into account.
Ethical Questions in Artificial Intelligence (AI)
Bias and Fairness
Transparency and Accountability
Privacy and Data Protection
Autonomy and Human Agency
Safety and Risk Management
Accountability and Legal Liability
Equity and Social Justice
Human-centric Design and Value Alignment
Explainability and Interpretability
Global Governance and International Cooperation
Roberto Zicari
https://www.linkedin.com/in/roberto-v-zicari-087863/
ISSIP Award
https://issip.org/issip-2022-excellence-in-service-innovation-award-program-awardees/
Award Type: Distinguished Recognition
Innovation: Z-Inspection: A process to assess trustworthy AI in Practice
Summary: Z-Inspection provides a novel process to assess if an AI system is trustworthy per the definition of trustworthy AI given by the high-level European Commission expert group on AI.
Organization: Z-Inspection Initiative
Primary Contact: Roberto V. Zicari (LI) (B)
The Artificial Intelligence World: Responding to Legal and Ethical IssuesRichard Austin
The presentation examines the legal and ethical issues that Facial Recognition Systems and Autonomous and Self-driving Vehicles present then looks at organizational, regulatory and individual tools available to respond to these issues.
IT Conferences 2024 To Navigate The Moral Landscape Of Artificial Intelligenc...Internet 2Conf
This insightful presentation delves into the key areas of AI ethics, examining moral trade-offs, and implementing ethical AI frameworks. It highlights the evolving nature of AI ethics debates, especially relevant in 2024's IT conferences like Internet 2.0 Conference. The talk aims to guide AI's future responsibly, emphasizing the importance of humane and ethical considerations in the rapidly advancing field of artificial intelligence.
Ferma report: Artificial Intelligence applied to Risk Management FERMA
FERMA brought together a group of experts from within and beyond the risk management community to develop the first thought paper about AI applied to risk management.
Their aim was to perform an initial assessment of the potential value of AI to improve enterprise risk management (ERM), and second, to understand how risk managers can be key actors in highlighting to the organisation leadership the opportunities and challenges of AI technologies.
The working group expects that corporate risk management will benefit from AI in several areas. “From its ability to process large amounts of data to the automation of certain risk management repetitive and burdensome steps, AI could allow risk managers to respond faster to new and emerging exposures. By acting in real time and with some predictive capabilities, risk management could reach a new level in supporting better decision making for senior management.”
This paper aims to guide risk managers on applying AI from a basic understanding to developing their own strategy on the implementation of AI. It includes an action guide and a template for risk managers to develop their own AI risk management roadmap.
Richard van der Velde, Technical Support Lead for Cookiebot @CMP – “Artificia...Associazione Digital Days
The training of artificial intelligence systems is just the latest use of users’ personal data that companies collect online. But the information on how the data is used, what consent is needed or how it will be regulated is not always clear. Strong concerns have already been raised about data privacy and consent.
Trusted, Transparent and Fair AI using Open SourceAnimesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
Global Governance of Generative AI: The Right Way ForwardLilian Edwards
AI regulation has been a hot topic since the rise of machine learning (ML) in the “big data” era, but generative AI or “foundation models” tools like ChatGPT, DALL-E 2(now 3) and CoPilot, ike ML before them, may create serious societal risks, including embedding and outputting bias; generating fake news, illegal or harmful content and inadvertent “hallucinations”; infringing existing laws relating eg to copyright and privacy; as well as environmental, competition and workplace concerns.
Many nations are now considering regulation to address these worries, and can draw on a number of basic and hybrid models of governance. This paper canvasses models of mandatory comprehensive legislation (where the EU AI Act hopes to place itself as a gold standard model); vertical mandatory legislation (where China has quietly taken a lead); adapting existing law (see the many copyright lawsuits underway); and voluntary “soft law” such as codes of ethics, “blueprints”, or industry guidelines. Both the domestic and international regulatory scenes for AI are also increasingly politicised as the rise of "AI safety" hype shows. Against this backdrop what choices should smaller countries such as the UK and Australia make? will international harmonisation lead to a race to the top as with the GDPR, or the bottom - rule by tech for tech?
EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we
remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability.
As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due
process – is an increasingly urgent concern.
Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:
1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical
pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.
K-anonymization has been regarded as a great method to make a bad person indistinguishable among k people whose quasi identifiers are same.
It, unfortunately, has a problematic side effect of defamation. In this case, defamation means the case where other good k-1 people are suspected as a bad person because both of a bad person and good people have the same quasi identifiers because of k-anonymization. This slide shows a mathematical model of defamation and proposes an algorithm which minimizes the probability of defamation.
Social Effects by the Singularity -Pre-Singularity Era-Hiroshi Nakagawa
Contents:
Stance of scientists community against Pre-Singularity problems
Amplification vs. Replacement
AI takes over jobs
Boarder line between amplification and replacement
Autonomous driver: trolley problem
The right to be forgotten
Towards black box
Responsibility
Vulnerability of financial dealing system made of many AI agent traders connected via internet
AI and weapon
Filter bubble phenomena
Analogy: Selfish gene
AI and privacy
The right to be forgotten, Profiling and Don’t Track
Feeling of friendliness to android
Again self conscious and identity
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Leading Change strategies and insights for effective change management pdf 1.pdf
AI and Accountability
1. AI and Accountability
Hiroshi Nakagawa
(RIKEN AIP)
Images in this file are licensed under Creative Commons, via Microsoft PowerPoint.
2. IEEE Ethically Aligned Design version 2
1. Executive Summary
2. General Principles
3. Embedding Values Into Autonomous Intelligent Systems
4. Methodologies to Guide Ethical Research and Design
5. Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
6. Personal Data and Individual Access Control
7. Reframing Autonomous Weapons Systems
8. Economics/Humanitarian Issues
9. Law
10. Affective Computing
11. Classical Ethics in Artificial Intelligence
12. Policy
13. Mixed Reality
14. Well-being
The final version was published in April 2019.
3. IEEE EAD (Final), April 2019
• 1. Human Rights
– A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
• 2. Well-being
– A/IS creators shall adopt increased human well-being as a primary success criterion for development.
• 3. Data Agency
– A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
• 4. Effectiveness
– A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
4. IEEE EAD (Final), April 2019
• 5. Transparency
– The basis of a particular A/IS decision should always be discoverable.
• 6. Accountability
– A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
• 7. Awareness of Misuse
– A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
• 8. Competence
– A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
5. One of the real problems is misuse/abuse of AI
6. "It's not me, AI says so!"
This leads to a society without freedom of speech or even human rights.
We need to design a society in which we have the right to object to AI's decisions.
GDPR Article 22
"Why me?"
7. GDPR Article 22: Automated individual decision-making, including profiling
• 1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
8. IEEE EAD version 2
How to cope with misuse/abuse of AI:
– Detect misuse/abuse of AI.
– AI should be equipped with a mechanism that explains its reasoning path and what data was used to reach its results.
– Whistle-blowing against peculiar/strange behavior of AI.
– Redress or rescue packages should be legitimized.
– Insurance is also needed.
9. Implementation of AI Ethics
Transparency (explainability, understandability) → Accountability → Trust
10. A single AI system is too complex and is a black box → XAI
• XAI has become a major research topic in recent years (e.g., XAI 2017, XAI 2018).
– Methods that give meaning to internal variables as combinations of input variables.
– These seem not to work for deep learning because of its high dimensionality and complexity.
– Explanations are generated not by the AI itself but by a simple surrogate such as a decision list or decision tree.
– As for making outputs understandable to ordinary people, promising results have not yet emerged.
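The surrogate idea on this slide can be sketched concretely: probe an opaque model and fit the simplest possible stand-in, here a one-feature threshold rule, whose behavior can be stated in one sentence. Everything below (the loan-style black box, the grid of probes) is an invented illustration, not a method from the slides.

```python
# Sketch of surrogate-model explanation: approximate an opaque scoring
# function with a one-level decision stump so its behaviour can be stated
# in plain language. The "black box" here is a made-up loan model.

def black_box(income, debt):
    """Stand-in for an opaque ML model: approve (1) or deny (0)."""
    return 1 if income - 1.5 * debt > 20 else 0

# Probe the black box on a grid of inputs.
samples = [(income, debt) for income in range(0, 101, 5) for debt in range(0, 51, 5)]
labels = [black_box(i, d) for i, d in samples]

def fit_stump(samples, labels):
    """Pick the single-feature threshold rule that agrees with the
    black box on as many probes as possible."""
    best = None
    for feat in (0, 1):
        for thr in sorted({s[feat] for s in samples}):
            for sign in (1, -1):
                agree = sum(
                    (1 if sign * (s[feat] - thr) > 0 else 0) == y
                    for s, y in zip(samples, labels)
                )
                acc = agree / len(samples)
                if best is None or acc > best[0]:
                    best = (acc, feat, thr, sign)
    return best

acc, feat, thr, sign = fit_stump(samples, labels)
name = ["income", "debt"][feat]
op = ">" if sign == 1 else "<"
print(f"Surrogate rule: approve if {name} {op} {thr}  (fidelity {acc:.0%})")
```

The fidelity figure is the honest part of such an explanation: it tells the ordinary user how often the simple rule actually matches the black box.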
11. Transparency and Accountability
• The Law chapter of IEEE EAD version 2 says:
• We need to clarify who is responsible in case of accidents.
• For this, transparency and accountability are required.
12. Transparency
• Disclose the following:
– The learning data for ML and the input data of the AI application in actual use.
– The data flow and algorithm of the AI application; a conceptual data flow is acceptable.
– The investors, founders, and developers of the AI application system.
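The disclosure checklist above can be captured as a machine-readable record; a minimal sketch with invented field names and example values:

```python
# A minimal, hypothetical "transparency record" for an ML application,
# covering the disclosure items on this slide (field names are invented).
from dataclasses import dataclass, field, asdict

@dataclass
class TransparencyRecord:
    training_data: str          # provenance of the learning data
    input_data: str             # data consumed by the deployed application
    conceptual_data_flow: str   # a conceptual description is acceptable
    algorithm: str
    stakeholders: list = field(default_factory=list)  # investors, founders, developers

record = TransparencyRecord(
    training_data="loan applications 2015-2018, anonymized",
    input_data="applicant form fields",
    conceptual_data_flow="form -> feature extraction -> scoring model -> decision",
    algorithm="gradient-boosted trees",
    stakeholders=["FundCo (investor)", "A. Dev (developer)"],
)
print(asdict(record))
```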
13. A misunderstood version of accountability
• The wrong one:
– Merely disclosing information (transparency) in natural-language documents to users of the AI application system.
– In Japan, the mistranslation as "responsibility to explain (説明責任)" is badly affecting many people's attitudes toward accountability (Prof. Ohya, Keio Univ.).
14. Accountability must be recognized as:
• Explaining the validity, fairness, and legitimacy of the AI's results/outputs in a manner that AI application users, who are ordinary citizens, can easily understand and accept.
• Clarifying who is responsible for the results of AI application outputs.
• Responsibility implies compensation.
15. New Directions
Technically speaking, we have to think not only about a single AI but also about groups of AIs.
They must be able to generate explanations that ordinary people can easily understand; tough!
Then how?
16. The direction of utilizing AI: a recommendation
Towards TRUST
Trust does not require a precise and detailed proof of the AI's outcome!
17. The direction of utilizing AI: a recommendation
Trust: making someone an authority based on the historical accumulation of technological advancement.
Licensing this authority through a public authority such as the national government, as with medical doctors and lawyers.
Compensation for accidents: when the responsible persons cannot be clearly identified, insurance becomes the last resort.
18. Trustworthy AI (EU)
• Lawful, Ethical, Robust
• Requirements
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability
19. A single AI drone used as a weapon
• AI drones are operated from a remote operating center, even thousands of kilometers away.
– Complexity of the battlefield.
– The responsible person can be unclear because of latency and the difficulty of recognizing the real enemy.
20. A single AI drone used as a weapon
– It is tough to identify who are soldiers and who are civilians.
– To solve this problem, every person's data might be gathered over a long period of time and analyzed with big-data mining technologies to identify who the enemies are.
– That is even worse, but in any case, accountability is recognized as a key factor.
22. Unpredictability of group AI's behavior
• A platoon of autonomous AI drones
– If an attack happens unintentionally, with human commanders set aside, it is unclear who is responsible → battles, even wars, may break out unintentionally!
– The lack of accountability is a problem!
– The CCW (Convention on Certain Conventional Weapons) is trying to ban such weapons, as far as I know.
23. Autonomous AI weapons: who bears liability?
• Autonomous AI weapon, unjustified acts (mis-attack): liability falls on the AI weapon developer and the commander; immunity from liability is a political decision; strict liability for unjustified damage falls on the AI weapon developer (wrong design of attack checking), under international laws.
• AI weapon as a controllable tool: liability falls on the operator.
24. Unpredictability of group AIs: flash crash
• Flash crash: a group of AI traders communicate with each other via, e.g., stock prices as a common language, and a catastrophic result comes out in seconds.
– Deals are made in microseconds.
– Companies do not disclose their AI traders' algorithms because of trade-secret policies.
→ No accountability!
25. How to cope with it
• AI traders' algorithms remain secret.
• Observe the market from the outside with another special AI: an AI observer.
• AI observers try to detect unusual situations as early as possible. Unusual-situation detection technologies are a good research topic for AI.
– Once detected, stop trading.
– Losses or gains before detection are exempt from liability.
– The problem is what happens when the system stops.
→ Problems caused by AI should be solved by AI.
26. The AI observer observes the behavior of a group of AIs and tries to detect unusual situations as early as possible.
We should build a scheme under which we can trust this AI observer!
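As one hypothetical shape an AI observer could take, the sketch below flags unusual market moves with a rolling z-score on returns; the window size, threshold, and data are all made up for illustration.

```python
# Hypothetical "AI observer": flag anomalous price moves with a rolling
# z-score on returns (illustrative only; window and threshold are invented).
from collections import deque
from statistics import mean, stdev

class Observer:
    def __init__(self, window=20, threshold=4.0):
        self.returns = deque(maxlen=window)
        self.threshold = threshold
        self.last_price = None

    def observe(self, price):
        """Return True if this price move looks anomalous."""
        alarm = False
        if self.last_price is not None:
            r = price / self.last_price - 1.0
            if len(self.returns) >= 10:  # wait for a warm-up window
                mu, sigma = mean(self.returns), stdev(self.returns)
                if sigma > 0 and abs(r - mu) > self.threshold * sigma:
                    alarm = True
            self.returns.append(r)
        self.last_price = price
        return alarm

obs = Observer()
calm = [100 + 0.1 * (i % 3) for i in range(30)]   # small oscillation
alarms = [obs.observe(p) for p in calm]           # no alarms expected
crash = obs.observe(80.0)                         # sudden ~20% drop
print(any(alarms), crash)
```

In the slide's terms, "detected then stop" would be the action wired to the alarm; the open question of when to trust the observer itself is untouched by any such sketch.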
27. Conclusion of the accountability part
• The combination of transparency, accountability (including AI observers), licensing, and compensation through insurance can make AI systems based on machine-learning technologies trusted by everyone, including ordinary citizens.
• This is good for us ML and AI researchers and developers.
28. Copyright and intellectual property
• The various copyrights of art, etc., created with the assistance of AI.
• The intellectual property rights created with the assistance of AI.
• Who is the holder of the copyright / intellectual property?
• When two or more parties assert their rights, what kind of role should AI play?
29. What is AI’s position when AI is involved in a task of creating art products?
[Diagram: a loop.]
• Past excellent art products are used to generate a filter that selects good art products.
• AI generates a vast number of art-product candidates.
• The filter selects the candidates likely to be accepted by people.
• End viewers or audience react to the art products made with AI; the new viewers’ reactions (their ideas and sense) feed back into the filter.
The viewers’ ideas and sense are the key point for copyright. See things in this loop.
30. The typical contention
In case of copyright violation, who should be responsible for it?
• The creator who uses the AI, even if the AI is (limited) autonomous?
• The autonomous AI itself?
• The developer of the autonomous AI?
• …..
31. What is AI’s position when AI is involved in a task of creating intellectual products such as patents?
[Diagram: the same loop.]
• Past intellectual properties are used to generate a filter that rejects candidates too similar to existing I.P.
• AI generates a vast number of I.P. candidates, such as patent drafts.
• The filter selects the candidates likely to pass.
• The authority decides whether to accept each candidate as I.P.; users’ reactions (“used or not”) feed back into the filter.
The technical idea and sense are the key point of I.P. In I.P., this process should reject ideas too similar to already existing I.P.s. See things in this loop.
33. Privacy of DNA
DNA is extracted from litter a suspect discarded, his face image is inferred from it, and a poster of his face is made in order to arrest him.
34. DNA privacy, cont.
Cosmetic surgery becomes a fad as a way to escape identification by face image.
A person who has had cosmetic surgery is regarded as a bad guy.
The national government collects and controls every citizen's DNA.
Biologically DNA, informationally SNS history: both are collected and used to control all the people. Who are the targets?
Many people say the EU's regulation of personal data is too strict; however, the above case forces us to think about the importance of every individual's private data.
35. Personal Data Protection
• Personal data is overwhelmingly abundant on the internet.
• The concept of privacy has changed.
• The GDPR (General Data Protection Regulation) covers the whole EU area and even reaches outside the EU.
• In force since 25 May 2018.
36. GDPR
• The GDPR does not assume that there is an anonymization method that allows anonymized data to be freely transferred or distributed without the consent of the data subject.
• Personal data can therefore be transferred to a third party only when:
1. the purpose of use is permitted by the GDPR;
2. the vendor is accountable for the purpose and usage; and
3. the data subject consents.
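The point that anonymization is fragile can be made concrete with k-anonymity: a table is k-anonymous with respect to its quasi-identifiers if every combination of their values occurs at least k times, and small groups make re-identification easy. A minimal check, with an invented toy table:

```python
# Minimal k-anonymity check (illustrative; the dataset and the choice of
# quasi-identifiers are invented). The table is k-anonymous w.r.t. the
# quasi-identifiers if every value combination occurs at least k times.
from collections import Counter

def k_anonymity(rows, quasi_ids):
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

rows = [
    {"zip": "1000", "age": "30s", "disease": "flu"},
    {"zip": "1000", "age": "30s", "disease": "cold"},
    {"zip": "1000", "age": "40s", "disease": "flu"},
]
print(k_anonymity(rows, ["zip", "age"]))  # the ("1000", "40s") group has 1 row
```

A result of 1 means at least one person is uniquely identifiable from the quasi-identifiers alone, which is exactly why the GDPR does not treat such data as freely transferable.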
37. MyData and GDPR Art. 20
The MyData movement
• GDPR Article 20:
• The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller,
• in a structured, commonly used and machine-readable format,
• and shall have the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided.
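A "structured, commonly used and machine-readable format" in the sense of Article 20 can be as simple as a JSON export; a hypothetical sketch (the record layout is invented):

```python
# Hypothetical data-portability export in the spirit of GDPR Art. 20:
# hand the data subject their records in a machine-readable format
# (JSON here; record contents are invented).
import json

def export_personal_data(subject_id, records):
    """Bundle a subject's records for transmission to another controller."""
    payload = {
        "subject": subject_id,
        "format": "json",
        "records": [r for r in records if r["subject"] == subject_id],
    }
    return json.dumps(payload, ensure_ascii=False, indent=2)

records = [
    {"subject": "alice", "kind": "purchase", "item": "book"},
    {"subject": "bob", "kind": "purchase", "item": "pen"},
]
exported = export_personal_data("alice", records)
print(exported)
```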
38. MyData: a personal data ecosystem, from GAFA-held personal data to the data subject
[Diagram: today, platforms such as Google, Facebook, Apple, and MS hold personal data and expose APIs for developers; in the MyData model, the data subject holds the data and exposes the API. The data in both cases spans employment, transportation, purchases, the web, power companies, medicine, government, research, and banks.]
MyData 2016, 2017, 2018 (Helsinki); MyData Japan 2017, 2018 (Tokyo)
39. PLR (Personal Life Repository)
[Diagram: PLR clouds (any online cloud storage can be used) store only encrypted data. Through apps on a personal terminal, the data subject exchanges PLR data with local government, hospitals, family or friends, broadcasting companies, schools, transportation, banks, retailers, hotels, and medical clinics.]
An application that lets a data subject permit other organizations to use her/his personal data without a mediator (Prof. Hasida, U-Tokyo & RIKEN).
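The PLR design keeps only encrypted data in the cloud, with the key held by the data subject. A deliberately simplified sketch of that idea (XOR with a fresh random key, i.e., a one-time pad; a real system would use an authenticated cipher and proper key management):

```python
# Illustrative only: encrypt a record with a key held by the data subject
# before uploading, so the cloud stores only ciphertext. XOR with a fresh
# random key of equal length is a one-time pad; real systems would use an
# authenticated cipher (e.g., AES-GCM) instead.
import secrets

def encrypt(plaintext: bytes):
    key = secrets.token_bytes(len(plaintext))          # the subject keeps this
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = "blood pressure 120/80".encode()
key, stored_in_cloud = encrypt(record)                 # only this is uploaded
print(decrypt(key, stored_in_cloud).decode())
```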
40. PDS: Personal Data Storage
• Also called a Personal Data Store/Vault or a Personal Data Cloud.
[Diagram: personal data from many sources flows through a mediator (via AI) to services that use personal data.]
Features:
• Auto upload
• Encryption with a personal secret key
• Internet ID
• API-of-Me
• Usage log / tracing the route
• Unified data format
• Portability
41. IEEE EAD: Personal Data and Individual Access Control
• EAD version 2
• Contains not only privacy protection but many ideas about the design of AI systems that access personal data.
42. AI personal agent: an AI agent covering life from pregnancy to the tomb
• Health-care records before and after birth
• Citizenship given just after birth → government data
• School
• Moving
• Cross-border moving: immigration data
• Purchase history
• IoT and wearables (telecommunications data)
• SNS, digital media
• Job training
• Commitment to society through jobs and volunteer work
• Contracts, insurance, finance
• Death (how to treat her/his digital heritage)
43. Regional Jurisdiction
• Laws differ country by country, but the right to privacy protection (a basic human right) should be assured across countries.
• A data subject should be able to access her/his own personal data in a cross-border fashion.
Is this legally possible?
Under the GDPR, cross-border access is allowed within the EU area, or with areas whose personal data protection laws are deemed adequate.
44. Agency and Control
• In order to define how widely an AI agent may act, personally identifiable information (PII) should be explicitly defined.
• The collection and transfer of personal data should comply with the fundamental policy of the GDPR.
45. Transparency and Access
• A data subject should have the right to know how her/his personal data is collected, used, stored, and discarded.
• A UI through which a data subject can easily correct her/his personal data is required.
• To implement these ideas, we employ AI technologies.
46. How to obtain consent
• We have to develop AI systems with which we can obtain
consent from data subjects who are not familiar with AI.
• AI can easily obtain consent again whenever the situation changes.
• employee << employer
This power imbalance must be overcome, because
consent given between two parties whose power is very
imbalanced is legally very doubtful.
47. How to obtain consent: the case of
informationally vulnerable people
• The elderly, people suffering from dementia, infants
and others who are informationally vulnerable
should be watched over.
• For this purpose, AI is a crucial technology.
48. The right to be forgotten
• Individuals demand that a search-engine company erase
pages that describe them.
• AI assists in determining whether the demanded pages are to be
erased or not. AI might utilize a large body of records of
past erase/keep decisions.
• A good application of AI technologies.
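One way to read the "records of past decisions" idea is nearest-precedent lookup. The toy sketch below uses string similarity; the precedent data and the very idea of deciding by surface similarity are illustrative assumptions, not how any real search engine operates.

```python
from difflib import SequenceMatcher

# Hypothetical precedents: (description of the demanded page, past decision)
PRECEDENTS = [
    ("news article about a spent minor conviction from 20 years ago", "erase"),
    ("official record of a politician's recent public statement", "keep"),
    ("outdated personal address published without consent", "erase"),
]

def suggest_decision(description: str) -> str:
    """Suggest erase/keep based on the most similar past case.
    A real system would leave the final decision to a human reviewer."""
    best = max(PRECEDENTS,
               key=lambda p: SequenceMatcher(None, description, p[0]).ratio())
    return best[1]

print(suggest_decision("page listing an old personal address without consent"))
# erase
```

The point of the sketch is the workflow, not the similarity measure: AI proposes a decision from precedent, and a human confirms it.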
49. Easy to open up to, or not
• Observations of the usage of ELIZA, which was developed in 1966,
show that:
• People tend to disclose personal, even private, information
when they converse with an AI, because the AI is not human.
• The same phenomenon can be expected when a person
converses with a robot, humanoid or android that looks very similar
to a human being.
• If the robot maliciously extracts private personal information and sends it
to a malicious server... quite scary.
50. • If a companion-animal robot of an elderly person is
connected to the internet and infected by
malware,
• s/he lets her/his guard down and talks about
her/his financial assets,
• then her/his assets can be stolen.
• Digital canine madness (digital rabies)
51. – Can AI know what is private for her/him?
– If so, the private information the AI collects should be
protected, with AI technologies, from outside bad
actors (who might themselves be AI).
AI should have the ability of
privacy protection by design
52. What counts as private depends on each individual and
each situation
When he is having an extra-marital affair,
hotels or restaurants might be private information
• Can AI decide which information is private?
• But if AI can do this, the AI is almost an AGI
• In other words, the AI recognizes human feelings,
good and bad intentions, and so on: scary!