This presentation describes recent ethical issues related to AI and ML algorithms. Its focus is data and algorithmic bias, algorithmic interpretability, and how the GDPR relates to these issues.
Ethical Issues in Machine Learning Algorithms (Part 2)
Vladimir Kanchev
The presentation covers the types of bias found in AI/ML systems - data bias, algorithmic bias, and lack of interpretability - explains why they arise, and outlines the major approaches for reducing them.
Ethical Issues in Machine Learning Algorithms (Part 3)
Vladimir Kanchev
The presentation deals with ethical issues in a few currently widely used machine learning (or AI) technologies and algorithms. The ML applications are described in detail, along with their current state of the art, specific challenges, and ethical problems, and current solutions from both academic and industrial perspectives are given. A mixture of academic and applied sources is used, so the presentation aims to be of interest to both students and practitioners.
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, and in particular the challenges around bias and fairness. It also includes studies on how we as humans perceive AI's influence in our private and working lives.
Explainable AI makes algorithms transparent: it interprets, visualizes, and explains their behavior in support of fair, secure, and trustworthy AI applications.
The impact of AI on society keeps growing - and it is not all good. We as data scientists have to put in real work to avoid ending up in ML hell.
This presentation was given at the Dutch Data Science Week.
Explainable AI (XAI) is becoming a must-have non-functional requirement (NFR) for most AI-enabled product or solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
Recent advances in machine learning have created powerful algorithms, pushing the boundaries of artificial intelligence (AI). As machine learning becomes increasingly prevalent, one of the biggest issues it needs to address is the bias that seeps into AI. This presentation focuses on bias in AI algorithms and provides a range of examples where AI is racist or sexist. We explore causes such as biased data, lack of attention to the inputs, and insufficient understanding of the algorithm. Finally, we propose steps that could help reduce these incidences of discrimination.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
A Theory of Knowledge Lecture given by Mark Steed, Director of JESS Dubai on Monday 4th March 2019
The lecture explains how AI works and then looks at some of the ethical implications
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, and in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Responsible Data Use in AI - core tech pillars
Sofus Macskássy
In this deck, we cover four core pillars of responsible data use in AI: fairness, transparency, explainability, and data governance.
Introduction to the ethics of machine learning
Daniel Wilson
A brief introduction to the domain that is variously described as the ethics of machine learning, data science ethics, AI ethics and the ethics of big data. (Delivered as a guest lecture for COMPSCI 361 at the University of Auckland on May 29, 2019)
Nick Schmidt of BLDS, LLC presented to the Maryland AI meetup on June 4, 2019 (https://www.meetup.com/Maryland-AI). Nick discusses ideas of fairness and how they apply to machine learning, explores recent academic work on identifying and mitigating bias, and shows how his work in lending and employment can be applied to other industries. He explains how to measure whether an algorithm is fair and demonstrates techniques that model builders can use to ameliorate bias when it is found.
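One common way to quantify fairness of the kind this talk discusses is the disparate impact ratio: comparing a model's selection rates across groups. The sketch below is a hypothetical illustration (the data and function names are invented, not taken from the talk):

```python
# Hypothetical sketch: measuring disparate impact for a binary
# classifier's decisions, split by a protected attribute.

def selection_rate(decisions):
    """Fraction of positive (e.g. approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of selection rates between two groups. Values near 1.0
    suggest parity; values below ~0.8 are often flagged under the
    informal 'four-fifths rule' used in US employment contexts."""
    return selection_rate(decisions_a) / selection_rate(decisions_b)

# Toy example: model approvals (1) / denials (0) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # 5/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 approved
print(disparate_impact_ratio(group_b, group_a))  # prints 0.6 -> worth investigating
```

In practice one would use a maintained library rather than hand-rolled helpers, and this single ratio is only one of several competing fairness definitions (demographic parity, equalized odds, etc.) that such talks typically compare.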
How do we protect privacy of users when building large-scale AI based systems? How do we develop machine learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WS...
Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we will present open problems and research directions for the data mining / machine learning community.
Please cite as:
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. WSDM 2019.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare
Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
“AI is the new electricity” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries and so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
Contemporary AI engenders hopes and fears – hopes of harnessing AI for productivity growth and innovation – fears of mass unemployment and conflict between humankind and an artificial super-intelligence. Before we let AI drive our hopes and fears, we need to understand what it is and what it is not. Then we need to understand how to implement AI in an ethical and responsible manner. Only then can we harness the power of AI to our benefit.
Big Data can generate new knowledge and perspectives through inference, and the paradigm that results from using it creates new opportunities. Big Data has great influence at the governmental level, positively affecting society, and these systems can be made more efficient by applying transparency and open-governance policies such as Open Data. After developing predictive models of target-audience behavior, Big Data can be used to generate early warnings for various situations. There is thus a positive feedback loop between research and practice, with rapid discoveries drawn from practice.
DOI: 10.13140/RG.2.2.14677.17120
For this assignment, you are given an opportunity to explore and.docx
shanaeacklam
For this assignment, you are given an opportunity to explore and apply a decision making framework to analyze an IT-related ethical issue.
A framework provides a methodical and systematic approach for decision making.
Module 2 - Methods of Ethical Analysis (see LEO Content – Readings in Week 2) describes three structured frameworks for ethical analysis, namely Reynolds' Seven-Step Approach, Kidder's Nine Steps, and Spinello's Seven-Step Process. There are several ways described in Module 2 to systematically approach an ethical dilemma. Each of the frameworks described has its merits, and each will result in an ethical decision if straightforwardly and honestly applied.
In addition, you will want to consider the ethical theories described in Module 1 – Introduction to Theoretical Ethical Frameworks (see LEO Content – Readings in Week 1), which help decision makers find the right balance concerning the acceptability of and justification for their actions.
For this paper, all of the following elements must be addressed:

Describe a current IT-related ethical issue. Since this is a paper exercise, not a real-time situation, it is best if you construct a brief scenario where this issue comes into play and thus causes an ethical dilemma. The dilemma may affect you, your family, your job, or your company; or it may be a matter of public policy or law that affects the general populace. See the list below for suggested issues, which may be a source of an ethical dilemma. It is not necessary to incorporate answers to the companion questions of the list subjects in your paper – they are only there to define the issue.
Define a concise and separate problem statement extracted from the above description or scenario. It is best if you define the specific problem caused by the dilemma - the problem that requires a specific ethical decision in order to resolve the dilemma. Be aware that if it is a matter of public policy or law, implementing a solution may require action by a regulatory body or congressional approval.
Analyze your problem using one of the structured decision-making frameworks from Module 2. Make sure that you identify the decision-making framework used, and use the steps of the selected framework as major headings in your analysis. Then, consider and state the impact of the decision you made on an individual, an organization, stakeholders, customers, suppliers, and the environment, as applicable, and state and discuss the applicable ethical theory from Module 1 that supports your decision.
Concerning your paper: prepare a minimum 3-5 page, double-spaced paper and submit it to the LEO Assignments Module as an attached Microsoft Word file. Provide appropriate American Psychological Association (APA) reference citations for all sources you use. In addition to critical thinking and analysis skills, your paper should ...
Over the past ten years, data on the Internet has grown enormously, and we are the fuel of this increase. Business owners produce apps for us, and we feed these companies with our data; unfortunately, it is all our private data. In the end, through our private data, we become a commodity that is sold to the highest bidder.
Without security, there is no privacy, and ethical oversight and constraints are needed to ensure an appropriate balance. This article covers what big data comprises, what it includes, how data is collected, and how it circulates on the Internet. In addition, it discusses the analysis of data, methods of collecting it, and the factors behind the ethical challenges, as well as the rights that must be observed and the privacy the user has.
The Future of Moral Persuasion in Games, AR, AI Bots, and Self Trackers
Sherry Jones
4-18-19 - This presentation was shown at the eLearning Consortium of Colorado (eLCC) Annual Conference. The focus of the talk is on the various ethical problems that currently exist in the technology industry and predictions of how future technologies, such as Digital Games, AR, AI Bots, and Self Trackers, will be designed to morally persuade users.
The presentation that includes the video can be accessed here: http://bit.ly/futureethics
e-SIDES presentation at Leiden University 21/09/2017
e-SIDES.eu
On September 21st the eLaw team member of e-SIDES, Magdalena Jozwiak, made a presentation of the e-SIDES project at a lunch event at the Leiden University’s Law Faculty. The event, organized within the Interaction Between Legal Systems research theme, attracted an interdisciplinary audience and was followed by a discussion on e-SIDES, its goals and approaches.
"Towards Value-Centric Big Data" e-SIDES Workshop - "Privacy Preserving Techn...
e-SIDES.eu
The following presentation was given by Tjerk Timan, policy analyst at TNO and BDVA, during the e-SIDES workshop "Towards Value-Centric Big Data" held on April 2, 2019 in Brussels.
Ethics for the Digital Age.docx
SANSKAR20
Ethics for the Digital Age
By Gry Hasselbalch on 2016-02-05
ANALYSIS: This January the European Data Protection Supervisor presented his new "Ethics Advisory Group", a group of experts that will help him "reconsider the ethical dimension of the relationships between human rights, technology, markets and business models and their implications for the rights to privacy and data protection in the digital environment." He is not the first European decision maker or thought leader to bring forward ethics as a guiding principle in the digital age. Over the last year digital ethics, and in particular data ethics, have become the "talk of the town" in Europe. Based on the realisation that laws have not kept pace with the development of digital technologies, technologists, academics, policymakers and businesses are today revisiting cultural values and moral systems as they grope for a new ethical framework for the digital age.
Ethics of Technology
Throughout history, technological developments have, at some point during their implementation into society, forced us to revisit laws, and in particular ethical value systems and limits. Time and again we are faced with the fact that technology is in fact not neutral, but carries ethical implications in its very design. The photograph, in its early stage of implementation in the late 19th and early 20th century, was discussed as both art and reality. This discussion entered the courtrooms, and the legal rights over a photograph were determined. It was, however, not only legal rights that were defined, but a delineation of the very ethical implications of a technology (the camera, the photograph) that could reproduce the appearance of an individual with such accuracy. It was an examination of the particularly human consequences (distress and humiliation) of the capacities of this new technology: defining a right and wrong, and attempting to morally manage its implications for individuals.
What we are experiencing in these years is a pace of technological development never seen before. Not only did the World Wide Web and the capacities of digital technologies develop over just a few decades, but the digital evolution expanded into practically every area of life and society over an even shorter period of time. It only took a few years after Tim Berners-Lee invented an open information space interlinked by hyperlinks in 1989 before the first online businesses emerged and ordinary people started using internet services in the mid-1990s.
Evidently laws have not followed pace with the countless ethical implications of today’s rapid technological development. Now we are questioning the ethics of automatic systems designed to collect data on us en masse, algorithms designed to predict and profile us, technologies used to surveil us and manipulate us and not the least business models profiting from the most private details on individuals. The only way we can do this is by revisiting our values and morals, t ...
Ethical Issues in Machine Learning Algorithms. (Part 1)
1. Ethical Issues in Machine Learning Algorithms (Part 1)
IEEE Young Professionals Bulgaria
Vladimir Kanchev, PhD
2. Intro
Dr. Kim (2018, May 31). Human ethics for artificial intelligent beings: An ethics scary tale. Retrieved from https://aistrategyblog.com/category/utilitarianism/
3. Intro
Data Science (DS) and Machine Learning (ML) systems:
• can now automate a lot of tedious and dangerous work.
• are already part of our lives.
• are trusted with making important decisions.
4. Intro
But DS and ML systems:
• have innate biases which do not coincide with social norms and have no ethical grounds.
• fail in ways that are not humanly interpretable.
• can have a negative economic and social impact – eliminating jobs.
• have security issues – chatbots, autonomous cars, etc.
5. Contents
1. Advances in Data Science (DS) and Machine Learning (ML) fields.
2. Ethics and ethical issues.
3. Current legislation. GDPR.
4. ML data bias, algorithmic bias, and interpretability issues.
5. Ongoing academic research problems.
7. Some AI definitions
What is Artificial Intelligence (AI)?
• the science and engineering of making intelligent machines. (John McCarthy)
• AI is a branch of Computer Science (CS) with both theoretical and practical aspects.
• AI shares aims and approaches with robotics, control systems, speech recognition, etc.
• AI is a buzzword nowadays; in the public imagination it overlaps with ML and DS.
8. A tree of AI subfields
Atlam, H., Walters, R., & Wills, G. (2018). Intelligence of Things: Opportunities & Challenges.
9. Some ML definitions
What is machine learning (ML)?
• Field of study that gives computers the ability to learn without being explicitly programmed. (Arthur Samuel)
• Study of algorithms that improve their performance P at some task T with experience E, given a well-defined task <T, P, E>. (Tom Mitchell)
• ML has a practical and a solid theoretical side.
• It needs a certain amount of training data (labeled or unlabeled) to build knowledge (models).
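Mitchell's <T, P, E> framing can be shown with a minimal sketch (an invented toy example, not any particular library's API): the task T is classifying a number as "low" or "high", the performance measure P is accuracy on held-out points, and the experience E is a set of labeled examples. A nearest-mean classifier typically performs better as it gets more experience.

```python
import random

# Task T: classify a number as "low" or "high".
# Performance P: accuracy on a held-out test set.
# Experience E: labeled (value, label) training pairs.

def train(examples):
    """Learn the mean value of each class from labeled examples."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(means, value):
    """Predict the class whose learned mean is closest to the value."""
    return min(means, key=lambda label: abs(means[label] - value))

def accuracy(means, test_set):
    """Performance measure P: fraction of correct predictions."""
    hits = sum(predict(means, v) == y for v, y in test_set)
    return hits / len(test_set)

random.seed(0)
def sample(n):
    """'low' values cluster near 2, 'high' values near 8."""
    return [(random.gauss(2, 1), "low") for _ in range(n)] + \
           [(random.gauss(8, 1), "high") for _ in range(n)]

test_set = sample(100)
small = accuracy(train(sample(2)), test_set)    # little experience E
large = accuracy(train(sample(200)), test_set)  # much more experience E
```

With well-separated classes even a little experience does reasonably well here; the point is only that P is measured on T and driven by E.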
10. Recent trends in ML
• ML has gained wide popularity in the CS community of programmers and researchers.
• ML has become another buzzword, like AI.
• Many implementations of ML algorithms can be found in various programming libraries.
11. Some DS definitions
What is data science? Data science vs. statistics:
• Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician. (Josh Wills)
Advances in the analytics field:
• real-time data streaming, e-marketing, healthcare, retail.
Advances in Big Data:
• academic/public, commercial, and private big datasets.
12. DS vs. ML
Loy, A. (2015). Embracing Data Science. UMAP Journal, 36(4).
13. Some ML definitions
What is deep learning (DL)?
• DL methods are ML methods based on learning data representations; they usually involve training neural networks with many layers (sometimes more than 100).
• Fast advances during the last decade, related to the Big Data boom and cheap GPU hardware.
• First developed as an applied and then as a theoretical field.
• A number of CV and ML problems have been solved and built into commercial products.
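The idea of "learning data representations" can be seen in miniature with a hand-set (not trained) two-layer network, an illustrative sketch with weights chosen by hand: the hidden layer re-encodes the inputs (x1, x2) as (OR, AND), so that XOR, which no single linear threshold over the raw inputs can compute, becomes a simple threshold over the hidden representation.

```python
# A minimal two-layer threshold network computing XOR.
# Weights are hand-set for illustration; in deep learning such
# intermediate representations are learned from data instead.

def step(z):
    """Threshold activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: logical OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: logical AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND = XOR

truth_table = {(a, b): xor_net(a, b) for a in (0, 1) for b in (0, 1)}
```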
15. Recent trends in ML
• Flourishing of DL and reinforcement learning algorithms and of software frameworks such as TensorFlow.
• Use of more hardware resources – better processors, ubiquitous clouds, and supercomputers.
• Increased accuracy due to the use of larger labeled datasets.
• Wide application to classic CS fields such as computer vision (CV), natural language processing (NLP), computational finance, etc.
17. Recent trends in ML
In recent years, IT companies such as FB and Google have:
• transformed themselves into data companies.
• built world-class AI research groups.
• accumulated a lot of Big Data about customers that is not publicly available.
• improved digital marketing through user profiling and personalization.
18. Challenges in DS & ML fields
What’s next?
• Both fields follow a boom-and-bust cycle; is an AI/DL winter coming?
• Technological vs. scientific development in the AI and ML fields.
• Is society ready to accept AI/ML/DS systems?
19. Contents
1. Advances in Data Science (DS) and Machine Learning (ML) fields.
2. Ethics and ethical issues.
3. Current legislation. GDPR.
4. ML data bias, algorithmic bias, and interpretability issues.
5. Ongoing academic research problems.
21. Some ethics definitions
Ethics or moral philosophy:
• a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct. (Internet Encyclopedia of Philosophy)
Ethics vs. laws vs. religion:
• these terms have a common root but do not coincide.
Data ethics:
• how data affects human well-being, both positively and negatively.
Ethical values:
• autonomy, equality, etc.
22. Ethics in real life
Lee Sterrey (2014, March 24). Include ethics when teaching big data. Retrieved from https://www.ibmbigdatahub.com/blog/include-ethics-when-teaching-big-data
23. Ethics of technology
Definition:
• an interdisciplinary research area concerned with all moral and ethical aspects of technology in society. (Luppicini, 2008)
It views society and technology as interrelated and aims to:
• use technology ethically.
• prevent misuse.
• guide new technological advances.
• benefit society.
http://www.liquisearch.com/technoethics
24. Ethics, Society, and Technology
Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5-14.
25. Some ethical concepts
What is an ethical issue?
• Def.: Moral issues are those actions which have the potential to help or harm others or ourselves.*
What is an ethical dilemma?
• Def.: A situation in which a difficult choice has to be made between two courses of action, either of which entails transgressing a moral principle.**
* https://philosophy.lander.edu/ethics/issue.html
** https://en.oxforddictionaries.com/definition/ethical_dilemma
27. Ethical DS cases
• The Facebook emotions study (2014) – psychological research, or just another A/B test?
• Panama Papers (2016) – use of hacked data.
• Cambridge Analytica case (2018) – psychological profiling.
28. Ethical DS cases
DS/ML ethical cases in the near future:
• Autonomous cars.
• Autonomous weapons – meaningful human control?
• Internet of Things (IoT).
• Personalized medicine (genomic information).
• Social Credit System (China) – just another credit score?
29. Ethical issues
• Innovators are restricted by the given state of scientific and technical knowledge.
• Each technical innovation brings risks and benefits.
• How to manage risks when implementing an innovation?
30. Ethical issues in other fields
Ideas adopted from other fields:
• Medical experimentation
• Scientific research
• Professional communities
31. How to solve ethical issues
What approach is best for solving DS/ML ethical issues?
• strict national regulation vs. international regulation vs. a looser code of ethics?
Different approaches/priorities:
• development of technology
• business growth; more investment in the DS/ML field
• public interest
Innovation-first or regulation-first policy.
32. Contents
1. Advances in Data Science (DS) and Machine Learning (ML) fields.
2. Ethics and ethical issues.
3. Current legislation. GDPR.
4. ML data bias, algorithmic bias, and interpretability issues.
5. Ongoing academic research problems.
34. Legislation
• Falls behind technological progress for most DS/ML ethical concerns.
• A long tradition of regulation for consumer, security, and privacy protection in the USA.
• The EU moved ahead in 2018 with GDPR.
35. Legislation
Data privacy:
• has already been a major public concern and a political issue.
• has already been introduced into legislation.
Other DS/ML ethical issues:
• are still a subject of debate and are not fully introduced into legislation.
• have counterparts in other fields regulated by other laws.
38. Legislative approach in the USA
• Focus on free speech and transparency; restriction of personal data processed by the government.
• Different legislation at the state level; a lack of legislation at the national level.
• No long tradition of privacy legislation.
• In general, a more business-friendly environment; belief in industry self-regulation.
39. Legislative approach in the EU
• Privacy is a fundamental human right; a long tradition of privacy legislation.
• Stricter EU privacy law, applied to all industries.
• Introduction of the GDPR legislation.
• A less business-friendly environment.
• EU regulations lead to conflicts with US IT corporations. A new special tax on big tech (under discussion 2018-19).
40. Legislative approach in China
• Plans to implement its own data privacy regulations – national security is a top priority.
• Debate between the US and EU approaches.
• Widespread mobile devices and services, and thus growing concern about data privacy.
• Discussions held within local Confucian traditions behind „The Great Firewall of China”.
41. Legislative approach in India
• A densely populated and diverse country; specific cultural traditions of privacy.
• No regulatory tradition of personal data protection.
• No solid regulatory framework for anonymization and intellectual property (IP).
• A new data protection bill (2018) tries to adapt to EU and US legislation because of India’s large BPO industry.
43. GDPR
• A legally binding regulation, not a directive or a recommendation.
• Expanded definition of personal data – including a person’s name, location, online identifiers, biometrics, genetic information, etc.
• Requires 72-hour notification of data breaches.
• Record-keeping requirements.
• Data protection by design – a legal requirement.
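As one hedged illustration of "data protection by design" (GDPR does not prescribe any particular technique, and the key handling and record fields below are invented for the example): direct identifiers can be pseudonymized with a keyed hash before a record enters an analytics pipeline, with the secret key stored separately from the data.

```python
import hashlib
import hmac

# Illustrative sketch only: pseudonymize a direct identifier with a
# keyed, one-way hash so analytics code never sees the raw name.
# In practice the key must live in a separate, access-controlled store.
SECRET_KEY = b"store-this-key-separately"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "city": "Sofia", "purchase": 42.0}
safe_record = {**record, "name": pseudonymize(record["name"])}
```

Note that pseudonymized data still counts as personal data under GDPR; this only reduces exposure, it does not anonymize.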
45. GDPR
• Consent from users should be clearly given, informed, and specific; it can be withdrawn at any time without consequences.
• A right to algorithmic explanation.
• Introduction of data processors/controllers.
• Companies using EU citizens’ data are subject to it.
• Fines for noncompliance of up to 20 million euros or 4% of global revenue, whichever is higher.
47. GDPR
GDPR requirements for data protection:
1. Big data analytics must be fair.
• No bias or discrimination. Consumers should be aware of data collection. Processing should be transparent.
2. Permission to process data.
• Unambiguous consent from users. User consent for data use by third parties.
3. Purpose limitation.
• No further processing incompatible with the original purpose.
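Purpose limitation can be made concrete in code. The sketch below is a hypothetical design (the class, method names, and purposes are invented for illustration, not a GDPR-mandated mechanism): each dataset carries the purposes the user consented to, and every processing step must declare its purpose before it can touch the data.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentedData:
    """A record bundled with the purposes its subject consented to."""
    payload: dict
    allowed_purposes: set = field(default_factory=set)

    def use_for(self, purpose: str) -> dict:
        """Release the payload only for a consented purpose."""
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"no consent for purpose: {purpose}")
        return self.payload

data = ConsentedData({"email": "user@example.com"}, {"billing"})
invoice_info = data.use_for("billing")  # allowed: user consented to billing
# data.use_for("marketing")  # would raise PermissionError: no such consent
```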
48. GDPR
4. Holding on to data.
• Use only the data you need to process for a specific purpose.
5. Accuracy.
• Incorrect data must be dismissed. Big data may not represent the general population. Hidden biases in data should be considered in final results. No discrimination during profiling.
6. Individual rights and access to data.
• Individuals should be allowed to access their own data.
49. GDPR
7. Security measures and risk.
• Security risks should be specifically addressed during processing.
8. Accountability.
• Big data processing without a defined hypothesis might cause problems. So might biased profiling.
9. Controllers and processors.
• No clear distinction, as both operations are performed by AI algorithms.
50. GDPR
• GDPR is now a buzzword, as is AI.
• Its enforcement started on May 25th, 2018.
• GDPR requirements should be built into existing automated ML services – GDPR compliance.
• People and corporations should be convinced that GDPR requirements are beneficial to ML services.
51. Contents
1. Advances in Data Science (DS) and Machine Learning (ML) fields.
2. Ethics and ethical issues.
3. Current legislation. GDPR.
4. ML data bias, algorithmic bias, and interpretability issues.
5. Ongoing academic research problems.