This document discusses codes of ethics and concerns regarding artificial intelligence and autonomous systems. It provides an overview of the various IEEE P7000 working groups that are examining issues around big data, machine learning, and ethics. It also mentions some case studies and examples that raise ethical issues relating to areas like bias, privacy, and fairness. The goal is to help engineers address ethical considerations in AI system design and development.
An introductory take on the ethical issues surrounding the use of algorithms and machine learning in finance, education, law enforcement and defense. This work was stimulated by, but is not a product or authorized content from the IEEE P7003 WG.
Disclaimer: This work is mine alone and does not reflect the views of the IEEE, the IEEE P7003 WG, or my employer.
Trusted, Transparent and Fair AI using Open Source – Animesh Singh
The document discusses IBM's efforts to bring trust and transparency to AI through open source. It outlines IBM's work on several open source projects focused on different aspects of trusted AI, including robustness (Adversarial Robustness Toolbox), fairness (AI Fairness 360), and explainability (AI Explainability 360). It provides examples of how bias can arise in AI systems and the importance of detecting and mitigating bias. The overall goal is to leverage open source to help ensure AI systems are fair, robust, and understandable through contributions to tools that can evaluate and improve trusted AI.
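To make the fairness side concrete, here is a minimal sketch of disparate impact, one of the group-fairness metrics that toolkits such as AI Fairness 360 implement. This is plain Python, not the AIF360 API, and the hiring data is hypothetical:

```python
# A minimal sketch (not the AIF360 API) of disparate impact, one of the
# group-fairness metrics toolkits such as AI Fairness 360 implement.
# The hiring data below is hypothetical.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates; values well below 1.0 suggest the
    unprivileged group receives favorable outcomes less often."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical hiring decisions (1 = hired, 0 = rejected).
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # unprivileged: 20% hired
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]  # privileged:   50% hired

print(disparate_impact(group_a, group_b))  # 0.4, below the common "80% rule"
```

A ratio this far below 1.0 is the kind of signal that prompts bias-mitigation steps before deployment.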
Sri Krishnamurthy presents on machine learning and AI in finance. He discusses how the 4th industrial revolution is being driven by emerging technologies like AI, robotics, and 5G. Machine learning and AI are revolutionizing the finance industry by enabling real-time analytics, predictive analytics, and automating tasks. Sri outlines the machine learning workflow and key areas where machine learning is being applied in finance like trading strategies, risk management, and fraud detection.
Mathematical Finance & Financial Data Science Seminar
AI and machine learning are entering every aspect of our lives. Marketing, autonomous driving, personalization, computer vision, finance, wearables, and travel have all benefited from the advances in AI in the last decade. As more and more AI applications are deployed in enterprises, concerns are growing about potential "AI accidents" and the misuse of AI. With increased complexity, some are questioning whether the models actually work! As the debate about fairness, bias, and privacy grows, there is increased attention to understanding how the models work and whether they are thoroughly tested and designed to address potential issues.
The area "Responsible AI" is fast emerging and becoming an important aspect of the adoption of machine learning and AI products in the enterprise. Companies are now incorporating formal ethics reviews, model validation exercises, and independent algorithmic auditing to ensure that the adoption of AI is transparent and has gone through formal validation phases.
In this talk, Sri will introduce algorithmic auditing and discuss why it will become a formal process that industries using AI need. He will also discuss the emerging risks in the adoption of AI and how QuSandbox, the platform his company is building, will address the emerging needs of formal algorithmic auditing practices in enterprises.
Responsible AI: An Example AI Development Process with Focus on Risks and Con... – Patrick Van Renterghem
Organisations need to make sure that they use AI in an appropriate way. Martijn and Hugo explain how to ensure that developments are ethically sound and comply with regulations, how to establish end-to-end governance, and how to address bias and fairness, interpretability and explainability, and robustness and security.
During the conference, we looked at an example AI development process, focusing on the risks to be managed and the controls that can be established.
Demystifying Artificial Intelligence: Solving Difficult Problems at ProductCa... – Carol Smith
This document discusses a presentation on demystifying artificial intelligence and solving difficult problems. The presentation covers topics such as why AI experiences can be challenging, what AI is, different types of machine learning, how humans teach and monitor AI systems, ensuring AI is designed responsibly, and communicating about AI systems. It uses examples such as a hypothetical lawn care treatment selection system to illustrate concepts around data collection and training, potential biases, and unintended consequences that can arise.
This presentation looks at how AI works and how it is currently used in education, and then outlines some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in Education – particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to discuss general ethical concerns about how AI is being used in society, looking at the issue of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education, arguing that we need to have a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues to become better informed about the subject.
This document discusses several efforts around developing governance and oversight for artificial intelligence:
1. An open letter in 2016 signed by Elon Musk and Stephen Hawking called for research into ensuring AI systems are robust and beneficial.
2. In 2017, researchers from companies like Google, Facebook, and IBM proposed the Asilomar AI Principles, a set of 23 guidelines for developing beneficial AI, including that the goal should be beneficial intelligence and caution around future capabilities.
3. Other discussions focused on privacy, consent, identity, bias, and involving a global community in debates around AI governance to develop oversight that protects humanity.
The document discusses the business case for applied artificial intelligence. It covers how AI can enhance enterprises and how to build successful AI ventures. Specifically, it notes that global AI business value is forecast to reach $3.9 trillion by 2022. It then discusses how to introduce AI in enterprises, including defining objectives and benefits. It also outlines challenges of AI introduction. Additionally, it provides a framework for successful AI startups with factors around value creation, implementation, and competitive positioning. The document concludes by discussing implementation aspects like machine learning canvases and technical debts, as well as ethical considerations around issues like bias, privacy, and accountability.
AI in Manufacturing & the Proposed EU Artificial Intelligence Act – Barry O'Sullivan
An overview of Ireland's positioning in artificial intelligence, a summary of the European Commission High-Level Expert Group on Artificial Intelligence "Ethics Guidelines for Trustworthy Artificial Intelligence", and the proposed AI Act. Presented as part of an event organised by the Confirm SFI Centre for Smart Manufacturing.
The field of Artificial Intelligence (AI) has progressed rapidly in the past few years. AI systems are having a growing impact on society, and concerns have been raised about whether AI systems can be trusted. One way to address these concerns is to apply ethically aligned design principles to the development of AI software. Yet these principles are still far from practical application. This talk provides state-of-the-art empirical insight into what researchers and professionals should do today when a client wants ethics to be added to their system.
Susskind, 'A Manifesto for AI in the Law', ICAIL 2017, London, 2017 – Richard Susskind
Keynote address at the 16th International Conference on Artificial Intelligence and Law, June, London, including 15-point manifesto for the AI/Law research community.
This document provides an introduction to artificial intelligence and machine learning, including their history, common examples, and applications in finance (FinTech). It discusses key concepts like the difference between artificial intelligence, machine learning, and their various subfields. The document also outlines machine learning techniques and processes, and provides examples of real-world applications. Global initiatives applying these technologies in finance are highlighted.
AI Governance and Ethics - Industry Standards – Ansgar Koene
Presentation on the potential for ethics-based industry standards to function as a vehicle for addressing socio-technical challenges from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".
Organizations today have vast amounts of data. Typically, when it comes to data analysis, we have to know our measures of success before we design our BI. These are usually manifested as competency- or domain-driven KPIs, but what if those metrics don't actually measure success at all? In this talk we will discuss how to leverage Azure Machine Learning to answer questions about success in your organization and how to find the KPIs that really matter and drive results.
AI, Machine Learning & Deep Learning Risk Management & Controls: Beyond Deep Learning and Generative Adversarial Networks: Model Risk Management in AI, Machine Learning & Deep Learning
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor... – Daniel Katz
This document provides an overview of complex systems models and big data in the social sciences. It discusses how data is becoming more abundant due to decreasing storage costs and increasing computing power. This has led to a data-driven world where large datasets are analyzed using machine learning techniques like classification, clustering, and regression. Examples are given of applications in various domains like retail, healthcare, and law. The document also discusses challenges like high-dimensional data and the need for feature extraction. Overall, it frames the current era as one of big data and data-driven theory building using inductive reasoning and machine learning.
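To make one of the techniques mentioned above concrete, here is a minimal sketch of clustering: Lloyd's k-means with k = 2 on one-dimensional toy data. The income figures are hypothetical, and a real analysis would use a library implementation:

```python
# A minimal sketch of clustering, one of the techniques the lecture
# surveys: Lloyd's k-means with k = 2 on one-dimensional toy data.
# Real analyses would use a library implementation.

def kmeans_1d(points, c0, c1, iters=10):
    """Alternate assignment and centroid update; assumes both clusters
    stay non-empty (true for this toy data)."""
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return c0, c1

# Hypothetical household incomes (k$) from two neighborhoods.
data = [30, 32, 35, 31, 90, 95, 88, 93]
lo, hi = kmeans_1d(data, c0=min(data), c1=max(data))
print(lo, hi)  # 32.0 91.5
```

The two recovered centroids summarize the two groups without any labels being supplied, which is the defining feature of unsupervised clustering.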
The document discusses various ways that bias can arise in artificial intelligence systems and machine learning models. It provides examples of bias found in facial recognition systems against dark-skinned women, sentiment analysis showing preference for some religions over others, and risk assessment algorithms used in criminal justice showing racial disparities. The document also discusses definitions of fairness and bias in machine learning. It notes there are at least 21 definitions of fairness, and that bias can be introduced during data handling and model selection as well as through training data.
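The point about competing fairness definitions can be made concrete: two widely used criteria, demographic parity and equal opportunity, can disagree on the same predictions. A small sketch with hypothetical labels and predictions:

```python
# A small hypothetical sketch showing that two common fairness
# definitions, demographic parity and equal opportunity, can disagree
# on the same predictions; which one applies depends on context.

def positive_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_a, preds_b):
    """Difference in positive-prediction rates between groups."""
    return positive_rate(preds_a) - positive_rate(preds_b)

def true_positive_rate(preds, labels):
    """P(pred = 1 | label = 1)."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
    """Difference in true positive rates between groups."""
    return (true_positive_rate(preds_a, labels_a)
            - true_positive_rate(preds_b, labels_b))

# Hypothetical labels and model predictions for two groups.
labels_a, preds_a = [1, 1, 0, 0], [1, 1, 0, 0]
labels_b, preds_b = [1, 1, 0, 0], [1, 0, 1, 0]

print(demographic_parity_gap(preds_a, preds_b))                    # 0.0
print(equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b)) # 0.5
```

Both groups receive positive predictions at the same rate (parity holds), yet group B's true positives are caught only half as often, so "which definition of fairness?" is a design decision, not a technicality.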
Artificial intelligence systems in finance have exploded over the last few years. Many institutions are struggling to leverage these new AI systems and machine learning approaches for risk management. This is particularly true for risk models that are subject to regulatory scrutiny, where transparency requirements limit the application of these new approaches. Co-sponsored with PRMIA (Professional Risk Managers' International Association), this session will provide an overview of the current state of applied machine learning and artificial intelligence for risk modeling and how it can be applied to monitoring risk and building new risk models.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
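As a concrete taste of model-agnostic explainability of the kind such tutorials survey, here is a minimal permutation-importance sketch. The "model" and data are hypothetical toys, not a trained system:

```python
# A minimal sketch of permutation importance, a model-agnostic
# explainability technique of the kind such tutorials survey.
# The "model" and data below are hypothetical toys, not a trained system.
import random

def model(x):
    """Toy scorer: depends only on feature 0, ignores feature 1."""
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels, predict):
    return sum(predict(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, predict, feature, rng):
    """Accuracy drop after shuffling one feature's column:
    near zero means the model does not rely on that feature."""
    base = accuracy(data, labels, predict)
    col = [x[feature] for x in data]
    rng.shuffle(col)
    shuffled = [list(x) for x in data]
    for row, v in zip(shuffled, col):
        row[feature] = v
    return base - accuracy(shuffled, labels, predict)

data = [[0, 5], [1, 2], [0, 9], [1, 0], [0, 1], [1, 7]]
labels = [model(x) for x in data]

# The ignored feature shows exactly zero importance; shuffling feature 0
# would typically cause a large accuracy drop instead.
print(permutation_importance(data, labels, model, 1, random.Random(0)))  # 0.0
```

Because the technique only needs predictions, not model internals, it applies equally to an opaque deep network and a linear model, which is why it is a staple of explainability toolkits.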
Extracted from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai as follow-up to the first power hour session with Mikael Eriksson on AI, October 30th in Stockholm.
AI and Machine Learning Demystified by Carol Smith at Midwest UX 2017
What is machine learning? Is UX relevant in the age of artificial intelligence (AI)? How can I take advantage of cognitive computing? Get answers to these questions and learn about the implications for your work in this session. Carol will help you understand at a basic level how these systems are built and what is required to get insights from them. Carol will present examples of how machine learning is already being used and explore the ethical challenges inherent in creating AI. You will walk away with an awareness of the weaknesses of AI and the knowledge of how these systems work.
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. We have seen the outcomes of this digital transformation in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
* List some of the sensitive use cases where AI is being applied
* Why is governing AI important, and what are the principles?
* How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
AI & ML in Cyber Security - Why Algorithms Are Dangerous – Raffael Marty
Every single security company is talking in some way or another about how they are applying machine learning. Companies go out of their way to make sure they mention machine learning and not statistics when they explain how they work. Recently, that's not enough anymore either. As a security company you have to claim artificial intelligence to be even part of the conversation.
Guess what. It's all baloney. We have entered a state in cyber security that is, in fact, dangerous. We are blindly relying on algorithms to do the right thing. We are letting deep learning algorithms detect anomalies in our data without having a clue what that algorithm just did. In academia, they call this the lack of explainability and verifiability. But rather than building systems with actual security knowledge, companies are using algorithms that nobody understands and in turn discover wrong insights.
In this talk I will show the limitations of machine learning, outline the issues of explainability, and show where deep learning should never be applied. I will show examples of how the blind application of algorithms (including deep learning) leads to wrong results. Algorithms are dangerous. We need to revert to experts and invest in systems that learn from, and absorb the knowledge of, experts.
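The argument for interpretable methods can be illustrated with a minimal sketch: a z-score anomaly detector whose every alert carries a human-readable reason, in contrast to an opaque deep-learning detector. The login counts and the 2.5-sigma threshold here are hypothetical:

```python
# A minimal sketch of an interpretable anomaly-detection baseline:
# a z-score detector whose every alert carries a human-readable reason,
# in contrast to an opaque deep-learning detector. The login counts and
# the 2.5-sigma threshold are hypothetical.
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag points more than `threshold` standard deviations from the
    mean, returning (index, value, z-score) so each alert is explainable."""
    mean = statistics.fmean(values)
    std = statistics.stdev(values)
    return [(i, v, (v - mean) / std)
            for i, v in enumerate(values)
            if abs(v - mean) / std > threshold]

# Hypothetical hourly login counts; hour 6 is an obvious burst.
logins = [12, 15, 11, 14, 13, 12, 300, 13, 14, 12]
for i, v, z in zscore_anomalies(logins):
    print(f"hour {i}: {v} logins, {z:.1f} sigma from the mean")
```

An analyst can verify or dispute every flagged point from the returned score alone, which is exactly the explainability and verifiability the talk argues black-box detectors lack.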
Technologies in Support of Big Data Ethics – Mark Underwood
As part of the NIST Big Data Public Working Group, we examine technologies that can support ethics in systems design. In particular, we review issues raised by the IEEE P7000 community regarding ethics for autonomous systems and robotics. Possible adaptations to the NBDPWG reference model are considered for the third and final version of SP1500.
DevOps Support for an Ethical Software Development Life Cycle (SDLC) – Mark Underwood
As part of the IEEE SA P7000 and P2675 working groups, it has been determined that DevOps engineering practices can support (or hinder) the environment for an ethical software development life cycle (SDLC). This deck scratches the surface.
This document discusses several efforts around developing governance and oversight for artificial intelligence:
1. An open letter in 2016 signed by Elon Musk and Stephen Hawking called for research into ensuring AI systems are robust and beneficial.
2. In 2017, researchers from companies like Google, Facebook, and IBM proposed the Asilomar AI Principles, a set of 23 guidelines for developing beneficial AI, including that the goal should be beneficial intelligence and caution around future capabilities.
3. Other discussions focused on privacy, consent, identity, bias, and involving a global community in debates around AI governance to develop oversight that protects humanity.
The document discusses the business case for applied artificial intelligence. It covers how AI can enhance enterprises and how to build successful AI ventures. Specifically, it notes that global AI business value is forecast to reach $3.9 trillion by 2022. It then discusses how to introduce AI in enterprises, including defining objectives and benefits. It also outlines challenges of AI introduction. Additionally, it provides a framework for successful AI startups with factors around value creation, implementation, and competitive positioning. The document concludes by discussing implementation aspects like machine learning canvases and technical debts, as well as ethical considerations around issues like bias, privacy, and accountability.
AI in Manufacturing & the Proposed EU Artificial Intelligence ActBarry O'Sullivan
An overview of Ireland's positioning in artificial intelligence, a summary of the European Commission High-Level Expert Group on Artificial Intelligence "Ethics Guidelines for Trustworthy Artificial Intelligence", and the proposed AI Act. Presented as part of an event organised by the Confirm SFI Centre for Smart Manufacturing.
The field of Artificial Intelligence (AI) has progressed rapidly in the past few years. AI systems are having a growing impact on society and concerns have been raised whether AI system can be trusted. A way to address these concerns is to employ ethically aligned design principles to the development of AI software. Yet these principles are still far away from practical application. This talk provides state-of-the-art empirical insight into what should researchers and professionals do today when the client wants ethics to be added to their system.
Susskind, 'A Manifesto for AI in the Law' ICAIL 2017, London, 2017Richard Susskind
Keynote address at the 16th International Conference on Artificial Intelligence and Law, June, London, including 15-point manifesto for the AI/Law research community.
This document provides an introduction to artificial intelligence and machine learning, including their history, common examples, and applications in finance (FinTech). It discusses key concepts like the difference between artificial intelligence, machine learning, and their various subfields. The document also outlines machine learning techniques and processes, and provides examples of real-world applications. Global initiatives applying these technologies in finance are highlighted.
AI Governance and Ethics - Industry StandardsAnsgar Koene
Presentation on the potential for Ethics based Industry Standards to function as vehicle to address socio-technical challenges from AI.
Presentation given at the the 1st Austrian IFIP forum ono "AI and future society".
Organizations today have lots and lots of data. Typically when it comes to data analysis we have to know what our measures of success are before we design our BI. These are typically manifested by competency, or domain driven KPI's but what if those metrics don't actually measure success at all? In this talk we will be discussing how to leverage azure machine learning to answer questions in your organization about success and how to find the KPI's that really matter and drive results.
AI, Machine Learning & Deep Learning Risk Management & Controls: Beyond Deep Learning and Generative Adversarial Networks: Model Risk Management in AI, Machine Learning & Deep Learning
ICPSR - Complex Systems Models in the Social Sciences - Lecture 6 - Professor...Daniel Katz
This document provides an overview of complex systems models and big data in the social sciences. It discusses how data is becoming more abundant due to decreasing storage costs and increasing computing power. This has led to a data-driven world where large datasets are analyzed using machine learning techniques like classification, clustering, and regression. Examples are given of applications in various domains like retail, healthcare, and law. The document also discusses challenges like high-dimensional data and the need for feature extraction. Overall, it frames the current era as one of big data and data-driven theory building using inductive reasoning and machine learning.
The document discusses various ways that bias can arise in artificial intelligence systems and machine learning models. It provides examples of bias found in facial recognition systems against dark-skinned women, sentiment analysis showing preference for some religions over others, and risk assessment algorithms used in criminal justice showing racial disparities. The document also discusses definitions of fairness and bias in machine learning. It notes there are at least 21 definitions of fairness and bias can be introduced during data handling and model selection in addition to through training data.
Artificial intelligent systems in finance have exploded over the last few years. Many institutions are struggling to leverage these new AI systems and machine learning approaches to risk management. This is particularly true for applications to risk models that are subject to regulatory scrutiny where transparency limits applications of these new approaches. Co-sponsored with PRMIA (Professional Risk Managers’ International Association), this session will provide an overview of the current state of applied machine learning and artificial intelligence for risk modeling and how it can be applied for monitoring risk and building new risk models.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
Extracted from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai as Follow Up for first power hour session with Mikael Eriksson on AI, October 30th in Stockholm
AI and Machine Learning Demystified by Carol Smith at Midwest UX 2017Carol Smith
What is machine learning? Is UX relevant in the age of artificial intelligence (AI)? How can I take advantage of cognitive computing? Get answers to these questions and learn about the implications for your work in this session. Carol will help you understand at a basic level how these systems are built and what is required to get insights from them. Carol will present examples of how machine learning is already being used and explore the ethical challenges inherent in creating AI. You will walk away with an awareness of the weaknesses of AI and the knowledge of how these systems work.
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it in their products and services to to differentiate themselves against their competitors. AI is being utilized in some sensitive areas of human life. In this session let's look at some of principles governing adoption of AI in a responsible manner. Why companies are accelerating adoption of AI?
Increasingly organization are accelerating adoption of AI to differentiate their product and services in the market. Outcomes of this digital transformation that we have seen in the areas of optimizing operations, engaging customers, empowering employees and transforming their products and services.
*List some of the sensitive use cases where AI is being applied
*Why governing AI is important and what are those principles?
*How Microsoft is approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
AI & ML in Cyber Security - Why Algorithms Are DangerousRaffael Marty
Every single security company is talking in some way or another about how they are applying machine learning. Companies go out of their way to make sure they mention machine learning and not statistics when they explain how they work. Recently, that's not enough anymore either. As a security company you have to claim artificial intelligence to be even part of the conversation.
Guess what. It's all baloney. We have entered a state in cyber security that is, in fact, dangerous. We are blindly relying on algorithms to do the right thing. We are letting deep learning algorithms detect anomalies in our data without having a clue what that algorithm just did. In academia, they call this the lack of explainability and verifiability. But rather than building systems with actual security knowledge, companies are using algorithms that nobody understands and in turn discover wrong insights.
In this talk I will show the limitations of machine learning, outline the issues of explainability, and show where deep learning should never be applied. I will give examples of how the blind application of algorithms (including deep learning) actually leads to wrong results. Algorithms are dangerous. We need to return to experts and invest in systems that learn from, and absorb the knowledge of, experts.
Technologies in Support of Big Data Ethics | Mark Underwood
As part of the NIST Big Data Public Working Group, we examine technologies that can support ethics in systems design. In particular, we review issues raised by the IEEE P7000 community regarding ethics for autonomous systems and robotics. Possible adaptations to the NBDPWG reference model are considered for the third and final version of SP1500.
DevOps Support for an Ethical Software Development Life Cycle (SDLC) | Mark Underwood
As part of the IEEE SA P7000 and P2675 working groups, it has been determined that DevOps engineering practices can support (or hinder) the environment for an ethical software development life cycle (SDLC). This deck scratches the surface.
Implications of GDPR for IoT Big Data Security and Privacy Fabric | Mark Underwood
Discussion of ways in which GDPR has influenced, and will continue to influence, the SDLC and deployment of IoT, especially as it impacts the privacy and security fabric.
Open Source Insight: Securing IoT, Atlanta Ransomware Attack, Congress on Cyb... | Black Duck by Synopsys
The Black Duck blog and Open Source Insight become part of the Synopsys Software Integrity blog in early April. You’ll still get the latest open source security and license compliance news, insights, and opinions you’ve come to expect, plus the latest software security trends, news, tips, best practices, and thought leadership every week. Don’t delay, subscribe today! Now on to this week’s open source security and cybersecurity news.
This document provides a summary of a presentation on cybersecurity evolution and awareness. It discusses emerging technology trends like the internet of things, big data, and predictive analytics. It also covers social media risks and security services to reduce risk through a five step approach of identifying, protecting, detecting, responding to, and recovering from cyber attacks. The presentation aims to prepare organizations for future cybersecurity challenges through education and implementing best practices.
NIST Big Data Public WG: Security and Privacy v2 | Mark Underwood
The document discusses security and privacy considerations for big data as outlined by the National Institute of Standards and Technology's (NIST) Big Data Public Working Group. It notes that big data introduces new challenges due to factors like multiple security schemes, streamed and stored data, sensor data, and data sharing across organizations. It also summarizes NIST's volumes on big data definitions, taxonomies, use cases, and reference architectures as they relate to security and privacy.
Open Source Insight: Happy Birthday Open Source and Application Security for ... | Black Duck by Synopsys
Opinions differ on exactly when, but open source turned twenty this year. Most security breaches in 2017 were preventable (you hear that, Equifax?), and it’s time to take a look back to prevent similar breaches in 2018. iPhone source code gets leaked (for a short time). And keeping medical devices, voting machines, automobiles, and critical infrastructure safe in a world of increasing application risk.
Read on for open source security and cybersecurity in Open Source Insight for February 9th, 2018.
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron... | apidays
Building Digital Trust in a Digital Economy
Veronica Tan, Director - Cyber Security Agency of Singapore
Apidays Singapore 2024: Connecting Customers, Business and Technology (April 17 & 18, 2024)
Open Source Insight: AI for Open Source Management, IoT Time Bombs, Ready for... | Black Duck by Synopsys
Some interesting topics in this week’s Open Source Insight, including news that Equifax knew about its security issues more than a year before the fact. We also look at the use of AI for open source management; the ticking time bomb that is IoT security; a preview of the Legal track at Black Duck FLIGHT 2017, and to round out the month, we offer a fun infographic in the spirit of Halloween.
How Healthcare CISOs Can Secure Mobile Devices | Skycure
Original webinar: http://get.skycure.com/mobile-security-in-healthcare-webinar
In this webinar, Jim Routh, CSO at Aetna, and Adi Sharabani, CEO and co-founder at Skycure, discuss:
- The state of mobile security in Healthcare organizations
- How to improve incident response and resilience of mHealth IT operations
- How to leverage risk-based mobility to predict, detect and protect against threats
Open Source Insight: You Can’t Beat Hackers and the Pentagon Moves into Open... | Black Duck by Synopsys
We take a deep dive into security researchers Charlie Miller and Chris Valasek’s keynote at last week’s FLIGHT 2017 conference. What is “Hidden Cobra” and is it targeting US aerospace, telecommunications and finance industries? Both banks and the Pentagon are making big moves into open source. And why it’s smart to assume that every application is an on-premise application.
The best of November’s application security and open security news (so far) follows in this week’s edition of Open Source Insight.
This document discusses tools and frameworks for developing responsible AI solutions. It begins by outlining some of the costs of AI incidents, such as harm to human life, loss of trust, and fines. It then discusses defining responsible AI principles like respecting human rights, enabling human oversight, and transparency. The document provides examples of bias that can occur in AI systems and tools to detect and mitigate bias. It discusses the importance of a human-centric design approach and case studies of bias in systems. Finally, it outlines best practices for developing responsible AI like integrating tools and certifications.
Proposed T-Model to cover 4S quality metrics based on empirical study of root... | IJECEIAES
There are various root causes of software failures. A few years ago, software used to fail mainly due to functionality-related bugs, caused by requirement misunderstandings, code issues, and a lack of functional testing. A lot of work has been done on this, and software engineering has matured over time, so software now rarely fails due to functionality-related bugs. To understand the most recent failures, we had to understand recent software development methodologies and technologies. In this paper we discuss the background of these technologies and the progression of testing over time. A survey of more than 50 senior IT professionals was conducted to understand the root causes of their software project failures. It was found that most software today fails due to a lack of testing of non-functional parameters. Considerable research was also done to identify the most recent and most severe software failures. Our study reveals that the main reason for software failures these days is a lack of testing of non-functional requirements, which mainly comprise security and performance parameters. This has become more challenging due to developments in new technologies such as the Internet of Things (IoT), Cloud of Things (CoT), artificial intelligence, machine learning, and robotics, and the pervasive use of mobile technology. Finally, we propose a software development model, called the T-model, to ensure that both the breadth and depth of software are considered during design and testing.
The CIPR's Artificial Intelligence (AI) panel has published new research revealing the impact of technology, and specifically AI, on public relations practice. It predicts the impact on skills in the profession in the next five years.
EPR Annual Conference 2020 Workshop 1 - Simon Uytterhoeven EPR1
This document discusses using AI to help citizens and job seekers. It presents three cases: 1) Using AI for proactive jobseeker profiling to better predict who will find a job within 6 months. 2) Using deep learning to match jobseeker profiles and skills to open jobs. 3) Providing smart suggestions to citizens based on their needs and interests. It emphasizes that developing AI for social good requires collaboration between researchers, AI teams, data protection officers, citizens, and businesses to ensure the AI is developed with privacy, ethics, transparency, and in a way that benefits users.
Computer Forensics
Discussion 1
"Forensics Certifications" Please respond to the following:
· Determine whether or not you believe certifications in systems forensics are necessary and explain why you believe this to be the case. Compare and contrast certifications and on-the-job training and identify which you believe is more useful for a system forensics professional. Provide a rationale with your response.
· Suppose you are the hiring manager looking to hire a new system forensics specialist. Specify at least five (5) credentials you would expect a qualified candidate to possess. Determine which of these credentials you believe to be the most important and provide a reason for your decision.
Discussion 2
"System Forensics Organizations" Please respond to the following:
· Use the Internet or the Library to research and select one (1) reputable system forensics organization. Provide a brief overview of the organization you chose, including what it provides for its members, and how one can join the organization. Indicate why, in your opinion, this particular organization would be the best choice for a system forensics professional to join and why you believe this way.
· Examine what you believe to be the most important reason for a systems forensic professional to be a member of a forensics organization and how this could further one’s career in the industry.
Cyber Security
Discussion 1
"Leading Through Effective Strategic Management" Please respond to the following:
· Propose three ways to ensure that cooperation occurs across security functions when developing a strategic plan. Select what you believe is the most effective way to promote collaboration and explain why.
· Explain what may happen if working cultures are overlooked when developing a strategy. Recommend one way to prevent working cultures from being overlooked.
Discussion 2
"Installing Security with System and Application Development" Please respond to the following:
· Provide three examples that demonstrate how security can be instilled within the Systems Development Life Cycle (SDLC). Provide two examples on what users may experience with software products if they are released with minimal security planning.
· Suggest three ways that application security can be monitored and evaluated for effectiveness. Choose what you believe to be the most effective way and discuss why.
Computer Security
Discussion 1
"Current Events and Future Trends" Please respond to the following:
· How can we create a national security culture where all are more cognizant of security threats and involved to help prevent potential incidents? How do we balance the need for this security culture with the rights guaranteed to us by our Bill of Rights?
Research Topics (Choose 1 Topic)
Terrorism
· Terrorism remains one of the major concerns in the wake of the 9-11 events. Research into terrorism as it pertains to homeland security is conducted by corporations like the RAND Corporation.
Research Paper Sentence Outline - Research Question: How e-commer.docx | audeleypearl
Research Paper Sentence Outline::
Research Question: How do e-commerce companies address privacy in their policies?
Purpose: The purpose of this assignment is to prepare you for the dissertation process by creating a sentence outline for a research paper.
Description: The topic of your sentence outline is your research paper topic. After completing this week's Learning Activities, develop a sentence outline.
Deliverable: Prepare a Microsoft Word document that includes the following headings and one full sentence in each section:
· Title Page
· Abstract
· Introduction
· Literature review
· Research Method
· Results
· Discussion
· Conclusion
John Fulcher
CYB/110
Playbook / Runbook Part 2 – Social Network Security
John W. Fulcher
University of Phoenix Online
CYB/110
Question 3
The scenario involved the notorious Win32/Virut malware, which wreaked havoc on one machine in the company (Microsoft). The malware was detected and stopped before it spread to any other computer on the network. It operates by modifying software executables: it targets every executable that is opened and writes in code that introduces a backdoor, allowing hackers to access the system from remote servers. The malware is introduced when an infected executable is run on the machine; once installed alongside the innocent-looking software, it copies itself to every other executable as soon as it is opened. This means the malware does not spread if no executable file is run, so any software that has not yet been run is safe.
Upon discovering the corruption, which was found during an online scan using ESET antivirus, every executable was closed down (ESET). This allowed the antivirus to isolate and list the affected executables. Seven executables turned out to be infected, including Office Word and operating system files, and these were immediately quarantined. To deal with the threat, I restored the quarantined files so that the affected software could be cleanly uninstalled. After the uninstallation, the online scan was run again, since it was not vulnerable to infection through executable corruption. This time every identified threat was removed, and an operating system disc was used to repair the corrupted operating system files. Finally, the ESET antivirus was installed so that such threats can be prevented in the future and the extent of any damage reduced. The affected software was then reinstalled, and the system was scanned with the offline antivirus and scheduled to scan automatically every day (Koret and Bachaalany).
Employees must be guided not to share the following information online:
· Usernames
· Office address
· Their medical history and records
· Their work experiences
· The place they have lived in
· Family member’s identity
· Date of births
· ...
Impact of Generative AI in Cybersecurity - How can ISO/IEC 27032 help? | PECB
Generative AI offers great opportunities for innovation in various industries. Hence, by adopting ISO/IEC 27032, you can enhance your cybersecurity resilience and efficiently address the risks associated with generative AI.
Amongst others, the webinar covers:
• AI & Privacy
• Generative AI, Models & Cybersecurity
• AI & ISO/IEC 27032
Presenters:
Christian Grafenauer
Anonymization expert, privacy engineer, data protection officer, LegalTech researcher (GDPR, Blockchain, AI). Christian Grafenauer is an accomplished privacy engineer, anonymization expert, and computer science specialist, currently serving as the project lead for anonymity assessments at techgdpr. With an extensive background as a senior architect in Blockchain for IBM and years of research in the field since 2013, Christian co-founded privacy by Blockchain design to explore the potential of Blockchain technology in revolutionizing privacy and internet infrastructure. As a dedicated advocate for integrating legal and computer science disciplines, Christian’s expertise in anonymization and GDPR compliance enables innovative AI applications, ensuring a seamless fusion of technology and governance, particularly in the realm of smart contracts. In his role at techgdpr, he supports technical compliance, Blockchain, and AI initiatives, along with anonymity assessments. Christian also represents consumer interests as a member of the national Blockchain and DLT standardization committee at DIN (the German standardization institute) in ISO/TC 307.
Akin Johnson
Akin J. Johnson is a renowned Cybersecurity Expert, known for his expertise in protecting digital systems from potential threats. With over a decade of experience in the field, Akin has developed a deep understanding of the ever-evolving cyber landscape.
Akin is an advocate for cybersecurity awareness and frequently shares his knowledge through speaking engagements, workshops, and publications. He firmly believes in the importance of educating individuals and organizations on the best practices for safeguarding their digital assets.
Lucas Falivene
Lucas is a highly experienced cybersecurity professional with a solid base in business, information systems, information security, and cybersecurity policy-making. A former Fulbright scholar with a Master of Science degree in Information Security Policy and Management at Carnegie Mellon University (Highest distinction) and a Master's degree in Information Security at the University of Buenos Aires (Class rank 1st). Lucas has participated in several trainings conducted by the FBI, INTERPOL, OAS, and SEI/CERT as well as in the development of 4 cyber ISO national standards.
Date: July 26, 2023
YouTube Link: https://youtu.be/QPDcROniUcc
Similar to Codes of Ethics and the Ethics of Code
The document provides an overview of the Scaled Agile Framework (SAFe) from the perspective of security and privacy specialists. It discusses how SAFe borrows concepts from lean, agile, and DevOps principles. While SAFe incorporates security as a quality attribute, the document notes it may not provide an in-depth treatment and hybrid models could also be considered.
An overview of Google's Site Reliability Engineering with a view toward possible incorporation in the IEEE P2675 DevOps security standard. (Creative Commons with credit.)
The Quality “Logs”-Jam: Why Alerting for Cybersecurity is Awash with False Po... | Mark Underwood
What happens when the (Observe) Plan-Do-Check-Adjust cycle is undermined by lapses in data integrity? Observations are questioned. Plans may be ill-conceived. Actions may be undertaken that undermine rather than enhance. “Checks” can fail. Adjustments may be guesswork. In cybersecurity, the results of poor data integrity can be expensive outages, ransom requests, breaches, fines -- even bankruptcy (think Cambridge Analytica). But data integrity issues take many forms, ranging from benign to malicious. The full range of these issues is surveyed from a cybersecurity perspective, where logs and alerts are critical for defenders as well as quality engineers. Techniques borrowed from model-based systems engineering and ontology-based AI are identified that can mitigate these deleterious effects on PDCA.
Presents a more expansive view of "stakeholders" in systems design, specifically beyond purely human notions. Produced for use by the IEEE P7000 working group "Model Process for Addressing Ethical Concerns During System Design."
Slowing the Two Cultures continental drift. The humanities are drifting further and further away from the realities of science and technology. Their marginalization should worry us all. I survey the current state of affairs 50 years after C.P. Snow's talk, and suggest how poets should retool.
IoT Day 2016: Cloud Services for IoT Semantic Interoperability | Mark Underwood
Presentation made on IoT Day 2016 about the importance of API-first, cloud services role in implementing ontologies for IoT. The use case is homely: providing proper humidity to my electric violin and guitar instruments while in their cases.
Ontology Summit - Track D Standards Summary & Provocative Use Cases | Mark Underwood
The Ontology Summit is an annual series of events (first started by Ontolog and NIST in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit. The Ontology Summit program is now co-organized by Ontolog, NIST, NCOR, NCBO, IAOA, NCO_NITRD along with the co-sponsorship of other organizations that are supportive of the Summit goals and objectives. This deck summarizes some of the work in Track D, IoT and Ontology Standards Synergies
The presentation discusses design patterns for ontologies in IoT. It proposes using ontologies to influence software engineering practices for IoT, leverage semantics, and foster reuse. Ontology-based design patterns can provide logic, architectural patterns, usability features, and enable simulation/testing. The presentation provides examples of how ontologies can help with issues like sensor provenance, privacy, standards integration, and forensic analysis of IoT data. It argues that ontologies are important to automate reasoning about IoT data and empower domain experts.
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation F... | AlexanderRichford
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation Functions to Prevent Interaction with Malicious QR Codes.
Aim of the Study: The goal of this research was to develop a robust hybrid approach for identifying malicious and insecure URLs derived from QR codes, ensuring safe interactions.
This is achieved through:
Machine Learning Model: Predicts the likelihood of a URL being malicious.
Security Validation Functions: Ensures the derived URL has a valid certificate and proper URL format.
This innovative blend of technology aims to enhance cybersecurity measures and protect users from potential threats hidden within QR codes 🖥 🔒
This study was my first introduction to using ML which has shown me the immense potential of ML in creating more secure digital environments!
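The security validation functions described above could take many forms; as a rough sketch (the function names and specific checks here are my own assumptions, not taken from the study), a URL decoded from a QR code might first be checked structurally and then via a TLS handshake:

```python
import ssl
import socket
from urllib.parse import urlparse

def is_well_formed(url: str) -> bool:
    """Structural check on a URL decoded from a QR code:
    require an explicit https scheme and a non-empty hostname."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.hostname)

def has_valid_certificate(url: str, timeout: float = 5.0) -> bool:
    """Attempt a TLS handshake with default certificate verification;
    any verification failure or connection error returns False."""
    host = urlparse(url).hostname
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

In a hybrid design like the one described, a URL failing either check could be blocked outright, with the machine learning model scoring only the URLs that pass.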
Northern Engraving | Nameplate Manufacturing Process - 2024 | Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
What is an RPA CoE? Session 2 – CoE Roles | DianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
From Natural Language to Structured Solr Queries using LLMs | Sease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
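One way to picture the final translation step (the function and field names below are illustrative assumptions, not the talk's actual implementation): once the LLM has extracted structured fields from the user's natural language, assembling the Solr query itself is straightforward string construction:

```python
def build_solr_query(fields: dict) -> str:
    """Assemble a Solr q string from LLM-extracted fields.
    Range constraints are passed as (low, high) tuples; everything
    else becomes an exact phrase match on the named field."""
    clauses = []
    for name, value in fields.items():
        if isinstance(value, tuple):
            low, high = value
            clauses.append(f"{name}:[{low} TO {high}]")
        else:
            clauses.append(f'{name}:"{value}"')
    return " AND ".join(clauses)

# e.g. the LLM might map "recent papers about solar energy" to:
query = build_solr_query({"title": "solar energy", "year": (2020, 2024)})
```

The hard part, of course, is the LLM's field extraction itself, which is where the index metadata described in the talk comes in.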
"Choosing proper type of scaling", Olena Syrota | Fwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc... | DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
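As an illustration of the idea (the paper's actual operators and architecture are not reproduced here; this toy representation is my own), a mutation operator over a chatbot design might delete a single training phrase, producing a mutant that a strong test-scenario suite should be able to "kill":

```python
import copy

# Toy task-oriented chatbot design: intents mapped to training phrases.
# (The representation is illustrative; the paper's tool targets real
# chatbot platforms built with heterogeneous technologies.)
design = {
    "intents": {
        "book_flight": ["book a flight", "I need a plane ticket"],
        "cancel": ["cancel my booking"],
    },
}

def delete_training_phrase(design: dict, intent: str, index: int) -> dict:
    """Mutation operator: drop one training phrase from an intent.
    A test suite that never exercises the dropped phrasing will not
    detect (kill) this mutant, exposing a gap in the scenarios."""
    mutant = copy.deepcopy(design)
    del mutant["intents"][intent][index]
    return mutant

mutant = delete_training_phrase(design, "book_flight", 0)
```

Running the existing test scenarios against each such mutant and counting how many are killed gives the kind of quantitative strength measure the paper argues is missing.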
Essentials of Automations: Exploring Attributes & Automation Parameters | Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
AI in the Workplace Reskilling, Upskilling, and Future Work.pptx | Sunil Jagani
Discover how AI is transforming the workplace and learn strategies for reskilling and upskilling employees to stay ahead. This comprehensive guide covers the impact of AI on jobs, essential skills for the future, and successful case studies from industry leaders. Embrace AI-driven changes, foster continuous learning, and build a future-ready workforce.
Read More - https://bit.ly/3VKly70
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Dandelion Hashtable: beyond billion requests per second on a commodity server | Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
"What does it really mean for your system to be available, or how to define w...
Codes of Ethics and the Ethics of Code
1. Codes of Ethics & the Ethics of Code in the AI Era
Overview of big data / ML concerns from IEEE P70nn Working Groups @IEEESA http://sites.ieee.org/sagroups-7000/
Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
2. Disclaimers
Represents my views only
Does not represent in any way the views of the following:
Not view of my employer Synchrony
Not view of IEEE or IEEE SA
Not view of the IEEE P7003 WG
Not view of the NIST Big Data Public WG
IEEE P7003 standards work still early stage
3. My Perspective
Chair Ontology / Taxonomy subgroup for P7000
Occasional participant in P7007, P7003, P7002, P7010, P7001
Co-chair, NIST Big Data Security and Privacy Subgroup (SP 1500)
ASQ, APICS practices
History
CAI (70’s)
Data Fusion / Context Activated Memory Device (80’s)
Data Warehouse, metadata, ERP (90’s)
Cybersecurity, analytics (2000 – present)
4. Selected Liaison Groups
NIST (mostly 1:1 contacts, catalog of cited SPs and standards)
IEEE P2675 Security for DevOps
IEEE P1915.1 NFV and SDN Security, 5G (1:1 via AT&T)
IEEE P7000-P7010 (S&P in robotics: algorithms, student data, safety & resilience, etc.)
ISO 20546 20547 Big Data
IEEE Product Safety Engineering Society
IEEE Reliability Engineering
IEEE Society for Social Implications of Technology
HL7 FHIR Security Audit WG
Cloud Native SAFE Computing (Kubernetes-centric)
Academic cryptography experts
5. “Minority Report” (2002)
The “PreCogs” have landed
Proprietary predictive models already deployed in several states for:
Law enforcement
Child welfare
“Pockets of poverty” identification
Educational / teacher assessment
Credit: Philip K. Dick (1956)
6. Ethical issues Already in Play
Sustainability
Environment
Climate Change (*data center power consumption)
Bias concerns in gender, race, free speech
Social media technology responsibility
As propaganda platforms
Excessive use of cell phones by children: ADHD?
Weakened critical thinking, F2F social skills (Sherry Turkle Reclaiming Conversation 2015)
7. IEEE P7000: Marquis Group Charter
“Scope: The standard establishes a process model by which engineers and technologists can address ethical consideration throughout the various stages of system initiation, analysis and design. Expected process requirements include management and engineering view of new IT product development, computer ethics and IT system design, value-sensitive design, and, stakeholder involvement in ethical IT system design. . . . The purpose of this standard is to enable the pragmatic application of this type of Value-Based System Design methodology which demonstrates that conceptual analysis of values and an extensive feasibility analysis can help to refine ethical system requirements in systems and software life cycles.”
8. Related IEEE P70nn Groups
IEEE P7000 Ethical Systems Design
IEEE P7001 Transparency of Autonomous Systems
IEEE P7002 Data Privacy Process
IEEE P7003 Algorithmic Bias Considerations
IEEE P7004 Standard for Child and Student Data Governance
IEEE P7005 Standard for Transparent Employer Data Governance
IEEE P7006 Standard for Personal AI Agent
IEEE P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
IEEE P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
IEEE P7011 SSIE Standard for Trustworthiness of News Media
IEEE P7012 SSIE Machine Readable Personal Privacy Terms
IEEE P7013 Facial Analysis
9. Key References
Focus: artificial intelligence and autonomous systems. Havens asks, “How will machines know what we value if we don’t know ourselves?”
10. Recent Case Study Opportunities
“Faster, Higher, Farther chronicles a corporate scandal that rivals those at Enron and Lehman Brothers—one that will cost Volkswagen more than $22 billion in fines and settlements.” – Publisher
11. Case Study 2
“Equifax said that about 38,000 driver’s licenses and 3,200 passport details had been uploaded to the portal that was hacked. (http://bit.ly/2jF3VTh) Equifax said in September that hackers had stolen personally identifiable information of U.S., British and Canadian consumers. The company confirmed that information on about 146.6 million names, 146.6 million dates of birth, 145.5 million social security numbers, 99 million address records and 209,000 payment card numbers and expiration dates were stolen in the cyber security incident.” –Yahoo Finance
12. Case Study 3
It will be remembered as “a breach,” but the Facebook–Cambridge Analytica incident was about supply chain big data.
Adjectives to remember: “Tiny” + “Big”
13. Case Study 4
Finding: Hispanic-owned and managed Airbnb properties, controlled for other aspects, receive less revenue than other groups.
Response from Airbnb when contacted by reporters: We already provide tools to help price listings.
Source: American Public Media Marketplace 8-May-2018
Related story: Dan Gorenstein, “Airbnb cracks down on bias – but at what cost?” Marketplace, 2018-09-08.
14. Case Study 5
A “charity” was used to subsidize payments to Medicare patients in order to boost drug sales. Multiple manufacturers were involved.
15. Case Study 6
The US Fair Credit Reporting Act, enforced by the FTC, requires that customers receive an explanation when credit will not be extended by a lender.
Fact: Many lenders are using ML and algorithms to make such decisions in real time.
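The mechanics can be sketched with a toy scorecard: score the applicant with a linear model and, when the score falls below a cutoff, report the features that pulled it down as the “principal reasons” an adverse-action notice requires. Everything here — the feature names, weights, and the 0.5 cutoff — is invented for illustration, not any lender’s actual model.

```python
# Hypothetical sketch: deriving FCRA-style adverse-action reason codes
# from a linear credit model. Features and weights are made up.
import math

# Toy scorecard: weight > 0 means the feature raises the approval score.
WEIGHTS = {
    "payment_history": 2.0,      # fraction of on-time payments (0..1)
    "utilization": -1.5,         # revolving credit utilization (0..1)
    "recent_inquiries": -0.4,    # hard pulls in the last 6 months
    "account_age_years": 0.1,
}
BIAS = -0.5

def score(applicant):
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))          # probability of approval

def reason_codes(applicant, top_n=2):
    """Rank features by how much each pulled the score down."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negative = sorted(contribs.items(), key=lambda kv: kv[1])
    return [f for f, c in negative[:top_n] if c < 0]

applicant = {"payment_history": 0.6, "utilization": 0.9,
             "recent_inquiries": 5, "account_age_years": 2}
if score(applicant) < 0.5:
    print("Denied; principal reasons:", reason_codes(applicant))
```

A real ML model has no such readable weights, which is exactly the explainability tension this slide raises.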
16. Case Study 7
“. . . Artificial intelligence. Mr. Zuckerberg’s vision, which the committee members seemed to accept, was that soon enough, Facebook’s A.I. programs would be able to detect fake news, distinguishing it from more reliable information on the platform. With midterms approaching, along with the worrisome prospect that fake news could once again influence our elections, we wish we could say we share Mr. Zuckerberg’s optimism. But in the near term we don’t find his vision plausible. Decades from now, it may be possible to automate the detection of fake news. But doing so would require a number of major advances in A.I., taking us far beyond what has so far been invented.”
https://www.nytimes.com/2018/10/20/opinion/sunday/ai-fake-news-disinformation-campaigns.html
17. Case Study 8
“The [Google DeepMind et al. team] research acknowledges that current "deep learning" approaches to AI have failed to achieve the ability to even approach human cognitive skills. Without dumping all that's been achieved with things such as "convolutional neural networks," or CNNs, the shining success of machine learning, they propose ways to impart broader reasoning skills.”
18. Case Study 9
“. . . By 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”
19. Case Study 10
Solving Poverty through Data Science
It’s Magic!
https://www.marketplace.org/shows/marketplace-morning-report 2018-07-30
20. Related IEEE Associations
Related worries and worriers
21. IEEE Society on Social Implications of Technology
22. IEEE Product Safety Engineering Society
23. IEEE Reliability Society
See the free reliability analytics toolkit (some items are useful for Big Data DevOps): https://kbros.co/2rugRij
24. Who is IEEE SA?
Why care what it does?
• Affordable, volunteer-driven, int’l
• IEEE SA members voting rights
• Collaboration with ISO, NIST
• Key standards include Ethernet
25. But this is an ASQ Symposium!
IEEE limitations:
IEEE Active communities are small.
Standards documents are not free, though participation for IEEE members is.
Heavily weighted toward late career participants.
Despite “Engineering” in title, often not “engineering.”
26. But IEEE has . . .
IEEE Digital Library (with cross reference to ACM digital library)
Multinational reach and engagement
Reasonable internal advocacy and oversight
Diversity
Sometimes good awareness of NIST work
Often best work in lesser-known conference publications (e.g., vs. IEEE Security)
27. State of Computing Profession Ethics
@ACM_Ethics
ACM Code of Ethics (Draft 3, 2018)
https://www.acm.org/about-acm/code-of-ethics
28. Highlights of ACM Ethics v3
“minimize negative consequences of computing, including threats to health, safety, personal security, and privacy.”
When the interests of multiple groups conflict, the needs of the least advantaged should be given increased attention and priority.
Computing professionals should promote environmental sustainability both locally and globally (Conference theme!).
“. . . the consequences of emergent systems and data aggregation should be carefully analyzed. Those involved with pervasive or infrastructure systems should also consider Principle 3.7 (Standard of care when a system is integrated into the infrastructure of society).”
29. Highlights: Joint ACM/IEEE Software Engineering Code of Ethics
https://www.computer.org/web/education/code-of-ethics
Software engineers shall act consistently with the public interest.
Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.
Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods and tools.
Consider issues of physical disabilities, allocation of resources, economic disadvantage and other factors that can diminish access to the benefits of software.
Identify, document, and report significant issues of social concern, of which they are aware, in software or related documents, to the employer or the client.
Strive for high quality, acceptable cost and a reasonable schedule, ensuring significant tradeoffs are clear to and accepted by the employer and the client, and are available for consideration by the user and the public.
Identify, define and address ethical, economic, cultural, legal and environmental issues related to work projects.
30. Hidden: Human Computer Interaction
NBDPWG System Communicator
Usability for web and mobile content
Substitutes for old school manuals
“Privacy text” for disclosures, policy, practices
Central to much of the click-based economy
“User” feedback, recommendations
Recommendation engines
31. Professional Pride, Public Disillusionment
Broader acceptance within IT & Evidence-based Practices
Growth of data science inside many professions (R, Python)
Extraordinary explosion of OSS tooling
Big Data, ML, Real Time
Watson, AlphaGo, Alexa “AI” (Gee Whiz factor)
Public Perspective
“2017 was the year we fell out of love with algorithms.”
Cambridge Analytica, Equifax
32. Natural Language Tooling
Hyperlinks to artifacts
Chatbots
Live agent
Speech to text support
Text mining
Enterprise search (workflow-enabled artifacts)
Some of the indexed artifacts may approach big data status
SaaS Text Analytics
33. Dependency Management
Big Data configuration management
Across organizations
Needed for critical infrastructure
See NIST critical sector efforts
Dependencies may not be human-intelligible
“‘Once the rockets are up, who cares where they come down? That’s not my department,’ says Wernher von Braun.” – Tom Lehrer
34. Traceability & Requirements Engineering
What is an ethical requirement?
Possible: big data ethical fabric (transparency, usage)
Can you audit a requirement? What is a quality requirement?
What is requirement traceability?
35. Special Populations
Disadvantaged
By regulation (e.g., 8A, SBIR, disability)
By “common sense” (“fairness” and “equity”)
By economic / sector (“underserved”)
Internet Bandwidth inequity
Children
“Criminals” / Malware Designers
36. Algorithms
“Why am I locked out while she is permitted?”
“Why isn’t my FICO score changing?”
“How can I know when I have explained our algorithm?”
“Is there an ‘explain-ability’ metric?”
What is different about machine-to-machine algorithms?
“Can an algorithm be abusive?”
“Is ‘bias’ the new breach?” https://kbros.co/2I2sxDO
37. “Bias is the New Breach”
“Researchers from MIT and Stanford University tested three commercially released facial-analysis programs from major technology companies and will present findings that the software contains clear skin-type and gender biases. Facial recognition programs are good at recognizing white males but fail embarrassingly with females, especially the darker the skin tone. The news broke last week but will be presented in full at the upcoming Conference on Fairness, Accountability, and Transparency.”
https://www.cio.com/article/3256272/artificial-intelligence/in-the-ai-revolution-bias-is-the-new-breach-how-cios-must-manage-risk.html
38. Algorithmic Bias Risk Management
1. Recognize, socialize groups protected by statute (e.g., Equal Credit Opportunity Act)
2. Creatively consider other affected subpopulations
Sight impaired – other disabilities
Children, elderly
Unusual household settings (elder care, multi-family housing)
Part-time workers
Novice vs. experienced users
What counterfactuals are simply not being measured?
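One concrete check behind step 1 is comparing selection rates across the identified subpopulations, for example with the four-fifths-rule heuristic regulators use as a screening test. A minimal sketch, with made-up group labels and decisions:

```python
# Hypothetical sketch: screening model outcomes for disparate impact
# using the "four-fifths rule" heuristic (selection-rate ratio >= 0.8).
# Group labels and decisions below are illustration data only.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(records, reference_group):
    """Ratio of each group's selection rate to the reference group's."""
    rates = selection_rates(records)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

data = [("A", True)] * 8 + [("A", False)] * 2 \
     + [("B", True)] * 5 + [("B", False)] * 5
ratios = disparate_impact(data, reference_group="A")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, "flagged:", flagged)
```

A flagged ratio is a prompt for investigation, not a verdict: the counterfactual question in the last bullet still has to be asked about anything the data never measured.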
39. Linkage to Privacy, Surveillance, Distrust
Ask your quality engineer to respond to this question:
“Algorithms are bad because they . . . “
Use data without our knowledge
Are based on incorrect or misleading knowledge about us
Are not accountable to individual citizens
Are used by governments to spy on citizens
Support drone warfare
Are built by specialists who do what they are told without asking questions
Represent a trend to automate jobs out of existence
Are built by big companies with no public accountability
40. “When we fell out of love with algorithms.”
41. Audience, Alerts, Audits: Monitoring
Who is the audience for a product or service? (Out of regular coffee in our meeting room)
Who should be alerted, and for what, and how often?
Even if they have opted out?
What should be audited?
What thresholds are appropriate for cost, timetable, risk?
42. Decisions vs. Decision Support: Application Areas
Human-Computer Interactions in Decision-making
43. Undermining Specialists*
“The threat that electronic health records and machine learning pose for physicians’ clinical judgment – and their well-being.” – NYT 2018-05-16
“‘Food poisoning’ was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.”
*Or adversely changing specialist behavior.
44. “Rote Decision-Making”
“The authors, both emergency room physicians at Brigham and Women’s Hospital in Boston, do a fine job of sorting through most of the serious problems in American medicine today, including the costs, over-testing, overprescribing, overlitigation and general depersonalization. All are caused at least in part, they argue, by the increasing use of algorithms in medical care.” –NYT 2018-04-01
45. Facial Recognition for Law Enforcement
“Amazon touts its Rekognition facial recognition system as ‘simple and easy to use,’ encouraging customers to ‘detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.’ And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough. The ACLU study also illustrated the racial bias that plagues facial recognition today. ‘Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress,’ wrote ACLU attorney Jacob Snow. ‘People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.’” –Wired 2018-07-26
46. “Family” Impacts
“Charges of faulty forecasts have accompanied the emergence of predictive analytics into public policy. And when it comes to criminal justice, where analytics are now entrenched as a tool for judges and parole boards, even larger complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed and closely guarded by private firms. That’s a chief objection lodged against two Florida companies: Eckerd Connects, a nonprofit, and its for-profit partner, MindShare Technology.” – NYT “Can an algorithm tell when kids are in danger?” 2018-01-02
47. Lawsuit over Teacher Evaluation Algorithm
“Value-added measures for teacher evaluation, called the Education Value-Added Assessment System, or EVAAS, in Houston, is a statistical method that uses a student’s performance on prior standardized tests to predict academic growth in the current year. This methodology—derided as deeply flawed, unfair and incomprehensible—was used to make decisions about teacher evaluation, bonuses and termination. It uses a secret computer program based on an inexplicable algorithm (above).
In May 2014, seven Houston teachers and the Houston Federation of Teachers brought an unprecedented federal lawsuit to end the policy, saying it reduced education to a test score, didn’t help improve teaching or learning, and ruined teachers’ careers when they were incorrectly terminated. Neither HISD nor its contractor allowed teachers access to the data or computer algorithms so that they could test or challenge the legitimacy of the scores, creating a ‘black box.’” http://kbros.co/2EvxjU9
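The value-added idea described here — predict this year’s score from prior scores, then credit or blame the teacher for the residual — can be illustrated in a few lines. This is a toy stand-in using a single-predictor least-squares fit; EVAAS itself is proprietary and far more elaborate, which is precisely the plaintiffs’ complaint.

```python
# Illustrative sketch of a value-added-style calculation: regress current
# test scores on prior scores, then average each teacher's residuals.
# Names and scores below are invented illustration data.

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def value_added(students):
    """students: list of (teacher, prior_score, current_score)."""
    xs = [p for _, p, _ in students]
    ys = [c for _, _, c in students]
    slope, intercept = fit_line(xs, ys)
    by_teacher = {}
    for teacher, prior, current in students:
        resid = current - (slope * prior + intercept)   # above/below prediction
        by_teacher.setdefault(teacher, []).append(resid)
    return {t: sum(r) / len(r) for t, r in by_teacher.items()}

students = [("Ms. A", 60, 72), ("Ms. A", 70, 80),
            ("Mr. B", 60, 64), ("Mr. B", 70, 72)]
print(value_added(students))
```

Even this transparent toy shows the fragility: a teacher’s score is whatever the residuals of a model happen to be, so a teacher with no access to the model or data cannot check it.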
48. Wells Fargo Credit Denial “Glitch”
CNN: “Hundreds of people had their homes foreclosed on after software used by Wells Fargo incorrectly denied them mortgage modifications.” 2018-08-05
https://money.cnn.com/2018/08/04/news/companies/wells-fargo-mortgage-modification/index.html
49. . . . All this is not easy to “fix”
Risk mitigation for data science implementations is relatively immature.
50. Unintended Use Cases or Ethical Lapse?
• Algorithm corrected for color bias, but can now be used for profiling
• “Red Teaming” or “Abuse User Stories” can help
• Unintended use cases call for a safety vs. a pure “assurance” framework
51. “Lite” AI Security/Reliability Frameworks
https://motherboard.vice.com/en_us/article/bjbxbz/researchers-tricked-ai-into-doing-free-computations-it-wasnt-trained-to-do
“Google researchers demonstrated that a neural network could be tricked into performing free computations for an attacker. They worry that this could one day be used to turn our smartphones into botnets by exposing them to images.”
52. XAI: Explain, Interpret, Narrate, Translate
The elusive holy grail of Transparency
53. Challenges of Interpretability
“Adversarial ML literature suggests that ML models are very easy to fool and even linear models work in counter-intuitive ways.” (Selvaraju et al, 2016)
• Reproducibility
• Training sets including results of other analytics (e.g., FICO)
• Provenance (think IoT)
• Opaque statistical issues
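One inexpensive probe for such opaque models is permutation importance: shuffle one feature at a time and measure how much accuracy drops. A self-contained sketch, where the “black box” is a made-up rule standing in for any real model:

```python
# Lightweight interpretability probe: permutation importance.
# The model below is a stand-in black box that only reads feature 0,
# so feature 1's importance should come out as zero.
import random

def model(row):                      # pretend black box
    return row[0] > 0.45

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature's column across rows."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, shuffled):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(perturbed, labels)

rows = [[x / 10, x] for x in range(10)]
labels = [model(r) for r in rows]
drop0 = permutation_importance(rows, labels, 0)
drop1 = permutation_importance(rows, labels, 1)
print("importance drops:", drop0, drop1)   # drop1 is 0: feature 1 is ignored
```

This only measures sensitivity, not causality, and the adversarial-ML point above still applies: a model can look well-behaved under such probes and still fail on crafted inputs.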
54. Transparency
What does it mean to be “transparent” about ethics?
What connection to IEEE /ACM / ASQ professional ethics?
ASQ: “Be truthful and transparent in all professional interactions and activities.” https://asq.org/about-asq/code-of-ethics
ACM: “The entire computing profession benefits when the ethical decision making process is accountable to and transparent to all stakeholders. Open discussions about ethical issues promotes this accountability and transparency.”
ACM: “A computing professional should be transparent and provide full disclosure of all pertinent system limitations and potential problems. Making deliberately false or misleading claims, fabricating or falsifying data, and other dishonest conduct are violations of the Code.”
ACM: “Computing professionals should establish transparent policies and procedures that allow individuals to give informed consent to automatic data collection, review their personal data, correct inaccuracies, and, where appropriate, remove data.”
ACM: “Organizational procedures and attitudes oriented toward quality, transparency, and the welfare of society reduce harm to the public and raise awareness of the influence of technology in our lives. Therefore, leaders should encourage full participation of all computing professionals in meeting social responsibilities and discourage tendencies to do otherwise.”
55. Transparency & Professional Ethics
What connection to IEEE /ACM /ASQ professional ethics?
ASQ: “. . . Fairness . . . Hold paramount the safety, health, and welfare of individuals, the public, and the environment.”
56. Transparency General Challenges
Some data, algorithms are intellectual property
Some training data includes PII
Predictive analytical models are often “point in time”
“Transparent” according to whose definition?
Should algorithms have “opt-in?” Can they?
Training set big data variety reidentification risks
What quality spectra exist for transparency? A quality BoK for transparency?
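The “reidentification risks” bullet above can be screened with a quick k-anonymity count over assumed quasi-identifier columns. The column names here are hypothetical; choosing which columns count as quasi-identifiers is itself a judgment call.

```python
# Hedged sketch: k-anonymity check on a training-set sample.
# k is the size of the smallest group of rows sharing the same
# quasi-identifier values; k == 1 means at least one row is unique.
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest equivalence-class size over the chosen columns."""
    counts = Counter(tuple(r[c] for c in quasi_identifiers) for r in rows)
    return min(counts.values())

rows = [
    {"zip": "12345", "age_band": "30-39", "score": 0.7},
    {"zip": "12345", "age_band": "30-39", "score": 0.4},
    {"zip": "67890", "age_band": "40-49", "score": 0.9},
]
print(k_anonymity(rows, ["zip", "age_band"]))  # → 1 (the 67890 row is unique)
```

The big-data-variety point in the slide is why this check alone is insufficient: joins against other datasets can reidentify rows that look anonymous within one table.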
57. Explainability / Interpretability
“[We need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.”
58. “Fairness Flow”: But will you share your ethics guidance?
https://www.cnet.com/news/facebook-starts-building-ai-with-an-ethical-compass/
“Bin Yu, a professor at UC Berkeley, says the tools from Facebook and Microsoft seem like a step in the right direction, but may not be enough. She suggests that big companies should have outside experts audit their algorithms in order to prove they are not biased. ‘Someone else has to investigate Facebook's algorithms—they can't be a secret to everyone,’ Yu says.”
-Technology Review 2018-05-25
59. Decision Support for Bias Detection
“Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” says Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard.
Technology Review, Will Knight 2018-05-25
60. Insights from More Mature Settings
AI Analytics for distributed military coalitions
“. . . Research has recently started to address such concerns and prominent directions include explainable AI [4], quantification of input influence in machine learning algorithms [5], ethics embedding in decision support systems [6], “interruptability” for machine learning systems [7], and data transparency [8].”
“. . . devices that manage themselves and generate their own management policies, discussing the similarities between such systems and Skynet.”
S. Calo, D. Verma, E. Bertino, J. Ingham, and G. Cirincione, "How to prevent skynet from forming (a perspective from Policy-Based autonomic device management)," in 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Jul. 2018, pp. 1369-1376. [Online]. Available: http://dx.doi.org/10.1109/ICDCS.2018.00137
61. Enterprise Level Risk
Impact on reputation
Litigation
Unintentionally reveal sources, methods, data / interrupted data streams (e.g., web)
Loss of consumer confidence, impact on public safety
Misapplication of internally developed models
Financial losses from data science #fail
“. . . as long as our training is in the form of someone lecturing about the basics of gender or racial bias in society, that training is not likely to be effective.” – Dr. Hanie Sedghi, Research Scientist, Google Brain
62. Corporate Initiatives
Environmental Social Governance
What does quality mean in enterprise sustainability?
What if there is only lip service to sustainability or quality?
Transparency within employee groups, departments, subsidiaries (See P7005)
Computing decisions that affect carbon footprint (green data centers, etc.)
63. ISO 26000
“ISO 26000 is the international standard developed to help organizations effectively assess and address those social responsibilities that are relevant and significant to their mission and vision; operations and processes; customers, employees, communities, and other stakeholders; and environmental impact.”
64. Related Work
NIST 800-53 Rev 5 and others, NIST Cloud Security
Building automation: ISO 29481, 16739, 12006
https://www.buildingsmart.org/about/what-is-openbim/ifc-introduction
Automotive software updates: Uptane
Ethics and Societal Considerations ISO 26000, IEEE P700x
DevOps Security IEEE P2675
Microsegmentation and NFV IEEE P1915.1
Safety orientation
Infrastructure as code
E.g., security tooling is code, playbooks are code
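One way to read "playbooks are code" is that security policy becomes data that can be unit-tested and versioned like any other code. A minimal, hypothetical sketch in Python (the rule set, field names, and risky-port list are invented for illustration, not from any specific tool):

```python
# Security tooling expressed as code: rules are data, checks are functions,
# and both live in version control and run in CI like any other code.
RULES = [
    {"port": 22, "source": "10.0.0.0/8", "action": "allow"},
    {"port": 23, "source": "0.0.0.0/0", "action": "allow"},  # telnet, world-open
]

def violations(rules):
    """Flag allow-rules that expose risky ports to the whole internet."""
    risky_ports = {23, 3389}  # illustrative: telnet, RDP
    return [r for r in rules
            if r["action"] == "allow"
            and r["source"] == "0.0.0.0/0"
            and r["port"] in risky_ports]
```

Because the check is ordinary code, it can gate a deployment pipeline the same way a failing unit test would.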
65. Selected Software Engineering References
Bo Brinkman, Catherine Flick, Don Gotterbarn, Keith Miller, Kate Vazansky, and Marty J. Wolf. 2017.
Listening to professional voices: draft 2 of the ACM code of ethics and professional
conduct. Commun. ACM 60, 5 (April 2017), 105-111. DOI: https://doi.org/10.1145/3072528
66. Stepping on the quality scales
Beyond ISO 9001
67. Quality Engineer as Camera Lens
“So how big is the difference between a lens that costs a few hundred dollars, and one costing over
a thousand dollars more? What kinds of gains does your money buy? Are the quality improvements
substantial enough to be noticed by the untrained eye?” (Richard Baguley, Wired 2014-06-13)
https://www.wired.com/2014/06/hi-lo-dslr-lenses/
If a quality engineer more fully pursues her goals, would an
enterprise’s moral compass be more finely tuned?
69. Current Challenges
Stop-to-test paradigm often fails
Streaming data quality models are ahead of current quality teaching / practice
AI-for-quality
AI measurement
AI test generation
AI data / sensor simulation, scalability
Quality of XAI by Audience / Enterprise
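The stop-to-test critique above implies checking quality continuously on the stream rather than in halted batch audits. A minimal sketch of that idea (the class name, window size, and thresholds are illustrative assumptions, not an existing tool):

```python
from collections import deque

class StreamingQualityMonitor:
    """Checks each record as it arrives instead of a stop-to-test batch audit."""

    def __init__(self, lo, hi, window=100, max_bad_ratio=0.05):
        self.lo, self.hi = lo, hi
        self.recent = deque(maxlen=window)  # rolling pass/fail history
        self.max_bad_ratio = max_bad_ratio

    def check(self, value):
        # A record fails if it is missing or outside the expected range
        ok = value is not None and self.lo <= value <= self.hi
        self.recent.append(ok)
        return ok

    def alert(self):
        # Fire when the rolling failure rate exceeds the threshold
        if not self.recent:
            return False
        return self.recent.count(False) / len(self.recent) > self.max_bad_ratio
```

The monitor never stops the stream; it raises an alert when the recent failure rate drifts past a tolerance, which is closer to how streaming data quality has to be managed in practice.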
70. Agile development & quality engineering
“[Studies] indicate that there is a significant
correlation between the inclusion of ethical
tools in the process of planning in Agile
methodologies and the achievement of
improved performance in three quality
parameters: schedule, product functionality
and cost.”
71. Selected Quality References
H. Abdulhalim, Y. Lurie, and S. Mark, "Ethics as a quality driver in agile software projects," Journal of
Service Science and Management, vol. 11, no. 1, pp. 13-25, 2018. [Online]. Available:
http://dx.doi.org/10.4236/jssm.2018.111002
72. Use Cases
Network Protection
Systems Health & Management (AWS metrics, billing, performance)
Education
Cargo Shipping
Aviation (safety)
UAV, UGV regulation
Regulated Government Privacy (FERPA, HIPAA, COPPA, GDPR, PCI etc.)
Healthcare Consent Models
HL7 FHIR Security and Privacy
73. A Final Rationale
“What, me quality
engineer worry?”
74. About Me
• Co-Chair NIST Big Data Public WG Security & Privacy subgroup https://bigdatawg.nist.gov/
• Chair Ontology / Taxonomy subgroup for IEEE P7000. Occasional participant in IEEE Standards
WGs P7007, P7003, P7002, P7004, P7010
• IEEE Standard P1915.1 Standard for Software Defined Networking and Network Function
Virtualization Security (member)
• IEEE Standard P2675 WG Security for DevOps (member)
• Current: Finance, large enterprise: supply chain risk, complex playbooks, many InfoSec tools,
workflow automation, big data logging; risks include fraud and regulatory #fail
• Authored chapter “Big Data Complex Event Processing for Internet of Things Provenance:
Benefits for Audit, Forensics, and Safety” in Cyber-Assurance for IoT (Wiley, 2017)
https://kbros.co/2GNVHBv
• @knowlengr dark@computer.org knowlengr.com https://linkedin.com/in/knowlengr
76. ACM Computing Classification
Security & Privacy Topics
Database and storage security
Data anonymization and sanitization
Management and querying of encrypted data
Information accountability and usage control
Database activity monitoring
Software and application security
Software security engineering
Web application security
Social network security and privacy
Domain-specific security and privacy architectures
Software reverse engineering
Human and societal aspects of security and privacy
Economics of security and privacy
Social aspects of security and privacy
Privacy protections
Usability in security and privacy
87. Cloud Native Computing Foundation
Safe Access For Everyone (SAFE)
https://github.com/cn-security/safe
88. This deck is released under
Creative Commons
Attribution-Share Alike.