Codes of Ethics & the
Ethics of Code
in the AI Era
Overview of big data / ML concerns from IEEE P70nn Working Groups @IEEESA http://sites.ieee.org/sagroups-7000/
Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
Disclaimers
 Represents my views only
 Does not represent the views of any of the following:
 My employer, Synchrony
 IEEE or the IEEE Standards Association (IEEE SA)
 The IEEE P7003 Working Group
 The NIST Big Data Public Working Group
 IEEE P7003 standards work still early stage
My Perspective
 Chair Ontology / Taxonomy subgroup for P7000
 Occasional participant in P7007, P7003, P7002, P7010, P7001
 Co-chair, NIST Big Data Security and Privacy Subgroup (SP 1500)
 ASQ, APICS practices
 History
 CAI (70’s)
 Data Fusion / Context Activated Memory Device (80’s)
 Data Warehouse, metadata, ERP (90’s)
 Cybersecurity, analytics (2000 – present)
Selected Liaison Groups
 NIST (mostly 1:1 contacts, catalog of cited SPs and standards)
 IEEE P2675 Security for DevOps
 IEEE P1915.1 NFV and SDN Security, 5G (1:1 via AT&T)
 IEEE P7000-P7010 (S&P in robotics: algorithms, student data, safety & resilience, etc.)
 ISO 20546, 20547 Big Data
 IEEE Product Safety Engineering Society
 IEEE Reliability Engineering
 IEEE Society for Social Implications of Technology
 HL7 FHIR Security Audit WG
 Cloud Native SAFE Computing (Kubernetes-centric)
 Academic cryptography experts
“Minority Report” (2002)
 The “PreCogs” have landed
 Proprietary predictive models already deployed in several states for:
 Law enforcement
 Child welfare
 “Pockets of poverty” identification
 Educational / teacher assessment
 Credit: Philip K. Dick (1956)
Ethical Issues Already in Play
 Sustainability
 Environment
 Climate Change (*data center power consumption)
 Bias concerns in gender, race, free speech
 Social media technology responsibility
 As propaganda platforms
 Excessive use of cell phones by children: ADHD?
 Weakened critical thinking, F2F social skills (Sherry Turkle, Reclaiming Conversation, 2015)
IEEE P7000: Marquee Group Charter
“Scope: The standard establishes a process model by which engineers and technologists can address
ethical consideration throughout the various stages of system initiation, analysis and design.
Expected process requirements include management and engineering view of new IT product
development, computer ethics and IT system design, value-sensitive design, and, stakeholder
involvement in ethical IT system design. . .. The purpose of this standard is to enable the pragmatic
application of this type of Value-Based System Design methodology which demonstrates that
conceptual analysis of values and an extensive feasibility analysis can help to refine ethical system
requirements in systems and software life cycles.”
Related IEEE P70nn Groups
 IEEE P7000 Ethical Systems Design
 IEEE P7001 Transparency of Autonomous Systems
 IEEE P7002 Data Privacy Process
 IEEE P7003 Algorithmic Bias Considerations
 IEEE P7004 Standard for Child and Student Data Governance
 IEEE P7005 Standard for Transparent Employer Data Governance
 IEEE P7006 Standard for Personal AI Agent
 IEEE P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
 IEEE P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
 IEEE P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
 IEEE P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
 IEEE P7011 SSIT Standard for Trustworthiness of News Media
 IEEE P7012 SSIT Machine Readable Personal Privacy Terms
 IEEE P7013 Facial Analysis
Key References
Focus: artificial intelligence
and autonomous systems.
Havens asks, “How will
machines know what we
value if we don’t know
ourselves?”
Recent Case Study Opportunities
“Faster, Higher, Farther chronicles a corporate
scandal that rivals those at Enron and Lehman
Brothers—one that will cost Volkswagen more
than $22 billion in fines and settlements.” –
Publisher
Case Study 2
“Equifax said that about 38,000 driver's
licenses and 3,200 passport details had
been uploaded to the portal that was
hacked. (http://bit.ly/2jF3VTh) Equifax said
in September that hackers had stolen
personally identifiable information of U.S.,
British and Canadian consumers. The
company confirmed that about 146.6
million names, 146.6 million dates of birth,
145.5 million social security numbers, 99
million addresses and 209,000 payment
card numbers and expiration dates were
stolen in the cyber security incident.”
–Yahoo Finance
Case Study 3
It will be remembered as “a breach,” but the Facebook –
Cambridge Analytica incident was about supply chain big data.
Adjectives to
remember:
“Tiny” + “Big”
Case Study 4
Finding: Hispanic-owned and managed Airbnb properties, controlling for other
factors, receive less revenue than those of other groups.
Response from Airbnb when contacted by reporters: We already provide tools
to help price listings.
Source: American Public Media Marketplace 8-May-2018
Related story: Dan Gorenstein, “Airbnb cracks down on bias – but at what cost?” Marketplace, 2018-09-08.
Case Study 5
A “charity” was used to subsidize
payments to Medicare patients in order
to boost drug sales. Multiple
manufacturers were involved.
Case Study 6
The US Fair Credit Reporting Act (enforced by the FTC) requires that consumers receive an
explanation when credit will not be extended by a lender.
Fact: Many lenders are using ML and algorithms to make such decisions in real time.
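One common way to reconcile real-time ML decisions with the explanation requirement is to derive "adverse action reasons" from the model's feature attributions. A minimal sketch, assuming a simple linear scoring model; the feature names, weights, and the contribution-ranking heuristic below are hypothetical illustrations, not any lender's actual method:

```python
# Sketch: deriving adverse-action "reason codes" from a linear credit model.
# All names, weights, and values are hypothetical.

def adverse_action_reasons(weights, applicant, baseline, top_n=2):
    """Rank features by how much they pushed the score below a baseline applicant."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    # The most negative contributions are the strongest reasons for denial.
    negative = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda kv: kv[1],
    )
    return [name for name, _ in negative[:top_n]]

weights = {"utilization": -2.0, "late_payments": -1.5, "history_years": 0.8}
applicant = {"utilization": 0.9, "late_payments": 3, "history_years": 2}
baseline = {"utilization": 0.3, "late_payments": 0, "history_years": 10}

print(adverse_action_reasons(weights, applicant, baseline))
# → ['history_years', 'late_payments']
```

For non-linear models the same idea is usually approximated with attribution methods rather than computed exactly, which is where the explainability questions later in this deck begin.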
Case Study 7
“. . . Artificial intelligence. Mr. Zuckerberg’s vision, which
the committee members seemed to accept, was that
soon enough, Facebook’s A.I. programs would be able
to detect fake news, distinguishing it from more reliable
information on the platform. With midterms
approaching, along with the worrisome prospect that
fake news could once again influence our elections, we
wish we could say we share Mr. Zuckerberg’s optimism.
But in the near term we don’t find his vision plausible.
Decades from now, it may be possible to automate the
detection of fake news. But doing so would require a
number of major advances in A.I., taking us far beyond
what has so far been invented.”
https://www.nytimes.com/2018/10/20/opinion/sunday/ai-fake-news-disinformation-campaigns.html
Case Study 8
“The [Google DeepMind et al. team] research
acknowledges that current "deep learning" approaches to
AI have failed to achieve the ability to even approach
human cognitive skills. Without dumping all that's been
achieved with things such as "convolutional neural
networks," or CNNs, the shining success of machine
learning, they propose ways to impart broader reasoning
skills.”
Case Study 9
“. . . By 2015, the company realized its new system was
not rating candidates for software developer jobs and
other technical posts in a gender-neutral way. That is
because Amazon’s computer models were trained to vet
applicants by observing patterns in resumes submitted to
the company over a 10-year period. Most came from
men, a reflection of male dominance across the tech
industry.”
Case Study 10
Solving Poverty through Data Science
It’s
Magic!
https://www.marketplace.org/shows/marketplace-morning-report 2018-07-30
Related IEEE Associations
Related worries and worriers
IEEE Society on Social Implications
of Technology
IEEE Product Safety Engineering Society
IEEE Reliability Society
See the free reliability analytics toolkit; some
items are useful for Big Data DevOps.
https://kbros.co/2rugRij
Who is IEEE SA?
Why care what it does?
• Affordable, volunteer-driven, international
• IEEE SA members have voting rights
• Collaboration with ISO, NIST
• Key standards include Ethernet
But this is an ASQ Symposium!
 IEEE limitations:
 Active IEEE communities are small.
 Standards documents are not free, though participation for IEEE members is.
 Heavily weighted toward late career participants.
 Despite “Engineering” in title, often not “engineering.”
But IEEE has . . .
 IEEE Digital Library (with cross reference to ACM digital library)
 Multinational reach and engagement
 Reasonable internal advocacy and oversight
 Diversity
 Sometimes good awareness of NIST work
 Often best work in lesser-known conference publications (e.g., vs. IEEE Security)
State of Computing Profession Ethics
@ACM_Ethics
ACM Code of Ethics (Draft 3, 2018)
https://www.acm.org/about-acm/code-of-ethics
Highlights of ACM Ethics v3
 “minimize negative consequences of computing, including threats to health, safety, personal
security, and privacy.”
 When the interests of multiple groups conflict, the needs of the least advantaged should be given
increased attention and priority
 Computing professionals should promote environmental sustainability both locally and globally
(Conference theme!).
 “. . .the consequences of emergent systems and data aggregation should be carefully analyzed.
Those involved with pervasive or infrastructure systems should also consider Principle 3.7
(Standard of care when a system is integrated into the infrastructure of society).”
Highlights: Joint ACM/IEEE Software Engineering Code of Ethics
https://www.computer.org/web/education/code-of-ethics
 Software engineers shall act consistently with the public interest.
 Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and
does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to
the public good.
 Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents,
methods and tools.
 Consider issues of physical disabilities, allocation of resources, economic disadvantage and other factors that can diminish
access to the benefits of software.
 Identify, document, and report significant issues of social concern, of which they are aware, in software or related
documents, to the employer or the client.
 Strive for high quality, acceptable cost and a reasonable schedule, ensuring significant tradeoffs are clear to and
accepted by the employer and the client, and are available for consideration by the user and the public.
 Identify, define and address ethical, economic, cultural, legal and environmental issues related to work projects
Hidden: Human Computer Interaction
 NBDPWG System Communicator
 Usability for web and mobile content
 Substitutes for old school manuals
 “Privacy text” for disclosures, policy, practices
 Central to much of the click-based economy
 “User” feedback, recommendations
 Recommendation engines
Professional Pride, Public Disillusionment
Broader acceptance within IT & Evidence-based Practices
 Growth of data science inside many professions (R, Python)
 Extraordinary explosion of OSS tooling
 Big Data, ML, Real Time
 Watson, AlphaGo, Alexa “AI” (Gee Whiz factor)
Public Perspective
 “2017 was the year we fell out of love with algorithms.”
 Cambridge Analytica, Equifax
Natural Language Tooling
 Hyperlinks to artifacts
 Chatbots
 Live agent
 Speech to text support
 Text mining
 Enterprise search (workflow-enabled artifacts)
 Some of the indexed artifacts may approach big data status
 SaaS Text Analytics
Dependency Management
 Big Data configuration management
 Across organizations
 Needed for critical infrastructure
 See NIST critical sector efforts
 Dependencies may not be human-intelligible
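One way to make cross-organization dependencies explicit is to record them as a graph and order them topologically, so a change upstream can be traced to every downstream consumer. A minimal sketch using Python's standard library; the system names are hypothetical:

```python
# Sketch: cross-system data dependencies as a graph. A topological order
# surfaces which systems must be considered first when something changes.
# System names are hypothetical.
from graphlib import TopologicalSorter

# Each system maps to the systems it depends on.
deps = {
    "fraud_model": {"card_feed", "customer_db"},
    "card_feed": {"partner_api"},
    "reporting": {"fraud_model"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # upstream systems appear before their consumers
```

Even this toy version shows the problem the slide names: once dependencies cross organizational boundaries (partner APIs, external feeds), no single team holds the whole graph, and the graph itself may not be human-intelligible at scale.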
“‘Once the rockets are up, who cares where
they come down? That’s not my department,’
says Wernher von Braun.” – Tom Lehrer
Traceability & Requirements Engineering
 What is an ethical requirement?
Possible: big data ethical fabric (transparency, usage)
 Can you audit a requirement? What is a quality requirement?
 What is requirement traceability?
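One possible answer to the questions above: an ethical requirement becomes auditable once tests and evidence trace to it. A sketch of such a traceability record; the field names and IDs are illustrative, not drawn from any standard:

```python
# Sketch: a minimal traceability record linking an ethical requirement to
# verifying tests and audit evidence. Fields and IDs are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EthicalRequirement:
    req_id: str
    statement: str
    stakeholders: list
    tests: list = field(default_factory=list)      # IDs of verifying tests
    evidence: list = field(default_factory=list)   # audit artifacts

    def is_auditable(self):
        # Auditable here means at least one test traces back to the requirement.
        return bool(self.tests)

req = EthicalRequirement(
    req_id="ETH-042",
    statement="Credit denials must be explainable to the applicant.",
    stakeholders=["applicants", "compliance"],
)
req.tests.append("TST-117")
print(req.is_auditable())  # True once a test traces to it
```

The design choice worth noting: treating an ethical requirement like any other requirement (an ID, a statement, and trace links) is what makes the audit question answerable at all.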
Special Populations
 Disadvantaged
 By regulation (e.g., 8A, SBIR, disability)
 By “common sense” (“fairness” and “equity”)
 By economic / sector (“underserved”)
 Internet Bandwidth inequity
 Children
 “Criminals” / Malware Designers
Algorithms
 “Why am I locked out while she is permitted?”
 “Why isn’t my FICO score changing?”
 “How can I know when I have explained our algorithm?”
 “Is there an ‘explain-ability’ metric?”
 What is different about machine-to-machine algorithms?
 “Can an algorithm be abusive?”
 “Is ‘bias’ the new breach?” https://kbros.co/2I2sxDO
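On the "explain-ability metric" question, one candidate from the literature is fidelity: how often a human-readable surrogate reproduces the opaque model's decisions. A toy sketch; both models below are hypothetical stand-ins:

```python
# Sketch: "fidelity" as one candidate explainability metric -- the rate at
# which a simple surrogate rule agrees with the black-box model.
# Both models are hypothetical stand-ins.

def fidelity(black_box, surrogate, inputs):
    agree = sum(black_box(x) == surrogate(x) for x in inputs)
    return agree / len(inputs)

black_box = lambda x: x[0] * 2 + x[1] > 1.0   # the opaque model
surrogate = lambda x: x[0] > 0.5              # the human-readable rule

inputs = [(0.9, 0.1), (0.2, 0.9), (0.6, -0.5), (0.1, 0.1)]
print(fidelity(black_box, surrogate, inputs))  # → 0.5
```

A fidelity of 0.5 says the simple story explains only half of what the model actually does, which is exactly when "I have explained our algorithm" becomes a false claim.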
“Bias is the New Breach”
“Researchers from MIT and Stanford University
tested three commercially released facial-analysis
programs from major technology companies and
will present findings that the software contains
clear skin-type and gender biases. Facial
recognition programs are good at recognizing
white males but fail embarrassingly with females
especially the darker the skin tone. The news
broke last week but will be presented in full at the
upcoming Conference on Fairness, Accountability,
and Transparency.“
https://www.cio.com/article/3256272/artificial-intelligence/in-the-ai-revolution-bias-is-the-new-
breach-how-cios-must-manage-risk.html
Algorithmic Bias Risk Management
 1. Recognize, socialize groups protected by statute (e.g.,
Equal Credit Opportunity Act)
 2. Creatively consider other affected subpopulations
 Sight impaired – other disabilities
 Children, elderly
 Unusual household settings (elder care, multi-family
housing)
 Part-time workers
 Novice vs. Experienced users
 What counterfactuals are simply not being measured?
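Step 1 can be made concrete with the "four-fifths" rule often used as a first screen in fair-lending and employment analysis. A minimal sketch; the group labels and counts are fabricated for illustration:

```python
# Sketch: the "four-fifths" disparate-impact screen. A protected group's
# selection rate below 80% of the reference group's rate flags potential
# adverse impact. Group labels and counts are fabricated.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def disparate_impact_ratio(outcomes, protected, reference):
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

outcomes = {"group_a": (80, 100), "group_b": (48, 100)}
ratio = disparate_impact_ratio(outcomes, "group_b", "group_a")
print(ratio, ratio >= 0.8)  # a ratio below 0.8 triggers further review
```

This is only a screen: it says nothing about the counterfactuals the last bullet asks about, i.e. the subpopulations that never appear in the data at all.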
Linkage to Privacy, Surveillance, Distrust
Ask your quality engineer to respond to this question:
“Algorithms are bad because they . . . “
 Use data without our knowledge
 Are based on incorrect or misleading knowledge about us
 Are not accountable to individual citizens
 Are used by governments to spy on citizens
 Support drone warfare
 Are built by specialists who do what they are told without asking questions
 Represent a trend to automate jobs out of existence
 Are built by big companies with no public accountability
“When we fell out of love with algorithms.”
Audience, Alerts, Audits: Monitoring
 Who is the audience for a product or service? (Out of regular coffee in our meeting room)
 Who should be alerted, and for what, and how often?
 Even if they have opted out?
 What should be audited?
 What thresholds are appropriate for cost, timetable, risk?
Decisions vs. Decision Support:
Application Areas
Human-Computer Interactions in Decision-making
Undermining Specialists*
“The threat that electronic health records
and machine learning pose for physicians’
clinical judgment – and their well-being.” – NYT
2018-05-16
“’Food poisoning’ was diagnosed because
the strangulated hernia in the groin was
overlooked, or patients were sent to the
catheterization lab for chest pain because
no one saw the shingles rash on the left
chest.”
*Or adversely changing specialist behavior.
“Rote Decision-Making”
“The authors, both emergency room physicians at
Brigham and Women’s Hospital in Boston, do a fine job
of sorting through most of the serious problems in
American medicine today, including the costs, over-
testing, overprescribing, overlitigation and general
depersonalization. All are caused at least in part, they
argue, by the increasing use of algorithms in medical
care.” -NYT 2018-04-01
Facial Recognition for Law Enforcement
 “Amazon touts its Rekognition facial recognition system as ‘simple and easy to
use,’ encouraging customers to ‘detect, analyze, and compare faces for a
wide variety of user verification, people counting, and public safety use
cases.’ And yet, in a study released Thursday by the American Civil Liberties
Union, the technology managed to confuse photos of 28 members of
Congress with publicly available mug shots. Given that Amazon actively
markets Rekognition to law enforcement agencies across the US, that’s
simply not good enough. The ACLU study also illustrated the racial bias that
plagues facial recognition today. ‘Nearly 40 percent of Rekognition’s false
matches in our test were of people of color, even though they make up only
20 percent of Congress,’ wrote ACLU attorney Jacob Snow. ‘People of color
are already disproportionately harmed by police practices, and it’s easy to
see how Rekognition could exacerbate that.’“ -Wired 2018-07-26
“Family” Impacts
“Charges of faulty forecasts have accompanied the
emergence of predictive analytics into public policy.
And when it comes to criminal justice, where
analytics are now entrenched as a tool for judges
and parole boards, even larger complaints have
arisen about the secrecy surrounding the workings
of the algorithms themselves — most of which are
developed, marketed and closely guarded by private
firms. That’s a chief objection lodged against two
Florida companies: Eckerd Connects, a nonprofit,
and its for-profit partner, MindShare Technology.” –
NYT “Can an algorithm tell when kids are in danger?” 2018-01-02
Lawsuit over Teacher Evaluation Algorithm
 “Value-added measures for teacher evaluation, called the Education Value-
Added Assessment System, or EVAAS, in Houston, is a statistical method
that uses a student’s performance on prior standardized tests to predict
academic growth in the current year. This methodology—derided as
deeply flawed, unfair and incomprehensible—was used to make decisions
about teacher evaluation, bonuses and termination. It uses a secret
computer program based on an inexplicable algorithm (above).
 In May 2014, seven Houston teachers and the Houston Federation of
Teachers brought an unprecedented federal lawsuit to end the policy,
saying it reduced education to a test score, didn’t help improve teaching or
learning, and ruined teachers’ careers when they were incorrectly
terminated. Neither HISD nor its contractor allowed teachers access to the
data or computer algorithms so that they could test or challenge the
legitimacy of the scores, creating a ‘black box.’” http://kbros.co/2EvxjU9
Wells Fargo Credit Denial “Glitch”
CNN: “Hundreds of people
had their homes foreclosed
on after software used by
Wells Fargo incorrectly
denied them mortgage
modifications.” 2018-08-05
https://money.cnn.com/2018/08/04/news/companies/wells-fargo-mortgage-modification/index.html
. . . All this is not easy to “fix”
Risk mitigation for data science implementations is relatively immature.
Unintended Use Cases or Ethical Lapse?
• Algorithm corrected for color bias, but can
now be used for profiling
• “Red Teaming” or “Abuse User Stories” can
help
• Unintended use cases call for a safety vs. a
pure “assurance” framework
“Lite” AI Security/Reliability Frameworks
https://motherboard.vice.com/en_us/article/bjbxbz/researchers-tricked-ai-into-doing-free-computations-it-wasnt-trained-to-do
“Google researchers demonstrated that a
neural network could be tricked into
performing free computations for an
attacker. They worry that this could one
day be used to turn our smartphones into
botnets by exposing them to images.”
XAI: Explain, Interpret, Narrate,
Translate
The elusive holy grail of Transparency
Challenges of Interpretability
“Adversarial ML literature
suggests that ML models are
very easy to fool and even
linear models work in counter-
intuitive ways.” (Selvaraju et al, 2016)
• Reproducibility
• Training sets including results of
other analytics (e.g., FICO)
• Provenance (think IoT)
• Opaque statistical issues
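The point attributed to Selvaraju et al. about counter-intuitive linear models can be shown in a few lines: a small, targeted perturbation flips even a linear classifier's decision. The weights and input below are hypothetical:

```python
# Sketch: even a linear classifier can be flipped by a small, targeted
# perturbation (an FGSM-style step). Weights and input are hypothetical.
import math

w = [0.5, -0.3, 0.8, 0.1]   # linear model weights
x = [1.0, 2.0, -0.5, 0.4]   # an input the model scores negative

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Move every feature by epsilon in the direction of its weight's sign:
# each coordinate changes by at most eps, yet the decision flips.
eps = 0.4
x_adv = [xi + eps * math.copysign(1.0, wi) for wi, xi in zip(w, x)]

print(score(w, x), score(w, x_adv))  # negative score becomes positive
```

The per-feature change is bounded by eps, but the contributions add up across features, which is why high-dimensional models are so easy to push across a decision boundary.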
Transparency
 What does it mean to be “transparent” about ethics?
 What connection to IEEE/ACM/ASQ professional ethics?
 ASQ: “Be truthful and transparent in all professional interactions and activities.” https://asq.org/about-asq/code-of-ethics
 ACM: “The entire computing profession benefits when the ethical decision making process is accountable to and
transparent to all stakeholders. Open discussions about ethical issues promotes this accountability and transparency.”
 ACM “A computing professional should be transparent and provide full disclosure of all pertinent system limitations and
potential problems. Making deliberately false or misleading claims, fabricating or falsifying data, and other dishonest
conduct are violations of the Code.”
 ACM “Computing professionals should establish transparent policies and procedures that allow individuals to give
informed consent to automatic data collection, review their personal data, correct inaccuracies, and, where appropriate,
remove data.”
 ACM “Organizational procedures and attitudes oriented toward quality, transparency, and the welfare of society reduce
harm to the public and raise awareness of the influence of technology in our lives. Therefore, leaders should encourage
full participation of all computing professionals in meeting social responsibilities and discourage tendencies to do
otherwise.”
Transparency & Professional Ethics
 What connection to IEEE/ACM/ASQ professional ethics?
 ASQ: “. . . Fairness . . . Hold paramount the safety, health, and welfare of individuals, the public, and the environment.”
Transparency General Challenges
 Some data, algorithms are intellectual property
 Some training data includes PII
 Predictive analytical models are often “point in time”
 “Transparent” according to whose definition?
 Should algorithms have “opt-in?” Can they?
 Reidentification risks from high-variety big data training sets
 What quality spectra exist for transparency? A quality BoK for transparency?
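The reidentification concern can be screened roughly with a k-anonymity check over quasi-identifiers. A sketch with fabricated records; a minimum equivalence-class size of 1 means some row is unique on those attributes and therefore potentially reidentifiable:

```python
# Sketch: k-anonymity over quasi-identifiers as a rough reidentification
# screen for training data. The records are fabricated for illustration.
from collections import Counter

def k_anonymity(records, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

records = [
    {"zip": "10001", "age_band": "30-39", "diagnosis": "A"},
    {"zip": "10001", "age_band": "30-39", "diagnosis": "B"},
    {"zip": "10002", "age_band": "40-49", "diagnosis": "C"},
]

print(k_anonymity(records, ["zip", "age_band"]))  # 1 -> a uniquely identifiable row
```

The "variety" problem in the slide is that every additional joined data source adds quasi-identifier columns, which shrinks equivalence classes and drives k toward 1.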
Explainability / Interpretability
“[We need to] find ways of making techniques like
deep learning more understandable to their creators
and accountable to their users. Otherwise it will be
hard to predict when failures might occur—and it’s
inevitable they will. That’s one reason Nvidia’s car is
still experimental.”
“Fairness Flow”:
But will you share your ethics guidance?
https://www.cnet.com/news/facebook-starts-building-ai-with-an-ethical-compass/
“Bin Yu, a professor at UC Berkeley, says
the tools from Facebook and Microsoft
seem like a step in the right direction,
but may not be enough. She suggests
that big companies should have outside
experts audit their algorithms in order
to prove they are not biased. ‘Someone
else has to investigate Facebook's
algorithms—they can't be a secret to
everyone,’ Yu says.”
-Technology Review 2018-05-25
Decision Support for Bias Detection
“Things like transparency, intelligibility, and explanation are new enough
to the field that few of us have sufficient experience to know everything
we should look for and all the ways that bias might lurk in our models,”
says Rich Caruana, a senior researcher at Microsoft who is working on
the bias-detection dashboard.
Technology Review, Will Knight 2018-05-25
Insights from More Mature Settings
 AI Analytics for distributed military coalitions
 “. . . Research has recently started to address such concerns and
prominent directions include explainable AI [4], quantification of
input influence in machine learning algorithms [5], ethics
embedding in decision support systems [6], “interruptability” for
machine learning systems [7], and data transparency [8]. “
 “. . . devices that manage themselves and generate their own
management policies, discussing the similarities between such
systems and Skynet.”
S. Calo, D. Verma, E. Bertino, J. Ingham, and G. Cirincione, "How to prevent skynet
from forming (a perspective from Policy-Based autonomic device management),"
in 2018 IEEE 38th International Conference on Distributed Computing Systems
(ICDCS), Jul. 2018, pp. 1369-1376. [Online]. Available:
http://dx.doi.org/10.1109/ICDCS.2018.00137
Enterprise Level Risk
 Impact on reputation
 Litigation
 Unintentionally reveal sources, methods, data / interrupted data streams (e.g., web)
 Loss of consumer confidence, impact on public safety
 Misapplication of internally developed models
 Financial losses from data science #fail
 “. . . as long as our training is in the form of someone lecturing about the basics of gender or
racial bias in society, that training is not likely to be effective.”
Dr. Hanie Sedghi, Research Scientist, Google Brain
Corporate Initiatives
 Environmental Social Governance
 What does quality mean in enterprise sustainability?
 What if there is only lip service to sustainability or quality?
 Transparency within employee groups, departments, subsidiaries (See P7005)
 Computing decisions that affect carbon footprint (green data centers, etc.)
ISO 26000
 “ISO 26000 is the international standard
developed to help organizations effectively
assess and address those social responsibilities
that are relevant and significant to their mission
and vision; operations and processes; customers,
employees, communities, and other
stakeholders; and environmental impact.”
Related Work
 NIST 800-53 Rev 5 and others, NIST Cloud Security
 Building and automotive automation: ISO 29481, 16739, 12006
 https://www.buildingsmart.org/about/what-is-openbim/ifc-introduction
 Uptane
 Ethics and Societal Considerations ISO 26000, IEEE P700x
 DevOps Security IEEE P2675
 Microsegmentation and NFV IEEE P1915.1
 Safety orientation
 Infrastructure as code
 E.g., security tooling is code, playbooks are code
Selected Software Engineering References
Bo Brinkman, Catherine Flick, Don Gotterbarn, Keith Miller, Kate Vazansky, and Marty J. Wolf. 2017.
Listening to professional voices: draft 2 of the ACM code of ethics and professional
conduct. Commun. ACM 60, 5 (May 2017), 105-111. DOI: https://doi.org/10.1145/3072528
Stepping on the quality scales
Beyond ISO 9001
Quality Engineer as Camera Lens
“So how big is the difference between a lens that costs a few hundred dollars, and one costing over
a thousand dollars more? What kinds of gains does your money buy? Are the quality improvements
substantial enough to be noticed by the untrained eye?” (Richard Baguley, Wired 2014-06-13)
https://www.wired.com/2014/06/hi-lo-dslr-lenses/
If a quality engineer more fully pursues her goals, would an
enterprise’s moral compass be more finely tuned?
Quality Touchpoints
 Requirements development
 Requirements adherence
 Measurement frameworks
 Traceability / integrity
 Multiple overlapping frameworks (social, environmental, psychological, enterprise, regulatory, . . .)
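The traceability touchpoint can be automated as a simple coverage check between requirement IDs and the verification artifacts (tests, reviews, audits) that cite them. A minimal illustration with invented IDs:

```python
# Illustrative traceability check: every requirement ID should map to at
# least one verification artifact. IDs and mappings below are invented.

requirements = {"REQ-1", "REQ-2", "REQ-3"}
trace = {
    "test_login": {"REQ-1"},
    "test_audit_log": {"REQ-1", "REQ-3"},
}

covered = set().union(*trace.values())
untraced = requirements - covered
print(sorted(untraced))  # → ['REQ-2']: a requirement with no verification evidence
```

Run against a real requirements database, the untraced set is exactly the integrity gap a quality engineer would escalate.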
Current Challenges
 Stop-to-test paradigm often fails
 Streaming data quality models are ahead of current quality teaching / practice
 AI-for-quality
 AI measurement
 AI test generation
 AI data / sensor simulation, scalability
 Quality of XAI by Audience / Enterprise
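Where the stop-to-test paradigm fails, quality checks have to run in-stream. One common pattern is a running mean/variance anomaly flag (Welford's algorithm); this sketch is my own illustration, and the z-threshold and warm-up count are arbitrary choices, not from the deck:

```python
# Minimal streaming quality monitor: maintain running mean/variance
# (Welford's algorithm) and flag readings far from the running mean.

class StreamMonitor:
    def __init__(self, z_threshold=3.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0       # sum of squared deviations
        self.z = z_threshold
        self.warmup = warmup

    def observe(self, x):
        """Return True if x looks anomalous, then update running stats."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.z * std
        # Welford update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

monitor = StreamMonitor()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 25.0]
flags = [monitor.observe(x) for x in readings]
print(flags[-1])  # the 25.0 spike is flagged
```

The point is pedagogical: data quality here is a property of a live process, not of a frozen test set, which is exactly what much quality teaching does not yet cover.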
Agile development & quality engineering
 “[Studies] indicate that there is a significant
correlation between the inclusion of ethical
tools in the process of planning in Agile
methodologies and the achievement of
improved performance in three quality
parameters: schedule, product functionality
and cost."
Selected Quality References
H. Abdulhalim, Y. Lurie, and S. Mark, "Ethics as a quality driver in agile software projects," Journal of
Service Science and Management, vol. 11, no. 1, pp. 13-25, 2018. [Online]. Available:
http://dx.doi.org/10.4236/jssm.2018.111002
Use Cases
 Network Protection
 Systems Health & Management (AWS metrics, billing, performance)
 Education
 Cargo Shipping
 Aviation (safety)
 UAV, UGV regulation
 Regulated Government Privacy (FERPA, HIPAA, COPPA, GDPR, PCI, etc.)
 Healthcare Consent Models
 HL7 FHIR Security and Privacy link
A Final Rationale
“What, me quality
engineer worry?”
About Me
• Co-Chair NIST Big Data Public WG Security & Privacy subgroup https://bigdatawg.nist.gov/
• Chair Ontology / Taxonomy subgroup for IEEE P7000. Occasional participant in IEEE Standards WGs P7007, P7003, P7002, P7004, P7010
• IEEE P1915.1 Standard for Software Defined Networking and Network Function Virtualization Security (member)
• IEEE P2675 WG Security for DevOps (member)
• Current: finance, large enterprise: supply chain risk, complex playbooks, many InfoSec tools, workflow automation, big data logging; risks include fraud and regulatory #fail
• Authored chapter “Big Data Complex Event Processing for Internet of Things Provenance: Benefits for Audit, Forensics, and Safety” in Cyber-Assurance for IoT (Wiley, 2017) https://kbros.co/2GNVHBv
• @knowlengr dark@computer.org knowlengr.com https://linkedin.com/in/knowlengr
Background Material
NBDPWG Appendix A, Cloud Native SAFE
ACM Computing Classification
Security & Privacy Topics
 Database and storage security
 Data anonymization and sanitization
 Management and querying of encrypted data
 Information accountability and usage control
 Database activity monitoring
 Software and application security
 Software security engineering
 Web application security
 Social network security and privacy
 Domain-specific security and privacy architectures
 Software reverse engineering
 Human and societal aspects of security and privacy
 Economics of security and privacy
 Social aspects of security and privacy
 Privacy protections
 Usability in security and privacy
CRISP-DM Process Model
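For reference, the six CRISP-DM phases, paired with illustrative quality/ethics checkpoints. The checkpoint wording is my own suggestion, not part of the CRISP-DM specification:

```python
# The six CRISP-DM phases with illustrative (not canonical) checkpoints
# where quality engineering and ethics review could attach.

CRISP_DM = [
    ("Business Understanding", "surface stakeholder values and ethical constraints"),
    ("Data Understanding", "check provenance, consent, and representativeness"),
    ("Data Preparation", "document exclusions; watch for proxy variables"),
    ("Modeling", "test for disparate impact across groups"),
    ("Evaluation", "review explainability for the intended audience"),
    ("Deployment", "monitor drift; define rollback and audit trail"),
]

for phase, checkpoint in CRISP_DM:
    print(f"{phase}: {checkpoint}")
```

Because CRISP-DM is iterative, each checkpoint recurs on every pass through the cycle rather than firing once.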
Cloud Native Foundation
Safe Access For Everyone (SAFE)
 https://github.com/cn-security/safe
This deck is released under
Creative Commons
Attribution-Share Alike.
  • 6. Ethical issues Already in Play  Sustainability  Environment  Climate Change (*data center power consumption)  Bias concerns in gender, race, free speech  Social media technology responsibility  As propaganda platforms  Excessive use of cell phones by children: ADHD?  Weakened critical thinking, F2F social skills (Sherry Turkle Reclaiming Conversation 2015) Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 7. IEEE P7000: Marquis Group Charter “Scope: The standard establishes a process model by which engineers and technologists can address ethical consideration throughout the various stages of system initiation, analysis and design. Expected process requirements include management and engineering view of new IT product development, computer ethics and IT system design, value-sensitive design, and, stakeholder involvement in ethical IT system design. . .. The purpose of this standard is to enable the pragmatic application of this type of Value-Based System Design methodology which demonstrates that conceptual analysis of values and an extensive feasibility analysis can help to refine ethical system requirements in systems and software life cycles.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 8. Related IEEE P70nn Groups  IEEE P7000 Ethical Systems Design  IEEE P7001 Transparency of Autonomous Systems  IEEE P7002 Data Privacy Process  IEEE P7003 Algorithmic Bias Considerations  IEEE P7004 Standard for Child and Student Data Governance  IEEE P7005 Standard for Transparent Employer Data Governance  IEEE P7006 Standard for Personal AI Agent  IEEE P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems  IEEE P7008 -Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems  IEEE P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems  IEEE P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems  IEEE P7011 SSIE Standard for Trustworthiness of News Media  IEEE P7012 SSIE Machine Readable Personal Privacy Terms  IEEE P7013 Facial Analysis Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 9. Key References Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 Focus: artificial intelligence and autonomous systems. Havens asks, “How will machines know what we value if we don’t know ourselves?”
  • 10. Recent Case Study Opportunities Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “Faster, Higher, Farther chronicles a corporate scandal that rivals those at Enron and Lehman Brothers—one that will cost Volkswagen more than $22 billion in fines and settlements.” – Publisher
  • 11. Case Study 2 “Equifax said that about 38,000 driver's licenses and 3,200 passport details had been uploaded to the portal that was hacked. (http://bit.ly/2jF3VTh) Equifax said in September that hackers had stolen personally identifiable information of U.S., British and Canadian consumers. The company confirmed that information on about 146.6 million names, 146.6 million dates of birth, 145.5 million social security numbers, 99 million address records and 209,000 payment card numbers and expiration dates were stolen in the cyber security incident.” –Yahoo Finance Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 12. Case Study 3 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 It will be remembered as “a breach,” but the Facebook – Cambridge Analytica incident was about supply chain big data. Adjectives to remember: “Tiny” + “Big”
  • 13. Case Study 4 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 Finding: Hispanic-owned and managed Airbnb properties, controlled for other aspects, receive less revenue than other groups. Response from Airbnb when contacted by reporters: We already provide tools to help price listings. Source: American Public Media Marketplace 8-May-2018 Related story: Dan Gorenstein, “Airbnb cracks down on bias – but at what cost?” Marketplace, 2018-09-08.
  • 14. Case Study 5 A “charity” was used to subsidize payments to Medicare patients in order to boost drug sales. Multiple manufacturers were involved. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 15. Case Study 6 The US FTC Fair Credit Reporting Act requires that customers receive an explanation when credit will not be extended by a lender. Fact: Many lenders are using ML and algorithms to make such decisions in real time. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
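One widely discussed way to reconcile real-time ML lending decisions with the explanation requirement is to rank the features that pulled an applicant's score below a baseline ("reason codes"). The linear model, weights, and feature names below are invented purely for illustration:

```python
# Illustrative adverse-action "reason code" sketch for a linear scorer.
# Weights, baseline profile, and feature names are invented, not real.

weights = {"utilization": -2.0, "late_payments": -3.5, "account_age": 1.2}
baseline = {"utilization": 0.3, "late_payments": 0.0, "account_age": 10.0}

def reason_codes(applicant, top_n=2):
    """Rank features by how much they lowered the score vs. the baseline."""
    impact = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}
    negative = sorted((v, f) for f, v in impact.items() if v < 0)
    return [f for v, f in negative[:top_n]]

applicant = {"utilization": 0.9, "late_payments": 2.0, "account_age": 3.0}
print(reason_codes(applicant))  # → ['account_age', 'late_payments']
```

For nonlinear models the same idea needs attribution methods rather than raw weights, which is where the quality-of-XAI challenge raised earlier becomes a compliance question, not just a research one.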
  • 16. Case Study 7 “. . . Artificial intelligence. Mr. Zuckerberg’s vision, which the committee members seemed to accept, was that soon enough, Facebook’s A.I. programs would be able to detect fake news, distinguishing it from more reliable information on the platform. With midterms approaching, along with the worrisome prospect that fake news could once again influence our elections, we wish we could say we share Mr. Zuckerberg’s optimism. But in the near term we don’t find his vision plausible. Decades from now, it may be possible to automate the detection of fake news. But doing so would require a number of major advances in A.I., taking us far beyond what has so far been invented.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 https://www.nytimes.com/2018/10/20/opinion/sunday/ai-fake-news-disinformation-campaigns.html
  • 17. Case Study 8 “The [Google DeepMind et al. team] research acknowledges that current "deep learning" approaches to AI have failed to achieve the ability to even approach human cognitive skills. Without dumping all that's been achieved with things such as "convolutional neural networks," or CNNs, the shining success of machine learning, they propose ways to impart broader reasoning skills.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 18. Case Study 9 “. . . By 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 19. Case Study 10 Solving Poverty through Data Science It’s Magic! Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 https://www.marketplace.org/shows/marketplace-morning-report 2018-07-30
  • 20. Related IEEE Associations Related worries and worriers Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 21. IEEE Society on Social Implications of Technology Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 22. IEEE Product Safety Engineering Society Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 23. IEEE Reliability Society Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 See free reliability analytics toolkit. (Some items are useful to Big Data DevOps.) https://kbros.co/2rugRij
  • 24. Who is IEEE SA? Why care what it does? • Affordable, volunteer-driven, int’l • IEEE SA members have voting rights • Collaboration with ISO, NIST • Key standards include Ethernet Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 25. But this is an ASQ Symposium!  IEEE limitations:  IEEE Active communities are small.  Standards documents are not free, though participation for IEEE members is.  Heavily weighted toward late career participants.  Despite “Engineering” in title, often not “engineering.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 26. But IEEE has . . .  IEEE Digital Library (with cross reference to ACM digital library)  Multinational reach and engagement  Reasonable internal advocacy and oversight  Diversity  Sometimes good awareness of NIST work  Often best work in lesser-known conference publications (e.g., vs. IEEE Security) Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 27. State of Computing Profession Ethics @ACM_Ethics ACM Code of Ethics (Draft 3, 2018) https://www.acm.org/about-acm/code-of-ethics Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 28. Highlights of ACM Ethics v3  “minimize negative consequences of computing, including threats to health, safety, personal security, and privacy.”  When the interests of multiple groups conflict, the needs of the least advantaged should be given increased attention and priority.  Computing professionals should promote environmental sustainability both locally and globally (Conference theme!).  “. . . the consequences of emergent systems and data aggregation should be carefully analyzed. Those involved with pervasive or infrastructure systems should also consider Principle 3.7 (Standard of care when a system is integrated into the infrastructure of society).” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 29. Highlights: Joint ACM/IEEE Software Engineering Code of Ethics https://www.computer.org/web/education/code-of-ethics  Software engineers shall act consistently with the public interest.  Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.  Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods and tools.  Consider issues of physical disabilities, allocation of resources, economic disadvantage and other factors that can diminish access to the benefits of software.  Identify, document, and report significant issues of social concern, of which they are aware, in software or related documents, to the employer or the client.  Strive for high quality, acceptable cost and a reasonable schedule, ensuring significant tradeoffs are clear to and accepted by the employer and the client, and are available for consideration by the user and the public.  Identify, define and address ethical, economic, cultural, legal and environmental issues related to work projects. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 30. Hidden: Human-Computer Interaction  NBDPWG System Communicator  Usability for web and mobile content  Substitutes for old-school manuals  “Privacy text” for disclosures, policy, practices  Central to much of the click-based economy  “User” feedback, recommendations  Recommendation engines Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 31. Professional Pride, Public Disillusionment Broader acceptance within IT & Evidence-based Practices  Growth of data science inside many professions (R, Python)  Extraordinary explosion of OSS tooling  Big Data, ML, Real Time  Watson, AlphaGo, Alexa “AI” (Gee Whiz factor) Public Perspective  “2017 was the year we fell out of love with algorithms.”  Cambridge Analytica, Equifax Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 32. Natural Language Tooling  Hyperlinks to artifacts  Chatbots  Live agent  Speech to text support  Text mining  Enterprise search (workflow-enabled artifacts)  Some of the indexed artifacts may approach big data status  SaaS Text Analytics Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 33. Dependency Management  Big Data configuration management  Across organizations  Needed for critical infrastructure  See NIST critical sector efforts  Dependencies may not be human-intelligible Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “’Once ze rocket goes up, who cares where it come down. That’s not my department,’ says Wernher von Braun.” – Tom Lehrer
  • 34. Traceability & Requirements Engineering  What is an ethical requirement? Possible: big data ethical fabric (transparency, usage)  Can you audit a requirement? What is a quality requirement?  What is requirement traceability? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
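One way to make "Can you audit a requirement?" concrete is to attach verification evidence to each ethical requirement, so an audit can ask "what verified this?" A minimal sketch; the IDs, statements, and test names are hypothetical:

```python
# Sketch of traceable ethical requirements: each requirement links to the
# tests or audit artifacts that evidence it. An unlinked requirement is,
# by definition, not auditable.
from dataclasses import dataclass, field

@dataclass
class EthicalRequirement:
    req_id: str
    statement: str
    verified_by: list = field(default_factory=list)  # test IDs / audit artifacts

    @property
    def auditable(self):
        # No linked evidence means nothing to audit against.
        return len(self.verified_by) > 0

reqs = [
    EthicalRequirement("ETH-001", "Model outputs include a plain-language explanation",
                       verified_by=["TEST-042"]),
    EthicalRequirement("ETH-002", "Protected attributes excluded from training features"),
]
gaps = [r.req_id for r in reqs if not r.auditable]
print(gaps)  # ['ETH-002'] -- the untraced requirement
```

The same structure generalizes to a "big data ethical fabric": the fabric is just the set of such links maintained across pipelines.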
  • 35. Special Populations  Disadvantaged  By regulation (e.g., 8A, SBIR, disability)  By “common sense” (“fairness” and “equity”)  By economic / sector (“underserved”)  Internet Bandwidth inequity  Children  “Criminals” / Malware Designers Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 36. Algorithms  “Why am I locked out while she is permitted?”  “Why isn’t my FICO score changing?”  “How can I know when I have explained our algorithm?”  “Is there an ‘explain-ability’ metric?”  What is different about machine-to-machine algorithms?  “Can an algorithm be abusive?”  “Is ‘bias’ the new breach?” https://kbros.co/2I2sxDO Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 37. “Bias is the New Breach” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “Researchers from MIT and Stanford University tested three commercially released facial-analysis programs from major technology companies and will present findings that the software contains clear skin-type and gender biases. Facial recognition programs are good at recognizing white males but fail embarrassingly with females especially the darker the skin tone. The news broke last week but will be presented in full at the upcoming Conference on Fairness, Accountability, and Transparency.“ https://www.cio.com/article/3256272/artificial-intelligence/in-the-ai-revolution-bias-is-the-new- breach-how-cios-must-manage-risk.html
  • 38. Algorithmic Bias Risk Management  1. Recognize, socialize groups protected by statute (e.g., Equal Credit Opportunity Act)  2. Creatively consider other affected subpopulations  Sight impaired – other disabilities  Children, elderly  Unusual household settings (elder care, multi-family housing)  Part-time workers  Novice vs. experienced users  What counterfactuals are simply not being measured? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
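A widely used first screen for step 1 is the "four-fifths rule" comparison of selection rates across groups. A hedged sketch with made-up group labels and counts; this is a screening heuristic, not a legal determination:

```python
# Sketch: flag subgroups whose approval rate falls below 80% of the
# best-performing group's rate (the "four-fifths rule" screen).
# Group names and counts are invented for illustration.

def selection_rates(outcomes):
    return {g: approved / total for g, (approved, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Any group under threshold * best warrants investigation.
    return sorted(g for g, r in rates.items() if r < threshold * best)

outcomes = {"group_a": (80, 100), "group_b": (50, 100), "group_c": (72, 100)}
print(four_fifths_flags(outcomes))  # ['group_b']
```

The counterfactual question on the slide is the harder one: subgroups absent from `outcomes` can never be flagged, which is why step 2 matters.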
  • 39. Linkage to Privacy, Surveillance, Distrust Ask your quality engineer to respond to this question: “Algorithms are bad because they . . . “  Use data without our knowledge  Are based on incorrect or misleading knowledge about us  Are not accountable to individual citizens  Are used by governments to spy on citizens  Support drone warfare  Are built by specialists who do what they are told without asking questions  Represent a trend to automate jobs out of existence  Are built by big companies with no public accountability Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 40. “When we fell out of love with algorithms.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 41. Audience, Alerts, Audits: Monitoring  Who is the audience for a product or service? (Out of regular coffee in our meeting room)  Who should be alerted, and for what, and how often?  Even if they have opted out?  What should be audited?  What thresholds are appropriate for cost, timetable, risk? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 42. Decisions vs. Decision Support: Application Areas Human-Computer Interactions in Decision-making Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 43. Undermining Specialists* Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “The threat that electronic health records and machine learning pose for physicians’ clinical judgment – and their well-being.” – NYT 2018-05-16 “’Food poisoning’ was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.” *Or adversely changing specialist behavior.
  • 44. “Rote Decision-Making” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “The authors, both emergency room physicians at Brigham and Women’s Hospital in Boston, do a fine job of sorting through most of the serious problems in American medicine today, including the costs, over- testing, overprescribing, overlitigation and general depersonalization. All are caused at least in part, they argue, by the increasing use of algorithms in medical care.” -NYT 2018-04-01
  • 45. Facial Recognition for Law Enforcement  “AMZ touts its Rekognition facial recognition system as ‘simple and easy to use,’ encouraging customers to ‘detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.’ And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough. The ACLU study also illustrated the racial bias that plagues facial recognition today. ‘Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress,’ wrote ACLU attorney Jacob Snow. ‘People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.’“ -Wired 2018-07-26 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 46. “Family” Impacts Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “Charges of faulty forecasts have accompanied the emergence of predictive analytics into public policy. And when it comes to criminal justice, where analytics are now entrenched as a tool for judges and parole boards, even larger complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed and closely guarded by private firms. That’s a chief objection lodged against two Florida companies: Eckerd Connects, a nonprofit, and its for-profit partner, MindShare Technology.” – NYT “Can an algorithm tell when kids are in danger?” 2018-01-02
  • 47. Lawsuit over Teacher Evaluation Algorithm  Value-added measures for teacher evaluation, called the Education Value-Added Assessment System, or EVAAS, in Houston, is a statistical method that uses a student’s performance on prior standardized tests to predict academic growth in the current year. This methodology—derided as deeply flawed, unfair and incomprehensible—was used to make decisions about teacher evaluation, bonuses and termination. It uses a secret computer program based on an inexplicable algorithm.  “In May 2014, seven Houston teachers and the Houston Federation of Teachers brought an unprecedented federal lawsuit to end the policy, saying it reduced education to a test score, didn’t help improve teaching or learning, and ruined teachers’ careers when they were incorrectly terminated. Neither HISD nor its contractor allowed teachers access to the data or computer algorithms so that they could test or challenge the legitimacy of the scores, creating a ‘black box.’” http://kbros.co/2EvxjU9 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 48. Wells Fargo Credit Denial “Glitch” CNN: “Hundreds of people had their homes foreclosed on after software used by Wells Fargo incorrectly denied them mortgage modifications.” 2018-08-05 https://money.cnn.com/2018/08/04/news/companies/wells-fargo-mortgage-modification/index.html Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 49. . . . All this is not easy to “fix” Risk mitigation for data science implementations is relatively immature. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 50. Unintended Use Cases or Ethical Lapse? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 • Algorithm corrected for color bias, but can now be used for profiling • “Red Teaming” or “Abuse User Stories” can help • Unintended use cases call for a safety vs. a pure “assurance” framework
  • 51. “Lite” AI Security/Reliability Frameworks Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 https://motherboard.vice.com/en_us/article/bjbxbz/researchers-tricked-ai-into-doing-free-computations-it-wasnt-trained-to-do “Google researchers demonstrated that a neural network could be tricked into performing free computations for an attacker. They worry that this could one day be used to turn our smartphones into botnets by exposing them to images.”
  • 52. XAI: Explain, Interpret, Narrate, Translate The elusive holy grail of Transparency Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 53. Challenges of Interpretability Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “Adversarial ML literature suggests that ML models are very easy to fool and even linear models work in counter-intuitive ways.” (Selvaraju et al., 2016) • Reproducibility • Training sets including results of other analytics (e.g., FICO) • Provenance (think IoT) • Opaque statistical issues
  • 54. Transparency  What does it mean to be “transparent” about ethics?  What connection to IEEE /ACM / ASQ professional ethics?  ASQ: “Be truthful and transparent in all professional interactions and activities.” https://asq.org/about-asq/code-of-ethics  ACM: “The entire computing profession benefits when the ethical decision making process is accountable to and transparent to all stakeholders. Open discussions about ethical issues promotes this accountability and transparency.”  ACM “A computing professional should be transparent and provide full disclosure of all pertinent system limitations and potential problems. Making deliberately false or misleading claims, fabricating or falsifying data, and other dishonest conduct are violations of the Code.”  ACM “Computing professionals should establish transparent policies and procedures that allow individuals to give informed consent to automatic data collection, review their personal data, correct inaccuracies, and, where appropriate, remove data.”  ACM “Organizational procedures and attitudes oriented toward quality, transparency, and the welfare of society reduce harm to the public and raise awareness of the influence of technology in our lives. Therefore, leaders should encourage full participation of all computing professionals in meeting social responsibilities and discourage tendencies to do otherwise.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 55. Transparency & Professional Ethics  What connection to IEEE / ACM / ASQ professional ethics?  ASQ: “. . . Fairness . . . Hold paramount the safety, health, and welfare of individuals, the public, and the environment.”  ACM: see the four transparency excerpts quoted on the preceding slide. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 56. Transparency General Challenges  Some data, algorithms are intellectual property  Some training data includes PII  Predictive analytical models are often “point in time”  “Transparent” according to whose definition?  Should algorithms have “opt-in?” Can they?  Reidentification risks from the variety of big data in training sets  What quality spectra exist for transparency? A quality BoK for transparency? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
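The reidentification-risk bullet can be quantified crudely with a k-anonymity check over quasi-identifiers: the smallest group of records sharing the same quasi-identifier values gives the dataset's k. A sketch with invented records and column names:

```python
# Sketch of a k-anonymity check on a training set's quasi-identifiers.
# k == 1 means at least one person is uniquely identifiable from those
# columns alone. Records and columns below are invented for illustration.
from collections import Counter

def min_k(records, quasi_ids):
    # Group records by their quasi-identifier tuple; the smallest group
    # size is the dataset's k.
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

records = [
    {"zip": "10001", "age_band": "30-39", "outcome": 1},
    {"zip": "10001", "age_band": "30-39", "outcome": 0},
    {"zip": "10002", "age_band": "40-49", "outcome": 1},
]
print(min_k(records, ["zip", "age_band"]))  # 1 -> the 10002 record is unique
```

This is where the "variety" dimension bites: each additional joined column shrinks the groups and pushes k toward 1.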
  • 57. Explainability / Interpretability Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “[We need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.”
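One concrete, model-agnostic step toward the understandability called for above is permutation importance: scramble one input column and measure the accuracy lost. A toy sketch; the model and data are invented, and a real check would shuffle randomly and average over repeats, whereas this sketch rotates the column so the result is deterministic:

```python
# Sketch of permutation importance for a black-box classifier: the
# accuracy lost when a feature is scrambled approximates that feature's
# importance. Model and data are invented for illustration.

def permutation_importance(model, X, y, feature_idx):
    accuracy = lambda data: sum(model(row) == label for row, label in zip(data, y)) / len(y)
    base_acc = accuracy(X)
    col = [row[feature_idx] for row in X]
    col = col[1:] + col[:1]  # deterministic rotation stands in for a random shuffle
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return base_acc - accuracy(X_perm)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.4], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, 0))  # 1.0 -> feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0 -> feature 1 does not
```

Simple importance scores are not a full explanation, but they are the kind of check the quote argues must exist before failures can be predicted.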
  • 58. “Fairness Flow”: But will you share your ethics guidance? Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 https://www.cnet.com/news/facebook-starts-building-ai-with-an-ethical-compass/ “Bin Yu, a professor at UC Berkeley, says the tools from Facebook and Microsoft seem like a step in the right direction, but may not be enough. She suggests that big companies should have outside experts audit their algorithms in order to prove they are not biased. ‘Someone else has to investigate Facebook’s algorithms—they can’t be a secret to everyone,’ Yu says.” -Technology Review 2018-05-25
  • 59. Decision Support for Bias Detection Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” says Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard. -Technology Review, Will Knight, 2018-05-25
  • 60. Insights from More Mature Settings  AI Analytics for distributed military coalitions  “. . . Research has recently started to address such concerns and prominent directions include explainable AI [4], quantification of input influence in machine learning algorithms [5], ethics embedding in decision support systems [6], “interruptability” for machine learning systems [7], and data transparency [8]. “  “. . . devices that manage themselves and generate their own management policies, discussing the similarities between such systems and Skynet.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 S. Calo, D. Verma, E. Bertino, J. Ingham, and G. Cirincione, "How to prevent skynet from forming (a perspective from Policy-Based autonomic device management)," in 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Jul. 2018, pp. 1369-1376. [Online]. Available: http://dx.doi.org/10.1109/ICDCS.2018.00137
  • 61. Enterprise Level Risk  Impact on reputation  Litigation  Unintentionally reveal sources, methods, data / interrupted data streams (e.g., web)  Loss of consumer confidence, impact on public safety  Misapplication of internally developed models  Financial losses from data science #fail  “. . . as long as our training is in the form of someone lecturing about the basics of gender or racial bias in society, that training is not likely to be effective.” – Dr. Hanie Sedghi, Research Scientist, Google Brain Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 62. Corporate Initiatives  Environmental Social Governance  What does quality mean in enterprise sustainability?  What if there is only lip service to sustainability or quality?  Transparency within employee groups, departments, subsidiaries (See P7005)  Computing decisions that affect carbon footprint (green data centers, etc.) Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 63. ISO 26000  “ISO 26000 is the international standard developed to help organizations effectively assess and address those social responsibilities that are relevant and significant to their mission and vision; operations and processes; customers, employees, communities, and other stakeholders; and environmental impact.” Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 64. Related Work  NIST 800-53 Rev 5 and others, NIST Cloud Security  Building, Auto Automation ISO 29481, 16739, 12006  https://www.buildingsmart.org/about/what-is-openbim/ifc-introduction  Uptane  Ethics and Societal Considerations ISO 26000, IEEE P700x  DevOps Security IEEE P2675  Microsegmentation and NFV IEEE P1915.1  Safety orientation  Infrastructure as code  E.g., security tooling is code, playbooks are code Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 65. Selected Software Engineering References Bo Brinkman, Catherine Flick, Don Gotterbarn, Keith Miller, Kate Vazansky, and Marty J. Wolf. 2017. Listening to professional voices: draft 2 of the ACM code of ethics and professional conduct. Commun. ACM 60, 5 (April 2017), 105-111. DOI: https://doi.org/10.1145/3072528 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 66. Stepping on the quality scales Beyond ISO 9001 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 67. Quality Engineer as Camera Lens “So how big is the difference between a lens that costs a few hundred dollars, and one costing over a thousand dollars more? What kinds of gains does your money buy? Are the quality improvements substantial enough to be noticed by the untrained eye?” (Richard Baguley, Wired 2014-06-13) Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 https://www.wired.com/2014/06/hi-lo-dslr-lenses/ If a quality engineer more fully pursues her goals, would an enterprise’s moral compass be more finely tuned?
  • 68. Quality Touchpoints  Requirements development  Requirements adherence  Measurement frameworks  Traceability / integrity  Multiple overlapping frameworks (social, environmental, psychological, enterprise, regulatory. . . ) Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 69. Current Challenges  Stop-to-test paradigm often fails  Streaming data quality models are ahead of current quality teaching / practice  AI-for-quality  AI measurement  AI test generation  AI data / sensor simulation, scalability  Quality of XAI by Audience / Enterprise Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
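The streaming-quality point can be illustrated with a monitor that never "stops to test": Welford's online algorithm keeps a running mean and variance without storing the stream, and a z-score flags outliers on arrival. The thresholds and readings are illustrative:

```python
# Sketch of a streaming data-quality monitor using Welford's online
# mean/variance. The z-score threshold and warmup length are illustrative
# defaults, not recommendations.

class StreamMonitor:
    def __init__(self, z_threshold=3.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)
        self.z_threshold = z_threshold
        self.warmup = warmup

    def observe(self, x):
        """Return True if x is anomalous relative to the stream so far."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        # Update running statistics whether or not we alerted.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

mon = StreamMonitor()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 45.0]
alerts = [i for i, x in enumerate(readings) if mon.observe(x)]
print(alerts)  # [10] -- the 45.0 reading trips the z-score check
```

A production monitor would add windowing and drift handling, but even this shape shows why stop-to-test does not transfer to streams.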
  • 70. Agile development & quality engineering  “[Studies] indicate that there is a significant correlation between the inclusion of ethical tools in the process of planning in Agile methodologies and the achievement of improved performance in three quality parameters: schedule, product functionality and cost. “ Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 71. Selected Quality References H. Abdulhalim, Y. Lurie, and S. Mark, "Ethics as a quality driver in agile software projects," Journal of Service Science and Management, vol. 11, no. 1, pp. 13-25, 2018. [Online]. Available: http://dx.doi.org/10.4236/jssm.2018.111002 Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 72. Use Cases  Network Protection  Systems Health & Management (AWS metrics, billing, performance)  Education  Cargo Shipping  Aviation (safety)  UAV, UGV regulation  Regulated Government Privacy (FERPA, HIPAA, COPPA, GDPR, PCI etc.)  Healthcare Consent Models  HL7 FHIR Security and Privacy link Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 73. A Final Rationale Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 “What, me quality engineer worry?”
  • 74. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4 • Co-Chair NIST Big Data Public WG Security & Privacy subgroup https://bigdatawg.nist.gov/ • Chair Ontology / Taxonomy subgroup for IEEE P7000. Occasional participant in IEEE Standards WGs P7007, P7003, P7002, P7004, P7010 • IEEE Standard P1915.1 Standard for Software Defined Networking and Network Function Virtualization Security (member) • IEEE Standard P2675 WG Security for DevOps (member) • Current: Finance, large enterprise: supply chain risk, complex playbooks, many InfoSec tools, workflow automation, big data logging; risks include fraud and regulatory #fail • Authored chapter “Big Data Complex Event Processing for Internet of Things Provenance: Benefits for Audit, Forensics, and Safety” in Cyber-Assurance for IoT (Wiley, 2017) https://kbros.co/2GNVHBv • @knowlengr dark@computer.org knowlengr.com https://linkedin.com/in/knowlengr About Me
  • 75. Background Material NBDPWG Appendix A, Cloud Native SAFE Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 76. ACM Computing Classification Security & Privacy Topics  Database and storage security  Data anonymization and sanitation  Management and querying of encrypted data  Information accountability and usage control  Database activity monitoring  Software and application security  Software security engineering  Web application security  Social network security and privacy  Domain-specific security and privacy architectures  Software reverse engineering  Human and societal aspects of security and privacy  Economics of security and privacy  Social aspects of security and privacy  Privacy protections  Usability in security and privacy Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 86. CRISP-DM Process Model Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 87. Cloud Native Foundation Safe Access For Everyone (SAFE)  https://github.com/cn-security/safe Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4
  • 88. This deck is released under Creative Commons Attribution-Share Alike. Mark Underwood @knowlengr | Synchrony | Views my own | dark@computer.org | v1.4

Editor's Notes

  1. Reference https://www.washingtonpost.com/technology/2018/06/28/facial-recognition-technology-is-finally-more-accurate-identifying-people-color-could-that-be-used-against-immigrants/?noredirect=on&utm_term=.b639c243cd91