1. As autonomous systems make more complex decisions with less human oversight, determining liability becomes unclear, as machines cannot be held legally responsible in the same way humans can.
2. Autonomous systems make decisions based on data and programmed behaviors, but cannot anticipate all possible outcomes or scenarios, and their ability to learn introduces greater unpredictability.
3. For autonomous systems to be implemented widely, laws may need to be updated to address liability for systems whose behavior is not directly linked to their original programming, as current laws may not clearly apply to autonomous decision making.
PIPL - So I got it wrong! Want to make something of it?
Dr. Sanjeev B Ahuja - Transaction Advisory (Strategy & Operations) - Due Diligence, Risk Assessment, Integration and Scale-up
Liability issues when using autonomous decision making systems
Over the decades, technologists and application designers have used IT-enabled systems to improve efficiency, lower risk and save costs through automation across every industry. In doing so, they codified a range of "intelligent" behaviors into computer programs, from low-level process steps (e.g., displaying a customer's data when they are on the phone) to higher-level decision processes (e.g., problem alerts, diagnostics, and remediation).
What has changed?
Cognitive Automation refers not just to the automation of a process but specifically to a system that emulates a set of mental processes, based either on "knowing" something or on "perceiving" something, which provide a basis for taking action towards the achievement of a goal (i.e., deriving value). It utilizes a priori knowledge about the data being processed, or information that emerges from processing it, to arrive at conclusions that are either unknown or do not manifest in the data itself.
For example, whereas a specific person's name is data, knowing that people have a last name is a priori knowledge; it is the basis for concluding that persons sharing the same last name may belong to the same family and, further, allows one to trace a person's genealogy.
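The distinction between data and a priori knowledge can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the names are invented): the rule "a full name ends in a last name" is never stated in the data itself, yet applying it yields a conclusion the raw records do not contain.

```python
from collections import defaultdict

# Data: specific people's names (hypothetical examples).
people = ["Ada Lovelace", "Alan Turing", "William Lovelace", "Sara Turing"]

# A priori knowledge: every full name ends in a last name. This rule is
# brought to the data from outside; the data never states it.
def last_name(full_name):
    return full_name.split()[-1]

# Conclusion derived by combining data with a priori knowledge:
# persons sharing a last name may belong to the same family.
families = defaultdict(list)
for person in people:
    families[last_name(person)].append(person)

print(dict(families))
# e.g. {'Lovelace': ['Ada Lovelace', 'William Lovelace'], ...}
```

The grouping itself is trivial; the point is that the `last_name` rule is knowledge applied *to* the data, not knowledge *in* the data.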
Similarly, whereas the average amount of money households spend on groceries each month is data, when those values are overlaid on the map of a city, values within a certain range may appear to come together around specific locations. This perception of clustering is an emergent fact; it is the basis for drawing conclusions about spending patterns across customer demographics.
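An emergent fact of this kind can be made concrete with a toy sketch. The coordinates below are invented, and real systems would use a proper clustering algorithm; here, simply snapping each household to a grid cell and counting occupancy is enough to surface groupings that no individual record shows.

```python
from collections import Counter

# Hypothetical data: (x, y) map coordinates of households whose average
# monthly grocery spend falls within a chosen range.
households = [(1.1, 1.0), (0.9, 1.2), (1.0, 0.8),   # near (1, 1)
              (5.2, 5.1), (4.8, 4.9),                # near (5, 5)
              (9.0, 2.0)]                            # isolated point

# Snap each household to a 1x1 grid cell and count occupancy; cells with
# several households reveal "emergent" clusters absent from any one record.
cells = Counter((round(x), round(y)) for x, y in households)
clusters = {cell: n for cell, n in cells.items() if n >= 2}
print(clusters)  # {(1, 1): 3, (5, 5): 2}
```

No single data point says "there is a cluster at (1, 1)"; the conclusion only manifests when the records are processed together.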
Structured knowledge and data patterns are routinely used both by humans and by systems that operate (semi-)autonomously with little or no human intervention. As we move from one end of the spectrum, where humans make the decisions, to the other end, where systems make those same decisions autonomously, the assignment of responsibility becomes obscured.
Often these decisions are based on incomplete information, aggregated data, or simply business and common-sense heuristics; this can potentially lead to incorrect conclusions or, worse, actions that may cause grievous harm. In a semi- or fully autonomous system, it is unclear who is accountable for the final outcomes; it cannot be the machine, i.e., the system or its programs.
Whereas "intelligent" systems can demonstrate predictive behavior, they must not be construed as being clairvoyant. Self-regulating autonomous systems that can not only assess the quality of the data they process but also determine the nature of the outcomes that various combinations of that data may produce will require a whole other class of intelligence. Systems that can provide a measure of the relevance of their outcomes are indeed available, but they are not in general able to judge whether a given outcome might lead to adverse consequences.
An autonomous vehicle faced with the option of falling off a bridge into the river or hitting an oncoming car must make a choice between injuring a human being and protecting itself; it is one scenario where Asimov's Three Laws of Robotics come into conflict. A robot, therefore, will either have to be explicitly programmed to react correctly in every possible conflict scenario, or else be relied upon to reach a conclusion on its own based on some kind of learning paradigm. Even if one were to somehow program the idea of "contextually optimal", the vehicle robot would still have to be programmed to choose the least harmful of the potential outcomes. As with us, there will be times of ambiguity when it commits a fatal error of judgment. Who would then be liable?
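Reduced to code, "choose the least harmful outcome" is just cost minimization, which is a sketch worth seeing precisely because of how little it resolves. The actions and harm scores below are entirely invented; no real vehicle can assign such numbers with certainty, and that gap between the numbers and reality is where the liability question lives.

```python
# Minimal sketch: "least harmful" as cost minimization over a set of
# candidate actions. All options and harm estimates are hypothetical.
def least_harmful(options):
    """options: {action: estimated_harm}; lower harm is preferred."""
    return min(options, key=options.get)

scenario = {
    "swerve_off_bridge": 90,  # likely destroys the vehicle, may injure occupant
    "hit_oncoming_car": 95,   # likely injures other humans
    "emergency_brake": 40,    # may still end in a collision
}
print(least_harmful(scenario))  # emergency_brake
```

The `min` call is trivially correct; whether the harm estimates were correct, and who answers for them when they were not, is the open question the text raises.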
With the increasing ubiquity and popularity of 'Big Data' analytics and 'AI' technologies enabling autonomous processing for business insights, knowledge management, data analytics, machine learning, automated business processes and robotic decision making, the issues of liability, risk allocation, and the building of risk premiums into the pricing of service contracts are quickly coming to the fore. In situations where systems autonomously share sensitive data, lose control over who accesses that data, or otherwise become non-compliant, we enter a legal grey area!
How autonomous can we afford to make our “intelligent” systems?
Even the most well-engineered systems can exhibit unanticipated behavior, produce unintended outcomes, or simply fail. With autonomous and intelligent systems, i.e., systems that can process on their own based on scripted behavior, or acquire new abilities from past experience, the extent of uncertainty will grow exponentially.
In the case of autonomous processing systems, unanticipated behavior may result if the designer of the system is not able to comprehensively codify a response for all possible combinations of input data. In the case of autonomous learning systems, however, even given a (finite) set of input data combinations, the codified behavior itself will change over time; the system could behave unpredictably at any time, albeit as its learning algorithm dictates. The lack of an obvious means to correlate the eventual behavior of an intelligent system with the codification of its learning ability makes it impossible to link intention and consequence; it could be a legal quagmire.
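The contrast between the two kinds of system can be sketched in miniature. Everything below is hypothetical (the events, actions, and the crude weight-update rule): the processing system is a fixed table whose gaps are the designer's uncodified cases, while the learning system's mapping from input to action drifts as rewards arrive, so the same input can yield different actions at different times.

```python
# Autonomous *processing*: behavior fully codified up front. The same
# input always yields the same output; anything outside the table is an
# uncodified case the designer failed to anticipate.
RULES = {"temp_high": "open_vent", "temp_low": "close_vent"}

def processing_system(event):
    return RULES.get(event, "no_codified_response")

# Autonomous *learning*: the input-to-action mapping itself changes with
# experience (a deliberately crude reward-weighting sketch).
class LearningSystem:
    def __init__(self):
        self.weights = {"open_vent": 0.5, "close_vent": 0.5}

    def act(self, event):
        # Pick the currently highest-weighted action.
        return max(self.weights, key=self.weights.get)

    def learn(self, action, reward):
        self.weights[action] += reward  # behavior drifts as rewards arrive

print(processing_system("temp_high"))  # open_vent, every time
learner = LearningSystem()
before = learner.act("temp_high")
learner.learn("close_vent", 1.0)
after = learner.act("temp_high")  # same input, possibly different action
print(before, after)
```

Nothing in the learner's final behavior is traceable to a line the designer wrote; it follows from the reward history, which is the correlation gap the text describes.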
Liability provisions would, at a minimum, have to take into account applicable industry law, civil liability for injuries, criminal law related to intentional harm, product liability, and data protection. In areas like medical diagnosis, financial advisory, autonomous vehicles, IoT applications, etc., it may not be possible to naturally extend the interpretation of existing law. So that products and services using autonomous processing and decision making do not fall into a legal grey area, or end up prohibited after the fact because they are illegal, far-reaching changes in the existing law might be required.
In practical terms, data, knowledge and solution architects have to be mindful that the underlying basis of the analyses or decisions the system makes autonomously could potentially raise questions about its legitimacy, justifiable rationale, fairness, and non-discriminatory behavior.
It’s a fine balancing act
The triad of 'Value', 'Risk' and 'Liability' can, however, be brought into harmony in various automation scenarios. Much depends on a multitude of factors, e.g., whether the system is used for interpreting available data, inferring new information, or predicting future outcomes, and whether its behavior is predictable or a 'malfunction' could potentially result in irrecoverable harm.
The challenge, then, is to find that acceptable balance between value, risk and liability which not only makes the process worthwhile but also ensures that the ensuing risk and liability are justifiable from a business perspective.
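One crude way to picture the balancing act is as a score over the triad. The function, weights and threshold below are entirely invented for illustration; the point is only that each automation scenario is judged over all three factors together, not on value alone.

```python
# Hypothetical scoring of the value/risk/liability triad. A scenario is
# "worthwhile" only if its value outweighs the combined risk and
# liability it creates; all numbers are invented for illustration.
def justifiable(value, risk, liability, threshold=0.0):
    return (value - risk - liability) > threshold

scenarios = {
    "interpret available data": dict(value=5, risk=1, liability=1),
    "infer new information":    dict(value=7, risk=3, liability=3),
    "predict future outcomes":  dict(value=9, risk=6, liability=5),
}
for name, s in scenarios.items():
    verdict = "worthwhile" if justifiable(**s) else "not justifiable"
    print(f"{name}: {verdict}")
```

Under these invented numbers, the highest-value scenario is also the one that fails the test, which mirrors the text's point that value alone cannot carry the decision.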
Whilst providers have gotten away with disclaimers of liability as part of their standard terms in the past, with platforms that purport to be "intelligent" such disclaimers are unlikely to be sufficient. This is especially the case within the regulated sectors, e.g., financial services, insurance, life sciences, utilities, etc., but risk exposure from using robotic systems and processes is pervasive, from simple data analysis to the smart cities of the future: intelligent infrastructure, autonomous transportation, efficient distribution and storage of energy, and so on.
Additionally, where such analysis leads a business into making decisions that have an adverse outcome or, worse, impact the well-being of a consumer, the consequential liabilities can be far-reaching.
Much as Cloud-based service providers are taking on their share of risk and liability, modifying their traditionally one-sided, supplier-friendly terms and conditions to align them more closely with their clients' compliance obligations, so too the Cognitive Automation industry will eventually have to step up and take appropriate accountability for the outcomes of using its products and services.