1. As autonomous systems make more complex decisions with less human oversight, assigning liability becomes difficult, because machines cannot be held legally responsible in the way humans can.
2. Autonomous systems make decisions based on data and programmed behaviors, yet they cannot anticipate every possible outcome or scenario, and their ability to learn introduces further unpredictability.
3. For autonomous systems to be implemented widely, laws may need to be updated to address liability for systems whose behavior is not directly traceable to their original programming, as current laws may not clearly apply to autonomous decision making.
PIPL - So I got it wrong! Want to make something of it?
Dr. Sanjeev B Ahuja - Transaction Advisory (Strategy & Operations) - Due Diligence, Risk Assessment, Integration and Scale-up
So I got it wrong! Do you want to make something of it?
Liability issues when using autonomous decision making systems
Over the decades, technologists and application designers have used IT-enabled systems to improve efficiency, lower risk and cut costs through automation across every industry. In doing so, they codified a range of “intelligent” behaviors into computer programs, from low-level process steps (e.g., displaying a customer’s data when he or she is on the phone) to higher-level decision processes (e.g., problem alerts, diagnostics, and remediation).
What has changed?
Cognitive Automation refers not just to the automation of a process but, specifically, to a system that emulates a set of mental processes based either on “knowing” something or on “perceiving” something, which provides a basis for taking action towards the achievement of a goal (i.e., deriving value). It uses a priori knowledge about the data being processed, or information that emerges from processing it, to arrive at conclusions that are either unknown or do not manifest in the data itself.
For example, whereas a specific person’s name is data, knowing that people have a last name is a priori knowledge; it is the basis for concluding that persons sharing the same last name belong to the same family and, further, allows one to trace a person’s genealogy.
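The distinction between data and a priori knowledge can be sketched in a few lines of Python. The records and the family-grouping rule below are illustrative assumptions, not part of the original text:

```python
from collections import defaultdict

# Data: individual names, nothing more.
people = ["Asha Sharma", "Ravi Sharma", "Meera Iyer", "Karan Iyer", "Lena Voss"]

# A priori knowledge: every person has a last name, and a shared last
# name is treated here as evidence of a shared family.
families = defaultdict(list)
for full_name in people:
    last_name = full_name.split()[-1]
    families[last_name].append(full_name)

print(dict(families))
# -> {'Sharma': ['Asha Sharma', 'Ravi Sharma'],
#     'Iyer': ['Meera Iyer', 'Karan Iyer'],
#     'Voss': ['Lena Voss']}
```

The grouping rule is nowhere in the data itself; it is knowledge the designer brought to it, which is exactly the point the paragraph makes.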
Similarly, whereas the amount of money households spend on groceries each month is data, when that data is overlaid on a map of a city, values within a certain range may appear to come together around specific locations. This perception of clustering is an emergent fact; it is the basis for inferring spending patterns across customer demographics.
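A minimal sketch of how such an emergent fact might be surfaced, assuming invented household coordinates and spend amounts (a real analysis would use a proper clustering method; coarse grid-binning is used here only to show that the cluster is a property of the collection, not of any single record):

```python
from collections import Counter

# Hypothetical (house_x, house_y, monthly_grocery_spend) records,
# invented for illustration.
households = [
    (1.0, 1.1, 820), (1.2, 0.9, 790), (0.9, 1.3, 805),  # near (1, 1)
    (5.1, 5.0, 310), (4.9, 5.2, 295),                   # near (5, 5)
    (9.0, 2.0, 550),                                    # isolated
]

# Bin high spenders onto a coarse grid and see where they pile up.
# No single record says "there is a cluster here".
high_spend_cells = Counter(
    (round(x), round(y)) for x, y, spend in households if spend > 700
)
clusters = [cell for cell, count in high_spend_cells.items() if count >= 2]
print(clusters)  # -> [(1, 1)]: a high-spend cluster emerges around that cell
```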
Structured knowledge and data patterns are routinely used by humans and by systems that operate (semi-)autonomously with little or no human intervention. As we move from one end of the spectrum, where humans make the decisions, to the other end, where systems autonomously make those same decisions, the assignment of responsibility gets obscured.
Often these decisions are based on incomplete information, aggregated data, or simply business and common-sense heuristics; this can lead to incorrect conclusions or, worse, actions that may cause grievous harm. In a semi- or fully autonomous system, it is unclear who is accountable for the final outcomes; it cannot be the machine - the system or its programs.
Whereas “intelligent” systems can demonstrate predictive behavior, they must not be construed as clairvoyant. Self-regulating autonomous systems that can not only assess the quality of the data they process but also determine the nature of the outcomes produced from various combinations of it will require a whole other class of intelligence. Systems that can provide a measure of the relevance of their outcomes do exist, but they are not, in general, able to judge whether a given outcome might lead to adverse consequences.
An autonomous vehicle faced with the option of falling off a bridge into the river or hitting an oncoming car must choose between injuring a human being and protecting itself; it is one scenario where Asimov’s Three Laws of Robotics come into conflict. A robot, therefore, will either have to be explicitly programmed to react correctly in every possible conflict scenario, or be relied upon to reach a conclusion on its own based on some kind of learning paradigm. Even if one were to somehow program the idea of “contextually optimal”, the vehicle robot would still have to be programmed to choose the least harmful of the potential outcomes. As with us, there will be times of ambiguity when it commits a fatal error of judgment. Who would then be liable?
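Mechanically, "choose the least harmful outcome" is trivial to code; the actions and harm scores below are invented, and assigning such scores is precisely the hard, contested part:

```python
# A deliberately simplistic sketch of the "least harmful outcome" idea.
# Each candidate action carries a hypothetical harm score; the
# controller just picks the lowest. Everything difficult - estimating
# harm, weighing whose harm counts - is hidden inside those numbers.

def least_harmful(actions):
    """Pick the action with the lowest estimated harm score."""
    return min(actions, key=lambda a: a["harm"])

scenario = [
    {"action": "swerve_off_bridge", "harm": 0.9},  # likely fatal to occupant
    {"action": "hit_oncoming_car",  "harm": 0.7},  # injures the other driver
    {"action": "emergency_brake",   "harm": 0.3},  # may not avoid collision
]
print(least_harmful(scenario)["action"])  # prints "emergency_brake"
```

The one-line `min` makes the point: the liability question is not about the selection logic but about who authored, and who answers for, the harm estimates it consumes.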
With the increasing ubiquity and popularity of ‘Big Data’ analytics and ‘AI’ technologies enabling autonomous processing for business insights, knowledge management, data analytics, machine learning, automated business processes, and robotic decision making, the issues of liability, risk allocation, and the building of risk premiums into the pricing of service contracts are quickly coming to the fore. In situations where systems autonomously share sensitive data, lose control of who accesses that data, or become non-compliant in one way or another, we enter a legal grey area.
How autonomous can we afford to make our “intelligent” systems?
Even the most well-engineered systems can exhibit unanticipated behavior, produce unintended outcomes, or simply fail. With autonomous and intelligent systems, i.e., systems that can either process data according to scripted behavior or acquire new abilities on their own from past experience, the extent of that uncertainty grows dramatically.
In the case of autonomous processing systems, unanticipated behavior may result if the designer of the system is unable to comprehensively codify a response for every possible combination of input data. In the case of autonomous learning systems, however, even given a (finite) set of input data combinations, the codified behavior itself will change over time; the system could behave unpredictably at any moment, albeit as its learning algorithm dictates. The lack of an obvious means to correlate the eventual behavior of an intelligent system with the codification of its learning ability makes it impossible to link intention and consequence; it could be a legal quagmire.
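The distinction between the two kinds of system can be sketched in a few lines; the class names, thresholds, and update rule below are invented for illustration:

```python
# Contrast between an autonomous *processing* system (fixed, codified
# rules) and an autonomous *learning* system (rules that drift with
# experience). The same input can yield different decisions from the
# learner at different points in time.

class ProcessingSystem:
    """Behavior is fully codified: same input always yields the same decision."""
    def decide(self, risk_score):
        return "reject" if risk_score > 0.5 else "approve"

class LearningSystem:
    """Behavior drifts: each observed outcome nudges the decision threshold."""
    def __init__(self):
        self.threshold = 0.5
    def decide(self, risk_score):
        return "reject" if risk_score > self.threshold else "approve"
    def feedback(self, risk_score, was_bad_outcome):
        # Crude online update: bad outcomes make the system more cautious.
        self.threshold += -0.1 if was_bad_outcome else 0.02

fixed, adaptive = ProcessingSystem(), LearningSystem()
print(fixed.decide(0.45), adaptive.decide(0.45))  # both approve initially
adaptive.feedback(0.45, was_bad_outcome=True)     # a 0.45 case went badly
print(fixed.decide(0.45), adaptive.decide(0.45))  # the learner now rejects
```

The processing system's behavior can, in principle, be audited against its source code; the learner's decision for the same input depends on its entire history of experience, which is where the link between intention and consequence breaks down.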
Liability provisions would, at a minimum, have to take into account applicable industry law, civil liability for injuries, criminal law related to intentional harm, product liability, and data protection. In areas like medical diagnosis, financial advisory, autonomous vehicles, IoT applications, etc., it may not be possible to naturally extend the interpretation of existing law. So that products and services using autonomous processing and decision making do not fall into a legal grey area, or end up prohibited after the fact because they turn out to be illegal, far-reaching changes to existing law might be required.
In practical terms, data/knowledge/solution architects have to be mindful that the underlying basis of analyses or decisions made autonomously by the system could raise questions about their legitimacy, justifiable rationale, fairness, and non-discriminatory behavior.
It’s a fine balancing act
The triad of ‘Value’, ‘Risk’ and ‘Liability’ can, however, strike harmony in various automation scenarios. Whether it does depends on a multitude of factors, e.g., whether the system is used for interpreting available data, inferring new information, or predicting future outcomes, and whether its behavior is predictable or a ‘malfunction’ could potentially result in irrecoverable harm.
The challenge, then, is to find the acceptable balance between value, risk, and liability, one that not only makes the process worthwhile but also ensures that the ensuing risk and liability are justifiable from a business perspective.
Whilst providers have gotten away with disclaimers of liability as part of their standard terms in the past, with platforms that purport to be “intelligent” such disclaimers are unlikely to be sufficient. This is especially the case within regulated sectors, e.g., financial, insurance, life sciences, utilities, etc., but risk exposure from using robotic systems and processes is pervasive, from simple data analysis to the smart cities of the future - intelligent infrastructure, autonomous transportation, and efficient distribution and storage of energy.
Additionally, where such analysis leads a business into making decisions that have an adverse outcome or, worse, impact the well-being of a consumer, the consequential liabilities can be far-reaching.
Much as Cloud-based service providers are taking on their share of risk and liability, modifying their traditionally one-sided, supplier-friendly terms and conditions to be more conducive to their clients’ compliance obligations, so too will the Cognitive Automation industry eventually have to step up and take appropriate accountability for the outcomes of using its products and services.