The document discusses several ethical issues related to artificial intelligence (AI) with reference to law. It explains how AI systems can exhibit bias when the data used to train them is biased, and raises privacy concerns about how much personal data is collected about individuals. It lays out steps that can be taken to address these issues: ensuring good data quality, using synthetic data, checking for bias, ensuring transparency and interpretability, including an ethical committee, rigorous testing, and keeping records. It describes examples of AI in the Indian legal system, such as a machine translation tool for court documents (SUVAS) and a research engine for judges (SUPACE). It concludes that while AI can help with administrative tasks, human judgment is still needed, and more data is required to improve accuracy.
3. - Introduction
- Artificial intelligence
- Artificial intelligence is the simulation of human intelligence
processes by machines.
- There is no widely accepted definition yet.
- Ethics are characterized as moral standards that govern
a person's conduct. In other words, ethics are the rules that
decide what is right and what is wrong.
4. Bias
AI functions on the material and data that are fed into the computer, and there
may be bias for or against a specific group or class of people in how that data is
input. For illustration, "recidivism" is the technical term for repeat offending; a
recidivism-prediction system trained on skewed data may produce the analysis that
only Black defendants will commit crimes. AI can thus be manipulated by feeding it
particular facts that generate bias and lead to unfair outcomes.
Privacy
There are certain things an individual may never want to discuss with the outer world, such as
health or family matters, which are personal to that individual; yet we are tracked everywhere
and everything about us is known to others.
Recently, Justice Mr. Subramanian said in a speech that "Today
Google knows more about you than your wife," implying that
your computer knows everything about you. As a result, an individual's privacy is being
invaded by gathering all of this data and then assessing the individual; whatever we allow to be
collected is being assessed by our computers.
These are some of the privacy concerns.
5. How to address these ethical issues?
Artificial intelligence is finding its way into ever more businesses. Even
though AI is developing and gaining popularity, many businesses still find it
challenging. It impacts our ordinary lives in more ways than we
can imagine. No company wants to be involved in a data or AI ethics scandal that could
potentially ruin its reputation, so it is important for companies and employees alike to
watch out for the ethical implications of AI on business practices.
There are various steps that organizations and the AI developers
implementing AI models can take to make sure bias does not remain in
the system. Bias cannot be completely eliminated, but there is a chance of
minimizing it through the following steps.
6. MAKE A CLEAR FRAMEWORK
One needs to have a clear framework that
everybody follows: not only the people who are
building the models but also the people who are
indirectly impacted. Hence, everyone should have a
distinct view of how the organization goes through
the activity of developing models and
implementing them in its systems.
7. IMPACT ON PEOPLE
Hazel highlighted some of the impacts when
AI predictions go wrong. When we
build an AI model we have to take a
risk-based approach. If the risk is higher,
then we should follow all the steps
mentioned above and conduct proper due
diligence. We should also have additional
controls to cope with the severe impacts.
8. ASSESS THE DATA
AI starts with data. When we are
building an AI model, the model is
meant to address a particular subject,
problem, or challenge. We have to
make sure that the data we choose are
relevant to the business problem that
we are trying to solve. We have to ask
a few questions about the data: are they
accessible, are they available, do they have
any constraints, and, finally, is their
quality good? Good-quality data reduce
preparation time and help models predict
accurately.
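The data-assessment questions above can be sketched as a simple completeness check. A minimal illustration in Python; the case records and field names are hypothetical:

```python
def assess_data(records, required_fields):
    """Report, per field, the fraction of records with a usable value."""
    report = {}
    for field in required_fields:
        present = sum(
            1 for r in records
            if r.get(field) not in (None, "", "N/A")  # treat these as missing
        )
        report[field] = present / len(records)  # completeness ratio, 0.0 to 1.0
    return report

# Hypothetical case records; the second one has a missing age.
cases = [
    {"offence": "theft", "age": 34, "prior_convictions": 1},
    {"offence": "fraud", "age": None, "prior_convictions": 0},
    {"offence": "theft", "age": 29, "prior_convictions": 2},
]

print(assess_data(cases, ["offence", "age", "prior_convictions"]))
# age is present in only 2 of 3 records, so its completeness is about 0.67
```

In practice a check like this would run over the real production dataset, together with the availability and constraint questions, before any model training begins.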
9. USE SYNTHETIC DATA
As we know, data are exceptionally
critical to AI, but it is not always the
case that we can get the data we want.
It was also highlighted previously that
if data are not of good quality, they
may not be useful and may be restricted.
So organizations are coming up with
synthetic data. Synthetic data are nothing
but artificially created data that retain
the traits, features, and characteristics
of the production data, without
using the production data themselves.
In this way we can use as much data
as we need and generate the diversity
we need in the data so that algorithms
can predict more accurately.
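As a minimal sketch of the idea in Python: generate synthetic values that retain the mean and spread of (hypothetical) production data without reusing any actual record. Real synthetic-data tools model far richer structure; this only illustrates the principle.

```python
import random
import statistics

def synthesize(production_values, n, seed=0):
    """Generate n synthetic values matching the mean and spread of the
    production data, without reusing any actual production record."""
    mu = statistics.mean(production_values)
    sigma = statistics.stdev(production_values)
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical production data: ages of defendants in past cases.
real_ages = [23, 31, 45, 29, 37, 52, 41, 26, 33, 48]
fake_ages = synthesize(real_ages, n=1000)

# The synthetic sample tracks the production statistics closely.
print(round(statistics.mean(real_ages), 1), round(statistics.mean(fake_ages), 1))
```

The synthetic records can then be used for training or testing in place of the sensitive production records.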
10. Check bias in the AI algorithm
We have to check whether bias has
consciously or unconsciously crept
into the system.
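One simple, commonly used check of this kind is to compare favourable-outcome rates across groups (a demographic-parity check). A minimal sketch in Python; the groups and decisions are hypothetical:

```python
def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome 1 = favourable.
    Returns the favourable-outcome rate per group."""
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + outcome
    return {g: favourable[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Difference between the highest and lowest group rates."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical bail decisions: (group, 1 = bail granted).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

print(selection_rates(decisions))  # group A: 0.75, group B: 0.25
print(parity_gap(decisions))       # 0.5, a large gap that warrants review
```

A large gap does not by itself prove unfairness, but it flags the system for the kind of human review the later steps describe.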
11. Transparency
It means that, when there is an issue with a decision, we
can backtrack, find out where the cause of
the issue was, and then address it.
Interpretability
It means we should be able to explain how
the model's decisions were made. The higher the interpretability
of a machine-learning model, the easier it is for somebody to understand
why specific judgments or forecasts were made.
If we reject an application, why did we reach that conclusion? We
must be able to provide reasons for it.
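For a simple linear scoring model, interpretability can be made concrete: each feature contributes its weight times its value, so every score can be decomposed into reasons. A minimal sketch in Python; the model, weights, and feature names are purely illustrative:

```python
def explain_linear(weights, bias, features):
    """Decompose a linear model's score into per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical risk-score model; weights and names are illustrative only.
weights = {"prior_convictions": 0.8, "age": -0.02, "employed": -0.5}
applicant = {"prior_convictions": 2, "age": 30, "employed": 1}

score, reasons = explain_linear(weights, bias=0.1, features=applicant)
print(round(score, 2))
for name, contribution in sorted(reasons.items(),
                                 key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.2f}")  # each line is a stated reason
```

The sorted list of contributions is exactly the kind of reasoning a reviewer can give when asked why a specific forecast was made.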
12. Include an ethical committee
We ought to have an
administrative body, made up of
independent specialists from
diverse areas, who come together,
review the AI system, and
question and approve the model
before it goes into production.
14. Test, test, and Retest
Once we receive the data, we
train the model with them. We
ought to be careful, when preparing
the data used to train the model,
that bias does not creep in once
more; we have to make sure of this
beyond any doubt. We need to make
sure that the data are diverse and
fully representative of the population
that the model is meant for.
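The representativeness check described above can be sketched as a comparison of group shares in the training sample against their shares in the population. A minimal illustration in Python with hypothetical figures:

```python
def representativeness(sample_groups, population_shares, tolerance=0.05):
    """Flag any group whose share of the training sample deviates from its
    population share by more than the tolerance."""
    n = len(sample_groups)
    flags = {}
    for group, expected in population_shares.items():
        observed = sample_groups.count(group) / n
        if abs(observed - expected) > tolerance:
            flags[group] = (observed, expected)  # (in sample, in population)
    return flags

# Hypothetical: the population is split 50/50, but the training sample
# is skewed 70/30 towards group A.
sample = ["A"] * 70 + ["B"] * 30
print(representativeness(sample, {"A": 0.5, "B": 0.5}))
# Both groups are flagged: 0.7 vs 0.5 for A, and 0.3 vs 0.5 for B
```

If any group is flagged, the data should be rebalanced or augmented before the model is retrained and retested.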
15. Records
Even though we have built and
implemented the model into
production, we should still keep a
record of every decision that is
made, and why it was made. If an
issue pops up we can always go
back and check how the decision
was made.
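Record-keeping of this kind is often implemented as an append-only audit log, one JSON line per decision. A minimal sketch in Python; the field names and values are illustrative:

```python
import datetime
import io
import json

def record_decision(log, model_version, inputs, decision, reason):
    """Append one decision, with its inputs and reason, as a JSON line."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "reason": reason,
    }
    log.write(json.dumps(entry) + "\n")

# In production the log would be a durable file or database; a StringIO
# keeps this sketch self-contained.
log = io.StringIO()
record_decision(log, "v1.2", {"prior_convictions": 0}, "bail_granted",
                "low risk score")

# Later, to investigate an issue, replay the log line by line.
for line in log.getvalue().splitlines():
    print(json.loads(line)["decision"])
```

Because each entry records the model version, the inputs, and the reason, an investigator can reconstruct exactly how any past decision was made.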
16. Be prepared
As we saw, bias cannot be removed completely,
but we can minimize it with the help of the
steps mentioned above. Even then, bias can
creep into production systems. So we need
to prepare and maintain a plan of action:
roll the models back, reach out to the people
impacted, and have something in place to
address the issues. These are some of the
general steps that the people who build the
algorithms, and their organizations, go through
rigorously to mitigate bias in their
algorithms as much as possible.
17. BUILDING A RESPONSIBLE AI
It means designing and building systems that act
in responsible ways towards human beings.
Two pillars of responsible AI are:
ETHICAL AI: The major ethical principles
of AI are accountability, inclusiveness, reliability,
and safety.
EXPLAINABLE AI: The major explainable principles of
AI are fairness, transparency, privacy, and security.
18. People
We might have processes, technologies,
and frameworks for everything, but without
people, nothing happens.
Organizations need to
train people to know what responsible AI
means to their unique organization.
Companies outline their limits to
know what is accepted vs what is not, and
further use sophisticated tools
and technology to stay in line with those
clearly stated missions.
19. SUVAS
SUPREME COURT VIDHIK ANUVAAD
SOFTWARE
In India, the registry of the top court launched a
program called SUVAS, a machine-assisted
translation tool trained using Artificial Intelligence,
especially designed for the judicial domain; it can
translate English judicial documents, including
orders and judgments, into nine vernacular
languages.
20. SUPACE (Supreme Court Portal
for Assistance in Court
Efficiency) is a research engine for
the benefit of judicial officers.
The idea is to make it easier for
the judges and judicial officers
to dispose of the appeals that
come up before them. This helps
in clearing the backlog.
21. Richard Susskind is someone who has predicted that, sooner or later, AI is going to throw all
legal professionals out of their jobs. According to him, there will be no vital need for legal
professionals; and when he was speaking of online courts, he stated that at present most
of the developing nations use online courts only for the production of the accused from jails or
the recording of expert evidence from persons who cannot come to the court.
Even now, he says that a day might come where a person who wants to approach the court might
submit his papers and have them processed automatically through AI. The judge, if it is
necessary, grants his order online. Every component is administered without human intervention.
In today's scenario, due to the pandemic, hearings are held online. As of now, AI is being used by
several law firms.
Hence, predictions are being obtained and judgments are being delivered. In India, though, we
are lagging behind in digitalization, which is the most important prerequisite for any AI to
be used. The Chief Justice of India stated in November 2019 that we would use Artificial
Intelligence for our administrative purposes to ease the process and purpose of administrative
justice.
22. CONCLUSION AND SUGGESTIONS
1. AI is good for righteous purposes and timely actions, and it should
be very good for mankind.
2. This is certainly a great start for an equity-tech future.
One day we might see AI assisting judges in making legitimate choices that ease the process of
administrative justice.
3. Finally, we can say that as this technology continues to develop, the threats and
opportunities also increase. A human rights-based approach needs to be embedded when
business engages in the design, development, and regulation of AI.
4. There is also a need to examine how individual and societal harms from AI can be addressed
through dedicated policies, strategies, and potential regulation.
5. I am not an avid fan of machines taking over from judges, as human thinking is different from
that of a machine.
6. AI will be more efficient only if we keep feeding it more and more data, which can improve its
accuracy rate.
7. The emotional part of an attorney's work, one that incorporates strategy, creative energy,
and influence, cannot be reduced to one or many AI programs.