In the wake of mass automation, a universal basic income may be the answer that low-income families and citizens are looking for. As automation increases across industries, the fear it induces in citizens is severe. From privacy concerns, through rogue-AI doomsday scenarios, to the more realistic worries of misused AI and lost jobs, pop-culture-led paranoia has shaken up the world. These concerns have to be dealt with, and tech companies and businesses need a robust moral framework for their decisions, to ensure that any negative externalities of implementing AI are mitigated to the maximum degree. Artificial intelligence is a great tool for optimizing businesses and making our world more efficient, but the moral imperative on all of us is to ensure that this happens side by side with human sustainability, not at its expense.
THE SOCIAL IMPACTS OF AI AND HOW TO MITIGATE ITS HARMS
Artificial intelligence (AI) has been a central focus of technological
development around the world for a few decades now. From the first
chess engine that defeated a human champion to today's systems capable
of far more complex tasks, AI has found applications in all areas of life.
From assisting humans in their personal lives to expanding the horizons of
business strategy, AI has revolutionized our world and will keep
leaving a long-lasting impact on it.
For all their benefits, automation and AI come with consequences that
many experts have deemed negative.
The conversation around this has been somewhat reductive; a deeper
dive into the ethical and moral dilemmas of implementing AI is of the
utmost importance.
When applying AI, it is particularly essential to consider its moral
implications. DeepMind, a company that develops and trains artificial
intelligence, has opened a research unit called "DeepMind Ethics and
Society."
The goal of this unit is to fund research on AI's ethical implications
concerning morality, values, accountability, transparency, and
economics.
Now, let's take a look at some of those negative externalities, some of
which are blown out of proportion, and how we can minimize the risks.
But before that, let's understand what artificial intelligence is.
What Is Artificial Intelligence?
Artificial intelligence is the ability of machines or computers to
reproduce aspects of human intelligence and behavior. It includes motor
functions, the ability to interpret sensory information, and the ability
to reason about data through logic.
In practice, this translates into the ability of a computer to perform
tasks related to movement, create symbolic or graphical
representations of data, and make decisions based on information.
It is important to understand what drives the functionality of an AI. On
the back-end is the training process of the machine, which is heavily
data-oriented.
The main difference between a sophisticated AI and a simple one lies in
the data sets each has been fed.
Mostly, this data comes from businesses, organizations, AI experts,
and individuals. Hence, the nature of the activities an AI performs
depends on the kind of data set it is trained with. Outside of that data,
an AI is simply software.
While this sounds intuitive, it is a fact often forgotten when
deliberating accountability for the ramifications of AI-backed
activities.
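This data dependence can be illustrated with a minimal sketch. The data and the deliberately trivial "model" below are hypothetical (any real system is vastly more complex), but the point carries over: the same training code, fed two different data sets, produces two different behaviors.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn the most common label seen for each input in the training data."""
    votes = defaultdict(Counter)
    for x, label in examples:
        votes[x][label] += 1
    return {x: counts.most_common(1)[0][0] for x, counts in votes.items()}

# Identical training code, two different (hypothetical) data sets.
dataset_a = [("sunny", "go out"), ("rain", "stay in"), ("sunny", "go out")]
dataset_b = [("sunny", "stay in"), ("rain", "stay in")]

model_a = train(dataset_a)
model_b = train(dataset_b)

print(model_a["sunny"])  # go out  -- the behavior comes from the data
print(model_b["sunny"])  # stay in -- same code, different data set
```

Everything this "model" does is a replay of patterns in its data set; outside that data it has no opinion at all, which is exactly the point made above.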
Social Impacts Of AI:
Biased Decision-Making
One of the biggest concerns with AI is bias in its decision-making. This
has been a problem in multiple fields of application, such as credit
allocation, criminal sentencing, and law-enforcement risk assessment.
In trials, AI-backed software has been shown to produce results
tainted with racial bias.
In other fields, such as screening candidates for hiring, there have
been concerns about sexist decision-making.
There are multiple other scenarios in which one can rationalize the
existence of bias within the decision-making process of an AI. On this
topic, there is a necessary clarification to be made.
By its nature, artificial intelligence is not biased. The software can
only be trained to exhibit these traits as a result of human
imperfection.
Think of it as a reflection of the worst tendencies of a human being.
The data set that informs an AI can be biased, which is a moral
reflection of its creators.
For example, let's imagine a business that develops its AI based on
its own data, previous decision-making patterns, and history.
If this AI, once developed, shows signs of discrimination, it is merely
reflecting the data set and the bias present in the previous decisions
made by humans. The AI is trained on this data, learns its patterns,
and executes its functions accordingly.
At best, artificial intelligence is only as flawed as its creators. The
question then arises: where does accountability come in?
Here, the logically sound action is to hold the creators accountable for
the data set. Accountability measures can be put in place to assess the
data set before its implementation.
Data scientists and experts can intervene to examine the data with its
creators and question the logic behind the decisions used to train
the AI. Understanding the root of the patterns that train the AI, in
order to classify them as valid or invalid, is a critical step.
For this, steps can be taken for both internal and external
investigations within teams and businesses. This ensures quality
assurance and accountability.
For a technology as impactful as AI, the measures taken in its
application need to be incredibly rigorous.
This bias can be the result of thousands of people and their individual
decisions. While it is hard to fix the mindset of each of these
individuals, fixing the problem within an AI and its data is an easier
task.
This doesn't take away from the effort needed to change the overall
mindset of the world, but it does provide a viable solution to the
problem at hand.
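One concrete form such a data-set audit can take is a selection-rate check. The sketch below uses purely hypothetical numbers; the 0.8 threshold is the common "four-fifths rule" used as a screening heuristic in US employment-discrimination analysis, applied here to historical decision data before it is used for training.

```python
def selection_rates(decisions):
    """decisions: (group, approved) pairs from the historical decision data."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical approval history for two demographic groups.
history = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]

rates = selection_rates(history)   # group A: 2/3 approved, group B: 1/3
ratio = disparate_impact(rates)    # 0.5 -- below the 0.8 rule of thumb
print(ratio)
```

A data set that fails such a check would be flagged for the kind of internal or external review described above; the check is a screen that prompts questions, not proof of discrimination on its own.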
Unemployment As A Result Of Automation
Since its advent, AI has been expected to replace human beings in the
workforce. This is particularly true for labor-intensive jobs. Here,
there are two critical considerations.
First, automation doesn't always mean a reduction in employment.
With the invention of the ATM, it was predicted that bank tellers
would become redundant; automation was seen as a threat to their
position.
Yet once ATMs were installed across the globe, the overall
employment of bank tellers rose.
While the number of tellers needed per branch did drop, the reduction
in operational costs for banks increased the demand for more branches
and, therefore, for tellers within the industry.
This trend can be observed within multiple labor jobs such as cashiers
and paralegals across the US and the world.
It is, however, essential to understand that a loss of jobs is a possible
scenario. Within the trucking industry, the introduction of self-driving
trucks will result in unemployment. On the other hand, it also reduces
the chances of accidents and losses.
This is an ethical trade-off that needs to be dealt with in both the
short and the long term.
Based on this, the employment trend will shift from labor-intensive
roles toward more creative or managerial positions and technical
experts within industries.
The way forward is to rethink how we educate our younger
generations so they are capable of sustaining themselves in an evolving
market.
The takeaway here is that the blanket idea of automation always
resulting in a loss of jobs is premature and myopic. As AI allows
businesses to reduce operational costs, it also increases their ability
to hire and spend more.
Unemployment is a major ethical concern within the field of AI, and
the need of our time is to give the younger generations the right skill
set and tools to adapt. As AI allows businesses to scale rapidly, the
demand for some jobs will increase while demand for others will fall.
The fears, however, are exaggerated because they generalize from a
few specific industries; those industries need to be identified before
we can move toward solving the problem.
Fake News
Social media platforms and search engines rely on AI to provide more
personalized experiences to users.
They use search trends and behavioral patterns to prioritize one piece
of information over another. This comes in the shape of news,
advertisements, and trending topics.
One of the recent issues with this AI-backed content prioritization
has been the promotion of fake news.
This is a systemic issue perpetuated by artificial intelligence, with
severe consequences. Fake news has become a grave problem today,
with misinformation spreading around the world under little strict
accountability.
Even on this front, the two obvious solutions are both harder to
implement than they sound.
The first is self-regulation by social media platforms. Internal
accountability measures to ensure that fake news is taken down are
already in place, and tech giants such as Facebook and Twitter are
taking the necessary steps.
The second, training AI to separate real information from fake news,
is not a simple task. Data verification based on source availability is a
work in progress.
While certain statistical lies can be caught, fake news that lies in
gray areas is harder to detect. This is precisely why "alternative facts"
have become such a widely accepted phenomenon.
Training AIs to detect such deviations from the truth can often end up
reflecting the personal beliefs of the AI's creators.
Standardizing this process is a task that requires precision and an
understanding of data science beyond the reach of non-specialists.
To this end, fake news is a dangerous tool with the potential to
misinform people and damage the very fabric of democracy: informed
decision-making. The positive here is the widely shared mobilization
to curb it, with businesses realizing their social responsibility to
ensure their platforms are not misused.
Wealth Distribution Within A Post-Labor World:
One of the outcomes of a massively automated market is the
accumulation of wealth. If the number of employees gets reduced in
businesses, the cost of operations will go down.
This will mean that a higher percentage of revenue will end up being
the profit. This is most likely going to increase the wealth gap. The
11. middle-class is seen as the biggest victim in this movement towards
automation.
So how do we distribute wealth in this post-labor world? Well, there is
one way to mitigate this harm.
As operational costs go down, profits for corporations will rise. The
current trend suggests a strong move towards what we like to call a
"tech tax." As businesses implement automation, a tax targeting the
loss of employees can be applied.
The proceeds of this tax would be used to strengthen the government
safety net and provide for the retraining of displaced employees.
There is no doubt that automation will benefit businesses at an
unprecedented level. The next best step is to ensure that it happens
alongside helping those who lose their jobs, rather than at their
expense.
This retraining could help employees find new jobs, gain more relevant
skill sets, and ultimately sustain their lives.
Furthermore, this doesn't negate the benefits for businesses. A tax on
even a fraction of the additional revenue businesses generate can
alleviate these problems.
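As a back-of-the-envelope sketch of how such a levy could work (the 10% rate and the payroll figure are purely illustrative assumptions, not a policy proposal):

```python
def tech_tax(payroll_saved, tax_rate_pct=10):
    """Hypothetical levy: a percentage of the payroll a firm saves through
    automation is redirected into a retraining / safety-net fund."""
    return payroll_saved * tax_rate_pct // 100

# A firm automates roles that previously cost $2,000,000 a year in wages.
saved = 2_000_000
fund = tech_tax(saved)   # $200,000 goes to the retraining fund
kept = saved - fund      # $1,800,000 of the savings stays with the business
print(fund, kept)
```

Even at this modest rate the business keeps 90% of its automation savings, which is the point above: the tax alleviates the harm without negating the benefit.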
A more extreme, but not impossible, logical extension of this principle
is a world based on a universal basic income (UBI). There seems to be
a growing trend of high-profile figures accepting this as the long-term
plan.
“There is a pretty good chance we end up with a universal basic
income, or something like that, due to automation,” SpaceX
and Tesla boss Elon Musk told CNBC in 2016.
Andrew Yang, a Democratic presidential candidate, has also pressed
this issue in his campaign with his plan for a "freedom dividend."
For a future that is less dependent on human labor, an alternative is
required, and safety nets and retraining seem to be the most intuitive
solutions. Until a fresh perspective is vocalized, this appears to be the
general direction the world might take.
About The Author
SHAH ANAS
Shah Anas is a Business Development Executive who believes in the
responsibility of businesses to give back to society. Away from the
office, Shah loves to talk about sports and politics and is always ready
to post a long status on trending topics.