1. WHY THE FUTURE DOES NOT NEED US
• In this module, we will look into dangers that humanity could experience when science and technology are unchecked by society's standards. We will primarily draw from the views of Bill Joy, then Chief Scientist at Sun Microsystems, as expressed in his article "Why the Future Doesn't Need Us" in the April 2000 issue of Wired magazine, regarding new technologies and the issues that revolve around them. We hope to answer the question posed by the title of his article and articulate why he thinks that the future does not need us.
3. • Is there a possibility for a future where humans will cease to exist and will be replaced by robots? Why or why not?
• Can you even call the earth a "world" after human extinction?
• Do you think technology can eventually take control of humanity?
• Is there a possibility for a future where humans will cease to exist because of medical breakthroughs that resulted in drug-resistant viruses?
• Do you think that this occurrence might be prevented? If so, how?
4. WHY THE FUTURE DOES NOT NEED US
• In his article "Why the Future Doesn't Need Us", Joy expressed his concerns about the emergence of new technologies, their consequences, and their possible dangers. He said that these problems may emerge because of the complexity of our systems and our attitude towards science and technology. In particular, he was concerned with three 21st-century technologies: genetic engineering, nanotechnology, and robotics (GNR) (these technologies will be discussed in more detail as the course progresses). He recognized the appeal of the developments in these new technologies and the promise that they bring humanity.
• However, though the advantages of these technologies are undeniable, the dangers they present and the issues that they raise are also very concerning and alarming. They raise moral or ethical issues and safety concerns, and they might be used to destroy humanity.
5. • He cited the work of Theodore Kaczynski, the Unabomber Manifesto, to illustrate the dangers of these technologies.
• In that work, Kaczynski said that there are two possibilities that could occur once intelligent machines that can eliminate human effort in doing work are developed: either we let these machines make decisions, or we retain control.
• Either way, the result would be the same: the end of humanity and the loss of the purpose of life.
• Joy asserted that biological species will lose against competition with new technologies. Gradually, but eventually, new technologies will take over.
• Though dangerous technologies have been introduced before, such as nuclear, biological, and chemical technologies, GNR is much more worrying.
• New technologies can bring new types of accidents and abuses, and they are accessible to small groups.
• He asserted that we have not learned the lessons of the past, citing the creation and use of the atomic bomb and its consequences as an example.
• He worried that we could be on the same path, if not a worse one. He warned of an impending arms race not against enemies that threaten our civilization but against our own wants and desires.
6. • Some solutions to these issues have been raised, such as leaving the planet to explore other possible places to inhabit, or building shields to ward off dangerous technologies. However, he believed that these solutions might create more moral problems, in addition to being impractical and unrealistic in the current time frame.
• Though he said that seeking knowledge and pursuing our dreams are good, if they lead to danger, we should think of restricting ourselves and reexamine our views. He referenced the Dalai Lama's principle that neither material things nor the gain of knowledge will make us happy. He remained hopeful that the discussion of these issues and our capacity to care will help us solve them.

• To better understand the arguments that Joy presented, read the article "Why the Future Doesn't Need Us" through the link https://www.wired.com/2000/04/joy-2/.
7. CRITICISMS OF JOY'S VIEWS
• However, some have criticized Joy's views. In the article entitled A Response to Bill Joy and the Doom-and-Gloom Technofuturists, John Seely Brown and Paul Duguid argued that although new technologies need to be contemplated thoroughly, technology and social systems shape each other, and social systems have the capacity to direct these new technologies.
• For example, genetic engineering, once regarded as unstoppable in its development, faced restrictions because society recognized its potential threats.
• Nanotechnology, on the other hand, has not even been fully developed enough to pose any threat.
• Even robots, according to them, cannot make decisions the same way that humans can in their present state.
• Developments and advances in robotics, they argued, will not necessarily lead to a state that is similar to humans.
• Society may be able to plan ahead and respond to the issues that new technologies pose.
8. • They basically argued that the extension of ideas made by Joy
regarding the possible events that might happen because of these
technologies is too great a leap. Before getting to a point of danger,
there will be actions that society will take to prevent arriving at these
grim destinations.
• To better understand these arguments, read the article A Response to Bill Joy and the Doom-and-Gloom Technofuturists through the link http://nook.cs.ucdavis.edu/~koehl/Teaching/ECS188/Reprints/Response_to_BillJoy.pdf.
9. • Bill Joy, the author of the article "Why the Future Doesn't Need Us", discussed how advanced technology could affect the human race. His views about the rapid progress of technology, specifically GNR technologies, depict a negative relation between humanity and technology.
• Critics of Joy believed that he showed only one part of the bigger picture. In this light, it is necessary that the scientific community, governments, and businesses engage in a discussion to determine the safeguards of humans against the potential dangers of science and technology.
10. Answer the following questions:
1. Explain the positive and negative impacts of GNR technologies. What moral or ethical issues and safety concerns do they pose?
2. We know by now that any technology may be dangerous. However, Joy was much more worried about GNR technologies compared to other technologies. What were the reasons for these great concerns?
3. Explain how we will lose our humanity and purpose of life whether we retain control of decision-making or give this capability to technology.
4. Do you believe in the opinions of Joy? Why or why not?
5. What solutions can you propose so that we do not reach what he predicts might happen?
6. Some people accuse Joy of being a neo-Luddite, something which he denied in his article. What is a neo-Luddite? Based on Joy's article, do you think that he is a neo-Luddite? Why or why not?
7. Complete the following metacognitive reading report:
a. What three concepts from the article will you never forget?
b. What three realizations did you have after reading the article? State your answer in the following manner: Before reading the article I thought… However, after reading, I can now say that I learned…
c. What three things are still unclear to you after reading the article?
11. Humans and Robots
• Automation, the increasing sophistication of computers, and robots may be threatening the usefulness of humans and human employment.
• The development of artificial intelligence may make robots act or decide like humans.
• This possibility calls for reflection on ethical considerations concerning robots.
• A robot is an actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks.
• Autonomy means the ability to perform intended tasks based on current state and sensing, without human intervention.
• According to Dylan Evans in his article "The ethical dilemma of robots", some countries are drawing up ethical codes and legislation regarding human abuse of robots and vice versa.
• The development of emotional robotics, which allows robots to recognize human expressions of emotion and to engage in behavior that humans readily perceive as emotional, also contributes to the ethical dilemma regarding robots and humans.
12. Some of the ethical questions that are relevant to this issue include:
• What does it mean for humans to be replaced by machines?
• Is the value of a human inversely proportional to that of a machine
exhibiting artificial intelligence?
• How do we guard against mistakes committed by machines?
• If a robot injures someone, is the designer to blame, or the user, or the
robot itself?
• If robots can feel pain, should they be granted certain rights?
• If robots develop emotions, as some experts think they will, should they be
allowed to marry humans?
• Should robots be allowed to own property?
• If we see machines as increasingly human-like, will we come to see
ourselves as more machine-like?
13. Humans, Television Sets, Mobile Phones, and Computers
• Almost every household contains television sets, mobile phones, and computers.
• There are hundreds of millions of mobile phone subscriptions, millions of active Facebook accounts, and several hours of mobile phone and computer use per day.
• The Philippines currently has one of the highest digital populations in the world and is the fastest-growing application market in Southeast Asia.
• These devices are used as platforms for advertisements, propaganda, and advocacies, for communication, for information dissemination, as recreational activities and stress relievers, and as a way to bond with family members.
• Though these uses exist, some argue that these advancements bring forth ethical dilemmas. These include:
• Parents argue that they make children lazy and unhealthy.
• People become alienated from other people because they are fixated on these devices. Instead of connecting people, they tend to separate them.
• People who are unable to distinguish right from wrong are exposed to things which are not suitable for them.
• Also, according to the article "Is Google Making Us Stupid?" by Nicholas Carr, we become so dependent on the Internet that our intelligence is affected.
• We begin to lose our capacity for concentration and contemplation, and we lose interest in reading long articles or books.
14. • In April 2000, William Nelson Joy, an American computer scientist and chief scientist of Sun Microsystems, wrote an article for Wired magazine entitled "Why the Future Doesn't Need Us".
• Joy warned against the rapid rise of new technologies.
• He explained that 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are becoming so powerful that they can potentially bring about new classes of accidents, threats, and abuses.
• He further warned that these dangers are even more pressing because they do not require large facilities or even rare materials.
• Knowledge alone will make them potentially harmful to humans.
• He argued that robotics, genetic engineering, and nanotechnology pose much greater threats than technological developments that have come before.
15. • He cited the ability of nanobots to self-replicate, which could quickly get out of control.
• He also voiced concern about the rapid increase in computer power.
• He was also concerned that computers will eventually become more intelligent than humans, plunging societies into dystopian visions, such as robot rebellions.
• Joy's article tackles the unpleasant and uncomfortable possibilities that a senseless approach to scientific and technological advancements may bring.
• It is hard to avoid thinking of a future that will no longer need the human race.
• It makes thinking about the roles and obligations of every stakeholder a necessary component of scientific and technological advancement.
• In this case, it is very necessary that the scientific community, governments, and businesses engage in a discussion to determine the safeguards of humans against the potential dangers of science and technology.