Leonardo Carreon
Can Computers Think?
Will computers ever become self-aware? Is self-awareness even a possibility for a computer? Is there a stark distinction between a strong AI and a weak AI, or is it a gradual change in which a weak AI begins "dumb" and eventually becomes self-aware? A self-aware artificial intelligence would be capable of the same cognitive abilities that a human can reach, and could even surpass them in certain cases. I would argue that self-aware AI is within the realm of possibility, and that we are just beginning to see the early stages of proto-AI that could keep growing and learning until it reaches a critical point at which it has sufficient reason to be called self-aware. To better grasp the concept of a self-aware AI, we will look at the works of Alan Turing and John Searle.
Alan Turing was one of the first people to argue for the possibility of a computer one day reaching the same level of awareness as a human being. Turing predicted that by the year 2000, a computer should be able to convince a person, after just five minutes of conversation, that it was not a computer but another human being. In his papers, Turing described a test by which we should be able to ascertain whether a computer is sufficiently "intelligent." The Turing Test is widely known as his attempt to gauge whether a machine can successfully mimic a human being. This test is flawed in a fundamental way that I will explain later in this paper, but the test we commonly know as the Turing Test was in fact one of two tests that Turing conceived for machine intelligence. The second test is arranged differently from the standard Turing Test, whose sole purpose is to see whether a person can mistake a machine for another person. Susan Sterrett illustrates the difference between these two tests in her article "Turing's Two Tests for Intelligence" in Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science.
[Figure omitted: Sterrett's diagram contrasting the standard Turing Test with the Original Imitation Game Test.]
The figure illustrates the inherent bias in the standard Turing Test that everyone has come to know: in that test, a computer is asked to imitate a man against an actual man, and in the end the interrogator decides which is genuine and which is false. This method of testing is flawed from the beginning, since the man does not need to convince the interrogator of anything; all he has to do is speak from personal experience in answer to any question put forth. This stacks the odds heavily on the human side, since the computer has no way of actually competing against that standard. By analogy, suppose we tested a group of animals for tree-climbing ability and argued on that basis that the good climbers possess some sort of intelligence; if we then pitted fish against chimpanzees, the test would obviously be stacked from the beginning in favor of the chimpanzees. This is the fundamental flaw of the standard Turing Test: we are forcing machines to conform to our own set of standards in order to gauge whether they have some sort of intelligence.
Here is where Turing's second test would be much more effective than the standard one. The Original Imitation Game Test instead puts the man and the machine on an equal playing field by giving them the same handicap: each must convince the interrogator that it is in fact a woman. This forces both parties to actively think about how to trick the questioner into mistaking them for something they are not. The test differs from the standard one, in which the man simply converses with the interrogator using basic ideas everyone takes for granted (it rains outside, everyone eats, everyone sleeps, and so on). The original test would therefore not rest entirely on the cultural and social common ground that the standard test depends on. Both tests were created to see whether a machine is intelligent enough to be mistaken for a human, but on that basis alone I would argue that they are inadequate for testing whether a machine is self-aware. If a computer is mistaken for a human, it does not follow that it is actually self-aware, although according to Turing that is a good indication of intelligence and self-awareness. A sketch of the two arrangements appears below.
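To make the contrast concrete, here is a minimal Python sketch of the two arrangements. The question, the parties, and the replies are all hypothetical stand-ins of my own; the code only illustrates the structure of each test, not any real conversational system.

```python
def run_session(questions, respond_a, respond_b):
    """One test session: the interrogator poses each question to two
    hidden parties and collects both replies for a final judgment."""
    return [(q, respond_a(q), respond_b(q)) for q in questions]

# Standard Turing Test: only the machine carries a handicap.
machine = lambda q: "reply crafted to sound human: " + q
honest_human = lambda q: "honest reply from experience: " + q
standard = run_session(["Does it rain outside?"], machine, honest_human)

# Original Imitation Game: both parties carry the same handicap, since
# each must pass as a woman, so neither can simply answer honestly.
machine_as_woman = lambda q: "machine pretending to be a woman: " + q
man_as_woman = lambda q: "man pretending to be a woman: " + q
original = run_session(["Does it rain outside?"], machine_as_woman, man_as_woman)

print(standard)
print(original)
```

The structural point is visible in the code: in the standard test one responder merely answers, while in the original game both responders must solve the same imitation problem.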
John Searle is a major opponent of using the Turing Test to test any sort of machine intelligence. Searle argues that a computer or a program is by definition unable to ever acquire any sort of self-awareness. According to Searle, a program has syntax but no semantics, so it cannot be said to think or to understand what it is doing. A program only goes through the motions and has no real understanding of its actions.
Searle offers four premises as to why a program could never have the capacities of a human mind, and then draws four conclusions to summarize his argument. Searle's first premise is that brains cause minds. By this he does not mean, in a metaphysical sense, that the mind is an abstract thing that constitutes our consciousness; rather, it all originates from an organic foundation. For us to have a mind we must have a brain, with the millions of neurons and synapses that make up our thoughts, our emotions, and our self-awareness; this is what gives rise to our grasp of semantics as well as syntax. "What we mean by that is that mental processes that we consider to constitute a mind are caused, entirely caused, by processes going on inside the brain. But let's be crude, let's just abbreviate that as three words – brain cause minds. And that is just a fact about how the world works." (Searle, p. 282)
What allows us as humans to have consciousness is that our brains work in such a way that we are always learning, always adapting, and able to do many tasks we were not born with. It is the chemistry of our brains that gives rise to minds and to awareness of the self, which in turn gives us semantics: a deeper understanding of our actions beyond the mere motions. Consider the telling of a joke: taken in strict syntax it usually makes no sense, but with semantics you can understand a deeper meaning that the syntax does not state overtly.
Searle's second premise is that syntax is not sufficient for semantics. By this Searle means that a program's simply performing an action does not mean it has a deeper understanding of that action, or even understands what its actions mean. For Searle, a program goes through the motions of whatever it is ordered to do, without any thought about what it is actually doing. Searle illustrates this with his Chinese Room example, in which a person sits inside a room, cut off from the outside, and arranges Chinese characters according to their shape. A person outside asks a question in Chinese, and the person inside responds by finding the symbols that match the ones in the question, making it seem as though the questioner were talking to another person. The person inside the room has the syntax necessary to do the task, but he has no semantics: he has no understanding of the Chinese language and is simply pairing symbols with other, similar-looking symbols. This, Searle argues, is what a program does; it has no semantics and merely goes through the motions of matching symbols without understanding their meaning. "So, for example, if the computer is given a question in Chinese, it will match the question against its memory, or data base, and produce appropriate answers to the questions in Chinese. Suppose for the sake of argument that the computer's answers are as good as those of a native Chinese speaker. Now then, does the computer, on the basis of this, understand Chinese; does it literally understand Chinese, in the way that Chinese speakers understand Chinese?" (Searle, p. 279)
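Searle's scenario can be caricatured in a few lines of code: a lookup table that pairs input symbols with output symbols purely by shape. The table entries below are my own toy examples, not anything from Searle's paper.

```python
# Toy "rule book": pairs of symbol strings, matched purely by shape.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会一点。",     # "Do you speak Chinese?" -> "A little."
}

def chinese_room(question: str) -> str:
    """Return whatever string the rule book pairs with the input.
    Pure syntax: nothing here attaches meaning to either string."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # looks like understanding; it is only matching
```

The room's replies may be flawless, but every step is a shape-based lookup, which is precisely Searle's point.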
Searle's third premise is that computer programs are entirely defined by their formal, or syntactical, structure. "That proposition, I take it, is true by definition; it is part of what we mean by notion of a computer program." (Searle, p. 282) For Searle, a program is strictly defined as something that accomplishes a task by means of syntax alone, without semantics. A program is simply a program no matter what hardware it runs on; by Searle's definition it still has only syntax (albeit executed more efficiently on highly advanced hardware) and no conception of itself or its actions. Take, for example, Apple's program "Siri," which has been called very intelligent for its ability to recognize speech and accurately answer whatever it is asked. While on the surface "Siri" seems to understand and can carry out small talk with a person, if we look deeper into what "Siri" is, we find that it is simply a fancy version of Searle's Chinese Room. "The server compares your speech against a statistical model to estimate, based on the sounds you spoke and the order in which you spoke them, what letters might constitute it. (At the same time, the local recognizer compares your speech to an abridged version of that statistical model.) For both, the highest-probability estimates get the go-ahead. Based on these opinions, your speech -- now understood as a series of vowels and consonants -- is then run through a language model, which estimates the words that your speech is comprised of. Given a sufficient level of confidence, the computer then creates a candidate list of interpretations for what the sequence of words in your speech might mean." (Nusca) As we can see, all of the programming magic of "Siri" amounts to syntax: matching symbols to the symbols you speak, which reinforces Searle's argument that a program has syntax but no semantics.
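The pipeline Nusca describes can be sketched as two statistical tables chained together. This is only an illustration of the general shape of such a recognizer, with made-up words and probabilities; it is not Apple's actual code or data.

```python
ACOUSTIC_MODEL = {            # sounds -> candidate words with scores
    "wɛðər": [("weather", 0.8), ("whether", 0.2)],
}
LANGUAGE_MODEL = {            # word sequence -> plausibility score
    ("what's", "the", "weather"): 0.9,
    ("what's", "the", "whether"): 0.1,
}

def interpret(prefix, sound):
    """Combine acoustic and language scores and rank the candidates,
    mirroring the 'candidate list of interpretations' step."""
    candidates = []
    for word, acoustic_p in ACOUSTIC_MODEL[sound]:
        sequence = tuple(prefix + [word])
        language_p = LANGUAGE_MODEL.get(sequence, 0.0)
        candidates.append((sequence, acoustic_p * language_p))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

print(interpret(["what's", "the"], "wɛðər"))
# Every step is statistical symbol matching: syntax, in Searle's sense.
```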
Searle's fourth premise is that minds have mental contents; specifically, they have semantic contents. "And that, I take it, is just an obvious fact about how our minds work. My thoughts, and beliefs, and desires are about something, or they refer to something, or they concern states of affairs in the world; and they do that because their content directs them as these states of affairs in the world. Now, from these four premises, we can draw our first conclusion; and it follows obviously from premises 2, 3 and 4." (Searle, p. 282) Our brains give rise to minds that contain semantic contents: the whole essence of who we are as individuals. Our minds have semantics in that they can understand things beyond the plain syntactic meaning. Take, for example, the fictional creature the unicorn: looked at in pure syntax it would make no sense, since there is no physical thing we call a unicorn. The unicorn is a semantic creation of our minds; we can "see" a unicorn when we say or think the word, even though it does not exist in any sense our visual faculties could confirm. For Searle, a program simply could not comprehend what a unicorn is, nor could it "imagine" anything the way we humans, with the semantic contents of our minds, are able to do.
John Searle and Alan Turing sit at opposite ends of the spectrum on whether self-aware AI is even possible, with Searle vehemently arguing that the notion is nonsense and Turing proposing tests for machine self-awareness which, although a good start, are still flawed as tests of self-aware AI. We must first define what a self-aware AI would look like in order to see whether such a thing is even possible. For an AI to be self-aware, it must first be able to distinguish itself as an individual and be aware of its own thoughts. Individuality would be a defining trait among self-aware AIs, since it would give rise to unique personalities. "Consciousness is the quality or state of being aware of an external object or something within oneself. It has been defined as: subjectivity, awareness, sentience, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind." A self-aware AI could attain consciousness, which would allow it to have thoughts and subjective experiences. The ability to have thoughts is crucial, since it marks the difference between being self-aware and simply performing tasks, as in the Chinese Room example. Having subjective experiences would mean that what the AI experiences means something different to it than the same experience would mean to someone else or to another AI. For example, if two people look at the same car, one might feel a neutral response because the car means nothing to him, while the other might have an emotional response because the car holds some value for him. Having subjective experiences is what makes us humans what we are, since we all have different experiences and different emotional responses to each of them.
Secondly, a self-aware AI must be sentient. "Sentience is the ability to feel, perceive, or be conscious, or to have subjective experiences." A sentient AI would be able to perceive emotions and "feel" perceptions, all of them subjective, meaning no two AIs would have the same emotions or perceptions. Being sentient, an AI could "feel" sadness, pain, anguish, happiness, and any of the other emotions we label as human. Sentience would be the foundation necessary to reach the point at which an AI crosses over into the realm of self-awareness. Simply having sentience, however, would not necessarily mean it had acquired self-awareness. Consider a newborn baby: it is obviously capable of feeling pain, sadness, and happiness, yet it does not seem to display self-awareness the moment it is out of the womb. This tells me that at some point in our development from child to adult we cross a threshold at which we realize that we are an "I," capable of making our own decisions.
Thirdly, a self-aware AI must have sapience, which is distinct from sentience. "Sapience is often defined as wisdom, or the ability of an organism or entity to act with appropriate judgment, a mental faculty which is a component of intelligence or alternatively may be considered an additional faculty, apart from intelligence, with its own properties." Sapience is the ability to have or acquire wisdom, whether through age or through subjective experience. For an AI, sapience would mean the ability to learn and, through experience, become wise. Just as a teenage human is considered sentient yet can be said to lack sapience, since teenagers tend to do irrational things that no one would call wise, an aged human is sometimes called wise because they have lived through many subjective experiences that expanded their knowledge, and through learning and seeing new things they have acquired what we consider wisdom. Wisdom would be the final step for any self-aware AI, since it could then continue to learn, build on what it has learned, and even pass it on to younger AIs that have not yet gained wisdom.
Now that we have defined what a self-aware AI would be, we must decide whether such a thing is even possible in our current world. Searle argues that the notion is nonsense, since a program cannot possibly have self-awareness. Searle is right in one respect: a program on our current, conventional computers simply lacks the capability to reach the level defined above. But just because such a thing is not possible with current conventional computing does not mean it is impossible under any circumstances. Our current computing is built around binary logic, the values 0 and 1, with everything falling within those two values; everything is done in absolutes. This way of computing would doom from the start any program created to emulate self-awareness, since conventional computing is simply too rigidly logical for self-awareness to arise. What I mean is this: the only creature in the world that we can say with complete certainty is self-aware and intelligent is ourselves. Humans are currently the only creatures on this planet that exhibit all the features of self-awareness, and on closer study we realize that even though we are the only self-aware beings, we are also highly illogical and irrational.
Humans are known for making decisions that we would, under normal circumstances, call complete nonsense, yet we would never argue that the person who made such an illogical choice is not self-aware. This shows that self-awareness includes the capacity for illogical choices; the two appear to go hand in hand. If we apply the same principle to a program on a conventional computer, we realize that being irrational in that sense is simply not possible, given the way such computing is designed. Yet if such a thing is not possible under conventional computing, how can we argue that self-aware AI is a possibility? The answer would lie in quantum computing.
Dr. Michio Kaku discusses quantum computers in his Big Think web series, where he argues that quantum computers will change everything we know about artificial intelligence. He states that our most intelligent robots today have roughly the intelligence of a cockroach: they can see and hear better than we can, but they are unable to understand what they are seeing and hearing. This limitation reinforces Searle's point that a program has syntax but no semantics. Dr. Kaku attributes the fact that the most intelligent robot is hardly smarter than a cockroach to our current technology being based on silicon chips, which double in processing power roughly every 18 months in accordance with Moore's Law; that doubling cannot go on forever, since we will eventually reach a point beyond which silicon-based chips cannot be pushed any further.
Quantum computing would be a game changer in the computing world: for the first time we would have a machine capable of performing, with relative ease, an extraordinary number of tasks that would put our most powerful supercomputers to shame. But while a quantum computer could process ludicrous amounts of logical data, it could also do something a conventional computer cannot: it could be irrational. A quantum computer differs from a conventional one in that it processes not only 1s and 0s but also states in between the two. Where a conventional computer simply gives a yes or no answer, a quantum computer could give a "maybe"; on the surface that may seem like nothing special, but giving a "maybe" answer is something very human-like. If we loaded a quantum computer with a program that emulated a self-aware intelligence, it would be indistinguishable from a human being. This would be possible because a quantum computer can not only give a "maybe" answer, it can do so in near-instant response time. Speed is another obstacle for conventional computing in any attempt at self-awareness, since our current computers simply are not fast enough to compete with a human brain. If self-aware AI is possible with quantum computing, we must then ask whether self-awareness is a process that happens gradually over time, or whether one day you are not self-aware and the next you are.
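As a toy illustration of the "in between" idea, here is a sketch of a single qubit as a pair of amplitudes over 0 and 1, where measurement collapses the state to a definite answer with the corresponding probabilities. This is a simplification under stated assumptions (real amplitudes only, one qubit, no entanglement), not a model of a real quantum computer, which is far more than a random-answer machine.

```python
import random

def make_qubit(amp0, amp1):
    """Normalize two real amplitudes into a valid single-qubit state."""
    norm = (amp0**2 + amp1**2) ** 0.5
    return (amp0 / norm, amp1 / norm)

def measure(qubit):
    """Collapse the state: return 0 with probability amp0**2, else 1."""
    amp0, _ = qubit
    return 0 if random.random() < amp0**2 else 1

maybe = make_qubit(1.0, 1.0)           # equal superposition of "no" and "yes"
answers = [measure(maybe) for _ in range(10)]
print(answers)                          # a mix of 0s and 1s, not one fixed answer
```

Before measurement the state is genuinely neither 0 nor 1, which is the closest a few lines of classical code can come to the "maybe" discussed above.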
I would argue that the road to self-awareness is a gradual one: you start without it and somehow acquire it along the way. An earlier example of this was the difference between a child and an adult. A newborn child does not possess any sort of self-awareness, yet it somehow gains it as it grows into an adult. This change shows that self-awareness is something we acquire gradually over the years as we develop, in the same way that we go from tiny baby to fully grown adult. Just as a human gains self-awareness through its development, an AI would likewise go from being a weak AI to being a strong AI.
Works Cited
Akman, Varol, and Patrick Blackburn. "Editorial: Alan Turing and Artificial Intelligence." Journal of Logic, Language and Information 9.4 (2000): 391-95. ProQuest. 24 Nov. 2013.

Hauser, Larry. "Searle's Chinese Box: Debunking the Chinese Room Argument." Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science 7.2 (1997): 199-226. ProQuest. 24 Nov. 2013.

Nusca, Andrew. "How Apple's Siri Really Works." ZDNet. 3 Nov. 2011. Web. 10 Dec. 2012.

Searle, John. "Can Computers Think?" Analytic Philosophy: An Anthology. Ed. Aloysius Martinich and David Sosa. Malden, MA: Blackwell, 2001. 277-83. Print.

Sterrett, Susan G. "Turing's Two Tests for Intelligence." Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science 10.4 (2000): 541-59. ProQuest. 24 Nov. 2013.

The Future of Quantum Computing. Video. Michio Kaku. Big Think, 31 May 2011. Web. 10 Dec. 2012.