there is a history of thought that denies the very thing turing asserts; that is, it holds that a machine cannot think because it cannot respond as a human does.
“ If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.”
a digital computer is a universal machine: a single device that can be programmed to do any number of things
notably, this is the same kind of description descartes reserves for human reason ("a universal instrument, which can be used in all kinds of situations")
the theological objection
a substance dualist may hold that the mental is a different substance from the physical, so that only ensouled beings can think
response: there is no good ground for thinking this is the case, and it seems entirely plausible that God could grant "souls" to anything He wanted
“heads in the sand” objection
the consequences of machines thinking are terrible
we are no longer special
we could be supplanted by machines
we could become dominated by machines
response: even if all of this were true, it would be irrelevant; fearing a consequence does not change whether machines can in fact think
the mathematical objection
the idea is that formal logical systems, and hence digital computers, are subject to gödel-style incompleteness results: for any such machine there are questions it can never answer correctly
response: humans may not be free from analogous limitations
in general, we do not know all the consequences of this kind of limitation, and so we should refrain from speculating too much
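the flavor of these limitative results can be sketched with a toy diagonal argument; the particular list of functions below is an arbitrary illustration (not any actual formal system), but it shows the general mechanism: any fixed enumeration of answers necessarily omits something.

```python
# Toy diagonalization, in the spirit of the incompleteness results:
# given any fixed (here, finite) list of total functions on the
# naturals -- standing in for the outputs one machine can produce --
# we can always construct a function the list omits.
funcs = [lambda n: 0, lambda n: n, lambda n: n * n]

def diagonal(n):
    # Differ from the n-th function at input n.
    return funcs[n](n) + 1

# diagonal disagrees with every listed function somewhere:
print(all(diagonal(i) != funcs[i](i) for i in range(len(funcs))))  # True
```

the same construction works for any (even infinite) effective enumeration, which is why no single fixed machine can answer every question correctly.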
argument from consciousness
machines can’t “feel” anything; they lack qualia. as a result, they cannot be said to be thinking
response: this is a specific instance of a more general problem, the problem of other minds. we cannot know the subjective states of any other individual; the only way we can be sure anyone is thinking is by looking at their behavior. indeed, this is exactly why we would be interested in this kind of test for intelligence.
argument from various disabilities
some people have claimed machines will never be able to: (1) be kind; (2) be resourceful; (3) be beautiful; (4) be friendly; (5) have initiative; (6) have a sense of humor; (7) tell right from wrong; (8) make mistakes; (9) fall in love; (10) enjoy strawberries and cream; (11) make someone fall in love with one; (12) learn from experience; (13) use words properly; (14) be the subject of one's own thoughts; (15) have as much diversity of behavior as a man; (16) do something really new.
this seems unduly chauvinistic: it isn't clear that an intelligent being, such as an alien, would be able to do all these things
why are we so certain that a machine can't do these things?
it seems we are basing this assumption on inductive evidence, which may only mean we haven't yet encountered such a machine
lady lovelace’s objection
“The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.”
response: we too are constrained in what we can originate
moreover, machines do originate something, namely responses, and this seems to be the kind of thing humans originate as well
the continuity of the nervous system
the brain is not a discrete-state machine; it is a continuous-state machine
response: a discrete-state machine can do a very good job of imitating a continuous-state machine without producing much error
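this reply can be illustrated with a toy sketch: quantize a continuous quantity onto a fine grid and the worst-case error is bounded by half the grid step. the signal, sampling rate, and step size below are arbitrary choices for the illustration.

```python
import math

# Sketch: a machine with finitely many discrete states can mimic a
# continuous quantity to within a small, fixed error by quantizing
# finely enough.
def continuous(t):
    return math.sin(t)  # stand-in for a continuous "nervous system" signal

def quantize(x, step=1e-3):
    # Snap to the nearest multiple of `step`: a finite set of states
    # over any bounded range.
    return round(x / step) * step

# Worst-case error over one full period, sampled at 1 kHz:
max_err = max(abs(quantize(continuous(t / 1000.0)) - continuous(t / 1000.0))
              for t in range(6284))
# max_err never exceeds step / 2, i.e. 0.0005 here
```

shrinking the step (at the cost of more states) shrinks the error, which is why an interrogator in the imitation game could not exploit the discrete/continuous difference.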
informality of behavior
there is no set of rules that describes what a person ought to do in every possible set of circumstances
response: if the world is determined, then both man and machine are determined; if it is not, then neither is
moreover, norms apply to humans as well, and we consider deviations from these norms to be mistakes in rational processes
argument from e.s.p.
there is not really any evidence for e.s.p., so we need not address it
the turing test may very well be a necessary condition (although maybe not if a chauvinistic argument is taken seriously), but is it sufficient?