ARTIFICIAL INTELLIGENCE AND REAL INTELLIGENCE
The question, “Is artificial intelligence
possible?” is ambiguous. It may mean “Can AI programs actually produce results
that resemble human behaviour?” This is a scientific question. The answer at
present is yes, at least in some cases. Whether this is so in all cases is not
yet known. Some things that most people
assume computers could never do are already possible. AI programs can compose
aesthetically appealing music, draw attractive pictures, and even play the piano
“expressively”. Other things are more elusive: producing perfect translations of
a wide range of texts; making fundamental, yet aesthetically acceptable,
transformations of musical style; producing robots that move nimbly over rough
ground, swim across rivers, or climb mountains. It is controversial whether
these things are merely very difficult in practice, or impossible in
principle.
Alternatively, “Is artificial intelligence
possible?” may mean “Could any program (or robot), no matter how humanlike its
performance, really be intelligent?” This question involves highly controversial
issues in the philosophy of mind, including the importance of embodiment and the
nature of intentionality and consciousness. Some philosophers and AI researchers
argue that intelligence can arise only in bodily creatures sensing and acting in
the real world. If this is correct, then robotics is essential to the attempt to
construct truly intelligent artefacts. If not, then a mere AI program might be
intelligent.
The celebrated mathematician and computer
scientist Alan Turing proposed what is now called the Turing Test as a way of
deciding whether a machine is intelligent. He imagined a person and a computer,
both hidden behind a screen and communicating with an interrogator by
electronic means. If the interrogator cannot tell which one is the human, we
have no reason to deny that the machine is thinking.
That is, a purely behavioural test is adequate for identifying intelligence (and
consciousness). The philosopher John Searle has expressed a different view. He
admits that a program might produce replies identical to those of a person, and
that a programmed robot might behave exactly like a human. But he argues that a
program cannot understand anything it “says”. It is not actually saying
(asserting) anything at all, merely outputting meaningless symbols that it has
manipulated according to purely formal rules. Lacking understanding
(intentionality), it is all syntax and no semantics. But human beings can
ascribe meaning to its empty symbols, because our brains can somehow (Searle
does not say how) cause intentionality, whereas metal and silicon cannot. There
is no consensus, in either AI or philosophy, on whether Turing's view or
Searle's is right.
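Searle's contrast between syntax and semantics can be made concrete with a toy
sketch (illustrative only; the rules and replies below are invented, drawn
from neither Searle nor Turing). The program answers by matching input text
against hand-written formal rules, so every reply is produced by pure symbol
manipulation, with nothing in the system that represents what any word means.

    # A toy "Chinese Room": replies come from purely formal
    # pattern-matching rules; nothing here represents meaning.
    # All rules and replies are invented for illustration.
    import re

    RULES = [
        (re.compile(r"\bhello\b", re.I), "Hello! How are you today?"),
        (re.compile(r"\bi feel (\w+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]
    DEFAULT = "That is interesting. Please go on."

    def reply(text: str) -> str:
        # Apply the first rule whose pattern matches; the program
        # inspects only the shape of the string, never its meaning.
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(reply("Hello there"))                  # Hello! How are you today?
    print(reply("I feel anxious about my job"))  # Why do you feel anxious?

On Searle's view, no amount of elaborating such rules would add understanding:
the symbols remain empty however fluent the output. On a purely behavioural
view like Turing's, a rule system good enough to pass the test would count as
thinking.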
Whether an AI system could be conscious is an
especially controversial topic. The concept of consciousness itself is
ill-understood, both scientifically and philosophically. Some people think it
obvious that any robot, no matter how superficially humanlike, must be
zombie-like. But others think it obvious that a robot whose functions matched
the relevant functions of the brain (whatever those may be) would inevitably be
conscious. The answer has moral implications: if an AI system were conscious, it
would arguably be wrong to “kill” it, or even to use it as a “slave”.