[comp.ai] Re^2: Simulating thinking is NOT like simulating flying

s64421@zeus.irc.usq.oz (house ron) (03/01/90)

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:

> What does it really matter if a system displays "consciousness" or not,
> as long as it is capable of performing as required?

It matters because conscious beings can *suffer*.  If computers are, or can
be, conscious, then we have moral issues to deal with in handling them.

> I liken the question of "consciousness" to the question of "life".  Is a
> virus alive?  Pursuit of these questions may be a pleasant distraction,
> but they will never produce anything of value.

The questions differ in a significant way: the virus question is a borderline
case; the problem only exists because the 'life' of a virus, if it has any,
is very small.  (Please no discussions on the definition of 'small' in this
context - that's another can of worms.)  A genuine analogy to the virus-life
question might be asking whether a molecule is a solid.  Or how about an
electron?  With a computer acting like a full-blown human, it is a very
real and very significant question whether it contains a full-blown
human conscious awareness like the one I have (and, I assume, you too).
'Pursuit of these questions' only produces nothing of 'value' if you don't
value ethics.

> These (fuzzy) terms represent concepts which simply do not 
> exist in reality.

HMM! Perhaps Mccarthy ISN'T conscious after all?  He doesn't seem to
even know that his own consciousness is the only reality he CAN be
sure of.

Let me add that it seems quite obvious that, given enough speed, memory
and programming logic, a computer can act like anything at all: whatever
behavioural differences are found, one can insert further conditionals
to remove the discrepancy (questions of practicality aside; a toy sketch
of this patching follows below).  Also, the only way we have of
understanding the inner reality of other beings is by observing their
behaviour and drawing an analogy with our own behaviour and with the
inner states we know to accompany it in ourselves.  We tend to credit
other beings with consciousness because they behave like us in relevant
situations (i.e. not because we both fall if thrown off a cliff) --
unless they make peculiar remarks that lead us to doubt it.
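
As that toy sketch, here is a purely hypothetical Python fragment --
not anyone's actual program -- showing a behaviour table patched one
observed discrepancy at a time:

    # Hypothetical behaviour table: each discrepancy caught by an
    # observer is 'fixed' by inserting one more conditional.
    def respond(stimulus):
        if stimulus == "greeting":
            return "Hello!"
        if stimulus == "insult":
            return "That was uncalled for."
        if stimulus == "joke":           # patch added after a failure
            return "Very funny."
        return "I don't follow you."     # ...until the next discrepancy

    print(respond("joke"))               # prints: Very funny.

Practicality aside, nothing in principle stops such a table from growing
until no observable discrepancy remains.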

This method, however, is only likely to be valid if our observations
amount to statistically independent evidence.  Let me clarify with an
analogy:
if many independent members of the public make accusations to the police
about a particular person committing similar crimes, the police would have
strong suspicions that that person might really be involved in such crimes.
But if it turned out that only one person originally made a complaint and
the other complainants had heard the first person express his suspicions
in a bar, and the others 'put two and two together', the implication loses
most of its force.  This is why many countries do not allow evidence of
previous charges to be admitted.
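
To put rough numbers on the point, here is a small Bayesian sketch in
Python; the probabilities are invented purely for illustration:

    # Hypothetical figures for the bar-room analogy.
    prior = 0.01              # prior probability of guilt
    p_accuse_guilty = 0.8     # chance a witness accuses a guilty person
    p_accuse_innocent = 0.1   # chance of a false accusation

    def posterior(n_independent):
        # Bayes' rule, treating n accusations as independent reports.
        like_g = p_accuse_guilty ** n_independent
        like_i = p_accuse_innocent ** n_independent
        return prior * like_g / (prior * like_g + (1 - prior) * like_i)

    print(posterior(5))  # five independent witnesses: ~0.997
    print(posterior(1))  # five echoes of one rumour count as one: ~0.075

Five genuinely independent accusations are all but conclusive; five
repetitions of one man's bar-room suspicion are barely evidence at all.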

My point is that by deliberately trying to mimic human behaviour, AI
researchers are invalidating the usefulness of our only method of
acquiring knowledge in an _important_ area: they are undermining our
belief that beings which _act_ conscious _are_ conscious.  The question
would be entirely different if people stuck to 'traditional' computing
and, someday, one of these machines started acting human.

Regards,

Ron House.   (s64421@zeus.irc.usq.oz)
(By post: Info Tech, U.C.S.Q. Toowoomba. 4350)