[comp.ai] what is a suitable program

throopw@dg-rtp.dg.com (04/01/89)

> gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
>> throopw@agarn.dg.com (Wayne A. Throop)
>> I simply do not think that the
>> human brain has any mysterious "causal powers" that a computer
>> executing a suitable program does not.
> OK then, let's hear what a "suitable" program would be.  I contend that
> AI research doesn't have a grasp of what "suitable" means at all.

The nickel tour of what I think a suitable program would be is one
that can reach decisions to take actions which advance goals, where
these decisions are reached based on internal representations of
objects and processes occurring in the "real world".

Of course, any reasonable non-sloganish explication of "suitable"
would be at least several thousand words long.  Also of course, this
truncated treatment doesn't address the notion of "self-aware", or
whose goals, or the complexity or accuracy of the decision making, or
other things important to judging degree of understanding or
intelligence.
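
For what it's worth, here is a bare-bones sketch of the kind of
decide-act loop I have in mind.  Every name in it is hypothetical, and
it is a toy illustration of the idea, not a claim about any real
system:

    # A minimal, hypothetical sketch of a goal-directed agent: it keeps
    # an internal representation (a "world model") of what it has sensed,
    # and picks the action its model predicts will best advance its goal.
    # Names and structure are illustrative only.

    def agent_loop(sense, act, predict, goal_score, steps=100):
        world_model = {}                      # internal representation of the world
        for _ in range(steps):
            observation = sense()             # read the "real world"
            world_model.update(observation)   # keep the representation current
            # choose the action whose predicted outcome best advances the goal
            candidates = ["wait", "move", "grasp"]
            best = max(candidates,
                       key=lambda a: goal_score(predict(world_model, a)))
            act(best)                         # take the chosen action

The point is only that "decisions based on internal representations,
taken to advance goals" is a perfectly ordinary thing to ask a program
to do; how well it must be done is the open question.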

Further, it is quite true that my notion of what is "suitable" may be
vague.  This is primarily because everybody's notion of what humans do
when they "understand" something is vague.  But even so, that is no
reason to jeer at my skepticism about the assertion that no program
can possibly be suitable (that is, "do the essentials of what humans
do to understand things").

> For one, human minds are not artefacts, whereas computer programs
> always will be.

If by "artefact" it is meant that computer programs consist mostly of
components designed by some entity to meet some specific goal which
the program is supposed to advance in some way, then I don't see that
this statement is likely to be true in any meaningful way.  One of the
major fields of current AI is the investigation of self-learning
systems, or systems which "organize themselves" in response to fairly
complex interaction with the real world.
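
To make "organize themselves" concrete, here is a toy sketch of a
program whose behavior is shaped by its interaction with examples
rather than designed in by hand.  It is purely illustrative and not
any particular research system:

    # A minimal sketch of "self-organization": a perceptron-style unit
    # whose weights are not designed by hand but adjusted through
    # repeated interaction with examples.

    def train(examples, lr=0.1, epochs=50):
        """examples: list of (inputs, desired_output) pairs."""
        n = len(examples[0][0])
        weights = [0.0] * n                  # nothing about the task is built in
        bias = 0.0
        for _ in range(epochs):
            for inputs, desired in examples:
                output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
                error = desired - output      # learn only from its own mistakes
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # e.g. train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
    # ends up computing OR, though nobody wrote OR into it.

The designer supplies the learning rule, not the final behavior, so
the resulting program is an "artefact" only in a rather weak sense.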

And even granting that, the fact that one object is not an artifact
while another is hardly shows that the two objects cannot share some
feature or quality.  Certainly, I see no evidence that "understanding"
or "intelligence" is not a feature that can be shared among artifacts
and non-artifacts.

> This alone will ALWAYS result in performance
> differences.  Given a well-understood task, computer programs will
> out-perform humans.

So what?  Even humans have performance differences from one to
another.  Someone with a savant talent (or even just someone with a
highly trained technique) can easily outperform other humans by orders
of magnitude on "well understood" problems (problems which are
completely solved with a known algorithm).  But to suppose that this
performance difference is crucial is just plain silly.  Especially
since in a strict Turing-test situation it is easy to fake slower speeds,
or simply run the program with delays.

> Given a poorly understood task, they will look
> almost as silly as the author of the abortive program.

I think another meaning for "well (or poorly) understood task" is
being slipped in here.  In particular, tasks for which no algorithm is
known can be solved for practical cases "well enough" by many, many
techniques.  So, I think maybe "poorly specified" is meant here.

If so, I note that humans themselves look pretty silly when trying to
solve poorly understood (specified) problems.

> The issue as ever is what we do and do not understand about the human
> mind, the epistemological constraints on this knowledge, and the
> ability of AI research as it is practised to add anything at all to
> this knowledge.

Exactly so.  And I contend that we do NOT understand enough about the
human mind to rule out "suitable" programs.

--
If someone tells me I'm not really conscious, I don't marvel about
how clever he is to have figured that out... I say he's crazy.
          --- Searle (paraphrased) from an episode of PBS's "The Mind"
--
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw