[comp.ai] Philosophy - not a pejorative

gilbert@hci.hw.ac.uk (Gilbert Cockton) (01/29/88)

In article <4222@utai.UUCP> tjhorton@ai.UUCP (Timothy J. Horton) writes:
>>...People who flounder hopelessly are probably short on their
>>philosophical training.
>
>Not true.  Realize, also, that there are conceptual chasms between fields.
>
>Philosophical arguments about computational models of intelligence, for
>instance, among those without comprehensive conceptual bases in computer
>science, often seem to reduce to expressions of superstition and ignorance,
>at least among the vocal.

On conceptual chasms, what - philosophical analysis apart - can bridge them?

On ignorance of computability in 'philosophical' arguments on natural
and artificial intelligence, perhaps the Theory of Computation needs
to be as much a part of a proper philosophical training as
linguistic analysis and formal logic.  Some people in AI could do with
it as well (i.e. those who don't have it).  

As for reduction to superstition, isn't this the outcome of an
analysis of many 'natural truths'?  On the existence of objects,
nothing is 'proven', but nevertheless, we find no reason for rejecting
the natural truth of their existence.  Arguments based on ignorance
must be discounted, but are we not still left with no reason for
rejecting the natural truth that human and machine
intelligence are different?  Not only is the case for the equivalence
of human and machine intelligence not proven, no analysis exists, to my
knowledge, which points to a way of establishing the equivalence.  This
leaves AI as a piece of very expensive speculation based on beliefs
which insult our higher views of ourselves.  Superstition no doubt, but
a dominant and moral superstition which needs to command some respect.
Vocal polemic is as much a reaction to the arrogant unreasonableness of
some major AI pundits as it is a reflection of the incompetence of the
advocate.  The debate has been fair on neither side, and the ability
of AI pundits to stand their ground is due to their social marginality
as round-the-clock scientists and their cultural marginality as workers
outwith a proper discipline (look up Sociology of Deviance). People
who live in bunkers don't get hit by stones ;-)  The big AI pundits
just remind me of Skinner.

BTW: NOT(AI pundit = AI worker) -  most AI workers know their systems
     aren't working (yet?) and do leave their bunkers to mingle :-)

>I suggest, in balance, Russell's "The Cult of Common Usage," for instance.
Great - keep balancing, more competent philosophy for the reading list.

>Experience would seem to indicate that a few vocal individuals may press
>their arguments on the entire network, rather than delivering ambivalent
>analysis or investigating before disseminating.

Sounds like a netiquette proposal which I thoroughly endorse.  Whilst
guilty of advocacy on occasions, I think that everyone should strive for
an ambivalent analysis in this sort of public forum, and leave people to make
their own minds up.  Sounds like good philosophy to me.  However,
ambivalence cannot be expected in response to incompetence, however candid.
Witness the current debate on economic structure and diachronic syntax.  Nor,
as with tolerance of the intolerant, can I be ambivalent about dogmatists.
-- 
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci   
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert