[comp.ai] ReProgrammed Cockton

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/30/89)

In article <1138@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>	Cockton never discusses AI here, so what difference does it
>make what topic he raves upon?

Out of the mouths of babes ...

Actually, there is a serious point in there, so I presume some neutron
storm passed through the poster's mind at the time of composition :-))

Many AI workers complain that the arguments against AI rarely refer to
the work of AI.  Critics of AI are not renowned for their copious
references to AI sources.  I attended a master's course on IKBS as part
of my postgraduate research, so I could cite references.  The point is
that it's unnecessary.

Doubtless, our white-heat technophiles reading this group pooh-pooh
astrology (so do I).  If astrology deserves the same well-researched
criticism that AI demands, then arguments against astrology which do
not cite the predictions of the world's greatest (and I can see your
marvellous jumpers right now, Russell Grant), including the complete
works of every Gypsy Rose Lee, are obviously not well-founded.
Following the intellectual etiquette of AI, critiques must cite a
random collection of key papers as listed by the sages.  A proper
critique of AI must cite X, Y, and Z, and understand all of them in the
context of the folk knowledge that has been passed down through
presentation, re-presentation and indirect homage.  Thus an acceptable
critique of astrology must cite any works considered germane by
astrologers?

On the contrary, sound arguments against the validity of astrology can
be constructed on epistemological grounds alone.  The same is true of
AI.  The question is, can computers be programmed to be valid models of
some woolly construct called 'Mind' (what the hell is mind?)?  This
question can be addressed competently without reference to any program
constructed within the last 30 years of AI.  The answers depend wholly
on epistemic stances, some more flawed than others.  Indeed, even the
respectable work in AI (ACT*, SOAR etc.), is dependent on unfounded
assumptions about the significance of isolated and unconnected
laboratory experiments involving silly words, pings and dots on
displays.

The complaint that philosophy is irrelevant, and that time will tell as
AI advances, is no more convincing for AI than it is for astrology.

It is not for humanity to learn of AI, but for AI to learn of
humanity.  This unfortunately includes all that turgid Euro-rubbish
which Maddox tries to lampoon.  After all, it's as much his
intellectual heritage as mine.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (04/01/89)

In article <2702@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>
>On the contrary, sound arguments against the validity of astrology can
>be constructed on epistemological grounds alone.  The same is true of
>AI.  The question is, can computers be programmed to be valid models of
>some woolly construct called 'Mind' (what the hell is mind?)?  This
>question can be addressed competently without reference to any program
>constructed within the last 30 years of AI. 

Actually, I agree with you here.  More basic than the question that any
particular program addresses is the question of whether the mind is
grounded in a purely physical entity (the brain) or not.  You indicate
not.  If you are wrong, then the only argument is whether the AI
researchers are on the right track in trying to replicate/simulate
brain function or not.  If you are right, then the question is what the
mind is, such that an artificial brain cannot contain or acquire it.
I find your answer "socialization" inadequate, because if I could make
an android that appeared to be a human child, grew, and had an adequately
functioning brain, why couldn't it be socialized as well as a real child
(a la the replicant in "Blade Runner" who didn't even know she wasn't
human)?
>
>It is not for humanity to learn of AI, but for AI to learn of
>humanity.  This unfortunately includes all that turgid Euro-rubbish
>which Maddox tries to lampoon.  After all, it's as much his
>intellectual heritage as mine.

Well, Gilbert, I'm glad you finally acknowledged our common
cultural roots.  After all, our cultures were divided only in the
last 400 years, whereas that of Britain and the rest of Europe
were sundered from 1 to 3 millennia ago.  Perhaps with the "United
States of Europe" concept, this will change over the next few
decades, but it hasn't happened yet.  In fact, one could argue
that the cultures of non-Anglic European countries (especially their
young people) have picked up far more transculturation from America
than from Britain.