[comp.ai] Dualisms for the 5-minute autodidact

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/28/89)

In article <4369@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>
>Huh? Brains understand. Brains are physical objects obeying physical laws.
>Either we regress into dualism or we acknowledge that understanding arises

We do not 'regress' into dualism.  'We', as a culture, i.e. Western civilisation,
have never rejected dualism in all its forms.  "Scientific" method has proved to be
very effective in many areas, although the speed of modern progress has more to do
with the (wo)manpower involved than with any significant post-Baconian shift. If more
people look hard at the physical world, then more has always been discovered about
it.  Hardly surprising, and similar in kind to the exploration of Africa by
Europeans in the C19.  The more they looked, the more they found.  Similar, mark you,
not identical.  Quinine and the Gatling gun were simpler than the paradigm shifts
needed to advance normal science another few yards up its Niger.

Now the same just isn't true for a Science of Mind.  While adding to the stock of
genome mappers will definitely add to our knowledge, adding to the stock of
Cognitive Psychologists will only increase LISP machine sales.  There are enormous
epistemic difficulties in applying kindergarten positivism to the study of mind.
Stevan Harnad has been trying to get this across, but some nasty ideology is making
an easy task hard going for him.  Still, one should take the compliment of his
concern.  We really are worried about your cultural health :-)

Don't ask me what these "enormous" epistemic difficulties are.  They're on and
between the lines of my current postings and most of Stevan Harnad.  If one
really cares about this issue, one will go and get a reading list from a competent
epistemologist  (these pop up in strange places like Literature, Social Theory and
Jurisprudence as well as philosophy departments, especially when the latter are
overrun with formal logicians).

Now then Dualism.  Now who filled your head on this one?  Chats with colleagues?
Lectures?  Research methods seminars?  Autodidacticism (ugh)?

I remember Drew McDermott telling us (with gross inaccuracy and the lack of
empirical substance that we expect from AI :-)) that Dualism (and there is only one
Dualism!) went out of fashion 4 centuries ago (i.e. well before Bacon, so my
lectures on the Great Instauration must have missed something crucial).

Well, I have to tell you that at least FIVE versions of dualism are still in fashion
in varying groups of Western intellectuals (as for the general public, knee-jerk
science isn't much in fashion at all).

So, the five dualisms:
	1) Platonic - i.e. forms versus temporal objects
	2) Cartesian - including the 10c AI version
	3) Ethical - fact and value statements are not reducible to a common form
		     or means of verification (Moore, the Naturalistic Fallacy)
	4) Explanatory -  human actions are not 'caused' like natural events, they
			  involve reasons and motives
	5) Epistemological - das Ding ist nicht das Ding an sich, i.e. the thing is
			     not the thing-in-itself (the Dead Horse Song)

If anyone wants to chase these through, I raided the list from Anthony Quinton's
contribution on Dualism to the Fontana Dictionary of Modern Thought.  You could chase
up references here.  Better still, go and chat to the relevant philosophy types
(i.e. ethics, epistemology, metaphysics), as there really is nothing more tacky
than an autodidact (a life of Bacon's contracts of error).

My postings are motivated by Dualism (4), not Dualism (2) - the brand known to most
AI types.  However, I am in no position (nor is anyone else) to say whether Dualism (2)
is wrong.   There is more evidence for it than against it, however.  The only regression
lies in ending the belief in a science of human behaviour.  This only affects the tiny
ideological clique who believe in such a possibility anyway.  So believing D2 or D4, if
it became universal (which it almost is anyway), would do little more than weed
out certain zealots from a few university departments.  Hardly a regression, and certainly
no more than a ripple in Western cultural history.

>> The same is true of the new and creative meanings developed within the AI
>> subculture.  If a computer system has understanding, then where does it lie?
>If AI is correct, in the interaction of the parts of certain kinds of systems.
>Resistance to that notion in no way constitutes disproof of it. If brains
>have understanding, then where does it lie?

I don't know if brains do have understanding.  As for programs, "interaction" is
unconvincing.  For humans, self-attribution of understanding is a ternary decision
over the workings of everything else in our minds.  I demand a sub-system with a
cell marked "I understand" and a model of how this gets set.  I also want the
performance of the simulation to degrade whenever this is set to false, even if the
knowledge-base is theoretically capable of optimum performance.

>1) I doubt many "AI types" disown that small voice, they simply refuse to 
>accept that it is something which must forever remain sacred, mysterious,
>and beyond human comprehension. There is more awe to be felt in contemplating
>the Universe from a position of comprehension of its functioning than from
>a position of fear and ignorance.

Oh, use your imagination.  It's possible to experience understanding without deifying
it, fearing it or being ignorant of it.  You're in a whopping bag of category mistakes.
One cannot, because of ignorance,
	a) be ignorant of understanding,
	b) fail to understand understanding, or
	c) put understanding beyond understanding.

The argument is that it is strong AI which is (a), (b)s and
(c)s.  I'm alright, Jack, and so are the other 99.9999999999%.  It is the scientist of
behaviour who chooses to be (a), or to (b) or (c), by insisting on positivist or
computational methods.  The "study" of behaviour is not so constrained, and thus can
avoid (b)ing, (c)ing, and being (a).  As Searle said, there's a blinding ideology at work.

I do comprehend my understanding.  It's the AI types who insist that all mental
experiences have to be grounded in a computational mechanism.  I am not afraid of
this prank, nor does it deny the sanctity or mystery of understanding.  Nor does the
prank promise any end to any fear and ignorance.  The prank just doesn't connect.
It's a waste of time.  Sanctity and mystery are not at issue.  It's accuracy and
intellectual honesty.

>2) The fact that something is clear to you is hardly compelling evidence.

We're talking about the meaning of words and how they are used. All I am saying is
that ordinary usage of the word "understanding" does not allow understanding in a
machine or mechanism.  This is just raw linguistic prejudice and very unfair to
machines.  But really, why do the strong AI camp want to rip the English language
apart?  To what end?  Machines don't understand, fact of language.  This is not to
say that a machine cannot simulate some behaviour, that is a separate issue that has
no bearing on the location of understanding.

>At the moment, information processing models seem to be doing quite well.

Don't be silly.  They don't perform anywhere near as well as human intuitions about
mind.  The current state of the art in computational modelling is very very crude.
D4 suggests that this will remain the case, since significant advances cannot be
made by the application of scientific method.  It just won't work here.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

geb@cadre.dsl.PITTSBURGH.EDU (Gordon E. Banks) (03/04/89)

In article <2484@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>So, the five dualisms:
>	4) Explanatory -  human actions are not 'caused' like natural events, they
>			  involve reasons and motives
>
>
>My postings are motivated by Dualism (4), not Dualism (2) - the brand known to most

I agree that this is a most attractive hypothesis.  Otherwise, what about
free will?  What about man's responsibility for his actions?  Indeed,
I would love to see an argument that could convince me that
this was a true statement.  I really find Skinnerian determinism repulsive.
But like so many of your postings, you seem to indicate that the proof
of your statements is so deep and complex that it would require a few
years of graduate study (preferably at OxBridge, and NOT in science) to begin 
to fathom them, if indeed such Philistines as we could EVER do so.

>It's a waste of time.  Sanctity and mystery are not at issue.  It's accuracy and
>intellectual honesty.
>
Again, how so?  I don't understand what you mean.  Can you explain in plain 
English?

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (03/10/89)

In article <2377@cadre.dsl.PITTSBURGH.EDU> geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks) writes:
>I would love to see an argument that could convince me that
>this was a true statement.  I really find Skinnerian determinism repulsive.
>But like so many of your postings, you seem to indicate that the proof

It is not for the writer to prove, but for the reader to be convinced.
My convincement as a reader took some time, and I would do no justice
to what I have read by summarising it for the net.  All I can say is: go and
look here or there, see if there is anything in it; I thought there
was.

At the most, I suppose I could tempt you.  I can start by saying that OxBridge is
irrelevant (and less o'that or I'll bray yer one).

As for plain English, the connotations of words are dependent on
culture.  Perhaps nothing can be plain in these domains, hence the need
for study.

What I was on about was that the anger and outrage over the AI
bowdlerizing of "understanding" is not due to some satanic
versification breaking deep spiritual taboos. No - the outrage is over the
lack of candour and disrespect for language implicit in the
bowdlerization, hence the lack of intellectual honesty and a disturbing
disregard for liberal academic standards of truth.  If these folk were
snake-oil salesmen, then no-one would really mind.  But academics?  Not
by the standards which I know!  Before I am asked to get down off my high horse,
tell me what I'm riding through.  I cleaned my shoes this morning.

The social and human sciences have long had the good sense to avoid
ordinary language, or to 'quote' an everyday word used in a technical
context (most books on Semantics, e.g. Lyons).  If AI did the same, we
could avoid having to stamp on charlatan usage.

After all, how AI describes its work has nothing to do with its
efficacy or accuracy. If AI developed a proper technical language
disjoint from ordinary language for the most part, then AI workers may
gain the academic respect which has been denied them for so long by so
many.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert