[comp.ai] Sorry, no philosophy allowed here.

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/02/88)

> Could the philosophical discussion be moved to "talk.philosophy"? (John Nagle)
I am always suspicious of any academic activity which has to request that it
becomes a philosophical no-go area.  I know of no other area of activity which
is so dependent on such a wide range of unwarranted assumptions.  Perhaps this
has something to do with the axiomatic preferences of its founders, who came
from mathematical traditions where you could believe anything as long as it was
logically consistent.  Before the 5th Generation scare, AI in the UK had been 
sat on for dodging too many methodological issues.  Whilst, like the AI 
positions", Lighthill could see no reasons WHY in their work.  Whilst, like the
AI pioneers, they "could see no reasons WHY NOT [add list of major controversial
positions]", Lighthill could see no reasons WHY in their work.

> What about compatibilism?  There are a lot of arguments that free will is
> compatible with strong determinism.  (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.) (R. O'Keefe)
I would not deny the plausibility of this approach.  However, detection of 
logical errors in an argument is not enough to sensibly dismiss it, otherwise
we would have to become resigned to widespread ignorance.  My concern over AI
is that, like some psychology, it has no integration with social theories, especially
those which see 'reality' as a negotiated outcome of social processes, and not
logically consistent rules.  If the latter approach to 'reality', 'truth' etc.
were feasible, why have we needed judges to deliver equity? For some AI
enthusiasts, the answer, of course, is that we don't.  In the brave new world,
machines will interpret the law unequivocally, making the world a much fairer
place :-) Anyway, everyone knows that mathematicians are much smarter
than lawyers and can catch up with them in a few months. Hey presto, rule-base!

> One of the problems with the English Language is that most of the
> words are already taken.  ( --Barry Kort)
ALL the words that exist are taken! And if the West Coast had managed to take
more of them, we wouldn't have needed that silly Beach Boys talk ('far out' ->
'how extraordinary', 'well, fancy that', etc. :-))  AI was a natural term in the
late 50's before the whole concept of definable and measurable intelligence was
shot through in the 1960s on statistical, methodological and sociological 
grounds.  Given the changed intellectual climate, it would be sensible if the
mathematical naivety of the late 1950s were replaced by the more sophisticated
positions of at least the 1970s.  There's no need to change names, just absorb
AI into computer science, linguistics, psychology, management etc.  That would
leave workers in advanced computer applications free to get on with pragmatic 
issues with no pressure to address the pottier postures of 'pure' AI.

> I would like to learn how to imbue silicon with consciousness,
> awareness, free will, and a value system.  (--Barry Kort)
But why!  Surely you can't have been bullied that much at school to
have developed such a materialist view of human nature? :-) :-)

> Suppose I were able to inculcate a Value System into silicon.   And in the 
> event of a tie among competing choices, I use a  random mechanism to force
> a decision.  Would the behavior of  my system be very much different from a 
> sentient being with free will? (--Barry Kort)
Oh brave Science! One minute it's Mind on silicon, the next it's a little
randomness to explain the inexplicable.  Random what? Which domain? Does it
close? Is it enumerable?  Will it type out Shakespeare? More seriously, 'forcing
decisions' is a feature of Western capitalist society (a historical point
please, not a political one).  There are consensus-based (small) cultures where
decisions are never forced and the 'must do something now' phenomenon is
mercifully rare.  Your system should prevaricate, stall, duck the
issue, deny there's a problem, pray, write to an agony aunt, ask its
mum, wait a while, get its friends to ring it up and ask it out ...
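For what it's worth, the whole of Kort's proposed mechanism fits in a few lines
of code, which rather suggests how little of Mind it captures.  A sketch (the
choices, 'values' and weights below are invented purely for illustration):

```python
import random

def decide(choices, value_system):
    """Score each choice against the 'value system' (here merely a dict
    of invented weights) and, on a tie among the top scorers, use a
    random mechanism to force a decision -- Kort's proposal verbatim."""
    scores = {c: sum(value_system.get(f, 0) for f in feats)
              for c, feats in choices.items()}
    best = max(scores.values())
    tied = [c for c, s in scores.items() if s == best]
    return random.choice(tied)  # the 'random mechanism' forcing the issue

# Invented example: two choices tie under the 'values', so the outcome
# is a coin toss dressed up as deliberation.
values = {"honesty": 2, "kindness": 2}
choices = {"tell the truth": ["honesty"],
           "spare their feelings": ["kindness"],
           "say nothing": []}
print(decide(choices, values))
```

Note that nothing in the sketch can prevaricate, stall, or deny there's a
problem: a decision is always forced, which is exactly the objection above.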

Before you put your value system on Silicon, put it on paper.  That's hard
enough, so why should a dumb bit of constrained plastic and metal promise any
advance over the frail technology of paper? If you can't write it down, you
cannot possibly program it.

So come on you AI types, let's see your *DECLARATIVE TESTIMONIALS* on
this newsgroup by the end of the month. Lay out your value systems
in good technical English. If you can't manage it, or even a little of it,
should you really keep believing that it will fit onto silicon?

ok@quintus.UUCP (Richard A. O'Keefe) (05/05/88)

In article <1069@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> > What about compatibilism?  There are a lot of arguments that free will is
> > compatible with strong determinism.  (The ones I've seen are riddled with
> > logical errors, but most philosophical arguments I've seen are.) (R. O'Keefe)
> I would not deny the plausibility of this approach.  However, detection of 
> logical errors in an argument is not enough to sensibly dismiss it, otherwise
> we would have to become resigned to widespread ignorance.  My concern over AI
> is that, like some psychology, it has no integration with social theories, especially
> those which see 'reality' as a negotiated outcome of social processes, and not
> logically consistent rules.  If the latter approach to 'reality', 'truth' etc.
> were feasible, why have we needed judges to deliver equity? For some AI
> enthusiasts, the answer, of course, is that we don't.  In the brave new world,
> machines will interpret the law unequivocally, making the world a much fairer
> place :-) Anyway, everyone knows that mathematicians are much smarter
> than lawyers and can catch up with them in a few months. Hey presto, rule-base!

The rider was intended to indicate that I neither endorse nor reject
compatibilism.  In philosophy, I fear that I _have_ become resigned to
ignorance (though some recent moral philosophers have raised my hopes).

Concerning 'reality' as a negotiated outcome of social processes, do you
remember the old days when we were assured that colour terms were a social
construct, and that biological species were merely a theoretical construct
imposed on a world with no boundaries?  An archaeologist once commented in
a lecture that I attended that a number of ancient peoples had the practice
of "killing" the deceased's possessions (breaking bowls, bending knives,
tearing cloth &c), and that by and large the artefacts that had been
damaged least (e.g. just a "token" chip out of the rim of a bowl) were the
ones the archaeologists found most beautiful (comparing reconstruction with
reconstruction).  Negotiated social processes between people 5,000 years
apart?

Why have we needed judges to deliver equity?
(a) Because anyone who does that we _call_ a judge.
(b) They don't.
(c) The use of judges has nothing to do with the nature of reality.
    The facts may be known to both parties, yet either or both may
    simply be pushing to see what it/they can get away with.
    Ever heard of "the Kerry alibi?"
(d) This is something of a loaded example, insofar as law _is_ a socially
    negotiated field (particularly in the USA).  But none of the admittedly
    few books on jurisprudence I've read makes any claim that the law is an
    instrument for reaching 'truth' or 'reality', only a mechanism for
    reducing the level of dispute in a society to a workable degree.  (It
    doesn't even matter too much if a law is and is known to be unjust,
    so long as you know what it _is_ and can rely on it well enough to avoid
    its consequences.)

For an example of someone who _has_ looked at AI with an eye to sociology,
	Plans and Situated Actions : The problem of human/machine communication
	Lucy A. Suchman
	Cambridge University Press 1987
	ISBN 0-521-33739-9