[comp.ai] Arguments against AI are arguments against human formalisms

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/05/88)

In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) writes:
>Cockton seems to be saying that humans do have free will, but that it is
>totally impossible for AIs to ever have free will.  I am curious as to what
>he bases this belief upon other than "conflict with traditional Western values".
Isn't that enough?  What's so special about academia that it should be
allowed to support any intellectual activity without criticism from
the society which supports it?  Surely it is the duty of all academics
to look to the social implications of their work?  Having free will,
they are not obliged to pursue lines of enquiry which are so controversial.

I have other arguments, which have popped up now and again in postings
over the last few years:

	1) Rule-based systems require fully formalised knowledge-bases.

	   Rule-based systems are impossible in areas where no written
	   formalisation exists.  Note how scholars like John Anderson
	   restrict themselves to proper psychological data.  I regard Anderson
	   as a psychologist, not as an AI worker.  He is investigating
	   computational accounts of known phenomena.  As such, his research
	   is a respectable confrontation with the boundaries of the
	   computational paradigm.  His writing is candid and I have yet to
	   see him proceed confidently from assumptions, though he often has
	   to live with some.

	   Conclusion: AI, as a collection of mathematicians and computer
	   scientists playing with machines, cannot formalise psychology where
	   no convincing written account exists.  Advances here will come from
	   non-computational psychology first, as computational psychology has
	   to follow in the wake of the real thing. 

	   The real thing unfortunately cuts a slow and shallow bow-wave.
	   
	   [yes, I know about connectionism, but then you have to formalise the
	    inputs.   Furthermore, you don't know what a PDP network does know]

	2) Formal accounts are rare for nearly every area of human activity.

	   I have a degree in Education.  For it I studied Philosophy, Psychology
	   and Sociology.  My undergraduate dissertation was on curriculum design,
	   an interdisciplinary topic which has to draw on inputs from a number
	   of disciplines.  What I learnt here was which horse was best suited
	   for which course, and thus when not to use mathematics, which was most
	   of the time.  I did philosophy with an (ex-)mathematician, BTW.

	   I know of few areas in psychology where there is a WRITTEN account of
	   human decision making which is convincing.  If no written account
	   exists, no computational account, a more restrictive representation,
	   is possible.  Computability adds nothing to 'writability', and many 
	   things in this world have not been well represented using written
	   language.  Academics are often seduced by the word, and forget that
	   the real decisions in life are rarely written down, and when they are
	   (laws, treaties) they seem worlds apart from what was originally said.

	   AI depends on being able to use written language (physical symbol
	   hypothesis) to represent the whole human and physical universe.  AI
	   and any degree of literate-ignorance are incompatible.  Humans, by
	   contrast, may be ignorant in a literate sense, but knowledgeable in
	   their activities.  AI fails as this unformalised knowledge is
	   violated in formalisation, just as the Mona Lisa is indescribable.

	   Philosophically, this is a brand of scepticism.  I'm not arguing that
	   nothing is knowable, just that public, formal knowledge accounts for
	   a small part of our effective everyday knowledge (see Heider).

	   So, AI person, you say you can compute it.  Let's forget the Turing
	   Test and replace it with the Touring Test.  Write down what you did
	   on your holidays, in English, then come up with a computational model
	   to account for everything you did.  There is a warm-up problem which
	   involves the first 10 minutes after you step out of bed in the morning.
	   After 10 minutes, write down EVERYTHING you did (from video?).  Then
	   elaborate what happened.  This writing will be hard enough.

	   Get my point?  The world's just too big for your head.  The arrogance
	   of AI lies in its not grasping this.  AI needs everything formalised
	   (world-knowledge problem).  BTW, Robots aren't AI. Robots are robots.

	3) The real world is social, not printed.

	   Because so little of our effective knowledge is formalised, we learn
	   in social contexts, not from books.  I presume AI is full of relative
	   loners who have learnt more of what they publicly interact with from
	   books than from people.  Well I didn't, and I prefer
	   interaction to reading.  

	   Learning in a social context is the root of our humanity.  It is
	   observations of this social context that reveal our free will in
	   action.  Note that we become convinced of our free will, we do not
	   formalise accounts of it.  This is the humanity which is beyond AI.
	   Feigenbaum & McCorduck (5th Gen) mention this 'socialisation'
	   objection to AI in passing, but produce no argument for rejecting it.

	   It is the strongest argument against AI.  Look at language
	   acquisition in its social context.  AI people cannot program a system
	   at the same rate as humans acquire language.  OK, perhaps 'n'
	   generations of AI workers could slowly program an NLP system up to
	   competence.  But as more gets added, there is more to learn, and there
	   would come a point at which the programmers wouldn't understand the
	   system until they were a few years from retirement.

	   We spend so much of our time growing in this world that we never have
	   time to formalise it.  The moment we grasp ourselves, we are already out
	   of date, for this grasping is now part of the self that was grasped.

Anyway, you did ask.  Hope this makes sense.

yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) (05/07/88)

In article <1103@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) writes:
> >Cockton seems to be saying that humans do have free will, but that it is
> >totally impossible for AIs to ever have free will.  I am curious as to what
> >he bases this belief upon other than "conflict with traditional Western values".
> Isn't that enough?  What's so special about academia that it should be
> allowed to support any intellectual activity without criticism from
> the society which supports it?  Surely it is the duty of all academics
> to look to the social implications of their work?  Having free will,
> they are not obliged to pursue lines of enquiry which are so controversial.

These are two completely separate issues.  Sure, it's worthwhile to consider
the social consequences of having intelligent machines around, and of
course, the funding for AI research depends on what benefits are anticipated
by the government and the private sector.

This has nothing to do with whether it is possible for machines to have
free will.  Reality does not depend on social consensus.
            --------------------------------------------

Or do you believe that the sun revolved around the earth before Copernicus?
After all, the heliocentric view was both controversial and in conflict
with the social consensus.

In any case, since when is controversy a good reason for not doing
something?  Do you also condemn any political or social scientist who has
espoused controversial views?

> I have other arguments, which have popped up now and again in postings
> over the last few years:
> 
> 	1) Rule-based systems require fully formalised knowledge-bases.

This is a reasonable criticism of rule-based systems, but not necessarily
a fatal flaw.
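
To make the criticism concrete, here is a minimal forward-chaining rule
engine (a sketch in Python; the facts, rules, and names are invented for
illustration, not taken from either post).  It demonstrates Cockton's
premise: the system can conclude only what someone has already written
down as an explicit fact or rule.

    # Minimal forward-chaining rule engine (illustrative sketch only).
    # Everything the system "knows" must first be formalised by hand.

    facts = {"socrates is human"}

    # Each rule: if every premise is in the fact base, add the conclusion.
    rules = [
        (["socrates is human"], "socrates is mortal"),
    ]

    def forward_chain(facts, rules):
        # Apply the rules repeatedly until no new facts can be derived.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if all(p in derived for p in premises) \
                        and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # -> {'socrates is human', 'socrates is mortal'}
    # Anything outside the written formalisation ("is socrates wise?") is
    # simply absent: the knowledge base is exactly its written rules.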

> 	   Conclusion: AI, as a collection of mathematicians and computer
> 	   scientists playing with machines, cannot formalise psychology where
> 	   no convincing written account exists.  Advances here will come from
> 	   non-computational psychology first, as computational psychology has
> 	   to follow in the wake of the real thing. 

I am curious what sort of non-computational psychology you see as having had
great advances in recent years.

>   [yes, I know about connectionism, but then you have to formalise the
>    inputs.

For an intelligent robot (see below), you can take inputs directly from the
sensors.

>   Furthermore, you don't know what a PDP network does know]

This is a broad overgeneralization.  I would recommend reading Rumelhart &
McClelland's book.  You can indeed discover what a PDP network has learned,
but for very large networks, the process of examining all of the weights
and activations becomes impractical.  That, at least to me, is
suggestive of an analogy with human/animal brains with regard to the
complexity of the synapse/neuron interconnections (just suggestive, not
conclusive, by any means).
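
As a toy illustration of that point (my own sketch in Python, with an
invented network and data, not an example from Rumelhart & McClelland):
train a single threshold unit on AND and its learned weights can be read
off directly; the same inspection on thousands of units stops being
informative.

    # Train one linear threshold unit on AND with the perceptron rule,
    # then inspect the weights to see what the unit has "learned".

    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def output(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                  # a few epochs suffice for AND
        for x, target in data:
            error = target - output(x)   # perceptron rule: nudge the
            w[0] += 0.1 * error * x[0]   # weights toward the target
            w[1] += 0.1 * error * x[1]
            b += 0.1 * error

    print("weights:", w, "bias:", b)
    # With two weights and one bias the learned rule is transparent: the
    # unit fires only when both inputs are on.  With millions of weights
    # the same read-out is possible in principle but impractical.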

> 	   AI depends on being able to use written language (physical symbol
> 	   hypothesis) to represent the whole human and physical universe.

Depends on which variety of AI.....

>  BTW, Robots aren't AI. Robots are robots.

And artificially intelligent robots are artificially intelligent robots.

> 	3) The real world is social, not printed.

The real world is physical -- not social, not printed.  Unless you consider
it to be subjective, in which case if the physical world doesn't objectively
exist, then neither do the other people who inhabit it.

> Anyway, you did ask.  Hope this makes sense.

Well, you raise some valid criticisms of rule-based/logic-based/etc systems,
but these don't preclude the idea of intelligent machines, per se.  Consider
Hans Moravec's idea of building intelligence from the bottom up (starting
with simple robotic animals and working your way up to humans).

After all, suppose you could replace every neuron in a person's brain with
an electronic circuit that served exactly the same function, and afterwards,
the individual acted like exactly the same person.  Wouldn't you still
consider him to be intelligent?

So, if it is possible -- or at least conceivable -- in theory to build an
intelligent being of some type, the real question is how.

______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/10/88)

In article <1103@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> BTW, Robots aren't AI. Robots are robots.

I'm reminded of the Lighthill report that caused a significant loss of
AI funding in the UK about 10 years ago.  The technique is to divide
up AI, attack some parts as useless or worse, and then say the others
are "not AI".  The claim then is that "AI", properly used, turns out
to encompass only things best not done at all.  All of the so-called
AI that's worth supporting (and funding) belongs to other disciplines
and so should be done there.

Another example of this approach can be found earlier in the message:

> Note how scholars like John Anderson restrict themselves to proper
> psychological data.  I regard Anderson as a psychologist, not as an AI
> worker.

A problem with this attack is that it is not at all clear that
AI should be defined so narrowly as to exclude, for example, *all*
robotics.  That robots are robots does not preclude some of them
being programmed using AI techniques.  Nor would an artificial
intelligence embodied in a robot automatically fail to be AI.

The attack seeks to set the terms of debate so that the defenders
cannot win.  Any respectable result cited will turn out to be "not
AI".  Any argument that AI is possible will be sat on by something
like the following (from <1069@crete.cs.glasgow.ac.uk>):

   Before the 5th Generation scare, AI in the UK had been sat on for
   dodging too many methodological issues.  Whilst, like the AI pioneers,
   they "could see no reasons WHY NOT [add list of major controversial
   positions]", Lighthill could see no reasons WHY in their work.

In short, the burden of proof would be such that it could not be met.
The researcher who wanted to pursue AI would have to show the research
would succeed before undertaking it.

Fortunately, there is no good reason to accept the narrow definition
of AI, and anyone seeking to reject the normal use of the term should
accept the burden of proof.  AI is not confined to attempts at human-
level intelligence, passing the Turing test, or other similar things
now far beyond its reach.

Moreover, the actual argument against human-level AI, once we strip
away all the misdirection, makes claims that are at least questionable.

> The argument against AI depends on being able to use written language
> (physical symbol hypothesis) to represent the whole human and physical
> universe.  AI and any degree of literate-ignorance are incompatible.
> Humans, by contrast, may be ignorant in a literate sense, but
> knowledgeable in their activities.  AI fails as this unformalised
> knowledge is violated in formalisation, just as the Mona Lisa is
> indescribable.

The claim that AI requires zero "literate-ignorance", for example, is
far from proven, as is the implication that humans can call on
abilities of a kind completely inaccessible to machines.  For some
reasons to suppose that humans and machines are not on opposite sides
of some uncrossable line, see (again) Dennett's Elbow Room.

jbn@glacier.STANFORD.EDU (John B. Nagle) (05/11/88)

In article <1103@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> BTW, Robots aren't AI. Robots are robots.

      Rod Brooks has written "Robotics is a superset of AI".  Robots have
all the problems of stationary artificial intelligences, plus many more.
Several of the big names in AI did work in robotics back in the early days of 
AI.  McCarthy, Minsky, Winograd, and Shannon all did robotics work at one 
time.  But they did it in a day when the difficulty of the problems to be
faced was not recognized.  There was great optimism in the early days,
but even seemingly simple problems such as grasping turned out to be
very hard.  Non-trivial problems such as general automatic assembly or
automatic driving under any but the most benign conditions turned out to
be totally out of reach with the techniques available.

      Progress has been made, but by inches.  Nevertheless, I suspect that
over the next few years, robotics will start to make a contribution to
the more classic AI problems, as the techniques being developed for geometric
reasoning and sensor fusion start to become the basis for new approaches
to artificial intelligence. 

      I consider robotics a very promising field at this point in time.
But I must offer a caution.  Working in robotics is risky.  Failure is
so obvious.  This can be bad for your career.

					John Nagle