[comp.ai] Understanding

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/05/89)

In article <819@htsa.uucp> fransvo@htsa.UUCP (Frans van Otten) writes:
>Jerry Jackson writes:
>Please see the difference between "many simple tasks, all the same" and
>"many different and difficult tasks".  But yes: AI was invented (at least)
>20 years ago.  The cheque clearing system you write about does understand
>how to process a check.
No, it doesn't; it relies on humans to get the cheques to the right
place, and to input cheques where the magnetic characters can't be read
(hint on how to slow down cheque clearing :-)).  The decision on
'bouncing' a cheque, itself part of cheque clearing, is rarely made by
machines.

Cheque clearing is a human-machine system with a clearly defined set of
subordinate tasks assigned to the machine.  Humans hold the system
together, and therefore only they understand cheque clearing.  The
automated tasks possess no concept of cheque clearing in all its
glory.
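To put that division of labour in concrete terms, here is a minimal
sketch (all names hypothetical, not any real bank's system) of the kind
of pipeline being described: the automated part only routes cheques
whose magnetic line it can read, and everything else, including the
'bounce' decision, is referred to people.

# Hypothetical sketch of the human-machine split in cheque clearing.
# The machine's subordinate task is narrow routing; humans handle the rest.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cheque:
    micr_line: Optional[str]   # magnetic ink characters; None if unreadable
    amount: float
    account_balance: float

def machine_sort(cheque: Cheque) -> str:
    """The automated subordinate task: route by MICR line, or give up."""
    if cheque.micr_line is None:
        return "refer_to_human"           # machine cannot even find the account
    return "route_to_branch:" + cheque.micr_line[:6]

def human_decide_bounce(cheque: Cheque) -> bool:
    """The judgement the machine never makes here; crudely modelled as a balance check."""
    return cheque.amount > cheque.account_balance

c = Cheque(micr_line=None, amount=50.0, account_balance=20.0)
print(machine_sort(c))            # refer_to_human
print(human_decide_bounce(c))     # True, but only a human would actually decide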

>So when we understand how the human mind works, we can build a machine
>which has properties like "consciousness", "understanding" etc.

No, you assume these are properties of mind.  Until you give me a
sensible account of the role of 'mind' in human agency, I cannot accept
or reject anything you state on the issue.

How about the act of making a cup of tea?  Where does mind come in, in
the Chinese Tea Room?

As for its artefactness, it is questionable whether any artefact fully
copies any entity in the physical world.  Indeed, it may be impossible
to fully synthesise anything, since there is no objective test for
knowing that a natural entity is fully understood.  There are many
objective criteria for knowing that something is not fully understood,
as when the natural entity and the simulating artefact perform differently.

Natural entities and simulating/surrogate artefacts can only be
equivalent in so far as they perform the same way under a finite and
enumerated set of conditions (tasks for mind machines).  Under these
circumstances, 'complete understanding' is impossible.
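The point about a finite, enumerated set of conditions can be put in
testing terms: agreement over the enumerated set is all such a
comparison can establish.  A small illustrative sketch (the two
functions are hypothetical stand-ins for the natural entity and the
artefact):

# Two functions that agree on every enumerated condition yet differ elsewhere:
# passing the enumerated set establishes equivalence only over that set.

def natural_entity(x: int) -> int:
    return x * x

def simulating_artefact(x: int) -> int:
    return x * x if x >= 0 else 0    # diverges outside the tested conditions

enumerated_conditions = [0, 1, 2, 3, 4, 5]   # the finite set of tasks compared

agree_on_set = all(natural_entity(x) == simulating_artefact(x)
                   for x in enumerated_conditions)
print(agree_on_set)                                    # True, over these conditions only
print(natural_entity(-3) == simulating_artefact(-3))   # False: the equivalence was partial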

The fact is, societies only seek knowledge on useful things.  The
post-war institutionalisation of knowledge for its own sake is a minor
deviation which is on the way out.  Total knowledge is uninteresting.
Sensible folk restrict themselves to useful knowledge with an obvious
application (I count filling out existing applied theories as useful,
so 'basic' research is not ruled out by this dogma).

The only things that matter are tasks, and even these are slippery, as
I/O equivalence cannot be tightly defined for most interesting tasks.
The Turing test is thus an uninteresting subjective game.  People are
unlikely to agree that a system is 'intelligent'.  It depends on who
you ask, and what they ask the system to do.

Given all these epistemological problems - and more (see all 17 volumes
of obscure Euro-drivel) - I stick to my argument that computer
simulation cannot advance our understanding of 'mind', rather it always
lags behind it (even pulling it back by showing gaps in current
understandings).  The gap between understanding and computability is
even larger, due to the lack of sources used by strong AI research.
Current computer models come nowhere near our cultural understanding of
human agency, and given the preference for hacking over reading, the
gap is unlikely to be closed.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

rjc@aipna.ed.ac.uk (Richard Caley) (04/07/89)

In article <2728@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>I stick to my argument that computer
>simulation cannot advance our understanding of 'mind', rather it always
>lags behind it (even pulling it back by showing gaps in current
>understandings).

So now showing up gaps in our current understanding is a bad thing? We
should perhaps skip along happily, constructing partial and possibly
self-contradictory theories?




-- 
	rjc@uk.ac.ed.aipna

 "Politics! You can wrap it up in fancy ribbons, but you can't hide the smell"
			- Jack Barron

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (04/10/89)

>So now showing up gaps in our current understanding is a bad thing? We
>should perhaps skip along happily, constructing partial and possibly
>self-contradictory theories?

Of course not.  Folk who need to spend months writing a computer
program to do it, though, can't be using their grey matter to its limit.

The question is, has computer simulation of 'mind' exposed ignorance
unknown to mainstream cognitive psychologists?  And what has computer
simulation got to do with 'self-contradiction'?  Is computer
logic really the arbiter of consistency?

If not, Strong AI is an expensive way of finding holes in knowledge.
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert