[comp.ai.digest] [gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK: Immortality, Home-made BDI states and Systems Theory]

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/02/88)

From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Date: Mon, 30 May 88 07:13 EDT
To: AIList@ai.ai.mit.edu
Subject: Immortality, Home-made BDI states and Systems Theory (3 snippets)

In article <8805250631.AA09406@BLOOM-BEACON.MIT.EDU>
>Step (4) gives us virtual immortality, since whenever our current
>intelligence-carrying hardware (human body? computer? etc.) is about to
>give up (because of a disease, old age ...) we can transfer the
>intelligence to another piece of hardware. There are some more delicate
>problems here, but you get the idea.
Historically, I think there's going to be something in this.  There is
no doubt that we can embody in a computer program something that we
would not sensibly embody in a book.  In this sense, computers are
going to alter what we can pass on from one generation to another.

But there are also similarities with books.  Books get out of date; so
will computer programs.  As I've said before, we've got one hell of a
maintenance problem with large knowledge-based programs.  Is it really
going to be more economical than people?  See the latest figures on
automation in the car industry, where training costs are going through
the roof as robots move from single functions to programmable tasks.
GM's most heavily automated plant (Hamtramck, Michigan) is less productive
than a much less automated one at Fremont, California (Economist, 21/5/88, pp. 103-104).


In article <8805250631.AA09382@BLOOM-BEACON.MIT.EDU>
>
>Let my current beliefs, desires, and intentions be called my BDI state.
These change.  Have you no responsibility for the way they change?  Do
you just wake up one morning a different person, or are you consciously 
involved in major changes of perspective?  Or do you never change?

In article <19880527050240.9.NICK@MACH.AI.MIT.EDU>
>Gilbert Cockton: Even one reference to a critique of Systems Theory would be
>helpful if it includes a bibliography.
I recommended the work of Anthony Giddens (King's College, Cambridge).
There are sections on systems theory in his "Studies in Social and Political
Theory" (either Hutchinson or Macmillan or Polity Press, can't remember which).

A book which didn't impress me a long time ago was Apple's "Ideology
and Education" or something like that.  He's an American Marxist, but
I remember references to critiques of systems theory scattered through
his polemic.  I'll try to find a full reference.

Systems theory is valuable compared to classical science.  To me,
systems theory and simulation as a scientific method go hand in hand.
It falls down in its overuse of biological concepts, which, along with
mathematics, represent the two main scientific influences on many
post-war approaches to humanity (sociobiological game theory, ugh!).

Another useful book is David Harvey's "Science, Ideology and Human
Geography" (Longman?), which followed his systems theory/postivist 
"Explanation in Geography".  You'll see both sides of systems theory
in his work.

Finally, I am surprised at the response to my original comments on
free will and AI.  The point is still being missed: our current
society needs free will, whether or not its existence can be
established philosophically.  But I have changed my mind about "AI's"
concern for the issue, given both the orderliness of John McCarthy's
presentation of a 1969 paper (I missed it the first time round, having
just started secondary school :-)) and Drew McDermott's awareness of
the importance of the issue and its relation to modern science and
dualism, plus all the other traffic on the topic.  I only wish AI
theorists could get to grips with the socialisation question as well,
and understand more sympathetically why dualism persists (by law, in
the case of the UK school curriculum).

Hope you're enjoying all this as much as I am :-)
Gilbert.