[comp.ai] Against AI

feedback (Bryan Bankhead) (09/25/90)

I may get flamed to death, but I have to say this: although I have a vital 
interest in the area of AI, in science all pronouncements must be subject 
to COUNTERVERIFIABILITY.  With this in mind, I have decided to start a 
thread dealing with arguments AGAINST the idea that true artificial 
intelligence is reproducible.
        1/ Intelligence occurs at the wrong hierarchical level 
                        We may not be capable of programming what is going 
on in our minds because intelligence is produced at levels our software 
is not capable of "backstrapping" (a term I just coined) itself to; that 
is, our software may not be capable of instantiating our 'other order' 
operations in the form that our 'awareness' is capable of processing.
There are known examples of this in computer science: for instance, it has 
been proven that von Neumann programming constructs are incapable of 
determining whether they themselves are self-terminating.  Other laws of 
information processing may indeed prevent the recursive instantiation of 
the 'clockwork' behind our own consciousness.
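For concreteness, here is a rough sketch of that well-known argument in
Python-style code.  The names are my own invention and this is only an
illustration, not a formal proof:

# Suppose a perfect halting decider existed; the point of the argument is
# that no such total, always-correct function can be written.
def halts(program, data):
    """True iff program(data) eventually halts -- assumed for contradiction."""
    raise NotImplementedError("no such function can exist")

# Build a program that does the opposite of whatever halts() predicts
# about a program run on its own text.
def diagonal(program):
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    else:
        return               # predicted to loop, so halt at once

# Now consider diagonal(diagonal): if halts() answers True, diagonal loops
# forever; if it answers False, diagonal halts.  Either answer is wrong,
# so no correct halts() can be written.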
        Please note that I have placed the number 
'1' in front of the instance above.  I am sure you bright people are 
capable of adding to the list.
 Thank you for your time
                        B Bankhead

tesar@boulder.Colorado.EDU (Bruce Tesar) (09/26/90)

In article <94y2P6w163w@bluemoon.UUCP> feedback (Bryan Bankhead) writes:

>I may get flamed to death, but I have to say this: although I have a vital 
>interest in the area of AI, in science all pronouncements must be subject 
>to COUNTERVERIFIABILITY.

    I believe this is more commonly referred to as falsifiability, namely
the idea that a theory is not scientific unless there is some conceivable
evidence that could disprove it.  If that is
what you mean, then I agree completely; in fact, it is a fundamental
tenet of the philosophy of science.

>        1/ Intelligence occurs at the wrong hierarchical level 
>                        We may not be capable of programming what is going 
>on in our minds because intelligence is produced at levels our software 
>is not capable of "backstrapping" (a term I just coined) itself to; that 
>is, our software may not be capable of instantiating our 'other order' 
>operations in the form that our 'awareness' is capable of processing.
>There are known examples of this in computer science: for instance, it has 
>been proven that von Neumann programming constructs are incapable of 
>determining whether they themselves are self-terminating.  Other laws of 
>information processing may indeed prevent the recursive instantiation of 
>the 'clockwork' behind our own consciousness.

Are you assuming that programs can only work at one level of
abstraction? If you are, I would suggest you think some more.
It is true that many "AI" programs work directly on objects like
words and high-level concepts. However, if you can define a model
that uses lower-level, brain-like objects and concepts, then you
could try writing your program with objects at that level. The
thinking here is that you start at the lower level, rather than writing
a program at a high level and trying to get it to "backstrap" to
the lower levels (if I understand what you mean by that term).
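As a toy illustration (entirely my own, and no claim about real neurons),
here is a short program written in terms of neuron-like units rather than
words or concepts: a single threshold unit trained with the classic
perceptron rule to compute logical AND.

# One "neuron": a weighted sum pushed through a hard threshold, trained
# by nudging the weights whenever the output is wrong.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                      # AND truth table

w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(50):
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = t - out                       # zero when the unit is right
        w[0] += rate * err * x1             # perceptron learning rule
        w[1] += rate * err * x2
        bias += rate * err

print([1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0 for (x1, x2) in inputs])
# prints [0, 0, 0, 1] once the unit has learned AND

Nothing about this is "intelligent," of course; the point is only that the
objects the program manipulates live at a lower level than words.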

> Thank you for your time
>                        B Bankhead


==========================
Bruce B. Tesar
Computer Science Dept., University of Colorado at Boulder 
Internet:  tesar@boulder.colorado.edu

cpshelley@violet.uwaterloo.ca (cameron shelley) (09/27/90)

  So, your reasoning is: 1/ the halting problem exists, therefore
*true* AI does not.  This would be quite acceptable if I could see
some reason why one necessarily excludes the other.

  Frankly, "proofs" of the impossibility of computer intelligence
strike me as very much like the medieval "proofs" of the existence 
of god: they amount to statements of opinion.  Perhaps the problem
of machine intelligence is much like the halting problem:  if we
succeed then we'll know, otherwise the question will simply remain
undecided.

--
------------------------------------------------------------------
| Cameron Shelley                 |  Office: dc 2136             |
| cpshelley@violet.waterloo.edu   |  Phone: (519) 885-1211 x3390 |
|----------------------------------------------------------------|

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (09/27/90)

In article <94y2P6w163w@bluemoon.UUCP> feedback (Bryan Bankhead) writes:
>        1/ Intelligence occurs at the wrong hierarchical level 
>                        We may not be capable of programming what is going 
>on in our minds because intelligence is produced at levels our software 
>is not capable of "backstrapping" (a term I just coined) itself to; that 
>is, our software may not be capable of instantiating our 'other order' 
>operations in the form that our 'awareness' is capable of processing.

I think we are on fairly safe ground if we claim that the cognitive processing
done by "symbolic AI" is not what is being done at the level of neurons.
The big question is whether the processing done by "symbolic AI"
is similar to what is being done by millions of neurons.

I have experience with artificial neural networks.  ANNs have developed
to the point where we can solve a good number of fairly small
"toy" problems, and even some "slightly-less-than-toy" problems.
The problem with neural networks has always been one of scaling.
Big, complex problems are still not easily tackled by ANNs.
However, the problems they can solve by induction still seem very
neat, especially compared to "symbolic AI."  For example, an ANN
can learn to perform a pole-balancing task in a fairly short time
using reinforcement learning. 
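To make that concrete, here is a very rough toy of my own (not any
published system): a linear policy on a crude cart-pole simulation,
improved by random hill-climbing standing in for a proper
reinforcement-learning rule.  Every constant below is made up for
illustration.

import math
import random

def balance_time(weights, steps=200, dt=0.02):
    """Crude cart-pole physics: count how long the pole stays near upright."""
    x, x_dot, theta, theta_dot = 0.0, 0.0, 0.05, 0.0     # small initial tilt
    for t in range(steps):
        state = (x, x_dot, theta, theta_dot)
        # Linear policy: push the cart left or right based on a weighted sum.
        force = 10.0 if sum(w * s for w, s in zip(weights, state)) > 0 else -10.0
        theta_acc = 9.8 * math.sin(theta) - 0.2 * force * math.cos(theta)
        x_dot += force * dt
        x += x_dot * dt
        theta_dot += theta_acc * dt
        theta += theta_dot * dt
        if abs(theta) > 0.21 or abs(x) > 2.4:             # ~12 degrees, 2.4 m track
            return t                                      # pole fell or cart ran off
    return steps

best_w, best_score = [0.0, 0.0, 0.0, 0.0], 0
for trial in range(500):
    candidate = [w + random.gauss(0.0, 0.5) for w in best_w]  # perturb the best policy
    score = balance_time(candidate)
    if score > best_score:                 # keep changes that earn more "reward"
        best_w, best_score = candidate, score

print("best balancing time:", best_score, "of 200 steps")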

There are some things which "symbolic AI" does really well.  For instance,
MACSYMA can do all kinds of symbolic algebra: partial fraction
expansion, derivatives, integrals, Laplace transforms, and so on.
Many "symbolic AI" programs use deductive reasoning to solve their
problems (i.e., build up to a goal from known subgoals).
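To give a flavor of that kind of symbolic manipulation, here is a toy
differentiator of my own (it has none of MACSYMA's scope, and does no
simplification): expressions are nested tuples, and the rules of calculus
are applied by recursion.

def diff(expr, var):
    """d(expr)/d(var) for expressions built from numbers, variable names,
    ('+', a, b) and ('*', a, b)."""
    if isinstance(expr, (int, float)):
        return 0                                 # constant
    if isinstance(expr, str):
        return 1 if expr == var else 0           # a variable name
    op, a, b = expr
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                                # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError("unknown operator: " + str(op))

# d/dx of x*x + 3*x
print(diff(('+', ('*', 'x', 'x'), ('*', 3, 'x')), 'x'))
# -> ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), ('+', ('*', 0, 'x'), ('*', 3, 1)))
#    which is 2*x + 3 once simplified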

I think that more impressive cognitive systems will eventually be built
utilizing both of these modalities.  Sub-goals can be inductively 
learned using neural networks.  These sub-goals will be fairly simple
problems.  The networks will then be strung together, in ways
similar to traditional symbolic structures, to solve large, difficult
goals.  The heuristics of goal solving from "symbolic AI" will be
helped by neural networks building sub-goal networks using
inductive-style learning.
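A cartoon of what I mean (purely illustrative; the "learned" modules here
are stubbed out as ordinary functions): a symbolic layer holds the goal
decomposition, and each sub-goal is delegated to a module that would, in a
real system, be a trained network.

def locate_block(state):            # in a real system: an inductively trained net
    state["block_found"] = True
    return state

def grasp_block(state):
    if state.get("block_found"):
        state["holding"] = True
    return state

def place_block(state):
    if state.get("holding"):
        state["placed"] = True
    return state

# The symbolic layer: an explicit decomposition of the large goal into
# sub-goals, each handled by a learned module.
plan = [locate_block, grasp_block, place_block]

state = {}
for subgoal in plan:
    state = subgoal(state)

print("goal achieved:", state.get("placed", False))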

Of course, I could be totally wrong.

>There are known examples of this in computer science: for instance, it has 
>been proven that von Neumann programming constructs are incapable of 
>determining whether they themselves are self-terminating.

This is kind of a red herring.  I could easily write a program which
could say whether it is self-terminating.  It would say "yes."
Not a "theoretical" program, but a real one running on a real machine.
It has a very high percentage chance of being right (especially
if I don't pay the electric bill :-)
People determining whether they are self-terminating stand about as
good a chance.
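For the record, the whole program:

print("yes")    # asked whether it is self-terminating, it answers -- and it is usually right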

-Thomas Edwards

rstark@cogsci.ed.ac.uk (Randall Stark) (09/30/90)

shouldn't this discussion be in comp.ai.philosophy?