[net.ai] parallelism vs. novel architecture

"GlasserAlan"@LLL-MFE.ARPA (11/15/83)

There has been a lot of discussion in this group recently about the
role of parallelism in artificial intelligence.  If I'm not mistaken,
this discussion began in response to a message I sent in, reviving a
discussion of a year ago in Human-Nets.  My original message raised
the question of whether there might exist some crucial, hidden,
architectural mechanism, analogous to DNA in genetics, which would
greatly clarify the workings of intelligence.  Recent discussions
have centered on the role of parallelism alone.  I think this misses
the point.  While parallelism can certainly speed things up, it is
not the kind of fundamental departure from past practices which I
had in mind.  Perhaps a better example would be Turing's and von
Neumann's concept of the stored-program computer, replacing earlier
attempts at hard-wired computers.  This was a fundamental
breakthrough, without which nothing like today's computers could be
practical.  Perhaps true intelligence, of the biological sort,
requires some structural mechanism which has yet to be imagined.
While it's true that a serial Turing machine can do anything in
principle, it may be thoroughly impractical to program it to be
truly intelligent, both because of problems of speed and because of
the basic awkwardness of the architecture.  What is hopelessly
cumbersome in this architecture may be trivial in the right one.  I
know this sounds pretty vague, but I don't think it's meaningless.

notes@ucbcad.UUCP (11/19/83)

ucbesvax!turner    Nov 19 00:55:00 1983

Re: parallelism and fundamental discoveries

The stored-program concept (Von Neumann machine) was indeed a breakthrough
both in the sense of Turing (what is theoretically computable) and in the
sense of Von Neumann (what is a practical machine).  It is noteworthy,
however, that I am typing this message using a text editor with a segment
of memory devoted to program, another segment devoted to data, and with an
understanding on the part of the operating system that if the editor were
to try to alter one of its own instructions, the operating system should
treat this as pathological, and abort it.

In other words, the vaunted power of being able to write data that can be
executed as a program is treated in the most stilted and circumspect manner
in the interests of practicality.  It has been found to be impractical to
write programs that modify their own inner workings.  Yet people do this to
their own consciousness all the time--in a largely unconscious way.
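The contrast drawn above -- between the stored-program ideal and how
gingerly real systems treat executable data -- can be illustrated with a
small sketch (in a modern high-level language, purely for illustration,
not anything the original poster wrote): a program that composes new code
as ordinary data at run time and then executes it, precisely the kind of
thing an editor's write-protected text segment is forbidden to do to its
own machine instructions.

```python
# A minimal, hypothetical sketch of treating data as executable code:
# source text is built as an ordinary string at run time, compiled,
# and executed -- the stored-program power that operating systems
# deliberately fence off for machine code by marking program segments
# read-only.

def make_adder(n):
    # Build source code as a plain string (data)...
    source = f"def adder(x):\n    return x + {n}\n"
    namespace = {}
    # ...then turn that data into an executable function object.
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["adder"]

add_five = make_adder(5)
print(add_five(10))  # 15
```

In a language like this the trick is routine; at the machine-code level,
as the post notes, the same maneuver is treated as pathological.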

Turing-computability is perhaps a necessary condition for intelligence.
(That's been beaten to death here.)  What is needed is a sufficient condition.
Can that possibly be a single breakthrough or innovation?  There is no
question that, working from the agenda for AI that was so hubristically
laid out in the 50's and 60's, such a breakthrough is long overdue.  Who
sees any intimation of it now?

Perhaps what is needed is a different kind of AI researcher.  New ground
is hard to break, and harder still when the usual academic tendency is to
till old soil until it is exhausted.  I find it interesting that many of
the new ideas in AI are coming from outside the U.S. AI establishment
(MIT, CMU, Stanford, mainly).  Logic programming seems largely to be a
product of the English-speaking world *apart* from the U.S.  Douglas
Hofstadter's ideas (though probably too optimistic) are at least a sign
that, after all these years, some people find the problem too important
to be left to the experts.  Tally Ho!  Maybe AI needs a nut with the
undaunted style of a Nikola Tesla.

Some important AI people say that Hofstadter's schemes can't work.  This
makes me think of the story about the young 19th century physicist, whose
paper was reviewed and rejected as meaningless by 50 prominent physicists
of the time.  The 51st was Maxwell, who had it published immediately.
---
Michael Turner (ucbvax!ucbesvax.turner)