lemmon (12/06/82)
I think that looking first at hardware won't tell us much. There are
a couple of reasons for this:
1. What Godel, Turing, and others were getting at is, "What can
be done by a formal (read "mechanical" or "predictable" if you
prefer) system?" At this level hardware is not relevant,
though of course some operations could be done orders of
magnitude more efficiently with certain hardware.
2. The brain is NOT a random collection of neurons. Recent research
(see for example the Dec. 1982 Scientific American) has shown
that the brain is very tightly organized. Presumably some
details of its structure are irrelevant to its operation,
and others vital; and we don't know yet exactly which are which.
In any case random collections of things seem a poor starting
point for anything interesting; remember the adage "Garbage In,
Garbage Out".
3. Even if you do have a piece of hardware that does something
interesting, how will you describe its operation? Descriptions
in terms of basic hardware units are like looking at the bits
in the machine code. To get at "what it all means" requires
a rather high level of abstraction, which in turn means
that you are in effect discussing software.
As far as the primordial soup is concerned, I could flame a lot,
but I don't see it giving rise to the kind of organization that
supports DIRECTED synthesis of proteins. I compare proponents of such
theories to a contractor who might submit a bill for completing a house
when he has only delivered (successfully, to be sure) piles of bricks
and lumber.
I would like to see a newsgroup for AI (and for evolution,
for that matter).
Alan Lemmon
...linus!lemmon
lemmon@mitre or lemmon@usc-ecl

leichter (12/07/82)
Re: "primordial soup" theory of AI. This was a big deal back at the beginning of AI - late 50's and 60's. The "Chaostron" article that someone mentioned was a spoof of real work; the devices were called "Perceptrons". The buzzword for the whole idea was "self-organizing". While some interesting work was done, ultimately it was found to be way too limited a point of view; and, as we now know (but didn't really know then, the brain isn't "self-organizing" from a totally random mess anyway. The whole "self-organizing" era pretty much ended with the publication of a book on Perceptrons by Marvin Minsky (and someone else whose name escapes me). They sat down and developed a good mathematical theory to describe the things, and showed just what kinds of systems could and could not arise by "self-organization". They did such a thorough job of exploring the limi- tations that no one has touched the things since. (Minsky says that he sometimes regrets that they worked at it so long & published so much; they killed off a large number of potentially good PhD theses.) -- Jerry decvax!yale-comix!leichter leichter@yale