MAREK%MIT-OZ@MIT-MC.ARPA (01/21/84)
From: Marek W. Lugowski <MAREK%MIT-OZ@MIT-MC.ARPA>
DROGERS (c. November '84):
I have a few questions I would like to ask, some (perhaps most)
essentially unanswerable at this time.
Apologies in advance for rashly attempting to answer at this time.
- Should the initially constructed subcognitive systems be
"learning" systems, or should they be "knowledge-rich" systems? That
is, are the subcognitive structures implanted with their knowledge
of the domain by the programmer, or is the domain presented to the
system in some "pure" initial state? Is the approach to
subcognitive systems without learning advisable, or even possible?
I will go out on a limb and claim that attempting wholesale "learning"
first (whatever that means these days) is silly. I would think one
would first want to spike the system with a hell of a lot of knowledge
(e.g., Dughof's "Slipnet" of related concepts whose links are subject to
cumulative, partial activation that eventually makes the nodes so
connected highly relevant and therefore taken into consideration by the
system). To repeat Minsky (and probably most of the AI folk): one can
only learn something if one already almost knows it.
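To make the flavor of such cumulative, partial activation concrete, here
is a minimal sketch in Python (used here purely for legibility). The node
names, link weight, decay rate, and threshold are illustrative
assumptions, not Copycat's actual parameters:

    # A toy slipnet: nodes accumulate activation passed along weighted
    # links; a node counts as "relevant" once it crosses a threshold.
    class Slipnode:
        def __init__(self, name, threshold=1.0):
            self.name = name
            self.activation = 0.0
            self.threshold = threshold
            self.links = []              # (neighbor, weight) pairs

        def is_relevant(self):
            return self.activation >= self.threshold

    def spread_activation(nodes, decay=0.1):
        # One iteration: every node passes a weighted fraction of its
        # activation to its neighbors, and all activation decays a bit.
        incoming = {n: 0.0 for n in nodes}
        for n in nodes:
            for neighbor, weight in n.links:
                incoming[neighbor] += n.activation * weight
        for n in nodes:
            n.activation = (1.0 - decay) * n.activation + incoming[n]

    # A clamped "successor" node gradually primes the related concept
    # "opposite" until it, too, becomes relevant to the system.
    a, b = Slipnode("successor"), Slipnode("opposite")
    a.links.append((b, 0.3))
    for _ in range(10):
        a.activation = 1.0               # keep the source clamped
        spread_activation([a, b])
    print(b.name, b.is_relevant())       # opposite True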
- Assuming human brains are embodiments of subcognitive systems,
then we know how they were constructed: a very specific DNA
blueprint controlling the paths of development possible at various
times, with large assumptions as to the state of the intellectual
environment. This grand process was created by trial-and-error
through the process of evolution, that is, essentially random
chance. How much (if any) of the subcognitive system must be created
essentially by random processes? If essentially all, then there are
strict limits as to how the problem should be approached.
This is an empirical question. If my current attempt at implementing
the Copycat Project (which uses the Slipnet described above)
[forthcoming MIT AIM #755 by Doug Hofstadter] converges nicely with only
trivial tweaking, I'll be inclined to hold that random processes can
indeed do most of the work. Such is my current, unfounded belief. On
the other hand, a failure would not debunk my position--I could always
have messed up the implementation or made bad guesses that "threw"
the system out of its potential convergence.
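For what "random processes doing most of the work" might mean concretely:
the system picks its next micro-task stochastically, biased by urgency, so
every step is random yet the run as a whole is directed. A hedged sketch
in the spirit of Copycat's codelet/coderack mechanism (task names and
urgencies are illustrative assumptions, not the project's actual values):

    import random

    def pick_codelet(coderack):
        # Draw one task at random, weighted by urgency: the urgent
        # thing usually happens, but nothing is ever guaranteed.
        names, urgencies = zip(*coderack)
        return random.choices(names, weights=urgencies, k=1)[0]

    # A "coderack" of candidate micro-tasks, each with an urgency.
    coderack = [("build-bond", 5.0), ("break-bond", 1.0),
                ("scan-string", 3.0)]
    print(pick_codelet(coderack))    # usually "build-bond", not always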
- Which processes of the human brain are essentially subcognitive
in construction, and which use other techniques? Is this balance
optimal? Which structures in a computational intelligence would be
best approached subcognitively, and which by other methods?
Won't even touch the "optimal" question. I would guess that any process
involving a great deal of fan-in would need to be subcognitive in
nature; I argue this from efficiency. For now, and for want of
better theories, I'd approach ALL brain functions using subcognitive
models. The alternative to this at present means von Neumannizing the
brain, an altogether quaint thing to do...
- How are we to judge the success of a subcognitive system? The
problems inherent in judging the "ability" of the so-called expert
systems will be many times worse in this area. Without specific goal
criteria, any results will be unsatisfying and potentially illusory
to the watching world.
Performance and plausibility (in that order) ought to be our criteria.
Judging performance accurately, however, will continue to be difficult
as long as we are forced to use current computer architectures.
Still, if a subcognitive system converges at all on a LispM, there's no
reason to damn its performance. Plausibility is easier to demonstrate;
one needs to keep in touch with the neurosciences to do that.
- Where will thinking systems REALLY be more useful than (much
refined) expert systems? I would guess that for many (most?)
applications, expertise might be preferable to intelligence. Any
suggestions about fields for which intelligent systems would have a
real edge over (much improved) expert systems?
It's too early (or too late?!) to draw such clean lines. Perhaps REAL
thinking and expertise are much more intertwined than is currently
thought. Anyway, there is nothing to be gained by pursuing that line of
questioning before WE learn how to explicitly organize knowledge better.
Overall, I defend pursuing things subcognitively for these reasons:
-- Not expecting thinking to be a cleanly organized, top-down-driven
activity keeps one's assumptions minimal. Compare thinking with systems
such as cellular automata (e.g., the Game of Life) or the Iterated
Pairwise Prisoner's Dilemma to convince yourself of the futility of
top-down modeling where local rules and their iterated interactions
describe the problem at hand concisely and successfully. There is no
reason to expect the brain's top-level behavior to be any easier to
explain away.
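To see how far local rules go, here is a minimal Game of Life step (the
glider below is the standard pattern). Each cell's fate depends only on
its eight neighbors, yet the iterated rule makes whole patterns glide
across the grid--a global fact no local rule mentions:

    from itertools import product

    def neighbors(cell):
        r, c = cell
        return [(r + dr, c + dc)
                for dr, dc in product((-1, 0, 1), repeat=2)
                if (dr, dc) != (0, 0)]

    def step(live):
        # Count live neighbors for every cell adjacent to a live cell.
        counts = {}
        for cell in live:
            for n in neighbors(cell):
                counts[n] = counts.get(n, 0) + 1
        # The whole "theory" of the system is two local rules: survive
        # with 2 or 3 live neighbors, be born with exactly 3.
        return {c for c, k in counts.items()
                if k == 3 or (k == 2 and c in live)}

    # A glider: after 4 steps the same shape reappears, shifted by (1, 1).
    glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))    # [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]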
-- AI has been spending a lot of itself on forcing a von Neumannian
interpretation on the mind. At CMU they have it down to an art, with
Simon's "symbolic information processing" now the proverbial Holy
Grail. With all due respect, I'd like to see more research devoted to
modeling various alleged brain activities with a high degree of
parallelism and probabilistic interaction--systems where "symbols" are
not givens but intricately involved intermediates of computation.
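One hedged illustration of such probabilistic interaction (my sketch, in
the spirit of Boltzmann-machine-style stochastic units, not a claim about
any particular brain model): each unit turns on with probability given by
the logistic of its weighted input, and a "symbol" would be a recurring
coalition of units--an intermediate of the computation, not a given:

    import math, random

    def sweep(states, weights, temperature=1.0):
        # One asynchronous sweep: each unit samples its next state from
        # the logistic of its net input from the other units.
        for i in range(len(states)):
            net = sum(weights[i][j] * states[j]
                      for j in range(len(states)) if j != i)
            p_on = 1.0 / (1.0 + math.exp(-net / temperature))
            states[i] = 1 if random.random() < p_on else 0
        return states

    # Two mutually excitatory units: neither unit alone "is" the
    # symbol; the recurring joint pattern [1, 1] is.
    weights = [[0.0, 2.0], [2.0, 0.0]]
    states = [1, 0]
    for _ in range(20):
        states = sweep(states, weights)
    print(states)            # most often [1, 1]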
-- It has not been done carefully before and I want at least a thesis
out of it.
-- Marek