[comp.ai] Capabilities of "logic machines"

Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) (11/04/88)

In article <349@uceng.UC.EDU>, dmocsny@uceng (daniel mocsny) writes:
>Man can build physical mechanisms that can outperform his own physical
>work capacity by orders of magnitude. We can't even define intelligence,
>much less establish limits for it. I see no reason to doubt that he
>will one day build a machine that is more intelligent than himself, unless
>the dualist view is correct (and physico-chemical mechanisms cannot
>account for intelligence). However, if you asked me ``Can Man build a
>_logic_ machine more intelligent than himself?'' I would laugh.

What do you consider to be a "logic machine" ??  You might mean any of:

	- A formal system based on axioms and inference rules
	- A rule-based system (e.g., OPS)
	- A standard computer
	- Lots of other things

Which of these you mean determines the correctness of some of the things you
say below, such as:

>However, logic machines require explicit programming for the most trivial
>tasks.  They are neither self-organizing nor adaptive. They do not learn from
>everyday experience in a generally useful way. As long as that is true they
>can never possess what we could reasonably call intelligence.

This is not necessarily true for any of the definitions of "logical machines"
that I gave above.  Can you give some more details about exactly what you're
saying??

>The connectionist approach to AI may succeed in creating machines that
>correct these glaring deficiencies of logic machines. If so, then in
>combination with logic machines they may create a hybrid intelligence that
>exceeds anything we have yet seen. Especially if that hybrid includes us.

If you're claiming that it's possible to do something with connectionist 
models that it's not possible to do with "logical machines," you have to
define "logical machines" in such a way that they aren't capable of
simulating connectionist models.  

On the other hand, I think your claim is incorrect even if simulating
connectionist models on "logical machines" is ignored.

>In any case, discussing whether machines will exceed human intelligence
>is a bit premature, rather like arguing over how tall a redwood seedling
>might eventually become. Probably none of us will live to see the
>question settled, and the seedling has an enormous struggle ahead of
>it. Better to pay attention to nibbling away at subproblems...

While you're working away on your subproblem, you shouldn't ignore other
people's subproblems.  I am the last person to question the validity of
connectionist approaches to AI, but it looks as if you are unfamiliar with
any recent work in the more "classical" areas of AI (e.g., machine learning,
case-based reasoning, etc.).


Bruce Krulwich

dmocsny@uceng.UC.EDU (daniel mocsny) (11/07/88)

In article <42136@yale-celray.yale.UUCP>, Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) writes:

[ in reply to my doubts about ``logic-machine'' approaches to learning ]

> If you're claiming that it's possible to do something with connectionist 
> models that it's not possible to do with "logical machines," you have to
> define "logical machines" in such a way that they aren't capable of
> simulating connectionist models.  

Good point, and since simulating a connectionist model can be easily
expressed as a sequence of logical operations, I would have to be
pretty creative to design a logical machine that could not do that.
(By ``logical machine,'' I mean any algorithmic device with sufficient
generality to implement any of the instances you cited in your
article.)
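
To make that concrete, here is a little sketch (in Python, with weights and
inputs invented purely for illustration) of how a single connectionist unit
boils down to a short sequence of elementary arithmetic operations, exactly
the kind of thing any of the machines you listed can grind through:

    # One connectionist unit as a plain sequence of elementary operations.
    # The weights, inputs, and bias are made-up illustration values.
    import math

    weights = [0.5, -1.2, 0.8]
    inputs  = [1.0,  0.3, 0.7]
    bias    = -0.1

    total = bias
    for w, x in zip(weights, inputs):          # one multiply-add at a time
        total += w * x

    output = 1.0 / (1.0 + math.exp(-total))    # sigmoid squashing function

Scale that loop up over a few thousand units and iterate, and you have a
slow but perfectly serviceable simulation of the net.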

I have a vague concept of a ``universal computer,'' gleaned from the
occasional Wolfram or Hopfield paper, distorted somewhat through the
transfer function of my inadequate understanding, but retaining some
conceptual utility nonetheless. A sufficiently capable computer,
whether based on a Von Neumann or PDP model, should be able to
simulate all other computers, given enough time and memory. A machine
works best in its own ``native mode,'' but that does not limit all the
things we might kludge it up to do. An occasional human brain can
(under appropriate duress) be made to operate at least momentarily
much like a logical machine -- pushing symbols around, performing
elementary operations on them one at a time, until the input vector
becomes the output vector. I have trouble imagining that is what is
going on when I recognize a friend's face, predict a driver's
unsignaled turn by the sound of his motor, realize that a particular
computer command applies to a novel problem, etc.

Upon a microsecond's reflection I must admit that all connectionist
models require explicit programming of some sort. Before they can
start learning, someone must specify their structure, to ``get the
ball rolling,'' so to speak. Indeed, our own brains start off with
explicit genetic programming. The difference, I suppose, is all in the
amount of programming required, compared to the total information
gain. The information content of the human genome is ~750 MB, of which
a sizable fraction determines our basic brain structure. The human
brain goes on to absorb a terrific amount of information during its
service life. (Terabytes? With electric stimulus, your brain can
recall past experiences in vivid detail -- sights, sounds, smells,
textures. If you've done any graphics or audio work, you'll know
that's scary.)
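
(The 750 MB figure is nothing deeper than the back-of-the-envelope
arithmetic below, using the usual rough estimate of 3 billion base pairs;
I spell it out only because the comparison rests on it.)

    # Rough information capacity of the human genome (illustrative numbers).
    base_pairs    = 3e9     # approximate haploid genome length
    bits_per_base = 2       # four possible bases, so log2(4) = 2 bits each
    total_bits    = base_pairs * bits_per_base
    megabytes     = total_bits / 8 / 1e6
    print(megabytes)        # prints 750.0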

Can a system that only does logical inferences on symbols with direct
semantic significance achieve a similar information gain through
experience? Can we really, truly, specify a set of logical constructs
that will fit on a Maxtor, turn it loose in the real world, and have
it come back twenty years later to regale us with its discoveries?  

> On the other hand, I think your claim is incorrect even if 
> simulating connectionist models on "logical machines" is ignored.

Time will tell. I long to be proven wrong. I would dearly love to have
a computer that was not so brittle and helpless as the ones to be had
today. I hope that I did not sound too critical of logical machines in
my earlier post. I did say that they have many strengths where we have
weaknesses. But the original question was whether they would exceed
human intelligence. And that is a very tall order.

> it looks as if you are unfamiliar with 
> any recent work in the more "classical" areas of AI (e.g., machine learning, 
> case-based reasoning, etc.).

I will appreciate pointers to significant results. Is anyone making
serious progress with the classical approach in non-toy-problem
domains? (One serious problem with the logical machine approach is
that the bigger these systems get, the more likely they are to
collapse.  Success in toy domains is not easy to scale up.) Can a
purely logical machine demonstrate a convincing ability to spot
analogies that don't follow directly from explicit coding or
hand-holding?  Is any logical machine demonstrating information gain
ratios exceeding (or even approaching) unity? Are any of these
machines _really_ surprising their creators?

Dan Mocsny

ray@bcsaic.UUCP (Ray Allis) (11/16/88)

In article <393@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <42136@yale-celray.yale.UUCP>, Krulwich-Bruce@cs.yale.edu (Bruce Krulwich) writes:
>
>[ in reply to my doubts about ``logic-machine'' approaches to learning ]
>
>> If you're claiming that it's possible to do something with connectionist 
>> models that it's not possible to do with "logical machines," you have to
>> define "logical machines" in such a way that they aren't capable of
>> simulating connectionist models.  
>
>Good point, and since simulating a connectionist model can be easily
>expressed as a sequence of logical operations, I would have to be
>pretty creative to design a logical machine that could not do that.

Whoa!  Wrong!  (Well, sort of.)  I think you conceded much too quickly.
'Simulate' and 'model' are trick words here.  The problem is that most
'connectionist' approaches are indeed models, and logical ones, of some
hypothesized 'reality'.  There is no fundamental difference between such
models and more traditional logical or mathematical models; of course
they can be interchanged. 

A distinction must be made between digital and analog; between form and
content; between symbol and referent; between model and that which is
modelled.  

Suppose you want to calculate the state of a toy rubber balloon full of
air at ambient temperature and pressure as it is moved from your office
to direct sunlight outside.  To do a completely accurate job, you're
going to need to know the position and velocity of every molecule of the
balloon and
its contents, every external molecule which affects the balloon, or
affects molecules which affect the balloon, the photon flux, the effects
of haze and clouds drifting by, and whether passing birds and aircraft
cast shadows on the balloon.  And of course even that's not nearly enough,
or at fine enough detail.  To diminishing degrees, everything from
sunspots to lunar reflectivity will have some effect. Did you account for
the lawn sprinkler's effect on temperature and humidity? "Son of a gun!"
you say, "I didn't even notice the lousy sprinkler!"

Well, it's impossible.  In any case most of these are physical quantities
which we cannot know absolutely but can only measure to the limits of our
instruments.  Even if we could manage to include all the factors affecting
some real object or event, the values used in the arithmetic calculations
are approximations anyway.  So, we approximate, we abstract and model.
And arithmetic is symbolic logic, which deals, not directly with quantities,
but with symbols for quantities.  

Now with powerful digital computers, calculation might be fast enough to
produce a pretty good fake, one which is hard for a person to distinguish
from "the real thing", something like a movie.  But I don't think this is
likely to be really satisfactory.  Consider another example I like, the
modelling of Victoria Falls.  Water, air, impurities, debris and rock all
interacting in real time on ninety-seven Cray Hyper-para-multi-3000s. Will
you be inspired to poetry by the ground shaking under your feet?  No?

You see, all the AI work being done on digital computers is modelling using
formal logic.  There is no reason to argue over whether one type of logical
model can simulate another.  The so-called "neurologically plausible"
approach, when it uses real, physical devices, is an actual alternative to
logical systems.  In my estimation, it's the most promising game in town.

>much like a logical machine -- pushing symbols around, performing
>elementary operations on them one at a time, until the input vector
>becomes the output vector. I have trouble imagining that is what is
>going on when I recognize a friend's face, predict a driver's
>unsignaled turn by the sound of his motor, realize that a particular
>computer command applies to a novel problem, etc.

Me, too!

>Can a system that only does logical inferences on symbols with direct
>semantic significance achieve a similar information gain through
>experience?

Key here is "What constitutes experience?"  How is this system in touch
with its environment?

>I will appreciate pointers to significant results. Is anyone making
>serious progress with the classical approach in non-toy-problem
>domains? 
[...]
>                                                         Can a
>purely logical machine demonstrate a convincing ability to spot
>analogies that don't follow directly from explicit coding or
>hand-holding?  Is any logical machine demonstrating information gain
>ratios exceeding (or even approaching) unity? Are any of these
>machines _really_ surprising their creators?
>
>Dan Mocsny

Excellent questions.  I'd also like to hear of any significant results.

Ray Allis, Boeing Computer Services, Seattle, Wa. ray@boeing.com

ok@quintus.uucp (Richard A. O'Keefe) (11/18/88)

In article <8673@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>Whoa!  Wrong!  (Well, sort of.)  I think you conceded much too quickly.
>'Simulate' and 'model' are trick words here.

Correct.  A better word would be _emulate_.
For any given electronic realisation of a neural net,
there is a digital emulation of that net which cannot be
behaviourally distinguished from the net.
The net is indeed an analogue device, but such devices are
subject to the effects of thermal noise, and provided the
digital emulation carries enough digits to get the
differences down below the noise level, you're set.
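
To put rough (and entirely made-up) numbers on it: if a unit's output
carries thermal noise on the order of a millivolt, an emulation that
quantises to a microvolt commits errors a thousand times smaller than the
noise the analogue device exhibits all by itself, so no behavioural test
can separate them.  A sketch:

    # Sketch of the quantisation argument; all figures are invented.
    import random

    noise_rms = 1e-3    # assumed thermal noise on the analogue output (~1 mV)
    step      = 1e-6    # assumed resolution of the digital emulation (1 uV)

    def analogue_unit(x):
        # hypothetical analogue response, plus its own thermal noise
        return 0.7 * x + random.gauss(0.0, noise_rms)

    def digital_emulation(x):
        # the same response, rounded to the emulation's step size
        return round((0.7 * x + random.gauss(0.0, noise_rms)) / step) * step

    # The rounding error is at most step/2, i.e. half a microvolt, which is
    # swamped by the millivolt of noise already present in the analogue net.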

In order for a digital system to emulate a neural net adequately,
it is not necessary to model the entire physical universe, as Ray
Allis seems to suggest.  It only has to emulate the net.

>You see, all the AI work being done on digital computers is modelling using
>formal logic.

Depending on what you mean by "formal logic", this is either false or
vacuous.  All the work on neural nets uses formal logic too (whether the
_nets_ do is another matter).

>>much like a logical machine -- pushing symbols around, performing
>>elementary operations on them one at a time, until the input vector
>>becomes the output vector. I have trouble imagining that is what is
>>going on when I recognize a friend's face, predict a driver's
>>unsignaled turn by the sound of his motor, realize that a particular
>>computer command applies to a novel problem, etc.

>Me, too!

Where does this "one at a time" come from?  Most computers these days
do at least three things at a time, and the Connection Machine, for all
that it pushes bits around, does thousands and thousands of things at
a time.  Heck, most machines have some sort of cache which does
thousands of lookups at once.  Once and for all, free yourself of the
idea that "logical machines" must do "elementary operations one at a
time".