[net.ai] AI's level of operation

cbostrum (01/23/83)

There is a controversy associated with all of the "special sciences" as
to whether there is a legitimate "level" of existence that is the object
of their study, or whether ultimately they are merely doing some sort
of physics, and their science will only become fully understood or
legitimate when the ultimate reduction to physics is produced.
There is one classical chain which goes: physics, chemistry, biology,
psychology, social science (economics, polisci, etc.), where every element
in the chain is supposed to be reducible to the previous one. Without
disputing this, I would like to know where people think AI fits in, not
into the chain specifically, but just in general.
A lot of talk went on previously in net.misc about making neural
models, watching them evolve, and this sort of thing. This would
imply that there is no significant "knowledge level", as Newell calls
it, and that there is no meat to taking what Dennett calls the
"intentional stance". Or perhaps it merely implies that those positions
are too difficult to get results with.
Personally I at least hope (and presently believe) that both implications
are false. Surely there are significant things about intelligence we can
learn without going to the low-level brain stuff, and surely there must
be significant things we can learn at the "higher" level that would actually
be IMPOSSIBLE to learn at the "lower" level. It seems to me that AI is
predicated upon these optimistic beliefs.
What are others' thoughts about this?

tombl (01/25/83)

One of the problems which seems central to the issues CB has raised
concerns what is knowable. One may say that chemistry is reducible to
the laws of physics according to our understanding of the physical
world; it is not, however, possible for us to construct or predict
the principles of chemistry solely from a knowledge of some set of
physical laws. Perhaps "has not" would be better than "is not",
but the problem lies in what I perceive to be a limitation on human
mental abilities.  We could just as well ask whether it is
possible to fully know the OS/360 operating system.  Most people's
answers would be no.

Steering clear of the existential dilemma, what we find is a body of
scientific theory describing the world at a physical level; also a
body describing the world at the chemical level. Then there is the
theory which attempts to relate the two. We can find the same pattern
reflected in diverse habitats: in the structure of institutions, in the
algorithms we employ to solve problems, and in the architecture of
computing machines. This is not an argument for or against a coherent
scientific explanation of the world based upon laws of physics,
but a suggestion of what the structure of our understanding of
it is and will be.

Given an estimated 10^10 cells in the brain, with an average fanout
of say 100 on input and/or output, and several modes of interaction
between "wires" and cells, the prospects of reasoning
about all brain function (especially behavior) from our limited
knowledge of its substructure seem to me rather unpromising within
my lifetime. Higher levels of organization, and the notion of several
such levels, seem plausible. AI research, at best, attempts to define
one or more of these levels. Beyond that, I wouldn't make any great
claims (there are entirely enough people doing that), but I certainly
am not going to trade the opportunity to find out for the grand opiate,
the illusion of a secure job, or for the intrinsic pleasure of
attending church every Sunday morning.
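To give a rough sense of the scale those figures imply (the 10^10 cells and the fanout of 100 are Tom's stated estimates, not measured values), here is a quick back-of-envelope sketch:

```python
# Back-of-envelope scale of brain wiring, using the figures quoted
# above: ~10^10 cells with an average fanout of ~100 (estimates only).
cells = 10**10   # estimated number of cells in the brain
fanout = 100     # average connections per cell, on input and/or output

connections = cells * fanout
print(f"approximate connections to reason about: {connections:.0e}")
# On these assumptions that is on the order of 10^12 "wires",
# before even counting the several modes of interaction
# between wires and cells.
```

Even at one wire per microsecond, merely enumerating 10^12 connections once would take over a week of computation, which makes the pessimism about bottom-up reasoning concrete.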

	Tom Blenko
	decvax!teklabs!tekmdp!tombl
	ucbvax!teklabs!tekmdp!tombl