[comp.ai] Who else isn't a science?

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (06/03/88)

In article <3c671fbe.44e6@apollo.uucp>, nelson_p@apollo writes:

>>fail to see how this sort of intellectual background can ever be
>>regarded as adequate for the study of human reasoning.  On what
>>grounds does AI ignore so many intellectual traditions?

>  Because AI would like to make some progress (for a change!).  I
>  originally majored in psychology.  With the exception of some areas
>  in physiological psychology, the field is not a science.  Its
>  models and definitions are simply not rigorous enough to be useful.

Your description of psychology reminds many people of AI, except that
AI's models end up being useful for many things having nothing to do
with the motivating application.

Gerald Edelman, for example, has compared AI with Aristotelian
dentistry: lots of theorizing, but no attempt to actually compare
models with the real world.  AI grabs onto the neural net paradigm,
say, and then never bothers to check if what is done with neural
nets has anything to do with actual brains.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

bjpt@maui.cs.ucla.edu (Benjamin Thompson) (06/04/88)

In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world.  AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

This is symptomatic of a common fallacy.  Why should the way our brains
work be the only way "brains" can work?  Why shouldn't *A*I workers look
at weird and wonderful models?  We (basically) don't know anything about
how the brain really works anyway, so who can really tell if what they're
doing corresponds to (some part of) the brain?

Ben

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (06/11/88)

In article <13100@shemp.CS.UCLA.EDU>, bjpt@maui (Benjamin Thompson) writes:
>In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>>Gerald Edelman, for example, has compared AI with Aristotelian
>>dentistry: lots of theorizing, but no attempt to actually compare
>>models with the real world.  AI grabs onto the neural net paradigm,
>>say, and then never bothers to check if what is done with neural
>>nets has anything to do with actual brains.
>
>This is symptomatic of a common fallacy.

No, it is not.  You did not catch the point of my posting, embedded in
the subject line.

>Why should the way our brains
>work be the only way "brains" can work?  Why shouldn't *A*I workers look
>at weird and wonderful models?

AI researchers can do whatever they want.  But they should stop trying
to gain scientific legitimacy from wild unproven conjectures.

>We (basically) don't know anything about
>how the brain really works anyway, so who can really tell if what they're
>doing corresponds to (some part of) the brain?

Right.  Or if they're all just hacking for the hell of it.

But if they are in fact interested in the brain, then they could
periodically check back against what is known about real brains.  Since
they don't, I think Edelman's "Aristotelian dentistry" criticism is
perfectly valid.

In article <3c84f2a9.224b@apollo.uucp>, nelson_p@apollo (Peter Nelson) writes,
replying to the same article:

>  I don't see why everyone gets hung up on mimicking natural
>  intelligence.  The point is to solve real-world problems.

This makes for an engineering discipline, not a science.  I'm all for
AI research in methods of solving difficult ill-defined problems.  But
calling the resulting behavior "intelligent" is completely unjustified.

Indeed, many modern dictionaries now give an extra meaning to the word
"intelligent", thanks partly to AI's decades of abuse of the term:
it means "able to perform some of the functions of a computer".

Ain't it wonderful?  AI succeeded by changing the meaning of the word.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

sierch@well.UUCP (Michael Sierchio) (06/12/88)

I agree: I think anyone should study whatever s/he likes -- after all,
what matters but what you decide matters?  I also agree that, simply
because you are interested in something, you shouldn't expect me to
regard your study as important or valid.

AI suffers from the same syndrome as many academic fields -- dissertations
are the little monographs that are part of the ticket to respectability in
academe.  The big, seminal questions (seedy business, I know) remain
unanswered, while the rush to produce results and get grants and make $$
(or pounds, the symbol for which...) is overwhelming.  Perhaps we would not
be complaining if the study of intelligence and automata, and all the
theoretical foundations for AI work, received their due.  It HAS become an
engineering discipline, if not for the nefarious reasons I mentioned, then
simply because the gratification that comes from RESULTS is easier to get
than answers to the nagging questions about what we are, what intelligence
is, etc.

Engineering has its pleasures, and I wouldn't deny them to anyone. But to
those who hold fast to the "?" and abjure the "!", I salute you.
-- 
	Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
	2733 Fulton St / Berkeley / CA / 94705     (415) 845-1755

	sierch@well.UUCP	{..ucbvax, etc...}!lll-crg!well!sierch

marsh@mitre-bedford.ARPA (Ralph J. Marshall) (06/13/88)

In article <10785@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>
>Indeed, many modern dictionaries now give an extra meaning to the word
>"intelligent", thanks partly to AI's decades of abuse of the term:
>it means "able to perform some of the functions of a computer".
>
>Ain't it wonderful?  AI succeeded by changing the meaning of the word.
>
>ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

I don't know what dictionary you are smoking, but _MY_ dictionary has the
following perfectly reasonable definition of intelligence:

	"The ability to learn or understand or to deal with new or
	 trying situations." (Webster's New 9th Collegiate Dictionary)

I'm not at all sure that this is really the focus of current AI work,
but I am reasonably convinced that it is a long-term goal that is worth
pursuing.

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (06/14/88)

In article <34227@linus.UUCP> marsh@mbunix (Ralph Marshall) writes:
>In article <10785@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>>Ain't it wonderful?  AI succeeded by changing the meaning of the word.
......(lots of important stuff deleted)
>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Oh boy. Just wonderful. We have people who have never done AI arguing
about whether or not it is a science, whether or not it can EVER succeed,
the definition of Free Will, and whether a computer can have some.

It just goes on and on!


Ladies and Gentlemen, might I remind you that this group is supposed
to be about AI, and although there should be some discussion of its
social impact, and maybe even an enlightened comment about its
philosophical value, the most important thing is to discuss AI itself:
programming tricks, neat ideas, and approaches to intelligence and
learning -- not to have semantic arguments or ones about whose dictionary
is bigger.

I submit that the definition of Free Will (whateverTHATis) is NOT AI.

I submit that those who wish to argue in this group DO SOME AI or at
least read some of the gazillions of books about it BEFORE they go
spouting off about what some lump of organic matter (be it silicon or
carbon based) can or cannot do.

May I also inform the above participants that a MAJORITY of AI
research is centered around some of the following:

Description matching and goal reduction
Exploiting constraints
Path analysis and finding alternatives
Control metaphors
Problem Solving paradigms
Logic and Theorem Proving
Language Understanding
Image Understanding
Learning from descriptions and samples
Learning from experience
Knowledge Acquisition
Knowledge Representation

(Well, my list isn't very good, since I just copied it out of the table
of contents of one of the AI books.  A toy sketch of one of these topics
follows below.)
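
To make "exploiting constraints" concrete for the non-AI folks reading
along, here is a minimal sketch in Python -- a toy map-coloring solver
that only tries colors not already taken by a colored neighbor.  The
problem and every name in it are invented for illustration; no actual
AI system is being reproduced here.

    # Toy "exploiting constraints" demo: backtracking search that prunes
    # any color already used by an assigned neighbor.
    NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
                 "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
    COLORS = ["red", "green", "blue"]

    def consistent(region, color, assignment):
        # A color is allowed only if no assigned neighbor has it already.
        return all(assignment.get(n) != color for n in NEIGHBORS[region])

    def solve(assignment, regions):
        if not regions:                      # every region colored: done
            return assignment
        region, rest = regions[0], regions[1:]
        for color in COLORS:
            if consistent(region, color, assignment):
                result = solve({**assignment, region: color}, rest)
                if result is not None:
                    return result
        return None                          # dead end: backtrack

    print(solve({}, list(NEIGHBORS)))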

Might I also suggest that if you don't understand the fundamental and
crucial topics above, you refrain from telling me what I am doing
with my research. As it happens, I am doing simulations of animal
behavior using Society of Mind theories. So I do lots of learning and
knowledge acquisition.

And if you decide to find out about these topics, which are extremely
interesting and fun, might I suggest a book called "The Dictionary of
Artificial Intelligence." 

And of course, I have to plug Society of Mind both since it is the
source of many valuable new questions for AI to pursue, and since
Marvin Minsky is my advisor. It is also simple enough for high school
students to read.

If you have any serious AI questions, feel free to write to me (if
they are simple) or post them (if you need a lot of answers). I will
answer what I can.

I realize much of the banter is due to crossposting from
talk.philosophy, so folks over there, could you please avoid future
crossposts? Thank...

Oh and Have a Nice Day................................mr bc

jim@epistemi.ed.ac.uk (Jim Scobbie) (06/16/88)

In article <2618@mit-amt.MEDIA.MIT.EDU> bc@media-lab.media.mit.edu (bill coderre) writes:

 [[Do I quote out of context?? (ok paraphrase/quote) ]]

>Ladies and Gentlemen, might I remind you that this group is supposed
>to be about AI, and although there should be some discussion of its
>social impact, and maybe even an enlightened comment about its
>philosophical value, the most important thing is to discuss AI itself:

here it is, `AI itself'

>programming tricks, neat ideas, ...

woops a bit of reality there, check, remember critical audience, better
add in a wee bit of science to keep them happy

>                               ...and approaches to intelligence and learning


=========================================================but to be serious==

>May I also inform the above participants that a MAJORITY of AI
>research is centered around some of the following:

>Description matching and goal reduction
>Exploiting constraints
>Path analysis and finding alternatives
>Control metaphors
>Problem Solving paradigms
>Logic and Theorem Proving
>Language Understanding
>Image Understanding
>Learning from descriptions and samples
>Learning from experience
>Knowledge Acquisition
>Knowledge Representation

>(Well, my list isn't very good, since I just copied it out of the table
>of contents of one of the AI books.)
>refrain from telling me what I am doing
>with my research. As it happens, I am doing simulations of animal
>behavior using Society of Mind theories.
>And of course, I have to plug Society of Mind both since it is the
>source of many valuable new questions for AI to pursue, and since
>Marvin Minsky is my advisor. It is also simple enough for high school
>students to read.
			(I've laughed out loud every time I've read this)

>I realize much of the banter is due to crossposting from
>talk.philosophy, so folks over there, could you please avoid future
>crossposts? Thank...

         (Sure, high schools produce easier questions to answer, yes?)



-- 
Jim Scobbie:    Centre for Cognitive Science and Department of Linguistics,
		Edinburgh University, 
                2 Buccleuch Place, Edinburgh, EH8 9LW, SCOTLAND
UUCP:	 ...!ukc!cstvax!epistemi!jim     JANET:	 jim@uk.ac.ed.epistemi     

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (06/27/88)

In article <????>, now expired here, ???? asked me for references.  I
find this request strange, since at least one of my references was in
the very article being replied to, although not spelled out as such.

Anyway, let me recommend the following works by neurophysiologists:

G M Edelman _Neural Darwinism: The Theory of Neuronal Group Selection_
(Basic Books, 1987)

C A Skarda and W J Freeman, "How brains make chaos in order to make sense
of the world", _Behavioral and Brain Sciences_ (1987) 10:2, pp 161-195.

These researchers start by looking at *real* brains and *real* EEGs; they
work with what is known about *real* biological systems, and derive very
intriguing connectionist-like models.  To me, *this* is science.

GME rejects all the standard categories about the real world as the
starting point for anything.  He views the brain as--yes, a Society of
Mind--but in this case a *biological* society whose basic unit is the
neuronal group, and he holds that the brain develops by these neuronal
groups evolving in classical Darwinian competition with each other, as
stimulated by their environment.
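
To give the flavor of "neuronal groups competing under selection"--and
only the flavor; this cartoon is mine, not Edelman's biology--here is a
throwaway Python sketch in which the groups that happen to respond best
to stimuli are differentially amplified:

    import random

    random.seed(1)
    N_GROUPS, N_INPUTS = 8, 5
    # Each "neuronal group" starts with random connection weights.
    groups = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
              for _ in range(N_GROUPS)]
    strength = [1.0] * N_GROUPS            # how strongly selected each is

    for trial in range(200):
        stimulus = [random.choice((0.0, 1.0)) for _ in range(N_INPUTS)]
        responses = [sum(w * s for w, s in zip(g, stimulus))
                     for g in groups]
        winner = max(range(N_GROUPS), key=lambda i: responses[i])
        strength[winner] *= 1.05           # amplify the best responder
        strength = [s * 0.995 for s in strength]  # everyone decays a bit

    print([round(s, 2) for s in strength])  # a few groups now dominate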

CAS & WJF have developed a rudimentary chaotic model based on the study
of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
actual parameters that describe actual rabbit brains, and get chaotic,
EEG-like results.
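
For readers who have never seen what "hooking ODEs together and getting
chaotic results" looks like, here is a stand-in sketch in Python.  It
integrates the Lorenz system--a textbook chaotic ODE, *not* Freeman's
actual olfactory-bulb equations, which I won't pretend to reproduce from
memory:

    def lorenz(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # The three coupled equations of the Lorenz system.
        return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

    x, y, z, dt = 1.0, 1.0, 1.0, 0.01
    for step in range(5000):            # crude forward-Euler integration
        dx, dy, dz = lorenz(x, y, z)
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if step % 1000 == 0:            # sample the aperiodic trajectory
            print(round(x, 3), round(y, 3), round(z, 3))

Nearby starting points diverge rapidly, which is the sense in which the
olfactory bulb's EEG is claimed to be chaotic.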
------------------------------------------------------------------------
In article <34227@linus.UUCP>, marsh@mitre-bedford (Ralph J. Marshall) writes:
>	"The ability to learn or understand or to deal with new or
>	 trying situations."

>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Well, sure.  So what?  Everyone's in favor of apple pie.
------------------------------------------------------------------------
In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:

>Oh boy. Just wonderful. We have people who have never done AI arguing
>about whether or not it is a science [...]

We've also got what I think are a lot of people who've never studied the
philosophy of science here too.  Join the crowd.

>May I also inform the above participants that a MAJORITY of AI
>research is centered around some of the following:

>[a list of topics]

Which sure sounded like programming/engineering to me.

>As it happens, I am doing simulations of animal
>behavior using Society of Mind theories. So I do lots of learning and
>knowledge acquisition.

Well good for you!  But are you doing SCIENCE?  As in:

If your simulations have only the slightest relevance to ethology, is your
advisor going to tell you to chuck everything and try again?  I doubt it.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

bc@mit-amt.MEDIA.MIT.EDU (bill coderre and his pets) (06/27/88)

In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:
>Anyway, let me recommend the following works by neurophysiologists:
(references)

>These researchers start by looking at *real* brains, *real* EEGs, they
>work with what is known about *real* biological systems, and derive very
>intriguing connectionist-like models.  To me, *this* is science.

And working in the other direction is not SCIENCE? Oh please...

>CAS & WJF have developed a rudimentary chaotic model based on the study
>of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
>actual parameters that describe actual rabbit brains, and get chaotic EEG
>like results.

There is still much that is not understood about how neurons work.
Practically nothing is known about how structures of neurons work. In
50 years, maybe we will have a better idea. In the meantime,
modelling incomplete and incorrect physical data is risky at best.
Meanwhile, synthesizing models is just as useful.

>------------------------------------------------------------------------
>In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:
>>Oh boy. Just wonderful. We have people who have never done AI arguing
>>about whether or not it is a science [...]

>We've also got what I think are a lot of people who've never studied the
>philosophy of science here too.  Join the crowd.

I took a course from Kuhn. Speak for yourself, chum.

>>May I also inform the above participants that a MAJORITY of AI
>>research is centered around some of the following:
>>[a list of topics]
>Which sure sounded like programming/engineering to me.

Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying
THAT.

>>As it happens, I am doing simulations of animal
>>behavior using Society of Mind theories. So I do lots of learning and
>>knowledge acquisition.
>Well good for you!  But are you doing SCIENCE?  As in:
>If your simulations have only the slightest relevance to ethology, is your
>advisor going to tell you to chuck everything and try again?  I doubt it.

So sorry to disappoint you. My coworkers and I are modelling real,
observable behavior, drawn from fish and ants. We have colleagues at
the New England Aquarium and Harvard (Bert Holldobler).

Marvin Minsky, our advisor, warns that we should not get "stuck" in
closely reproducing behavior, much as it makes no sense for us to
model the chemistry of the organism to implement its behavior (and
considering that ants are almost entirely smell-driven, this is not a
trite statement!).
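
To show what "simulating behavior rather than chemistry" means, here is
a deliberately tiny sketch--mine for this post, not our actual system--of
a smell-driven ant that does nothing but climb a pheromone gradient:

    # Toy behavioral rule: the ant steps to the smelliest neighboring
    # cell. The world and all its numbers are invented for illustration.
    WIDTH = 10
    FOOD = (7, 7)

    def smell(x, y):
        # Pheromone falls off with distance from the food (made up).
        return 1.0 / (1 + abs(x - FOOD[0]) + abs(y - FOOD[1]))

    x, y = 0, 0                          # the ant starts in a corner
    for step in range(40):
        if (x, y) == FOOD:
            print("reached the food at step", step)
            break
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)
                     and 0 <= x + dx < WIDTH and 0 <= y + dy < WIDTH]
        x, y = max(neighbors, key=lambda p: smell(*p))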

The bottom line is that it is unimportant for us to argue whether or
not this or that is Real Science (TM).

What is important is for us to create new knowledge either
analytically (which you endorse) OR SYNTHETICALLY (which is just as
much SCIENCE as the other).

Just go ask Kuhn..........................................mr bc
				   heart full of salsa jalapena

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (07/03/88)

I'm responding very slowly nowadays.  I think this will go over better
in s.phil.tech anyway, so I'm directing followups there.

In article <2663@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre and his pets) writes:
>In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:

>>These researchers start by looking at *real* brains, *real* EEGs, they
>>work with what is known about *real* biological systems, and derive very
>>intriguing connectionist-like models.  To me, *this* is science.

>And working in the other direction is not SCIENCE? Oh please...

Indeed it isn't.  In principle, it could be, but it hasn't been.

Physics, for example, does work backwards.  All physical models are expected
to point to experiment.  Non-successful models are called "failures" or, more
politely, "mathematics".  They are not called "Artificial Reality" as a way
of hiding the failure.

(If it isn't clear, I do not consider mathematics to be a "science".  My
saying AI has not been science, in particular, is not meant as a pejorative.)

>>CAS & WJF have developed a rudimentary chaotic model based on the study
>>of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
>>actual parameters that describe actual rabbit brains, and get chaotic EEG
>>like results.

>There is still much that is not understood about how neurons work.
>Practically nothing is known about how structures of neurons work.

And theorizing forever won't tell us either.  You have to get your hands
dirty.

>In 50 years, maybe we will have a better idea. In the meantime,
>modelling incomplete and incorrect physical data is risky at best.

Incorrect???  What are you referring to?

Risky or not--it is "science".  It provides constraints that theory must
keep in mind.

>Meanwhile, synthesizing models is just as useful.

No.  Synthesizing out of thin air is mostly useless.  Synthesizing when
there is experiment to give theory feedback, and theory to give experiment
a direction to look, is what is useful.  That is what Edelman, Skarda
and Freeman are doing.

>>We've also got what I think are a lot of people who've never studied the
>>philosophy of science here too.  Join the crowd.
>
>I took a course from Kuhn. Speak for yourself, chum.

Gee.  And I know Kuhn's son from long ago.  A whole course?  Just enough
time to memorize the important words.  I'm not impressed.

>>>May I also inform the above participants that a MAJORITY of AI
>>>research is centered around some of the following:
>>>[a list of topics]
>>Which sure sounded like programming/engineering to me.

>Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying THAT.

What's the point?  Who said you had to study "science" in order to be
respectable?  I think philosophy is great stuff--but I don't call it
science.  The same goes for AI.

>>If your simulations have only the slightest relevance to ethology, is your
>>advisor going to tell you to chuck everything and try again?  I doubt it.

>So sorry to disappoint you. My coworkers and I are modelling real,
>observable behavior, drawn from fish and ants.

>Marvin Minsky, our advisor, warns that we should not get "stuck" in
>closely reproducing behavior,

That seems to be precisely what I said up above.

>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).

You do so anyway, I notice.

>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).

Huh??  Methinks you've got us backwards.  Heinously so.  And I strongly
disagree with this "just as much as the other".

>Just go ask Kuhn.

Frankly, I'm not all that impressed with Kuhn.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (07/06/88)

I am going to wrap up this discussion here and now, since I am not
interested in semantic arguments or even philosophical ones. I'm sorry
to be rude. I have a thesis to finish as well, due in three
weeks. 

First, the claim was made that there is little or no research in AI
which counts as Science, in a specific interpretation. This statement
is incorrect.

For example, the research that I and my immediate colleagues are doing
is "REAL" Science, since we model REAL animals, produce very REALISTIC
behavior, and have REAL ethologists as critics of our work.

Next, the claim was made that synthesis as an approach to AI has not
panned out as Science. Well, wrong again. There's plenty of such work.

Then I am told that few AI people understand the Philosophy of
Science. Well, gee. Lots of my colleagues have taken courses in such.
Most are merely interested in the fundamentals, and have taken survey
courses, but some fraction adopt a philosophical approach to AI.

If I were a better AI hacker, I would just append a list of references
to document my claims. Unfortunately, my references are a mess, so let
me point you at The Encyclopedia of Artificial Intelligence (J Wiley
and Sons), which is generally excellent. Although it lacks specific
articles on AI as a Science (I think; I didn't find any on a quick
glance), there are plenty of references concerning the more central
philosophical issues in AI. Highly recommended. (Incidentally, there's
plenty of stuff in there on the basic approaches to and results from
AI research, so if you're a pragmatic engineer, you'll enjoy it too.)

Enough. No more followups from me.