[sci.philosophy.tech] Who else isn't a science?

weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) (07/03/88)

I'm responding very slowly nowadays.  I think this will go over better
in s.phil.tech anyway, so I'm directing followups there.

In article <2663@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre and his pets) writes:
>In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu (Obnoxious Math Grad Student) writes:

>>These researchers start by looking at *real* brains, *real* EEGs, they
>>work with what is known about *real* biological systems, and derive very
>>intriguing connectionist-like models.  To me, *this* is science.

>And working in the other direction is not SCIENCE? Oh please...

Indeed it isn't.  In principle, it could be, but it hasn't been.

Physics, for example, does work backwards.  All physical models are expected
to point to experiment.  Non-successful models are called "failures" or, more
politely, "mathematics".  They are not called "Artificial Reality" as a way
of hiding the failure.

(If it isn't clear, I do not consider mathematics to be a "science".  My
saying AI has not been science, in particular, is not meant as a pejorative.)

>>CAS & WJF have developed a rudimentary chaotic model based on the study
>>of olfactory bulb EEGs in rabbits.  They hooked together actual ODEs with
>>actual parameters that describe actual rabbit brains, and get chaotic,
>>EEG-like results.

>There is still much that is not understood about how neurons work.
>Practically nothing is known about how structures of neurons work.

And theorizing forever won't tell us either.  You have to get your hands
dirty.
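
(Aside, for anyone unclear what "hooking together ODEs" amounts to in
practice: you write down coupled differential equations with measured
parameters, integrate them numerically, and compare the resulting trace
against the recorded EEG.  The sketch below is NOT the Skarda-Freeman
olfactory model--the textbook Lorenz system just stands in for "some
coupled ODEs with fixed parameters", to show the shape of the computation.)

    # Illustrative sketch only: the Lorenz system stands in for a coupled
    # ODE model with fixed parameters; the Skarda-Freeman equations and
    # rabbit-derived parameters are NOT reproduced here.
    import numpy as np
    from scipy.integrate import solve_ivp

    def model(t, state, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t = np.linspace(0, 50, 5000)
    sol = solve_ivp(model, (0, 50), [1.0, 1.0, 1.0], t_eval=t)
    trace = sol.y[0]   # aperiodic, broadband output -- the "EEG-like" part
    print(trace[:5])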

>In 50 years, maybe we will have a better idea. In the mean time,
>modelling incomplete and incorrect physical data is risky at best.

Incorrect???  What are you referring to?

Risky or not--it is "science".  It provides constraints that theory must
keep in mind.

>In the mean time, synthesizing models is just as useful.

No.  Synthesizing out of thin air is mostly useless.  Synthesizing when
there is experiment to give theory feedback, and theory to give experiment
a direction to look, is what is useful.  That is what Edelman,
Skarda and Freeman are doing.

>>We've also got, I think, a lot of people who've never studied the
>>philosophy of science here too.  Join the crowd.
>
>I took a course from Kuhn. Speak for yourself, chum.

Gee.  And I know Kuhn's son from long ago.  A whole course?  Just enough
time to memorize the important words.  I'm not impressed.

>>>May I also inform the above participants that a MAJORITY of AI
>>>research is centered around some of the following:
>>>[a list of topics]
>>Which sure sounded like programming/engineering to me.

>Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying THAT.

What's the point?  Who said you had to study "science" in order to be
respectable?  I think philosophy is great stuff--but I don't call it
science.  The same for AI.

>>If your simulations have only the slightest relevance to ethology, is your
>>advisor going to tell you to chuck everything and try again?  I doubt it.

>So sorry to disappoint you. My coworkers and I are modelling real,
>observable behavior, drawn from fish and ants.

>Marvin Minsky, our advisor, warns that we should not get "stuck" in
>closely reproducing behavior,

That seems to be precisely what I said up above.

>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).

You do so anyway, I notice.

>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).

Huh??  Methinks you've got us backwards.  Heinously so.  And I strongly
disagree with this "just as much as the other".

>Just go ask Kuhn.

Frankly, I'm not all that impressed with Kuhn.

ucbvax!garnet!weemba	Matthew P Wiener/Brahms Gang/Berkeley CA 94720

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (07/06/88)

I am going to wrap up this discussion here and now, since I am not
interested in semantic arguments or even philosophical ones. I'm sorry
to be rude. I have a thesis to finish as well, due in three
weeks. 

First, the claim was made that there is little or no research in AI
which counts as Science, in a specific interpretation. This statement
is incorrect.

For example, the research that I and my immediate colleagues are doing
is "REAL" Science, since we model REAL animals, produce very REALISTIC
behavior, and have REAL ethologists as critics of our work.

Next, the claim was made that synthesis as an approach to AI has not
panned out as Science. Well, wrong again. There's plenty of such work.

Then I am told that few AI people understand the Philosophy of
Science. Well, gee. Lots of my colleagues have taken courses in such.
Most are merely interested in the fundamentals, and have taken survey
courses, but some fraction adopt a philosophical approach to AI.

If I were a better AI hacker, I would just append a list of references
to document my claims. Unfortunately, my references are a mess, so let
me point you at The Encyclopedia of Artificial Intelligence (J Wiley
and Sons), which is generally excellent. Although it lacks specific
articles on AI as a Science (I think; I didn't find any on a quick
glance), there are plenty of references concerning the philosophical
issues more central to AI. Highly recommended. (Incidentally, there's
plenty of stuff in there on the basic approaches to and results from
AI research, so if you're a pragmatic engineer, you'll enjoy it too.)

Enough. No more followups from me.