[net.ai] Philosophy and other amusements.

BIESEL@RUTGERS.ARPA (06/25/84)

Judging from the responses on this net, the audience is evenly split between
those who consider philosophy a waste of time in the context of AI, and those
who love to dig up and discuss the same old chestnuts and conundrums that
have amused amateur philosophers for many years now.

First, any AI program worthy of that appellation is in fact an implementation
of a philosophical theory, whether the implementer is aware of that fact or
not. It is unfortunate that most implementers do *NOT* seem to be
aware of this.

Take something as apparently clear and unphilosophical as a vision program
trying to make sense out of a blocks-world. Well, all that code deciding
whether this or that junction of line segments could correspond to a corner
is ultimately based on the (usually subconscious) presumption that there
is a "real" world, that it exhibits certain regularities whether perceived
by man or machine, that these regularities correspond to arrangements of
"matter" and "energy", and that some aspects of these regularities can and
should serve to constrain the behavior of some machine. There are even
more buried assumptions about the time invariance of physical phenomena,
the principle of causation, and the essential equivalence of "intelligent"
behavior realized by different kinds of hardware/mushware (i.e. cells vs.
transistors). ALL of these assumptions represent philosophical positions,
which at other times, and in other places would have been severely
questioned. It is only our common western heritage of rationalism and
materialism that cloaks these facts, and makes it appear that the matter is
settled. The unfortunate end-effect of this is that some of our more able
practitioners (hackers) are unable to critically examine the foundations
on which they build their systems, leading to ever more complex hacks, with
patches applied where the underlying fabric of thought becomes threadbare.
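The junction analysis alluded to above is, concretely, a constraint-satisfaction
problem (Huffman-Clowes labelling pruned by Waltz filtering). The sketch below
illustrates only the filtering idea; the labels, the two-junction example, and
the compatibility test are invented for illustration and are not the real
trihedral-world junction tables.

```python
# Toy Waltz-style constraint filtering, the algorithmic core of the
# blocks-world junction analysis discussed above.  Each junction holds a
# set of candidate labellings (tuples of edge labels, one per edge slot);
# a labelling survives only if every neighbouring junction still has some
# labelling that agrees on the shared edge.  NOTE: the labels and the
# compatibility test are invented for illustration; a real system would
# use the Huffman-Clowes junction catalogue for the trihedral world.

def waltz_filter(candidates, edges, compatible):
    """candidates: {junction: set of labelling tuples}
       edges: [(j1, slot1, j2, slot2)] -- junctions j1 and j2 share an
              edge, appearing as slot1 of j1's labelling and slot2 of j2's
       compatible: predicate on the two labels assigned to a shared edge"""
    changed = True
    while changed:               # iterate to a fixed point
        changed = False
        for j1, s1, j2, s2 in edges:
            # prune in both directions across the shared edge
            for a, sa, b, sb in ((j1, s1, j2, s2), (j2, s2, j1, s1)):
                for lab in list(candidates[a]):
                    if not any(compatible(lab[sa], other[sb])
                               for other in candidates[b]):
                        candidates[a].discard(lab)
                        changed = True
    return candidates

# Two junctions sharing one edge: slot 0 of A meets slot 1 of B, and the
# shared edge must carry the same label on both sides.
cands = {"A": {("+", "-"), ("-", "+")}, "B": {("+", "+")}}
waltz_filter(cands, [("A", 0, "B", 1)], lambda x, y: x == y)
# A's labelling ("-", "+") disagrees with B on the shared edge, so it is pruned.
```

The point of the sketch is the buried assumption the paragraph names: the
pruning is sound only if the world really is made of trihedral solids obeying
the catalogue, which is exactly the kind of philosophical commitment that goes
unexamined.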

Second, for those who are fond of unscrewing the inscrutable, it should be
pointed out that philosophy has never answered any fundamental questions
(e.g. identity, duality, one vs. many, existence, essence etc. etc.).
That is not its purpose; instead it should be an attempt to critically
examine the foundations of our opinions and beliefs about the world, and
its meaning. Take a real hard look at why you believe that "...Intuition
is nothing more than..." thus-and-such, and if you come up with: 'it is
intuitively obvious', or 'everybody knows that', you've uncovered a mental
blind spot. You may in the end confirm your original views, but at least
you will know why you believe what you do, and you will have become aware
of alternative views.

Consider a solipsist AI program: philosophically unassailable, logically
self-consistent, but functionally useless and indistinguishable from
an autistic program. I'm afraid that some of the AI program approaches
are just as dead-end, because they reflect only too well the simplistic
views of their authors.

        Pete    BIESEL@RUTGERS.ARPA


(quick, more gasoline, I think the flames are dying down...)

robison@eosp1.UUCP (Tobias D. Robison) (07/02/84)

If we can fairly divide AI researchers into those who find discussions
of philosophy relevant and those who don't, then I have a mild warning
for those who don't -- ignoring philosophical and religious questions
that may arise in the context of AI is analogous to an idea expressed
in Tom Lehrer's "Wernher von Braun":

	"Once the Rockets go up, who cares where they come down?
	That's not my department..."  (approximate quote)

Once the rockets, in the form of astoundingly successful AI programs,
go up, they will land in the laps of non-AI people who will try to
make sense of them. These non-AI people will worry about souls and
human aspects of good AI programs, in terms that will seem laughable
within the field. What will happen as you try to communicate with
these laypersons?

To illustrate, here's a new spiffy form of Robison's challenge,
"quoting" a layperson commenting on an AI program whose uncanny
ability to imitate human behavior surpasses his or her comprehension:

	That computer is amazing!  Only a human being could behave
	like that.  God must have bestowed a soul on that computer.

					- Toby Robison (not Robinson!)
					allegra!eosp1!robison
					decvax!ittvax!eosp1!robison

eugene@ames-lm.UUCP (Eugene Miya) (07/05/84)

With regard to expert systems, I thought of an interesting
[take this with a grain of salt] set of tests to evolve or refine
the development of such systems by probing their expertise.
Take a classic system like MYCIN.
When the developers feel the system is ready for a shakedown,
[remember, this is not entirely serious, but not for the weak of heart]
infect the developers of the system with one of the diseases in the
knowledge base, and let them diagnose their own ailment.
There might be interesting evolutionary consequences in software development.

Similarly, other people developing other systems would put their
faith and lives on the line for the software systems they develop.
Are these systems truly 'expert'?

Admittedly, not a rigorous test, but neither was Turing's.

The above are opinions of the author and not the funding Agency.

--eugene miya
  NASA Ames Research Center
  {hplabs,hao,dual}!ames-lm!aurora!eugene