[comp.ai] An apology for being overly sarcastic

harwood@cvl.umd.edu (David Harwood) (07/07/87)

	I want to apologize for being overly sarcastic with Mr. Harnad.
Although I consider my complaint about his postings to be justified, I am
sorry about my overly-sarcastic manner. For the record, this apology was
my own idea, not involving discussion with others. I simply felt fairly
guilty about my irritable responses. (Actually, it is only recently that
I've had a chance to read this newsgroup; it has been suggested that I
read the moderated newsgroup instead - without posting of course!)


\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
	Briefly responding to a posted reply by B.I. Olasov, also to 
correspondence by email from D. Stampe (for different reasons):

In article <1071@bloom-beacon.MIT.EDU> bolasov@aphrodite.UUCP (Benjamin I Olasov) writes:
[...]
>Some of the most challenging and interesting problems of AI are philosophical
>in nature.  I frankly don't see why this fact should disturb anyone.
>
>Perhaps if more of us pursued our theoretical models with comparable rigor
>to that with which Mr. Harnad pursues his, the balance of topics represented
>on comp.ai might shift .....

	As I tried to make clear - supplying fairly clear examples of his
posting style - it was definitely not Mr. Harnad's particular philosophy or 
theoretical proclivity that irritated me; it was his manner of discussion
I was complaining about. (Just as others complained about my sarcasm more
than about the content of my complaint.) Among other things, for example, I 
scarcely consider his arguments to be what you call "rigorous." Some of the
discussants themselves have complained, albeit politely, about his somewhat
idiosyncratic use of terminology (among other things).
	So you are mistaken in your suggestion about my complaint. Rigorous
use of very complex and abstract concepts is commonplace in many branches
of computer science, e.g., the semantic specification of languages executed
by parallel systems. The level of abstraction and rigor is not at all less
than in any other area of inquiry, including philosophy or cognitive psychology.
On the other hand, I fully agree that both philosophy and psychology have
very important and relevant contributions to make to what is called "artificial
intelligence," although it seems to me that too much of the purported
interdisciplinary discussion is polemical and political rather than genuinely
constructive. And I would add that much, even most, of AI's recent "advance"
has been nonsensical propaganda for funding, devoid of theoretical 
foundation. 
	Also, I would add that Mr. Harnad - as is clear from his 
postings - is perhaps only superficially familiar with the real
advances in symbolic "AI", e.g., the development of very powerful systems for 
automatic deduction, which have practical importance for all of "AI" 
as well as rigorous foundations. These surely are not entirely
founded on theories of human psychology or on speculative philosophy, 
and probably should not be, since we would like to build computing
machines that do some things according to specification, and better
than we do. 
	I realize very well that some areas of AI are much
harder than others - computer vision comes to mind ;-) - and it is
obvious to everyone concerned that we need both numerical and symbolic 
algorithms and representations. (I will not get involved in discussing
what S.H. might mean by "symbolical", "analog", "invertible", and so
forth - I don't really know.) 
	I think it is also apparent that we may yet have
to consider some "connectionist" architectures and algorithms, which
perhaps do not admit any simple formal specification of input/output
relations. This would invite some philosophical speculation about
the adequacy of purely logical specification for the development of
artificial intelligence. Conversely, we may already have a sufficient
theoretical basis for 'creating' human-like artificial intelligence
by functional simulation of neurons, although we do not yet have the
technology (and, I hope, the moral sense) to reverse-engineer a human
brain. This will surely happen, in the distant future, depending only
on our technology and not on major improvements in our theoretical
understanding of neurons. The situation may well be that we can
recreate human intelligence which we still largely cannot comprehend
by formal specification. In part, this means that psychology,
theoretical "AI", and even S.H.'s "Total Turing Test" are as much
loose ends as they are interdependent.
	(As a religious person, I wonder what this might mean - 
I recall that an ancient interpretation of the Genesis story said that 
when mankind ate of the fruit of the knowledge of good and evil - just
as the serpent claimed - mankind became endowed with a power like that
of God - that is, the power to create and to destroy worlds. In 
our times, our technology has surpassed our moral sensibility - which 
many computer scientists say does not exist anyway. Of course, another
Jewish tradition has it that many worlds were already destroyed
before this one. I am not even sure that the pursuit of "AI" technology
is such a good thing, if it contributes to our destruction or loss of
dignity. But who knows, except for God?)



Response to a reply by email:
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\


Message-Id: <8707061548.AA11938@uhmanoa.ICS.HAWAII.EDU>
Date: Mon, 6 Jul 87 05:36:04-1000
From: seismo!scubed!sdcsvax!uhccux.UHCC.HAWAII.EDU!nosc!humu!stampe (David Stampe)
To: harwood@cvl.umd.edu (David Harwood)
In-Reply-To: harwood@cvl.umd.edu's message of 5 Jul 87 21:48:28 GMT
Subject: Re: The symbol grounding problem - please start your own newsgroup

You have now posted four messages to comp.ai containing nothing
but rude complaints about another's postings on symbol grounding.
They are not required reading, and they don't prevent you from
reading or posting on other topics.  What you MAY NOT do is
disturb the newsgroup with irrelevant and loutish postings like
your last four.  There are people who care about how University of
Maryland employees behave in public.

If I were you, I'd consider a public apology.

David Stampe, Univ. of Hawaii.

\\\\\\\\\\\\
	I don't have any desire to prevent S.H. from posting,
as I have made clear. You are right that I should apologize for
being overly sarcastic. He deserved some of it, but I overdid
it.
	I don't like your mention of my employment here - it
might be taken as a threat, either to my employment or against my
posting things which you dislike, even sarcastic complaints. If
you did mean to threaten me like this, you have misjudged me, and
also misjudged what my reaction would be.
	In any case, you are right that the apology is due.

-David Harwood