[comp.ai] this is philosophy ??!!?

acha@centro.soar.cs.cmu.edu (Anurag Acharya) (05/04/88)

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>I am always suspicious of any academic activity which has to request that it
>becomes a philosophical no-go area.  I know of no other area of activity which
>is so dependent on such a wide range of unwarranted assumptions.  Perhaps this
>has something to do with the axiomatic preferences of its founders, who came
>from mathematical traditions where you could believe anything as long as it was
>logically consistent.  

Just when is an assumption warranted ? By your yardstick (it seems ), 'logically 
inconsistent' assumptions are more likely to be warranted than the logically
consistent ones. Am I parsing you wrong or do you really claim that ?!

>logical errors in an argument is not enough to sensibly dismiss it, otherwise
>we would have to become resigned to widespread ignorance. 

Same problem again ( sigh ! ). Just how do you propose to argue with logically
inconsistent arguments ? Come on Mr. Cockton, what gives ? 

> My concern over AI
>is, like some psychology, it has no integration with social theories, especially
>those which see 'reality' as a negotiated outcome of social processes, and not
>logically consistent rules. 

You wish to assert that reality is a negotiated outcome of social processes ???
Imagine Mr. Cockton, you are standing on the 36th floor of a building and you and
your mates decide that you are Superman and can jump out without getting hurt.
By the 'negotiated outcome of social processes' claptrap, you really are Superman.
Would you then jump out and have fun ?

> Your system should prevaricate, stall, duck the
>issue, deny there's a problem, pray, write to an agony aunt, ask its
>mum, wait a while, get its friends to ring it up and ask it out ...

Whatever does all that stuff have to do with intelligence per se ?

Mr. Cockton, what constitutes a proof among you and your "philosopher/sociologist/.." 
colleagues ? Since logical consistency is taboo, logical errors are acceptable,
reality and truth are functions of the current whim of the largest organized gang
around ( oh! I am sorry, they are the 'negotiated ( who by ? ) outcomes of social
processes ( what processes ? )') how do you guys conduct research ? Get together and
vote on motions or what ?

-- anurag

-- 
Anurag Acharya		      Arpanet: acharya@centro.soar.cs.cmu.edu  	
			     
"There's no sense in being precise when you don't even know what you're
 talking about"   -- John von Neumann

marty1@houdi.UUCP (M.BRILLIANT) (05/04/88)

In article <1588@pt.cs.cmu.edu>, acha@centro.soar.cs.cmu.edu (Anurag Acharya) writes:
> In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
> ...
> > Your system should prevaricate, stall, duck the
> >issue, deny there's a problem, pray, write to an agony aunt, ask its
> >mum, wait a while, get its friends to ring it up and ask it out ...
> 
> Whatever does all that stuff have to do with intelligence per se ?
> ....

Pardon me for abstracting out of context.  Also for daring to comment
when I am not an AI researcher, only an engineer waiting for a useful
result.

But I see that as an illuminating bit of dialogue.  Cockton wants to
emulate the real human decision maker, and I cannot say with certainty
that he's wrong.  Acharya wants to avoid the pitfalls of human
fallibility, and I cannot say with certainty that he's wrong either.

I wish we could see these arguments as a conflict between researchers
who want to model the human mind, and researchers who want to make more
useful computer programs.  Then we could acknowledge that both schools
belong in AI, and stop arguing over which should drive out the other.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
            explicitly claims them; then I lose all rights to them.

nancyk@hpfclp.SDE.HP.COM (Nancy Kirkwood) (05/05/88)

nancyk@hpfclp.sde.hp.com                 Nancy Kirkwood at HP, Fort Collins

Come now!!! Don't try to defend "logical consistency" with exaggeration
and personal attack.

> Just when is an assumption warranted ? By your yardstick (it seems ), 
> 'logically inconsistent' assumptions are more likely to be warranted than 
                                           ^^^^
> the logically consistent ones.

It's important to remember that the rules of logic we are discussing come
from Western (European) cultural traditions, and derive much of their power
"from the consent of the governed," so to speak.  We have agreed that if
we present arguments which satisfy the rules of this system, that the
arguments are correct, and we are speaking "truth."  This is a very useful
protocol, but we should not be so narrow as to believe that it is the only
yardstick for truth.

The "laws" of physics certainly preclude jumping off a 36 story building
and expecting not to get hurt, but physicists would be the first to admit
that these laws are incomplete, and the natural processes involved are 
*not* completely known, and possibly never will be.  Nor can we be sure,
being fallible humans who don't know all the facts, that our supposed
logical arguments are useful or even correct.

"Reality" in the area of human social interactions is largely if not
completely the "negotiated outcome of social processes."  It has been a
topic of debate for thousands of years at least as to whether morality
has an abstract truth unrelated to the social milieu it is found in.

> Since logical consistency is taboo, logical errors are acceptable,
> reality and truth are functions of the current whim of the largest organized 
> gang around ( oh! I am sorry, they are the 'negotiated ( who by ? ) outcomes 
> of social processes ( what processes ? )') how do you guys conduct research ?

Distorting someone's statements and then attacking the distortions is
not an effective means of carrying on a productive discussion (though
it does stir up interest :-)).  

						-nancyk
*   *   *   ***********************************************   *   *   *
*       "There are more things in heaven and earth, Horatio,          *
*                than are dreamt of in your philosophy."              *
*                              -Shakespeare                           *
*   *   *   ***********************************************   *   *   * 

byerly@cars.rutgers.edu (Boyce Byerly ) (05/07/88)

|In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
|(Gilbert Cockton) writes:
|>logical errors in an argument is not enough to sensibly dismiss it, otherwise
|>we would have to become resigned to widespread ignorance. 
|
|To which acha@centro.soar.cs.cmu.edu (Anurag Acharya) replies: Just
|when is an assumption warranted ? By your yardstick (it seems ),
|'logically inconsistent' assumptions are more likely to be warranted
|than the logically consistent ones. Am I parsing you wrong or do you
|really claim that ?!

My feelings on this are that "hard logic", as perfected in first-order
predicate calculus, is a wonderful and very powerful form of
reasoning.  However, it seems to have a number of drawbacks as a
rigorous standard for AI systems, from both the cognitive modeling and
engineering standpoints.

1) It is not a natural or easy way to represent probabilistic or
intuitive knowledge.

2) In representing human knowledge and discourse, it fails because it
does not recognize or deal with contradiction.  In a rigorously
logical system, if

  P ==> Q
  ~Q
   P

then P together with P ==> Q yields Q by modus ponens, and we are left
holding the contradiction Q and ~Q.

If you don't believe human beings can have the above derivably
contradictory structures in their logical environments, I suggest you
spend a few hours listening to some of our great political leaders :-)
Mr. Reagan's statements on dealing with terrorists shortly before
Iranscam/Contragate leap to mind, but I am sure you can find equally
good examples in any political party.  People normally keep a lot of
contradictory information in their minds, and not from dishonesty -
you simply can't tear out a premise because it causes a contradiction
after exhaustive derivation.
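
To make point 2 concrete, here is a toy sketch in Python (purely my own
illustration - the fact set, the rule format and the chaining loop are
assumptions, not anybody's actual system) of a naive forward chainer that
cheerfully ends up holding both Q and ~Q given the three statements above:

    # Toy propositional forward chainer (illustrative sketch only).
    # Starting facts: P and ~Q; single rule: P ==> Q.
    facts = {"P", "~Q"}
    rules = [("P", "Q")]          # each rule is (premise, conclusion)

    changed = True
    while changed:                # keep firing rules until nothing new appears
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)                  # contains P, ~Q and Q: both Q and ~Q "believed"

Nothing in the loop ever notices the clash, and a strictly classical logic
would let you derive anything at all from it.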

3) Logic also falls down in manipulating "belief-structures" about the
world.  The gap between belief and reality ( whatever THAT is) is
often large.  I am aware of this problem from reading texts on natural
language, but I think the problem occurs elsewhere, too.

Perhaps the logical deduction of Western philosophy needs to take a
back seat for a bit and let less sensitive, more probabilistic
rationalities drive for a while.
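
For what such a probabilistic rationality might look like in the small, here
is a hedged sketch in Python (the numbers and the evidence E are toy
assumptions of mine, not anything from this thread) of a single Bayesian
update, where evidence shifts a degree of belief in Q instead of forcing a
flat Q or ~Q:

    # One Bayesian update of belief in Q given evidence E (toy numbers only).
    prior_q        = 0.5          # P(Q) before the evidence
    p_e_given_q    = 0.9          # P(E | Q)
    p_e_given_notq = 0.2          # P(E | ~Q)

    p_e = p_e_given_q * prior_q + p_e_given_notq * (1 - prior_q)
    posterior_q = p_e_given_q * prior_q / p_e

    print(round(posterior_q, 3))  # 0.818: belief in Q strengthened, never absolute

Contradictory evidence later on would simply pull that number back down
rather than blowing the whole belief set up.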

	Boyce
	Rutgers University DCS

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/09/88)

In article <May.6.18.48.07.1988.29690@cars.rutgers.edu> byerly@cars.rutgers.edu
(Boyce Byerly ) writes:
>
>Perhaps the logical deduction of western philosophy needs to take a
>back seat for a bit and let less sensitive, more probalistic
>rationalities drive for a while.
>
I have a favorite paper which I always like to recommend when folks like Boyce
propose putting probabilistic reasoning "in the driver's seat:"

	Alvan R. Feinstein
	Clinical biostatistics XXXIX.  The haze of Bayes, the aerial palaces
		of decision analysis, and the computerized Ouija board.
	CLINICAL PHARMACOLOGY AND THERAPEUTICS
	Vol. 21, No. 4
	pp. 482-496

This is an excellent (as well as entertaining) exposition of many of the
pitfalls of such reasoning written by a Professor of Medicine and Epidemiology
at the Yale University School of Medicine.  I do not wish this endorsement to
be interpreted as a wholesale condemnation of the use of probabilities . . .
just a warning that they can lead to just as much trouble as an attempt to
reduce the entire world to first-order predicate calculus.  We DEFINITELY
need abstractions better than such logical constructs to deal with issues
such as uncertainty and belief, but it is most unclear that probability
theory is going to provide those abstractions.  More likely, we should
be investigating the shortcomings of natural deduction as a set of rules
that represents the control of reasoning, and consider instead the possibility
of alternative rules, as well as the possibility that there is no one rule
set used universally but that different sets of rules are engaged
under different circumstances.

rjc@edai.ed.ac.uk (Richard Caley) (05/10/88)

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:

> { that 'reality' is seen in some theories as  the negotiated outcome of 
>   social processes }

In reply acha@centro.soar.cs.cmu.edu writes

>You wish to assert that reality is a negotiated outcome of social processes ?

No, he said that some theories say that, that there are good arguments for
them, and that AI tends to ignore them.

> Imagine Mr. Cockton, you are standing on the 36th floor of a building and 
> you and your mates decide that you are Superman and can jump out without 
> getting hurt.

Then there is something going wrong in the negotiations within the group!!

> By the 'negotiated outcome of social processes' claptrap, you really are 
> Superman.

Calling something claptrap is not a very good argument against it, 
especially when you are talking through your hat.

Saying that Y is the result of process X does not imply that any result
from X is a valid Y. In particular 'reality is the outcome
of social negotiation' does not imply that "real world" (whatever that is)
constraints do not have an effect. 

People's view of the world is very restricted and so their model of how
the world really is is massively underconstrained; social and logical
norms are used to fill in the gaps. I have no proof that Australia exists
since I've never been there; I accept the picture of reality which has
developed in this society, which includes the existence of Australia.

> Would you then jump out and have fun ?

If we decided that I was Superman then presumably there is good evidence
for that assumption, since it is pretty hard to swallow. _In_such_a_case_
I might jump. Being a careful soul I would probably try some smaller drops
first!

To say you would not jump would be to say that you would not accept that
you were Superman no matter _how_ good the evidence. Unless you say that the
concept of you being Superman is impossible ( say logically inconsistent with
your basic assumptions about the world ), which is ruled out by the 
presuppositions of the example ( since if this were so you would never come
to the consensus that you were him ), you _must_ accept that sufficient
evidence would cause you to believe and hence be prepared to jump.

-- 

	Real Mailers		rjc@uk.ac.ed.edai
	Imaginary Mailers	. . . .!mcvax!ukc!cstvax!edai!rjc

cwp@otter.hple.hp.com (Chris Preist) (05/10/88)

M. Brilliant, you have made a very valid, but not entirely fair, point.
This debate is not really about Cog Sci style AI vs Engineering AI, but 
rather about the claims of Cog Sci style AI. Even if someone waves a magic 
wand and proves once and for all that machines cannot be 'intelligent',
it does not mean that nothing useful can come out of AI.

Now THAT's ANOTHER debate!!!!!

Chris

acha@centro.soar.cs.cmu.edu.UUCP (05/12/88)

In article <86@edai.ed.ac.uk> rjc@edai.ed.ac.uk (Richard Caley) writes:
>> Imagine Mr. Cockton, you are standing on the 36th floor of a building and 
>> you and your mates decide that you are Superman and can jump out without 
>> getting hurt.
>Then there is something going wrong in the negotiations within the group!!

Oh, yes! There definitely is! But it still is a "negotiation" and it is 
"social"!  Since 'reality' and 'truth' are being
defined as "negotiated outcomes of social processes", there are no 
constraints on what these outcomes may be. I can see no reason why a group
couldn't conclude just that ( esp. since physical world constraints are not
necessarily a part of these "negotiations").

>Saying that Y is the result of process X does not imply that any result
>from X is a valid Y. In particular 'reality is the outcome
>of social negotiation' does not imply that "real world" (whatever that is)
>constraints do not have an effect. 

Do we have "valid" and "invalid" realities around ? 

>If we decided that I was Superman then presumably there is good evidence
>for that assumption, since it is pretty hard to swallow. _In_such_a_case_
>I might jump. Being a careful soul I would probably try some smaller drops
>first!

Why would it be pretty hard to swallow ? And why do you need "good" evidence ?
For that matter, what IS good evidence - that ten guys ( possibly deranged or
malicious ) say so ? Have you thought about why you would consider getting some
real hard data by trying out smaller drops ? It is because the Physical World
just won't go away and the only real evidence that even you would accept is the
actual outcome of physical events. The Physical World is the final arbiter of
"reality" and "truth" no matter what process you use to decide on your course
of action.


>To say you would not jump would be to say that you would not accept that
>you were Superman no matter _how_ good the evidence.

If you accept the consensus of a group of people as "evidence", does the degree of
goodness depend on the number of people, or what ?

> Unless you say that the
>concept of you being Superman is impossible ( say logically inconsistent with
>your basic assumptions about the world ), which is ruled out by the 
>presuppositions of the example ( since if this was so you would never come
>to the consensus that you were him ), then you _must_ accept that sufficient
>evidence would cause you to believe and hence be prepared to jump.

Ah, well.. if you reject logical consistency as a valid
basis for argument then you could come to any conclusion/consensus in the
world you please - you could conclude that you (simultaneously) were and 
were not Superman! Then, do you jump out or not ? ( or maybe teeter at the edge :-)) 
On the other hand, if you accept logical consistency as a valid basis for
argument - you have no need for a crowd to back you up. 

Come on, does anyone really believe that if he and his pals reach a consensus on
some aspect of the world - the world would change to suit them ? That is the
conclusion I keep getting out of all this nebulous and hazy stuff about 
'reality' being a function of 'social processes'.
-- 
Anurag Acharya		      Arpanet: acharya@centro.soar.cs.cmu.edu  	
			     
"There's no sense in being precise when you don't even know what you're
 talking about"   -- John von Neumann