[comp.ai.philosophy] Truth and Rationality

corey@athena.mit.edu (Corey L Lofdahl) (12/03/90)

          Truth and Utility

I was in philosophy class the other day, listening to a lecture
on John Stuart Mill, when some undergrad asked, "does truth
provide utility?"  The professor asked him to elaborate, and he
came up with some pseudo-interesting examples.  For instance,
what if a gigantic meteor were about to smash into the Earth?  Would
you want to know the truth, or would you want to remain content
in a comfortable lie, living out your remaining hours in blissful
ignorance?  Another offered the example of nuclear power.
Wouldn't it be better not to know so much physics?  That way atom
bombs wouldn't exist.  For me the answer to that one would have to
be "no" because it would nuke my favorite quote (so to speak) from
Senator C. Johnson of Colorado, who said in 1945 that, for the
first time, the United States, "with vision and guts and plenty of
atomic bombs, ... [could] compel mankind to adopt the policy of
lasting peace ... or be burned to a crisp."  Hope Johnson wasn't
a CU student.  But this comment would have made Arthur
Schopenhauer happy.  You remember him, the German philosopher who
argued that the best anyone could hope for was renunciation of
desire, temporary absence of pain through the contemplation of
high art, and - with any luck - the eventual extinction of the
species.  Funny guy Schopenhauer, went over really big at
cocktail parties.

Anyway, this whole dialogue reminded me of my favorite argument
by Bertrand Russell who wrote in "Power", 

       "When a British military expedition invaded Tibet
       in 1905, the Tibetans at first advanced boldly,
       because the Lamas had given them magic charms
       against bullets.  When they nevertheless had
       casualties, the Lamas observed that the bullets
       were nickel-pointed, and explained that their
       charms were only effective against lead.  After
       this, the Tibetan armies showed less valor."

This little vignette, in a nutshell, captures the whole essence
of the problem.  These sucker Tibetan soldiers were fed a lie by
their Lamas that increased the Lamas' utility - at their expense
too, I might add.  This lie, implanted firmly in the heads of the
Tibetan soldiers, enabled them to attack like frenzied ... well
... like frenzied, nutty, madcap Tibetans.  (please, if any of
you out there in tpt-land are Tibetan and offended, don't write
me.  I'm far too busy to be abused right now.)

Unfortunately this false belief effectively transformed these
zany Tibetans into dead Tibetans.  And the un-dead and now
less-zany Tibetans returned to the Lamas and said, in their own
Tibetan way, "How come this belief that we initially accepted as
the truth, when tested empirically, resulted in very objective
death?"  

Now the Lama has a problem.  If he comes clean and says, "Hey
guys, I was just funning you.  You see, I told you that simply
because it had utility for me as an elite in our society," then
the soldiers may take away his company car and Lama
certificate.  In fact, if the troops were sufficiently irritated
as I would be if I lost my best buddy and almost got my own butt
shot off by the British, then I'd go find some bullets - nickel
or lead - and stick them into the Lama until his zaniness reached
its nadir.  No, instead he feeds these poor guys another lie, "Oh
yeah, sorry guys.  These bullets were nickel tipped so of course
the goat's feet I gave you won't work.  Goat's feet only work for
lead.  Sorry," and then shrugs his shoulders.  I can just see one
of these poor soldiers walking away from the Lama after the
explanation, saying to his surviving buddy, "Boy, those Lamas sure
are smart.  I mean, this time it didn't work but it really IS
amazing what they CAN do with goat's feet today.  Just a little
more testing at Lama-Labs and they'll figure out the
nickel-tipped problem."  Sounds like nanotechnology.

Well, enough storytelling.  If you've read this far, then
you're probably interested enough to consider the question that
really interests me: is it rational always to believe the
truth?  "Rational," of course, is used here in the sense of
maximizing one's expected utility - the Maximum Expected
Utility (MEU) criterion.

The above example shows a case where a lie increases someone's
expected utility - the Lama's.  The lie the Lama puts forward
increases each soldier's willingness to charge British rifles,
which in the Lama's opinion increases utility.  The soldiers, of
course, who value their own lives, see the lie as decreasing
their expected utility.  Thus the difference must somehow be
reconciled.
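For the arithmetic-minded, that conflict can be sketched as a toy
expected-utility calculation.  Every number below is invented for
illustration - the point is only that the very same lie scores
positively under the Lama's utility function and negatively under
a soldier's:

```python
# Toy expected-utility comparison of the Lama's lie.
# All probabilities and utilities are invented for illustration.

# If a soldier charges while believing the charm works:
p_die = 0.6    # assumed chance the charging soldier is killed
p_kill = 0.7   # assumed chance he inflicts British casualties

# The Lama values enemy casualties, not individual soldiers.
lama_eu_lie = p_kill * 10          # enemy losses worth 10 utils to him
lama_eu_truth = 0.1 * 10           # truthful soldiers barely charge

# The soldier values his own life far above anything else.
soldier_eu_lie = p_kill * 1 - p_die * 100
soldier_eu_truth = 0.1 * 1 - 0.05 * 100  # he hangs back, rarely dies

print(lama_eu_lie > lama_eu_truth)        # the lie pays for the Lama
print(soldier_eu_lie > soldier_eu_truth)  # but not for the soldier
```

Under these made-up numbers the lie raises the Lama's expected
utility from 1 to 7 while dropping the soldier's from about -4.9
to about -59.3: the reconciliation problem in miniature.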

Now I can already hear the subjectivists who will no doubt argue,
"But how do you know it's the truth?"  Well, those soldiers got a
pretty good idea when, charging British rifles protected by
nothing but goat's feet, they were mowed down like ... well ...
they just got mowed down, okay?

Now here is the main problem with the subjectivist argument. 
They think being a subjectivist grants them the right to be
arbitrary.  Even the august conservative columnist George
F. Will bought into this common misinterpretation when he wrote
that modern day philosophy is, "so committed to subjectivism that
it believes only in believing," which is true for hack media
types and Democrats, but it certainly isn't true here at MIT.

Let's introduce a little Bayesian notation to make the discussion
that much more academic, ergo more funner.  Remember that P(A/B)
reads "the probability of A given B".  Let's let A equal "death
by nickel tipped bullet" and let B equal "wearing a goat's foot".
Therefore, P(A/B) means, "the probability of death by nickel
tipped bullet given the wearing of a goat's foot."  At the outset
of the battle, the soldiers believed P(A/B) = 0.  But as the
casualties increased, so did P(A/B).  So overwhelming was the
evidence that for the most prudent soldiers, P(A/B) = 1; they
were sure that if they were hit with a nickel tipped bullet, they
would die.  And for those of you who want to argue, "Well maybe
he'll just get injured or wounded," you KNOW what I mean. 
Furthermore, I will switch this argument to black ravens, and I
think we all know how painful that can be.

So what does this have to do with subjectivity?  Let's abstract
upwards and say that a brain is chock-full of different
Bayesian probabilities which can, by simple isomorphisms, be
transformed to equivalent if-then rules a la Artificial
Intelligence or semantic networks a la Godel-Escher-Bach.  So in
any person's head, there resides lots of probabilities: P(A/B),
P(C/D), P(A/C), P(F/G), P(D/A), .... up to n where n is - like -
really, really big.  Now a person is absolutely free to choose
these probabilities - whatever he desires.  On that there is no
question, and in these terms, people truly are subjective. 
That's why kids are kids: their n isn't very big and their
probabilities are all out of whack.  As they become adults, their
subjective minds more accurately map objective reality.
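To caricature that probabilities-to-rules isomorphism in a few
lines of code (the propositions and the threshold below are
invented, and a real semantic network is rather more involved):

```python
# Caricature of the claim that a head full of conditional
# probabilities P(consequent/antecedent) can be thresholded into
# AI-style if-then rules.  All propositions and numbers invented.
beliefs = {
    ("death_by_nickel_bullet", "wearing_goats_foot"): 0.95,
    ("rain", "dark_clouds"): 0.80,
    ("rain", "clear_sky"): 0.05,
}

def to_rules(beliefs, threshold=0.5):
    """Keep only the strong conditionals as if-then rules."""
    return [f"IF {b} THEN {a}"
            for (a, b), p in beliefs.items() if p >= threshold]

for rule in to_rules(beliefs):
    print(rule)
```

The weak conditional (rain given clear sky) simply never becomes
a rule, which is one crude way a subjective mind could come to
track objective regularities.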

So while minds are admittedly subjective, I would argue that
utility is maximized the more closely subjective minds map
objective reality.  Will they ever map it exactly?  No, of
course not.  But
some minds get closer than others and they maximize their
utility.  So I would argue that reality is objective, and
subjectivity can be broken up into normative and positive
sections.  Normative meaning what people SHOULD believe, and
positive meaning what people DO believe.

Now for those cases in which some benefit is received from an
aberrant belief, I would argue that this occurs only within some
sub-realm, and that eventually, when other beliefs attempt to base
themselves on that sub-belief, the evidence will mount that
something is wrong.  For instance, take the Tibetan P(A/B).
Believing P(A/B) = 0 may make the soldiers fiercer in battle, but
that belief inhibits the progress of their physics.  In fact, such
beliefs by the Catholic church clearly inhibited scientific
development in Europe.  So while in limited cases, a lie may 
maximize expected utility, ultimately it decreases it; and
it is to be expected that, like the Lama, those who most profit
from those lies will continue to provide ad hoc explanations
until the whole belief edifice crumbles.  That's why we have
scientific revolutions a la T.S. Kuhn - to help people to better
map their subjective minds onto objective reality.

-----------------------------------------------------------------
          Corey L. Lofdahl         corey@athena.mit.edu
    A lie may fool someone else, but it tells you the truth: 
                         you're weak
-----------------------------------------------------------------

hiho@csd4.csd.uwm.edu (Mark Peterson) (12/03/90)

From article <1990Dec2.192939.11608@athena.mit.edu>, by corey@athena.mit.edu (Corey L Lofdahl):
> 
>           Truth and Utility
> 


[[lots of stuff... and by the way, pretty entertaining I thought.. deleted]]


Too busy to be abused, but not too busy to write all that?  I should
be so busy.  -)

A couple of thoughts:

re: Tibetan soldiers.  Do you suppose there's any utility in death?  -)

re: Truth.  No utility in the truth.  The truth, as everyone knows,
always hurts.


hiho
-- 
mark peterson         | hiho@csd4.csd.uwm.edu | "...and you know where
dept of philosophy    | voice: (414)335-5200  | *that's* at..."
uw-washington county  |                       |   Remark overheard at a too,
west bend, wi.        |                       |  too casual restaurant.