[sci.philosophy.tech] The nature of knowledge

sarge@thirdi.UUCP (Sarge Gerbode) (01/01/70)

In article <1161@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:

>One of the more interesting results of semiotics is that in order for
>something to serve as a "sign", it must be possible for the "sign" to
>be in error.  In other words, there can be no symbolic communication
>without the possibility of lying (or at least being mistaken).  If
>messages can inherently be false, then communication inherently
>requires belief in the veracity of the message.
>
>It is precisely the ability to be false that gives messages their power;
>it would otherwise be impossible to discuss the hypothetical cases, the "might
>be's", "might have been's", "could be's" and "should have been's".
>Abstractions require the ability to signify what is not.

Excellent point. Makes a lot of sense, for most messages.

Some messages, such as recitations of poetry, paintings, various aspects of
body language, songs, and the like, can't really be mistaken or true.  Such
messages are meant to convey a certain experience (i.e. a mental picture,
sensation, or sense of experiencing something) to the receiver (sometimes the
exact experience is not specified by the originator).  Other messages are
statements that *could* be meaningfully described as true or false.

These messages, I would maintain, are those messages which refer to a
concept.  In these cases, the intent of the communicator is to convey a
concept, rather than an experience.  It is in the nature of a concept to be
considered true or false, but, in any case, a logical possibility.

Of course, we could define "message" to exclude non-conceptual communication,
in which case what you have said is exactly true.
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

sarge@thirdi.UUCP (Sarge Gerbode) (07/02/87)

I am interested in the issue being discussed by Richard Carnes, Eric Raymond,
and Gene Ward Smith, namely: what constitutes bona fide knowledge.  I'd like to
present a rather radical approach to the issue as, if you like, a Devil's
Advocate position, though one I currently think is correct.  I'm willing,
however, to be argued out of it.

Knowledge takes on a different appearance when looked at from the viewpoint
that an individual takes at a particular time.  Michael Polanyi's *Personal
Knowledge* is a good reference on this point, though my views could be
considered an extrapolation (or a corruption) of his.

At any particular moment, from the viewpoint of an individual, knowledge and
belief (meaning not a weak opinion but a firmly-held conviction) are one and
the same thing.  If I believe something (such as the truth of *this*
philosophical position), I say "I *know* it's true."  In other words, it is
knowledge, to me.  If you agree with my belief, you also call it knowledge,
because then it is a belief of *yours* and therefore knowledge for you.  If
you don't agree or aren't sure, you call it a "belief" of *mine*.  It isn't,
then, a belief of *yours*, in the sense of belief I gave above.  That, in my
view, is what knowledge actually is.

From this point, two further very important questions can be asked:

1.  What criteria do I (or others) commonly use (and what criteria *ought* we
    to use) to decide what to believe and what not to believe?

2.  What kinds of arguments or demonstrations could I adduce to get others to
    agree with my beliefs?

(1) and (2) could be reduced to the single question:

"What are the criteria that individuals use in fixing belief, and what
criteria *should* they use?"

These criteria can and do vary enormously from person to person.  And for one
person, these criteria vary at different times, depending on the circumstances
or context.  These criteria have nothing to do with what *knowledge* is, from an
individual viewpoint, namely belief or acceptance of ideas as true.  They have a
lot to do with how we *decide* what we know (or believe).

The most common criterion for belief (or knowledge) is authority.  The vast,
vast majority of my beliefs are based on the assertions of others whom I
trust, *especially* those beliefs that are called scientific.  The fact is, I
simply have not had and could not have the time to do all the necessary
experiments myself, so I must rely on authorities (and textbooks, and the
like) to tell me what I should believe.  Now, of course, I assume that if I
*did* have the time to carry out any particular experiment, I would duplicate
the results reported by the authorities.  But this is a mere assumption, based
on acceptance of authority.  I don't think there's anything *wrong* with
believing trusted authorities.  There's a fine art to knowing which
authorities to trust.  But, in point of fact, we do *not* generally engage in
experimentation and observation as the basis of *most* of our beliefs.  And
with many of these beliefs, we never find ourselves in a position to *use*
them in order to determine for ourselves whether they work or not.  In fact,
if we did not absorb beliefs without question as infants and children, we
wouldn't have any kind of world-view at all, not even one from which to
question certain ideas.

It is actually more in our own personal lives than in scientific pursuits that
we use empirical and pragmatic criteria for fixing belief.  In the past, on
several occasions, when I ate a certain kind of ice cream I felt sick, so I
assume that will occur in the future (an empirical criterion).  Or I find that
the assumption that others usually have good intentions is a workable
assumption, in that acting on it leads to good results (a pragmatic
criterion).  And so forth.  Other beliefs I hold because they are elegant or
aesthetic.  A certain chord on a guitar "feels" good, and so I accept it as
the correct chord.  Of two theories that fit the known facts, I would tend to
choose the one that is the most elegant or the simplest.  Some people accept
certain things as true because they find these things comforting, or because
they are novel or bizarre, and they are tired of boringly acceptable ideas.

Now it is a perfectly acceptable activity to advise people as to *how* they
should decide what to think, to believe, about the world.  A follower of
William of Occam would advise a bias towards simplicity; a hippie might say,
"If it feels good, believe it"; a psychic might advise following intuition; a
theoretical physicist might advise mathematical elegance; a social scientist
might advise attending to statistics; etc.  To me, however, it would seem
wrong to say that just *one* criterion (such as a pragmatic one) will fit all
occasions.  It seems best to say that the criteria for belief should fit the
context to which that belief is appropriate.

To summarize:  As each of us looks at what he knows in his own life, what he
knows at a particular time is coextensive with what he believes at that time.
Thus, for an individual at a particular time, knowledge and belief are
equivalent.  Different methods of assigning belief to various ideas are
appropriate to different situations.  Knowing, and the various criteria we use
for deciding what we know and what we don't know, are two different things.
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

jfbuss@water.UUCP (07/03/87)

In article <48@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>At any particular moment, from the viewpoint of an individual, knowledge and
>belief (meaning not a weak opinion but a firmly-held conviction) are one and
>the same thing.  If I believe something (such as the truth of *this*
>philosophical position), I say "I *know* it's true."  In other words, it is
>knowledge, to me.

A distinction between knowledge and true beliefs is not made in
everyday discourse.  We should not expect "know" in everyday use to
refer to "knowledge" in the philosophical sense.  The phrase "I know
it's true" usually means "Stop arguing with me.  I'm right, even though
I can't give a good reason."

If you want to argue that the philosophical concepts of "knowledge" and
"true beliefs" are the same, that is proper.  But this is a different
question from that of the common use of these words.


>To summarize:  As each of us looks at what he knows in his own life, what he
>knows at a particular time is coextensive with what he believes at that time.
> ...  Knowing, and the various criteria we use
>for deciding what we know and what we don't know, are two different things.

There are three things here:  what we have "knowledge" of (in the
philosophical sense), what we say we "know," and our criteria for
deciding what things to put in the second category.

eric@snark.UUCP (Eric S. Raymond) (07/06/87)

In article <48@thirdi.UUCP>, sarge@thirdi.UUCP (Sarge Gerbode) writes:
> At any particular moment, from the viewpoint of an individual, knowledge and
> belief (meaning not a weak opinion but a firmly-held conviction) are one and
> the same thing.  If I believe something (such as the truth of *this*
> philosophical position), I say "I *know* it's true."  In other words, it is
> knowledge, to me.  If you agree with my belief, you also call it knowledge,
> because then it is a belief of *yours* and therefore knowledge for you.  If
> you don't agree or aren't sure, you call it a "belief" of *mine*.  It isn't,
> then, a belief of *yours*, in the sense of belief I gave above.  That, in my
> view, is what knowledge actually is.

This is a correct *psychological* view of the relation of 'belief' and
'knowledge' to the believing mind, but it sidesteps the real issue, which
is the degree of confirmation of beliefs and how confirmation happens.

Also, it is quite possible for two people to have a shared 'belief'
that is not defined as 'knowledge' between them. Have you ever discussed
theology with a couple of Unitarians (for example)? 

Even if one were to accept your proposal as stated, there are problems.

	1. Your terminology doesn't solve any problems. "What are the
	   proper criteria for forming beliefs?" is not formally superior
	   to "What strategies lead to valid knowledge?", though I agree
	   that the connotations and emphases are different.

	2. Your terminology erases a useful distinction between

	   belief = weakly confirmed or not yet predictively checked
	   knowledge = strongly confirmed, successfully used for prediction

You later state that you think that one's method for evaluating beliefs
should vary, depending on context. This I completely disagree with, because
it takes you right back to a subjectivist "truth is what I *choose* to believe"
frame. Furthermore, this premise is unnecessary.

All the 'truth' cases you describe can be viewed as assertions about the
predictive value of statements. What varies is the kind of prediction being
made. In the case of (say) an equation in physics, one is predicting the
behavior of particles and forces; in the case of 'better' judgements about
musical chords, one is predicting the future responses of one's own auditory
and nervous system (and possibly the auditory/nervous systems of others).

Whether you accepted a belief on authority may be psychologically important
or not, but should have nothing to do with your methods for *checking*
beliefs. If I tell you that oxygen has an atomic number of 8, and remark
that I learned this from the CRC Handbook, I am making a predictive
statement about weighing oxygen which should be *tested* by weighing
oxygen; the question of my 'authority' response to the book only needs
to be opened if a) you find that I predicted incorrectly and b) upon seeing
your results I fail to be convinced.

Translating 'x is true' or 'x is a valid belief' into 'x predicts future
consequences y' and then testing y in some way isn't just a pragmatically
good thing, it is the *only* test of 'truth' that doesn't degenerate into
circularity or babble. If you doubt this, try to come up with a
counterexample. Try very hard. I shall be interested to see what, if
anything, you evolve.

And, BTW, welcome to the discussion. I criticize (and may continue to do
so) but I liked your posting.
-- 
      Eric S. Raymond
      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax}!snark!eric
      Post:  22 South Warren Avenue, Malvern, PA 19355
      Phone: (215)-296-5718

eric@snark.UUCP (Eric S. Raymond) (07/06/87)

I'd have used email for this reply (as you should have for your flame),
but a public attack like that demands a public rejoinder.

In article <9871@duke.cs.duke.edu>, mps@duke.cs.duke.edu (Michael P. Smith) writes:
> Imagine a discussion on computer science with the summary line "I know
> what I'm talking about" and the reason given was that "I minored in
> computer science."

I'm sorry you interpreted it that way. "I know what I'm talking about."
was a response to some snottiness in an earlier posting by Gene Ward Smith
in which he imputed that I was using the terminology incorrectly.

I had no intention of claiming special access to Final Answers -- and
"if I have seen far, it is because I have stood on the shoulders of giants"
-- Democritus, William of Ockham, C. S. Peirce, Bertrand Russell, Ludwig
Wittgenstein, Alfred Korzybski (to name but a few).

And, BTW, I subsequently got email from 3 netters and (just this morning)
a transcontinental phone call from a fourth congratulating me on "saying
things that needed to be said" in this debate. I guess 'sophomoric tone'
is in the eye of the beholder.

Now: do you have anything constructive to contribute to the discussion?
-- 
      Eric S. Raymond
      UUCP:  {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax}!snark!eric
      Post:  22 South Warren Avenue, Malvern, PA 19355
      Phone: (215)-296-5718

sarge@thirdi.UUCP (Sarge Gerbode) (07/06/87)

In article <1022@water.UUCP> jfbuss@water.waterloo.edu (Jonathan Buss) writes:
>
>There are three things here:  what we have "knowledge" of (in the
>philosophical sense), what we say we "know," and our criteria for
>deciding what things to put in the second category.

I'm pleased you concur with the distinction between knowledge and the criteria
for deciding what we know.  I think it's an important distinction.  One could
arrive at a piece of knowledge by a variety of different paths (e.g. you could
see it on TV, see it with your eyes, hear it, figure it out, etc.) and it
could still be the same piece of knowledge (e.g. the knowledge that Kennedy was
shot).

I'm not entirely sure I follow what you're saying when you talk about
"philosophical knowledge" as a different sort of knowledge.  Do you mean that
when an ordinary person says "I know X", he is saying something different from
what a *philosopher* is saying when he says, "I know X."?  What's unclear is
what the special *philosophical* significance of the word "knowledge" is.  I'd
be interested to hear your views on the subject.

The meaning of "know" I am using relates to the knowledge that we operate on
in our everyday life, in all of our various activities, the knowledge that
makes a practical difference in our lives, the knowledge that we are aware of
experiencing.

What is meant by "philosophical knowledge", perhaps, might be *provable*
knowledge, i.e. knowledge whose truth can be demonstrated to others, as many
(perhaps most) kinds of personal knowledge cannot. For instance, I have a
certain idea in my head, e.g. I'm thinking of a number from 1 to 10.  I know
which number I have in my head with great certainty (6, actually), but I'm at
a loss to know how I could prove the truth of that belief to you.  So would
that not be knowledge, in the philosophical sense?  On the other hand, if I
assert that I have a cit500 terminal in front of me (which, unfortunately, I
do), *that* might be provable by demonstration.  So would that constitute
philosophical knowledge?  It seems to me, though, that the provability of a
belief has nothing to do with whether it is knowledge for an individual or not.
In many ways, my knowledge of which number is in my head is *more* certain and
philosophically respectable than my knowledge of the physical universe.

Another possibility is that, by "philosophical knowledge", you mean some sort
of *absolute* knowledge.  "The number I am thinking of" is about as close to
absolute knowledge as I can imagine.  I do not see, though, that this is
really any different from absolute certainty or absolute belief.  Those things
of which I am absolutely certain (at a given moment) are those things that I
know, at that moment.  Belief admits of degrees in exactly the same way that
knowledge does.  To say "I know X with certainty," is equivalent to saying "I
believe X with certainty"; to say "I know X is false" is equivalent to saying,
"I disbelieve X with certainty", and to say "I know X may be true," is
equivalent to "I have a moderate (or small) degree of certainty in the belief
that X."

Another possibility is that, in talking about "philosophical knowledge", you
are departing from the criterion I gave as the basis for my assertions, namely
that we were talking about an individual person at a particular moment.  When
you allow for a lapse of time or more than one individual, then the distinction
between knowledge and belief lies in disagreement.  If another person disagrees
with my "knowledge" at a given time, then he calls it a mere "belief" of mine.
If he agrees, he calls it "knowledge".  If I agree with a former belief of
mine, I call it "knowledge"; if I no longer agree with that belief, I call it a
mere belief (and a false one at that).

I don't know if I've covered what you meant by "philosophical knowledge", and
perhaps I should have been more patient and waited for you to explain what you
meant by the term before going off half-cocked.  But...

Also, "know", in everyday use, doesn't *necessarily* or even *usually* mean
"Stop arguing with me. I'm right, even though I can't give a good reason."
Seems to me it is more likely to mean "I believe this and I can prove it!".
Actually, "I know X" and the simple assertion "X" are essentially the same, and
would be verified in the same way.

>A distinction between knowledge and true beliefs is not made in
>everyday discourse.  We should not expect "know" in everyday use to
>refer to "knowledge" in the philosophical sense.
>
>If you want to argue that the philosophical concepts of "knowledge" and
>"true beliefs" are the same, that is proper.  But this is a different
>question from that of the common use of these words.
>

By the way, I don't think, from this particular, here-and-now individual
viewpoint, that one can make a valid distinction between belief and true
belief.  To a person at a given time, *all* his beliefs are true beliefs (see
my "motto" below).  They only become false if he later disagrees with them (at
which point they are, of course, no longer beliefs, but *former* beliefs), or
if others disagree with them (at which points, they are not beliefs for those
others).  So knowledge = belief = true belief, from the viewpoint of an
individual at a given time.

But, to me, the kicker is that the knowledge a person has *is* knowledge he has
at a given time.  He never has knowledge otherwise than at a given time or
otherwise than as an individual.  So it seems to me that that's what knowledge
*is*.  If knowledge were anything else, it would not be something that a person
could have, so what would be the point of it?
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

jiml@alberta.UUCP (07/06/87)

In article <51@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
[discussion of philosophical knowledge]
>
>I don't think, from this particular, here-and-now individual
>viewpoint, that one can make a valid distinction between belief and true
>belief.  To a person at a given time, *all* his beliefs are true beliefs.
				       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>They only become false if he later disagrees with them (at
>which point they are, of course, no longer beliefs, but *former* beliefs), or
>if others disagree with them (at which points, they are not beliefs for those
>others).  So knowledge = belief = true belief, from the viewpoint of an
>individual at a given time.
>
>Sarge Gerbode
>Institute for Research in Metapsychology
>UUCP:  pyramid!thirdi!sarge

I'm not convinced of the portion high-lighted above.  It seems to me that
not all of my beliefs reflect true propositions (unless I'm incredibly
skilled in choosing what to believe).  Nonetheless, it is not true of
any particular belief p that I consider it to be false, otherwise I would
reject it and believe ~p.  Let us also assume that I have a finite number
of beliefs.
  Consider a much smaller scenario--one in which I have but three beliefs:

	1. Bel(p)
	2. Bel(q)
	3. Bel(~(p^q))

Surely if such a situation were to come about, you'd have no trouble
considering me to be inconsistent.  Yet my proposal is that we all
entertain a much greater version of precisely the same notion.  Are
we inconsistent, or just unreflective (are certain beliefs not questioned)?
Is this to deny

	4. Bel(Bel(p))

for some p?
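[The joint inconsistency of beliefs 1-3 above can be checked mechanically by
enumerating truth assignments for p and q.  A minimal sketch (my own
illustration, not part of Laycock's post), in Python:]

```python
from itertools import product

# The three beliefs from the post, as functions of the truth values of p, q.
beliefs = [
    lambda p, q: p,               # 1. Bel(p)
    lambda p, q: q,               # 2. Bel(q)
    lambda p, q: not (p and q),   # 3. Bel(~(p^q))
]

def consistent(bels):
    """A belief set is consistent iff some assignment satisfies every belief."""
    return any(all(b(p, q) for b in bels)
               for p, q in product([True, False], repeat=2))

# The full set is jointly unsatisfiable, yet every pair is satisfiable --
# the "unreflective" situation Laycock describes.
print(consistent(beliefs))                                        # False
print(all(consistent([a, b]) for a in beliefs for b in beliefs))  # True
```

[Each belief passes inspection on its own; only the conjunction fails, which
is why holding all three need not feel inconsistent from the inside.]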
-- 
  Jim Laycock		Philosophy grad, University of Alberta
  alberta!Jim_Laycock@UQV-MTS
    OR
  decvax!bellcore!ulysses!mhuxr!mhuxn!ihnp4!alberta!cavell!jiml

mps@duke.UUCP (07/07/87)

In article <51@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:

>By the way, I don't think, from this particular, here-and-now
>individual viewpoint, that one can make a valid distinction between
>belief and true belief.  To a person at a given time, *all* his
>beliefs are true beliefs (see my "motto" below).... So knowledge =
>belief = true belief, from the viewpoint of an individual at a given
>time.  
>-- 
>"From his own viewpoint, no one ever has false beliefs; he only *had*
>false beliefs."

Each of my beliefs I believe to be true, naturally.  But I do not
"here-and-now" believe that all my beliefs are true.  Such optimism
would be epistemically irrational.  "From my own viewpoint," not only
have I *had* false beliefs, I surely *have* some now.  I have never
had any false knowledge, however, nor do I now.

----------------------------------------------------------------------------
"Unless man have a natural bent in accordance with nature's, he has no
chance of understanding nature at all."		C. S. Peirce

Michael P. Smith	ARPA mps@duke.cs.duke.edu
----------------------------------------------------------------------------

sarge@thirdi.UUCP (Sarge Gerbode) (07/07/87)

In article <112@snark.UUCP> eric@snark.UUCP (Eric S. Raymond) writes:
>In article <48@thirdi.UUCP>, sarge@thirdi.UUCP (Sarge Gerbode) writes:
>> At any particular moment, from the viewpoint of an individual, knowledge and
>> belief (meaning not a weak opinion but a firmly-held conviction) are one and
>> the same thing.  If I believe something (such as the truth of *this*
>> philosophical position), I say "I *know* it's true."  In other words, it is
>> knowledge, to me.  If you agree with my belief, you also call it knowledge,
>> because then it is a belief of *yours* and therefore knowledge for you.  If
>> you don't agree or aren't sure, you call it a "belief" of *mine*.  It isn't,
>> then, a belief of *yours*, in the sense of belief I gave above.  That, in my
>> view, is what knowledge actually is.
>
>This is a correct *psychological* view of the relation of 'belief' and
>'knowledge' to the believing mind, but it sidesteps the real issue, which
>is the degree of confirmation of beliefs and how confirmation happens.
> ....
>	1. Your terminology doesn't solve any problems. "What are the
>	   proper criteria for forming beliefs?" is not formally superior
>	   to "What strategies lead to valid knowledge?", though I agree
>	   that the connotations and emphases are different.
>

You are right that it is a psychological approach (or I would say, a
METApsychological approach to these issues).  But, for practical purposes, you
will never see knowledge in the absence of the individual knower.  Whatever is
known is known by *someone*.  Therefore it does not seem to me to make sense to
look at the issue of knowledge in the absence of a knower and his viewpoint.
It is hard to conceive of what such knowledge would be like, if it existed.

You are right, however, in stating that I have not dealt with what the correct
criteria are for fixing belief, and you are also right that the question of
what strategies lead to valid knowledge (for a knower, of course) is the same
as the question of what the correct criteria are for forming beliefs.  I did
not mean to sidestep this important issue.  I simply didn't want to be too
awfully long-winded.  [I now see that I was anyway....] Perhaps we can get
into that as the discussion progresses.  Proper criteria (whatever they may
turn out to be) can be derived, I believe (or know?!), from an observation of
the criteria that are, in fact, universally used in forming beliefs.  If such
criteria didn't exist, people would find it almost impossible to reach
agreements, and that would be a real *drag*.

>Also, it is quite possible for two people to have a shared 'belief'
>that is not defined as 'knowledge' between them. Have you ever discussed
>theology with a couple of Unitarians (for example)? 
>

I wouldn't call this a "shared belief".  I'd call it "shared openmindedness"!

>	   belief = weakly confirmed or not yet predictively checked
>	   knowledge = strongly confirmed, successfully used for prediction
>

I think, in one usage of "belief", it means a weak conviction as opposed to a
strong one.  I don't mean it that way.  I sometimes mean it to denote a very
strong conviction.  At other times, I mean it to denote a conviction that can
range from complete disbelief, through various ranges of milder disbelief,
through various stages of stronger belief to complete conviction.  In the same
way, we can "know" things, positively and negatively, with varying degrees of
certainty.  From an individual viewpoint, truth varies from complete falsity
through various degrees of improbability, various degrees of probability, to
complete truth.  "A belief" also can mean "an idea" or "an hypothesis", apart
from whether it is believed by an individual or not, and I'm afraid I do
sometimes use it in that way, though I shouldn't.  So you could say that an
idea is a belief before it is believed :-).  Admittedly, this usage is
confusing, but I hope you understand what I mean from the context.   If usage
continues to be a problem, perhaps we'll have to agree on a convention in the
usage of different words to cover these different meanings.

I would agree that prediction is *one* of the criteria for acceptance of an
idea, but not the only one.  One tends to accept ideas that have
predictive value (i.e. that help us to predict things) over those which do
not.  For this reason, we tend to accept an explanation of how things happen
like Newton's Laws of Motion over an explanation like "God makes everything
happen", because, while both fit the facts equally well (actually, the
explanation about God fits the facts slightly better, because no conceivable
fact could contradict it...), the former explanation has greater predictive
value because God knows what God is going to do next (engineering is easier
than theurgy or divination)!

However, one does not, I think, form one's beliefs on the basis of one's
predictions.  One forms them, if anything, on the basis of present and past
experience, not predicted future experience.  Now maybe one forms beliefs on
the basis of the observation of a constant pattern of past experience.  The
ball bounced the last fifteen times, so if I drop it now, I believe it will
bounce again.  That is a belief about the future, a prediction (as many, but
not all, beliefs are), but it is itself based, not on prediction, but on a
past consistency of pattern over time (another criterion people have for
accepting beliefs).  One shouldn't confuse methods of *checking* beliefs with
methods of *forming* them or confuse what you can *do* with a belief with the
way in which you acquired it.  If, when you had a belief, you always had to
wait for it to be predictively checked before you could know it, you would
never know anything.  Every time you thought something was true on the basis
of past experience, you would still have to wait for the outcome of a further
test before it would be knowledge.  Ad infinitum, because the future test
becomes a past experience, and now you need *another* future test before it is
knowledge, etc..

Actually, it sounds as though what you are really saying is that all statements
are, in some sense, statements about the future -- i.e. that all knowledge can
be reduced (or expanded) to predictions.  On the face of it, that seems
counter-intuitive.  I know certain things about the past, and, generally, in
so knowing I am not even *thinking* about what I could do, in the future, to
*prove* what I said.  Some items of knowledge, in fact, seem *completely*
unprovable.  Any evidence for many (perhaps *most*) statements about the past
has long since disappeared, remaining subjectively only in the memory of an
individual.  In such a case, I think you *might* say that the statement that
such a thing happened is equivalent to a statement to the effect that if you
tried to remember it, you would remember it in the same way....  But this seems
an unnecessary gyration.  Seems to me it would be just as valid (or invalid)
to say that all statements about the future (all predictions) are really
statements about the past evidence that we sifted through to formulate them.
I'd rather take a statement at its face value.

>Translating 'x is true' or 'x is a valid belief' into 'x predicts future
>consequences y' and then testing y in some way isn't just a pragmatically
>good thing, it is the *only* test of 'truth' that doesn't degenerate into
>circularity or babble.

I cannot fight this assertion because it is tautologous.  Any test you could
make of a belief would *have* to come after you *had* the idea.  Therefore it
would, trivially, be in the future -- a prediction.  But, again, this is
different from the criteria you used to *formulate* the belief.

If you say that the predictive value of an idea (or a belief) is the *only*
criterion for its acceptance, then I would say that is contrary to my
observation, on the face of it.  If you say that that *ought* to be the basis
of belief, then I would say, "Why?" (and your answer would have to speak to my
current methods of fixing belief, otherwise I wouldn't believe you.)

>You later state that you think that one's method for evaluating beliefs
>should vary, depending on context. This I completely disagree with, because
>it takes you right back to a subjectivist "truth is what I *choose* to believe"
>frame.

I didn't say (I don't think) that the method for coming to believe something
*should* vary from context to context (although perhaps it should).  I said
that it *does* so vary.  It is useless, in a sense, to *prescribe* the way in
which people should fix beliefs.  If people could arbitrarily change their
criteria for belief, then you would indeed have the situation of people
choosing to believe whatever they want, a situation you find objectionable, as
do I (perhaps for different reasons).  But I think certain criteria are
unalterable.

And I have no problem with an individual-centered criterion for truth, since I,
for one, do, in fact, have to decide what's true.  I don't think that this
"subjective" focus leads to "Truth is what I choose to believe", however.  I
think there's a big difference between having subjective criteria for truth
and being arbitrary about how truth (or belief) is assigned.  There are at
least two major kinds of constraints on what one *can* believe:

1.  Experience -- the totality of all observed and remembered facts and our
current beliefs about them (in other words, one's current belief structure).

2.  The various universal criteria that are held in common amongst different
individuals, such as a tendency towards logical consistency, towards aesthetics
and pleasure, away from pain, and towards favoring beliefs that have heuristic
value (heuristic assumptions that tend to lead toward further and deeper
knowledge).

Within these constraints, there is considerable latitude, which accounts for
the differences in people's beliefs, as these constraints account for the
similarities.

Sorry if I waxed a bit long-winded.  I'd have to be even *more* long-winded if
I had to give a thorough account of the criteria for belief and why I think
they are what they are.

I *do* so enjoy being disagreed with by intelligent people.  When there's a
disagreement, provided communication continues, at least one person is bound to
learn something (probably both).  The challenge of meeting your arguments has
definitely made it necessary for me to rethink my position.  It's actually a
drag being *agreed* with all the time, don't you think?
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

mps@duke.cs.duke.edu (Michael P. Smith) (07/08/87)

In article <121@cavell.UUCP> jiml@cavell.UUCP (Jim Laycock) writes:
> ...  It seems to me that
>not all of my beliefs reflect true propositions (unless I'm incredibly
>skilled in choosing what to believe).  Nonetheless, it is not true of
>any particular belief p that I consider it to be false, otherwise I would
>reject it and believe ~p.  Let us also assume that I have a finite number
>of beliefs.
>  Consider a much smaller scenario--one in which I have but three beliefs:
>
>	1. Bel(p)
>	2. Bel(q)
>	3. Bel(~(p^q))
>
>Surely if such a situation were to come about, you'd have no trouble
>considering me to be inconsistent.  Yet my proposal is that we all
>entertain a much greater version of precisely the same notion.  Are
>we inconsistent, or just unreflective (are certain beliefs not questioned)?
>Is this to deny
>
>	4. Bel(Bel(p))
>
>for some p?
>-- 
>  Jim Laycock		Philosophy grad, University of Alberta
>  alberta!Jim_Laycock@UQV-MTS
>    OR
>  decvax!bellcore!ulysses!mhuxr!mhuxn!ihnp4!alberta!cavell!jiml

A closer analogue might be *omega-inconsistency*.  A formal system is
omega-inconsistent when for some open formula Px, its existential
closure (Ex)Px as well as the denial of each instance, ~Pa, ~Pb, etc.,
are provable.  No restriction to finitude required. Godel's first
incompleteness theorem as originally proven applied to omega-
consistent systems. 

Take a universe of propositions, let P stand for 'is false', and
substitute 'believed (by me)' for 'provable'.  Since we have
substituted a fuzzy notion for a precise one (in proof theory),
however, we only have an analogy here.  Doxastically, we are *like*
omega-inconsistent systems.
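As a toy illustration of the doxastic analogue (the variable names here are hypothetical, and this is a sketch of the analogy, not a formal system): a believer can endorse "some belief of mine is false" while, for each particular belief, declining to endorse its falsity.

```python
# Toy model of the doxastic analogue of omega-inconsistency described above.
beliefs = ["p", "q", "r"]

# Bel( (Ex) False(x) ): the believer holds that *some* belief is false...
believes_some_belief_is_false = True

# ...yet for each particular belief b, Bel(~False(b)): no witness is endorsed.
believes_b_is_false = {b: False for b in beliefs}

# The existential is believed, but no instance can be exhibited.
has_witness = any(believes_b_is_false.values())
assert believes_some_belief_is_false and not has_witness
```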

I don't see the relation to B --> BB, though.  OK, so I believe that
some of my beliefs are false.  Suppose, *per impossibile*, that I
have particular beliefs I believe to be false.  Still, by hypothesis
I have these beliefs, and I see nothing to prevent my believing that I
have them.

----------------------------------------------------------------------------
"The truth seems to be like the proverbial [side of a barn] which no one 
can fail to hit, ... but the fact that we can have a whole truth and not
the particular part we aim at shows the difficulty of it." 	Aristotle

Michael P. Smith	ARPA mps@duke.cs.duke.edu
----------------------------------------------------------------------------

jfbuss@water.UUCP (07/08/87)

In article <121@cavell.UUCP> jiml@cavell.UUCP (Jim Laycock) writes:
>  Consider a much smaller scenario--one in which I have but three beliefs:
>
>	1. Bel(p)
>	2. Bel(q)
>	3. Bel(~(p^q))
>
>Surely if such a situation were to come about, you'd have no trouble
>considering me to be inconsistent.  Yet my proposal is that we all
>entertain a much greater version of precisely the same notion.  Are
>we inconsistent, or just unreflective (are certain beliefs not questioned)?

A friend of mine claims the following happened to him:

1) He believed that he would run a particular errand at 2:00pm Friday.
2) He believed that he would attend his 2-hour class, which met
	Fridays starting at 1:00.

These beliefs co-existed quite comfortably for several days, because
he did not perform any logical analysis of them.  (For example, he did
not keep a planning calendar.)

I think that this kind of situation is common and not an exception.
Although the beliefs that are never tested or questioned are few, many
beliefs are not fully examined for a considerable period.  Hence one's
set of beliefs may be formally inconsistent.

wex@milano.UUCP (07/08/87)

In article <121@cavell.UUCP>, jiml@alberta.UUCP (Jim Laycock) writes:
>   Consider a much smaller scenario--one in which I have but three beliefs:
> 
> 	1. Bel(p)
> 	2. Bel(q)
> 	3. Bel(~(p^q))
> 
> Surely if such a situation were to come about, you'd have no trouble
> considering me to be inconsistent.

Perhaps this is alright for small cases, but in the real world, people
knowingly hold inconsistent beliefs.  My favorite example is the one
of the proofreader.  He has just finished proofreading a 350-page book
and seen all the typos corrected.  If we ask him "Do you believe there
is a typo on page <n> of this book?" for all 350 possible values of
<n>, he will say "no" each time.

However, if we ask "Do you believe there is a typo somewhere in the
350 pages of this book?" he will answer "yes."  Inconsistent?  Yes.
So why does he hold this set of beliefs?

The best answer I could give him was that his beliefs were not a
matter of simple truth/falsity, but were a matter of degree.  Thus,
the correct questions should have been "Do you believe that there is a
one-in-three-hundred-fifty chance that there is a typo on page <n> of
this book?"  To this, I claimed, he would have answered "yes."  This
makes consistent his reply of "yes" to the final question.

That is, given that he understands probability, and that there is a
1/n chance of a typo per page in an n-page book, it is reasonable to
say that there is a typo in the book.
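A quick back-of-envelope check of that arithmetic (a sketch, assuming independent pages, each with a 1/350 chance of containing a typo):

```python
# Probability that at least one typo survives, under the independence
# assumption stated above: 1 - (1 - 1/n)^n, which approaches 1 - 1/e.
n = 350
p_page = 1.0 / n                  # per-page typo probability
p_clean = (1.0 - p_page) ** n     # no typo on any page, roughly 0.367
p_some_typo = 1.0 - p_clean       # roughly 0.633: "yes" to the book question
assert 0.63 < p_some_typo < 0.64
```

So "no" is a good bet on every individual page, while "yes" is a good bet on the book as a whole, which is exactly the proofreader's pair of answers.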

[Side note: he was not satisfied with this answer.  He remarked that
he did not actively consider such probabilities in his answers and, in
fact, he really had no grasp of what a one-in-three-hundred-fifty
chance meant for proofreading.  His counter-claim was that my answer
was not an explanation, simply a way to rationalize a set of beliefs
that he, the belief-holder, considered inconsistent.]


-- 
Alan Wexelblat
ARPA: WEX@MCC.COM
UUCP: {seismo, harvard, gatech, pyramid, &c.}!sally!im4u!milano!wex

"Oh well, a touch of grey,
 Kinda suits you anyway."

andrews@ubc-anchor.uucp (Jamie Andrews) (07/08/87)

In article <9877@duke.cs.duke.edu> mps@duke.UUCP (Michael P. Smith) writes:
>Each of my beliefs I believe to be true, naturally.  But I do not
>"here-and-now" believe that all my beliefs are true.  Such optimism
>would be epistemically irrational.  "From my own viewpoint," not only
>have I *had* false beliefs, I surely *have* some now.  I have never
>had any false knowledge, however, nor do I now.

     So Michael feels that ( \exists X (Bel(X) & ~X) ) ...but he cannot
actually exhibit such an X, because he also feels that
( \forall X (Bel(X) -> ~Bel(~X)) ).  However, he does feel that there
is a "knows" connective, with the property that
( \forall X (Kn(X) -> X) ).  Is this a good summary?  Or are you
trying to avoid syntactic systems altogether?

     I was going to add a comment, but I realized I'm out of my depth.
Can anyone suggest a good summary paper of logics which encompass the
notion of belief?

--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"What made it special, made it dangerous"

sarge@thirdi.UUCP (07/08/87)

In article <9877@duke.cs.duke.edu> mps@duke.UUCP (Michael P. Smith) writes:
>>false beliefs."
>
>Each of my beliefs I believe to be true, naturally.  But I do not
>"here-and-now" believe that all my beliefs are true.  Such optimism
>would be epistemically irrational.  "From my own viewpoint," not only
>have I *had* false beliefs, I surely *have* some now.  I have never
>had any false knowledge, however, nor do I now.
>

I think a couple of points will help, here.  The first is that belief (and
knowledge) is often not absolute, but admits of degrees.  One operates in
terms of probabilities, from complete impossibility through various degrees of
unlikelihood through various degrees of likelihood to complete certainty.
Probably most of the things I know (or believe), I know (or believe) without
complete certainty.  If I have beliefs a, b, c, ... , n, with an average
probability of 99%, the probability of all of them being true may be
vanishingly small (multiplying together the separate probabilities). So what
you say is quite correct, that I can believe that I have at least one false
belief, without any of my *specific* beliefs being false, so far as I am
concerned.  This doesn't invalidate, however, the equivalence of knowledge and
belief, from a subjective viewpoint.  This is also a way of looking at Jim
Laycock's question whether you could believe A, believe B, but not believe A &
B.  I think you can do so without inconsistency by looking at it in terms of
probabilities.
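The multiplication argument can be checked numerically (a sketch, assuming independent beliefs each held at 99% probability):

```python
# If each of n independent beliefs is 99% likely true, the chance that
# *all* of them are true is 0.99**n, which shrinks toward zero quickly.
p_each = 0.99
p_all = {n: p_each ** n for n in (10, 100, 1000)}
# At n = 1000 beliefs, P(all true) is on the order of 4e-5:
# "vanishingly small", as claimed above.
assert p_all[10] > 0.9 and p_all[1000] < 1e-4
```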

Re: your having knowledge which will never turn out to be false -- This would
have to mean that you have assigned a 100% probability to that item -- i.e.
complete certainty -- or it could mean that it's a fixed belief that you are
unwilling ever to reconsider.  So in this sense, "knowledge" would be a belief
that one will never change a certain belief.  I don't know whether one could
have something of which one is absolutely certain and yet reconsider it.  This
seems to be the method of Hume, the Cartesian Reduction and also Husserl's
phenomenological reduction.  Certainly, things that were at one time regarded
as absolutely certain (such as the Newtonian universe) are now considered
fallacious.  I think one should say that these items *were* knowledge (or
beliefs) at the time and are now not knowledge (or beliefs).  Otherwise, since
virtually any opinion, however certain (excepting, perhaps, tautologies and
some mathematical truths), can turn out later, in the light of further data,
to be false, we would have to say that knowledge (in the sense of something
that will always be true) is impossible or unlikely.

Of course, once one has reconsidered a belief, one has entered into a new
moment in time, and the original premise of "from an individual viewpoint at a
given time" is violated.

Perhaps you could provide an example of something you regard as knowledge, as
opposed to belief.  That might bring things down to a more concrete and
understandable level.
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

mps@duke.UUCP (07/09/87)

In article <54@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>In article <9877@duke.cs.duke.edu> mps@duke.UUCP (Michael P. Smith) writes:
>>
>>Each of my beliefs I believe to be true, naturally.  But I do not
>>"here-and-now" believe that all my beliefs are true.  Such optimism
>>would be epistemically irrational.  "From my own viewpoint," not only
>>have I *had* false beliefs, I surely *have* some now.  I have never
>>had any false knowledge, however, nor do I now.

>So what you say is quite correct, that I can believe that I have at
>least one false belief, without [believing any] any of my *specific*
>beliefs [to be] false, so far as I am concerned.  This doesn't
>invalidate, however, the equivalence of knowledge and belief, from a
>subjective viewpoint. 
[my insertions]

First, your admission above contradicts your motto, which was my main
point.  Further, it certainly disproves the subjective equivalence of
knowledge and belief from my viewpoint, since I *do* believe I have
false beliefs, and I *don't* believe I have false knowledge, and in
fact I believe I don't have any false knowledge.  If that's not
subjective non-equivalence, what is? 

I take it that you too now believe that some of your beliefs are
false.  So if knowledge and belief are still indiscernable to you, it
must be because you believe that you have false knowledge.  Here it's
difficult to know what to say.  For myself, and I should have thought
most people, truth is a necessary condition of knowledge.  (Or at
least of what we might call "propositional" or "theoretical"
knowledge, as opposed to, say, knowing how to wiggle your ears.)  When
I find out that something I believed I knew is false, I don't say
	I knew it, but it was false.
I say
	I thought I knew it, but I was wrong.
(Well, there is a usage for the first.  But its non-literal status is
indicated by the obligatory stress on 'knew'.  Ain't English wonderful?)

>Re: your having knowledge which will never turn out to be false -- 
This section is based on a misinterpretation.  I don't believe
knowledge has to be absolutely certain or 100% probable.  Knowledge
can never be false for the same sort of trivial reason that a native Texan
can't have been born in Rhode Island.  

>Certainly, things that were at one time regarded as absolutely certain
>(such as the Newtonian universe) are now considered fallacious.  I
>think one should say that these items *were* knowledge (or beliefs) at
>the time and are now not knowledge (or beliefs). 

I would say that it was widely believed in 14th century Europe that
tarantula venom produces melancholy best relieved by music and
dancing, not that it was widely known.  I should be interested to know
if any non-Californians talk like Sarge. 

>Otherwise, since virtually any opinion, however certain (excepting,
>perhaps, tautologies and some mathematical truths), can turn out
>later, in the light of further data, to be false, we would have to say
>that knowledge (in the sense of something that will always be true) is
>impossible or unlikely. 

Here's your reasoning as I understand it:
	Consider a man with gun 10 yards from the side of a barn.
	Since virtually any bullet, however well-aimed, might, due to
	unforeseen circumstances, miss the target, we should have to
	say that a hit is impossible or unlikely.
All that follows from the fact that we might be wrong is that we might
be wrong.  How does the mere possibility of error suddenly become the
impossibility of avoiding error?

>Perhaps you could provide an example of something you regard as knowledge, as
>opposed to belief.

I know that I am sitting here, by this computer, wearing shorts,
holding this book in my hands, and so on.  I know that FOL is complete
and compact, and higher-order logics are not.  I know that whales are
mammals, that Great Britain has a monarch but France does not, that,
ounce-for-ounce, ice cream has more calories than carrots, that I
might be wrong about any of these things.  But I don't think I am,
else I wouldn't say that I know them.

Let me suggest a point that you might be trying to make.  Suppose with
the philosophers that knowledge is something like a well-founded true
belief.  It is commonly thought that two out of the three can be
subjectively checked, that is, that we can check "from the inside"
whether we believe something, and whether our belief is based on
sufficient evidence.  (Both these claims would be challenged by
current naturalistic epistemologists.) But we have no way of checking
the truth of our beliefs other than by accumulating evidence.  Truth
is not directly checkable.  So well-founded false beliefs and
well-founded true beliefs are subjectively indiscernible in the sense
that we can't tell them apart from the inside.  This doesn't mean that
there is no difference between them, that true belief = false belief,
nor that they are the same from anyone's viewpoint.  It simply means
that when we ask whether or not we know something, we answer three
questions with two answers.  Do we believe it?  Sure.  Do we have
enough evidence?  Yup.  Is it true?  Well, look at all this evidence!

----------------------------------------------------------------------------
"All nature actually is nothing but a nexus of appearances according to
rules; and there is nothing at all *without rules*." 	Immanuel Kant

Michael P. Smith	ARPA  mps@duke.cs.duke.edu
----------------------------------------------------------------------------

rjf@eagle.ukc.ac.uk (R.J.Faichney) (07/10/87)


It's a fair time since I studied epistemology (approaching 10 years), but
I seem to remember a consensus of opinion among my fellow students that
the best definition of knowledge was `justified true belief'. It must be
justified because to believe something which is true, but for a bad 
reason (eg I was told it by someone who I did not have good reason to trust),
should not be counted knowledge. The major difference between knowledge
and belief (in my opinion) is that the concept of knowledge is objective,
assuming the reality (and perhaps the ascertainability, for justification) of
the absolute truth or falsity of a proposition, while the concept of belief
says nothing about the outside world, only indicating an aspect of a state of
mind.

Unfortunately (ain't it always so) I cannot recall which philosopher first
proposed this (first part of the foregoing) definition.

It don't bother me none, but how come this discussion hasn't been burned
right out of sci.philosophy.tech?

Robin Faichney     ..mcvax!ukc!rjf    rjf@uk.ac.ukc

sarge@thirdi.UUCP (Sarge Gerbode) (07/10/87)

In article <9889@duke.cs.duke.edu> mps@duke.UUCP (Michael P. Smith) writes:
>In article <54@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>>So what you say is quite correct, that I can believe that I have at
>>least one false belief, without [believing any] any of my *specific*
>>beliefs [to be] false, so far as I am concerned.  This doesn't
>>invalidate, however, the equivalence of knowledge and belief, from a
>>subjective viewpoint. 
>[my insertions]
>
>First, your admission above contradicts your motto, which was my main
>point.  Further, it certainly disproves the subjective equivalence of
>knowledge and belief from my viewpoint, since I *do* believe I have
>false beliefs, and I *don't* believe I have false knowledge, and in
>fact I believe I don't have any false knowledge.  If that's not
>subjective non-equivalence, what is? 

I still don't see how my "motto" is violated here.  I defy you to enumerate
*any* false belief that you *currently* hold.  You can entertain the notion
that some of your beliefs are probably wrong, but you can't get down to cases
about it.  Any belief that is currently occupying your attention you must see
as true (or probable).  In other words, you must know it.  This doesn't mean
that a belief you *held* (perhaps as recently as a minute ago) cannot, at some
later time (maybe even the next minute), turn out to be false.  When you say
that you think some of your beliefs are false, to me that means that you don't
think some of them will stand the test of time.  As I stated in my last reply
to you, the saving grace that keeps us from being pig-headed is the
willingness to reconsider the beliefs that we currently hold to see if they
still hold after reconsideration.  This means we must consider the possibility
that, in general, our beliefs, or some of them, may be false.  This doesn't
mean we think any particular one is false.  So I would say my motto holds for
any belief you would care to mention.

I feel I've somehow failed to communicate my point clearly.  Much of the
problem is, I feel, a semantic one, as I've indicated in other postings.  I'm
mindful of Wittgenstein's warning about the bewitchment of our intelligence by
means of words. For instance, when you say:

>I would say that it was widely *believed* in 14th century Europe that
>tarantula venom produces melancholy best relieved by music and
>dancing, not that it was widely *known*.

[emphasis mine], I think there is a merely semantic problem.  The reason you
don't say it was widely *known* is that you don't currently believe that to be
true.  We generally don't apply the word "known", even to the past, when we
don't agree with the past belief.  But at the time, from the viewpoint of
those who might have believed that stuff about tarantulas, this was known.  In
other words, I'm not saying that it "was known" from our current viewpoint,
but that a person at the time would look at the world and say "It is known
that ...", when describing his belief.  Of course we don't consider that
knowledge now -- because we don't believe, now, that it's the case.  What's
happening, I think, is that our language is not well adapted to consistently
speaking from the viewpoint of an individual at a certain time (i.e. to a
subjective viewpoint), but keeps tricking us into shifting back and forth from
that viewpoint to the viewpoint of a (non-existent) omniscient observer.

You say:

>I know that I am sitting here, by this computer, wearing shorts,
>holding this book in my hands, and so on.  I know that FOL is complete
>and compact, and higher-order logics are not.  I know that whales ...
>...
>... that I might be wrong about any of these things.  But I don't think I am,
>else I wouldn't say that I know them.
>

What if it turned out you *were* wrong about one or all of these things (as,
for instance, if it were a dream)?  From your viewpoint at the time, you
would, truthfully, say "I know these things."  I don't doubt that you know
those things now.  But there is a potential future viewpoint from which you
could say, "I believed those things".  Anyway, I don't want to belabor this
point.  I think it's a semantic problem.

I have no argument with your last paragraph.  If knowledge is a:

1. Well-founded
2. True
3. Belief

then I agree that (1) and (3) are all we *could* ever have to work with.  We
can never know whether our beliefs are "true" in some absolute sense (which
subjectively means that we could never conceive of having to change our minds
about them).  Absent that, the truth of our ideas *is* our belief in them,
from our own present viewpoint.  The truth of others' ideas also *is* our
belief in or agreement with *them*.  For practical purposes, then, knowledge
*is* well-founded belief.

But I think we can compress this even further.  By "well-founded", you would
have to mean "sufficient evidence".  Sufficient for what?  Sufficient to
engender belief!  But obviously, if you believe something, then the evidence
must have been sufficient, for you, to engender your belief.  Therefore, we
can drop (1) as unnecessary as well, and we are left with (3), from the
viewpoint of an individual at a specific time.  Therefore, from this viewpoint
(i.e. subjectively), knowledge is belief.

I warned you that this was a Devil's Advocate position, didn't I?  Prove me
wrong!  In other words, change my beliefs!

By the way, Californians aren't *all* weird.  Just most of us.
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

mps@duke.cs.duke.edu (Michael P. Smith) (07/11/87)

References:<9877@duke.cs.duke.edu> <1537@ubc-cs.UUCP>


In article <1537@ubc-cs.UUCP> andrews@ubc-cs.UUCP (Jamie Andrews) writes:
>     So Michael feels that ( \exists X (Bel(X) & ~X) ) ...but he cannot
>actually exhibit such an X, because he also feels that
>( \forall X (Bel(X) -> ~Bel(~X)) ).  However, he does feel that there
>is a "knows" connective, with the property that
>( \forall X (Kn(X) -> X) ).  Is this a good summary?  

Yes.  

>Can anyone suggest a good summary paper of logics which encompass the
>notion of belief?

Jaakko Hintikka's KNOWLEDGE & BELIEF started it all 25 years ago, and
is still one of the best and most accessible treatments.  Hintikka is
a philosopher, and many philosophical problems are considered.
Wolfgang Lenzen's RECENT WORK IN EPISTEMIC LOGIC covers results
through the 70s. Neither of these are summary papers, though.  I
believe you would find one in Dov Gabbay's HANDBOOK OF PHILOSOPHICAL
LOGIC (v.2, I think).  I don't have these volumes in front of me, and
I don't remember what is there.  I do remember that the quality of the
articles is uneven, so *caveat lector*.

>--Jamie.
>...!seismo!ubc-vision!ubc-cs!andrews

----------------------------------------------------------------------------
"The absurd assumption ... that a performance ... inherits its title to
 intelligence from some anterior internal operation of planning..." G. Ryle

Michael P. Smith	ARPA  mps@duke.cs.duke.edu
----------------------------------------------------------------------------

sarge@thirdi.UUCP (Sarge Gerbode) (07/11/87)

In a personal mailing, Jonathan Buss (pyramid!ames!seismo!watmath!water!jfbuss)
makes some interesting contributions to this discussion which I think deserve
posting:

>We have been discussing the following concepts:
>
>1) Absolute truth; facts which are true (regardless of whether anyone
>   knows or believes them).
>
>2) True beliefs; things someone believes which happen to be absolutely
>   true.
>
>3) Beliefs; someone's understanding of the world.
>
>4) Personal knowledge; the facts which a person knows.
>
>We are asking
>
>A) Are these concepts different?
>
>B) What concepts are relevant to behaviour?
>
>C) Is the concept of absolute truth even meaningful?
>
>D) To what do the common usages of terms like "knowledge," "belief,"
>   and "truth" refer?
>
>To `A' the answer is emphatically yes -- in particular, absolute truth
>is not the same as personal belief or knowledge.  If I read you
>correctly, you agree with me here.  You argue that absolute truth is
>not useful for behaviour, but there is no inconsistency.  Purple
>cows are not useful, either; but the concept of purple cow is
>different from that of milk cow.
>
>We choose actions, of course, according to what we believe.  But we
>don't follow our beliefs *because* they are true.  Rather, we must do
>something, and our beliefs are the best guide we have.  I may decide to
>see a movie if I believe I will like it, but I am rarely in the
>situation of knowing I will like the movie.
>
>In fact, we may act (following our beliefs) even if our beliefs are
>logically inconsistent.  I hold opinions on the proper way to run
>society, because I believe that the methods are best.  But I do not
>believe that *all* of my opinions are correct.  Because the issues are
>quite complex, I find it more likely that I have a mistaken belief
>somewhere.  If I believed that a *particular* belief was incorrect, of
>course I would change it.  But which belief should I change, if I
>can't decide which is incorrect?  I decide not to change any, and thus
>operate with a logically inconsistent set of beliefs.  So I have a
>false belief.  And I know I have a false belief.
>
>In everyday use, knowing and believing are distinguished by the
>certainty of the knowledge or belief.  Actual -- completely tested and
>verified -- knowledge is rare (and may not exist).  But most people
>have no compunction about referring to a firm belief as knowledge.
>
>Western philosophies hold that knowledge is obtained by logical
>reasoning and observation.  Eastern philosophies often admit knowledge
>obtained by mystical insight.  But both admit the possibility of
>actions based on false beliefs.
>
My comments follow:

     The issue of knowing I have a false belief I have dealt with in an earlier
posting, so I won't get into that here.

     I agree that the concept of absolute truth is not the same as personal
knowledge, although a person may be under what I would conceive to be the
misapprehension that he has absolute knowledge.  As I see it, an absolute truth
would be either:

     1.  A state of affairs that exists absolutely, from the viewpoint of an
         "omniscient observer", or independent of any observation, or an
         idea that corresponds to such a state of affairs.

or

     2.  An idea that is assented to with absolute certainty by a person and
         which he is unwilling or unable ever to doubt or reconsider.

I'm not sure I can make sense of the first definition, because the concept of
something that exists absolutely, not as seen from a given point of view, is
difficult to wrap one's wits around.  One could not assign such an object any
particular position, momentum, dimensions, or duration, because all these are
related to the frame of reference of the observer.  A person moving in a
certain direction with respect to the object would see it (per relativity
theory) as shorter, by length contraction, in that dimension than in
the others.  Depending on the viewpoint of the observer, it would exist before
another event, and from the viewpoint of another observer it could be after
the same event.  One would be unable, in other words, to give a precise
specification of the object -- its location in space/time, shape, and momentum --
without specifying which viewpoint one was looking at it from.  The hope of
establishing precise objective truth about the universe is thus long gone.  As
far as *general* truths or laws about the universe are concerned, Bohm and
other physicists have shown (to my satisfaction, at least) that any rules or
"truths" so discovered are highly context-related, not absolute for all
contexts.  So in neither specific detailed description of the universe nor in
general delineation of natural law can one arrive at an absolute truth.  So
I'm not entirely sure that the concept of absolute truth is meaningful, i.e.,
I'm not sure it's a well-formed concept at all.

But even if it is meaningful, there is no way of determining whether one has
arrived at an absolute truth, so it seems a fairly valueless concept, as you
also mentioned.

My inclination is to hang onto the person-centered viewpoint and give a
subjective definition of absolute truth, by which we could say that absolute
truth is possible to a person.  This is definition number 2.  If (as you
suggest -- and English usage would tend to back you up) "knowledge" can be used
to mean a very strong belief or conviction, then absolute knowledge would
simply be *absolute conviction*, which is what I, for practical purposes, think
it is.

If we define absolute knowledge as definition #2, a further question is
relevant: is absolute knowledge a good thing?  The answer to this question is
by no means obvious to me.  In one way, it is desirable, because it would give
stability and predictability to a person's world not to have to keep
questioning and wondering.  Absolute knowledge means never having to change
your mind (might be a good new slogan ;-) ).

In another sense, absolute knowledge in sense #2 is surely the root of much of
the evil that exists in the world.  It is also called bigotry.  A person who
has knowledge in sense #2 generally believes he has knowledge in sense #1.  He
thus feels justified in trying to force his view of the world on others,
because, after all, it is "absolutely true".  He doesn't necessarily feel a
need to *demonstrate* the truth to others.  If he can coerce them into
acknowledging it as truth, he will -- especially if he has previously tried,
unsuccessfully, to demonstrate it to them.  I think most interpersonal
conflicts, as well as all religious or ideological wars (i.e. most wars), are
based on people with knowledge #2 claiming knowledge #1 and forcing it on
others.  Since one can never tell whether or not one has knowledge #1, any
claim to have it has this fault.

Therefore, I think we are better off deleting or ignoring concept #1
altogether and not adopting concept #2 either.

In other words, we'll be much better off if we deep-six the whole concept of
absolute truth or absolute knowledge and speak, rather, of interpersonal
commonality or concurrence of belief.  In reading Eric Raymond's recent
thoughtful posting, I think one could describe this as a "verificationist"
criterion.
-- 
"From his own viewpoint, no one ever has false beliefs; he only *had* false
beliefs."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

laura@hoptoad.uucp (Laura Creighton) (07/13/87)

In article <4865@milano.UUCP> wex@milano.UUCP writes:
>Perhaps this is alright for small cases, but in the real world, people
>knowingly hold inconsistent beliefs.  My favorite example is the one
>of the proofreader.  He has just finished proofreading a 350-page book
>and seen all the typos corrected.  If we ask him "Do you believe there
>is a typo on page <n> of this book?" for all 350 possible values of
><n>, he will say "no" each time.
>However, if we ask "Do you believe there is a typo somewhere in the
>350 pages of this book?" he will answer "yes."  Inconsistent?  Yes.
>So why does he hold this set of beliefs?
>
>The best answer I could give him was that his beliefs were not a
>matter of simple truth/falsity, but were a matter of degree.  Thus,
>the correct questions should have been "Do you believe that there is a
>one-in-three-hundred-fifty chance that there is a typo on page <n> of
>this book?"  To this, I claimed, he would have answered "yes."  This
>makes consistent his reply of "yes" to the final question.
>
>That is, given that he understands probability, and that there is a
>1/n chance of a typo per page in an n-page book, it is reasonable to
>say that there is a typo in the book.
>
>[Side note: he was not satisfied with this answer.  He remarked that
>he did not actively consider such probabilities in his answers and, in
>fact, he really had no grasp of what a one-in-three-hundred-fifty
>chance meant for proofreading.  His counter-claim was that my answer
>was not an explanation, simply a way to rationalize a set of beliefs
>that he, the belief-holder, considered inconsistent.]

[Speaking as a proofreader] -- I have no faith in the accuracy of the
book.  I do have faith in my ability as a proofreader.  Therefore,
because I have great faith in my ability, I assume that for any
given page it is more likely that there will be no error than
there is one.  However, knowing my ability as a proofreader, I admit
that I tend to make one mistake every 350 pages.
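
Laura's self-model can be checked with a short Python sketch (the code is
illustrative, not from her post, and assumes independent per-page errors at
her stated rate of one mistake per 350 pages):

```python
# Laura's proofreader reasoning, sketched numerically.
# Assumption: independent errors, p = 1/350 per page.
p_page = 1 / 350                      # chance of a missed typo on any one page
pages = 350

expected_errors = pages * p_page      # 1.0 -- "1 mistake every 350 pages"
p_page_clean = 1 - p_page             # for any single page, "no" is the better bet
p_book_clean = p_page_clean ** pages  # chance the whole book is clean
p_some_error = 1 - p_book_clean       # about 0.63 under independence
```

So for each page "no typo" is overwhelmingly likely, yet "some typo in the
book" is the better bet overall, which matches her two answers.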
-- 
(C) Copyright 1987 Laura Creighton - you may redistribute only if your 
    recipients may.

	``One must pay dearly for immortality:  one has to die several
	times while alive.'' -- Nietzsche

Laura Creighton	
ihnp4!hoptoad!laura  utzoo!hoptoad!laura  sun!hoptoad!laura

lee@mulga.oz (Lee Naish) (07/17/87)

In article <2400@hoptoad.uucp> laura@hoptoad.uucp (Laura Creighton) writes:
>In article <4865@milano.UUCP> wex@milano.UUCP writes:
>>If we ask him "Do you believe there
>>is a typo on page <n> of this book?" for all 350 possible values of
>><n>, he will say "no" each time.
>>However, if we ask "Do you believe there is a typo somewhere in the
>>350 pages of this book?" he will answer "yes."  Inconsistent?  Yes.
>>
>>The best answer I could give him was that his beliefs were not a
>>matter of simple truth/falsity, but were a matter of degree.  Thus,
>>the correct questions should have been "Do you believe that there is a
>>one-in-three-hundred-fifty chance that there is a typo on page <n> of
>>this book?"  To this, I claimed, he would have answered "yes."  This
>>makes consistent his reply of "yes" to the final question.

Suppose each page of the book was simply a list of 100 numbers
which (should) add up to 1000.  Suppose also that the book source
was on-line and with the appropriate tools all the numbers added
by the computer and the result was 349999.  The probability of there
being an error is extremely high (say 0.999).  What do you believe is
the probability of an error on any given page?  If you say 1/350 then
the probability of an error in the book should be, according to
simple probability theory, 1-(349/350)^350 = 0.63.  If you say 10/350
(or whatever is needed to get the 0.999 figure) then the expected
number of errors greatly increases (which I think is unreasonable).

How can this paradox be resolved without admitting inconsistent
beliefs?
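
Lee's figures can be verified with a small Python sketch (mine, not his;
it assumes the independent per-page model his formula presupposes):

```python
# Checking Lee's arithmetic under the independence assumption he uses.
pages = 350
p = 1 / 350
p_error_in_book = 1 - (1 - p) ** pages   # Lee's figure: about 0.63

# Per-page probability that would make P(error in book) = 0.999,
# i.e. (1 - p)^350 = 0.001:
p_needed = 1 - 0.001 ** (1 / pages)      # about 0.0195, roughly 6.8/350
expected_errors = pages * p_needed       # about 6.8 expected errors
```

As the sketch shows, forcing the 0.999 book-level figure under independence
inflates the expected error count to nearly seven, which is the
unreasonableness Lee objects to.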

	Lee Naish

	lee@mulga.oz.au
	lee@munnari.oz.au
	munnari!lee@seismo.css.gov
	{seismo,mcvax,ukc,ubc-vision}!munnari!lee

rsl@ihlpl.ATT.COM (Richard S. Latimer ) (07/18/87)

In-Reply-To: your article <1537@ubc-cs.UUCP>

> Can anyone suggest a good summary paper of logics which encompass the
> notion of belief?
If believing is an act of accepting as true without or in spite of
evidence, then it is by its nature illogical (in the sense that
belief does not reach conclusions via non-contradictory processes,
logic being the art of non-contradictory thinking).

An interesting question (to me) is why does anyone choose to
believe, since it is clearly illogical?  Any thoughts?
[Sounds a bit like Mr. Spock of Star Trek, doesn't it?  If
you are familiar with the character, let me assure you that I am an
Earthian, complete with those wonderful feelings!  :-].
-- 
Eudaemonia,  Richard S. Latimer [(312)-416-7501, ihnp4!ihlpl!rsl]

sarge@thirdi.UUCP (Sarge Gerbode) (07/21/87)

In article <2099@mulga.oz> lee@mulga.UUCP (Lee Naish) writes:
>
>Suppose each page of the book was simply a list of 100 numbers
>which (should) add up to 1000.  Suppose also that the book source
>was on-line and with the appropriate tools all the numbers added
>by the computer and the result was 349999.  The probability of there
>being an error is extremely high (say 0.999).  What do you believe is
>the probability of an error on any given page?  If you say 1/350 then
>the probability of an error in the book should be, according to
>simple probability theory, 1-(349/350)^350 = 0.63.  If you say 10/350
>(or whatever is needed to get the 0.999 figure) then the expected
>number of errors greatly increases (which I think is unreasonable).
>
>How can this paradox be resolved without admitting inconsistent
>beliefs?

When you did the computer check, you introduced additional data that changed
the probability picture.  Before the check, the probability of an error on any
given page was one in 350; after the check, the probability of the book being
error-free was only one in 1000.  The flaw in the paradox is the shifting of
the time parameter, which allows for the introduction of new knowledge.
People change their beliefs frequently.  That doesn't mean that their beliefs
are subjectively inconsistent at any given time.  The interposition of time
is, perhaps, the only thing that allows for the holding of conflicting
beliefs.
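
The belief change described here can be sketched as a Bayesian update in
Python (all the numbers below are illustrative assumptions, not taken from
the posts):

```python
# The checksum failure treated as new data updating a prior belief.
prior_error = 0.63          # belief in "some error" before the computer check
p_bad_sum_if_error = 0.999  # an error almost always throws the sum off
p_bad_sum_if_clean = 0.001  # small chance the checking tools themselves err

# Bayes' rule: P(error | bad sum) = P(bad sum | error) P(error) / P(bad sum)
evidence = (prior_error * p_bad_sum_if_error
            + (1 - prior_error) * p_bad_sum_if_clean)
posterior_error = prior_error * p_bad_sum_if_error / evidence  # near 0.999
```

Holding 0.63 at one time and 0.999 at a later time is then not inconsistency
but updating: the two beliefs are conditioned on different evidence.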
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

cliff@rlgvax.UUCP (07/21/87)

In article <64@thirdi.UUCP>, sarge@thirdi.UUCP (Sarge Gerbode) writes:
> In article <2099@mulga.oz> lee@mulga.UUCP (Lee Naish) writes:
> >Suppose each page of the book was simply a list of 100 numbers
> >which (should) add up to 1000.  Suppose also that the book source
> >was on-line and with the appropriate tools all the numbers added
> >by the computer and the result was 349999.  The probability of there
> >being an error is extremely high (say 0.999).  

I didn't see the original posting, so perhaps I'm missing something.

But if they are *supposed* to add up to 350000, then (if we trust our
automated summation) it is *certain* that if instead they add up to
349999, there is an error *somewhere*.  Therefore the probability of
there being an error *somewhere in the book* is 1, not .999. 

> >What do you believe is
> >the probability of an error on any given page?  If you say 1/350 then
> >the probability of an error in the book should be, according to
> >simple probability theory, 1-(349/350)^350 = 0.63.  If you say 10/350
> >(or whatever is needed to get the 0.999 figure) then the expected
> >number of errors greatly increases (which I think is unreasonable).

What about multiple offsetting errors on different pages (e.g.  p.  10
adds up to 1002, p.  11 to 999, all others correct)? I don't think
there's any basis for any statement other than Pr(error somewhere in
book) = 1. 
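
The offsetting-errors point can be made concrete with a tiny Python sketch
(illustrative numbers of my choosing): two different error patterns produce
the identical total, so the checksum proves "at least one error" but cannot
count them.

```python
# A global checksum cannot distinguish error patterns with equal totals.
correct = [1000] * 350        # every page should add up to 1000

single = correct.copy()
single[0] = 999               # one bad page: total is 349999

multiple = correct.copy()
multiple[9] = 1002            # p. 10 adds up to 1002 (+2)
multiple[10] = 997            # p. 11 adds up to 997 (-3): total also 349999
```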
-- 
O----------------------------------------------------------------------->
| Cliff Joslyn, Computer Consoles Inc., Reston, Virginia, but my opinions.
| UUCP: ..!seismo!rlgvax!cliff
V All the world is biscuit shaped

sarge@thirdi.UUCP (Sarge Gerbode) (07/23/87)

In article <2401@ihlpl.ATT.COM> rsl@ihlpl.ATT.COM (Richard S. Latimer) writes:
>If believing is an act of accepting as true without or in spite of
>evidence, then it is by its nature illogical....
>An interesting question (to me) is why does anyone choose to
>believe, since it is clearly illogical?  Any thought?

Believing is no more illogical than breathing or eating.  What can be illogical
are one's *reasons* for believing, but they need not be.

People choose to believe because life cannot be lived without believing.  For
instance, one cannot walk without believing that one can do so.  One cannot go
to work without believing that the workplace still exists and that one still
has a job.  Sometimes one's beliefs turn out to be wrong.  That is also part of
life, a part one tries to minimize.

One cannot stop believing; one can only improve one's *criteria* for belief.
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

hansw@cs.vu.nl (Hans Weigand) (07/23/87)


>In article <2401@ihlpl.ATT.COM> rsl@ihlpl.ATT.COM
>(Richard S. Latimer) writes:
>>If believing is an act of accepting as true without or in spite of
>>evidence, then it is by its nature illogical....
>>An interesting question (to me) is why does anyone choose to
>>believe, since it is clearly illogical?  Any thought?
>
>Believing is no more illogical than breathing or eating.  What can
>be illogical
>are one's *reasons* for believing, but they need not be.

I largely agree with the last comment. Two remarks:

(1) To believe is not always an act for which one has reasons.
When I enter a room and see a chair, I believe it is a chair
without weighing the evidence (this is the objection Thomas
Reid made long ago against John Locke). On the other hand, when I hear a
politician speaking, for example, I can choose whether to
believe him or not.

(2) With respect to the question why someone chooses to
believe: without belief there is no communication. Whether I read
a mathematical article, or a newspaper's report, or a database
record, etc., in all cases I can only accept the message when I
am prepared to believe it. The mathematical article may include a
proof that may help me in my choice. If I still do not believe it
(in fact, many "proofs" turn out to be wrong), I may have a hard time
defending this choice, but it is a possibility. Evidence certainly
contributes to belief, but it never determines the choice.
So if you want to communicate, you must also be willing to believe.
(This is one answer to the question raised above).

-

Hans Weigand, Dep. of Math and Computer Science,
Vrije Universiteit, Amsterdam

                       "credo ut intelligam"

wallace@degas.Berkeley.EDU (David E. Wallace) (07/24/87)

In article <2099@mulga.oz> lee@mulga.UUCP (Lee Naish) writes:
>Suppose each page of the book was simply a list of 100 numbers
>which (should) add up to 1000.  Suppose also that the book source
>was on-line and with the appropriate tools all the numbers added
>by the computer and the result was 349999.  The probability of there
>being an error is extremely high (say 0.999).  What do you believe is
>the probability of an error on any given page?  If you say 1/350 then
>the probability of an error in the book should be, according to
>simple probability theory, 1-(349/350)^350 = 0.63.  If you say 10/350
			    ^^^^^^^^^^^^^^^ This is the problem: see below.

>(or whatever is needed to get the 0.999 figure) then the expected
>number of errors greatly increases (which I think is unreasonable).
>
>How can this paradox be resolved without admitting inconsistent
>beliefs?

Simple: the formula you cite only applies if the probabilities are
independent.  The global knowledge you possess of the overall sum
means that the probabilities of errors on the separate pages are not
independent, so the formula doesn't apply.  To take a somewhat cleaner
example, if I have 350 identical sealed boxes on the table and tell you
that there is a red ball in one (and only one) of the boxes, the
probability that there is a red ball in any given box is clearly 1/350,
before any of the boxes have been inspected.  But the probability that
there is a red ball in *some* box (assuming you can trust the conditions
of the problem) is 1, not 0.63.  If you now open one of the boxes, the
probability that there is a red ball in the *second* box you inspect
will either rise to 1/349 (if you find the first box empty) or drop
to zero (if you find the ball in the first box), because the probabilities
are not independent.  For them to be independent, the probability of finding
a ball in the second box would have to remain the same regardless of what
you found in the first one.
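
Dave's sealed-boxes example translates directly into a short Python sketch
(using exact fractions; the code is mine, added for illustration):

```python
from fractions import Fraction

# Exactly one of 350 boxes holds the red ball, so the per-box
# probabilities are dependent and 1 - (349/350)**350 does not apply.
n = 350
p_any_box = Fraction(1, n)                 # before opening anything: 1/350
p_some_box = Fraction(1)                   # guaranteed by the setup, not 0.63

# Open box 1; the probability for box 2 moves, which proves dependence:
p_box2_if_box1_empty = Fraction(1, n - 1)  # rises to 1/349
p_box2_if_ball_found = Fraction(0)         # drops to zero
```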


Dave Wallace	UUCP: ...!ucbvax!wallace  ARPA: wallace@degas.Berkeley.EDU

sarge@thirdi.UUCP (Sarge Gerbode) (07/26/87)

In article <1537@botter.cs.vu.nl> hansw@cs.vu.nl (Hans Weigand) writes:

>Evidence certainly
>contributes to belief, but it never determines the choice.
>So if you want to communicate, you must also be willing to believe.

I'm not sure that belief is really always a matter of choice.  Certainly, one
can inculcate belief in oneself by various means, such as autosuggestion, or
Pascal's famous method of acting as if one believed, until one does believe.
Christians have, of course, been highly motivated to believe in the divinity
of Christ, as Pascal was, since believing is viewed as the route to personal
salvation.  And they have therefore sometimes been led to extraordinary
measures to create belief in themselves (and in others).  I'm sure that we all
do this to some degree.

However, apart from such deliberate and manipulative measures, I don't think
belief is generally consciously chosen.  Rather, it seems to be compelled (yet
in a somehow non-forceful manner) as the result of a combination of evidence
(perceptions) + certain underlying rules common to all, such as the rules of
logic and certain empirical assumptions, + other forms of thought and rules of
evidence that depend on education, culture, and habit.

The notion of what truly constitutes compulsion in the area of thought is
somewhat unclear (to me).  Is logic a form of application of force?  Does
demonstration enforce agreement?  Or should force be regarded as the
application of pain and duress (or a threat of some kind)?  My inclination is
to apply the term to the latter.  If anyone has views on the topic of what
constitutes the use of force or coercion, I'd be interested to hear them.
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

hansw@cs.vu.nl (Hans Weigand) (07/27/87)


In article <68@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>In article <1537@botter.cs.vu.nl> hansw@cs.vu.nl (Hans Weigand) writes:
>
>>Evidence certainly
>>contributes to belief, but it never determines the choice.
>>So if you want to communicate, you must also be willing to believe.
>
>I'm not sure that belief is really always a matter of choice. (..)

As I wrote in the first part of my article, I agree with you
that belief is not always a matter of choice. However, it often is in
everyday life. Witness the fact that the verb "believe" has
the feature +Control. Consider the sentences:
    Please believe me! (Don't believe it!)
    *Please know it! (*Don't know it!)
The ungrammaticality of the latter stems from the lack of control
of "knowing". Evidently, this does not apply to the former.

>The notion of what truly constitutes compulsion in the area of thought is
>somewhat unclear (to me).  Is logic a form of application of force?  Does
>demonstration enforce agreement?  Or should force be regarded as the
>application of pain and duress (or a threat of some kind)?  My inclination is
>to apply the term to the latter.  If anyone has views on the topic of what
>constitutes the use of force or coercion, I'd be interested to hear them.

According to Heidegger, logic has its roots in the original Logos
(cf. Heraclitus). This Logos is defined in a rather violent way
("The Logos holds men together not without violence"). Compare
also the view of violence as "ultimate reason". On the
other hand, philosophy (science) has attempted to escape
from violence since the days of Socrates (see in particular the
work of the Frankfurter Schule).

I think that anybody will agree there is an important practical
difference between the force of brute violence and the force
of logic, or (in general) language. But the relationship
between violence and truth has always been a fundamental and
unresolved problem of philosophy, and it is not likely to be
resolved in this discussion.

As to the question of what constitutes the force of logic, I
would like to repeat my original statement. Without belief there is no
communication. But humans are by nature social beings who can
not do without communication. Refusing to accept a logical
argument (not believing it) obviously endangers the communication
possibilities. So refusing to believe anything is just like
refusing to eat anything. It is possible, but how long will you
hold out?

--
 Hans Weigand, Dep. of Computer Science,
 Vrije Universiteit, Amsterdam
-

planting@colby.WISC.EDU ( W. Harry Plantinga) (07/28/87)

In article <68@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:

>I'm not sure that belief is really always a matter of choice.  Certainly, one
>can inculcate belief in oneself by various means . . .
>Christians have, of course, been highly motivated to believe in the divinity
>of Christ, as Pascal was, since believing is viewed as the route to personal
>salvation.  And they have therefore sometimes been led to extraordinary
>measures to create belief in themselves (and in others).  

Even in this case I don't think one is generally able to "decide" to
believe something.  I believe in God, and if I am honest with myself,
I have no choice in the matter--I *know* that he exists and to believe 
otherwise would be like trying to believe that my parents don't exist.

Now, one can be more or less open to a certain belief, and one can
alter one's disposition toward certain beliefs.  For example, I can
decide (after considering the evidence) that Santa Claus doesn't
exist.  Then upon seeing someone who looks exactly like Santa Claus
(which would normally be enough to convince me that he does exist) I
figure he must be an impostor.  Also, because I am a Democrat (or
Republican) and I want to be like other Democrats (Republicans) I
might be predisposed to believe that a defense buildup is a bad thing
(good thing) and refuse to consider the evidence (refuse to consider
the evidence) for fear that my beliefs might be changed . . .

But one can't say "I want to believe in Santa Claus" and start
believing.

---------------
Harry Plantinga
planting@cs.wisc.edu
{seismo,allegra,ihnp4,heurikon}!speedy!planting
(608) 233-1386

sarge@thirdi.UUCP (Sarge Gerbode) (07/30/87)

In article <3991@spool.WISC.EDU> planting@colby.WISC.EDU ( W. Harry Plantinga) writes:
>Even in this case I don't think one is generally able to "decide" to
>believe something.

I think you can decide to believe something.  Whether you succeed or not
depends on the methods you use to instill belief in yourself.  Theoretically,
you could hire a good hypnotist and get the belief implanted as a post-hypnotic
suggestion (a common trick).  This might not work for all beliefs, but it
surely works for *some*.

>But one can't say "I want to believe in santa claus" and start
>believing.

Probably not -- you'd have to *do* something about it.  But then nothing
worthwhile is accomplished without some effort :-)  .
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

alan@pdn.UUCP (Alan Lovejoy) (08/27/87)

In article <1541@botter.cs.vu.nl> hansw@cs.vu.nl (Hans Weigand) writes:
>In article <68@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>As to the question of what constitutes the force of logic, I
>would like to repeat my original statement. Without belief there is no
>communication. But humans are by nature social beings who can

One of the more interesting results of semiotics is that in order for
something to serve as a "sign", it must be possible for the "sign" to
be in error.  In other words, there can be no symbolic communication
without the possibility of lying (or at least being mistaken).  If
messages can inherently be false, then communication inherently
requires belief in the veracity of the message.

It is precisely the ability to be false that gives messages their power;
it would otherwise be impossible to discuss the hypothetical cases, the "might
be's", "might have been's", "could be's" and "should have been's".
Abstractions require the ability to signify what is not.

--Alan "true, false, both true and false, neither true nor false"
  Lovejoy

franka@mmintl.UUCP (Frank Adams) (09/08/87)

In article <127@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
|In article <1161@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:
||One of the more interesting results of semiotics is that in order for
||something to serve as a "sign", it must be possible for the "sign" to
||be in error.
|
|Some messages, such as recitations of poetry, paintings, and the like, as well
|as various aspects of body language, songs, and the like can't really be
|mistaken or true.  Such messages are meant to convey a certain experience
|(i.e. a mental picture or sensation or sense of experiencing something) to the
|receiver (sometimes the exact experience is not specified by the originator).

But suppose the sender is *not* having the sensation suggested by the
message?  I think even in the case of a mental picture it is possible to
"fake it", to deliberately convey a picture one does not have; although the
greater the art, the less believable this is.
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

sarge@thirdi.UUCP (Sarge Gerbode) (09/10/87)

In article <2353@mmintl.UUCP> franka@mmintl.UUCP (Frank Adams) writes:
>In article <127@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>|Some messages, such as recitations of poetry, paintings, and the like, as well
>|as various aspects of body language, songs, and the like can't really be
>|mistaken or true.  Such messages are meant to convey a certain experience
>|(i.e. a mental picture or sensation or sense of experiencing something) to the
>|receiver (sometimes the exact experience is not specified by the originator).
>
>But suppose the sender is *not* having the sensation suggested by the
>message?  I think even in the case of a mental picture it is possible to
>"fake it", to deliberately convey a picture one does not have; although the
>greater the art, the less believable this is.

I think some forms of non-verbal expression *can* be intended as assertions.
A portrait, for instance, is (or used to be) intended as an assertion about how
someone looks, and (if badly done) can be "false".  "Social action" art also
makes assertions (as does art in commercials).  These forms of art or
non-verbal expressions can be mendacious.

However, there are other forms of messages -- what you might be referring to as
"great art", or, one might say, "Fine Arts" (as opposed to commercial art),
that do not make assertions.  These works of art, it seems to me, are attempts
to evoke a feeling or experience in the audience.  They could be regarded as
"successful" or "unsuccessful", in this respect, but not as "true" or "false".

Some kinds of art (not necessarily the best kind) seem to provide a sort of
"ink blot" against which the audience can project their own impressions.  This
kind of art is not really even communication, I think, but a form of
"stimulation".  Some modern art and music falls into that category.
Personally, I'm too lazy to try to create my own impressions.  I'd rather the
artist did it for me.

So, there are the following categories of messages:

1.  Those that make statements.
2.  Those that try to evoke specific experiences.
3.  Those that stimulate and act as "inkblots" for the imagination of the
    audience.

Of these three forms of messages, only the first has truth value or (therefore)
mendaciousness.
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge

alan@pdn.UUCP (09/12/87)

In article <164@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
/I think some forms of non-verbal expression *can* be intended as assertions.
/A portrait, for instance, is (or used to be) intended as an assertion about how
/someone looks, and (if badly done) can be "false".  "Social action" art also
/makes assertions (as does art in commercials).  These forms of art or
/non-verbal expressions can be mendacious.

/However, there are other forms of messages -- what you might be referring to as
/"great art", or, one might say, "Fine Arts" (as opposed to commercial art),
/that do not make assertions.  These works of art, it seems to me, are attempts
/to evoke a feeling or experience in the audience.  They could be regarded as
/"successful" or "unsuccessful", in this respect, but not as "true" or "false".

/So, there are the following categories of messages:

/1.  Those that make statements.
/2.  Those that try to evoke specific experiences.
/3.  Those that stimulate and act as "inkblots" for the imagination of the
/    audience.
/
/Of these three forms of messages, only the first has truth value or (therefore)
/mendaciousness.

A painting of some actual event or object "represents" something because
of its resemblance to something else.  This resemblance is totally in
the eye of the beholder, however.  Even among human beings, there are
those who would not perceive a drawing as anything but lines and/or
colors on paper.  What an animal or an alien from Arcturus would
perceive when viewing the Mona Lisa is questionable.   

The point is this: just because I paint a picture of Gary Hart caught in the
act of adultery, this does not mean that he ever was (or will be) unfaithful
to his wife.  The picture I paint has NO causal relationship with the
objects/events I or anyone else may see in it.  Did the lady in
da Vinci's famous picture ever actually exist?  Even a photograph or a
video tape can show that which does not exist or did not happen.

Information can be stored and/or transmitted in a way that does not use
"signs" and which cannot be mistaken.  Photons that have bounced off
or come through matter carry guaranteed true facts about that matter.
Such information carriers are called "tokens", to distinguish them from
"signs", which can be mendacious. If you send me a message using tokens,
I can believe it.  If you use signs, I'll have to take you on faith.
Of course, it's very hard to communicate usefully without using signs.

--alan@pdn

tjhorton@utai.UUCP (09/15/87)

>alan@pdn writes:
>Information can be stored and/or transmitted in [a] way that does not use
>"signs" and which can not be mistaken.  Photons that have bounced off
>or come through matter carry guaranteed true facts about that matter.
>Such information carriers are called "tokens", to distinguish them  from
>"signs", which can be mendacious

"Photons that have bounced off or come through matter carry guaranteed true
facts about the matter?"  Like the proverbial husband coming home late at
night, "Only if you know exactly where they've been."  Generally, when those
truthful little photons get to us mortals, all this is not metaphysically
available to us.
-- 
Timothy J Horton (416) 979-3109   tjhorton@ai.toronto.edu (CSnet,UUCP,Bitnet)
Dept of Computer Science          tjhorton@ai.toronto.cdn (EAN X.400)
University of Toronto,            {seismo,watmath}!ai.toronto.edu!tjhorton
Toronto, Canada M5S 1A4

sarge@thirdi.UUCP (09/17/87)

In article <1312@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:
>In article <164@thirdi.UUCP> sarge@thirdi.UUCP (Sarge Gerbode) writes:
>/So, there are the following categories of messages:
>
>/1.  Those that make statements.
>/2.  Those that try to evoke specific experiences.
>/3.  Those that stimulate and act as "inkblots" for the imagination of the
>/    audience.
>/
>/Of these three forms of messages, only the first has truth value or
>/(therefore) mendaciousness.
>
>A painting of some actual event or object "represents" something because
>of its resemblance to something else.  This resemblance is totally in
>the eye of the beholder, however.  Even among human beings, there are
>those who would not perceive a drawing as anything but lines and/or
>colors on paper.  What an animal or an alien from Arcturus would
>perceive when viewing the Mona Lisa is questionable.   

Sorry.  I think I understand what you are saying, but I don't see how it
relates to what I wrote.  I think the point I was trying to make was that
certain communications could not be mendacious.  That's different from saying
that they couldn't be interpreted in different ways or misinterpreted.
Perhaps by "mendacious" you meant, not "lying", but "capable of being
misinterpreted".  In this case, though, surely all sorts of physical phenomena
are capable of being misinterpreted and will continue to be so indefinitely,
unless science reaches a final culmination that I think it is not going to
reach.

Also, lying is an *intentional* act, as is communicating, and a message, to be a
message (as I use the word) must be intended as a communication.  Otherwise, it
is just an index of something (a "token", to use your phraseology), not a
message.

>Information can be stored and/or transmitted in a way that does not use
>"signs" and which can not be mistaken.  Photons that have bounced off
>or come through matter carry guaranteed true facts about that matter.

I think this assertion is incorrect.  The *photons* do not carry any facts at
all.  Facts arise as a result of the *interpretation* of the nature or pattern
of the photons.  Photons need to be interpreted just as much as messages do.

>Of course, it's very hard to communicate usefully without using signs.

Indeed!  In fact, it's impossible to communicate at all without using signs, if
one defines communication as an intentional act.

The difference between signs and tokens is not the reliability of the
interpretation.  It is solely in the intention to communicate contained in the
former.  A great actress, for instance, uses what you would call "tokens"
(flushing, crying, otherwise physically evincing emotion) very convincingly
and deceptively.  The physiology may be identical.  The difference is solely
in the intention.
-- 
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP:  pyramid!thirdi!sarge