[comp.ai] Chess, Reductionism, Probabilistic Determinism.

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) (03/14/90)

>>>>(Paul Steven Mccarthy) writes: [Everything is a consequence
>>>> of the laws of physics...]

>>>(Chris Malcolm) asks: [...How is {some property of chess} the 
>>> consequence of the laws of physics?...]

>>(Paul Steven Mccarthy) answered: [...reductionism...]
>>     The given property is a consequence of the rules of the game.
>>     The rules of the game are the consequence of human perceptions
>>         of pleasure. [pleasure <-- neuro-chemistry <-- chemistry
>>                       <-- physics]
>> . . .
>> [reference to role of history... basic reductionism...
>>  probabilistic determinism...]

>(Ken Presting) objects: [...]
>It may be easy to write off chess as defined only for human pleasure,
>but such a move is not so easy for the rules of arithmetic, the
>rules of logic, or the rules of a Turing machine.  [...]

>The question of handling abstract rules in a physical system is very
>important for AI.  [...] 
>What is the relationship between processes, perceptions, and rules?

I make no claims to be a philosopher.  I will try to lay out the logic
behind my beliefs, but I am sure you could find more persuasive
arguments in "philosophy somewhere...".  Anyway, here goes:

At the end of my article I made quick reference to the historical 
aspects of the game, and an appeal to "probabilistic determinism".
It occurred to me afterwards that the real crux of my beliefs stems
from probabilistic determinism.  You set up the dominoes 'just right'
at the beginning, introduce a convenient cosmic big bang, wait a
few eons, ... and voila!  Here we are, with this particular world,
and all of its interesting properties.

As for the development of "logic", "arithmetic" and "Turing machines",
there was a non-zero probability at the beginning that the human
organism would evolve, that this organism would be curious about its
environment, that it would develop reasoning tools to help it 
contemplate that environment and ultimately that those reasoning
tools would manifest themselves in the form that they have.

It is basically a long chain of (yes :-) logic where I have tried
to look at the "big picture" and not been particularly concerned
with the details of intervening steps.  It seems to me that 
reductionism naturally leads to probabilistic determinism.  It is
a belief in "cause and effect", where the underlying "causal force"
is the laws of physics -- just "the way the universe works".

I _believe_ that I am correct.  I _believe_ that the body of reasoning
tools developed by humans are valid for describing the properties
of the universe.  I also believe that this belief is a consequence of
"the way that the universe works" (nice and recursive, isn't it?! :-).  
I may be completely wrong, but I don't let myself worry about that too 
much.  I am a computer scientist, not a philosopher.

Now, Ken, don't you agree that this kind of discussion really belongs
in "philosophy somewhere..."?  I appreciate your opinions.  I have 
devotedly read the articles that you have posted to this newsgroup,
but I will not pretend that I have understood even 10% of their content.
There is certainly value in these kinds of discussions, but I think the
value is misplaced here.  I must say that I even _enjoy_ these
philosophical digressions;  I just think they are more "philosophy"
than they are "artificial intelligence".  

Your awe-struck, but uncomprehending fan,
---Paul...

aarons@syma.sussex.ac.uk (Aaron Sloman) (03/16/90)

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:

> >>>>(Paul Steven Mccarthy) writes: [Everything is a consequence
> >>>> of the laws of physics...]
> .....
> >>>(Chris Malcolm) asks: [...How is {some property of chess} the
> >>> consequence of the laws of physics?...]
> ......
> >>(Paul Steven Mccarthy) answered: [...reductionism...]
> >>     The given property is a consequence of the rules of the game.
> .....
> >(Ken Presting) objects: [...]
 .....

Just to stir things up a little:

Chess would have existed even if the whole physical universe hadn't,
just like all those infinitely many other games that nobody ever has
or ever will invent, and just like all those infinitely many
languages that nobody ever has learnt or ever will learn, and all
those infinitely many valid proofs in axiom systems that nobody will
ever formulate, and all those infinitely many thoughts that nobody
will ever think....

Aaron

kp@uts.amdahl.com (Ken Presting) (03/17/90)

In article <49331604.1285f@maize.engin.umich.edu> zarnuk@caen.UUCP (Paul Steven Mccarthy) writes:
>
>>>>(Chris Malcolm) asks: [...How is {some property of chess} the 
>>>> consequence of the laws of physics?...]
>
>>>(Paul Steven Mccarthy) answered: [...reductionism...]
>>>     The given property is a consequence of the rules of the game.
>>>     The rules of the game are the consequence of human perceptions
>>>         of pleasure. [pleasure <-- neuro-chemistry <-- chemistry
>>>                       <-- physics]
>
>>(Ken Presting) objects: [...]
>>It may be easy to write off chess as defined only for human pleasure,
>>but such a move is not so easy for the rules of arithmetic, the
>>rules of logic, or the rules of a Turing machine.  [...]
>
>It occurred to me afterwards that the real crux of my beliefs stems
>from probabilistic determinism.  You set up the dominoes 'just right'
>at the beginning, introduce a convenient cosmic big bang, wait a
>few eons, ... and voila!  Here we are, with this particular world,
>and all of its interesting properties. . . .
>It is a belief in "cause and effect", where the underlying "causal force"
>is the laws of physics -- just "the way the universe works".

What you have here is not so much an argument for reductionism, but
rather an argument against *emergence*.  By assuming that the starting
state has certain potential for making certain things happen later on,
you are saying that the ultimate effects were there, in some sense,
right from the start.  By bringing in the probabilistic type of
causality, you can say that chess was still a consequence of the initial
state, even though it was not certain to appear.  But this is not a
reduction of the rules of chess to the laws of physics.

I just happen to have another concept in my little catalog, which does
a much better job than "reductionism".  I call it "implementationism".
To give an example, let's switch from chess to algorithms:

When a computer calculates a function, its operation is entirely
controlled by the laws of physics.  But an algorithm cannot be reduced
to the operations of any one computer, or any one kind of computer, or
to computers at all.  Any physical operations which have the "right"
structure could be an instance of computing a function by the algorithm.
The "right" structure is not definable in terms of the laws of physics,
but *is* definable in terms of the logical structure of the algorithm.
There is a large class of physical objects which *could* be used as
automatic computers, and they all have such physical properties as allow
a homomorphism from the logical description of the algorithm's operation
to the causal description of the machine's operation.  (This is a case
of the "homomorphism of logical structure" I mentioned earlier this week).
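
This "homomorphism of logical structure" can be sketched in a few lines of
Python (a minimal illustration; the algorithm, encodings, and names are my
own, chosen only for the example): one abstract algorithm, two unrelated
"machines" implementing it, and a mapping from machine states to abstract
states that commutes with each step.

```python
# One abstract algorithm (Euclid's GCD), two different "machines" that
# implement it, and a homomorphism h from machine states to abstract
# states such that h(machine_step(s)) == abstract_step(h(s)).

def abstract_step(state):
    """One step of Euclid's algorithm on abstract states (a, b)."""
    a, b = state
    return (b, a % b) if b else (a, b)

# "Machine" 1: encodes the pair as a dict (one possible physical encoding).
def machine1_step(s):
    a, b = s["x"], s["y"]
    return {"x": b, "y": a % b} if b else dict(s)

def h1(s):  # homomorphism: machine-1 state -> abstract state
    return (s["x"], s["y"])

# "Machine" 2: encodes the pair as a reversed list (another encoding).
def machine2_step(s):
    b, a = s
    return [a % b, b] if b else list(s)

def h2(s):  # homomorphism: machine-2 state -> abstract state
    return (s[1], s[0])

# Both machines "compute GCD by Euclid's algorithm" precisely because
# the mapping commutes with each step of the causal process.
s1, s2 = {"x": 48, "y": 18}, [18, 48]
for _ in range(5):
    assert h1(machine1_step(s1)) == abstract_step(h1(s1))
    assert h2(machine2_step(s2)) == abstract_step(h2(s2))
    s1, s2 = machine1_step(s1), machine2_step(s2)

print(h1(s1))  # both machines reach the fixed point (6, 0): gcd = 6
```

Neither dicts nor lists appear in the abstract algorithm; what makes each
machine an implementation is only that the mapping commutes.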

This idea of implementing an abstraction in a physical device with a
"matching" logical structure is easily extended to sciences.  Chemistry
includes a very complex bunch of abstractions and general laws, which
most people think of as reducible to physics.  But - this is big - if
the physicists decide that say, Rutherford atoms are out, and Bohr atoms
are in, the chemists do NOT have to re-write the Periodic Table.  They
may perhaps change some of their plans and expectations for new
research.  The concepts of chemistry do not disappear into physics, and
I would even say that the concepts of chemistry do not change *meaning*
when physics changes.  Chemists' beliefs about the *reference* of their
terms changes, but that's about it.  That's because the relation between
chemical and physical theory is like an implementation of a program.
If the implementation changes, most of the "high-level" functions are
unaffected.
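
The same point can be put in programming terms directly (a toy sketch, with
invented class names): "high-level" code written against an abstraction
survives a change of implementation, just as the Periodic Table survives a
change of atomic model.

```python
# Two "implementations" of the abstraction 'atom'.  The high-level
# function below depends only on the abstract interface, so swapping
# the physical model underneath does not change it.

class RutherfordAtom:
    def __init__(self, protons):
        self.protons = protons
    def atomic_number(self):
        return self.protons

class BohrAtom:
    def __init__(self, protons):
        self._z = protons          # different internals, same interface
    def atomic_number(self):
        return self._z

def periodic_table_position(atom):
    # "Chemistry": written only against the abstraction, so it is
    # untouched when the implementation changes.
    return atom.atomic_number()

assert periodic_table_position(RutherfordAtom(6)) == 6   # carbon
assert periodic_table_position(BohrAtom(6)) == 6         # still carbon
```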

Chess, mathematics, and logic now fit into the scheme with a lot less
mangling.  We know how to implement machines that can behave according
to any rules anyone would care to state.  And we ourselves know how to
follow rules.  What we don't know is how to implement a machine that
can learn all the rules human beings can learn, or even how much of
human behavior is based on rules and how much is based on causes.

But the general thesis of implementationism is: Everything is implemented
in Physics.  If implementationism is true, then we have a shot at re-
implementing intelligence in silicon.  We don't need Reductionism.

(Jerry Fodor has a great discussion of reduction in the first chapter of
 _The_Language_of_Thought_.  He makes a very good case against reducing
 psychology to physics, but I think Implementationism is much more elegant
 than his "Token Physicalism".  I could use a better name, though. :-)

> . . .   I _believe_ that the body of reasoning
>tools developed by humans are valid for describing the properties
>of the universe.  I also believe that this belief is a consequence of
>"the way that the universe works" (nice and recursive, isn't it?! :-).  

Statements such as this bring up an important point.  One part of being
an intelligent person is to recognize that one's beliefs come about in
a variety of ways.  Sometimes our beliefs seem to be "built in", and
sometimes our beliefs are deliberately adopted.  It is certainly foolish
to insist that all beliefs must have a justification, but on the other
hand, it would be disingenuous to hold any particular belief exempt
from all challenges.  Even the laws of arithmetic and logic can be
held up for scrutiny.  The intuitionists and constructivists may well
be wrong to doubt the law of the excluded middle, but they are not
stupid or foolish to do so.

I would say that an AI which could not participate in a discussion of
the foundation of its beliefs was lacking in an important area of
human behavior.  I might go so far as to say that if a machine did
not make little jokes when it reveals the circularity of its reasoning,
it would lack another important human trait! (:-)

(Circularity in the foundations of reasoning is *very* difficult to
 avoid.  Even Kant could not avoid it.  He did not make many jokes,
 however...)

>Now, Ken, don't you agree that this kind of discussion really belongs
>in "philosophy somewhere..."?

I don't know.  I guess that "net protocol" requires extended discussions
to be conducted in talk.* groups.  I figure that as long as my articles
receive thoughtful replies here, I might as well continue to post here.
The value I derive from the ideas of the rest of the group is tremendous.

My impression is that the success of AI will probably entail the answers
to a bunch of philosophical questions.  I *love* answers, so I'm very
interested in the success of AI.

I have cross-posted and suggested followups to sci.philosophy.tech.

>Your awe-struck, but uncomprehending fan,
>---Paul...

Oops.  I'll try to be more comprehensible and less awful.  (:-)

Ken Presting

kp@uts.amdahl.com (Ken Presting) (03/22/90)

In article <351@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>> Ken Presting writes:
>> What we don't know is how to implement a machine that
>> can learn all the rules human beings can learn, or even how much of
>> human behavior is based on rules and how much is based on causes.
> 
>Rules vs. Causes??  I'm curious; please elaborate.

When you play a game of chess, if someone asks you why you move the knight
in that funny way, or why the pawn doesn't capture the piece ahead of it,
the rational explanation is based on the rules of the game.  No such
explanation would be available if someone asked you why you jerk your
foot when the tendon of your kneecap is tapped (even if you know a whole
lot more about neurophysiology than I do, you couldn't give an account
of why jerking your foot was *rational*).
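
What a "rule-based" explanation looks like can be made concrete in a few
lines (a minimal sketch; the coordinate convention is my own): the legality
of a knight move is answered by consulting a stated rule, not by tracing
physical causes.

```python
# The rule: a knight moves two squares along one axis and one along the
# other.  Squares are hypothetical 0-indexed (file, rank) pairs.

def knight_move_legal(src, dst):
    dx, dy = abs(src[0] - dst[0]), abs(src[1] - dst[1])
    return sorted((dx, dy)) == [1, 2]

# "Why did the knight move in that funny way?" -- because the rule says so:
assert knight_move_legal((1, 0), (2, 2))       # b1 -> c3 is legal
assert not knight_move_legal((1, 0), (1, 3))   # a straight slide is not
```

No comparable function exists for the knee-jerk; there is no rule for the
reflex to consult, only causes.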

One thing that makes rules interesting is that they seem to be closely
related to conscious thought, abstract thinking and learning, and overt
behavior, all at the same time.  A *very* controversial issue revolves
around the rules of grammar.  A large number of bright people are
prepared to argue to the death that knowledge of grammar is knowledge
of certain rules, and that this knowledge is innate in every infant.
A similar number of equally bright people think this position is absurd.

Confusion over this and related issues (such as explicit representation
of symbolic data vs. "implementation" of symbolic process in hardware)
seems to me to infect much of the debate over the foundations of AI.
Implementationism and normative properties are my attempt to sort things
out.  Whether a certain process (natural or artificial) follows a certain
rule is a normative question - the rule states a "norm", and some
interpretation of our observations of the process may be required before
we can understand the relation between the process and the rule.

I know this is rather vague, but it may be enough for a start.

Ken Presting

smoliar@vaxa.isi.edu (Stephen Smoliar) (03/23/90)

In article <8eQP02EX94Fn01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com (Ken
Presting) writes:
>
>When you play a game of chess, if someone asks you why you move the knight
>in that funny way, or why the pawn doesn't capture the piece ahead of it,
>the rational explanation is based on the rules of the game.  No such
>explanation would be available if someone asked you why you jerk your
>foot when the tendon of your kneecap is tapped (even if you know a whole
>lot more about neurophysiology than I do, you couldn't give an account
>of why jerking your foot was *rational*).
>
We have discussed the problems involved in playing fast and loose with words
like "understand" and "intelligent."  Now I find myself ready to approach
"rational" with a similar degree of caution.  Unless I'm mistaken, however,
this is a situation in which epistemology has also tried to take the bull by
the horns.  What I have read certainly indicates that a lot of thought has gone
into the issue, but I'm not convinced that it has been resolved satisfactorily.
Let me try to pick on both of Ken's premises for the usual sake of argument.

Let's take the patella reflex first.  Why can't I give a RATIONAL account?  At
one level, I can sketch out a trace of activations of nerve and muscle cells;
and I have every reason to believe that a neurophysiologist could essentially
do the same much more thoroughly.  On the other hand, I can provide ethological
evidence as to why selection has favored phenotypes which have this reflex
(having to do with the way it breaks one's fall).  Thus, I can account for
it at both the level of the behavior of the organism in the world and at the
level of the internal functions of that organism.  Is the argument that the
reflex is not RATIONAL a consequence of the fact that it is a REFLEX, rather
than a conscious act?  If so, do we really want such a close coupling between
rationality and consciousness?

Now let's go back to chess.  Here, I admit, I may be a bit more OUTRE;  so let
me attribute my reaction to reading Gabriel Garcia Marquez.  To refresh the
memories of our readers, there is a scene in ONE HUNDRED YEARS OF SOLITUDE
in which a new priest tries to teach chess to Jose Arcadio Buendia (who is
tied to a tree in the public square).  Buendia responds that he cannot see
the point of playing a game in which both sides have agreed to the rules in
advance.  Why should we assume either that it is rational for a game to have
rules or for the players to follow them?  When all is said and done, I see more
rationality in the patella reflex, since I can observe that individuals who
have it survive better than those who don't, whereas I cannot make any similar
statements about chess.

=========================================================================

USPS:	Stephen Smoliar
	USC Information Sciences Institute
	4676 Admiralty Way  Suite 1001
	Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have written
such a line."--Gore Vidal

sandyz@ntpdvp1.UUCP (Sandy Zinn) (04/06/90)

> Let me emphasize again that my view is opposed to the computationalist.
> Logic and computation, for me, are *methods*, not *models*.  (I think
> I have demonstrated that logic is not the only method (:-) but logic does
> have the advantage of sharpening objections).  Fantasy and ritual are
> the appropriate models for thought - that is my view.  With a position
> like this, I hope it is obvious why I concentrate on formal methods!
> 
> Ken Presting  ("Descartes never woke up")

This is my third hit on this posting.  (besides, our news feed is down.)

Ken, this fantasy-as-model idea just keeps getting more interesting to me,
as a springboard for further exploration of the divergences & convergences
in this discussion, if nothing else.  I agree with Stephen that it's a 
method as well; to my old "identity of incomparable categories", I'd add:

>   rules = representation = processes = perception = methods = models

The Dotland Identity. [ which has something to say but leaves a lot unsaid ]

Instead of fantasy-as-model, let me leap from dream-as-model.  I'm going to
use Bateson's discussion of the Freudian notion of primary process.  Dreams,
in the classical Freudian sense, translated material into metaphors to avoid
the Superego watchdog.  Bateson argues, and I agree with him, that dream
metaphors are not *result* but *source*.  Insofar as Mind is representa-
tion, it is a metaphor for whatever is being represented.  The Primary
Process is metaphorical, and involves the representation of *relationships*.
Bateson says:

  ...the subject matter of primary-process discourse is different from
  the subject matter of language and consciousness.  Consciousness talks
  about things or persons, and attaches predicates to the specific things
  which have been mentioned.  In primary process the things or persons
  are usually not identified, and the focus of the discourse is upon the
  *relationships* which are asserted to obtain between them.

I suggest that this primary process is Dotland, is Implementationism.

  A metaphor retains unchanged the relationship which it "illustrates"
  while substituting other things or persons for the relata.

Gee, this sounds like a Normative Property! (or do I seriously mistake you?)

  Primary process is characterized (e.g., by Fenichel) as lacking
  negatives, lacking tense, lacking in any identification of
  linguistic mood (i.e., no identification of indicative, sub-
  junctive, optative, etc.) and metaphoric.

Dreams.  Fantasy.  Ritual.  The relationships just ARE, period.  These
relationships are primarily iconic, or analogic:  it is a *pattern*
which is represented, a style of relationship, if you will.  The 
digitalization of information comes only at the level of language.
Logic, a set of digital relationships, is imposed on dreams.

The question becomes, can Edelman's neural-processor code for this kind
of primary process?  My guess is yes.

@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
    Sandra Zinn              |   "The squirming facts
    (yep these are my ideas  |      exceed the squamous mind"
     they only own my kybd)  |         -- Wallace Stevens

kp@uts.amdahl.com (Ken Presting) (04/10/90)

In article <365@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>(Ken Presting) writes:
>> It is very important to pick out the "right" frame of reference, ...
>> Every  person establishes a frame of reference;
>                            ^^^^^^^
>Be careful of this singular terminology here; we don't want to "entitize"
>that which is fluid, multi-dimensional, evolutionary and "self-contradictory"
>(how do you contradict a "self" which is constantly in flux?).

Very good point, but the flux in any personality is limited, at least
in some ways.  I would agree that parts of a personality can change
practically without limit, especially in the case of beliefs and
intellectual behavior (ie making assertions and arguments).

I think there is a useful comparison between the Society of Mind and the
Scientific Community.  At most times in the history of a science (Kuhn's
"normal science") there is one dominant abstraction which organizes the
interpretation of data, the design and evaluation of research programs,
and the results of (most) previous experiments.  Paradigm shifts don't
seem to be the result of any single experiment, but rather are a response
to unchecked growth in the complexity of data interpretation (eg,
epicycles).  As experimental results accumulate which strain the
interpretive framework provided by theory, it is increasingly likely that
an alternative theory will have a chance of showing itself to have a
real advantage.

Data interpretation, and its counterpart, the analysis of natural systems,
are more like *skills* than *algorithms*.  Physics students learn a lot
of specific cases of natural phenomena, and the mathematical techniques
that have been applied to those systems.  No attempt is made to prove
to anybody that the math is justified a priori - you try it, and if it
works, go for it!  If it fits into some grand formalism, that's great,
but as Dan Mocsny observed, a table in a handbook is where most numbers
come from, not from theoretical derivations.

I think the mind is very similar.  Skills are prior to abstractions, and
logic is applied to rationalize ("explain" is the polite term) the
effectiveness of the skills.  The entity is the collection of skills or
techniques or conditioned responses, the frame of reference (descriptive
abstraction) is a fantasy.

>
>> the problem for Cog. Sci. is to understand that self-constructed frame.
>                                                  ^^^^^^^^^^^^^^^^
>One of my professors convinced me how impoverished a representation we have
>for this process.  Most of our building materials are pre-fab, by our
>families, our culture, etc., . . .

Mathematics, and abstraction in general, is the solution to this problem.
To the extent that we commit ourselves to scientific method, we are
committed to abstraction in solutions whenever it is feasible (at least).
Strong AI is doubly committed to abstraction - as a method, and as a
model.

The literate/illiterate distinction might help to understand the
situation, although I am not prepared to give a clear account of the
psychology of literacy (print vs. speech is too simplistic).  An
illiterate has only the culturally developed tools, but with symbols you
can build any logical structure at all.

>> >         In the case of the knee-jerk reflex, the premise ...
>> 
>> I would analyze the situation just a little differently,
>> ... If the leverage system (and all the other physiology) is called a
>> "context", I would take the tap of the hammer to be a "premise".
>
> . . . your identification of the hammer as premise shifts
>the focus of the system -- we do this all the time, and necessarily --
>but I must note that such a re-punctuation will necessitate a different
>interpretation of the conclusion than the one I intended.  Neither of us
>is wrong, but we've lost the isomorphism of our metaphors....

This shows the interplay between analysis, interpretation, and
representation (eg expression in words).  To understand the relation
between perception and cognition, we have to understand all three
processes.  Sensation by itself does not require so much effort, I would
say.  Any attempt to build too much logical structure into the concept
of sensation can only be confusing.  I offer this remark mostly as a
suggestion for terminology - we need *some* level of description at which
there is nothing more than superposition of squirms, and we might as well
call that sensation.

Sensation seems to me to be precisely analogous to analysis, which I would
define as "redescription in a selected vocabulary".  I've begun working
out a homomorphism of logical structure between analysis and sensory
processes.  The basic idea is to compare transduction between phase
spaces (ie physical parameters) to syntactical transformations.


Ken Presting   ("The system will be ready next Sunday" - SK)

kp@uts.amdahl.com (Ken Presting) (04/10/90)

In article <366@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>> (Ken Presting) responds:
>> > Fantasy and ritual are the appropriate models for thought . . .
>
>Why are rituals more appropriate?  Because they are phenomenological,
>as opposed to being logical?  This is what I think you are saying.

I got such a hard time for so long from Stephen about excessive formalism
that I decided if I'm going to get pounded for having a silly and
simplistic opinion, I might as well take a pounding for the silly opinion
that I actually do hold!  (The truth will out ...)

The attribute of rituals that gets my attention is their *concreteness*,
combined with their *futility*.  (Ahh, that explains everything! :-)

Not that all rituals are always futile - this is precisely why ritual
is fascinating.  Why would grown organisms with important things to
do like gathering nuts and berries, and dragging off the females (:-)
engage in behavior with *no discernable practical consequences*?  This
has got to be, without question, the strangest phenomenon in the animal
kingdom.  (I include posting netnews within this phenomenon - no joke)

To anticipate Sandy's later remark, I see no alternative to the
hypothesis that symbol manipulation originated historically in ritual
behavior, and that what we today describe as symbol manipulation remains
an enactment of a ritual.

I should admit that I am overstating a point for effect when I say "NO
practical consequences".  If there were no selective advantage to having
a tendency to ritual, then the tendency would probably disappear.  But
selection for a tendency does NOT entail that all consequences of having
the tendency are advantageous.  The gene which confers resistance to
malaria in West African populations also confers susceptibility to
sickle-cell disease.  It is an elementary fallacy to suppose that every
trait in a *phenotype* confers a selective advantage, and there are many
subtler fallacies lurking for selectionist explanations.

Having said this, I have finally made good on a promise I gave back
during the last big Chinese Room debate - I've finally defined "symbol
manipulation".  My definition does have the virtue of blasting Searle
out of the water, but I will understand if some readers continue to
reserve judgement!  (Posted or e-mailed objections requested, natch)


(Sandy makes her first mistake:)
> . . .  You are a superlative logician; . . .

In Sandy's defense, I should note that she must have composed this article
before I claimed that logic and calculation are also rituals.  (:-)

I did once take a course from Leon Henkin, but was utterly defeated by
ultrafilters.  (Don't ask me what they are)

>... a propositional mode, which elevates the concepts of separateness,
>levels, abstractions, distinctions of types, etc.

This brings up some interesting points.  I would say that *arguments*
*must* be conducted in terms of propositions, their relations, etc.  This
has very important implications for the symbolist vs. connectionist
issue.  I would still deny that any non-ritual thought is propositional,
that is, any thought that is not related to argumentation.  But if the
practical criterion for *evidence* of thought is to be communication,
then the capacity to apply recursive rules (which may be difficult for
NN's) is essential.
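
"Applying a recursive rule" can be illustrated in miniature (the grammar is
my example, not Ken's): recognizing the language a^n b^n requires a rule
that invokes itself, which is the sort of capacity at issue here.

```python
# S -> 'a' S 'b' | empty -- a recursive rule, applied by direct recursion.

def matches_anbn(s):
    if s == "":
        return True
    return s.startswith("a") and s.endswith("b") and matches_anbn(s[1:-1])

assert matches_anbn("aabb")
assert not matches_anbn("aab")
assert not matches_anbn("abab")
```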


With regard to levels:  In order to be able to discuss the truth of
propositions, and other semantic issues, it is indispensable to use
separate languages (ie abstractions), and let one of them refer to the
other.  If this is not done, then the discussion is conducted in a
semantically closed language, and argument is pointless.  (Other than
this object language/metalanguage distinction between levels, I would
de-emphasize any hierarchy of abstractions.)

This is not a pedantic triviality.  The Greeks had a terrible time with
arguments that depended on paradoxes to trip up opponents.  Some of these
were (more or less) well intended, as in the case of Zeno. But to conduct
an argument that reached an interesting conclusion (ie, anything other
than "change remains the same and changes" et al.) required dedicated
collaboration between disputants.  Schools of thought tended to be more
like religions than intellectual investigations.

Aristotle finally cleared up the mess with the logic of syllogisms, which
are adequate for most of geometry and some number theory, but not for
as much mathematics as we do in Fregean logics.  The point is that some
set of rules is indispensable to control communication in disagreement,
or disagreement will become inescapable.

IMO, this is the beginnings of a justification of the survival value of
the logic ritual, part of an explanation of why we continue to perform it,
part of the motivation for learning the ritual, and most importantly,
for *internalizing* it.  Thought *is posterior* to negotiation.

(Simplified diagram:)
                       X      X
             .........X......X......
               .     X      X    .
              .     X      X    .
             .     X      X    .
            ......X......X..........
                 X      X

>Those X-bars are  your darned normative properties, which I, living in
>dotland, believe are really just more dots,
>just like the dot-planes (levels of abstraction),

Each normative property is part of an abstraction all of its own.  Truth
is a good example - it is part of the whole abstraction of Logic.  A
normative property is always exemplified by the dots, but can never be
defined in terms of dots.  Again, truth is the perfect example - truth
is not definable in terms of syntactic properties (Tarski's Theorem).

>just like the dot-planes (levels of abstraction), which as you see I can
>move between, with or without your normative properties (though I will
>concede that all my bridges may well be labeled as such).

The dot-bridges correspond to *implementations*.  The relation between
normative properties and implementations is very close.  Both depend on
homomorphisms of logical structure.  But normative properties are more
"fluid" than implemented properties, because a normative property is most
important when it is *imperfectly* exemplified.  For example, people's
assertions are never the whole truth and nothing but.  Normatives
depend on *interpretations* which are always tentative.

>
>In dotland, all the patterns I see seem to pulse in this rhythm: 
>    propagation, selection     propagation, selection
>
>The resulting patterns are very different, but the essential processing 
>is the same.  Between levels, within levels, outside the levels: patterns
>begetting patterns.

Natural selection is an excellent analogy for the process of
interpretation, because the propagation is always tentative.  I think it
is undeniable that a process with similar logical structure is
responsible for scientific progress, though the degree of similarity is
open to question.

Also, I am personally convinced that some similar processes occur in the
brain during (at least) early cognitive development, as well as during
creative thought, throughout life.  I think there is a very suggestive
analogy between the operation of non-deterministic automata and, say,
the proliferation of antibodies.  How far this analogy can be pressed
is a good question.
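
The automaton side of that analogy is easy to exhibit (a small sketch; the
states and alphabet are made up): a non-deterministic automaton "propagates"
all candidate states in parallel and "selects" the survivors at each input
symbol, much as lineages proliferate and are selected.

```python
# NFA accepting strings over {a, b} that end in "ab".

DELTA = {
    ("q0", "a"): {"q0", "q1"},   # guess: maybe this 'a' starts the suffix
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},         # the guess pays off
}
START, ACCEPT = {"q0"}, {"q2"}

def accepts(word):
    states = set(START)
    for ch in word:
        # propagation: every live state spawns all its successors;
        # selection: states with no successor on `ch` die out.
        states = set().union(*(DELTA.get((q, ch), set()) for q in states))
    return bool(states & ACCEPT)

assert accepts("ab")
assert accepts("babab")
assert not accepts("aba")
```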

>                  Maybe I like the idea of emergence because a syncretic
>reality sees levels as a coalescence phenomenon, whereas a propositional
>reality has built-in levels, and thus sees "emergence" as silly.

You're confusing me with a logician again!  Bertrand Russell suggested
that reality was composed of "facts".  I think reality is composed of
us and the rest of the stuff.  Whatever it is, I don't care, as long as
there's only one kind of stuff, and we are made of it.  I call this view
"indifferent monism".

I see emergence as unnecessary.  Normative properties can be clearly
defined, argued about, calculated, and are lots of fun to be with.
Emergence is vaguely slimy, or slimily vague, and I just don't trust it.


>You wrote that the relation between description and perception is fantasy.
>Literally. -- Let me propose that dotland is fantasyland.  Homomorphs,
>isomorphs, allomorphs -- they're all here, wildly procreative but lawfully
>selective.

I'm confused about this, I guess.  Where is Reality?


Ken Presting   ("I can't get these squirms out of my mind")

edm002@muvms3.bitnet (04/10/90)

In article <16ai029B9byy01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
> In article <366@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>>> (Ken Presting) responds:
>>> > Fantasy and ritual are the appropriate models for thought . . .
>>
>>Why are rituals more appropriate?  Because they are phenomenological,
>>as opposed to being logical?  This is what I think you are saying.
> 
> The attribute of rituals that gets my attention is their *concreteness*,
> combined with their *futility*.  (Ahh, that explains everything! :-)
> 
> Not that all rituals are always futile - this is precisely why ritual
> is fascinating.  Why would grown organisms with important things to.....

	Isn't this what Skinner called "adventitious conditioning"?  Skinner's
hypothesis was that ritual behavior arose when an operant made a false
stimulus-response correlation.  We do it all the time, with behaviors like
wearing our "lucky hat" when fishing or such like.
	I've wondered if we couldn't extend the concept of adventitious
conditioning to a behavioral explanation of religion in human culture. 
Religious ritual is nothing more than a stimulus-response correlation that does
not correlate with the actual s-r sequence [cf. Monty Python's "Life of Brian"
as an example].
	Statistics is one more belief system, as the original message pointed
out.  A statistician *believes* that a result significant at p<.05 would
arise by mere chance less than 5% of the time, but the statistician cannot
*know* whether any particular result is random or not.  There are
alternative ways of knowing [or thinking we know], and statistical
inference is one such way--it is not the *only* such way.
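
The "lucky hat" and the p<.05 criterion can be put together in a toy
simulation (hypothetical numbers throughout): test enough unrelated
hats at the 5% level and some will look effective by chance alone.

```python
import random
from math import comb

def spurious_hits(n_hats=100, n_trials=20, alpha=0.05, seed=1):
    """Count 'lucky hats' that pass a crude significance test by chance.

    Each hat is worn on n_trials fishing trips; catching a fish is a
    fair coin flip either way, so any 'significant' hat is purely
    adventitious.  A hat passes if the one-sided binomial tail
    P(X >= observed catches) falls below alpha.
    """
    random.seed(seed)

    def tail(k, n):  # P(X >= k) for a fair-coin binomial
        return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

    hits = 0
    for _ in range(n_hats):
        catches = sum(random.random() < 0.5 for _ in range(n_trials))
        if tail(catches, n_trials) < alpha:
            hits += 1
    return hits
```

With these defaults a hat needs 15 or more catches in 20 trips to pass,
which happens to a totally inert hat about 2% of the time.
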
-- 
         edm002@muvms3.bitnet,Marshall University
         Fred R. Reenstjerna     | Life is like a 'B' movie.  You
         400 Hal Greer Blvd      | don't want to leave in the middle,
         Huntington, WV 25755    | but you don't want to see it again.
         (304)696 - 2905         |      ---Ted Turner, 1990

kp@uts.amdahl.com (Ken Presting) (04/12/90)

In article <15855@muvms3.bitnet> edm002@muvms3.bitnet writes:
>In article <16ai029B9byy01@amdahl.uts.amdahl.com>, kp@uts.amdahl.com (Ken Presting) writes:
>> The attribute of rituals that gets my attention is their *concreteness*,
>> combined with their *futility*.  (Ahh, that explains everything! :-)
>> 
>> Not that all rituals are always futile - this is precisely why ritual
>> is fascinating.  Why would grown organisms with important things to...
>
>   Isn't this what Skinner called "adventitious conditioning"?  Skinner's
>hypothesis was that ritual behavior arose when an operant made a false
>stimulus-response correlation. . . .

The biggest problem I have with this suggestion is the application
of "false" to "correlation".  Inverse or negative correlations I can
understand, and I can understand low correlation coefficients.  I can
even understand temporary, accidental, and non-repeatable correlations.

But to apply the term "false" at all, to any correlation, no matter how
pathological, is an example of the basic problem of behaviorism - there
are concepts that scientists apply to each other's work which are
completely useless in explanations of animal behavior.  Falsehood is one
of these concepts, and for Behaviorism to be consistent, it must eliminate
that term entirely from its vocabulary.

Now, I have no problem at all with using a system of describing animal
(including human) behavior which does not include any semantic,
intentional, or normative concepts.  On the contrary, I think that no
explanation of animal behavior can be satisfying unless all *observations*
can be accounted for in non-intentional terms.

However, it is *clearly false* that all *concepts* can be reduced to
behavioristic terms.  Truth and falsehood cannot be reduced to syntactic
terms (by Tarski's theorem), and syntax cannot be reduced to typographic
terms (think of cyphers and Pig-Latin).  But a very significant (I would
say the most interesting) part of human behavior consists of applying
semantic, intentional, and normative terms to people, places and processes.  

The conjunction of these two positions makes theorizing about behavior
very difficult.  I would say that any attempt to simplify the issue
is at best a deferral, and at worst an evasion, of real questions.  I see
no alternative to the hypothesis of unconscious (ie non-intentional)
psychological processes, and a difficult and confusing project of
explaining subjective experience in terms of them.

>	I've wondered if we couldn't extend the concept of adventitious
>conditioning to a behavioral explanation of religion in human culture. 
>Religious ritual is nothing more than a stimulus-response correlation that does
>not correlate with the actual s-r sequence 

Oh?  How would you propose to distinguish religious ritual from 
mathematical calculation?  Or any other rule-governed activity with
a problematic epistemological foundation?

I would not attempt to defend (most) religious doctrines, but any
attentive philosophy undergraduate would be prepared to debunk most
contemporary justifications of mathematical theory and practice.  The
plain fact is that very many mathematicians believe wholeheartedly in
a Platonic Heaven.  This is an example of what I am calling a *fantasy*,
and when such a belief is used to explain one's own behavior, that
behavior is what I call a *ritual*.

I don't see much hope for finding an "actual s-r sequence" that 
correlates with the method of indirect proof, or mathematical 
induction.


Fred, I apologize for the harsh tone of this reply.  I actually have
a great deal of sympathy for any method that professes to have strict
scruples, and does not shy away from applying those scruples to the
"sacred cows".  But, please, not as a blunt instrument!


Ken Presting  ("A scruple is a two-edged blunt instrument")

sandyz@ntpdvp1.UUCP (Sandy Zinn) (04/14/90)

(Ken Presting) writes:
> 
> What I had in mind [is something that] I
> (eventually) want to call "Intentional" behavior, or more precisely,
> behavior _qua_ intentional.  This distinction is based on what abstraction
> is used to describe the behavior.  Physical, chemical, biological, or
> medical abstractions are non-intentional, while descriptions which
> refer to "propositional attitudes" such as belief and desire are
> intentional.
> 
> Whereas non-intentional descriptions are justified (or criticised) by
> citing observations *within* the vocabulary of a single abstraction,
> intentional descriptions are conditional on INTERPRETATIONS of cited
> observations in the vocabulary of *another* abstraction.  So intentional
> predicates are very much like normative predicates.  The only difference
> is in connotation - normative predicates involve value judgements, while
> intentional predicates do not.
 
Wait a minute, Ken.  There are some mixed referents here:  if intentional
descrs. are dependent on Interpretations, then it's clear that they MUST
involve value judgments.  Value is precisely the interpretation of an 
observation's (a behavior's or a property's) location in a scheme which is
held to be relevant.  The choice of a relevant scheme is a value judgment,
as is the activity of locating the observation in that scheme: interpreta-
tion.  VALUE is linked in the Socratic sense to *sense*.  I would even say
that *value* is a measure of order, of information (which means that money-
grubbers, since currency currently carries so little information, are indeed
grasping at straws -- just what I'd want to have operating in MY paradigm.)

What _particular_ frame of reference (or value) are you NOT wanting to apply
to intentional predicates which you think IS applied to normative predicates,
and why do you feel it is not *sensible* to do so?

I'll guess that you are talking about a sociocultural frame, in which case
the question becomes, why isn't it sensible to relate intentionality to
these values?  If you want to suspend the individual's *beliefs and desires*
from reference within a larger social context, I'm going to jump all over you,
so you'd better have a high-information-quotient reason.

(On second thought, maybe you're going to suspend *logic*, which you might
be able to get away with.)

(Ken Presting):
> >> Even a Universal Turing machine can apply only one interpretation to
> >> the data on its tape.
> 
> (Steven Daryl McCullough) writes:
> >... Are you saying that Searle's argument *does* prove that a Turing
> >machine can't think? . . .
>
> Yes.  ...observe that a 
> compiler will take any garbage input file, try to parse it, and barf if
> it hits a syntax error.  That's what I mean by a program "applying an
> interpretation".  The compiler acts as if every file it gets its hands on
> is a source program, and if the file doesn't fit the compiler's
> "conceptual scheme"  (ie parsing algorithm) then the compiler just dies.
> An AI program needs to manipulate its own "concepts" along with its input.
 
A very clear explication, which I agree with.  This means not only recur-
sive transformation, but also *excursive* transformation -- ie, the string 
of symbols itself needs a second pass.  One of the things that bugs me
about the TM model is its reduction of interaction to a serial stream
going merrily past.  TOO SIMPLE.  And therefore tasteless. (IMO, natch.)
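
The compiler explication above can be sketched concretely (grammar and
names invented for illustration): a parser committed to a single
"conceptual scheme" treats every input as a candidate sentence of its
one language, and barfs on anything else.

```python
def parse_sums(text):
    """A one-scheme 'compiler': integers joined by '+', nothing else.

    Like the compiler described above, it applies its single
    interpretation to whatever input it is handed, and dies when
    the input doesn't fit that scheme.
    """
    total = 0
    for token in text.split("+"):
        token = token.strip()
        if not token.lstrip("-").isdigit():
            raise SyntaxError(f"cannot interpret {token!r} as a term")
        total += int(token)
    return total
```

The program has no way to treat a poem as anything but a defective sum;
manipulating its own "concepts" would mean revising the grammar itself,
which this sketch (like an ordinary compiler) cannot do.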
 
> [ about the Symbol Grounding Problem. ]
> Hilary Putnam [ with reference to problems of unique interpretation ]
> mentions on the very last page.  He thinks it entails Behaviorism.  On
> Functionalist assumptions, it may, but on Implementationist assumptions,
> this is not so.
 
I have a friend who's trying to convince me that Skinner advocated a
very different view of grounding and interpretation than is commonly
assumed; more along the lines of contextual frame orientation and
analogic reference than the usual serial S => R model.  When I have a
little better grip on it, I might bring it up here in a grounding/
interpretation context, to see what flies.  (or what flies it attracts!)

@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
    Sandra Zinn              |   "The squirming facts
    (yep these are my ideas  |      exceed the squamous mind"
     they only own my kybd)  |         -- Wallace Stevens