[sci.philosophy.tech] Chess, Reductionism, Probabilistic Determinism.

kp@uts.amdahl.com (Ken Presting) (03/17/90)

In article <49331604.1285f@maize.engin.umich.edu> zarnuk@caen.UUCP (Paul Steven Mccarthy) writes:
>
>>>>(Chris Malcolm) asks: [...How is {some property of chess} the 
>>>> consequence of the laws of physics?...]
>
>>>(Paul Steven Mccarthy) answered: [...reductionism...]
>>>     The given property is a consequence of the rules of the game.
>>>     The rules of the game are the consequence of human perceptions
>>>         of pleasure. [pleasure <-- neuro-chemistry <-- chemistry
>>>                       <-- physics]
>
>>(Ken Presting) objects: [...]
>>It may be easy to write off chess as defined only for human pleasure,
>>but such a move is not so easy for the rules of arithmetic, the
>>rules of logic, or the rules of a Turing machine.  [...]
>
>It occurred to me afterwards that the real crux of my beliefs stems
>from probabilistic determinism.  You set up the dominoes 'just right'
>at the beginning, introduce a convenient cosmic big bang, wait a
>few eons, ... and voila!  Here we are, with this particular world,
>and all of its interesting properties . . .
>It is a belief in "cause and effect", where the underlying "causal force"
>is the laws of physics -- just "the way the universe works".

What you have here is not so much an argument for reductionism, but
rather an argument against *emergence*.  By assuming that the starting
state has a certain potential for making certain things happen later on,
you are saying that the ultimate effects were there, in some sense,
right from the start.  By bringing in the probabilistic type of
causality, you can say that chess was still a consequence of the initial
state, even though it was not certain to appear.  But this is not a
reduction of the rules of chess to the laws of physics.

I just happen to have another concept in my little catalog, which does
a much better job than "reductionism".  I call it "implementationism".
To give an example, let's switch from chess to algorithms:

When a computer calculates a function, its operation is entirely
controlled by the laws of physics.  But an algorithm cannot be reduced
to the operations of any one computer, or any one kind of computer, or
to computers at all.  Any physical operations which have the "right"
structure could be an instance of computing a function by the algorithm.
The "right" structure is not definable in terms of the laws of physics,
but *is* definable in terms of the logical structure of the algorithm.
There is a large class of physical objects which *could* be used as
automatic computers, and they all have such physical properties as allow
a homomorphism from the logical description of the algorithm's operation
to the causal description of the machine's operation.  (This is a case
of the "homomorphism of logical structure" I mentioned earlier this week).

This idea of implementing an abstraction in a physical device with a
"matching" logical structure is easily extended to sciences.  Chemistry
includes a very complex bunch of abstractions and general laws, which
most people think of as reducible to physics.  But - and this is big -
if the physicists decide that, say, Rutherford atoms are out and Bohr
atoms are in, the chemists do NOT have to re-write the Periodic Table.
They
may perhaps change some of their plans and expectations for new
research.  The concepts of chemistry do not disappear into physics, and
I would even say that the concepts of chemistry do not change *meaning*
when physics changes.  Chemists' beliefs about the *reference* of their
terms change, but that's about it.  That's because the relation between
chemical and physical theory is like an implementation of a program.
If the implementation changes, most of the "high-level" functions are
unaffected.
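The software analogy can be drawn out directly.  A hedged sketch (again
Python; the class names and the "grouping" rule are my own crude toys,
not real chemistry): high-level "chemical" code written against an
abstract notion of an atom is untouched when the underlying "physical"
model is swapped out.

  # Toy illustration (mine): swapping the "physics" under a fixed
  # "chemistry-level" interface leaves the high-level code unchanged.

  class RutherfordAtom:
      def __init__(self, atomic_number):
          self.atomic_number = atomic_number
      # ... Rutherford-style internal details would go here ...

  class BohrAtom:
      def __init__(self, atomic_number):
          self.atomic_number = atomic_number
      # ... Bohr-style internal details would go here ...

  def periodic_group(atom):
      """'Chemistry-level' code: depends only on the atomic number,
      not on which model of the atom supplies it."""
      return atom.atomic_number % 8 or 8   # crude toy grouping rule

  # Changing the implementation changes nothing at this level:
  print(periodic_group(RutherfordAtom(11)))  # 3
  print(periodic_group(BohrAtom(11)))        # 3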

Chess, mathematics, and logic now fit into the scheme with a lot less
mangling.  We know how to implement machines that can behave according
to any rules anyone would care to state.  And we ourselves know how to
follow rules.  What we don't know is how to implement a machine that
can learn all the rules human beings can learn, or even how much of
human behavior is based on rules and how much is based on causes.

But the general thesis of implementationism is: Everything is implemented
in Physics.  If implementationism is true, then we have a shot at
re-implementing intelligence in silicon.  We don't need Reductionism.

(Jerry Fodor has a great discussion of reduction in the first chapter of
 _The_Language_of_Thought_.  He makes a very good case against reducing
 psychology to physics, but I think Implementationism is much more elegant
 than his "Token Physicalism".  I could use a better name, though. :-)

> . . .   I _believe_ that the body of reasoning
>tools developed by humans is valid for describing the properties
>of the universe.  I also believe that this belief is a consequence of
>"the way that the universe works" (nice and recursive, isn't it?! :-).  

Statements such as this bring up an important point.  One part of being
an intelligent person is to recognize that one's beliefs come about in
a variety of ways.  Sometimes our beliefs seem to be "built in", and
sometimes our beliefs are deliberately adopted.  It is certainly foolish
to insist that all beliefs must have a justification, but on the other
hand, it would be disingenuous to hold any particular belief exempt
from all challenges.  Even the laws of arithmetic and logic can be
held up for scrutiny.  The intuitionists and constructivists may well
be wrong about doubting the law of excluded middle, but they are not
stupid or foolish to do so.

I would say that an AI which could not participate in a discussion of
the foundations of its beliefs would be lacking in an important area of
human behavior.  I might go so far as to say that if a machine did
not make little jokes when it revealed the circularity of its reasoning,
it would lack another important human trait! (:-)

(Circularity in the foundations of reasoning is *very* difficult to
 avoid.  Even Kant could not avoid it.  He did not make many jokes,
 however...)

>Now, Ken, don't you agree that this kind of discussion really belongs
>in "philosophy somewhere..."?

I don't know.  I guess that "net protocol" requires extended discussions
to be conducted in talk.* groups.  I figure that as long as my articles
receive thoughtful replies here, I might as well continue to post here.
The value I derive from the ideas of the rest of the group is tremendous.

My impression is that the success of AI will probably entail the answers
to a bunch of philosophical questions.  I *love* answers, so I'm very
interested in the success of AI.

I have cross-posted and suggested followups to sci.philosophy.tech.

>Your awe-struck, but uncomprehending fan,
>---Paul...

Oops.  I'll try to be more comprehensible and less awful.  (:-)

Ken Presting

kp@uts.amdahl.com (Ken Presting) (03/22/90)

In article <351@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>> Ken Presting writes:
>> What we don't know is how to implement a machine that
>> can learn all the rules human beings can learn, or even how much of
>> human behavior is based on rules and how much is based on causes.
> 
>Rules vs. Causes??  I'm curious; please elaborate.

When you play a game of chess, if someone asks you why you move the knight
in that funny way, or why the pawn doesn't capture the piece ahead of it,
the rational explanation is based on the rules of the game.  No such
explanation would be available if someone asked you why you jerk your
foot when the tendon of your kneecap is tapped (even if you know a whole
lot more about neurophysiology than I do, you couldn't give an account
of why jerking your foot was *rational*).
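
To put the contrast in concrete (if slightly silly) terms, here is a toy
sketch in Python, entirely my own example: the knight's "funny" move is
explained by exhibiting the rule it satisfies, not by citing the physics
of the hand that moved the piece.  No analogous explanation exists for
the reflex.

  # Toy example (mine): a rule-based explanation of a chess move.

  KNIGHT_RULE = ("a knight moves two squares in one direction "
                 "and one square in the other")

  def is_legal_knight_move(src, dst):
      """Check a move, given as (file, rank) pairs, against the rule."""
      dx, dy = abs(src[0] - dst[0]), abs(src[1] - dst[1])
      return {dx, dy} == {1, 2}

  def explain(src, dst):
      if is_legal_knight_move(src, dst):
          return "Legal, because " + KNIGHT_RULE + "."
      return "No rational explanation available -- the rule is violated."

  print(explain((2, 1), (3, 3)))   # a knight hop: legal
  print(explain((2, 1), (2, 4)))   # a straight slide: not a knight move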

One thing that makes rules interesting is that they seem to be closely
related to conscious thought, abstract thinking and learning, and overt
behavior, all at the same time.  A *very* controversial issue revolves
around the rules of grammar.  A large number of bright people are
prepared to argue to the death that knowledge of grammar is knowledge
of certain rules, and that this knowledge is innate in every infant.
A similar number of equally bright people think this position is absurd.

Confusion over this and related issues (such as explicit representation
of symbolic data vs. "implementation" of symbolic process in hardware)
seems to me to infect much of the debate over the foundations of AI.
Implementationism and normative properties are my attempt to sort things
out.  Whether a certain process (natural or artificial) follows a certain
rule is a normative question - the rule states a "norm", and some
interpretation of our observations of the process may be required before
we can understand the relation between the process and the rule.

I know this is rather vague, but it may be enough for a start.

Ken Presting

kp@uts.amdahl.com (Ken Presting) (04/10/90)

In article <365@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>(Ken Presting) writes:
>> It is very important to pick out the "right" frame of reference, ...
>> Every  person establishes a frame of reference;
>                            ^^^^^^^
>Be careful of this singular terminology here; we don't want to "entitize"
>that which is fluid, multi-dimensional, evolutionary and "self-contradictory"
>(how do you contradict a "self" which is constantly in flux?).

Very good point, but the flux in any personality is limited, at least
in some ways.  I would agree that parts of a personality can change
practically without limit, especially in the case of beliefs and
intellectual behavior (ie making assertions and arguments).

I think there is a useful comparison between the Society of Mind and the
Scientific Community.  At most times in the history of a science (Kuhn's
"normal science") there is one dominant abstraction which organizes the
interpretation of data, the design and evaluation of research programs,
and the results of (most) previous experiments.  Paradigm shifts don't
seem to be the result of any single experiment, but rather are a response
to unchecked growth in the complexity of data interpretation (eg,
epicycles).  As experimental results accumulate which strain the
interpretive framework provided by theory, it is increasingly likely that
an alternative theory will have a chance of showing itself to have a
real advantage.

Data interpretation, and its counterpart, the analysis of natural systems,
are more like *skills* than *algorithms*.  Physics students learn a lot
of specific cases of natural phenomena, and the mathematical techniques
that have been applied to those systems.  No attempt is made to prove
to anybody that the math is justified a priori - you try it, and if it
works, go for it!  If it fits into some grand formalism, that's great,
but as Dan Mocsny observed, most numbers come from a table in a
handbook, not from theoretical derivations.

I think the mind is very similar.  Skills are prior to abstractions, and
logic is applied to rationalize ("explain" is the polite term) the
effectiveness of the skills.  The entity is the collection of skills or
techniques or conditioned responses; the frame of reference (a
descriptive abstraction) is a fantasy.

>
>> the problem for Cog. Sci. is to understand that self-constructed frame.
>                                                  ^^^^^^^^^^^^^^^^
>One of my professors convinced me how impoverished a representation we have
>for this process.  Most of our building materials are pre-fab, by our
>families, our culture, etc., . . .

Mathematics, and abstraction in general, is the solution to this problem.
To the extent that we commit ourselves to scientific method, we are
committed to abstraction in solutions whenever it is feasible (at least).
Strong AI is doubly committed to abstraction - as a method, and as a
model.

The literate/illiterate distinction might help to understand the
situation, although I am not prepared to give a clear account of the
psychology of literacy (print vs. speech is too simplistic).  An
illiterate has only the culturally developed tools, but with symbols you
can build any logical structure at all.

>> >         In the case of the knee-jerk reflex, the premise ...
>> 
>> I would analyze the situation just a little differently,
>> ... If the leverage system (and all the other physiology) is called a
>> "context", I would take the tap of the hammer to be a "premise".
>
> . . . your identification of the hammer as premise shifts
>the focus of the system -- we do this all the time, and necessarily --
>but I must note that such a re-punctuation will necessitate a different
>interpretation of the conclusion than the one I intended.  Neither of us
>is wrong, but we've lost the isomorphism of our metaphors....

This shows the interplay between analysis, interpretation, and
representation (eg expression in words).  To understand the relation
between perception and cognition, we have to understand all three
processes.  Sensation by itself does not require so much effort, I would
say.  Any attempt to build too much logical structure into the concept
of sensation can only be confusing.  I offer this remark mostly as a
suggestion for terminology - we need *some* level of description at which
there is nothing more than superposition of squirms, and we might as well
call that sensation.

Sensation seems to me to be precisely analogous to analysis, which I would
define as "redescription in a selected vocabulary".  I've begun working
out a homomorphism of logical structure between analysis and sensory
processes.  The basic idea is to compare transduction between phase
spaces (ie physical parameters) to syntactical transformations.


Ken Presting   ("The system will be ready next Sunday" - SK)