[net.philosophy] Machines

tmoody@sjuvax.UUCP (T. Moody) (10/28/85)

If we are going to spend any time discussing minds and machines, we
ought to take the time to reach a consensus on the meaning of
"machine".  This has already become problematic in the exchange
between Michael Ellis and Rich Rosen.

Rosen concedes, for "machine", what he has refused to concede for
"free will": that the term is inherently vague.  Astonishing as it is
that he should deny the vagueness of "free will" -- given that *its*
meaning has been the subject of dispute for centuries -- he is
certainly correct about "machine".  Philosophical interest in the
definition of "machine" is quite young, though.

The trouble is, I think, that the word "machine" is used both
literally and metaphorically in philosophical discussions (esp. about
the mind).  Literally, we all understand that machines are man-made
artifacts, devices, tools, and systems.  In this literal sense, no
living thing is a machine.

But this doesn't really help, because what we want to know is whether
any living things, or parts of living things are sufficiently *like*
machines, in essential respects, to justify the application of the
term to them.  Are cells, for example, "biological machines"?

In order to answer this question, we need to *abstract* from the
literal meaning of "machine" -- which clearly does *not* include cells
-- a more comprehensive meaning that captures what we think is
essential to machinehood.  For clarity (well, maybe), I shall refer to
machines in the latter sense as "Machines" (upper case "M").  It ought
to turn out that while all machines are Machines, not all Machines are
machines.

Proposed definition 1: A Machine is any deterministic system.  That
is, its current states are exhaustively determined by its prior
states.

Proposed definition 2: A Machine is any system whose behavior is
Turing-computable.  That is, its behavior can be completely specified
by some finite Turing Machine algorithm.

Now, I *think* these two definitions are coextensive...but I'd like to
know what you folks think.  Here is my reasoning: If a system is
deterministic, its behavior is completely described by laws
expressible as mathematical functions.  Granting the Church/Turing
thesis (anything computable is Turing-computable), these are
Turing-computable.
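
To make this concrete, here is a toy sketch (in Python; the names are
mine and purely illustrative).  Note the assumption doing all the work:
that the deterministic law is itself a computable function.  Grant
that, and iterating the law just *is* running a program, which is the
Church/Turing point.

    # Definition 1 as a program.  The names are mine; the key assumption
    # is that the transition law `step` is a computable function.
    def run(step, initial_state, n):
        # Each state is fixed entirely by the prior state via the
        # transition law; no randomness enters anywhere.
        state = initial_state
        history = [state]
        for _ in range(n):
            state = step(state)
            history.append(state)
        return history

    # Example law: a counter mod 10.  Any system whose law can be
    # written down as `step` is, by this very program, Turing-computable.
    print(run(lambda s: (s + 1) % 10, 0, 12))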

Okay.  Since the advent of quantum mechanics, it is no longer plausible to
characterize the universe itself as a Machine.  In fact, since quantum
mechanical systems are not Machines, we are forced to concede that
Machines are statistically emergent entities.  A digital computer, for
example, is a Machine (also a machine, of course), even though the
electronic events that animate it are subject to quantum mechanical
indeterminacies.  The computer is configured in such a way that these
indeterminacies are largely cancelled out, so that at a molar level of
description a digital computer is approximately deterministic.
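
A toy sketch of what "cancelled out" means here (Python again; the
numbers are invented, and this is a caricature of a memory cell, not
device physics):

    import random

    # A "bit" is stored across many noisy carriers and read back by
    # thresholding their average.  Each carrier is indeterministic;
    # the molar read-out is deterministic for all practical purposes.
    def read_bit(stored, carriers=1000, noise=0.4):
        level = sum(stored + random.gauss(0, noise) for _ in range(carriers))
        return 1 if level / carriers > 0.5 else 0

    # A thousand reads of a stored 1 essentially never flip to 0.
    print(all(read_bit(1) == 1 for _ in range(1000)))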

Is the human brain like a digital computer in *these* respects?  Are
*its* indeterminacies approximately cancelled out, making it a
statistically emergent Machine?  This is, after all, an empirical
question.  Based on what I have read, the answer is no, but I think it
is fair to say that the case is not closed.  Note that not many
neuroscientists state any position on this question, because they are
not primarily interested in these metaphysical problems.  Sir John
Eccles (who is, interestingly, a strict dualist) argues that the
brain amplifies quantum indeterminacies, if anything.
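
As for what "amplifies" could mean, a toy sketch (a textbook chaotic
map; emphatically an illustration of amplification, not a model of the
brain):

    # In a dynamically sensitive system, a perturbation of "quantum"
    # size grows exponentially until it dominates the molar behavior.
    def trajectory(x, n=60, r=4.0):
        out = []
        for _ in range(n):
            x = r * x * (1 - x)   # the logistic map
            out.append(x)
        return out

    a = trajectory(0.3)
    b = trajectory(0.3 + 1e-15)  # a vanishingly small initial difference
    print(a[-1], b[-1])          # after 60 steps, no resemblance at all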

Also note that this has nothing to do with the Searle argument.
Searle's point is that even if one grants the Machinehood of the
brain, that is not sufficient to establish its intentionality.


Todd Moody                 |  {allegra|astrovax|bpa|burdvax}!sjuvax!tmoody
Philosophy Department      |
St. Joseph's U.            |         "I couldn't fail to
Philadelphia, PA   19131   |          disagree with you less."

rlr@pyuxd.UUCP (Rich Rosen) (10/31/85)

> If we are going to spend any time discussing minds and machines, we
> ought to take the time to reach a consensus on the meaning of
> "machine".  This has already become problematic in the exchange
> between Michael Ellis and Rich Rosen.
> Rosen concedes, for "machine", what he has refused to concede for
> "free will": that the term is inherently vague.  Astonishing as it is
> that he should deny the vagueness of "free will" -- given that *its*
> meaning has been the subject of dispute for centuries -- he is
> certainly correct about "machine".  Philosophical interest in the
> definition of "machine" is quite young, though.  [MOODY]

Perhaps I "concede" this because it happens to be true in one case but not
in the other.  Every definition of free will in the Dictionary of
Philosophy points to the very notions I have described.  Tell me something,
Todd.  How can you have ANY honest debate/dispute for centuries when
people don't use the same definition for what they are debating about?
(Perhaps that is precisely how philosophers make a living.  :-)  These
include "the *feeling* of making uncaused/uncompelled choices", "the feeling
that given the same circumstances I could have done otherwise than that which
I did in fact do", "the feeling that I can will something, can exert
energy in some desired direction", "acts of free will are caused by willing
of an agent (!!!!!) but NOT BY MATERIAL CHANGES IN THE BRAIN AND NOT BY
EXTERNAL STIMULI" (emphasis, of course, mine), "free in the sense of not
being caused or determined by anything else, independent of ANTECEDENT
physiological, neurological, psychological, or environmental conditions" (!!!),
all this pointing to an agent of "first cause" (because IT can will and
cause other things to happen, deliberately, by conscious choice, while
not itself being caused by anything (at least anything material, whatever
that means)).  Now, let's look at what is called the problem of free will
(because it is so very relevant to other discussions here).  "If all human
actions are caused, then how can concepts found in our everyday experience
such as blame, responsibility, [ETC.] be made meaningful?".  "IF EVERY
HUMAN ACT IS CAUSED, THEN HOW CAN THIS BE MADE COMPATIBLE WITH A HUMAN'S
*SENSE* OF FREE WILL"?  Not to mention the theological problems.  (OK,
I won't mention them. :-)  But let's note the emphasis of direction here.
Rather than acknowledge the causal implications of biochemistry, the direction
is to figure out how we can somehow keep that sense/notion of ours "true"
at whatever cost (e.g., asserting things about acausality and quantum
phenomena to "get" things to be "true", mocking coldly and harshly anyone
who proposes things that contradict the backwards wishful thinking [e.g.,
Skinner]).  (But, anyway, back to the topic of machines...)

> The trouble is, I think, that the word "machine" is used both
> literally and metaphorically in philosophical discussions (esp. about
> the mind).  Literally, we all understand that machines are man-made
> artifacts, devices, tools, and systems.  In this literal sense, no
> living thing is a machine.
> But this doesn't really help, because what we want to know is whether
> any living things, or parts of living things are sufficiently *like*
> machines, in essential respects, to justify the application of the
> term to them.  Are cells, for example, "biological machines"?

I.e., non-man-made machines.  Indeed, THAT is the pertinent question.
Are they ALIKE enough in characteristics (mechanism) such that we could
build a "like" machine?

> In order to answer this question, we need to *abstract* from the
> literal meaning of "machine" -- which clearly does *not* include cells
> -- a more comprehensive meaning that captures what we think is
> essential to machinehood.  For clarity (well, maybe), I shall refer to
> machines in the latter sense as "Machines" (upper case "M").  It ought
> to turn out that while all machines are Machines, not all Machines are
> machines.
> Proposed definition 1: A Machine is any deterministic system.  That
> is, its current states are exhaustively determined by its prior
> states.
> Proposed definition 2: A Machine is any system whose behavior is
> Turing-computable.  That is, its behavior can be completely specified
> by some finite Turing Machine algorithm.
> Now, I *think* these two definitions are coextensive...but I'd like to
> know what you folks think.  Here is my reasoning: If a system is
> deterministic, its behavior is completely described by laws
> expressible as mathematical functions.  Granting the Church/Turing
> thesis (anything computable is Turing-computable), these are
> Turing-computable.
> Okay.  Since the advent of quantum mechanics, it is no longer plausible to
> characterize the universe itself as a Machine.  In fact, since quantum
> mechanical systems are not Machines, we are forced to concede that
> Machines are statistically emergent entities.  A digital computer, for
> example, is a Machine (also a machine, of course), even though the
> electronic events that animate it are subject to quantum mechanical
> indeterminacies.

But that's clearly not true.  If you boldly state that "the universe is
not a Machine", how could anything within it be considered a Machine?

> The computer is configured in such a way that these
> indeterminacies are largely cancelled out, so that at a molar level of
> description a digital computer is approximately deteministic.

I think you're waffling.  "Approximately deterministic" sounds an awful lot
like "partially pregnant".  If it functions in a deterministic manner, then
your theories have just had holes shot through them.

> Is the human brain like a digital computer in *these* respects?  Are
> *its* indeterminacies approximately cancelled out, making it a
> statistically emergent Machine?  This is, after all, an empirical
> question.  Based on what I have read, the answer is no, but I think it
> is fair to say that the case is not closed.  Note that not many
> neuroscientists state any position on this question, because they are
> not primarily interested in these metaphysical problems.  Sir John
> Eccles (who is, interestingly, a strict dualist) argues that the
> brain amplifies quantum indeterminacies, if anything.

Of course he does:  he has a specific conclusion about the brain in mind,
thus he asserts that this "must" be so.

> Also note that this has nothing to do with the Searle argument.
> Searle's point is that even if one grants the Machinehood of the
> brain, that is not sufficient to establish its intentionality.

So who is to say intentionality cannot be built into a machine?  You?  Searle?
-- 
"iY AHORA, INFORMACION INTERESANTE ACERCA DE... LA LLAMA!"
	Rich Rosen    ihnp4!pyuxd!rlr

franka@mmintl.UUCP (Frank Adams) (11/04/85)

In article <2464@sjuvax.UUCP> tmoody@sjuvax.UUCP (T. Moody) writes:
>Proposed definition 1: A Machine is any deterministic system.  That
>is, its current states are exhaustively determined by its prior
>states.

(I will assume you want to say "by its prior states and its inputs";
otherwise this is nonsense.)
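
To spell the emendation out, a toy sketch (Python; the names are mine):

    # Emended Definition 1: the current state is fixed by the prior
    # state *and* the current input -- a deterministic transducer.
    def evolve(transition, state, inputs):
        # Replay the same inputs from the same state and you always
        # get the same trajectory; that is the determinism defined.
        history = [state]
        for x in inputs:
            state = transition(state, x)
            history.append(state)
        return history

    # Example: a toggle that flips on input 1 and holds on input 0.
    print(evolve(lambda s, x: s ^ x, 0, [1, 0, 1, 1]))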

Even with this emendation, I don't think this definition of Machine
encompasses the standard meaning of machine.  If I build a device which
includes a geiger counter, and performs differently depending on when
that geiger counter detects a particle, this is not a deterministic system.
(Some of the particles picked up by the counter will have been emitted
from the materials the device is made out of, so cannot be counted as
inputs.)  Most people would have no hesitation about calling this a
machine.
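
A sketch of such a device (Python; the decay statistics are idealized
as exponential waiting times, and the two "actions" are placeholders):

    import random

    # Deterministic control logic around an indeterministic source:
    # what the device does depends on *when* the counter clicks, and
    # some clicks come from its own materials, so they are not inputs.
    def geiger_machine(duration, rate=2.0):
        t, clicks = 0.0, 0
        while True:
            t += random.expovariate(rate)  # waiting time to next decay
            if t >= duration:
                break
            clicks += 1
        return "action A" if clicks % 2 == 0 else "action B"

    print(geiger_machine(5.0))  # different runs can behave differently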

Frank Adams                           ihpn4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108