[comp.ai] More on Minsky on Mind

harnad@mind.UUCP (Stevan Harnad) (01/21/87)

In mod.ai MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU wrote:

>	I don't believe that the phenomenon of "first order consciousness"
>	exists, that Harnad talks about.  The part of the mind that speaks is
>	not experiencing the toothache, but is reacting to signals that were
>	sent some small time ago from other parts of the brain.

There seems to be a contradiction in the above set of statements. If
the meaning of "first order consciousness" (call it "C-1") has been
understood, then one cannot at the same time say one does not believe
C-1 exists AND that "the part of the mind that speaks is not
experiencing the toothache" -- unless of course one believes NO part
of the mind is experiencing the toothache; for whatever part of
the mind IS experiencing the toothache is the part of the mind having
C-1. If Minsky DOES mean that no part of the mind is experiencing the
toothache, then I wish to offer my own humble experience as a
counterexample: I (and therefore, a fortiori, some part of my mind)
certainly do experience toothache.

To minimize cross-talk and misunderstanding, I will explicitly
define C-1 and C-2 ("2nd order consciousness"):

	To have (or be) C-1 is to have ANY qualitative experience at all; to
	feel, see, hear. Philosophers call having C-1 "having qualia."
	A helpful portmanteau we owe to the philosopher Tom Nagel is
	that whenever one has C-1 -- i.e., whenever one experiences
	anything at all -- there is something it is "like" to have that
	experience, and we experience what that something is like directly.
	Note: Everyone who is not in the grip of some theoretical
	position knows EXACTLY what I mean by the above, and I use
	the example of having a current toothache merely as a standard
	illustration.

	To have (or be) C-2 (or C-N) is to be aware of having a
	lower-order experience, such as C-1. The distinction between
	C-1 and C-2 is often formulated as the distinction between
	"being aware of something" (say, having a toothache) and "being
	aware of being aware of something" (including, say, remembering,
	thinking about or talking about having a toothache, or about
	what it's like to have a toothache).

My critiques of the extracts from Minsky's book were based on the
following simple point: His hypotheses about the functional
substrates of consciousness are all based on analogies between things
that can go on in machines (and perhaps brains) and things that seem to
go on in C-2. But C-2 is really just a 2nd-order frill on the mind/body
problem, compared with the problem of capturing the machine/brain
substrates of C-1.  Worse than that, C-2 already presupposes C-1. You can't
have awareness-of-awareness without having awareness -- i.e., direct,
first-order experiences like toothaches -- in the first place. This
led directly to my challenge to Minsky: Why do any of the processes he
describes require C-1 (and hence any level of C) at all? Why can't all
the functions he describes be accomplished without being given the
interpretation that they are conscious -- i.e. that they are accompanied
by any experience -- at all? What is there about his scenario that could not
be accomplished COMPLETELY UNCONSCIOUSLY?

To answer the last question is finally to confront the real mind/body
problem. And if Minsky did so, he would find that the conscious
interpretation of all his machine processes is completely
supererogatory. There's no particular reason to believe that systems
with only the kinds of properties he describes would have (or be) C-1. Hence
there's no reason to be persuaded by the analogies between their inner
workings and some of our inferences and introspections about C-2 either.

To put it more concretely using Minsky's own example: There is perhaps
marginally more inclination to believe that systems with the inner workings
he describes [objectively, of course, minus the conscious interpretation
with which they are decorated] are more likely to be conscious
than a stone, but even this marginal additional credibility derives only
from the fact that such systems can (again, objectively) DO more than
a stone, rather than from the C-2 interpretations and analogies. [And
it is of course this performance criterion alone -- what I've called
elsewhere the Total Turing Test -- that I have argued is the ONLY
defensible criterion for inferring consciousness in any device other than
oneself.]


>	I think Harnad's phenomenology is too simple-minded to take seriously.
>	If he has ever had a toothache, he will remember that one is not
>	conscious of it all the time, even if it is very painful; one becomes
>	aware of it in episodes of various lengths. I suppose he'll argue that
>	he remains unconsciously conscious of it. I...ask him to review his
>	insistence that ANYTHING can happen instantaneously - no matter how
>	convincing the illusion is...

I hope no one will ever catch me suggesting that we can be "unconsciously
conscious" of anything, since I regard that as an unmitigated contradiction
in terms (and probably a particularly unhelpful Nachlass from Freud).
I am also reasonably confident that my simple-minded phenomenology is
shared by anyone who can pry himself loose from prior theoretical
commitments.

I agree that toothaches fade in and out, and that conscious "instants"
are not punctate, but smeared across a fuzzy interval. But so what?
Call Delta-T one of those instants of consciousness of a toothache. It
is when I'm feeling that toothache RIGHT NOW that I am having a 1st
order conscious experience. Call it Delta-C-1 if you prefer, but it's
still C-1 (i.e., experiencing pain now) and not just C-2 (i.e.,
remembering, describing, or reflecting on experiencing pain) that's
going on then. And unless you can make a case for C-1, the case for C-2
is left trying to elevate itself by its boot-straps.

I also agree, of course, that conscious experiences (both C-1 and C-2)
involve illusions, including temporal illusions. [In an article in
Cognition and Brain Theory (5:29-47, 1982) entitled "Consciousness: An
Afterthought" I tried to show how an experience might be a pastische
of temporal and causal illusions.] But one thing's no illusion, and
that's the fact THAT we're having an experience. The toothache I feel
I'm having right now may in fact have its causal origin in a tooth
injury that happened 90 seconds ago, or a brain event that happened 30
milliseconds ago, but what I'm feeling when I feel it is a
here-and-now toothache, and that's real. It's even real if there's no
tooth injury at all. The point is that the temporal and causal
CONTENTS of an experience may be illusory in their relation to, or
representation of, real time and real causes, but they can't be illusions
AS experiences. And it is this "phenomenological validity" of
conscious experience (C-1 in particular) that is the real burden of
any machine/brain theory of consciousness.

It's a useful constraint to observe the following dichotomy (which
corresponds roughly to the objective/subjective dichotomy): Keep
behavioral performance and the processes that generate it on the
objective side (O) of the ledger, and leave them uninterpreted. On the
subjective (S) side, place conscious experience (1st order and
higher-order) and its contents, such as they are; these are of course
necessarily interpreted. You now need an argument for interpreting any
theory of O in terms of S. In particular, you must show why the
uninterpreted O story ALONE will not work (i.e., why ALL the processes
you posit cannot be completely unconscious). [The history of the
mind/body problem to date -- in my view, at least -- is that no one
has yet managed to do the latter in any remotely rigorous or
convincing way.]

Consider the toothache. On the O side there may (or may not) be
tooth injury, neural substrates of tooth injury, verbal and nonverbal
expressions of pain, and neural substrates of verbal and nonverbal
expressions of pain. These events may be arranged in real time in
various ways. On the S side there is my feeling -- fading
in and out, smeared across time, sometimes vocalized sometimes just
silently suffered -- of having a toothache.

The mind/body problem then becomes the problem of how (and why) to
equate those objective phenomena (environmental events, neural events,
behaviors) with those subjective phenomena (feelings of pain, etc.).
My critique of the excerpts from Minsky's book was that he was conferring
the subjective interpretation on his proposed objective processes and
events without any apparent argument about why the VERY SAME objective
story could not be told with equal objective validity WITHOUT the
subjective interpretation. [If that sounds like a Catch-22, then I've
succeeded in showing the true face of the mind/body problem at last.
It also perhaps shows why I recommend methodological epiphenomenalism --
i.e., not trying to account for consciousness, but only for the
objective substrates of our total performance capacity -- in place of
subjective over-interpretations of those same processes: Because, at
worst, the hermeneutic embellishments will mask or distract from
performance weaknesses, and at best they are theoretically (i.e.,
objectively) superfluous.]

>	As for that "mind/body problem" I repeat my slogan, "Minds are simply
>	what brains do."

Easier said than done. And, as I've suggested, even when done, it's no
"solution."
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

mwm@cuuxb.UUCP (01/24/87)

In article <460@mind.UUCP> Stevan Harnad (harnad@mind.UUCP) writes:
> [ discussion of C-1 and C-2]

	It seems to me that the human consciousness is actually more
	of a C-n;  C-1 being "capable of experiencing sensation",
	C-2 being "capable of reasoning about being C-1", and C-n
	being "capable of reasoning about C-1..C-(n-1)" for some
	arbitrarily large n...  Or was that really the intent of
	the Minsky C-2?
-- 
 Marc Mengel
 ...!ihnp4!cuuxb!mwm

harnad@mind.UUCP (01/24/87)

mwm@cuuxb.UUCP (Marc W. Mengel) of AT&T-IS, Software Support, Lisle IL
writes:

>	It seems to me that the human consciousness is actually more
>	of a C-n;  C-1 being "capable of experiencing sensation",
>	C-2 being "capable of reasoning about being C-1", and C-n
>	being "capable of reasoning about C-1..C-(n-1)" for some
>	arbitrarily large n...  Or was that really the intent of
>	the Minsky C-2?

It's precisely this sort of overhasty overinterpretation that my critique
of the excerpts from Minsky's forthcoming book was meant to counteract. You
can't help yourself to higher-order C's until you've handled 1st-order C
-- unless you're satisfied with hanging them on a hermeneutic sky-hook.
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

harnad@mind.UUCP (01/28/87)

Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

>	I'm inclined to grant a limited amount of consciousness to corporations
>	and even to ant colonies.  To do so, though, requires rethinking the
>	nature of pain and pleasure (to something related to homeostasis).

Unfortunately, the problem can't be resolved by mere magnanimity. Nor
by simply reinterpreting experience as something else -- at least not
without a VERY persuasive argument -- one no one in the history of the M/B
problem has managed to come up with so far. This history is just one of
hand-waving. Do you think "rethinking" pain as homeostastis does the trick?

>	computer operating systems and adaptive communications networks are
>	close [to conscious]. The issue is partly one of complexity, partly
>	of structure, partly of function.

I'll get back to the question of whether experiencing is an
all-or-none phenomenon or a matter of degree below. For now, I just
wonder what kind and degree of structural/functional "complexity" you
believe adds up to EXPERIENCING pain as opposed to merely behaving as
if experiencing pain.

>	I am assuming that neurons and other "simple" systems are C-1 but
>	not C-2  -- and C-2 is the kind of consciousness that people are
>	really interested in.

Yes, but do you really think that hard questions like these can be
settled by assumption? The question is: What justifies the inference
that an organism or device is experiencing ANYTHING AT ALL (C-1), and
what justifies interpreting internal functions as conscious ones?
Assumption does not seem like a very strong justification for an
inference or interpretation. What is the basis for your assumption?

I have proposed the TTT as the only justifiable basis, and I've given
arguments in support of that proposal. The default assumptions in the
AI/Cog-Sci community seem to be that sufficiently "complex" function
and performance capacity, preferably with "memory" and "learning," can be
dubbed "conscious," especially with the help of the subsidiary
assumption that consciousness admits of degrees. The thrust of my
critique is that this position is rather weak and arbitrary, and open
to telling counter-examples (like Searle's). But, more important, it
is not an issue on which the Cog-sci community even needs to take a
stand! For Cog-sci's objective goal -- of giving a causal explanation
of organisms' and devices' functional properties -- can be achieved
without embellishing any of its functional constructs with a conscious
interpretation. This is what I've called "methodological
epiphenomenalism." Moreover, the TTT (as an asymptotic goal) even
captures the intuitions about "sufficient functional complexity and
performance capacity," in a nonarbitrary way.

It is the resolution of these issues by unsupportable assumption, circularity,
arbitrary fiat and obiter dicta that I think is not doing the field
any good. And this is not at all because (1) it simply makes cog-sci look
silly to philosophers, but because, as I've repeatedly suggested, (2) the
unjustified embellishment of (otherwise trivial, toy-like) function
or performance as "conscious" can actually side-track cog-sci from its
objective, empirical goals, masking performance weaknesses by
anthropomorphically over-interpreting them. Finally (3), the
unrealizable goal of objectively capturing conscious phenomenology,
being illogical, threatens to derail cog-sci altogether, heading it in
the direction of hermeneutics (i.e., subjective interpretation of
mental states, i.e., C-2) rather than objective empirical explanation of
behavioral capacity. [If C-2 is "what people are really interested
in," then maybe they should turn to lit-crit instead of cog-sci.]

>	The mystery for me is why only >>one<< subsystem in my brain
>	seems to have that introspective property -- but
>	multiple personalities or split-brain subjects may be examples that
>	this is not a necessary condition.

Again, we'd probably be better off tackling the mystery of what the
brain can DO in the world, rather than what subjective states it can
generate. But, for the record, there is hardly agreement in clinical
psychology and neuropsychology about whether split-brain subjects or
multiple-personality patients really have more than one "mind," rather
than merely somewhat dissociated functions -- some conscious, some not --
that are not fully integrated, either temporally or experientially.
Inferring that someone has TWO minds seems to be an even trickier
problem than the usual problem ("solved" by the TTT) of inferring that 
someone has ONE (a variant of the mind/body problem called the "other-minds"
problem). At least in the case of the latter we have our own, normal unitary
experience to generalize from...

>	[Regarding the question of whether consciousness admits of degrees:]
>	An airplane either can fly or it can't. Yet there are
>	simpler forms of flight used by other entities-- kites, frisbees,
>	paper airplanes, butterflies, dandelion seeds... My own opinion
>	is that insects and fish feel pain, but often do so in a generalized,
>	nonlocalized way that is similar to a feeling of illness in humans.

Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)? 

I too believe that lower animals feel pain. I don't want to conjecture
what it feels like to them; but having conceded that it feels like
anything at all, you seem to have conceded that they are conscious.
Now where does the question of degree come into it?

The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.

>	I assume that lower forms experience lower forms of consciousness
>	along with lower levels of intelligence.  Such continua seem natural
>	to me. If you wish to say that only humans and TTT-equivalents are
>	conscious, you should bear the burden of establishing the existence
>	and nature of the discontinuity.

I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.

(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or less things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think that this is what is behind our
incoherent intuition that consciousness admits of degrees: Because to
experience the conscious/unconscious discontinuity is logically
impossible, hence, a fortiori, experientially impossible.)

>	[About why neurons are conscious and atoms are not:]
>	When someone demonstrates that atoms can learn, I'll reconsider.

You're showing your assumptions here. What can be more evident about
the gratuitousness of mentalistic interpretation (in place of which I'm
recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?

>	You are questioning my choice of discontinuity, but mine is easy
>	to defend (or give up) because I assume that the scale of
>	consciousness tapers off into meaninglessness. Asking whether
>	atoms are conscious is like asking whether aircraft bolts can fly.

So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.

>	I hope you're not insisting that no entity can be conscious without
>	passing the TTT. Even a rock could be conscious without our having
>	any justifiable means of deciding so.

Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.

I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.

(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)

>>SH:  "(To reply that synthetic substances with the same functional properties
>>	must be conscious under these conditions is to beg the question.)"
>KL: 	I presume that a synthetic replica of myself, or any number of such
>	replicas, would continue my consciousness.

I agree completely. The problem was justifying attributing consciousness
to neurons and denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?

>	Perhaps professional philosophers are able to strive for a totally
>	consistent world view.

The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.

>	Why is there Being instead of Nothingness?  Who cares?

These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).

But in this case consider the actual ironic state of affairs:
It is cog-sci that is hopefully opening up and taking an ambitious
position on the problems that normally only concern philosophers,
such as the mind/body problem. NONphilosophers are claiming: "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?

Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"

>	If I had to build an aircraft, I would not begin by refuting
>	theological arguments about Man being given dominion over the
>	Earth rather than the Heavens. I would start from a premise that
>	flight was possible and would try to derive enabling conditions.

Building aircraft and devices that (attempt to) pass the TTT are objective,
do-able empirical tasks. Trying to model conscious phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communal wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

mmt@dciem.UUCP (Martin Taylor) (01/31/87)

>     More ironic still, in arguing for the TTT and methodological
>epiphenomenalism, I am actually saying: "Why do you care? Worrying about
>consciousness will get you nowhere, and there's objective empirical
>work to do!"
>
That's a highly prejudiced, anti-empirical point of view: "Ignore Theory A.
It'll never help you.  Theory B will explain the data better, whatever
they may prove to be!"

Sure, there's all sorts of objective empirical work to do.  There's lots
of experimental work to do as well.  But there is also theoretical work
to be done, to find out how best to describe our world.  If the descriptions
are simpler using a theory that embodies consciousness than using one that
does not, then we SHOULD assume consciousness.  Whether this is the case
is itself an empirical question, which cannot be begged by asserting
(correctly) that all behaviour can be explained without resort to
consciousness.
-- 

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

tim@hoptoad.uucp (Tim Maroney) (02/05/87)

How well respected is Minsky among cognitive psychologists?  I was rather
surprised to see him putting the stamp of approval on Drexler's "Engines of
Creation", since the psychology is so amazingly shallow; e.g., reducing
identity to a matter of memory, ignoring effects of the glands and digestion
on personality.  Drexler had apparently read no actual psychology, only AI
literature and neuro-linguistics, and in my opinion his approach is very
anti-humanistic.  (Much like that of hard sf authors.)

Is this true in general in the AI world?  Is it largely incestuous, without
reference to scientific observations of psychic function?  In short, does it
remain almost entirely speculative with respect to higher-order cognition?
-- 
Tim Maroney, Electronic Village Idiot
{ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)

Second Coming Still Vaporware After 2,000 Years

wcalvin@well.UUCP (02/09/87)

 
     In following the replies to Minsky's excerpts from SOCIETY OF MIND, I
am struck by all the attempts to use slippery word-logic.  If that's all
one has to use, then one suffers with word-logic until something better
comes along.  But there are some mechanistic concepts from both
neurobiology and evolutionary biology which I find quite helpful in
thinking about consciousness -- or at least one major aspect of it, namely
what the writer Peter Brooks described in READING FOR THE PLOT (1985) as
follows:  
 
     "Our lives are ceaselessly intertwined with narrative, with the
     stories that we tell and hear told, those we dream or imagine or would
     like to tell, all of which are reworked in that story of our own lives
     that we narrate to ourselves in an episodic, sometimes semiconscious,
     but virtually uninterrupted monologue.  We live immersed in narrative,
     recounting and reassessing the meaning of our past actions,
     anticipating the outcome of our future projects, situating ourselves
     at the intersection of several stories not yet completed."
 
     Note the emphasis on both past and future, rather than the perceiving-
the-present and recalling-the-recent-past, e.g., Minsky:
 
>     although people usually assume that consciousness is knowing
>     what is happening in our minds, right at the
>     present time, consciousness never is really concerned with the
>     present, but with how we think about the records of our recent
>     thoughts...  how thinking about our short term memories changes them!
 
But simulation is more the issue, e.g., E.O. Wilson in ON HUMAN NATURE
(1978):

     "Since the mind recreates reality from abstractions of sense
     impressions, it can equally well simulate reality by recall and
     fantasy.  The brain invents stories and runs imagined and remembered
     events back and forth through time."
 
Rehearsing movements may be the key to appreciating the brain mechanisms,
if I may quote myself (THE RIVER THAT FLOWS UPHILL: A JOURNEY FROM THE BIG
BANG TO THE BIG BRAIN, 1986): 
 
     "We have an ability to run through a motion with our muscles detached
     from the circuit, then run through it again for real, the muscles
     actually carrying out the commands.  We can let our simulation run
     through the past and future, trying different scenarios and judging
     which is most advantageous -- it allows us to respond in advance to
     probable future environments, to imagine an accidental rockfall
     loosened by a climber above us and to therefore stay out of his fall
     line."
 
     Though how we acquired this foresight is a bit of a mystery.  Never
mind for a moment all those "surely it's useful" arguments which, using
compound interest reasoning, can justify anything (given enough
evolutionary time for compounding).  As Jacob Bronowski noted in THE
ORIGINS OF KNOWLEDGE AND IMAGINATION 1967, foresight hasn't been
widespread:
 
     "[Man's] unique ability to imagine, to make plans...  are generally
     included in the catchall phrase "free will." What we really mean by
     free will, of course, is the visualizing of alternatives and making a
     choice between them.  In my view, which not everyone shares, the
     central problem of human consciousness depends on this ability to
     imagine.....  Foresight is so obviously of great evolutionary
     advantage that one would say, `Why haven't all animals used it and
     come up with it?' But the fact is that obviously it is a very strange
     accident.  And I guess as human beings we must all pray that it will
     not strike any other species."
 
So if other animals have not evolved very much of our fussing-about-the-
future consciousness via its usefulness, what other avenues are there for
evolution?  A major one, noted by Darwin himself but forgotten by almost
everyone else, is conversion ("functional change in anatomical
continuity"), new functions from old structures.  Thus one looks at brain
circuitry for some aspects of the problem -- such as planning movements --
and sees if a secondary use can be made of it to yield other aspects of
consciousness -- such as spinning scenarios about past and future.
 
     And how do we generate a detailed PLAN A and PLAN B, and then compare
them?  First we recognize that detailed plans are rarely needed:  many
elaborate movements can get along fine on just a general goal and feedback
corrections, as when I pick up my cup of coffee and move it to my lips. 
But feedback has a loop time (nerve conduction time, plus decision-making,
often adds up to several hundred milliseconds of reaction time).  This
means the feedback arrives too late to do any good in the case of certain
rapid movements (saccadic eye flicks, hammering, throwing, swinging a golf
club).  Animals who utilize such "ballistic movements" (as we call them in
motor systems neurophysiology) simply have to evolve a serial command
buffer:  plan at leisure (as when we "get set" to throw) but then pump out
that whole detailed sequence of muscle commands without feedback.  And get
it right the first time.  Since it goes out on a series of channels (all
those muscles of arm and hand), it is something like planning a whole
fireworks display finale (carefully coordinated ignitions from a series of
launch platforms with different inherent delays, etc.).
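 
     (A toy illustration of that timing argument, in Python -- the numbers
and command names below are invented for the example, not real neural data:
with a feedback loop of roughly 200 ms and a movement that is over in about
120 ms, no correction can arrive in time, so the whole command sequence has
to be buffered in advance and emitted open-loop.)
 
    FEEDBACK_DELAY_MS = 200       # conduction + decision time (made up)
    THROW = [(0, "cock wrist"), (40, "accelerate arm"),
             (80, "snap elbow"), (120, "release")]   # a ~120-ms movement

    def corrections_in_time(commands, delay=FEEDBACK_DELAY_MS):
        """Could feedback from the first command arrive before each later one?"""
        start = commands[0][0]
        return [(cmd, (t - start) >= delay) for t, cmd in commands]

    def ballistic_buffer(commands):
        """Plan at leisure, then emit the whole sequence without feedback."""
        return [cmd for _, cmd in commands]

    print(corrections_in_time(THROW))   # every entry is False: feedback too late
    print(ballistic_buffer(THROW))      # so the buffer must be right the first time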
 
     But once a species has such a serial command buffer, it may be useful
for all sorts of things besides the actions which were originally under
natural selection during evolution (throwing for hunting is my favorite
shaper-upper --see J.Theor.Biol. 104:121-135,1983 -- but word-order-coded
language is conceivably another way of selecting for a serial command
buffer).  Besides rehearsing slow movements better with the new-fangled
ballistic movement sequencer, perhaps one could also string together other
concepts-images-schemata with the same neural machinery: spin a scenario? 
 
     The other contribution from evolutionary biology is the notion that
one can randomly generate a whole family of such strings and then select
amongst them (imagine a railroad marshalling yard, a whole series of
possible trains being randomly assembled).  Each train is graded against
memory for reasonableness -- Does it have an engine at one end and a
caboose at the other? -- before one is let loose on the main line.  "Best"
is surely a value judgment determined by memories of the fate of similar
sequences in the past, and one presumes a series of selection steps that
shape up candidates into increasingly more realistic sequences, just as
many generations of evolution have shaped up increasingly more
sophisticated species.  To quote an abstract of mine called "Designing
Darwin Machines":
 
          This selection of stochastic sequences is more
          analogous to the ways of Darwinian evolutionary biology
          than to von Neumann machines.  One might call it a
          Darwin machine instead, but operating on a time scale
          of milliseconds rather than millennia, using innocuous
          virtual environments rather than noxious real-time
          ones.  
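 
     (For concreteness, here is a deliberately tiny Python sketch of that
marshalling-yard scheme -- randomly assembled candidate "trains," graded
against memory, with repeated rounds of selection and variation before the
best one is let loose.  The move names, the grading rule and the population
sizes are all invented for the illustration; this is only the shape of the
algorithm, not the neural model itself.)
 
    import random

    MOVES = ["reach", "grip", "lift", "aim", "throw", "pause"]

    def random_train(length=6):
        # a candidate sequence, assembled at random in the "marshalling yard"
        return [random.choice(MOVES) for _ in range(length)]

    def grade(train, memory):
        # score a train by how many of its transitions memory recalls as
        # reasonable (engine at one end, caboose at the other)
        return sum((a, b) in memory for a, b in zip(train, train[1:]))

    def darwin_machine(memory, population=20, rounds=10):
        trains = [random_train() for _ in range(population)]
        for _ in range(rounds):
            trains.sort(key=lambda t: grade(t, memory), reverse=True)
            survivors = trains[:population // 2]          # selection step
            mutants = []
            for t in survivors:                           # variation step
                copy = list(t)
                copy[random.randrange(len(copy))] = random.choice(MOVES)
                mutants.append(copy)
            trains = survivors + mutants
        return max(trains, key=lambda t: grade(t, memory))

    memory = {("reach", "grip"), ("grip", "lift"),
              ("lift", "aim"), ("aim", "throw")}
    print(darwin_machine(memory))   # the best-shaped candidate after selection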
 
     Is this what Darwin's "bulldog," Thomas Henry Huxley, would have
agreed was the "mechanical equivalent of consciousness" which Huxley
thought possible, almost a century ago?  It would certainly be fitting.  
 
     We do not yet know how much of our mental life such stochastic
sequencers might explain.  But I tend to think that this approach using
mechanical analogies from motor systems neurophysiology and evolutionary
biology might have something to recommend it, in contrast to word-logic
attempts to describe consciousness.  At least it provides a different place
to start, hopefully less slippery than variants on the little person inside
the head with all their infinite regress.  
 
                                   William H. Calvin
                                        Biology Program NJ-15
                                        University of Washington
                                        Seattle WA 98195 USA
                                        206/328-1192
                                        USENET:  wcalvin@well.uucp

harnad@mind.UUCP (02/09/87)

wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA
writes:

>	Rehearsing movements may be the key to appreciating the brain
>	mechanisms [of consciousness and free will]

But WHY do the functional mechanisms of planning have to be conscious?
What does experience, awareness, etc., have to do with the causal
processes involved in the fanciest plan you may care to describe? This
is not a teleological why-question I'm asking (as other contributors
have mistakenly suggested); it is a purely causal and functional one:
Every one of the internal functions described for a planning,
past/future-oriented device of the kind Minsky describes (and we too
could conceivably be) would be physically, causally and functionally EXACTLY
THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same
means -- WITHOUT being interpreted as being conscious. So what functional
work is the consciousness doing? And if none, what is the justification
for the conscious interpretation of any such processes (except in
my own private case -- and of course that can't be claimed to the credit of
Minsky's hypothetical processes)? [As to "free will" -- apart from the aspect
that is redundant with the consciousness-problem [namely, the experience,
surely illusory, of free will], I sure wouldn't want to have to defend a
functional blueprint for that...]
 

-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

harnad@mind.UUCP (02/09/87)

Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

>	I'm not so sure that I'm conscious... I'm not sure I do experience
>	the pain because I'm not sure what "I" is doing the experiencing

This is a tough condition to remedy. How about this for a start: The
inferential story, involving "I" and objects, etc. (i.e., C-2) may
have the details wrong. Never mind who or what seems to be doing the
experiencing of what. The question of C-1 is whether there is any
experience going on at all. That's not a linguistic matter. And it's
something we presumably share with speechless, unreflective cows.

>	on the other hand, I'm not sure that silicon systems
>	can't experience pain in essentially the same way.

Neither am I. But there's been a critical inversion of the null hypothesis
here. From the certainty that there's experience going on in one privileged
case (the first one), one cannot be too triumphant about the ordinary inductive
uncertainty attending all other cases. That's called the other-minds
problem, and the validity of that inference is what's at issue here.
The substantive problem is characterizing the functional capacities of
artificial and natural systems that warrant inferring they're conscious.

>	Instead of claiming that robots can be conscious, I am just as
>	willing to claim that consciousness is an illusion and that I am
>	just as unconscious as any robot.

If what you're saying is that you feel nothing (or, if you prefer, "no
feeling is going on") when I pinch you, then I must of course defer to
your higher authority on whether or not you are really an unconscious robot.
If you're simply saying that some features of the experience of pain and
how we describe it are inferential (or "linguistic," if you prefer)
and may be wrong, I agree, but that's beside the point (and a C-2
matter, not a C-1 matter). If you're saying that the contents of
experience, even its form of presentation, may be illusory -- i.e.,
the way things seem may not be the way things are -- I again agree,
and again remind you that that's not the issue. But if you're saying
that the fact THAT there's an experience going on is an illusion, then
it would seem that you're either saying something (1) incoherent or (in
MY case, in any event) (2) false. It's incoherent to say that it's
illusory that there is experience because the experience is illusory.
If it's an experience, it's an experience (rather than something else,
say, an inert event), irrespective of its relation to reality or to any
interpretations and inferences we may wrap it in. And it's false (of me,
at any rate) that there's no experience going on at all when I say (and
feel) I have a toothache. As for the case of the robot, well, that's
what's at issue here.

[Cartesian exercise: Try to apply Descartes' method of doubt -- which
so easily undermines "I have a toothache" -- to "It feels as if I have
a toothache." This, by the way, is to extend the "cogito" (validly) even
further than its author saw it as leading. You can doubt that things
ARE as they seem, but you can't doubt that things SEEM as they seem.
And that's the problem of experience (of appearances, if you will).
Calling them "illusions" just doesn't help.]

>	One way out is to assume that neurons themselves are aware of pain

Out of what? The other-minds problem? This sounds more like an
instance of it than a way out. (And assumption hardly seems to amount
to solution.)

>	How do we know that we experience pain?

I'm not sure about the "I," and the specifics of the pain and its
characterization are negotiable, but THAT there is SOME experience
going on when "I" feel "pain" is something that anyone but an
unconscious robot can experience for himself. And that's how one
"knows" it.

>	I propose that... our "experience" or "awareness" of pain is
>	an illusion, replicable in all relevant respects by inorganic systems.

Replicate that "illusion" -- design devices that can experience the
illusion of pain -- and you've won the battle. [One little question:
How are you going to know whether the device really experiences that
illusion, rather than your merely being under the illusion that it
does?]

As to inorganic systems: As ever, I think I have no more (or less)
reason to deny that an inorganic system that can pass the TTT has a
mind than I do to deny that anyone else other than myself has a mind.
That really is a "way out" of the other-minds problem. But inorganic
systems that can't pass the TTT...
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

harnad@mind.UUCP (02/09/87)

Summary: On the "how" vs. the "why" of consciousness

Paul Davis (davis@embl.bitnet), EMBL, Postfach 10.22.09, 6900 Heidelberg, FRG
wrote on mod.ai:


>	we see Harnad struggling with why's and not how's...
>	consciousness is a *biological* phenomenon... because
>	this is so, the question of *why* consciousness is used
>	is quite irrelevant in this context...[Davis cites Armstrong,
>	etc., on "consciousness as a means for social interaction"]...
>	consciousness would certainly seem to be here -- leave it to
>	the evolutionary biologists to sort out why, while we get on
>	with the how...

I'm concerned ONLY with "how," not "why." That's what the TTT and
methodological epiphenomenalism are about. When I ask pointedly about
"why," I am not asking a teleological question or even an evolutionary one.
[In prior iterations I explained why evolutionary accounts of the origins
and "survival value" of consciousness are doomed: because they're
turing-indistinguishable from the IDENTICAL selective-advantage scenario,
minus consciousness.] My "why" is a logical and methodological challenge
to inadequate, overinterpreted "how" stories (including evolutionary
"just-so" stories, e.g., "social" ones): Why couldn't the objectively
identical "how" features stand alone, without being conscious? What
functional work is the consciousness itself doing, as opposed to
piggy-backing on the real functional work? If there's no answer to that,
then there is no justification for the conscious interpretation of the "how."
[If we're not causal dualists, it's not even clear whether we would
WANT consciousness to be doing any independent work. But if we
wouldn't, then why does it figure in our functional accounts? -- Just
give me the objective "how," without the frills.]

>	the mystery of the C-1: How can ANYTHING *know* ANYTHING at all?

The problem of consciousness is not really the same as the problem of
knowledge (although they're linked, since, until shown otherwise, only
conscious devices have knowledge). To know X is not the same as to
experience X. In fact, I don't think knowledge is a C-1-level
phenomenon. [I know (C-2) THAT I experience pain, but does the cow know
THAT she experiences pain? Yet she presumably does experience pain (C-1).]
Moreover, "knowledge" is mired in epistemological and even
ontological issues that cog-sci would do well to steer clear of (such
as the difference between knowing X and merely believing X, with
justification, when X is true).
-- 

Stevan Harnad                                  (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard}  !princeton!mind!harnad
harnad%mind@princeton.csnet           

wcalvin@well.UUCP (02/10/87)

Reply to  Peter O. Mikes <lll-lcc!mordor!pom> email remarks:
>  The ability to form 'the model of reality' and to exercise that model is
>  (I believe) a necessary attribute of a 'sentient' being, and the richness 
>  of such a model may one day point a way to 'something better' than
>  word-logic.  Certainly, the machines which exist so far do not indeed 
>  have any model of the universe 'to speak of' and are not conscious. 
 
     A model of reality is not uniquely human; I'd ascribe it to a spider
as well as my pet cat.  Similarly, rehearsing with peripherals switched off
is probably not very different from the "get set" behavior of said cat when
about to pounce.  Choosing between behaviors isn't unique either, as when
the cat chooses between taking an interest in my shoe-laces vs. washing a
little more.   What is, I suspect, different about humans is the wide range
of simulations and scenario-spinning.  To use the railroad analogy again,
it isn't having two short candidate trains to choose between, but having
many strings of a half-dozen each, being shaped up into more realistic
scenarios all the time by testing against memory -- and being able to
select the best of that lot as one's next act.
     I'd agree that present machines aren't conscious, but that's because
they aren't Darwin machines with this random element, followed by
successive selection steps.  Granted, they don't have even a spider's model
of the (spider's limited) universe; improve that all you like, and you
still won't have human-like forecasting-the-future worry-fretting-joy.  It
takes that touch of the random, as W. Ross Ashby noted back in 1956 in his
cybernetics book, to create anything really new -- and I'd bet on a Darwin-
machine-like process such as multitrack stochastic sequencing as the source
of both our continuing production of novelty and our uniquely-human aspects
of consciousness.
 
William H. Calvin
University of Washington           206/328-1192 or 206/543-1648
Biology Program    NJ-15           BITNET:  wcalvin@uwalocke
Seattle WA  98195    USA           USENET:  wcalvin@well.uucp

marty1@houem.UUCP (02/10/87)

In article <490@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA
> writes:
> >	Rehearsing movements may be the key to appreciating the brain
> >	mechanisms [of consciousness and free will]
> 
> But WHY do the functional mechanisms of planning have to be conscious?
> What does experience, awareness, etc., have to do with the causal
> processes involved in the fanciest plan you may care to describe?...

I have the gall to answer an answer to an answer without having read
Minsky.  But then, my interest in AI is untutored and practical.
Here goes:

My notion is that a being that thinks is not necessarily conscious,
but a being that thinks about thinking, and knows when it is just
thinking and when it is actually doing, must be called conscious.

In UNIX(tm) there is a program called "make" that reads a script of
instructions, compares the ages of various files named in the
instructions, and follows the instructions by updating only the files
that need to be updated.  It can be said to be acting with some sort of
rudimentary intelligence.

If you invoke the "make" command with the "-n" flag, it doesn't do any
updating, it just tells you what it would do.  It is rehearsing a
potential future action.  In a sense, it's thinking about what it would
do.  But it doesn't have to know that it's only thinking and not
doing.  It could simply have its actuators cut off from its rudimentary
intelligence, so that it thinks it's acting but really isn't.

Now suppose the "make" command could, under its own internal program,
run through its instructions with a simulated "-n" flag, varying some
conditions until the result of the "thinking without doing" satisfied
some objective, and then could remove the "-n" flag and actually do
what it had just thought about.

This "make" would appear to know when it is thinking and when it is
acting, because it decided when to think and when to act.  In fact, in
its diagnostic output it could say first "I am thinking about the
following alternative," and then finally say, "The last run looked
good, so this time I'm really going to do it."  Not only would it
appear to be conscious, but it would be accomplishing a practical
purpose in a manner that requires it to distinguish internally between
introspection and action.

I think that version of "make" would be within the current state of the
art of programming, and I would call it conscious.  So we're not far
from artificial consciousness.
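
As a rough sketch of what I mean (in Python rather than C, with invented
rules, file ages and objective -- a toy only, not the real make(1)), such a
rehearse-then-act "make" might look like this:

    RULES = {                      # target: (dependencies, action)
        "prog": (["a.o", "b.o"], "link a.o b.o -o prog"),
        "a.o":  (["a.c"], "cc -c a.c"),
        "b.o":  (["b.c"], "cc -c b.c"),
    }

    def build(target, ages, log):
        """Rebuild target if any dependency is newer; record the action."""
        deps, action = RULES.get(target, ([], None))
        dep_ages = [build(d, ages, log) for d in deps]
        if action and any(a > ages.get(target, -1) for a in dep_ages):
            log.append(action)
            ages[target] = max(dep_ages) + 1   # stand-in for really running it
        return ages.get(target, 0)

    def conscious_make(target, ages, objective):
        # Rehearse alternatives with the actuators cut off (work on copies),
        # then act on the real state once a rehearsal satisfies the objective.
        alternatives = [dict(ages), {**ages, "a.c": ages["prog"] + 1}]
        for alt in alternatives:
            plan = []
            build(target, dict(alt), plan)      # thinking, not doing
            print("I am thinking about the following alternative:", plan)
            if objective(plan):
                print("The last run looked good, so this time I'm really going to do it.")
                ages.update(alt)                # adopt the rehearsed condition
                done = []
                build(target, ages, done)       # doing, for real
                return done
        return []

    ages = {"a.c": 5, "b.c": 1, "a.o": 6, "b.o": 2, "prog": 7}
    print(conscious_make("prog", ages,
                         objective=lambda plan: any("link" in s for s in plan)))

The distinction it draws internally -- building against a scratch copy while
announcing "I am thinking," versus building against the real state -- is the
introspection/action distinction described above.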

						Marty
M. B. Brilliant		(201)-949-1858
AT&T-BL HO 3D-520	houem!marty1

rosa@cheviot.UUCP (02/11/87)

This is really a follow-up to Cugini, but I do not have the moderator's address.

Please refer to McCarthy's seminal work "The Consciousness of Thermostats".
All good AI believers empathize with thermostats rather than with other humans.
Thank goodness I do computer science... (:-)

Has Zen and the Art of Programming not gone far enough??? Please, no more
philosophy; I admit it, I do NOT care about consciousness/Minsky/the mind-brain
identity problem....

Is it the cursor that moves, the computer that thinks, or the human that controls?
None of these, grasshopper; only a small data error on the tape of life.

wcalvin@well.UUCP (02/14/87)

 
 
Stevan Harnad replies to my Darwin Machine proposal for consciousness
(2256@well.uucp) as follows: 
>  Summary: No objective account of planning for the future can give an 
>  independent causal role to consciousness, so why bother? 
>  wcalvin@well.UUCP writes: 
>   
>>       Rehearsing movements may be the key to appreciating the brain 
>>       mechanisms [of consciousness and free will] 
>   
>  But WHY do the functional mechanisms of planning have to be conscious? 
>  ...Every one of the internal functions described for a planning, 
>  past/future-oriented device of the kind Minsky describes (and we too 
>  could conceivably be) would be physically, causally and functionally EXACTLY
>  THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same
>  means -- WITHOUT being interpreted as being conscious. So what functional 
>  work is the consciousness doing? And if none, what is the justification 
>  for the conscious interpretation of any such processes...? 
> 
   Why bother?  Why bother to talk about the subject at all?  Because one
hopes to understand the subject, maybe extend our capabilities a little by
appreciating the mechanistic underpinning a little better.  I am describing a
stochastic-plus-selective process that, I suggest, accounts for many of the
things which are ordinarily subsumed under the topic of consciousness.  I'd
like the reactions of people who've argued consciousness more than I have,
who could perhaps improve on my characterization or point out what it can't
subsume.
     I don't claim that these functional aspects of planning (I prefer to
just say "scenario-spinning" rather than something as purposeful-sounding as
planning) are ALL of consciousness -- they seem a good bet to me, worthy of
careful examination, so as to better delineate what's left over after such
stochastic-plus-selective processes are accounted for.  But to talk about
consciousness as being purely personal and subjective and hence beyond
research -- that's just a turn-off to developing better approaches that are
less dependent on slippery words.
     That's why one bothers.  We tend to think that humans have something
special going for them in this area.  It is often confused with mere
appreciation of one's world (perceiving pain, etc.) but there's nothing
uniquely human about that.  The world we perceive is probably a lot more
detailed than that of a spider -- and even of a chimp, thanks to our constant
creation of new schemata via word combinations.  But if there is something
more than that, I tend to think that it is in the area of scenario-spinning: 
foresight, "free will" as we choose between candidate scenarios, self-
consciousness as we see ourselves poised at the intersection of several
scenarios leading to alternative futures.  I have proposed a mechanistic
neurophysiological model to get us started thinking about this aspect of
human experience; I expect it to pare away one aspect of "consciousness" so
as to better define, if anything, what remains.  Maybe there really is a
little person inside the head, but I am working on the assumption that such
distributed properties of stochastic neural networks will account for the
whole thing, including how we shift our attention from one thing to another. 
Even William James in 1890 saw attention as a matter of competing scenarios:
          "[Attention] is the taking possession by the mind, in
          clear and vivid form, of one out of what seem several
          simultaneously possible objects or trains of thought."  
     To those offended by the notion that "chance rules," I would point out
that it doesn't:  like mutations and permutations of genes, neural stochastic
events serve as the generators of novelty -- but it is selection by one's
memories (often incorporated as values, ethics, and such) that determines what
survives.  Those values rule.  We choose between the options we generate, and
often without overt action -- we just form a new memory, a judgement on file
to guide future choices and actions.
     And apropos chance, I cannot resist quoting Halifax:
               "He that leavth nothing to chance
               will do few things ill,
               but he will do very few things."
He probably wasn't using "chance" in quite the sense that I am, but the
point still holds when read in my stochastic sense too.
 
                    William H. Calvin             BITNET: wcalvin@uwalocke
                    University of Washington      USENET: wcalvin@well.uucp
                    Biology Program    NJ-15      206/328-1192 or 543-1648
                    Seattle WA 98195