[comp.ai] Mind is What Brains Do?

cs_bob@gsbacd.uchicago.edu (05/16/89)

This group looks like it needs a new controversy, so...

There have lately been a number of currents running through this newsgroup
that on the surface seem to me to be interrelated in interesting and
perhaps useful ways, if only we can make them more explicit. To begin
with, however abrasive he may be, Gilbert Cockton is somewhat justified
in his attack upon 'strong AI', in the name (I assume) of epistemology.
Carl Jung once remarked of Freud that while he was a brilliant man, he was,
after all, only a medical doctor and suffered from the disadvantage of
'not knowing enough' about the philosophy of Mind. I think the same claim
can be made of a number of prominent figures on today's AI horizon.

Take for example, Marvin Minsky's recent claim in "The Society of Mind" 
that "Mind is what brains do". This effectively reduces one of the greatest
mysteries of human thought to a question which Minsky believes can be
answered completely within his own field. Minsky's position is not only 
arrogant and in violation of one of the very principles Minsky pushes at 
us in  "Society of Mind" (namely, that we should be skeptical of simple 
explanations); it also betrays a terrible ignorance of the nature and history
of the philosophical question "What is Mind?" It seems to me that Minsky
wishes to use AI to accomplish with Epistemology what Skinner tried to
do to Psychology with Behaviorism. In both cases, we see an attempt to put
a discipline of rational inquiry on a 'sound, scientific basis' by simply
eliminating from consideration all of the problems which are not easily
addressed empirically. 

I do not think that it is a coincidence that this same Marvin Minsky is
also largely responsible for having derailed research into neural networks
to the extent that one could fairly say he set the field back at least 10
years. I have read both Minsky and Papert defending themselves from this
charge, and I do not accept their claims. For one thing, Minsky points out
that the claims of the Rosenblatt camp needed to be examined analytically,
that we were being misled by the success of connectionism in small
laboratory experiments, and that there were fundamental problems with
perceptrons which he felt compelled to point out. If Minsky really
believed this, he certainly wouldn't have written "Society of Mind", in
which he makes such bold claims as "Mind is what brains do" without
presenting the slightest bit of analytical support. The only support for
Minsky's K-lines, for example, is their utility in actual attempts to
model memory. As far as I know, Minsky has never taken the time to perform
the same sort of analysis of his own theories that he felt so compelled
to do with Rosenblatt's.

I have also read accounts of Minsky himself admitting that he may have gone 
a bit too far, that he saw himself as trying to counter a connectionist 
hysteria, and that he didn't intend to kill it. This comes, of course,
only after it is apparent that what we now call PDP isn't going to die
an easy death. It's unfortunate that he is put in the position of having
to defend himself for having misled so many people. I for one certainly
don't intend to accuse him of that, not because I believe him for an instant,
but because one really can't blame him if he managed to make fools of so
many. One can only blame the fools.

The fact is that Minsky succeeded because the large majority of AI researchers
wanted Minsky to be right, not because he was. Similarly, most AI researchers
have a vested interest in the possibilities of "Strong AI", and are therefore
unwilling to question it. I am sure, for instance, that many people are
quite happy with the proposition, "Mind is what brains do". Once
one understands it, it seems plausible enough. More importantly, it's very
convenient to a person who wants to go about his business of reproducing
Mind within the context of a machine. For if he were not to believe that
the brain, as a physical system, was a necessary and sufficient condition
for Mind, he might have second thoughts about devoting his life's work to
trying to reproduce (or simulate) Mind on a computer. 

Again I am reminded of Minsky, only this time it is his pooh-poohing of 
brain laterality. Minsky doesn't seem to believe in the difference between
the two hemispheres or, at any rate, he seems to feel that too much has been
made of this difference. He even goes so far in "Society of Mind" as to make
the absurd statement that the division of the brain into left and right
hemispheres is just as arbitrary as the division into top and bottom or 
front and back hemispheres. This is pop-logic in an extreme form; it appeals
exclusively to people who want to believe it in the first place, and
it contradicts a very extensive body of well respected *scientific* evidence.
One cannot argue, as Stephen Smoliar has done in Minsky's defense, that
he is cautioning against reading too much into the differences between the
two halves of the brain. If one reads Minsky's statements on brain laterality
closely, one will find that he equivocates quite nicely in this respect.
He never exactly says that there are no differences between the two hemispheres
of the brain, but he clearly leaves the reader with the impression that there
are no _important_ differences. This is exactly what he did with Perceptrons:
he never said they were useless, but he left the definite impression in the
bulk of the AI community that they would never be very useful.

To return to the dictum "Mind is what brains do", we have to ask how this
accounts for the qualitative difference between Mind and Digestion, as
in the assertion "Digestion is what stomachs do". Admittedly, as scientists
we can discuss the process of thinking in much the same way as we can discuss
the process of digesting. That is to say, we can observe these processes and
analyze our observations in various ways. The problem arises when we realize
that we can also observe the actions of our brains in another way - from
the inside. One cannot do this with digestion. The by-product of the activities
of the brain is the observer. Who would like to compare this with the
by-products of digestion? Are they not radically different things? Indeed,
we cannot even say that the observer-within-the-brain is a thing at all,
and this is what leads certain people to say that it is nothing.

It's all well and good to talk of self-awareness as an emergent property
of the brain. I suppose that it is, but in saying this one really isn't
saying very much. One can easily imagine instructing a computer to 
answer "Yes" to the question "Are you self-aware?", but only a fool would
want to pretend that this is the same thing as what you and I are experiencing
as self. That my 'selfness' is real is, to me, beyond dispute. That it
is beyond the scope of empirical investigation can be shown in a number of
ways - is the 'red' you see the same color as the 'red' I see? Are dogs
self-aware? Is a tree? The fact that we cannot answer these questions derives
from the fact that the observer cannot be observed. This does not mean that
the observer does not exist, as some scientists would have us believe.
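
As for the computer instructed to say "Yes", a few lines suffice. Here is
a toy sketch in C (the canned question and reply are, of course, just
placeholders):

/* A "self-aware" program: it produces the right verbal behavior on
 * cue.  The table lookup proves nothing about experience, which is
 * precisely the point. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char question[256];

    printf("? ");
    if (fgets(question, sizeof question, stdin) == NULL)
        return 0;
    question[strcspn(question, "\n")] = '\0';   /* strip newline */

    if (strcmp(question, "Are you self-aware?") == 0)
        printf("Yes\n");
    else
        printf("I don't understand.\n");
    return 0;
}

No elaboration of that lookup turns the verbal behavior into the
experience being asked about.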

It seems to me the height of absurdity to claim that self is an illusion,
which is essentially what Minsky would have us believe. It certainly provides
an easy answer to the question, "How did _I_ emerge from this purely physical
world?" You didn't. You're just fooling yourself. You aren't, which is to say
_you_ don't really exist, you just think you do. The problem
is, if I don't exist, why should I need to be fooled? What's being misled
by the illusion?

I realize that questions such as this don't matter much to the daily activities
of AI researchers. I have no doubt that AI will make substantial and
significant progress without ever having to broach the question, "What is Mind?",
but I also feel that, eventually, someone with a perspective in both AI
and epistemology is going to have to address it, lest we content ourselves
with the meaningless dictum, "Mind is what brains do."


R.Kohout

#include <standard_disclaimer.h>

smoliar@vaxa.isi.edu (Stephen Smoliar) (05/16/89)

In article <3244@tank.uchicago.edu> cs_bob@gsbacd.uchicago.edu writes:
>
>This group looks like it needs a new controversy, so...
>
Apparently Bob Kohout regards Minsky-bashing as an appropriate source of
controversy.  Having put a fair amount of time into trying to make sense
of THE SOCIETY OF MIND, I would like to rise to his challenge (which he
probably expected me to do).  However, I think it is important to set a
few matters straight about Minsky's role in the AI community.

My first observation is that, while I do not travel around very much, I have
yet to encounter a gathering of AI practitioners who would come out and say
that they take Minsky seriously.  (At one site--which I shall leave unnamed
out of a sense of discretion--where I led a seminar, I was strongly urged not
to mention K-lines.  The most acceptable phrase I was told to use would be
"something that looks like K-lines.")  So if Bob is under the impression that
the AI community is now fanatically worshipping at Minsky's temple, he is quite
mistaken.  An interesting sign of this is that our most prestigious journal,
ARTIFICIAL INTELLIGENCE, has not yet run a review of THE SOCIETY OF MIND.

This is about to change, hopefully.  The reason for my own intense study of
Minsky's book was to write that long-overdue review.  I do not think my editor
will mind if I share my final paragraphs with this bulletin board, because what
one gets out of a book is very dependent on the attitude one brings to that
book.  Therefore, I think it is important to dwell upon how a book like THE
SOCIETY OF MIND should be approached.  Here is how I ended my review:

	The reader should next accept the fact that this book is more
	a source of QUESTIONS than of ANSWERS.  In PHILOSOPHY IN A NEW
	KEY, Susanne Langer wrote:

		The way a question is asked limits and disposes
		the ways in which any answer to it--right or
		wrong--may be given.

	Many of the questions which Turing first raised in 1950 became
	side-tracked by the pursuit of artificial intelligence as a
	"concrete" discipline.  THE SOCIETY OF MIND is an effort to get
	the questions of artificial intelligence "back on track"--Turing's
	track, at least.

	Because the emphasis is on questions, rather than answers, the
	reader should also approach some of the more flamboyant declarations
	of this book with caution.  Often their intent is more to provoke
	than to inform.  (Maturana calls such sentences "triggering
	perturbations.")  For example, Minsky's theory about split brains,
	as articulated in Section 11.8, is at odds with certain results in
	the published literature.  [Added as a footnote:  I wish to thank
	Bob Kohout for pointing this out to me.]  Unfortunately, Minsky
	does not provide citations to either support or rebut his own
	point of view.  For better or for worse, he seems to have assumed
	that the interested reader will pursue such references on his own.

	Nevertheless, taken on its own terms, this is a very exciting
	book . . . perhaps the most exciting book ever to have been
	published on the subject of mind.  It is provocative in the
	questions it raises and challenging because it does not provide
	cut-and-dried answers to those questions.  Ultimately, it serves
	the most important function which any book may serve:  IT INSPIRES
	THE READER TO THINK ABOUT THESE MATTERS ON HIS OWN.  Such books
	are rare in ANY subject.  We should all be thankful that Marvin
	Minsky has been able to serve the discipline of artificial
	intelligence so well.
>
>Take for example, Marvin Minsky's recent claim in "The Society of Mind" 
>that "Mind is what brains do". This effectively reduces one of the greatest
>mysteries of human thought to a question which Minsky believes can be
>answered completely within his own field. Minsky's position is not only 
>arrogant and in violation of one of the very principles Minsky pushes at 
>us in  "Society of Mind" (namely, that we should be skeptical of simple 
>explanations); it also betrays a terrible ignorance of the nature and history
>of the philosophical question "What is Mind?" It seems to me that Minsky
>wishes to use AI to accomplish with Epistemology what Skinner tried to
>do to Psychology with Behaviorism. In both cases, we see an attempt to put
>a discipline of rational inquiry on a 'sound, scientific basis' by simply
>eliminating from consideration all of the problems which are not easily
>addressed empirically. 
>
In light of my opening remarks, I would claim that this paragraph is basically
a misreading of Minsky's text.  While I can appreciate that there are those who
would read "Mind is what brains do" as a gesture of arrogant reductionism, I do
not think the text of THE SOCIETY OF MIND supports that claim.  Rather, the
book asks us to go back to that question of just what it is that brains
ACTUALLY DO.  It is not so much a matter of ignoring what philosophers
have had to say about mind as it is an observation that if one gets too
immersed in the philosophy of mind, one might lose touch with how the brain
is actually contributing to human behavior.  All Minsky has done is to
introduce a new perspective for the consideration of those questions which
have occupied so many philosophers of past and present.  The fact of the matter
is that, after one has read Minsky, one can go back to Descartes, Berkeley,
Hume, Husserl, and Wittgenstein (to name a few) and reconsider their
observations, their hypotheses, and their intellectual struggles.  Now how
many books do we read which encourage such a review of past accomplishments?

>I do not think that it is a coincidence that this same Marvin Minsky is
>also largely responsible for having derailed research into neural networks
>to the extent that one could fairly say he set the field back at least 10
>years.

This is an example of attributing far too much political power to Minsky.  The
greatest enemy of neural network research has always been inadequate computing
facilities . . . both powerful hardware and accommodating software
environments.  The Rosenblatt camp was "stuck in the bits" of extremely
clunky equipment.  They could raise the occasional spark, but they lacked
the facilities to turn it into fire.  Even today, with much more powerful
equipment, achievements remain disappointingly modest.
>
>I have also read accounts of Minsky himself admitting that he may have gone 
>a bit too far, that he saw himself as trying to counter a connectionist 
>hysteria, and that he didn't intend to kill it. This comes, of course,
>only after it is apparent that what we now call PDP isn't going to die
>an easy death. It's unfortunate that he is put in the position of having
>to defend himself for having misled so many people. I for one certainly
>don't intend to accuse him of that, not because I believe him for an instant,
>but because one really can't blame him if he managed to make fools of so
>many. One can only blame the fools.
>
I do not think it has ever been Minsky's intention to make fools of his
readers.  If anything, I fear that he expects too much of his readers:
He expects them to THINK OVER his provocations rather than swallow them
whole.  The fools are the ones who are not willing (or able) to allow Minsky
the respect of such thought.  Unfortunately, they also tend to have the loudest
voices.

>The fact is that Minsky succeeded because the large majority of AI researchers
>wanted Minsky to be right, not because he was.

What makes you think that Minsky succeeded?  Have you any idea what the level
of research activity is that is based on THE SOCIETY OF MIND?  I have already
given my visiting seminar anecdote.  As another metric, I would invite you to
look at the preliminary Technical Program for this year's IJCAI.  I think you
will find that there is NOT ONE paper which is, in any way, a product of
Minsky's "vision."  Minsky is currently spending more time at the MIT Media
Lab than at the AI Lab.  THERE I have seen some attempts to apply his work
to music, but it is still very early stuff.  It hardly constitutes a massive
wave of AI research!
>
>It seems to me the height of absurdity to claim that self is an illusion,
>which is essentially what Minsky would have us believe.

I think this is again a misreading.  If you look in Minsky's glossary, he is
actually calling it "the myth that each of us contains SOME SPECIAL PART that
embodies the essence of the mind" (my emphasis).  It is not the "self" that is
the illusion, but rather the belief that self is some distinct component of body.
I see nothing wrong with Minsky questioning this belief.

In retrospect, I think that Bob has reacted in a way consistent with Minsky's
intentions.  He has obviously not swallowed the contents whole.  He is clearly
giving a lot of thought to what Minsky actually wrote, and even
misinterpretation can count as serious thought.  I, for one, wish
more people were reading Minsky with Bob's intensity.

lrm5110@tahoma.UUCP (Larry R. Masden) (05/18/89)

From article <3244@tank.uchicago.edu>, by cs_bob@gsbacd.uchicago.edu:

> Take for example, Marvin Minsky's recent claim in "The Society of Mind" 
> that "Mind is what brains do". This effectively reduces one of the greatest
> mysteries of human thought to a question which Minsky believes can be
> answered completely within his own field. Minsky's position is not only 
> arrogant and in violation of one of the very principles Minsky pushes at 
> us in  "Society of Mind" (namely, that we should be skeptical of simple 
> explanations); it also betrays a terrible ignorance of the nature and history
> of the philosophical question "What is Mind?" It seems to me that Minsky
> wishes to use AI to accomplish with Epistemology what Skinner tried to
> do to Psychology with Behaviorism. In both cases, we see an attempt to put
> a discipline of rational inquiry on a 'sound, scientific basis' by simply
> eliminating from consideration all of the problems which are not easily
> addressed empirically. 

I couldn't agree more.  Here are my views on the subject:

In my opinion there is no theory in current science that explains the 
emergence of consciousness (self awareness, "I") in complex physical
systems.  In my opinion there is no scientific definition of what 
consciousness is.  Sure, there are medical definitions of consciousness
that serve a useful purpose, but these don't address the issue of
what consciousness is and how it works.

Say we built a machine with complexity near that of the human mind.  We
would have no scientific reason to say the machine was conscious even if
it claimed it was.  We could explain the operation of the complex machine
in terms of cause effect relationships just as we do for simple machines.
The complex machine's "claim" of consciousness could be explained 
mechanically (lengthy explanation) just as the operation of today's
computers or simple machines can be explained mechanically.  We don't claim
that the simple machines are conscious.  With no theory to state otherwise,
we have no reason to claim that the complex ones are.

A person's state of consciousness is clearly dependent upon the physical 
state of their brain.  Damage to the brain's physical structure can cause
loss of consciousness.  Sleep is another example of the physical state
of the brain affecting a person's state of consciousness.  However, these
and similar examples do not prove that consciousness results solely from
the physical processes of the brain.

If a new theory comes along that explains how consciousness emerges in 
complex physical systems, great!  But until then all bets are off.

Following is an alternative model for consciousness that is a little wild
but will serve the purpose of this discussion.  It is a seed for further
"brainstorming," if you will.

Consciousness could arise from a system that exists in a physical domain
that we have not yet learned to observe.  The system could be constantly
monitoring the operation of what we now know as the physical brain.  It
could be affected by the operation of the brain, but not affect the operation
of the brain (a one-way link).

Obviously, this model will be just that until someone learns how to 
observe the new domain.  Some philosophers might suggest that the
as yet unobserved domain is the spiritual domain, and that consciousness
is the human "spirit."  A two way link between the spirit and the brain
(i.e. brain can affect spirit and spirit can affect brain) might be analogous
to free will.  But if a two way link exists, we would expect to see events
in the physical brain that are unexplained by our understanding of physics.
Future experimentation may show that the brain does operate according to
our understanding of physics, or the brain may forever elude physical 
modeling.

It may be that an explanation of consciousness is theoretically beyond the 
reach of all experiments, like uncertainty in quantum physics.  Whatever it
is, the study of consciousness is bound to be complex and interesting.  In 
my opinion "Mind is what brains do" just doesn't measure up.
-- 
Larry Masden       	      Voice: (206) 237-2564
Boeing Commercial Airplanes   UUCP: ..!uw-beaver!ssc-vax!shuksan!tahoma!lrm5110
P.O. Box 3707, M/S 66-22
Seattle, WA  98124-2207

wallingf@cpsvax.cps.msu.edu (Eugene Wallingford) (05/19/89)

In article <408@tahoma.UUCP> lrm5110@tahoma.UUCP (Larry R. Masden) writes:
 
>In my opinion there is no theory in current science that explains the 
>emergence of consciousness (self awareness, "I") in complex physical
>systems.  In my opinion there is no scientific definition of what 
>consciousness is.  Sure, there are medical definitions of consciousness
>that serve a useful purpose, but these don't address the issue of
>what consciousness is and how it works.
>
>Say we built a machine with complexity near that of the human mind.  We
>would have no scientific reason to say the machine was conscious even if
>it claimed it was.  We could explain the operation of the complex machine
>in terms of cause effect relationships just as we do for simple machines.
>The complex machine's "claim" of consciousness could be explained 
>mechanically (lengthy explanation) just as the operation of today's
>computers or simple machines can be explained mechanically.  We don't claim
>that the simple machines are conscious.  With no theory to state otherwise,
>we have no reason to claim that the complex ones are.

Nor do we have a scientific reason to say that you and I are
conscious (as I suspect you are saying).  We accept this loose
modifier and go about our business, assuming that people who
act certain ways in the world are conscious.

Adopting this view essentially requires us to drop the use of the
term "conscious" from our active (scientific) vocabulary, which
includes with reference to human beings.  The question then becomes:
What is it about such a complex system (i.e., a computer of sufficient
complexity) that distinguishes its actions and interactions in the
world from "our own"?  *Is* there any reason to distinguish them at
all??

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (05/19/89)

In article <3244@tank.uchicago.edu> cs_bob@gsbacd.uchicago.edu writes:
>
>Carl Jung once remarked of Freud that while he was a brilliant man, he was,
>after all, only a medical doctor and suffered from the disadvantage of
>'not knowing enough' about the philosophy of Mind. I think the same claim
>can be made of a number of prominent figures on today's AI horizon.

People in glass houses?

>Take for example, Marvin Minsky's recent claim....

[Several paragraphs of Minsky bashing omitted]

Ok, so you don't like Minsky. Now let's hear about the other prominent figures
on the AI horizon you mentioned.

>The fact is that Minsky succeeded [in killing perceptrons]
>because the large majority of AI researchers
>wanted Minsky to be right, not because he was.

Really? But in any case he was right about perceptrons, wasn't he?

>Similarly, most AI researchers
>have a vested interest in the possibilities of "Strong AI", and are therefore
>unwilling to question it. 

Most AI researchers find the categories of "strong" and "weak" AI to be too
clumsy, neither describing their position. 

>I am sure, for instance, that many people are
>quite happy with the proposition, "Mind is what brains do". Once
>one understands it, it seems plausible enough. More importantly, it's very
>convenient to a person who wants to go about his business of reproducing
>Mind within the context of a machine. For if he were not to believe that
>the brain, as a physical system, was a necessary and sufficient condition
>for Mind, he might have second thoughts about devoting his life's work to
>trying to reproduce (or simulate) Mind on a computer.

Straw man. I'm not the only AI researcher who supposes it impossible to
reproduce minds in computers; I think computers are going to come in very handy
when putting the robot's brain together, however.

>The problem arises when we realize
>that we can also observe the actions of our brains in another way - from
>the inside.

Speak for yourself. I don't realize this, and experiments have shown that
people can be consistently mistaken about the processes that go on in their
minds, let alone their brains.

>That my 'selfness' is real is, to me, beyond dispute.

Quite. The problem is that other people do dispute it, and your personal
conviction is not a persuasive argument.

>It seems to me the height of absurdity to claim that self is an illusion,
>which is essentially what Minsky would have us believe.

Yes, I'm getting the hang of what you believe; the problem is that you haven't
given me any reasons for believing it too.

>I realize that questions such as this don't matter much to the daily 
>activities
>of AI researchers. 

There are plenty of AI researchers who don't agree with that.

>I have no doubt that AI will make substantial and
>significant progress without ever having to broach the question, 
>"What is Mind?",

 - or that either.

>but I also feel that, eventually, someone with a perspective in both AI
>and epistemology is going to have to address it,

Check out the titles published by Bradford Books, MIT Press, for starters. You
may even find that some of the posters to this group have that background, and
have published papers addressing the problem!
-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (05/23/89)

In article <3052@cps3xx.UUCP> wallingf@cpsvax.UUCP (Eugene Wallingford) writes:
>Nor do we have a scientific reason to say that you and I are
>conscious (as I suspect you are saying).
>
>Adopting this view essentially requires us to drop the use of the
>term "conscious" from our active (scientific) vocabulary

So how do we distinguish between the mental states of "asleep" and
"awake"?

Mental states are an established part of psychological research, with
hypnosis providing hours of fun for those who wish to draw sharp
distinctions between them.

Before we commit all the work here to the dustbin, can we please read
(some of) it :-)
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

matt@nbires.nbi.com (Matthew Meighan) (05/25/89)

In article <3244@tank.uchicago.edu> cs_bob@gsbacd.uchicago.edu writes:

>
>Take for example, Marvin Minsky's recent claim in "The Society of Mind" 
>that "Mind is what brains do". 
>
>  [ lots of other very good stuff deleted ]
>

Clearly, brain is what minds do.  

-- 

Matt Meighan          
matt@nbires.nbi.com (nbires\!matt)