[mod.ai] minds and minsky

uncle@ucbvax.Berkeley.EDU@ucsbcsl.UUCP (01/15/87)

  [The following message is in the style of comments on comments
  on quoted text that was common on the Phil-Sci list at MIT.
  While I recognize that the potential for such annotation is
  a major advantage of online discussion, I hope that members
  of the list will show restraint in order to keep the traffic
  volume down.  A well-reasoned argument is preferable to several
  quoted paragraphs and a one-line comment.  -- KIL ]


QUESTIONS (-->> ...) re: QUESTIONS (> ...) re: M.M.
ANNOTATIONS TO THE ARTICLE:
From harnad@seismo.CSS.GOV@mind.UUCP Sat Feb  5 22:28:16 206

On mod.ai, MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU (Marvin Minsky) wrote:

>	the phenomena we call consciousness are involved with our
>	short term memories. This explains why... it makes little sense to
>	attribute consciousness to rocks.

I agree that rocks probably don't have short-term memories. But I
don't see how having a short-term memory explains why we're conscious
and rocks aren't. In particular, why are any of our short-term
memories conscious, rather than all being unconscious?

-->> ?? maybe because `consciousness' has something to do with
	discriminating changes in a temporal sequence of events 
	whose time scale is more like the duration of a heartbeat
	than the duration of a whole life or an eon ??

The extracts from Marvin Minsky's new book look as if they will be
very insightful correlative accounts of the phenomenology of (i) subjective
experience and (ii) the objective processes going on in machines that can be
interpreted as analogous to (i) in various ways. What none of the
extracts he has presented even hints at is (a) why interpreting any of
these processes (and the performance they subserve) as conscious is
warranted, and (b) why even our own processes and performance should
be conscious, rather than completely unconscious. [...]
[...]
(2) Before claiming with conviction that one has shown "why" a certain
performance is accomplished by a process that is validly interpreted
as a conscious one, one should indicate why the very same performance could
not be accomplished by the very same process, perfectly UNconsciously
(thereby rendering the conscious interpretation supererogatory).

>	although people usually assume that consciousness is knowing
>	what is happening in our minds, right at the
>	present time, consciousness never is really concerned with the
>	present, but with how we think about the records of our recent
>	thoughts... how thinking about our short term memories changes them!

-->> ?? I think I agree with `>'; is he not saying here something about
	discriminating changes in a temporal sequence on a short
	time scale ??

[...]
My question concerns how the memory hypothesis -- or any other --
accounts for the fact that what is going on there in real time is
conscious rather than unconscious; how does it account for my
EXPERIENCE of pain?) And once that's answered, the second question is
(2) why couldn't all that have been accomplished completely
unconsciously? [...]

-->>  ?? Hmmmmm:  THINKING and FEELING or rather:
	COMPUTING and FEELING  .  As a marginal intelligence,
	artificial or otherwise, I can only grasp at a straw such
	as the organizational/functional notion `goal'.  Experience and
	Feeling are functions which evaluate elements of the short-term
	temporal sequence of events/representations with respect
	to `goals'.  Hmmmmm, do planaria think?  THE BIG QUESTION
	THAT DISTURBS ME IS MORE OR LESS IN LINE WITH THE QUESTIONER
	ABOVE:
		WHY SHOULD MATTER THINK?????? This, of course
	has nothing to do with the real universe where some
	material aggregates DO think;
	however, if the universe wants to blow up,
	convert itself into successive populations of stars etc
	etc, and then implode,  why does it need to have us think
	about it?

[...]
[Let me also add that there are good reasons why it is called the
"mind/body" problem and not the "mindS/body" problem, as Marvin Minsky's
tacit pluralizations would seem to imply. The phenomenological fact is that,
at any instant, I (singular) have a toothache experience (singular).
Having this (singular) conscious experience is what one calls having a
(singular) mind. Now it may well be that one can INFER multiple processes
underlying the capacity to have such singular experiences. But the processes
are unconscious ones, not directly EXPERIENCED ones, hence they are not plural
minds, properly speaking.

-->> ?? HOLD ON, aren't you indulging in a kind of, what is the word,
	psychologism, based upon a linguistic prejudice?  The
	get-food-subsystem doesn't go through a speech-act trip
	ending in the formulation of the well-formed English
	phrase `i'm hungry', but it knows what it wants and
	communicates its wishes by making `OUR' stomach hurt ??

[...]

>	Our brains have various agencies that learn to
>	recognize - and even name - various patterns of external sensations.
>	Similarly, there must be other agencies that learn to recognize
>	events *inside* the brain - for example, the activities of the
>	agencies that manage memories.  And those, I claim, are the bases
>	of the awarenesses we recognize as consciousness... I claim that to
>	understand what we call consciousness, we must understand the
>	activities of the agents that are engaged in using and changing our
>	most recent memories.

You need an argument for (1) why any process you propose is correctly
interpreted as the basis of 1st-order awareness of anything --
external or internal -- rather than just a mindless process, and (2)
why the functions you describe it as accomplishing in the way it does
need to be accomplished consciously at all, rather than mindlessly.

-->> ?? But (some) matter DOES think and we know that! Explaining
	WHY (some) matter should be conscious is like explaining why
	the universe is as it is.   As for the question of HOW
	it is conscious, it seems quite plausible that
	evolution changed MOTILITY into MOTIVATION and when
	motivation got hold of adequate methods and representations,
	I think, therefore Clyde is an elephant! ??

[...]

>	When people ask, "Could a machine ever be conscious?" I'm often
>	tempted to ask back, "Could a person ever be conscious?"
>	...we can design our new machines as we wish, and
>	provide them with better ways to keep and examine records of their
>	own activities - and this means that machines are potentially capable
>	of far more consciousness than we are.

-->> Sounds plausible to me!

[...]

>	To "notice" change requires the ability to resist it, in order
>	to sense what persists through time, but one can do this only
>	by being able to examine and compare descriptions from the recent past.
-->> ?? Yes! ??
	
Why should a process that allows a device to notice (respond to,
encode, store) change, resist it, examine, compare, describe, remember,
etc. be interpreted as (1) a conscious process, and (2) why couldn't it
accomplish the exact same things unconsciously?

-->> ?? We already traversed this semantic loophole!!! ??
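
	[The point at issue -- that "noticing change by comparing
	descriptions from the recent past" can be a purely mechanical
	process -- can be made concrete with a toy sketch.  Everything
	below is an illustrative assumption, not anything from Minsky's
	text: a device keeps a short-term memory of recent descriptions
	and flags change by comparison, with no conscious interpretation
	anywhere in the loop.  -- KIL ]

```python
# Toy sketch: a mechanical "change noticer" with a short-term memory.
# All names here are illustrative assumptions, not Minsky's terms.
from collections import deque

class ChangeNoticer:
    def __init__(self, span=5):
        # Short-term memory: descriptions from the recent past.
        self.memory = deque(maxlen=span)

    def observe(self, description):
        """Compare the new description with the most recent one,
        report whether anything changed, then record it."""
        changed = bool(self.memory) and description != self.memory[-1]
        self.memory.append(description)
        return changed

# A sequence of "descriptions" arriving in time.
noticer = ChangeNoticer()
events = ["red", "red", "blue", "blue", "green"]
flags = [noticer.observe(e) for e in events]
# flags -> [False, False, True, False, True]
```

Nothing in this loop requires (or even suggests) a conscious interpretation, which is exactly the force of question (2) above: the same performance is accomplished mindlessly.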

I am not, by the way, a spokesman for the point of view advocated by
Dreyfus or by Searle. In asking these pointed questions I am trying to
show that the mind/body problem is a red herring for cognitive
science. I recommend methodological epiphenomenalism and performance
modeling as (what I believe is) the correct research strategy. Instead
of spending our time trying to build metaphorical perpetual motion
machines, I believe we should try to build real machines that capture our
total performance capacity (the Total Turing Test).

-------

-->> ?? methodological epiphenomenalism?? I don't know the
	exact significance of that as a Flachausdruck, but perhaps
	M.M. is describing just the epiphenomenon you are looking
	for? ??