[comp.ai.philosophy] The Bandwidth of the Brain

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/15/91)

In article <26250@uflorida.cis.ufl.EDU> bougie@pine.circa.ufl.edu writes:
>  Basically Reddy shows how we use a metaphor 
>to talk about language which doesn't fit the phenomenon very well and 
>leads us into a lot of false analogies.  We speak about "finding meaning 
>*in* words, packing too much/little meaning into a sentence, getting 
>meaning *out of* a phrase..." etc etc (the appendix is impressive).  We 
>tend to think of language as little boxes that we fill with meaning, and 
>send down a conduit to a receiver who then unpacks the boxes.  This 
>leads to the assumption that if I don't find any meaning in the box, 
>it can only be the fault of the sender!  
>
I think this metaphor (along with its contingent dangers) may be readily
extended from the concept of "language" to that of "knowledge."  (This
discussion may actually be more appropriate on comp.ai.philosophy, so
I am cross-posting this article.)  There seems to be an underlying theme
in Newell's Knowledge Level Hypothesis (which is expanded upon at some length
in Pylyshyn's COMPUTATION AND COGNITION) that knowledge is some kind of "stuff"
which we can use to fill "vessels" of some sort or transfer from one vessel to
another using a Reddy-like conduit.  Ultimately, Newell and Pylyshyn (not to
mention others, such as Fodor) argue that it either IS or, in a weaker case, CAN
BE MODELED BY symbolic expressions.  However, what if this whole "stuff"
metaphor is as misplaced for knowledge as it is for language?  This would
knock a fair amount of life out of Newell's Physical Symbol System Hypothesis
and all that follows from it (such as the Knowledge Level Hypothesis).

>        Reddy sees this as not only mistaken, but harmful. People often 
>find it difficult to talk about language at all without using the 
>Conduit Metaphor. He proposes the "Toolmaker's metaphor" as a better 
>description:  more like sending a *blueprint* for reconstructing 
>meaning, than sending *meaning* itself. 
>
>        Thus, the bulk of the *message* is not *sent*, but constructed by 
>the hearer from relatively very few bits that are actually sent along 
>the conduit. Inferences make up an enormous part of the meaning.
>
However, if we try to think about inferences in terms of a logical calculus, we
are back to the same symbolic "stuff" I am trying to get away from!  Even
connectionism, while some would stand it in opposition to the Physical Symbol
System Hypothesis, may ultimately be reduced to some sort of "stuff-like"
representation, where the "stuff" is now points of convergence, rather than
symbolic expressions.  In other words, connectionism may be able to transcend
the symbols without escaping the "stuff" metaphor.

The only work I know which has tried to pull away from this metaphor is that of
Gerald Edelman and his colleagues.  The automata which Edelman has tried to
design treat perceptual categorization as a lowest-level task to be achieved.
However, Edelman's categories are not static entities, corresponding to the
sorts of local maxima sought out by a connectionist architecture.  Instead,
they are far more dynamic.  Memory is not a matter of accumulating more
"stuff."  Rather, it is a capacity for categorization and REcategorization,
the latter being his way of expressing the sort of processing involved when
confronted with stimuli one has encountered before.  This is very much in line
with the sort of constructive metaphor proposed by Reddy;  but Edelman carries
it to a much greater extreme, ultimately arguing that it lies at the heart of
all cognitive activity.

=========================================================================

USPS:	Stephen Smoliar
	5000 Centinela Avenue  #129
	Los Angeles, California  90066

Internet:  smoliar@vaxa.isi.edu

"It's only words . . . unless they're true."--David Mamet

jmc@DEC-Lite.Stanford.EDU (John McCarthy) (01/15/91)

I think that writing about AI or philosophy or cognitive science
 in terms of metaphor is a big mistake.  It allows people to write
without clear meaning.  The debates about which metaphors are
applicable are almost meaningless.  There are several approaches
to AI, some based on neurophysiology, some on imitation neurophysiology,
some on psychological experimentation, some on formalizing the
facts about the common sense world in mathematical logical languages
(my own course).  There is no argument that any one of them
can't possibly work.  Therefore, AI research is a race among the
various approaches.  The arguments about metaphor are a game for
non-participants in the actual work.

Buy my book.

ggm@brolga.cc.uq.oz.au (George Michaelson) (01/15/91)

[re-blocked to suit interpolated comments]

jmc@DEC-Lite.Stanford.EDU (John McCarthy) writes:

>There is no argument that any one of them can't possibly work.  

...You mean AI workers don't disagree about the relative merits of
their models in pejorative terms? amazing! 

Outside of the field, I suspect scepticism remains that ANY of them can
possibly work.

>Therefore, AI research is a race among the various approaches.  

- a Red Queen's race perhaps? 

>The arguments about metaphor are a game for
>non-participants in the actual work.

In my own case, undeniably true!

I think they also point to the weakness of available models. If nothing
else, it's an overspill of ideas from the lofty heights of the castle
to the rude huts of the commoners below. When the metaphors start
becoming testable and/or (dis)provable theorems, then things will be a
little more solid perhaps.  

If you're asserting that behind this peat-bog of metaphors lies a more
solid ground of theory I'll sink back into the mud from whence I came.
Like all creatures of the (CS) slime, I tend to remain skeptical of
these (AI) attempts to walk on solid land.

	-George
-- 
	G.Michaelson
Internet: G.Michaelson@cc.uq.oz.au                     Phone: +61 7 365 4079
  Postal: George Michaelson, the Prentice Centre,
          The University of Queensland, St Lucia, QLD Australia 4072. 

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/16/91)

In article <JMC.91Jan14145958@DEC-Lite.Stanford.EDU> jmc@DEC-Lite.Stanford.EDU
(John McCarthy) writes:
>
>Buy my book.


Wouldn't you be happier if we were to READ your book, John?  (Man does not live
by royalties alone!)  In any event, my current "addiction" to metaphor will not
dull my curiosity about what you have to say.  Would you be kind enough to
provide us all with publication details?


jmc@DEC-Lite.Stanford.EDU (John McCarthy) (01/16/91)

I never thought you'd ask.  Formalizing Common Sense, Ablex, 1990 (actually
1991).