sandyz@ntpdvp1.UUCP (Sandy Zinn) (04/11/90)
> (Sandy Zinn) writes:
> >... the motivations behind behaviors such as imitation,
> >tearing wings off flies, hitting your baby brother, and eventually posting
> >articles, are actually meta-processes emerging from the plexus of physio-
> >logical exigency X organizing processes X environmental patterns.

> (Ken Presting) writes:
> (Whose baby brother?  *I* was a sweet child, a model of gentle decorum...)

Ahh, the most dangerous kind!!

> Implementationism is cheap (I can get it for you wholesale) but it only
> works between abstractions.  The three plaits of the plexus are all
> within the organism/environment biological abstraction, so Imp'ism is
> irrelevant for this.

Whoa!  I obviously still do not understand Imp'ism.  If you don't use it
here, where do you use it?  (An Imp. w/o a home == Obvious Troublemaker!)
What kind of abstractions is it good for?

I have a very broad and deep definition of abstraction: representation of
pattern/info in a different symbolic system.  I don't consider that all
abstractions are fully homomorphic; they can be transforms, or partial
mappings.  Lots of biological examples of those.  If you want to reserve
"abstraction" only for full homomorphs, for formalized systems, then let's
Capitalize it to indicate this more pristine use.

> within the organism/environment biological abstraction, so Imp'ism is
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PUBLIC SERVICE ANNOUNCEMENT: I want to stress that the organism/environment
is an Abstraction when we *think* it, but it *operates* as a reality.
It's ecodisaster, blood and death not to realize that.  And I don't think
we do, not deeply.

> To use a formal metaphor, we need an algebra for
> squirms.  If squirms turn out to be at all like vectors, which I think
> is likely, then we have plenty of lovely linear and nasty non-linear
> algebra to haggle with.  (By "squirms" I mean trajectories of a physical
> system through phase space)

OK here.
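[Presting's "algebra for squirms" can be given a minimal numerical sketch.
Everything below is an illustration of mine, not anything the posters wrote:
the system, constants, and function names are invented.  A damped oscillator's
state (position, velocity) evolves by a fixed linear map, so its phase-space
trajectories really do add and scale like vectors.]

```python
import numpy as np

# A "squirm": the trajectory of a physical system through phase space.
# Here the system is a discretized damped oscillator, chosen only
# because its update rule is a single linear map A.
dt, k, c = 0.01, 1.0, 0.2              # timestep, spring constant, damping

A = np.array([[1.0,     dt          ],  # x <- x + v*dt
              [-k * dt, 1.0 - c * dt]]) # v <- v - (k*x + c*v)*dt

def squirm(state, steps=500):
    """Trace the trajectory (the 'squirm') starting from a given state."""
    path = [state]
    for _ in range(steps):
        state = A @ state
        path.append(state)
    return np.array(path)

s1 = squirm(np.array([1.0, 0.0]))
s2 = squirm(np.array([0.0, 1.0]))
s3 = squirm(np.array([1.0, 1.0]))      # start from the superposed state

# The "lovely linear algebra": the squirm of a sum is the sum of the squirms.
assert np.allclose(s3, s1 + s2)
```

(The nasty non-linear case is exactly where this superposition property
fails, which is presumably where the haggling would start.)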
> I have a big problem with the idea of "meta-" processes.  If this is
> just metaphor, then it's OK by me, but if it's a use of logic in a model
> of the mind there might be trouble.  I worry a *lot* about homunculi -
> we want to explain how it is that people think, and if we put logical
> concepts into the explanation, it'll be hard to avoid begging the
> question.  Hierarchies as such are no problem.

Okay, I'll go with "meta" == hierarchy -- I *currently* see no need to do
otherwise.  I've never intended *homunculi*.  I talk primarily in
metaphor, so you can help us out here by cleaning up terms.

> Would "determined by" work as a substitute for "emerging from" in the
> statement above?  Please?  How about "made of?"  "Implemented in?"

No, *not* "determined by".  Too many bad implications for me.  How about
"abstracted from", or "which are transforms of"?  For the sake of your
*gentle decorum*, I'll give up "emergent", except when I'm in that gadfly
mood...

> Here's an improved version:
>
>    <Abstract thought is *implemented* as:>
>    Comfortable squirms, that always fit some part of every other
>    squirm.  That's on the subjective, private side.  On the objective,
>    public side, you can make your pencil squirm out pretty squiggles
>    that other people can see, and if they really want to, they can make
>    their squamae squirm as comfortably as yours do.

Ah, that's better.  Comfort, eh?  I like it.  I really like it.

> I've tried to emphasize the physical interaction of the squirms, and
> eliminate the suggestion that squirms "observe" each other.  I also
> want to emphasize the issue of motivation in all communication.  I
> think it is very important to recognize that understanding someone else
> can be affected by factors that are called "emotional", at least in
> everyday speech.
>
> I believe that communication does not occur in the absence of an
> emotional interaction (real or imagined).  This is a problematic
> assertion, I realize.

Maybe for some.
The biggest problem I have here is that you have tried to distinguish
*real* from *imagined* emotional interaction.  There is no make-believe
emotion.  It's a transform of real interactions.  Now, whether those
interactions are comfortably isomorphic is another story....

@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Sandra Zinn              |  "The squirming facts
(yep these are my ideas  |   exceed the squamous mind"
 they only own my kybd)  |      -- Wallace Stevens
sandyz@ntpdvp1.UUCP (Sandy Zinn) (04/11/90)
> > (Sandy Zinn) writes:
> >>If one assumes that infants have rudimentary patterns of squirms which
> >>direct their interaction with the world, then "errors" accumulate through
> >>the use of comparator functions on input squirms.
>
> (Ken Presting) writes:
> Bateson's model is probably flexible enough to do the job, but the notion
> of natural systems making "errors" is unsettling to me.  We have agreed
> that the knee-jerk reflex makes perfect sense in its natural setting.  I
> have argued that *every* natural phenomenon makes sense in its natural
> setting.  This would apply to biological phenomena as well as to physical.

With your emphasis on *setting*, which I am set on myself, I understand
your being unsettled.  But fear not, my position is more subtly bizarre
than this!

> "Error" is one of the normative concepts, and I think AI is trying to
> explain how a system which always makes perfect sense in one system of
> descriptions can be said to make mistakes in another.

In fact, this is a pretty good description of intelligent activity in
general.

> For example, a
> program always does what its instructions specify - in the frame of
> reference of the CPU, there are no software bugs.  Even invalid
> instructions are handled according to plan, with a program check or trap.
> That's what's supposed to happen, and it does.  But the program is also
> part of a system of descriptions defined by its specifications.  *That*
> is the frame of reference in which the term "bug" has meaning.

Ah, that is the sort of frame of reference I was assuming.  Only, in this
case, the program specs are in the infant, and the "bugs" are in the
operation of the environment.  Sometimes, of course, there *are* errors
in the specs, which makes the detection of bugs problematic...

> Also, I'm not sure how to make "expectations" innate.  Certainly there is
> no problem with innate reflexes or instinctive behavior, such as suckling,
> or crying when hungry.
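[Presting's point that "in the frame of reference of the CPU, there are no
software bugs" can be made concrete with a toy sketch.  The function and
its "spec" below are invented for illustration; they are not from the
thread.]

```python
# In the interpreter's frame of reference, nothing below is wrong:
# every operation is performed exactly as instructed.  The "bug" exists
# only relative to a specification stated in a different vocabulary.

def average(xs):
    """Spec (hypothetical): return the arithmetic mean of xs."""
    return sum(xs) / (len(xs) - 1)   # executes flawlessly, as written

# Against the spec "mean = sum/len", this is a bug; against the CPU,
# it is just another correctly executed division.
print(average([2, 4, 6]))   # prints 6.0, though the spec demands 4.0
```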
> Would you say that an error is detected when
> sucking on a pacifier does not eliminate hunger pangs?

Yes, I would, I do, and I did.  A fine leap on your part!

> I might go
> along with this, actually.  I would point out that the "expectation"
> in this case is defined by all three plaits of the plexus, and is not
> strictly epistemic.  (It's implemented, I would say!)

I might go along with this, potentially.  (I did.)

> "Differences" bother me less than "errors", because there is no normative
> judgement in saying that two events are different.  But before two
> things can be compared, they must each be identified.  Depending on how
> it's used, "difference" can be intentional...
> Detecting difference can depend on *reference*.

I mean different = divergent from the specs.  Intention unintended.

> In the suckling case, we can describe the infant as a biological system
> which implements all its expectations, not with a squirm for each one,
> but all of them in the structure of all its squirms.  Unsatisfied
> expectations need not be individuated to add to the squirming.

No.  This is too simple.  I don't think there is a nice set of individual
squirms, or carrying circuits, or synaptic wave-form processors, for each
item in the specs.  But neither is it global wiggling: there's a can of
worms here, but it's not in this neophyte mind, but rather in our neophyte
relationship to our wanting to know.  (She goes for a can opener.)

* * * * * * * * * * * * * * * * * * * *

* > (Stephen Smoliar) writes:
* > >Where Minsky may depart from Bateson and Pribram, however, is in his
* > >desire to push the processing of differences to a "meta-level:"
* > >
* > >     The ability to consider differences between differences is
* > >     important because it lies at the heart of our abilities to
* > >     solve new problems.  This is because these "second-order-
* > >     differences" . . .
* >
* > Here's another apparently logical concept appearing in a model of the
* > mind.
* > >I think this comes very close to Edelman's model of memory as
* > >RECATEGORIZATION
* >
* > Recategorization sounds more like *learning* than *memory*.

* * * * * * * * * * * * * * * * * * * *

Now, here, HERE, is where I think our wrestling mat lies.  A difference
that makes a difference (Bateson's rough & tender phrase), and Minsky's
differences between differences, may be logical concepts, but have mercy!
These are also ideas that unfold a fantasy.

Here's the scenario.  Streams of information coming in, striped, let's
say, by perceptual physiology.  Brain stem collation centers add a few
more stripes, in a different color.  The info gets split into several
different channels, so that different sets of squirms get bounced off of
it, overlaying the stripes with various grids.  (geez this is risky!)
The squirms that get bounced off are *memories of differences that make
a difference* -- applying the grids isolates the difference to a striped
square -- you've got analog info going digital -- and that digitization
is the breaker switch for selection.  The "paths" of selection are where
the specs are encoded -- and some of the bounces of the selections get
fed into squirm-bounce-generators, so that they become filters for the
next round...plexus plaits, plied!

Now, differences of differences -- ah, here's the fantasy.  Suppose one
day a new squirm bounce sector arose, one which had the ability to reach
in on the path of nearly any little striped square, and *magically* turn
it off or on, regardless of pre-existing relations in the braid.
Probably it would divert half the stream in the little striped square to
its own lair, leaving the rest to flow.
Of course, rather than targeting one little square, it would target some
pattern of squares, maybe having some generic patterns for its initial
stock, to give it a better than average chance of success at diverting
patterns that would -- I hesitate to say "make sense", because then of
course I presuppose some context for meaning, but there's room here I
think, and maybe a need, for some prior organization -- but eventually
these diversions are going to refine their technique, and what we have,
ladies and gentlemen, is selections of categories.  All squirm-made.
Now the child can juice up that renegade sector, let loose a bounce, and
say of the diverted squirms, "This is me.  That other stuff ain't."

Pure fantasy.  But the child gets on in the world, bouncing the bounces
off the bounces, squealing with delight when the bounced diversions have
real similar patterns.  AHA!  But I didn't know to go looking there
except I knew I needed a difference of differences.  A hierarchy.  A
metadifference.  Sets of selections to select.  A normative or two.
Concepts as binaries.  Binaries as concepts.  All our maps select
features, contain clues.  So all our discussion, with all our different
terms, and agendas, creates real movement.  (Fantasy always has moved us.)

@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
Sandra Zinn              |  "The squirming facts
(yep these are my ideas  |   exceed the squamous mind"
 they only own my kybd)  |      -- Wallace Stevens
(except tonite)          |      -- me
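[Minsky's "second-order differences" and Sandy's "difference of
differences" have at least one very plain numerical analogue.  The sketch
below is a gloss of mine, not anything the posters wrote; the sequence and
names are invented for illustration.]

```python
# First-order differences register change in a stream; second-order
# differences register change in the *pattern* of change itself --
# a crude, literal reading of "differences between differences".

def diffs(xs):
    """First-order differences of a sequence."""
    return [b - a for a, b in zip(xs, xs[1:])]

signal = [1, 2, 4, 8, 16]   # an accelerating stream of "input squirms"
first = diffs(signal)       # the changes: [1, 2, 4, 8]
second = diffs(first)       # the changes in the changes: [1, 2, 4]
print(first, second)
```

The same operator applied to its own output is the simplest possible case
of a hierarchy of comparators: each level detects differences in the level
below it.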
kp@uts.amdahl.com (Ken Presting) (04/13/90)
In article <369@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>>(Ken Presting wrote:)
> ... to my old "identity of incomparable categories", I'd add:
>
>> rules = representation = processes = perception = methods = models
>
>The Dotland Identity.  [ which has something to say but leaves a lot
>unsaid ]

Hmm.  Sooner or later, we have to start taking our algebra seriously, or
we'll end up saying everything ...  Dotty allegories are fine for now :-)

> . . . Bateson argues, and I agree with him, that dream
>metaphors are not *result* but *source*.  Insofar as Mind is representa-
>tion, it is a metaphor for whatever is being represented.

This account of how representations can exist in the mind is IMO the
only possible coherent view.  (It needs to be made a little more coherent
itself, of course.)

If Quine and Davidson are to be taken seriously on the issue of
indeterminacy of translation, inscrutability of reference, and the holism
of meaning, then we cannot build a machine which learns a human language
in the way humans do unless REFERENCE IS EXCLUDED FROM REPRESENTATION.

This means: no concept training, no semantic nets, and NO FRAMES!

(Minsky has the gall to describe _The Society of Mind_ as "neo-Freudian".
But perhaps that should be excused as an easily explainable parapraxis:
Freudian :: Froodian :: Fodorian!)

>Bateson says:
>
>   ...the subject matter of primary-process discourse is different from
>   the subject matter of language and consciousness.  Consciousness talks
>   about things or persons, and attaches predicates to the specific things
>   which have been mentioned.

The easy identification of consciousness with language here is entirely
legitimate within the Freudian framework, but can be very confusing.
Experience as a whole must not be identified with consciousness alone,
and conscious experience itself is not exhausted by linguistic events.
Perception and imagination are of course structured, but often very
differently from language.
> In primary process the things or persons
> are usually not identified, and the focus of the discourse is upon the
> *relationships* which are asserted to obtain between them.
>
>I suggest that this primary process is Dotland, is Implementationism.

Homomorphisms of logical structure work much better here.  Impl'ism is
about the relation between complete abstractions (ie whole formal
systems).  But within an abstraction, each predicate, singular or
relational, also has a logical structure which is determined by the
sentences in which the predicate is used.

> A metaphor retains unchanged the relationship which it "illustrates"
> while substituting other things or persons for the relata.
>
>Gee, this sounds like a Normative Property!  (or do I seriously mistake
>you?)

Pretty close!  Since the logical structure of a term is independent of
(ie invariant over) any particular semantic mappings or interpretations,
*any* term or concept can function in a metaphor.  Or primary process.

Properties are normative, intentional or merely descriptive based on the
rest of the abstraction in which they are being defined, and on how that
abstraction is supplied with a semantics:

  Perceptual mapping - descriptive, eg "It's hot today"

  Interpretive mapping - intentional, eg "She thinks it's hot today"

  Mapping onto preferences - normative, eg "It's a good day for
  volleyball!"

  Fantasized mapping - abstract, eg "PV = nkT  &  V = 4(pi)r**3/3"

  Ritual - no semantics, just part of that squirmin' Way of Life, eg
  "Wanna play a language game?  I'll go first: SLAB!, no, THERMOMETER!"
  (cf Wittgenstein, _Philosophical Investigations of Primary Process_)

> Primary process is characterized (e.g., by Fenichel) as lacking
> negatives, lacking tense, lacking in any identification of
> linguistic mood (i.e., no identification of indicative, sub-
> junctive, optative, etc.) and metaphoric.
>
>Dreams.  Fantasy.  Ritual.  The relationships just ARE, period.

NoPunctuation!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

>relationships are primarily iconic, or analogic: it is a *pattern*
>which is represented, a style of relationship, if you will.  The
>digitalization of information comes only at the level of language.
>Logic, a set of digital relationships, is imposed on dreams.

000 000 000 00 0 0 # 0 0 # 0 # 0 # 0 # 0 # 0 # #
010 10 10 10 10 10 10 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

See - some relationships are *not*.  I have not completely figured out
how to do this non-digitally, but I am engaged in a process of
Elimination.

Question: What is the original normative property?  The leading
hypotheses are "alone", "hard/wet", "breast", and "ca-ca".

>The question becomes, can Edelman's neural-processor code for this kind
>of primary process?  My guess is yes.

I can't make heads nor tails out of Edelman's head.  Itelya, *Heidegger*
is easier reading than Edelman.  But my impression is that Edelman's
reentry is out of phase, and his imagination is degenerate.  This may
just be a negative transference, of course.


Ken Presting  ("OK, Anatomy IS Destiny.  Now put away that cigar")
smoliar@vaxa.isi.edu (Stephen Smoliar) (04/13/90)
In article <43pQ024H9chW01@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
(Ken Presting) writes:
>In article <369@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>>
>> Insofar as Mind is representa-
>>tion, it is a metaphor for whatever is being represented.
>
>This account of how representations can exist in the mind is IMO the
>only possible coherent view.  (It needs to be made a little more coherent
>itself, of course).
>
>If Quine and Davidson are to be taken seriously on the issue of
>indeterminacy of translation, inscrutability of reference, and the holism
>of meaning, then we cannot build a machine which learns a human language
>in the way humans do unless REFERENCE IS EXCLUDED FROM REPRESENTATION.
>
>This means: no concept training, no semantic nets, and NO FRAMES!

Just don't throw out the body!

So, Ken, you're having trouble with Edelman?  Let us see if we can start
off with a relatively straightforward remark in THE REMEMBERED PRESENT:

    "It is not sufficient . . . to start with an idea of mental
    representations in the absence of physical mechanisms."

Ultimately, this is what the man is all about.  He wants to throw
Cartesian dualism out the window and say that the study of RES COGITANS
cannot proceed in isolation from RES EXTENSA.  So go ahead and throw out
your semantic nets and frames, but you had better also turn a critical
eye towards any logical calculus you happen to be carrying around!

In all fairness, I should say that, having survived two Edelman books, I
am not prepared to say he has made his case.  However, I am still
inclined to look where he is pointing.  From my own point of view, this
means trying to figure out whether he is pointing somewhere in the
direction of Minsky's society of mind.

>(Minsky has the gall to describe _The Society of Mind_ as "neo-Freudian".
>But perhaps that should be excused as an easily explainable parapraxis:
>Freudian :: Froodian :: Fodorian!)
Now that you've had your turn to be cute, let us try to clear the air
here.  If you have trouble with Gerry Edelman, I have trouble with Jerry
Fodor.  Fodor certainly knows how to entertain; but after the show is
over, I always seem to come away wondering if he said anything.  Edelman
may not be so entertaining (although I like the jokes he tells at his
lectures); but he never fails to leave me with serious thoughts of
termites gnawing away at foundations my teachers told me were firm.  I
like that in a writer.

One thing seems certain, at least from my vantage point: there is no
"language of thought" in either Edelman's biological theory of
consciousness or Minsky's society.  Indeed, the whole point of a Minsky
society is that it does not NEED a language of thought.  That is because
it does not have any well-formed "thought objects" (or semantic nets or
frames or whatever you want to call them) which need a language for
their manipulation (whatever that manipulation may be)!  All it has are
lots of agents which do lots of things; and when they all work together,
we can say of some object that embodies all those agents that it looks
like that object is thinking.  (You have no idea how hard I am trying to
suppress use of the word "emergent!")

Edelman is in a similar camp.  However, he wants to go a step further
than Minsky.  Minsky is content to deal with agents which are imaginary
machines.  He just wants to impose the constraint that each agent be
very simple, without pinning that constraint down to any specific
criteria.  He figures that we should start by figuring out how to build
societies of these agents before we worry too much about building the
agents, themselves.  That seems to be his agenda.  Edelman, on the other
hand, wants to make sure that everything is grounded in the reality of
biology and physics.  Thus, he requires that his agents be models of
things we find in the body.
This is why he is concerned about issues such as the fact that no two
bodies have exactly the same "neural wiring."

Hopefully, this will help you, or anyone else, who has been trying to
make sense out of either Edelman or Minsky.  If I'm lucky, I shall be
able to refine these remarks and make use of them in the review I'm
trying to write of THE REMEMBERED PRESENT (when I'm not trying to clear
the air on this bulletin board)!  Meanwhile, you can go back to your
Heidegger.  Some researchers feel compelled to burrow as deep as they
can into the primitive physical mechanisms which make us tick.  Others
would rather be high-diggers.  (Sorry, I couldn't resist.  What do you
want on Friday the thirteenth?)

=========================================================================
USPS:     Stephen Smoliar
          USC Information Sciences Institute
          4676 Admiralty Way  Suite 1001
          Marina del Rey, California  90292-6695

Internet: smoliar@vaxa.isi.edu

"Only a schoolteacher innocent of how literature is made could have
written such a line."--Gore Vidal
kp@uts.amdahl.com (Ken Presting) (04/14/90)
In article <372@ntpdvp1.UUCP> sandyz@ntpdvp1.UUCP (Sandy Zinn) writes:
>(Ken Presting) wrote:
>
>> Implementationism is cheap (I can get it for you wholesale) but it only
>> works between abstractions.  The three plaits of the plexus are all
>> within the organism/environment biological abstraction, so Imp'ism is
>> irrelevant for this.
>
>definition of abstraction: representation of pattern/info in a different
>symbolic system.

Your definition corresponds more to *interpretation* or *analysis* than
to "abstraction".  Any symbol system is an example of an abstraction -
musical notation, paper money, a file system namespace, or powdered wigs.
Symbol *tokens*, such as a particular wig, are concrete, of course.

When symbol tokens are manipulated by an organism or artifact, the
manipulation is always a physical phenomenon, and is explainable in
physical (chemical, biological, etc) terms.  An explanation is arrived
at through two steps: analysis and demonstration.

The analysis is a *description* of the process - just a set of sentences
which (a) refer to the process and (b) are true.  Often a process or
object will be identified in one vocabulary, with the hope of obtaining
an analysis in a different vocabulary, as in "What *are* those powdered
wigs?" or "What *are* you doing?"  On other occasions, an analysis will
be desired in the same vocabulary, as in "What chemicals are in this
solution?"  An analysis is *complete* iff every true statement about the
process (in the relevant vocabulary) is included in the analysis.

An explanation organizes the description provided by the analysis.
There are many philosophical theories of explanation, but IMO they all
boil down to "an axiomatization of an analysis".

An interpretation is *always* between two abstractions, and is somewhat
more complex than an analysis.  Interpretation is a lot like translation,
but a translation is always between two languages.
An interpretation can be between real events or objects (after they are
analyzed) and a language.  (Cf Quine, _Word and Object_, and Davidson,
_Radical Interpretation_.  David Lewis' _Radical Interpretation_ is very
good also, and is an important counterpoint to Davidson).

When we engineers *implement* a design, we start with a fixed analysis
and interpretation, and build something to fit.  In Impl'ism as applied
to natural sciences, each science provides its own analysis of "reality"
and is responsible for the interpretability of that analysis into the
abstraction of Physics.  Physics is responsible for (a) the analyzability
of all processes within its abstraction, and (b) the interpretability of
all experimental procedures and results, as analyzed, into deduction.

(This picture is so idealized as to border on absurdity, but I would
claim that my proposal is not an *incoherent* ideal, as Logical
Positivism was.  I would owe a great debt to Carnap, if I had not swiped
this view from Aristotle first. :-)

Impl'ism requires a HLS from the entire implemented abstraction to the
implementation, but the HLS concept can be applied in other contexts.

> I don't consider that all abstractions are fully homo-
>morphic; they can be transforms, or partial mappings.  Lots of biological
>examples of those.  If you want to reserve "abstraction" only for full
>homomorphs, for formalized systems, then let's Capitalize it to indicate
>this more pristine use.

I do want to preserve a close relationship between "abstraction" and
"formal system", and I would suggest that we consider "analysis" as an
analysis of "transform", and "restriction of HLS" for "partial mapping".
Freudian "primary process" is one example.  Conceptual abstraction,
simile, and metaphor (but not allegory) is another.

Eventually, I want to claim that all *sensory* processes have the
logical structure of an analysis, and that *perception* has the logical
structure of an interpretation.
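[The "homomorphisms of logical structure" between whole formal systems
that Presting keeps invoking have a textbook-small example.  The sketch
below is mine, not his: parity as a structure-preserving map between two
"abstractions", one rich, one nearly empty.]

```python
# Parity forgets almost everything about an integer, yet it preserves
# the additive structure: mapping then operating agrees with operating
# then mapping.  This is the minimal case of a (total) homomorphism;
# Presting's "transforms" and "partial mappings" would be restrictions
# of maps like this one.

def h(n):
    """Map the integers (Z, +) onto the two-element system (Z/2, +)."""
    return n % 2

for a in range(-5, 6):
    for b in range(-5, 6):
        # The homomorphism condition: h(a + b) == h(a) (+) h(b).
        assert h(a + b) == (h(a) + h(b)) % 2
print("parity is a homomorphism from (Z, +) onto (Z/2, +)")
```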
I'm not sure yet, but I think *memory* might turn out to be just
*entropy*.  I do think it is possible to demonstrate formally that
*force* has the logical structure of *choice* and NOT the logical
structure of goal-directedness.  Memory is the hard part (!), but I
don't see much role for categorization anywhere.

For example, an ANTIBODY does not recognize any natural kind.  It will
bind as firmly to an artificial anti-antibody as to a virus.  All
natural processes have similar difficulties - their behavior is
independent of any higher-level analyses.  I doubt that categorization
(as opposed to discrimination) enters the psychological picture before
language does.  This is Hume's view, if that helps.

>> Would "determined by" work as a substitute for "emerging from" in the
>> statement above?  Please?  How about "made of?"  "Implemented in?"
>
>No, *not* "determined by".  Too many bad implications for me.  How about
>"abstracted from", or "which are transforms of"?  For the sake of your
>*gentle decorum*, I'll give up "emergent", except when I'm in that gadfly
>mood...

I expect I'll be catching a few gadflies with determinism.  It's sticky
stuff; the more things seem to make sense, the more they seem to be
determined.  Most secretions, quantum or existential, have no effect.
(Statistical determinism is almost as bad as the old-fashioned kind)

My gut feeling on determinism is that the issue is undecidable.  I have
good arguments against Davidson/Dennett/Kant style free will - I read a
seminar paper on the subject at a philosophy conference.  The one
argument that won't go away is the stupid old "predict yourself and do
the opposite."  For TM's, which can be analyzed in an abstraction which
is not semantically closed, this paradox is useful.  For any system
which speaks a semantically closed language, there are a few, well,
"complications":

>> I believe that communication does not occur in the absence of an
>> emotional interaction (real or imagined).  This is a problematic
>> assertion, I realize.
>
>Maybe for some.  The biggest problem I have here is that you have
>tried to distinguish *real* from *imagined* emotional interaction.
>There is no make-believe emotion.  It's a transform of real inter-
>actions.  Now, whether those interactions are comfortably isomorphic
>is another story....

Too true.  Who would want to be isomorphic to an inconsistency?  Who
has any choice?


Ken Presting  ("Dark thoughts on a dark day")
smoliar@vaxa.isi.edu (Stephen Smoliar) (05/26/90)
In article <a6AO02d9a9y801@amdahl.uts.amdahl.com> kp@amdahl.uts.amdahl.com
(Ken Presting) writes:
>In article <13516@venera.isi.edu> smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>
>So far, the only answer has been, "Innate Universal Grammar".  Not a
>very satisfying answer, I think we agree.  We need to improve our
>ability to detect repetitions of the same wrong answer, even when it is
>camouflaged in neologisms.
>
>>"By long custom, social discourse in Cambridge is intended to impart and
>>only rarely to obtain information.  People talk; it is not expected that
>>anyone will listen.  A respectful show of attention is all that is
>>required until the listener takes over in his or her turn.  No one has
>>ever been known to repeat what he or she has heard at a party or other
>>social gathering."
>>                                John Kenneth Galbraith
>>                                A TENURED PROFESSOR
>
>I hope you're starting to understand why I hate this epigram so much -
>it describes a type of interpersonal behavior in which conscious thought
>is *irrelevant*.  I know it's realistic - that's the problem!
>
>
>Ken Presting ("Read any *good* books lately?")

Of course, one criterion for goodness is to capture a situation SO
accurately that it is painful to the reader.  Provocation means the
reader is paying attention!  I think what is bothering you is that
Galbraith is describing an "intellectual" community.  However, what he
is describing is an interesting response to an information overload.  In
other words, when we are confronted with more information than we can
handle, we just shut off the inputs.  We can continue to provide output
according to a rather simple social protocol.  What makes matters
interesting is that, as long as we follow the protocol, we shall
continue to be PERCEIVED AS INTELLIGENT (even with all cognitive inputs
shut down)!  What does THAT say about Turing's test in our brave new
world?
=========================================================================
USPS:     Stephen Smoliar
          USC Information Sciences Institute
          4676 Admiralty Way  Suite 1001
          Marina del Rey, California  90292-6695

Internet: smoliar@vaxa.isi.edu

"By long custom, social discourse in Cambridge is intended to impart and
only rarely to obtain information.  People talk; it is not expected that
anyone will listen.  A respectful show of attention is all that is
required until the listener takes over in his or her turn.  No one has
ever been known to repeat what he or she has heard at a party or other
social gathering."
                                John Kenneth Galbraith
                                A TENURED PROFESSOR