[net.cog-eng] An idea...

bill@hao.UUCP (Bill Roberts) (03/15/86)

I am not a cognitive scientist.  My background is physics and astrophysics.  I 
am currently studying computer science with the intention of doing research in
cognitive science some day.  Anyway, I've been thinking of ways to model human
behavior in machines.  The idea is this:

	Suppose you have a function defined as follows:
	   f:MS * context ------> Behavior

	where 
	 *  ::= a projection operation that takes a subset of MS and 
		"projects" this subset onto the space defined by context.

	 MS ::= a composition of abstract data types,

         context ::= an environment in which the ADTs are "applied",

	 Behavior::= a class of "observable responses".
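The mapping above can be sketched in a few lines of Python.  This is a purely illustrative toy, assuming the ADTs are simple callables; all the names here (project, f, the sample ms and ctx) are my own invention, not part of any established formalism.

```python
def project(mental_state, context):
    """'Project' the subset of the mental state relevant to the context."""
    return {k: v for k, v in mental_state.items() if k in context}

def f(mental_state, context):
    """f: MS * context -> Behavior, with * as the projection operation."""
    relevant = project(mental_state, context)
    # The observable response is whatever the projected ADTs produce
    # when 'applied' in this environment.
    return [adt(context[name]) for name, adt in relevant.items()]

# MS as a composition of simple 'abstract data types' (here, callables);
# the context supplies the environment in which one of them is applied.
ms = {"greet": lambda who: "hello, " + who, "count": lambda n: n + 1}
ctx = {"greet": "world"}
print(f(ms, ctx))  # ['hello, world']
```

The projection step is doing the real work: only the part of MS addressed by the context contributes to the observable behavior.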

Clearly this idea is simplistic and perhaps has been proposed elsewhere.
My reason for posting it is to get some response from the professional 
(and unprofessional) cog-sci community.  

Terms like "projects", "applied" and "observable responses" are
open for definition.  Constructive responses would be appreciated as well as,
some pointers into any literature that deals with notions such as the above.
Again, I am not a cognitive scientist and would hope this idea is taken in that
context. 

							-Bill Roberts
							 NCAR/HAO
							 Boulder, CO
							 !hao!bill

kort@hounx.UUCP (B.KORT) (03/16/86)

Bill Roberts invites a discussion of modeling human feelings
in a computer.

I have been toying with a speculative theory on this topic.

If you put into a computer an input stream which is syntactically
parsable and semantically meaningful, the computer invokes the
appropriate procedures to process the input and produce the
desired and anticipated output.  But what happens if the input
doesn't parse, isn't meaningful, or requires processing by a
procedure not resident in the addressed machine?  Well, the
computer issues a diagnostic message, which is at times cryptic,
unparsable, and irritating to the human user.  That is, the
human gave the computer an unwanted input, and got back an
unwanted output.  Now that scenario is very much like an
emotional interchange between two people.  That is, there
is an interesting analogy between diagnostic messages and
emotional reactions. The computer doesn't know what to make
of the input, and the human doesn't know what to make of the
diagnostic output.  In their mutual ignorance, both parties are stuck
with an unsolved problem.  At least one party has some learning to do.
Can you detect the underlying symmetry of the situation?
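The analogy can be made concrete with a toy command interpreter: unwanted input draws back a cryptic 'diagnostic', much as an unwanted remark draws an emotional reaction.  The two-word grammar and the message texts are invented for the sketch.

```python
def respond(line):
    """Parse 'VERB NOUN' commands; anything else draws a diagnostic."""
    tokens = line.split()
    if len(tokens) != 2:
        return "?SYNTAX ERROR"                 # cryptic, irritating: the 'emotional reaction'
    verb, noun = tokens
    if verb not in ("open", "close"):
        return "?UNKNOWN PROCEDURE: " + verb   # no resident procedure for this input
    return verb + "ing " + noun                # the desired, anticipated output

print(respond("open door"))   # opening door
print(respond("gribble"))     # ?SYNTAX ERROR
```

The symmetry shows up in the second call: the machine couldn't make sense of the human's input, and the human is unlikely to make much of "?SYNTAX ERROR" either.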

If the above paragraph has piqued your interest, I would be interested
in unfolding more ideas along these lines.  It helps me if you pose
a few questions that are uppermost in your mind.  Then I can take
as a goal the discovery of interesting and practical answers to them.

--Barry Kort   ...ihnp4!hounx!kort

munro@sdics.UUCP (Paul Munro) (03/17/86)

In article <2008@hao.UUCP> bill@hao.UUCP (Bill Roberts) writes:
>
>	Suppose you have a function defined as follows:
>	   f:MS * context ------> Behavior
>
>	where 
>	 *  ::= a projection operation that takes a subset of MS and 
>		"projects" this subset onto the space defined by context.
>
>	 MS ::= a composition of abstract data types,
>
>         context ::= an environment in which the ADTs are "applied",
>
>	 Behavior::= a class of "observable responses".
>

The PDP (parallel distributed processing) or connectionist approach
to cognitive science is an instance of the more general framework proposed
above.  PDP networks are essentially nonlinear mappings of input arrays
("context") to output arrays ("behavior") and hence fulfill the role of the
MS (mental state?) in the proposed scheme.  Many, if not most, PDP models
incorporate rules governing the dynamics of network parameters and hence
exhibit various kinds of learning phenomena.   In general, such learning
mechanisms tend to optimize some quantity with respect to the statistics of
the input environment.  Indeed they are often designed with some optimization
criterion built in (such as to minimize the error between the observed
and desired output characteristics).

In the language of the original posting, PDP learning theories generally
contain two functions, one for processing (F) and one for learning (L):

 	   F: MS * context ------> Behavior

	   L: [old MS, context, behavior] ------> new MS
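The simplest possible instance of this pair is a single linear unit trained by an error-correction (delta) rule; the sketch below assumes that reading, with MS as the weight vector, context as the input array, and Behavior as the output.

```python
def F(ms, context):
    """Processing: map mental state (weights) and context to behavior."""
    return sum(w * x for w, x in zip(ms, context))

def L(ms, context, desired, rate=0.1):
    """Learning: return a new mental state that reduces the error between
    observed and desired output -- the built-in optimization criterion."""
    error = desired - F(ms, context)
    return [w + rate * error * x for w, x in zip(ms, context)]

ms = [0.0, 0.0]
for _ in range(100):                      # learn the target output 1.0 for input [1, 1]
    ms = L(ms, [1.0, 1.0], 1.0)
print(round(F(ms, [1.0, 1.0]), 3))        # 1.0
```

Each application of L shrinks the error by a constant factor, so repeated exposure to the input environment drives the behavior toward the desired one.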



Here is a bibliography of some books and articles you may 
find interesting.  You may well already be familiar with some or all of
these.  First I should mention two journals that include articles concerned
with neural network processes: Biological Cybernetics (Springer) and the
IEEE Transactions on Systems, Man, and Cybernetics.  The September 1983
issue of the latter is particularly rich in network brain models.

Some books you might find interesting are:

T. Kohonen:   Self-Organization and Associative Memory.  Springer (1984)
G. Palm:      Neural Assemblies.  Springer (1982)

The following books are collections of papers:

J. Anderson & G. Hinton:  
Parallel Models of Associative Memory. Erlbaum (1981)

S. Amari & M. A. Arbib: 
Competition and Cooperation in Neural Nets.  Springer (1982)

W. B. Levy, J. A. Anderson, S. Lehmkuhle:
Synaptic Modification, Neuron Selectivity, and Nervous System
Organization.  Erlbaum (1984)

J. L. McClelland & D. E. Rumelhart:
Parallel Distributed Processing: Explorations in the Microstructure of 
Cognition.  MIT Press (in press)

Between their content and their bibliographies, the above books should
direct you to most of the important work in this field.  But let me note a
few articles for you also:

Kohonen, T., Oja, E.: Fast adaptive formation of orthogonalizing filters
and associative memory in recurrent networks of neuron-like elements.
Biol. Cybern. Vol 21: 85-95 (1976)

Sutton, R. S., Barto, A. G.: Toward a modern theory of adaptive networks:
expectation and prediction.  Psych. Rev. Vol. 88: 135-170 (1981)

Miyake, S., Fukushima, K.: A neural network model for the mechanism of 

fgtbell@kcl-cs.UUCP (ZNAC450) (03/19/86)

In article <701@hounx.UUCP> kort@hounx.UUCP writes:
>Bill Roberts invites a discussion of modeling human feelings
>in a computer.
>
>.....  That is, there
>is an interesting analogy between diagnostic messages and
>emotional reactions.

There is, but if one were seriously to attempt to model human emotions,
it would involve much deeper knowledge of how human emotions work.  For
example, I suggest that there are at least four factors involved in any
display of human emotion:

		1) The background, or history, of the individual involved.  I
don't think that children can grow up without absorbing a lot from the
culture they find themselves in.

		2) The immediate situation that a human is in.  Is work going
well?  Are you getting enough pay?  Questions like these affect one's
emotional outlook over a period of months.

		3) Physiological state.  There's no mistaking it: if I haven't
eaten for a few hours and my blood sugar is low, or if I've got a
headache, tolerance and patience take a turn for the worse.

		4) Role.  How do I interact with the people or group I am with
at the moment?  (Or, in the terms of the present discussion, how do I
interact with the computer / operating system I am using?  I just LOVE
UN*X  :-) )  Is the relationship colleague-colleague, secretary-boss,
client-advisor, vendor-customer, or any number of other things?

My point is this:
	Any computer system that tries to model human emotions is going to
have to model some, though not necessarily all, of these factors.  ALSO, in
order to be really plausible, it's going to have to form a model of the
user and his/her emotional state.  That sets the scene for INTERACTION as
opposed to the `canned response to keyword' approach.
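One way to picture the proposal is as a state the system tracks for itself and for the user, with responses chosen from both.  Everything below -- the field names, the thresholds, the role labels -- is invented for illustration, not a worked-out design.

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    background: str       # 1) history / culture of the individual
    situation: float      # 2) medium-term outlook (work, pay), -1 .. 1
    physiology: float     # 3) hunger, headache, etc.: 0 (bad) .. 1 (fine)
    role: str             # 4) the current relationship, e.g. "client-advisor"

def choose_response(system: EmotionalState, user: EmotionalState) -> str:
    """Pick a reply style from the modeled states, not a canned keyword."""
    if user.physiology < 0.3:
        return "terse and gentle"      # don't lecture a tired, hungry user
    if user.role == "novice-tutor":
        return "patient and detailed"
    return "plain and direct"

user = EmotionalState("unix hacker", 0.2, 0.9, "novice-tutor")
print(choose_response(EmotionalState("system", 0.0, 1.0, "advisor"), user))
# patient and detailed
```

Even this crude user model lets the same input draw different responses depending on who is asking and in what state -- interaction, rather than "syntax error line 49" regardless.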

	At the moment, responses such as "syntax error line 49" are more like
the response of a dog that won't lie down when it's told to.

	I hope this article sparks off some more ideas, comments, whatever.