[net.ai] mental states

gutfreund%umass-cs%CSNet-Relay@sri-unix.UUCP (12/12/83)

From:  Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>

Ken Laws in his little editorializing comment on my last note seems to
have completely missed the point. Whether FSAs can display mental
states is an argument I leave to others on this list. However, John
McCarthy's definition allows ant hills and colloidal suspensions to
have mental states.

gutfreund%umass-cs%CSNet-Relay@sri-unix.UUCP (12/13/83)

From:  Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>

I am very intrigued by Fernando Pereira's last comment:

    Sorry, you missed the point that JMC and then I were making. Prigogine's
    work (which I know relatively well) has nothing to say about systems
    which have to model in their internal states equivalence classes of
    states of OTHER systems. It seems to me impossible to describe such
    systems unless certain sets of states are labeled with things
    like "believe(John,have(I,book))". That is, we start associating
    classes of internal states to terms that include mentalistic
    predicates.

I may be missing the point, since I am not sure what "model in their internal
states equivalence classes of states of OTHER systems" means. But I think
what you are saying is that `reasoning systems' that encode in their state
information about the states of other systems (or their own) are not
covered by Ilya Prigogine's work.
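
For concreteness, here is one reading of that phrase, as a rough sketch of
my own construction (not anything Pereira or McCarthy actually proposed): a
system whose internal states are partitioned into equivalence classes, each
class tagged with a mentalistic label such as `believe(John,have(I,book))':

    # Toy sketch; the state names and labels below are invented.
    # The "reasoner's" internal states are grouped into equivalence classes,
    # each tagged with a mentalistic predicate about another agent.
    state_classes = {
        "believe(John, have(I, book))":      {"s1", "s4", "s7"},
        "believe(John, not(have(I, book)))": {"s2", "s3"},
        "no_commitment":                     {"s5", "s6"},
    }

    def label_of(internal_state):
        # Return the mentalistic label of the class containing this state.
        for label, states in state_classes.items():
            if internal_state in states:
                return label
        return None

    print(label_of("s4"))   # -> "believe(John, have(I, book))"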

I think you are engaging in a leap of faith here. What is the basis
for believing that any sort of encoding of the state of other systems is
going on here? I don't think even the philosophical guard phrase
`equivalence class' protects you in this case.

To continue in my role of sceptic: if you claim that you are constructing
systems that model their internal state (or other systems' internal states)
[or even an equivalence class of those states], then I will claim that
my Linear Programming model of a computer parts inventory is also
exhibiting `mental reasoning', since it is modeling the internal states
of that computer parts inventory.
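
To make the analogy concrete, here is roughly what such an inventory model
amounts to (a toy sketch with invented numbers, not my actual model): the
program's variables and constraints track the state of the parts inventory,
and that is the full extent of its `modeling' of another system:

    # Toy LP inventory model (illustrative numbers only): choose how many of
    # each computer part to order so that projected stock covers forecast
    # demand at minimum cost. The decision variables and constraints "track"
    # the state of the inventory -- and that is all the state-modeling here.
    from scipy.optimize import linprog

    unit_cost     = [4.0, 9.0]        # cost per unit of part A, part B
    stock_on_hand = [20.0, 5.0]       # current stock of each part
    demand        = [50.0, 30.0]      # forecast demand for each part

    # Constraint: stock_on_hand + order >= demand,
    # rewritten as -order <= stock_on_hand - demand.
    A_ub = [[-1.0, 0.0],
            [0.0, -1.0]]
    b_ub = [stock_on_hand[0] - demand[0],
            stock_on_hand[1] - demand[1]]

    result = linprog(unit_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(result.x)                   # -> [30. 25.], the order quantities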

This means that Prigogine's work is operative in the case of FSA-based
`reasoning systems', since they can do no more modeling of the internal
state of another system than a colloidal suspension, or an inventory
control system built by an operations research person.


                                - Steven Gutfreund
                                  Gutfreund.umass@csnet-relay