[mod.ai] common sense

KVQJ@CORNELLA.BITNET (07/03/86)

I have been thinking a lot about the notion of common sense and
its possible implementation in expert systems. Here are my ideas;
I would appreciate your thoughts.
Webster's Dictionary defines common sense as 'practical knowledge'.
I contend that all knowledge, both informal and formal, comes from
this 'practical knowledge'.
After all, if one thinks about physics, logic, or chemistry, much of it
makes practical sense in the real world. For example, a truck colliding
with a Honda Civic will cause more destruction than two Hondas colliding
with each other. I think that people took this practical knowledge of the
world and developed formal principles.
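To see how a formal principle lines up with the intuition, consider a
toy computation (the masses and speed below are invented for
illustration; kinetic energy is one obvious candidate principle, though
momentum would tell a similar story):

    # KE = 1/2 * m * v^2 -- a 'formal principle' behind the intuition
    def kinetic_energy(mass_kg, speed_m_s):
        return 0.5 * mass_kg * speed_m_s ** 2

    print(kinetic_energy(15000, 20))  # loaded truck:  3,000,000 J
    print(kinetic_energy(900, 20))    # Honda Civic:     180,000 J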
It is common sense which distinguishes man from machine. If a bum on
the street were to tell you that if you give him $5.00 he will make you
a million dollars in a week, you would generally walk away and ignore him.
If the same man were to type the same offer into a so-called intelligent
machine, the machine would not know whether he was Rockefeller or an
indigent.
My point is this: I think it is intrinsically impossible to program
common sense, because a computer is not a man. A computer cannot
experience what man can; it cannot see, or make the everyday judgments
that man can. We may be able to program common-sense-like rules into
it, but this is not tantamount to real-world common sense, because
real-world common sense is drawn from a 'database' that could never be
matched by a simulated one.
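Here is a sketch of the kind of common-sense-like rule I have in mind
(the rule, thresholds, and numbers are all invented for illustration):

    # A rule keyed only to the surface form of the offer; nothing in it
    # can distinguish a bum from Rockefeller.
    def plausible_offer(stake, promised_return, days):
        # naive rule: distrust returns over 100x the stake in under a month
        return promised_return <= 100 * stake or days >= 30

    print(plausible_offer(5.00, 1_000_000, 7))  # False: offer rejected

The rule gets the bum right, but only by accident of its thresholds; it
would reject the same words from Rockefeller, because its 'database'
contains no experience of speakers at all.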
Thank you for listening.
                       sherry marcus kvqj@cornella

Newman.pasa@XEROX.COM (07/09/86)

Philosophically, Sherry Marcus' ideas about common sense are poor in the
same sense that I think Searle and Dreyfus' ideas about why AI won't
ever happen are poor. As near as I can tell, all three end up with some
feature of human intelligence which cannot be automated for basically
unexplained reasons. Marcus' problem is simpler than the others' (why
can't a computer have a real-world common-sense database like a
human's?), but it is the same basic philosophical trap. All three appear
to believe that there is some magical property of human intelligence
(Searle and Dreyfus appear to believe that there is something special
about the biological nature of human intelligence) which cannot be
automated, but none can come up with a reason for why this is so.

Comments?? I would particularly like to hear what you think Searle or
Dreyfus would say to this.

>>Dave

kube%cogsci@berkeley.edu (07/14/86)

From Newman.pasa@Xerox.COM,  AIList Digest   V4 #165:
>...All three appear
>to believe that there is some magical property of human intelligence
>(Searle and Dreyfus appear to believe that there is something special
>about the biological nature of human intelligence) which cannot be
>automated, but none can come up with a reason for why this is so.
>
>Comments?? I would particularly like to hear what you think Searle or
>Dreyfus would say to this.

Searle and Dreyfus agree that human intelligence is biological (and so
*not* magical), and in fact believe that artificial intelligences
probably can be created.  What they doubt is that a class of currently
popular techniques for attempting to produce artificial intelligence
will succeed.  Beyond this, the scope of their conclusions, and their
arguments for them, are pretty different.  They have given reasons for
their views at length in various publications, so I hesitate to post
such a short summary, but here goes:

Dreyfus has been heavily influenced by the existential
phenomenologists Heidegger and Merleau-Ponty.  This stuff is extremely
dense going, but the main idea seems to be a reaction against the
Platonic or Cartesian picture of intelligent behavior as being
necessarily rational, reasoned, and rule-described.  Instead,
attention is called to the vast bulk of unreflective, fluent, adaptive
coping that constitutes most of human interaction with the world.
That the phenomenology of this kind of intelligent behavior shows it
to not be produced by reasoning about facts, or applying rules to
propositional representations, etc., and that every system designed to
produce such behavior by these means has been brittle and not
extensible, are reasons to suppose that (1) it's not done that way and
(2) it can't be done that way.  (These considerations are not intended
to apply to systems which are only rule-described at a sufficiently
subpersonal level, say at the level of weights of neuronal
interconnections.  Last I heard, Dreyfus thinks that some flavors of
connectionism might be on the right track.)
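(To make that contrast concrete, here is a toy sketch; the features,
weights, and threshold are invented, and none of this is Dreyfus's own
example:

    # Rule-described at the personal level:
    def is_chair_rule(has_seat, has_back, has_legs):
        return has_seat and has_back and has_legs

    # 'Rule-described' only at the subpersonal level of connection weights:
    def is_chair_net(features):              # features: list of 0/1
        weights = [0.6, 0.3, 0.3]            # stand-in for learned weights
        activation = sum(w * f for w, f in zip(weights, features))
        return activation > 0.5              # simple threshold unit

    print(is_chair_rule(True, True, False))  # False: the rule is brittle
    print(is_chair_net([1, 1, 0]))           # True: degrades gracefully

The second function follows no personal-level rule about chairs; any
"rules" live only in the weights.)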

Searle, on the other hand, talks about intentional mental states
(states which have semantic content, i.e., which are `about'
something), not behavior.  His (I guess by now kind of classic)
Chinese Room argument is intended to show that no formal structure of
states of the sort required to satisfy a computational description of
a system will guarantee that any of the system's states are
intentional.  And if it's not the structure of the states that does
the trick, it's probably what the states are instanced in, viz.
neurochemistry and neurophysiology, that lends them intentionality.
So, for Searle, if you want to build an artificial agent that will not
only behave intelligently but also really have beliefs, etc., you will
probably have to wire it up out of neurons, not transistors.  (Anyway,
brains are the only kind of substance that we know of that produce
intentional states; Searle regards it as an open empirical question
whether it's possible to do it with silicon.)
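(Again a toy, with an invented rule book, just to fix the shape of the
claim: the program below satisfies a computational description of
fluent reply-giving, and nothing about that formal structure makes any
of its states about anything:

    # Shape-to-shape mappings, nothing more -- the demon matches symbols.
    rule_book = {
        "NI HAO":  "NI HAO",
        "NI E MA": "WO BU E",
    }

    def chinese_room(symbols):
        # no state here is 'about' hunger, greetings, or anything else
        return rule_book.get(symbols, "QING ZAI SHUO YI BIAN")

    print(chinese_room("NI E MA"))  # fluent output, zero understanding
)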

Now you can think that these reasons are more or less awful, but it's
just not right to say that these guys have come up with no reasons at all.

Paul Kube
kube@berkeley.edu
...ucbvax!kube

Newman.pasa@XEROX.COM (07/14/86)

Thanks for the reply.

Dreyfus' view seems to have changed a bit since I last read anything of
his, so I will let that go. However, I suspect that what I am about to
say applies to him too.

I like your description of Searle's argument. It puts some things in a
clearer light than Searle's own stuff. However, I think that my point
still stands. Searle's argument seems to assume some "magical" property
(I really should be more careful when I use this term; please understand
that I mean only that the property is unexplained, and that I find its
existence highly unintuitive and unlikely) of biology that allows
neurons (governed by the laws of physics, probably entirely
deterministic) to produce a phenomenon (or an epiphenomenon, if you
prefer: intelligence) that is not producible by other deterministic
systems.

What is this strange feature of neurobiology? What reason do we have to
believe that it exists other than the fact that it must exist if the
Chinese Room argument is correct? I personally think it much more
likely that there is a flaw somewhere in the Chinese Room argument.

>>Dave

wagle%iuvax.indiana.edu@CSNET-RELAY.ARPA (Perry Wagle) (07/15/86)

[This is a response to ucbjade!KVQJ's note on common sense.]

  The flaw in Searle's Chinese Room experiment is that he gets bogged down
in treating the demon as the thing doing the "understanding", rather than
the formal rule system itself.  And of course it is absurd to claim that
the demon understands anything -- just as it is absurd to claim that the
individual neurons in your brain understand anything.

Perry Wagle, Indiana University, Bloomington Indiana.
...!ihnp4!inuxc!iuvax!wagle	(USENET)
wagle@indiana			(CSNET)
wagle%indiana@csnet-relay	(ARPA)

colonel@buffalo.CSNET ("Col. G. L. Sicherman") (07/15/86)

In article <860714-094227-1917@Xerox>, Newman.pasa@XEROX.COM asks:
> 
> However, I think that my point still stands. Searle's argument seems to
> assume some "magical" property ... of biology that allows neurons ...
> to produce a phenomenon ...  that is not producible by other
> deterministic systems.
> 
> What is this strange feature of neurobiology?

I believe that the mysterious factor is not literally "magic" (in your
broad sense), but merely "invisible" to the classical scientific method.
A man's brain is very much an _interactive_ system.  It interacts
continually with all of the world that it can sense.

On the other hand, laboratory experiments are designed to be closed
systems.  They are designed to be controllable; they rely on artificial
input, at least in the experimental stage.  (When such systems are used
in the field, they may be regarded as intelligent; even a door controlled
by an electric eye meets our intuitive criterion for intelligence.)

Just what do we demand of "artificial intelligence?" Opening doors
for us?  Writing music and poems for us?  Discoursing on philosophy
for us?  --Or doing things for _itself,_ and to Hell with humans?
I don't think that A.I. people agree about this.