[mod.comp-soc] Common Sense and Expert Systems

taylor@hplabsc.UUCP (07/10/86)

This article is from Sherry Marcus at Cornell University
 and was extracted from the news system on June 9, 1986

[from a posting to "mod.ai"]

I have been thinking a lot about the notion of common sense and its possible 
implementation into expert systems. Here are my ideas; I would appreciate your 
thoughts.

Webster's Dictionary defines common sense as 'practical knowledge'.  I 
contend that all knowledge, both informal and formal, comes from this 
'practical knowledge'.  After all, if one thinks about Physics, Logic, or 
Chemistry, much of it makes practical sense in the real world. 

For example, a truck colliding with a Honda Civic will cause more destruction 
than two Honda Civics colliding with each other.  I think that people took 
this practical knowledge of the world and developed formal principles from it.  

It is common sense which distinguishes man from machine. 

If a bum on the street were to tell you that if you give him $5.00 he will 
make you a million dollars in a week, you would generally walk away and ignore 
him.  If the same claim were put to a so-called intelligent machine, the 
machine would not know whether he was Rockefeller or an indigent.

My point is this: I think it is intrinsically impossible to program common 
sense, because a computer is not a man.  A computer cannot experience what man 
can; it cannot see or make the ubiquitous judgements that man can.  We may be 
able to program common-sense-like rules into it, but this is not tantamount 
to real-world common sense, because real-world common sense is drawn from a 
'database' that could never be matched by a simulated one.
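
[To make the "common-sense-like rules" point concrete, here is a minimal
sketch -- hypothetical, in Python, and not part of the original posting.
The rule reproduces the human verdict in the easy case, but it encodes
only the conclusion of common sense, not the experience behind it. --Ed.]

    # A hypothetical, hard-coded "common-sense" rule: reject offers
    # that promise an implausible return on a small stake.
    def evaluate_offer(stake, promised_return):
        if promised_return > 1000 * stake:
            return "walk away"
        return "consider it"

    print(evaluate_offer(5.00, 1000000.00))   # -> walk away

    # Note that the rule gives the same answer whether the offerer is
    # an indigent or Rockefeller: nothing in its 'database' can
    # distinguish the two, which is exactly the point made above.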

Thank you for listening.

                       sherry marcus kvqj@cornella

taylor@hplabsc.UUCP (Dave Taylor) (07/11/86)

This article is from hpccc!mcgregor@hplabs.HP.COM (Scott McGregor)
 and was received on Thu Jul 10 17:40:44 1986
 
>My point is this: I think it is intrinsically impossible to program common
>sense, because a computer is not a man.  A computer cannot experience what man
>can; it cannot see or make the ubiquitous judgements that man can.  We may be
>able to program common-sense-like rules into it, but this is not tantamount
>to real-world common sense, because real-world common sense is drawn from a
>'database' that could never be matched by a simulated one.


If I give you a set of 1000 constraints and 800 variables and an
objective function to maximize, I doubt that you could do as well in
an afternoon at selecting an optimal policy as any mainframe with
a passable linear programming package, despite human "common sense".
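
[For concreteness, here is a minimal sketch of such a solver call --
hypothetical, in Python with the scipy library; the original posting
names no particular package.  The toy problem below has two variables
and two constraints rather than 800 and 1000, but the machine's
advantage only grows with size. --Ed.]

    from scipy.optimize import linprog

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
    # linprog minimizes, so the objective coefficients are negated.
    c = [-3.0, -2.0]
    A_ub = [[1.0, 1.0],
            [1.0, 3.0]]
    b_ub = [4.0, 6.0]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal point:", result.x)        # -> [4. 0.]
    print("objective value:", -result.fun)   # -> 12.0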

You might say that the computer has understanding in that domain, in that it
has the ability to predict optimal behavior under those circumstances
whereas you are not so good at doing so.  You might consider this use
of "understanding" to be metaphorical or literal, depending on your
view of the difference in cognition between man and machine.

Most work, whether human or computer, takes place in such limited "knowledge
domains".  In a good many of them there is the potential to put together
a good database of "common sense" for that limited problem domain.
The remarkable thing about humans is that they 1) operate reasonably
competently in a LARGE number of these domains, and 2) are pretty
good at identifying the boundaries between different domains.  (Humans
seem good at choosing the data and tools that are and are not useful
for solving problems in differing domains.)

Programs need not support such breadth of problem domain to be useful.
I know when to use a linear program and when to use a word processor, and
it doesn't bother me that they are different programs with different domains
of expertise.  Also note that a single computer may run many of these
programs, just as I do both mathematics and writing.  But I might
even be willing to have separate computers for separate tasks (I already
have separate microprocessors controlling my microwave oven, my sprinkler
system, and my outdoor lighting).

The fact that a computer program is not and cannot be a human being is true.
But it is also a tautology.  The computer is a tool to be domesticated, just
as horses and cows and steam engines and internal combustion engines were.
The computer has the potential to make contributions in a great many
more areas than anything we have ever domesticated before.  That might
tempt us to argue (out of fear of inferiority?) that it can never equal or
replace humans in all their diverse tasks.  But this is really a straw-man
argument -- who would want that to happen?  No one.  What people want, I
think, is to have access to information and advice on their own terms --
when they want it, at a low cost, and in a seemingly objective manner.

Today people turn to magazines like Consumer Reports and Money for information
on how to spend their money.  They may also turn to stockbrokers,
bankers, and other investment specialists.  But these people are only
accessible at certain times.  They charge expensive commissions.  And
each one has their own biases (maybe the banker advises you to put your
money in CDs, while the broker says put it in stocks).  An investment program
limited in its knowledge area might be a useful improvement for some
people over these specialists and magazines.  It doesn't matter if
the investment program doesn't know anything about traffic laws or
flying a plane (that's what Flight Simulator is for).

People arguing about the inability of computers to "experience" the world as
we do need to explain what that means.  Does the brain experience blue
in the same way as the eye does?  The physical mechanisms that take place
in the eye are certainly different from those in the brain.  Is it even
meaningful to discuss experiences of the senses as opposed to the brain,
or does "experiencing" take place in a larger system (the mind?) that
encompasses both?  To what extent are the human senses vital to what it
means to experience (e.g., is a blind person less human because they cannot
perceive 'blue')?  Now distinguish the human mental system from a
system in which humans (at keyboards, etc.) and instruments are the
sensors (senses), displays and printers and robot arms are the
motor network, and the 'brain' is composed of a computer and its human-
generated software.  Maybe it is foolish to speak of the computer "brain"
as "experiencing".  It might be more sensible to talk about the
organization, factory, or whatever name we give to the combination of
humans, computers, and supporting devices, as being the locus of an
"understanding" that is not within any of these individual components alone.

    Scott McGregor
    {hplabs, hpfcla, hpcea, hpisla, hpl-opus}!hpccc!mcgregor
    HP Corporate Computing Center

taylor@hplabsc.UUCP (07/11/86)

This article is from mit-eddie!mck-csc!bmg (B. Gunther)
 and was received on Thu Jul 10 22:28:15 1986

 Why can't the database be huge?  Who is to say that large teams of
 programmers can't instill in one massively parallel and *huge* machine
 much of that common sense?  Once the computer/complex is large enough,
 why not give it robot sensors which can interact with the world and
 allow it to "learn" and watch what others do?

 I'm not saying it's necessarily possible currently (given both the technology
 and the resources people are willing to devote to the subject), but who is
 to say that it's impossible?

     Bernie Gunther

taylor@hplabsc.UUCP (07/13/86)

This article is from Len Popp <tektronix!watmath!watdaisy!lmpopp>
 and was received on Sat Jul 12 10:15:08 1986
 
In article <429@hplabsc.UUCP> Sherry Marcus writes:

>Webster's Dictionary defines common sense as 'practical knowledge'.  I
>contend that all knowledge, both informal and formal, comes from this
>'practical knowledge'.  After all, if one thinks about Physics, Logic, or
>Chemistry, much of it makes practical sense in the real world.

One would hope so, inasmuch as the *purpose* of Physics and Chemistry is to
explain the "real world".  Unfortunately, much of quantum physics goes
totally *against* common sense!  So our "common sense" does not by any means
apply to all of the "real world".

>It is common sense which distinguishes man from machine. 

It is fingers which distinguish man from machine. :-)

>If a bum on the street were to tell you that if you give him $5.00 he will
>make you a million dollars in a week, you would generally walk away and ignore
>him.  If the same claim were put to a so-called intelligent machine, the
>machine would not know whether he was Rockefeller or an indigent.

This may be true, but it may have little to do with the machine's
"intelligence" or "common sense".  Most computers simply have not been told
that Rockefeller is rich, or what poverty is; in fact, the computer usually
isn't even told its users' real names!  A failure on *our* part to provide
the computer with sufficient data upon which to base its "common sense"
cannot be construed as a failure of the *computer* to reason.

[good point!  --Dave]

>My point is this: I think it is intrinsically impossible to program common
>sense, because a computer is not a man.  A computer cannot experience what man
>can; it cannot see or make the ubiquitous judgements that man can.  We may be
>able to program common-sense-like rules into it, but this is not tantamount
>to real-world common sense, because real-world common sense is drawn from a
>'database' that could never be matched by a simulated one.

Why not?  Computers *have* been programmed with common-sense rules and
information in limited domains (i.e., expert systems), and within these
domains they sometimes exhibit better "common-sense" reasoning than people.
What is the theoretical or philosophical reason that these domains could not
be extended to the larger, but still limited, ones that humans use?
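
[A minimal sketch of the forward-chaining idea behind such rule-based
systems -- hypothetical, in Python; production systems of the period
used rule languages such as OPS5, but the mechanism is the same.  The
rules themselves are invented for illustration. --Ed.]

    # Toy forward chaining: each rule maps a set of required facts
    # to one new fact, and rules fire until nothing new is derived.
    RULES = [
        ({"stranger", "no credentials"}, "asks for money up front"),
        ({"promises huge return", "asks for money up front"}, "likely scam"),
        ({"likely scam"}, "advice: walk away"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    derived = forward_chain({"stranger", "no credentials",
                             "promises huge return"})
    print(derived)   # includes "likely scam" and "advice: walk away"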


							   Len Popp
{allegra,decvax,ihnp4,tektronix,ubc-vision}!watmath!watdaisy!lmpopp

taylor@hplabsc.UUCP (07/18/86)

This article is from Eugene miya <ames!aurora!eugene>
 and was received on Thu Jul 17 19:19:17 1986
 
> From mit-eddie!mck-csc!bmg (B. Gunther);
> 
>  Why can't the database be huge?

Because if we have learned anything over the past 30 years, it's that
size is not the only issue.  Structure is another issue.  There are others,
but we don't know them all.

>  Who is to say that large teams of programmers can't instill in one 
>  massively parallel and *huge* machine much of that common sense?

People are trying this.  Again, size is only one issue.  Parallelism is
a convenient buzzword.  Tell me how to build a parallel machine.
People cannot even agree on how to put two processors together.  I kid you
not.

>  Once the computer/complex is large enough, why not give it robot sensors 
>  which can interact with the world and allow it to "learn" and watch what 
>  others do?

This was tried.  The problem is that we don't understand learning.
This is where logic, common sense, and the natural world clash.
Give Aristotle a computer, and you would find it clashing with Galileo.
"But that's what I saw [or heard, or felt]" is an expression of our
own limitations.
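
[One way to see the clash -- a hypothetical sketch in Python, with
invented numbers.  A learner that generalizes only from raw
observation in air will "confirm" Aristotle: heavier things do seem
to fall faster, and Galileo's law is nowhere in the data. --Ed.]

    observations = [
        # (mass in kg, observed time to fall about 2 m through air, in s)
        (0.005, 3.10),   # feather: fall dominated by air resistance
        (0.1,   0.90),   # apple
        (5.0,   0.64),   # stone
        (50.0,  0.64),   # boulder: air resistance negligible
    ]

    # Naive induction over observations sorted by increasing mass:
    # is the fall time non-increasing as mass grows?
    heavier_falls_at_least_as_fast = all(
        t2 <= t1
        for (m1, t1), (m2, t2) in zip(observations, observations[1:])
    )
    print(heavier_falls_at_least_as_fast)   # -> True
    # "But that's what I saw" -- the rule fits every observation
    # and is still the wrong physics.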

>  I'm not saying it's necessarily possible currently (given both the
>  technology and the resources people are willing to devote to the subject),
>  but who is to say that it's impossible?

I for one won't.  We have to study the problem more.  The young have to
goad the old.  Go do it.

>      Bernie Gunther