[comp.ai] Goal of AI: where are we going?

josh@topaz.rutgers.edu (J Storrs Hall) (01/01/70)

   krulwich@yale.ARPA (Bruce Krulwich):
   If I expect my car to take me to the moon and it doesn't, is it
   flawed??

If you expect your car to take you to the moon, then I would say
your mind *is* flawed...

--JoSH

:^)

eugene@pioneer.arpa (Eugene Miya N.) (01/01/70)

In article <578@louie.udel.EDU> montgome@udel.EDU (Kevin Montgomery) writes:
>>> In article <2281@umn-cs.UUCP>, ramarao@umn-cs.UUCP (Bindu Rama Rao) writes:
>>> > 	Is the Human mind flawed?
>C'mon guys, lighten up for a sec.  Flawed implies a defect from it's
>design.  Therefore, if someone's mind doesn't do what it's designed

Having read the postings which followed this, consider that the human eye
has many blind spots, the largest where the optic nerve is and many
smaller ones.  The ear isn't perfect either.  Also consider how we can
be fooled by Necker illusions, visual, verbal, auditory, etc.  Flawed
may be too strong a word.  Is the greater "mind" flawed if its
components and inputs are "flawed?"  I prefer the "Just is" hypothesis. 

On emotions: you may have something there, but AI people are not the people
to answer that question.  A fellow I corresponded with on AI-Digest a
while ago noted he had a difficult time writing a Social Worker expert
system.  It is harder to dish out artificial compassion than artificial
discrimination.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

khl@usl (Calvin K. H. Leung) (09/25/87)

Should the ultimate goal of AI be the perfecting of human intelligence,
or the imitating of intelligence in human behavior?

We all admit that the human mind is not flawless.  Biased decisions
can be made due to emotional problems, for instance.  So there is no
point trying to imitate the human thinking process.  Some current
research areas (neural networks, for example) use the brain as the
basic model.  Should we also spend some time investigating other
models which could be more efficient and reliable?

Suppose we have the necessary technology to build robots that are
highly intelligent: efficient, reliable, and without any "bad"
characteristic that man has.  Then what role will man play in a
society where his intelligence can be viewed as a comparatively
"lower" form?

AI, where are we going?

nakashim@su-russell.ARPA (Hideyuki Nakashima) (09/27/87)

In article <178@usl> khl@usl.usl.edu.UUCP (Calvin Kee-Hong Leung) writes:
>
>We all admit that the human mind is not flawless.  Bias decisions
>can be made due to emotional problems, for instance.  So there is
>no point trying to imitate  the  human  thinking  process.

I believe that those "bad" characteristics of humans are necessary
evils of intelligence.  For example, although we still don't understand
the function of emotion in the human mind, the psychologist Toda says that
it is a device for survival.  When an urgent danger is approaching, you
don't have much time to think.  You must PANIC!  Emotion is a meta-
inference device to control your inference mode (mainly of resources).

If we ever make a really intelligent machine, I bet the machine
will also have the "bad" characteristics.  In summary, we have to study
why humans have those characteristics to understand the mechanism of
intelligence.


Hideyuki Nakashima
nakashima@csli.stanford.edu
(or nakashima%etl.jp@relay.cs.net)

bware@csm9a.UUCP (Bob Ware) (09/28/87)

>We all admit that the human mind is not flawless.  Bias decisions
>can be made due to emotional problems, for instance. ...

The above has been true for all of recorded history and remains true
for almost everyone today.  While almost everyone's mind is flawed due
to emotional problems, new data is emerging that indicates the mind can
be "fixed" in that regard.  To see what I am referring to, read L Ron
Hubbard's book on "Dianetics".

MAIL: Bob Ware, Colorado School of Mines, Golden, Co 80401, USA
PHONE: (303) 273-3987
UUCP: hplabs!hao!isis!csm9a!bware or ucbvax!nbires!udenva!csm9a!bware

marty1@houdi.UUCP (M.BRILLIANT) (09/29/87)

In article <178@usl>, khl@usl (Calvin K. H. Leung) writes:
> Should the ultimate goal of AI be the perfecting of human  intel-
> ligence,  or the imitating of intelligence in human behavior?
> 
> We all admit that the human mind is not flawless...  So there is
> no point trying to imitate  the  human  thinking  process.   Some
> current  research  areas  (neural  networks, for example) use the
> brain as the basic model.  Should we also spend some time on  the
> investigation  of some other models which could be more efficient
> and reliable?

I always thought there were several different currents going in AI.
One stream is trying to learn how the human mind works and imitate it. 
Another stream is trying to fill in the gaps in the capabilities of the
human mind by using unique machine capabilities in combination with
imitations of the mind.  Some people are working with research
objectives, some have application objectives.

We don't need a unique goal for AI.  We contain multitudes.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

goldfain@osiris.cso.uiuc.edu.UUCP (09/30/87)

Bob Ware, of Colorado School of Mines, writes :

> ...  While almost everyone's mind is  flawed due to  emotional problems, new
> data is emerging that indicates the mind can be "fixed" in  that regard.  To
> see what I am referring to, read L Ron Hubbard's book on "Dianetics".

I suppose that if someone feels they have emotional problems and turns to Mr.
Hubbard for help, there is some sense to that.  He ought to know about them,
since reports have indicated over the years that he has more than his fair
share of them ...  :-)

Alternatively,   one  could consult  someone  who actually  has credentials in
psychology.  "You pays your money and you takes yer choice."

                                     - Mark Goldfain
                                       (ARPA: goldfain@osiris.cso.uiuc.edu)

lishka@uwslh.UUCP (09/30/87)

***Warning: FLAME ON***

In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
>>We all admit that the human mind is not flawless.  Bias decisions...
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The expression "we all" does not apply to me, at the very least.  Some of
us (at least myself) like to believe that the human mind should not be
considered to be either flawed or flawless...it only "is."  I feel
that making a judgement on whether or not everyone admits that the
human mind is flawed happens to be a biased decision on the above
net-reader's part.  Realize that not everyone has the same views as
the above...

>>...can be made due to emotional problems, for instance. ...
			^^^^^^^^^^^^^^^^^^

Is this statement to be read as "emotional problems can cause bias
decisions, which are flaws in the human mind?"  If it is, then I
heartily disagree, because I once again feel that emotional problems
and/or bias decisions are not indicative of flaws in the human
mind...see above for my reasons.

>
>The above has been true for all of recorded history and remains true
>for almost everyone today.  While almost everyone's mind is flawed due
							     ^^^^^^^^^^
>to emotional problems, new data is emerging that indicates the mind can...
 ^^^^^^^^^^^^^^^^^^^^^

Again, I don't feel that my mind is "flawed" by emotional problems.
To me that seems to be a very "Western" (and I am making a rather
stereotyped remark here) method of thinking.  As I have grown up with
parents who have Buddhist values and beliefs, I think that making a
value judgement such as "human minds are flawed because of..." should
be indicated as such...there is no way to prove that sort of "fact."
For all I know or care, the human mind is neither perfect nor flawed;
it just "is," and I don't wish to make sweeping generalities such as
the above.  There are many other views of the mind out there, and I
recommend looking into *all* Religious views as well as *all*
Scientific views before even attempting a statement like the above
(which would easily take more than a lifetime).

>...be "fixed" in that regard.  To see what I am referring to, read L Ron
        ^^^^^
>Hubbard's book on "Dianetics".

To me this seems to be one of many problems in A.I.: the assumption
that the human mind can be looked at as a machine, and can be analyzed
as having flaws or not, and subsequently be fixed or not.  That sort
of thinking in my opinion belongs more in one's Personal Philosophy and
probably should not be used in a "Scientific" (ugghh, another
hard-to-pin-down word) argument, because it is damned hard to prove,
if it can be proven at all.

I feel that the mind just "is," and one cannot go around making value
judgements on another's thoughts.  Who gives anyone else the right to
say a person's mind is "flawed?"  To me that kind of judgement can
only be made by the person "owning" the mind (i.e. who is thinking and
communicating with it!), and others should leave well enough alone.
Now I realize that this brings up arguments in other fields (such as
Psychology), but I feel A.I. should try and move away from this sort
of value judgement.

A comment: why don't A.I. "people" use the human mind as a model, for
better or for worse, and not try to label it as "flawed" or "perfect?"
In the first place, it is like saying that something big (like the
U.S.  Government) is "flawed;" this kind of thing can only be proven
under *certain*conditions*, and is unlikely to hold for all possible
"states" that the world can be in.  In the second place, making that
kind of judgement would seem to be fruitless given all that we
*do*not* know about the human brain/mind/soul.  It seems to me to be
like saying "hmmmm, those damned quarks are fundamentally flawed", or
"neuronal activity is primarily flawed in the lipid bilayer membrane."
I feel that we as humans just do not know diddley about the world
around us, and to say it is flawed is a naive statement.  Why not just
look at the human mind/brain as something that has evolved and existed
over time, and therefore may be a good model for A.I. techniques UNDER
CERTAIN CIRCUMSTANCES?  A lot fewer people would be offended...

***FLAME*OFF***

Sorry if the above offends anyone...but the previous remarks offended
me enough to send a followup message around the world.  If one is
going to make remarks based on very personal opinions, try to indicate
that they are such, and please remember that not everyone thinks the
way you do.

Of course, pretty much everything I said above is a personal opinion,
and I don't presume that even one other person thinks the same way as
I do (but it would be nice to know that others think similarly ;-).
Disclaimer: the above views are my thoughts only, and do not reflect
the views of my employer, although there is evidence that my
cockatiels are controlling my thoughts !!! ;-)

					-Chris

-- 
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
                                \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

josh@topaz.rutgers.edu.UUCP (10/01/87)

lishka@uwslh.UUCP (Christopher Lishka) writes:
    In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
    >>We all admit that the human mind is not flawless.  Bias decisions...
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    The expression "we all" does not apply to me, at very least.  Some of
    us (at least myself)like to believe that the human mind should not be
    considered to be either flawed or flawless...it only "is." 

It seems to me that this simply means that you hold the words "flawed"
and "flawless" to be meaningless.  It is as if Bob Ware were saying 
that the human mind were not plegrontless.  Only I don't see why I 
would get so upset if I saw people saying that minds are plegronted
at best, even if I didn't understand what they meant by the term.
I would instead make an effort to comprehend the concepts being used.

    >>...can be made due to emotional problems, for instance. ...

    Is this statement to be read as "emotional problems can cause bias
    decisions, which are flaws in the human mind?"  If it does, then I
    heartily disagree, because I once again feel that emotional problems
    and/or bias decisions are not indicative of flaws in the human
    mind...see above for my reasons.

I would say that an emotional *problem* is by definition a flaw. 
If you believe that Manson and Hitler and Caligula were not flawed,
but that is just the "way they were", and there is no reason to 
prefer Thomas Aquinas over Lyndon LaRouche, then your own reasoning
is distinctly flawed.

    To me that seems to be a very "Western" (and I am making a rather
    stereotyped remark here) method of thinking.  As I have grown up with
    parents who have Buddhist values and beliefs, I think that making a
    value judgement such as "human minds are flawed because of..." should
    be indicated as such...there is no way to prove that sort of "fact."

Can you say "evangelical fundamentalist mysticism"?  Your Eastern
values seem to be flavored by a strong Western intellectual
aggressiveness, which seems contradictory.  Twice the irony in a 
pound of holy calves liver.

    There are many other views of the mind out there, and I
    recommend looking into *all* Religious views as well as *all*
    Scientific views before even attempting a statement like the above
    (which would easily take more than a lifetime).

What an easy way to sidestep doing any real thinking.  Do you suggest 
that we should read all the religious writings having to do with
angels before we attempt to build an airplane?  Do you think that one
must be an expert on faith healing and the casting out of demons
before he is allowed to make a statement about this interesting mold
that seems to kill bacteria?  

In Western thought it has been realized at long and arduous last that
the appeal to authority is fallacious.  Experiment works;  the real
world exists;  objective standards can be applied.  Even to people.


    >...be "fixed" in that regard.  To see what I am referring to, read L Ron
    >Hubbard's book on "Dianetics".

Experiment (the church of scientology) shows that Hubbard's ideas in
this regard are hogwash.  Hubbard's phenomenon had much more to do
with the charismatic religious leaders of the past than the rational
enlightenment of the future.

    To me this seems to be one of many problems in A.I.: the assumption
    that the human mind can be looked at as a machine, and can be analyzed
    as having flaws or not, and subsequently be fixed or not. 

Surely this is independent of the major thrust of AI, which is to 
build a machine that exhibits behaviors which, in a human, would be
called intelligent.  It is true that most AI researchers "believe that
the mind is a machine", but it seems that the alternative is to
suggest that human intelligence has a supernatural mechanism.

    That sort
    of thinking in my opinion belongs more in ones Personal Philosophy and
    probably should not be used in a "Scientific" (ugghh, another
    hard-to-pin-down word) argument, because it is damned hard to prove,
    if it is able to be proven at all.

My personal philosophy *is* scientific, thank you, and it is an
objectively better one than yours is.

    I feel that the mind just "is," and one cannot go around making value
    judgements on another's thoughts.  Who gives anyone else the right to
    say a person's mind is "flawed?" 

Who gives me the right to say that 2+2=4 when you feel that it should
be 5?  If the Wisconsin State Legislature passed a law saying that it 
was 5, they would be wrong;  if everybody in the world believed it was
5, they would be wrong;  if God Himself claimed it was 5, He would be
wrong.

    A comment: why don't A.I. "people" use the human mind as a model, for
    better or for worse, and not try to label it as "flawed" or "perfect?"
    In the first place, it is like saying that something big (like the
    U.S.  Government) is "flawed;" this kind of thing can only be proven
    under *certain*conditions*, and is unlikely to hold for all possible
    "states" that the world can be in.

But the U.S. Government IS flawed...

    In the second place, making that
    kind of judgement would seem to be fruitless given all that we
    *do*not* know about the human brain/mind/soul.

Back in the middle ages, we didn't know much about the Black Plague,
but it was obvious that someone who caught it became pretty flawed 
pretty fast.  Furthermore, this small understanding was considered
sufficient grounds to inflict the social snubs of not associating 
with such a person.  

It is incredibly arrogant to declare that we must not make any
judgements until we know everything.  The whole point of having
a human mind rather than a rutabaga is that you *are* able to make
judgements in the absence of complete information.  Brains evolving in
a natural setting have always had to make *life-and-death* decisions 
on the spur of the moment with whatever information was available.
Is that large furry creature dangerous?  You've never seen a grizzly
bear before.  No time to consult the views of all the world's ancient
religions on the subject...

    I feel that we as humans just do not know diddley about the world
    around us, and to say it is flawed is a naive statement. 

To say that it is not flawed is just simply idiotic.  If you apply
enough sophistry you may manage to get the conversation to a level 
where the original statement is meaningless.  For example, there are
(or may be) no "flawed" atoms in a broken radio.  But to change the
level of discussion as a rhetorical device is tantamount to lying.
To do it without realizing you are doing it is tantamount to
gibberish.

    Sorry if the above offends anyone...

It offends me greatly.  The anti-scientific mentality is an emotional
excuse used to avoid thinking clearly.  It would be much more honest
to say "I don't want to think, it's too hard work."  Can't you see the
contradiction involved in criticizing someone for exercising his
judgement?  

The champions of irrationality, mysticism, and superstition have
emotional problems which bias their cognitive processes.  Their minds
are flawed.

--JoSH

waldau@kuling.UUCP (Mattias Waldau) (10/01/87)

In article <178@usl> khl@usl.usl.edu.UUCP (Calvin Kee-Hong Leung) writes:
>Provided that we have the necessary technology  to  build  robots
>that  are highly intelligent; they are efficient and reliable and
>they do not possess any "bad" characteristic that man has.   Then
>what  will be the roles man plays in the society where his intel-
>ligence can be viewed as comparatively "lower form"?
>
One of the short stories in Asimov's "I, Robot" is about the problem
mentioned in the previous paragraph.  It is about a robot and two humans
on a space station near our own sun.  I cannot tell more, otherwise I
would spoil your fun.  It is very good!

tanner@tut.cis.ohio-state.edu (Mike Tanner) (10/01/87)

In article <270@uwslh.UUCP> lishka@uwslh.UUCP (Christopher Lishka) writes:
>In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
>>We all admit that the human mind is not flawless.  Bias decisions...
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> [the underscored bit above indicates a number of faulty assumptions,
> e.g., that it makes sense to talk about "flaws" in the mind.]
>

I liked this reply.  Whether the problem is "western" philosophy or not, I'm
not sure.  It may be true for the casual AI dabbler.  I.e., the average
intelligent person on first thinking or hearing of the topic of AI will often
say things like, "But people make mistakes, do you really want to build
human-like machines?"

Within AI itself this attitude manifests itself as rampant normativism.
Somebody adopts a model of so-called correct reasoning, e.g., Bayesian
decision theory, logic, etc., and then assumes that the abundant empirical
evidence that people are unable to reason this way shows human reasoning to be
flawed.  These people want to build "correct" reasoning machines.
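The kind of normative model meant here can be made concrete with Bayes'
rule on a textbook diagnosis problem, the sort of case where experiments
show people systematically neglect base rates.  The numbers below are
made up for illustration:

```python
# Bayes' rule: the "correct reasoning" benchmark that normativists adopt.
# Illustrative numbers only.

def posterior(prior, p_pos_given_true, p_pos_given_false):
    """P(hypothesis | positive evidence) by Bayes' rule."""
    p_pos = prior * p_pos_given_true + (1 - prior) * p_pos_given_false
    return prior * p_pos_given_true / p_pos

# A rare condition (1% base rate), a fairly accurate test
# (90% hit rate, 10% false-alarm rate):
p = posterior(prior=0.01, p_pos_given_true=0.9, p_pos_given_false=0.1)
print(round(p, 3))  # about 0.083
```

The normatively correct answer is only about 8%, while subjects in the
classic experiments tend to report something near the 90% hit rate; that
gap is the "abundant empirical evidence" of deviation referred to above.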

I say, OK, go ahead.  But that's not what I want to do.  I want to understand
thinking, intelligent information processing, problem-solving, etc.  And I
think the empirical evidence is trying to tell us something important.  I am
not sure just what.  It seems clear that thinking is not logical (which is not
to say "flawed" or "incorrect", merely "not logical").  An interesting
question is, "why not?"  People are able to use language, solve problems -- to
think -- but is that in spite of illogic or because of it or neither?  I don't
think we're going to understand intelligence by adopting an a priori correct
model and trying to build machines that work that way (except by negative
results).  

If you want to say that what I'm doing is not AI, fine.  I think it is, but if
you'll give me a better name I'll take it and leave AI to the logicians.  It
is not psychology (my experiments involve building programs and generally
thinking about computational issues, not torturing college freshmen).  And I'm
not really interested in duplicating the human mind, it's just that the human
mind is the only intelligence I know.


-- mike tanner

Dept. of Computer and Info. Science         tanner@ohio-state.arpa
Ohio State University                       ...cbosgd!osu-eddie!tanner
2036 Neil Ave Mall
Columbus, OH 43210

morgan@uxe.cso.uiuc.edu.UUCP (10/01/87)

Maybe you should approach it as a scientist, rather than an engineer.  Think
of the physicists: they aren't out to fix the universe, or construct an
imitation; they want to understand it.  What AI really ought to be is a
science that studies intelligence, with the goal of understanding it by
rigorous theoretical work, and by empirical study of
systems that appear to have intelligence, whatever that is.  The best work
in AI, in my opinion, has this scientific flavor.  Then it's up to the
engineers (or society at large) to decide what to do with the knowledge
gained, in terms of constructing practical systems.

marty1@houdi.UUCP (10/01/87)

In article <270@uwslh.UUCP>, lishka@uwslh.UUCP (Christopher Lishka) writes:
> ***Warning: FLAME ON***
> 
> In article <549@csm9a.UUCP> bware@csm9a.UUCP (Bob Ware) writes:
> >>We all admit that the human mind is not flawless.  Bias decisions...
>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> The expression "we all" does not apply to me, at very least.  Some of
> us (at least myself)like to believe that the human mind should not be
> considered to be either flawed or flawless...it only "is."  ....

Some interesting points here.

Point one, the human mind is in fact a phenomenon, and phenomena are
neither flawed nor perfect, they are the stuff that observation is made
of.  Score one for Lishka.

Point two, we keep using the human mind as a tool, to solve problems. 
As such, it is not merely a phenomenon, but a means to an end, and is
subject to judgments of its utility for that purpose.  Now we can say
whether it is perfect or flawed.  Obviously, it is not perfect, since
we often make mistakes when we use it.  Score one for Ware.

Point three, when we try to make better tools, or tools to supplement
the human mind, all these improvements are created by the human mind. 
In fact, the purposes of these tools are created by the human mind. 
The human mind is thus the ultimate reasoning tool. Score one for the
human mind.

You might say the same of the human hand.  As a phenomenon, it exists.
As a tool, it is imperfect.  And it is the ultimate mechanical tool,
since all mechanical tools are directly or indirectly made by it.

It is from these multiple standpoints that we derive the multiple goals
of AI: to study the mind, to supplement the mind, and to serve the mind.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

czhj@vax1.UUCP (10/03/87)

In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
>
>Maybe you should approach it as a scientist, rather than an engineer.  Think
>...
>What AI really ought to be is a
>science that studies intelligence, with the goal of understanding it by
>rigorous theoretical work, and by empirical study of
>systems that appear to have intelligence, whatever that is.  The best work
>in AI, in my opinion, has this scientific flavor.  Then it's up to the
>engineers (or society at large) to decide what to do with the knowledge
>gained, in terms of constructing practical systems.


I wholeheartedly support this idea.  I'd go even further however, and say that 
most "AI" research is a huge waste of time.  I liken it to using trial and
error methods like those used by Edison which led him to try thousands of
possibilities before hitting one that made a good lightbulb.  With AI, the
problem is infinitely more complicated, and the chance of finding a solution
by blind experimentation is nil.

On the other hand, if we take an educated approach to the problem, and study
'intelligent' systems, we have a much greater chance of solving the mysteries
of the mind.

Some of you may remember my postings from last year where I expounded on the
virtues of cognitive psychology.  After investigating research in this field
in more detail, I came up very disillusioned.  Here is a field of study in 
which the sole purpose is to scientifically discover the nature of thought.
Even with some very bright people working on these problems, I found that the
research left me cold.  Paper after paper describes isolated phenomena, then goes
on to present some absurdly narrow-minded theory of how such phenomena could
occur.

I've reached the conclusion that we cannot study the mind in isolated pieces
which we try to put together to form a whole.  Rather, we have to study
the interactions between the pieces in order to learn about the pieces 
themselves.  For example, take vision research.  Many papers have been written
about edge detection algorithms, possible geometries, and similarly
reductionist algorithms for making sense of scenes.  I assert that the
interplay between the senses and the experiential memory is huge.  Further,
because of these interactions, no simple approach will ever work well.  In
fact, what we need is to study the entire set of processes involved in seeing
before we can determine how we perceive objects in space.
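For reference, the "edge detection algorithms" being criticized as
reductionist are typically simple local operators like the following
sketch, which marks pixels where the horizontal intensity gradient is
large.  This is an illustrative minimal version, not any particular
published algorithm, and the threshold is an arbitrary assumption:

```python
# Minimal gradient-based edge detector: a pixel is an "edge" when the
# intensity difference between its horizontal neighbors exceeds a threshold.

def edges(image, threshold=50):
    """Return a same-sized grid with 1 where |I(y, x+1) - I(y, x-1)| > threshold."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            if abs(image[y][x + 1] - image[y][x - 1]) > threshold:
                out[y][x] = 1
    return out

# A dark region next to a bright region: only the boundary columns light up.
img = [[0, 0, 0, 200, 200, 200]] * 3
print(edges(img)[0])  # [0, 0, 1, 1, 0, 0]
```

The operator knows nothing about what the scene contains, which is exactly
the point being made: it uses no experiential memory at all.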

This is but a single example of the complexity of studying such aspects of the
mind.  I found that virtually every aspect of cognition has such problems. 
That is, no aspect is isolated!

Because of this immensely complex set of interactions, I believe that the 
connectionist theories are heading in the right direction.  However, these
theories are somewhat too reductionistic for my tastes as well.  I want to 
understand how the mind works at a high level (if possible).  The actual
implementation is the easy part.  The understanding is the hard part.

---Ted Inoue

lishka@uwslh.UUCP (10/04/87)

In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
>
>Maybe you should approach it as a scientist, rather than an engineer.  Think
>of the physicists: they aren't out to fix the universe, or construct an
>imitation; they want to understand it.  

I think this is a good point.  I have always thought that Science was
a method used to predict natural events with some accuracy (as opposed
to guessing).  Whether this is understanding, well I guess that
depends on one's definition.  I like this view because it (to me at
least) parallels the attempts by nearly all (if not all) religions to
do the same thing, and possibly provide some form of meaning to this
strange world we live in.  It also opens the possibility of sharing
views between scientists and other people explaining the world they
see with their own methods.

>What AI really ought to be is a
>science that studies intelligence, with the goal of understanding it by
>rigorous theoretical work, and by empirical study of
>systems that appear to have intelligence, whatever that is.  The best work
>in AI, in my opinion, has this scientific flavor.  Then it's up to the
>engineers (or society at large) to decide what to do with the knowledge
>gained, in terms of constructing practical systems.

I like this view also, and feel that A.I. might go a little further in
studying other areas in conjunction with the human mind.  Maybe this
isn't pure A.I., but I'm not sure what pure A.I. is.  One interesting
note is that maybe the people who are implementing various Expert
Systems (which grew out of A.I. research) for real-world applications
are the "engineers" of whom morgan@uxe speaks.  And more power to
both the "scientists" and "engineers" then, and those in the gray area
in between.  It's good to be able to work together like this, and not
have the "scientists" only come up with research that cannot be
applied.

Disclaimer: I am sitting here typing this because my girlfriend's cat is
holding a gun at my head, and am in no way responsible for the content
;-)

[If anyone really wants to flame me, please mail me; if you really
think there is some benefit in posting the flame, go ahead.  I reply
to all flames, but if my reply doesn't get to you, it is because I am
not able to find a reliable mail path (which is too damned often!)]

					-Chris

-- 
Chris Lishka                    /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
                                \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka

alan@pdn.UUCP (Alan Lovejoy) (10/04/87)

In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
/Maybe you should approach it as a scientist, rather than an engineer.  Think
/of the physicists: they aren't out to fix the universe, or construct an
/imitation; they want to understand it.  What AI really ought to be is a
/science that studies intelligence, with the goal of understanding it by
/rigorous theoretical work, and by empirical study of
/systems that appear to have intelligence, whatever that is.  The best work
/in AI, in my opinion, has this scientific flavor.  Then it's up to the
/engineers (or society at large) to decide what to do with the knowledge
/gained, in terms of constructing practical systems.


The word "artificial" implies either an imitation or synthetic object, 
or the general/abstract laws governing an entire class of such objects.
The question is, does "artificial intelligence" mean "synthetic and/or
imitation intelligence" (most computer programs currently fall into this
category :-) ) or "real intelligence exhibited by artificial systems"?
Is AI mostly concerned with the *faking* of intelligence, with intelligence 
per se, or with intelligence as exhibited by artificial systems?
Given the current state of the art, perhaps it should be called
"Real Stupidity".  (Only half :-) ).

The "scientific" study of intelligence would involve such subfields as
cognition, semantics, linguistics, semiotics, psychology, mathematics,
cybernetics and a host of other disciplines I can't think of right now,
some of which probably don't exist yet.  Creating an intelligent
"artifact" (artificial intelligence) is only a "scientific" endeavor to
the extent it serves as experimental proof (or refutation) of some
*scientific* theory, or else as the raw data from which a theory is induced.

If the purpose of AI is to build a computer just as smart as a human
being because that would be a useful tool, then it's engineering.  
If the purpose is to prove or induce theories about intelligence, then it's 
scientific.  It appears that both cases probably apply. 
 
It is disturbing how often "science" is confused with "technology"
and/or "engineering".  People also tend to forget that science involves
both the formulating of theories AND experiments.  Experiments often
require a great deal of mundane (and sometimes not so mundane)
engineering work.  AI came about because computers opened up a whole
new way to experimentally test theories about intelligence.

Physicists might very well try to construct an "artificial" universe,
if it would help to prove or induce a physical theory (the "Big Bang",
for instance).  They'd probably require a lot of help from the engineers, 
though (and probably a permit from the EPA :-) ).

--alan@pdn

eyal@WISDOM.BITNET (Eyal mozes) (10/04/87)

> I believe that those "bad" characteristics of humans are necessary
> evils of intelligence.  For example, although we still don't understand
> the function of emotion in the human mind, the psychologist Toda says that
> it is a device for survival.  When an urgent danger is approaching, you
> don't have much time to think.  You must PANIC!  Emotion is a meta-
> inference device to control your inference mode (mainly of resources).
>
> If we ever make a really intelligent machine, I bet the machine
> will also have the "bad" characteristics.  In summary, we have to study
> why humans have those characteristics to understand the mechanism of
> intelligence.

I think what you mean by "the bad characteristics" is, simply, free
will. Free will includes the ability to fail to think about some
things, and even to actively evade thinking about them; this is the
source of biased decisions and of all other "flaws" of human thought.

Emotions, by themselves, are certainly not a problem; on the contrary,
they're a crucial function of the human mind, and their role is not
limited to emergencies. Emotions are the result of subconscious
evaluations, caused by identifications and value-judgments made
consciously in the past and then automatized; their role is not "to
control your inference mode", but to inform you of your subconscious
conclusions. Emotional problems are the result of the automatization of
wrong identifications and evaluations, which may have been reached
either because of insufficient information or because of volitional
failure to think.

A theory of emotions and of free will, explaining their role in the
human mind, was developed by Ayn Rand, and the theory of free will was
more recently expanded by David Kelley.

Basically, the survival value of free will, and the reason why the
process of evolution had to create it, is man's ability to deal with a
wide range of abstractions.  A man can form concepts, gain abstract
knowledge, and plan actions on a scale that is in principle unlimited.
He needs some control on the amount of time and effort he will spend on
each area, concept or action. But because his range is unlimited, this
can't be controlled by built-in rules such as "always spend 1 hour
thinking about computers, 2 hours thinking about physics" etc.; man has
to be free to control it in each case by his own decision.  And this
necessarily implies also freedom to fail to think and to evade.

It seems, therefore, that free will is inherent in intelligence. If we
ever manage to build an intelligent robot, we would have to either
narrowly limit the range of thoughts and actions possible to it (in
which case we could create built-in rules for controlling the amount of
time it spends on each area), or give it free will (which will clearly
require some great research breakthroughs, probably in hardware as well
as software); and in the latter case, it will also have "the bad
characteristics" of human beings.

        Eyal Mozes

        BITNET:                 eyal@wisdom
        CSNET and ARPA:         eyal%wisdom.bitnet@wiscvm.wisc.edu
        UUCP:                   ...!ihnp4!talcott!WISDOM!eyal

wcalvin@well.UUCP (William Calvin) (10/04/87)

Making AI a real science suffers from the attitude of many of its founders:
they'd rather re-invent the wheel than "cheat" by looking at brain research.
While Minsky's SOCIETY OF MIND is very interesting, one gets the impression
that he hasn't looked at neurophysiology since the 1950s.  Contrast that to
Braitenberg's little book VEHICLES (MIT Press 1984), which summarizes a lot
of ideas kicking around neurobiology at the simple circuit level.
	The other thing strikingly missing, besides a working knowledge of
neurobiology beyond the Hubel-Wiesel level, is a knowledge of evolutionary
biology beyond the "survival of the fittest" level.  Emergent properties are
a big aspect of complex systems, but one seldom hears much talk about them
in AI.
William H. Calvin
University of Washington NJ-15, Seattle WA 98195

spf@moss.ATT.COM (10/05/87)

In article <493@vax1.UUCP> czhj@vax1.UUCP (Ted Inoue) writes:
}Some of you may remember my postings from last year where I expounded on the
}virtues of cognitive psychology.  After investigating research in this field
}in more detail, I came up very disillusioned.  Here is a field of study in 
}whose sole purpose is to scientifically discover the nature of thought.
}Even with some very bright people working on these problems, I found that the
}research left me cold.  Paper after paper describes isolated phenomena, then goes
}on to present some absurdly narrow-minded theory of how such phenomena could
}occur.

Perhaps you're right; there is no doubt that the system in question
is highly complex and interconnected.

However, the same claim can be made about the domain of physics.  And
(in the west at least) research and progress in physics has been built
upon small pieces of the problem, complete with small theories (which
usually seemed incredibly naive when disproved).  Now another
approach to physics is possible (see Capra's "Tao of Physics"). It
would probably not be observational (which I require of any science)
but introspective instead.  Me?  I like both.  When I do science, I
build up from measurable components, creating and discarding petty
theories along the way.  When I do zen, it's another matter entirely
(no pun intended).

The principal difficulty in cognitive science is that it is in its
infancy.  I think that psychology is today where physics was in
Newton's time.  And a LOT of "narrow minded" theories came and went in
Newton's time.  Including Newton's theories.

Steve Frysinger

cycy@isl1.ri.cmu.edu (Christopher Young) (10/06/87)

In article <270@uwslh.UUCP>, lishka@uwslh.UUCP (Christopher Lishka) writes:
> To me this seems to be one of many problems in A.I.: the assumption
> that the human mind can be looked at as a machine, and can be analyzed
> as having flaws or not, and subsequently be fixed or not.
> 
> A comment: why don't A.I. "people" use the human mind as a model, for
> better or for worse, and not try to label it as "flawed" or "perfect?"

I guess I basically agree, though I certainly feel that there are some
people whose reasoning is either flawed or barely existent, and it is true
in fact that physiological parameters can affect thought, and that these
parameters can be adjusted in certain ways to cause depression, and to recover
from depression (etc). So in that way, one might say that human minds may
become flawed, I suppose.

On the other hand, since we pretty much define "mind" based on human ones,
it's hard to say that they are flawed. If there were something "perfect"
(whatever that might be), it might very well not be a mind.

I do believe that there is some mechanism to minds (or perhaps a variety of
them). One reason why I am interested in AI (perhaps this is very Cog. Sci.
of me, actually) is because I think perhaps it will help elucidate the ways
in which the human mind works, and thus increase our understanding of human
behaviour. I don't know; perhaps I am naive in that respect. At any rate,
I do try to use the human mind as a model in at least some of what I am doing.

Just thought I'd throw in my two cents.
-- 

					-- Chris. (cycy@isl1.ri.cmu.edu)

I know you believe you understand what you think I said, but I am not sure
you realise that what you heard is not what I meant.

cycy@isl1.ri.cmu.edu (Christopher Young) (10/06/87)

In article <1330@houdi.UUCP>, marty1@houdi.UUCP (M.BRILLIANT) writes:
> Point two, we keep using the human mind as a tool, to solve problems. 
> As such, it is not merely a phenomenon, but a means to an end, and is
> subject to judgments of its utility for that purpose.  Now we can say
> whether it is perfect or flawed.  Obviously, it is not perfect, since
> we often make mistakes when we use it.  Score one for Ware.

This is true. However, this is not the only use for the human mind. The
human mind is also used to imagine fanciful dreams, to love and hate and
otherwise feel emotion, and to make value judgements even when there is no
real logical reason for choosing option one over option two. So perhaps it
can be flawed in one way, but not in others (since it is difficult to say
what is flawed in some of these instances).
-- 

					-- Chris. (cycy@isl1.ri.cmu.edu)

I know you believe you understand what you think I said, but I am not sure
you realise that what you heard is not what I meant.

ong@uiucdcsp.cs.uiuc.edu (10/06/87)

/* Written  1:14 am  Oct  3, 1987 by czhj@vax1.UUCP in uiucdcsp:comp.ai */
/* ---------- "Re: Goal of AI: where are we going?" ---------- */
In article <46400008@uxe.cso.uiuc.edu> morgan@uxe.cso.uiuc.edu writes:
>
>Maybe you should approach it as a scientist, rather than an engineer.  Think
>...
>What AI really ought to be is a
>science that studies intelligence, with the goal of understanding it by
>rigorous theoretical work, and by empirical study of
>systems that appear to have intelligence, whatever that is.  The best work
>in AI, in my opinion, has this scientific flavor.  Then it's up to the
>engineers (or society at large) to decide what to do with the knowledge
>gained, in terms of constructing practical systems.


I wholeheartedly support this idea.  I'd go even further, however, and say that 
most "AI" research is a huge waste of time.  I liken it to using trial and
error methods like those used by Edison which led him to try thousands of
possibilities before hitting one that made a good lightbulb.  With AI, the
problem is infinitely more complicated, and the chance of finding a solution
by blind experimentation is nil.

On the other hand, if we take an educated approach to the problem, and study
'intelligent' systems, we have a much greater chance of solving the mysteries
of the mind.

Some of you may remember my postings from last year where I expounded on the
virtues of cognitive psychology.  After investigating research in this field
in more detail, I came up very disillusioned.  Here is a field of study in 
whose sole purpose is to scientifically discover the nature of thought.
Even with some very bright people working on these problems, I found that the
research left me cold.  Paper after paper describes isolated phenomena, then goes
on to present some absurdly narrow-minded theory of how such phenomena could
occur.

I've reached the conclusion that we cannot study the mind in isolated pieces
which we try to put together to form a whole.  Rather, we have to study
the interactions between the pieces in order to learn about the pieces 
themselves.  For example, take vision research.  Many papers have been written
about edge detection algorithms, possible geometries, and similarly
reductionist algorithms for making sense of scenes.  I assert that the
interplay between the senses and the experiential memory is huge.  Further,
because of these interactions, no simple approach will ever work well.  In
fact, what we need is to study the entire set of processes involved in seeing
before we can determine how we perceive objects in space.
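As a concrete illustration of the kind of isolated, reductionist algorithm being criticized here, a bare-bones edge detector can be written in a few lines: mark a pixel as an edge wherever the local intensity gradient exceeds a threshold. This is only a sketch; the image and threshold below are illustrative assumptions, not anything from the post.

```python
# Minimal sketch of a reductionist edge detector: a pixel is an "edge"
# when the horizontal or vertical intensity difference to its neighbor
# exceeds a threshold.  Image and threshold are illustrative.

def edge_map(img, threshold=1):
    """Return a 0/1 map marking pixels whose local gradient exceeds the threshold."""
    rows, cols = len(img), len(img[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            dx = abs(img[r][c + 1] - img[r][c])   # horizontal difference
            dy = abs(img[r + 1][c] - img[r][c])   # vertical difference
            if max(dx, dy) > threshold:
                edges[r][c] = 1
    return edges

# A 4x4 image with a sharp vertical boundary between columns 1 and 2:
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(edge_map(image))
```

Nothing in this sketch consults context or memory, which is exactly the point of the criticism: the algorithm operates purely on local pixel differences.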

This is but a single example of the complexity of studying such aspects of the
mind.  I found that virtually every aspect of cognition has such problems. 
That is, no aspect is isolated!

Because of this immensely complex set of interactions, I believe that the 
connectionist theories are heading in the right direction.  However, these
theories are somewhat too reductionistic for my tastes as well.  I want to 
understand how the mind works at a high level (if possible).  The actual
implementation is the easy part.  The understanding is the hard part.

---Ted Inoue
/* End of text from uiucdcsp:comp.ai */

litow@uwm-cs.UUCP (Dr. B. Litow) (10/06/87)

> 
> The principal difficulty in cognitive science is that it is in its
> infancy.  I think that psychology is today where physics was in
> Newton's time.  And a LOT of "narrow minded" theories came and went in
> Newton's time.  Including Newton's theories.
> 
> Steve Frysinger

Newton's primary contribution in Principia is a method. The method has NOT
been modified at its core in the elapsed three centuries. It is still at
the basis of all western physical science. Newton understood its importance
as V.Arnold has pointed out in his book Geometric Methods in the Theory of
Ordinary Differential Equations (Springer). The method is very simple to
state: pose and solve differential equations for the phenomena. Prior to
anything else in western physics there is this method. In this respect all
of quantum mechanics is only a conservative (almost in the sense of logic)
extension of rational mechanics. Incidentally, rational mechanics was not
developed explicitly by Newton. It is a product of the Enlightenment
researchers, e.g. the Bernoullis and especially Euler. Underlying the method
is something nameless which when it is finally investigated (the time is
approaching) will be a decisive element in actually showing what is really
conveyed by the adjective "western".
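The method Litow describes, "pose and solve differential equations for the phenomena," can be sketched numerically in a few lines. The example below (Euler's method on exponential decay) is my own illustrative choice, not from the post.

```python
# Sketch of "pose and solve differential equations":
# Euler's method on dy/dt = -y, y(0) = 1, whose exact solution is e^(-t).
# The step count and interval are illustrative choices.

import math

def euler(f, y0, t_end, steps):
    """Integrate dy/dt = f(t, y) from t=0 to t_end with fixed-size steps."""
    y, t = y0, 0.0
    dt = t_end / steps
    for _ in range(steps):
        y += dt * f(t, y)   # advance by the local slope
        t += dt
    return y

approx = euler(lambda t, y: -y, 1.0, 1.0, 10000)
print(abs(approx - math.exp(-1)) < 1e-3)  # prints True
```

The "pose" step is writing down `f`; everything after that is mechanical, which is why the method generalized so readily across western physical science.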

ramarao@umn-cs.UUCP (Bindu Rama Rao) (10/07/87)

	Is the Human mind flawed?

Can we pass such a judgement without knowing anything about the human mind?

Do we really understand how the mind works?

Aren't we trying to model the mind because we are in awe of all the
power the mind possesses?

Is the mind flawed just because humans make decisions based on
their emotional involvement? Isn't the mind used for analysis only
while emotions play a major part in formulating the final decision?

	Let's not hastily dismiss the human mind as flawed.

-bindu rama rao.

marty1@houdi.UUCP (M.BRILLIANT) (10/09/87)

In article <2281@umn-cs.UUCP>, ramarao@umn-cs.UUCP (Bindu Rama Rao) writes:
> 
> 	Is the Human mind flawed?
> 
> Can we pass such a judgement without knowing anything about the human mind?
> 
> Do we really understand how the mind works?

Let's draw an analogy.  You are driving an X-Brand car from Pittsburgh to
Atlanta and halfway there it bursts into flame.  Without knowing how the
car works you can conclude it was flawed.

Mr X. goes to an employment interview and gets angry or flustered and
says something that causes him to be rejected.  Without knowing how his
mind works you can conclude it was flawed.

> Aren't we trying to model the mind because we are in awe of all the
> power the mind possesses?

Of course we are.  But saying the mind is enormously powerful is not
contradicted by saying it's not perfect.  A car with a big engine is
enormously powerful and almost certainly not perfect.

> Is the mind flawed just because humans make decisions based on
> their emotional involvement? Isn't the mind used for analysis only
> while emotions play a major part in formulating the final decision?

Factually, we know the mind is flawed because we observe that it does
not do what we expect of it.  As a hypothesis, we can test the idea
that it is flawed because of the action of what we call emotions.  As
a further hypothesis, we can also test the idea that emotions motivate
all human activity.  Personally, I like both those hypotheses.

Question of definition here: do we agree that emotion, reason,
consciousness, will, etc., are all functions of the mind?

> 	Let's not hastily dismiss the human mind as flawed.

Who's dismissing it?  I know my car is flawed, but I can't afford to
dismiss it.  I'm not dismissing my mind either.  How could I?  :-)

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201)-949-1858
Holmdel, NJ 07733	ihnp4!houdi!marty1

khl@usl (Calvin K. H. Leung) (10/10/87)

In article <1270@isl1.ri.cmu.edu> cycy@isl1.ri.cmu.edu (Christopher Young) writes:
> I do believe that there is some mechanism to minds (or perhaps a variety of
> them). One reason why I am interested in AI (perhaps this is very Cog. Sci.
> of me, actually) is because I think perhaps it will help elucidate the ways
> in which the human mind works, and thus increase our understanding of human
> behaviour.

I agree with the idea that there must be some mechanisms that our
minds are using.  But the different reasoning methods (probabilistic
reasoning, for instance) that we are studying in the area of AI are
not the way one reasons: we never use Bayes' Theorem in our thinking
process.  The use of those reasoning methods, in my point of view,
will never help increase our understanding of human behavior, because
our minds just don't work that way.
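For reference, the Bayes' Theorem the poster mentions takes only a few lines to state in code: P(H|E) = P(E|H)P(H) / P(E). The disease-testing numbers below are illustrative assumptions, not anything from the post.

```python
# Bayes' Theorem: posterior = likelihood * prior / evidence.
# The numbers in the usage example are illustrative only.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H given evidence E."""
    # Total probability of the evidence, over H and not-H:
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# A test that is 99% sensitive and 95% specific, for a condition
# with 1% prevalence:
posterior = bayes(p_h=0.01, p_e_given_h=0.99, p_e_given_not_h=0.05)
print(round(posterior, 3))  # prints 0.167
```

That a positive result yields only a ~17% posterior is exactly the sort of calculation people notoriously get wrong by intuition, which is arguably evidence for the poster's claim that we do not natively reason this way.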


                                           Calvin K H Leung

-- 
Calvin K. H. Leung                               USL P.O. Box 41821
                                                 Lafayette, LA 70504
khl@usl.usl.edu.csnet                            318-237-7128

cik@l.cc.purdue.edu (Herman Rubin) (10/10/87)

In article <1368@houdi.UUCP>, marty1@houdi.UUCP (M.BRILLIANT) writes:
> In article <2281@umn-cs.UUCP>, ramarao@umn-cs.UUCP (Bindu Rama Rao) writes:
> > 
> > 	Is the Human mind flawed?
> > 
> > Can we pass such a judgement without knowing anything about the human mind?
> > 
> > Do we really understand how the mind works?
> 
The human mind is definitely flawed, very fortunately.  I do not see how an
intelligent entity can fail to be flawed if it has only the computing power
of the universe available.

I define intelligence as the ability to deal with a _totally unforeseen
situation_.  It is easy to give examples in which the amount of information
needed to effect a logical decision would require more memory than the size
of the universe permits.  Therefore, dealing with such a situation _requires_
that such extralogical procedures as intuition, judgment, somewhat instinctive
reactions, etc., must be involved.  That is not to say that one cannot find out
that certain factors are of lesser importance.  But the decision that these
less important factors can or should be ignored is still a matter of judgment.
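Rubin's point about memory exceeding the size of the universe can be made concrete with a toy count. The figure of roughly 10^80 atoms in the observable universe is the usual rough estimate; the branching numbers are illustrative assumptions.

```python
# Sketch of the combinatorial argument: the number of distinct cases in
# even a modest decision problem quickly exceeds any physical memory.
# 10**80 is the customary rough estimate of atoms in the observable universe.

ATOMS_IN_UNIVERSE = 10 ** 80

def states(choices_per_step, steps):
    """Number of distinct decision sequences for a fixed branching factor."""
    return choices_per_step ** steps

# 10 options at each of 100 successive decisions:
n = states(10, 100)
print(n > ATOMS_IN_UNIVERSE)  # prints True: the full case table cannot be stored
```

Since no lookup table of that size can exist, any agent facing such a problem must prune or approximate, which is Rubin's "extralogical procedures" in computational terms.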

Therefore, an intelligent treatment of a problem of even moderate complexity
requires that nonrational procedures must be used.  These cannot be correct;
at most we can determine in _some_ cases that they are not too bad.  In other
cases, we can only hope that we are not too far off.

There is no "rational" intelligent entity for moderately difficult problems!

-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

montgome@udel.EDU (Kevin Montgomery) (10/10/87)

>> In article <2281@umn-cs.UUCP>, ramarao@umn-cs.UUCP (Bindu Rama Rao) writes:
>> > 	Is the Human mind flawed?

C'mon guys, lighten up for a sec.  Flawed implies a defect from its
design.  Therefore, if someone's mind doesn't do what it's designed
to do (namely help keep the organism alive, etc), THEN it's flawed 
(ex: schizos, manics, etc).  A "normal" person does NOT have a flawed
mind, just an illogical one.  What do you expect when the old brain
(producing emotions, feelings and the like) is still in the design?
So the $64K answer is: no, the mind is not (usually) flawed, but it
is illogical.  Is having an illogical mind a problem?  Hell no!  It's
what keeps organisms going- drives for self-preservation, procreation,
etc.  While striving to be logical IS (i feel) a noble aspiration,
there's no way to totally shut out something like emotions so deeply 
ingrained into the mental architecture.  (one may even argue that 
if we were to consider all things logically, then civilization would
die out rather quickly, but i'm not gonna touch that one)  At any rate,
if you want to do some neato cognitive modelling stuff, then you've
got to (eventually) incorporate the functions of the old brain (illogic) 
with the logical processes we normally consider.  If you're gonna
do some neato expert system stuff involving pure logic, then don't worry
about it.  `kay?    `kay.


-- 
Kevin Desperately-trying-to-get-into-Stanford Montgomery

krulwich@gator..arpa (Bruce Krulwich) (10/11/87)

In article <1368@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>Factually, we know the mind is flawed because we observe that it does
>not do what we expect of it.

If I expect my car to take me to the moon and it doesn't, is it
flawed??  No, rather my expectation of it is wrong.  Similarly, we
shouldn't say that the mind is flawed until we're sure that our
definition of "intelligence" is perfect.

> As a hypothesis, we can test the idea
>that it is flawed because of the action of what we call emotions.  

Why do you assume that emotions are a flaw??  Just maybe emotions are
at the core of intelligence, and logic is just a side issue.

>As
>a further hypothesis, we can also test the idea that emotions motivate
>all human activity.  Personally, I like both those hypotheses.

If you think that emotions motivate all human activity, why do you
dismiss emotions as a flaw in the mind??  It seems to me that human
activity is a lot more "intelligent" than any AI system as of yet.

>Question of definition here: do we agree that emotion, reason,
>consciousness, will, etc., are all functions of the mind?

Yes, and not necessarily "flawed" ones.


Bruce Krulwich

ARPA:   krulwich@yale.arpa		      	Being true heros,
     or krulwich@cs.yale.edu		          they lept into action.
Bitnet:	krulwich@yalecs.bitnet			        	(Bullwinkle)
UUCP:   {harvard, seismo, ihnp4}!yale!krulwich  (Any B-CC'ers out there??)

gilbert@hci.hw.ac.uk (Gilbert Cockton) (10/22/87)

In article <15196@topaz.rutgers.edu> josh@topaz.rutgers.edu (J Storrs Hall) writes:
>In Western thought it has been realized at long and arduous last that
>the appeal to authority is fallacious.

Tell that to the judge. This understanding of Western social
practices seems weak given its confusion of intellectual idealism with
social reality. Authority counts for far more than rationality or science.

>Experiment works; the real world exists;

Not true all the time - scientific method is flawed, as any sophomore
who's studied epistemology can tell you. The modern command over nature
is due, not to a slavish and unimaginative application of statistical
inference and hypothetico-deductive reasoning, but to an engagement
which combines rigour, rationality (self-critical candour) and
imagination. This view of reality and experiment is very dated and it's
time some of us ignored the off-the-cuff dogma of our chemistry and
physics teachers (rarely real people :-) ) and caught up with modern
Western thinking (and eternal practice).

> objective standards can be applied.  Even to people.

They must be proved objective first though, so this argument is empty.
What is an objective standard? I admit the value of the idea,
otherwise our concepts of morality would be weakened. But the term is
not to be used lightly. "flawed" is not an objective standard, though
it can be defined idiosyncratically and after the fact to correspond
to standards which are. Calling the human mind "flawed" in essence
may be motivated by a lack of fit with an AI model - now
shouldn't this lack of fit suggest the model is flawed and not the
human mind? Note that at the end of the day, the unimaginative
application of any method is less important than the people who are
convinced, and remain convinced over the rest of their life. Science
and convincement are not one and the same, and it is the latter which
guides human life.  

>It is true that most AI researchers "believe that 
>the mind is a machine", but it seems that the alternative is to
>suggest that human intelligence has a supernatural mechanism.

No, Mind is extra/para-natural - we cannot observe it as we do nature,
and thus the values of science do not apply. More spiritual and
humanist approaches do. By the way, as a historian originally, I would
hold that humanist and spiritual views of human nature have dominated,
and continue to dominate, the public thinking on Man.  Reductionist
mechanical scientists appear to be an ugly minority who have little
*respectful* social contact outside their own self-congratulating cliques.

>The anti-scientific mentality is an emotional 
>excuse used to avoid thinking clearly.  It would be much more honest
>to say "I don't want to think, it's too hard work."  

There are other interpretations of this. I wouldn't use, for example,
predicate logic (and thus Frames, semantic nets, etc),
to describe the design process, not because it is too hard, but
because it becomes a cretinous tool when describing such a rich
human phenomenon. Thus I am not avoiding hard work; I am avoiding
*fruitless* work. Many workers in AI would do better if they stopped
trying to cram the world into an impoverished computational
representation and actually explored the rich range of non-computable
knowledge representations (e.g. the Novel, the painting, psalms, the
monographs of the liberal arts). If this is all too inaccessible to their
critical abilities, they could at least read some of the established
works of scholarship on semantics (e.g. Lyons' 2 volumes).

>The champions of irrationality, mysticism, and superstition have
>emotional problems which bias their cognitive processes. Their minds are flawed

This is very sad. I think the author is missing something, somewhere.
I cannot believe that those who share a same higher view of humanity
are misleading themselves. What do the author's friends think?
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

gilbert@hci.hw.ac.uk (Gilbert Cockton) (10/22/87)

In article <1368@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>Factually, we know the mind is flawed because we observe that it does
>not do what we expect of it.

I expect my car to fetch my shoes
I observe that my car does not fetch my shoes
My car is flawed.

I expect my dog to not move from the fire when I come to put more coal on
I observe that my dog is moving when I come to put more coal on
My dog is flawed

I expect the word foliage to mean any "leaves" on trees and shrubs
I observe that people in New England use the word to mean Autumn leaves
People in New England are flawed

Wow! This must be logic we're seeing :-)

Now for an argument based only on my understanding of what it is to
convince: We can expect nothing untoward from something we do not
fully understand at the level of a predictive model. I understand my
car, I do not understand dogs or New Englanders. 
-- 
   Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
   JANET:  gilbert@uk.ac.hw.hci    ARPA:   gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
		UUCP:	..{backbone}!mcvax!ukc!hwcs!hci!gilbert

mdl@cci632.UUCP (Michael Liss) (10/26/87)

In article <285@usl> khl@usl.usl.edu.UUCP (Calvin K. H. Leung) writes:
>I agree with the idea that there must be some mechanisms that our
>minds are using.  But the different reasoning methods (probabilistic
>reasoning, for instance) that we are studying in the area of AI are
>not the way one reasons: we never use Bayes' Theorem in our thinking
>process.  The use of those reasoning methods, in my point of view,
>will never help increase our understanding of human behavior, because
>our minds just don't work that way.

I read an interesting article recently which had the title:
"If AI = The Human Brain, Cars Should Have Legs"

The author's premise was that most of our other machines that mimic human abilities
do not do so through strict copying of our physical processes.

What we have done, in the case of the automobile, is to make use of wheels and
axles and the internal combustion engine to produce a transportation device
which owes nothing to the study of human legs.

In the case of AI, he states that artificial intelligence should not be
assumed to be the equivalent of human intelligence and thus, the dissection of
the human mind's functionality will not necessarily yield a solution to AI.

He closes with the following:
"And I suspect it [AI] will develop without reference to natural intelligence
and should so develop. And I am sure it will not replace human thinking any more than
the automobile replaces human walking."

-- 
================================================================================
"Why am I so soft in the middle when the rest of my life is so hard?" -- P.Simon

Mike Liss	{rochester, ritcv}!cci632!mdl		(716) 482-5000

murrayw@utai.UUCP (10/29/87)

In article <2072@cci632.UUCP> mdl@cci632.UUCP (Michael Liss) writes:
.
.
>I read an interesting article recently which had the title:
>"If AI = The Human Brain, Cars Should Have Legs"
>
>The author's premise was that most of our other machines that mimic human 
>abilities do not do so through strict copying of our physical processes.
>
>What we have done, in the case of the automobile, is to make use of wheels and
>axles and the internal combustion engine to produce a transportation device
>which owes nothing to the study of human legs.
>
>In the case of AI, he states that artificial intelligence should not be
>assumed to be the equivalent of human intelligence and thus, the dissection of
>the human mind's functionality will not necessarily yield a solution to AI.
>
"THE USE AND MISUSE OF ANALOGIES"

Transportation (or movement) is not a property unique to human beings.
If one were to refine the goal better, the analogy flips sides.
If the goal is to design a device that can climb rocky hills, it may
have something like legs. If the goal is to design a device that can
fly, it may have something like wings. (Okay, so they're not the same type of
wings, but what about streamlining?)

AS I UNDERSTAND IT, one goal of AI is to design systems that perform well
in areas that the human brain performs well. Current computer systems can do 
things (like add numbers) better than we can. I would not suggest creating
an A.I. system for generating telephone bills! However, don't tell me 
that understanding the human brain doesn't tell me anything about natural
language!

The more analogies I see the less I like them. However, they seem handy to
convince the masses of completely false doctrines.

e.g. "Jesus accepted food and shelter from his friends, so sign over
      your paycheck to me." (I am waiting Michael) 8-)
      
                                   Murray Watt (murrayw@utai.toronto.edu)

The views of my colleagues do not necessarily reflect my opinions.