[comp.ai.philosophy] Fresh Ideas

BINDNER@auvm.auvm.edu (10/04/90)

A few thoughts on the limits and potentials of AI.

First the limits.  I don't think AI will ever be able to really duplicate
human judgement.  It may one day be the complement of man's rational
thought, but it will never duplicate man as it will not evolve the same way.
Thus, the thinking machine will never happen and the expert system is limited
to an automated rulebook with data processing functions.  Emotion, intuition,
inspiration (gasp, a dualist) and other things which are physical or spiritual
are outside the scope of our abilities (at least I hope they are).

Next, the potentials.  Although AI will never duplicate thought it may
enhance it.  It will do this by aiding humans in their use of computers.
Managers and scientists have the ability to think, and in fact they do it
well.  What they can't do is digest the enormous amounts of data which
automation makes gatherable.  Expert systems and their successors can aid
this analysis.  However, hunches and judgement are beyond the capabilities of
automation (at least for the present) as they are non-rational.

A further potential, possibly AI's grandest, is to make computers accessible
to all.  Let me elaborate.  Nothing discourages a new user more than the
literal nature of computers.   As all hackers know, computers like exact
commands (and will accept nothing less).  Correct this problem and AI
will have served its function nicely.  An attempt has been made to work
around it with the rise of the menu driven system.  However, this is not
a true solution (though it has the same effect).  Here's what I would like
in an AI system:

   - user affection (as opposed to user friendliness).  I expect my
     PC to cuss at me if I cuss at it and compliment me if I compliment
     it.  It should know the answer I want based on context.

   - mistake correction.  If I type Logim and it needs Logon it should
     ask me "Do you mean Logon?" and if I respond Yes (or Y or sure
     depending upon how well it knows me) after a number of repeated
     trials (2 to 5, depending upon how similar the command or error is
     to other commands) it will automatically say "I assume you meant
     Logon" and implement the command without asking me.

     Instead of saying "command not understood," it would search for
     permutations of the command from the front or back.  These might
     be based on context (for instance, if the machine is not logged on,
     it would query for a logon command if expected, and if not found,
     query for synonyms).  Model high-level pseudocode:

       If command = error, search permutations
       If search positive, go to presenter
       If search negative, search synonyms
       If search positive, go to presenter
       Present command with question "do you mean ()?"
       If answer = y (or a synonym or a close permutation), execute
            and add the command or misprint to the synonym structure
       If answer = n (or a synonym or a close permutation),
            ask "what do you mean?" (in case of typos and not ignorance)
            and requery.  If after 2 tries nothing comes up, ask
            "do you wish to try something else?"  If n, etc., offer a help
            screen or menu; if y, restart.
        Loops should also be included for vulgarity, etc.
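The correction loop sketched above can be made concrete.  Here is a minimal
runnable sketch in Python (anachronistic for this discussion, but
illustrative); the command set, synonym table, and similarity threshold
are my assumptions, not part of the proposal:

```python
import difflib

# Hypothetical command set and synonym store -- illustrative only.
KNOWN_COMMANDS = {"logon", "logoff", "dir", "copy", "delete"}
SYNONYMS = {"signin": "logon", "bye": "logoff"}

def correct(cmd, learned=None):
    """Resolve a typed command; return (resolved, needs_confirmation)."""
    learned = learned if learned is not None else {}
    cmd = cmd.lower()
    if cmd in KNOWN_COMMANDS:
        return cmd, False
    # A misprint seen often enough executes without asking,
    # as the post suggests ("I assume you meant Logon").
    if cmd in learned:
        return learned[cmd], False
    if cmd in SYNONYMS:
        return SYNONYMS[cmd], False
    # "Search permutations": closest known command by edit similarity.
    matches = difflib.get_close_matches(cmd, KNOWN_COMMANDS, n=1, cutoff=0.6)
    if matches:
        return matches[0], True   # ask "do you mean ...?"
    return None, False            # fall through to a help screen or menu
```

With this sketch, correct("logim") yields ("logon", True), i.e. the system
should ask before executing; only a misprint recorded in the learned table
goes through silently.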

   - voice recognition, response, OCR and handwriting deciphering (of
     course).  I suggest a closed loop between VR and speaking with
     the system attempting to answer me in my own voice.  When the
     comparator figures a close enough match (or my ear does) it should
     be able to decipher most words.  A training vocab could be developed
     (possibly a personalized version which could be recorded once and
     plugged into any similar machine).

   - VR will make language content access easier.  This is because
     language interaction could occur all the time.  The mistake
     correction/language acquisition feature would obviously be
     incorporated into the DOS and Root systems.  A dual processor
     would also be helpful.  If it determines a job takes over 2
     minutes to run, the job will be sent to a batch "subconscious"
     while the talk system chats with the operator using every
     opportunity to build associations between concepts, i.e. if
     a new word is found it will try to put it into its synonym
     structure.  This structure would contain such things as
     emotional loading (polite to vulgar scales and sterile to
     emotive scales), tense, gender, etc.  This time might be
     used to clean up synonym ambiguities or be hooked into
     a news net which gives briefs on current events or
     discusses them (events tailored to operator from sport to
     politics to sex).  Time would also be used to identify which
     subjects are important to the operator.
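The "synonym structure" described in this item might look like the
following sketch; the field names and the 0-to-1 scales are my own
invention, chosen only to mirror the properties named above:

```python
from dataclasses import dataclass, field

# One entry per word; fields mirror the properties listed above.
@dataclass
class WordEntry:
    word: str
    synonyms: list = field(default_factory=list)
    politeness: float = 0.5   # 0.0 = vulgar ... 1.0 = polite
    emotiveness: float = 0.5  # 0.0 = sterile ... 1.0 = emotive
    tense: str = ""
    gender: str = ""

def add_synonym(lexicon, known_word, new_word):
    """During idle chat, attach a newly heard word to an existing entry."""
    entry = lexicon.setdefault(known_word, WordEntry(known_word))
    if new_word not in entry.synonyms:
        entry.synonyms.append(new_word)
    return entry
```

The idle "subconscious" time would then be spent calling add_synonym and
adjusting the loading scales as new words turn up in conversation.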

  -  If keyboards are eliminated an input/edit toggle would be
     necessary, as would a larger screen with standard keys
     listed in a sidebar.

  -  Patterns of use could be recorded for possible duplication.
     The macro storage facility does this.  However, it is not
     sensitive to environmental variation.  For instance, if a common
     set of commands (a macro) is used at cell a5, and a different
     spreadsheet is similar but has an extra row at a2, the macro
     ought to start at a6, but a stored macro will not adjust.
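Making a macro "sensitive to environmental variation" amounts to recording
cell references relatively and shifting them to fit the new layout.  A toy
sketch (the A1-style addressing and the uniform shift are assumptions):

```python
import re

def shift_macro(macro_steps, row_offset):
    """Shift every A1-style cell reference in a recorded macro by
    row_offset rows, so the same keystrokes fit a shifted layout."""
    def bump(match):
        col, row = match.group(1), int(match.group(2))
        return f"{col}{row + row_offset}"
    return [re.sub(r"([A-Za-z]+)(\d+)", bump, step) for step in macro_steps]
```

A real system would also need to detect *where* rows were inserted;
shifting every reference uniformly is only the simplest case.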

  -  Systems would have diagnostics (temperature, memory) built
     into them to complain in quite human terms (anthropomorphism
     strikes again) if a problem occurs or could occur.

   - The guiding principle here is to make the computer seem human,
     though the thought processes are far from human (though maybe
     not too far).  The key is to take fear out of computing.

There are drawbacks to this approach.  A new DOS, memory structure,
and hardware would be needed.  However, advances in memory are made
every day, so this might be feasible soon (comments on doability?).

I hope some of these ideas are useful.  I've been kicking them
around for a few years now.  Have any been tried? Am I too late and
just ill informed on the state of the art?  Discussion please.  If
by some chance I have hit on something feel free to use it, but I
want a working copy (or 6).

I'll interface (internet?) with you all later,

Mike Bindner

rickert@mp.cs.niu.edu (Neil Rickert) (10/06/90)

In article <90277.034819BINDNER@auvm.auvm.edu> <BINDNER@auvm.auvm.edu> writes:
>A few thoughts on the limits and potentials of AI.
>
>First the limits.  I don't think AI will ever be able to really duplicate
>human judgement.  It may one day be the complement of man's rational

  You give up too easily.

>   - mistake correction.  If I type Logim and it needs Logon it should
>     ask me "Do you mean Logon?" and if I respond Yes (or Y or sure
>     depending upon how well it knows me) after a number of repeated
>     trial (2 to 5 depending upon how similar the command or error is
>     to other commands) it will automatically say "I assume you meant
>     Logon" and implement the command without asking me.
>
  Great.

  So you type:
	rem *
intending to prepare a comment for a language such as BASIC.  The computer
says:
	Do you mean rm * ?
You, being used to the computer making excellent guesses, reply Y
without a moment's hesitation.  And there goes a week's careful work.
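One way to blunt this failure mode, without giving up correction entirely,
is to exempt destructive commands from silent auto-confirmation.  A sketch
in Python (the command lists and threshold are hypothetical):

```python
import difflib

# Commands that silent auto-correction must never reach; any guess
# landing here requires the user to spell it out explicitly.
DESTRUCTIVE = {"rm", "format", "delete"}

def suggest(typed, known_commands):
    """Return (best_guess, needs_explicit_confirmation)."""
    match = difflib.get_close_matches(typed, known_commands, n=1, cutoff=0.5)
    if not match:
        return None, False
    guess = match[0]
    # "rem" -> "rm" would still be offered, but never auto-executed.
    return guess, guess in DESTRUCTIVE
```

Under this rule the system may still guess "rm" for "rem", but a habitual
quick "Y" is no longer enough to destroy a week's work.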

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115.                                  +1-815-753-6940

jjewett@math.lsa.umich.edu (Jim Jewett) (10/08/90)

In article <90277.034819BINDNER@auvm.auvm.edu>, <BINDNER@auvm.auvm.edu> writes:


|> ... However, hunches and judgement are beyond the capabilities of
|> automation (at least for the present) as they are non-rational.

To me, most of a chess game between Karpov and Kasparov would seem
non-rational.  It isn't that their moves are bad; it is that my
understanding isn't nearly sufficient.

A hunch is like a move by a chess master -- it happens to be right
more often than we would expect, and we don't know why, but that
doesn't make it non-rational -- it just means that we lack the
meta-understanding to realize this.

We might not be able to follow the rules needed to produce (good,
as opposed to random) hunches, but that doesn't make them non-rational.

|> A further potential, possibly AI's grandest, is to make computers accessible
|> to all.  Let me elaborate.  Nothing discourages a new user more than the
|> literal nature of computers.   As all hackers know, computers like exact
|> commands (and will accept nothing less).  

Mine accepts "mroe" for "more" because I told it to ... alias files are
the beginning of what you are about to suggest, but I think that you
go too far.

|> Correct this problem and AI
|> will have served its function nicely.  An attempt has been made to work
|> around it with the rise of the menu driven system.  However, this is not
|> a true solution (though it has the same effect).  Here's what I would like
|> in an AI system:
|> 
|>    - user affection (as opposed to user friendliness).  I expect my
|>      PC to cuss at me if I cuss at it and compliment me if I compliment
|>      it.

When I cuss at the computer, I really *don't* need any more aggravation.
Sometimes this is because it is doing something it shouldn't (like
freezing up), and these situations are, perhaps, impossible to program
around.  There are situations in which I *do* want it to freeze.
(eg, someone trying to read my mail.)  So this still has to be
customizable ... and then eventually you get into programming languages,
and ...

|> It should know the answer I want based on context.

Agreed.  Though I won't go so far as to claim it is possible.  I can't
always figure out what my Dad means based on context.  And I've
had some problems with girlfriends because someone *did* figure
something out based on context, and figured wrong.  If humans
still err so often, how can we expect more of a computer ... and
deleting a week's worth of work is a pretty bad misunderstanding.

|>    - mistake correction.  If I type Logim and it needs Logon it should
|>      ask me "Do you mean Logon?" and if I respond Yes (or Y or sure
|>      depending upon how well it knows me) after a number of repeated
|>      trials (2 to 5, depending upon how similar the command or error is
|>      to other commands) it will automatically say "I assume you meant
|>      Logon" and implement the command without asking me.

Automatic creation of aliases is good -- if the user has the ability to
veto the initial creation.  If I use dleete for delete, I want it
to ask me ... some commands are just bad to do easily.  (As was pointed
out in article <1990Oct5.184125.7044@mp.cs.niu.edu> by Neil Rickert
(rickert@mp.cs.niu.edu).)

|>      Instead of saying command not understood it would search for
|>      permutations of the command from the front or back.  These might
|>      be based on context (for instance, if machine is not logged on
|>      it would query for logon command if expected, and if not found
|>      query for synonyms).
|> ...

MTS (Michigan Terminal System) has something a bit like this.  It is *NOT*
my favorite feature.  Some of this is just implementation, but it can
be very annoying to get asked about 4 wrong commands, and not even be
able to say "Forget it, I'll try again."  At least make these loops
breakable.
	

|>    - VR will make language content access easier.  This is because
|>      language interaction could occur all the time.  The mistake
|>      correction/language acquisition feature would obviously be
|>      incorporated into the DOS and Root systems. 

This is what bothered me the most ... if it can't be turned off, it's
a bug.  (H????'s law, seen on cfutures recently.)

And much of this (and the portions I deleted) seem to be about interface,
rather than the AI itself.  Much of it *could* be done with today's
technology ... albeit slowly.

-jJ 
jjewett@math.lsa.umich.edu       Take only memories.
Jewett@ub.cc.umich.edu           Leave not even footprints.

BINDNER@auvm.auvm.edu (10/11/90)

To all those who wrote about my comments on the limits of AI:

What I was trying to say is that in the short run the discipline would
be more marketable by making computers understandable (affectionate)
to the laity.  Design what amounts to an expert system on how to use
computers and the world will love you.  It is interesting to the expert
to test whether computers (or people, for that matter) really can or
will think.  Honestly, though, I don't think it will capture the imagination
or the overt support of the average citizen.  Making a computer for intelligent
non-experts will.

For those who commented that a DOS which says "do you mean this?" or goes
ahead and does it if a pattern develops would be dangerous:

Failsafes can be designed to avoid catastrophes, especially in the
machine-to-human relations mode I am proposing.

For those who complain that not enough students post on the net:

Here I am, a student.

Michael Bindner
American U., Washington, D.C.

lev@suned0.nswses.navy.mil (Lloyd E Vancil) (10/11/90)

In article <90277.034819BINDNER@auvm.auvm.edu> BINDNER@auvm.auvm.edu writes:
>First the limits.  I don't think AI will ever be able to really duplicate
>human judgement.  It may one day be the compliment of man's rational
>thought, but it will never duplicate man as it will not evolve the same way.
What about the idea of cyborgs?  Man-machine interfaces so interwoven as to
be indistinguishable?  Would these be machines or ??
I think I agree with you as far as the different evolution goes but I wonder
if we would recognise a truly "awake" computer or Computer system.  Could
the interrelated systems we have already created be awake in some way and
if they are, or are not, how would we "prove it"?  
At this level this is the same question the SETI people face.  For us to
recognise intelligence the observed and observer must have some common
mental ground.

>automation makes gatherable.  Expert systems and their successors can aid
>this analysis.  However, hunches and judgement are beyond the capabilities of
>automation (at least for the present) as they are non-rational.
This asserts the thought that "hunches" are some mystical synergism of
intelligence.  Are they?  Could they be the unseen, logical, even machine-like,
process of the subconscious?  I speak only from my own problem-solving
experience.  When I have a really knotty problem I consign it to the back of
my mind and proceed with my other tasks.  As often as not, at some point later
that same day a new approach will POP into my mind.  I have observed that these
"inspirations" are combinations of things I know, and may not have related to
the original problem.  They POP up with a hunch feel: "I wonder if this will
work?"

>A further potential, possibly AI's grandest, is to make computers accessible
>to all.  Let me elaborate.  Nothing discourages a new user more than the
>literal nature of computers.   As all hackers know, computers like exact
>commands (and will accept nothing less).  Correct this problem and AI
>
Using an "expert system" to interpret, understand and act on commands given
in "plain English" would be a boon to all types of people.  An expansion on
this would be services to the blind, and the otherwise handicapped.  I have a
friend, who is an electronic engineer, and who has been blind all of his life.
The technology he uses to understand what the machine wants is phenomenal.  To
have an "AI" as the user interface, that would talk to him and understand his
words would be SOMETHING WONDERFUL!

L.

--
            suned1!lev@elroy.JPL.Nasa.Gov sun!suntzu!suned1!lev
                          lev@suned1.nswses.navy.mil
                     My employer has no opinions, these are MINE!

jjewett@math.lsa.umich.edu (Jim Jewett) (10/12/90)

In article <5640@suned1.Nswses.Navy.MIL>, lev@suned0.nswses.navy.mil
(Lloyd E Vancil) writes:

|> I think I agree with you as far as the different evolution goes but I wonder
|> if we would recognise a truly "awake" computer or Computer system.  Could
|> the inter related systems we already have created be awake in some way and
|> if they are, or are not, how would we "prove it"? 

So maybe the internet is alive, and "heals" when major sites leave, and
grows, and reproduces, and ... ?  Crystals also meet many of the definitions
of life ... I won't say that they aren't alive, but computers aren't the
only place we run into ambiguities.  Is a virus (biological) alive?

But I do like the idea that the Mean Time Between Failure is just how long
the computer can stay up.  ;)
 
|> At this level this is the same question the SETI people face.  For us to
|> recognise intelligence the observed and observer must have some common
|> mental ground.

The intelligence in science thread, and the semantics thread (text)
are about this.

-jJ 
jjewett@math.lsa.umich.edu       Take only memories.
Jewett@ub.cc.umich.edu           Leave not even footprints.