[comp.sys.amiga.tech] System Level support for AI stuff

cbaron@icc.afit.arpa (Christopher T. Baron) (07/27/88)

This is my first message on the net, so please let me know if I have done
something stupid (in a nice way, please).

Anyway, while I was returning to Dayton from Ami-Expo, an idea came to me
that the Amiga, being a multi-tasking system, could benefit from system-level
support for several of the traditional AI-type functions.  By this I mean an
inference engine, a state-space search routine such as A*, and, say, a
built-in natural language interface (text).

The idea is that by having generalized support for, say, the inference
engine, many software packages could provide AI-like services without having
to re-invent the inferencing wheel for each package.  All they would have to
do is provide the rule and fact bases, and the system inference engine
could do the hard work.  This would make possible many services that are
desirable but too much effort to include at present.  Some of the ones
that sprang to mind:

Intelligent task priority adjustment, i.e. set the priority of your
word processor to a low value if you are working on something else.  This
could provide apparent performance increases (see the task-priority sketch
after this list).

AI-style diagnostic program debugging.  With rule bases provided by the
compiler maker, new users could benefit from the expertise of experienced
programmers and significantly shorten the learning curve.

Resident advisors for programming syntax or style.  The compiler maker could
provide a rule base for observing the user entering source code and issue
warnings for erroneous code.

Global optimizing compilers.  Since only the rule base would have to be
generated, making intelligent compilers would be easier.

Word processors with rule bases for English syntax, and/or
advice on the formats of different document types such as business letters,
scientific reports, etc.

Optimizing disk track and sector allocation.  By optimizing the head
trajectory to eliminate extra track jumps and waits for sector positioning,
the DOS could improve in speed without any hardware changes (see the
disk-scheduling sketch after this list).

Intelligent Facc-like programs could observe disk usage and try to predict
which tracks and sectors will be needed next.  Or the type of disk operation
could be used to determine the allocation of read/write buffers; e.g. a
program-load operation doesn't need any write buffers, and all of the sectors
of the program code should be read in without waiting for them to be
requested.
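
To make the task-priority item above concrete, here is a minimal sketch
(mine, nothing that actually ships) of what such a rule action could boil
down to.  The task name "WordProc" is invented; FindTask() and SetTaskPri()
are the real exec.library calls:

/* Hypothetical rule action: lower the priority of a named task.
 * "WordProc" is an invented example name.
 */
#include <exec/types.h>
#include <exec/tasks.h>
#include <proto/exec.h>        /* use clib/exec_protos.h on older compilers */

void demote_task(STRPTR name, LONG newpri)
{
    struct Task *t;

    Forbid();                  /* keep the task list stable while we look   */
    t = FindTask(name);        /* FindTask(NULL) would mean "this task"     */
    if (t)
        SetTaskPri(t, newpri);
    Permit();
}

/* e.g. demote_task("WordProc", -5) once the engine concludes the
 * user's attention is elsewhere. */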
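
And for the disk-allocation item, a toy, OS-independent illustration of the
head-trajectory point: service pending track requests in one sweep (elevator
order) instead of arrival order.  The numbers are made up and nothing here
touches AmigaDOS:

/* Toy elevator (one-sweep) ordering of pending track requests.
 * Purely illustrative; not AmigaDOS code.
 */
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Print the order in which tracks would be visited, sweeping up
 * from the current head position and then back down. */
void elevator(int head, int *req, int n)
{
    int i, start = 0;

    qsort(req, n, sizeof(int), cmp_int);
    while (start < n && req[start] < head)
        start++;                        /* first request at or above the head */

    for (i = start; i < n; i++)         /* sweep outward ...   */
        printf("track %d\n", req[i]);
    for (i = start - 1; i >= 0; i--)    /* ... then back inward */
        printf("track %d\n", req[i]);
}

int main(void)
{
    int pending[] = { 55, 12, 70, 30, 5 };

    elevator(40, pending, 5);           /* head currently over track 40 */
    return 0;
}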

The mind reels under the number of ways system and user performance could
be improved.  Since the proposed inference engine would be well defined and
uniform, even third parties could produce, say, a rule base for debugging C
programs.  The result could be the development of a whole new industry
providing expert systems for all kinds of tasks, from intelligent games to
stock market advisors implemented from within a spreadsheet or database.

My initial thought on implementation is that the services would be
implemented as a shared library, with functions for:
  
   A backward- and forward-chaining inference engine
   A rule-base compiler
   One or more state-space search algorithms
   System modification, such as task priority adjustment
   The ability to start/suspend/kill tasks
   An I/O section for interfacing with the user, i.e. a uniform query facility
   Special alerts for output
   Natural language parsing into a language-independent form, with
   system and/or user dictionaries

Programs could either use these functions directly or start a subtask
which would communicate with them through message ports.
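
To show what the message-port half might look like, here is a minimal
client-side sketch.  The public port name "ai_engine" and the AIQuery layout
are invented for illustration; FindPort(), PutMsg(), WaitPort(), and GetMsg()
are real exec.library calls, and CreatePort()/DeletePort() are the usual
amiga.lib helpers:

/* Hypothetical client of an "ai_engine" server task.  The port name and
 * the AIQuery structure are invented; the exec/amiga.lib calls are real.
 */
#include <string.h>
#include <exec/types.h>
#include <exec/ports.h>
#include <proto/exec.h>            /* use clib/exec_protos.h on older compilers */

struct MsgPort *CreatePort(UBYTE *name, LONG pri);   /* amiga.lib */
void DeletePort(struct MsgPort *port);               /* amiga.lib */

struct AIQuery {
    struct Message aq_Msg;         /* standard exec message header       */
    STRPTR         aq_Goal;        /* goal to prove, e.g. "user_is_idle"  */
    LONG           aq_Result;      /* filled in by the server             */
};

int main(void)
{
    struct MsgPort *reply, *server;
    struct AIQuery  q;

    if (!(reply = CreatePort(NULL, 0)))      /* private, unnamed reply port */
        return 20;

    memset(&q, 0, sizeof(q));
    q.aq_Msg.mn_ReplyPort = reply;
    q.aq_Msg.mn_Length    = sizeof(q);
    q.aq_Goal             = (STRPTR)"user_is_idle";

    Forbid();                                /* port must not vanish mid-lookup */
    server = FindPort("ai_engine");          /* invented public port name       */
    if (server)
        PutMsg(server, &q.aq_Msg);
    Permit();

    if (server) {
        WaitPort(reply);                     /* sleep until the server replies  */
        (void)GetMsg(reply);
        /* q.aq_Result now holds the engine's answer */
    }

    DeletePort(reply);
    return 0;
}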

I'm sure there are many, many possible uses for such facilities that I have
not thought of, so let's hear what people think.

Chris Baron
net address: cbaron@afit-ab.arpa
	     cbaron@galaxy.afit.af.mil
US mail : 201 Buel Ct
	  WPAFB, OH 45433


Sorry, no cute closing yet. I'll think of something.
P.S. If this looks like a message in comp.sys.amiga, it is!

jesup@cbmvax.UUCP (Randell Jesup) (07/28/88)

In article <446@afit-ab.arpa> cbaron@icc.UUCP (Christopher T. Baron) writes:
>Anyway, while I was returning to Dayton from Ami-Expo, an idea came to me
>that the Amiga, being a multi-tasking system, could benefit from system-level
>support for several of the traditional AI-type functions.  By this I mean an
>inference engine, a state-space search routine such as A*, and, say, a
>built-in natural language interface (text).
...
>My initial thought on implementation is that the services would be
>implemented as a shared library, with functions for:
>
>   A backward- and forward-chaining inference engine
>   A rule-base compiler
>   One or more state-space search algorithms
>   System modification, such as task priority adjustment
>   The ability to start/suspend/kill tasks
>   An I/O section for interfacing with the user, i.e. a uniform query facility
>   Special alerts for output
>   Natural language parsing into a language-independent form, with
>   system and/or user dictionaries

	So write it, and publish the interface (look at ARexx for a good
example of this type of server; you might want to use ARexx ports for
compatibility).  If you think it's good enough to put on the WB disk,
after it's ready (or at least well along) get in touch with Gail
Wellington, but don't count on anything.  You can always sell it yourself.

-- 
Randell Jesup, Commodore Engineering {uunet|rutgers|allegra}!cbmvax!jesup

eachus@mitre-bedford.ARPA (Robert Eachus) (07/28/88)

In article <446@afit-ab.arpa> cbaron@icc.UUCP (Christopher T. Baron) writes:
>Anyway, while I was returning to Dayton from Ami-Expo, an idea came to me
>that the Amiga, being a multi-tasking system, could benefit from system-level
>support for several of the traditional AI-type functions.  By this I mean an
>inference engine, a state-space search routine such as A*, and, say, a
>built-in natural language interface (text).

-- Sure would be nice!

>The mind reels under the number of ways system and user performance
>could be improved.

-- The mind also reels at the processor bandwidth required to actually
-- improve performance by using a rule-based engine to do all this.
-- :-) It might help on a 68030 machine, which really should have
-- several spindles to feed it, but by now I assume that everybody
-- realizes that Amigas only use hard disks as slow non-volatile
-- memories.

>My initial thought on implementation is that the services would be
>implemented as a shared library, with functions for:
>
>   A backward- and forward-chaining inference engine
>   A rule-base compiler
>   One or more state-space search algorithms
>   System modification, such as task priority adjustment
>   The ability to start/suspend/kill tasks
>   An I/O section for interfacing with the user, i.e. a uniform query facility
>   Special alerts for output
>   Natural language parsing into a language-independent form, with
>   system and/or user dictionaries

-- The first three items would be an excellent product.  The
-- compiler should probably be a tool with a CLI interface, but in any case
-- should be packaged separately.  There would be many applications
-- which would have a static rule base and could not afford the
-- space for a large library.

-- These tools would be a hell of a product.  Add Common Lisp and
-- Scheme, and Amigas will replace Sun workstations on a lot of AI
-- projects.  (Just like Suns have replaced Lisp Machines...)  Next
-- project after IPC, anyone?

					Robert I. Eachus

with STANDARD_DISCLAIMER;
use  STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...

daveh@cbmvax.UUCP (Dave Haynie) (07/28/88)

in article <37202@linus.UUCP>, eachus@mitre-bedford.ARPA (Robert Eachus) says:
> In article <446@afit-ab.arpa> cbaron@icc.UUCP (Christopher T. Baron) writes:
>>Anyway, while I was returning to Dayton from Ami-Expo, an idea came to me
>>that the Amiga, being a multi-tasking system, could benefit from system-level
>>support for several of the traditional AI-type functions.  By this I mean an
>>inference engine, a state-space search routine such as A*, and, say, a
>>built-in natural language interface (text).
> 
> -- Sure would be nice!

I've actually read that this kind of thing has been proposed as an optional
on-chip execution unit for the MC88100.  Currently, there are floating-point,
integer, and execution units on the 88k that function in parallel, and there's
support for more.  Possibilities include a graphics engine, a vector engine,
an inference engine, etc.  It makes lots of sense to put it on-chip, if at all
possible (like, does it fit?).  That gives you an order of magnitude greater
performance than an off-chip unit, considering the speed and number of
internal busses vs. external busses.

The 680x0 family could certainly benefit from the same kind of things, and is
slowly moving in that direction, as witnessed by the internal Harvard split busses
in the 68030, and some of the things planned for the 68040.  However, the size
of the 680x0 chips right now would seem to prevent many extra functional units 
on-chip.  I'll be happy enough to see the FPU move on-chip.

> 					Robert I. Eachus
-- 
Dave Haynie  "The 32 Bit Guy"     Commodore-Amiga  "The Crew That Never Rests"
   {ihnp4|uunet|rutgers}!cbmvax!daveh      PLINK: D-DAVE H     BIX: hazy
		"I can't relax, 'cause I'm a Boinger!"

cbaron@icc.afit.arpa (Christopher T. Baron) (08/02/88)

In article <37202@linus.UUCP> news@linus.UUCP (Robert Eachus) writes:
>>The mind reels under the number of ways system and user performance
>>could be improved.

>-- The mind also reels at the processor bandwidth required to actually
>-- improve performance by using a rule-based engine to do all this.
>-- :-) It might help on a 68030 machine, which really should have
>. . . 

I must disagree!  Most of the time when a user is interactively using the
machine, the processor is idling.  The extra cycles wasted while a user is
hitting keys or moving a mouse can be used by the AI routines.  Of course
the number of free cycles goes down as the number of background tasks
increases.

Further, the AI routines need not be running at a high priority to provide
benefits.  For example, the disk allocation planner could be planning the
next series of tracks/sectors to allocate during the relatively long waits
between disk accesses (I mean file loads or writes).

Still further, the implication is that this system will be similar in
operation to the many expert system shells on the market.  These shells
are slow for a number of reasons, some of which would be eliminated by my
concept of the rule base.  The rule and fact bases would be compiled to
a non-text symbolic form, which would include pointers to all related facts
and rules.  This eliminates two of the current shell slowdowns: text
interpretation and rule-base search pattern matching.  A rule-base
search algorithm such as Rete could also be used.
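
As a rough picture (my guess at the data structures, not a real engine) of
what such a compiled, pointer-linked rule base might look like in memory:

/* Sketch of a compiled rule base: facts and rules in binary form,
 * cross-linked by pointers so no text parsing or global pattern
 * search is needed at run time.  All names are invented.
 */
#include <exec/types.h>

struct CompiledRule;                       /* forward declaration */

struct CompiledFact {
    UWORD   cf_Symbol;                     /* interned symbol id, not text   */
    BOOL    cf_Value;                      /* current truth value            */
    UWORD   cf_NumUsers;
    struct CompiledRule **cf_Users;        /* rules whose conditions use us  */
};

struct CompiledRule {
    UWORD   cr_NumConds;
    struct CompiledFact **cr_Conds;        /* direct pointers to conditions  */
    struct CompiledFact  *cr_Concl;        /* fact asserted when rule fires  */
    UWORD   cr_TrueConds;                  /* Rete-style partial-match count */
};

/* Forward chaining then reduces to: when a fact becomes TRUE, bump
 * cr_TrueConds in each rule listed in cf_Users; any rule whose count
 * reaches cr_NumConds fires and asserts its conclusion the same way. */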

Making the Amiga into a Lisp machine is not my intention here.  What the
mind actually reels under is the memory and processor space required to
implement Common Lisp on a micro.  I have used a Common Lisp SUBSET on a
PC with a 16 MHz 386 and 9 megs of memory and it was pathetic!  Someone
else will have to do that to the Amiga.  Scheme is much better and could
be implemented well on the Amiga.

In reply to Randell Jesup, Commodore Engineering.

I would love to write this thing (or things) but am going to grad school
right now...  I think this type of software should be supported at the
OS level, ideally included in the WB libraries.  Failing that, it should be
PD or at least shareware so that the maximum number of users can benefit
from it.

=========================================================================
Chris Baron
The Air Force would be wise to accept my opinions but so far no luck.

cbaron@afit-ab.arpa
cbaron@galaxy.afit.af.mil

tj@tut.cis.ohio-state.edu (Todd R. Johnson) (08/03/88)

In article <465@afit-ab.arpa> cbaron@icc.UUCP (Christopher T. Baron) writes:
>In article <37202@linus.UUCP> news@linus.UUCP (Robert Eachus) writes:
>>>The mind reels under the number of ways system and user performance
>>>could be improved.
>
>Making the Amiga into a Lisp machine is not my intention here.  What the
>mind actually reels under is the memory and processor space required to
>implement Common Lisp on a micro.  I have used a Common Lisp SUBSET on a
>PC with a 16 MHz 386 and 9 megs of memory and it was pathetic!  Someone
>else will have to do that to the Amiga.  Scheme is much better and could
>be implemented well on the Amiga.

	I have been using Allegro Coral Common Lisp (not a subset) on a
Mac II (5 meg memory and 40 meg hard drive) to run SOAR (280K,
contains OPS5).  It runs at about the same speed as an IBM PC-RT
running CMU Common Lisp and SOAR.  All in all, it is quite acceptable.
I would think that adding a 68020 and extra memory to an Amiga would
result in a nice environment for running Common Lisp.

	I've been trying to stay out of the Mac II vs. Amiga wars, but
I have to say something now.  I like 4 things about the Mac II: nice
hi-res color (flicker-free!) display, the 68020, a good commercial Common
Lisp, and a nice-looking case.  After using one for an extended period
of time, though, I must say that I would rather have a full-blown 2000.
My 1000 is a much more interesting (and in many ways more advanced)
machine than the Mac II.

	---Todd

bts@sas.UUCP (Brian T. Schellenberger) (08/03/88)

[tried to send EMail; no luck]

Nice pipe dream, but I doubt that you have any conception of the amount of
system resources this would be likely to require . . . in particular, it is
not, alas, feasible while running on a floppy-based system with 512K or at
most 1 meg, which is where the vast majority of users still are.  Too bad,
and I hope we see some of this for 2.0 (in 1995).  :-)

Also, it is unlikely to yield much system performance; most intelligent
systems use so much time that the benefit from intelligently allocating
task priorities, memory, &c. is more than eaten up by the processing to do
so.  (I mean using real AI techniques, not just paying attention to what
a task is doing when assigning priorities.)

The only real performance gains in AI systems tend to be with highly 
parallel processors; AI techniques tend to parallelize (<- my own word!)
better.
-- 
--Brian,                     __________________________________________________
  the man from              |Brian T. Schellenberger   ...!mcnc!rti!sas!bts
  Babble-On                 |104 Willoughby Lane     work: (919) 467-8000 x7783
____________________________|Cary, NC   27513        home: (919) 469-9389 

mjw@f.gp.cs.cmu.edu (Michael Witbrock) (08/04/88)

Chris Baron writes:

"Making the Amiga into a Lisp machine is not my intention here.  What the
mind actually reels under is the memory and processor space required to
implement Common Lisp on a micro.  I have used a Common Lisp SUBSET on a PC
with a 16 MHz 386 and 9 megs of memory and it was pathetic!  Someone else
will have to do that to the Amiga."

I don't know why a 9-meg 386 implementation of Common Lisp should be a
problem.  I run CMU Common Lisp, which is almost by definition a complete
implementation, on an RT APC, where it takes about 4-5 megs.  Since the RT
is a RISC machine, code for it tends to be about 2-3 times larger than the
equivalent 68000 program.  9 megs should be more than adequate.

I don't know about Scheme; it looks more theoretically pure, but harder to
write real software in.  I mostly hack C anyway, but I think the attack on
Common Lisp is unwarranted.

---
P.S. If you must put some 'AI' hardware in, Rete might be useful, but don't
expect me to buy it.

My brain doesn't do AI in Prolog or with Rete, and neither do connectionists.
What we want is more memory and more cycles, which is probably the best way
for Commodore to appeal to all AI people, rather than just the rule-based
few.
-- 
Michael.Witbrock@cs.cmu.edu mjw@cs.cmu.edu                          \
US Mail: Michael Witbrock/ Dept of Computer Science                  \
         Carnegie Mellon University/ Pittsburgh PA 15213-6890/ USA   /\
Telephone : (412) 268 3621 [Office]  (412) 441 1724 [Home]          /  \