[net.ai] AIList Digest V2 #147

LAWS@SRI-AI.ARPA (11/14/84)

From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>


AIList Digest           Wednesday, 31 Oct 1984    Volume 2 : Issue 147

Today's Topics:
  LISP - Function-itis & Comparison with C,
  Algorithms - Pessimal Algorithms & Real Programmers,
  Seminars - Robot Navigation & Accessibility of Analogies & Student Models
----------------------------------------------------------------------

Date: Sun 28 Oct 84 17:40:10-PST
From: Shawn Amirsardary <SHAWN@SU-SCORE.ARPA>
Subject: Lisp Function-itis

         [Forwarded from the Stanford bboard by Laws@SRI-AI.]

Lisp, with its very elegant syntax, suffers from acute function-itis.
When adhering to the traditional Lispish style of using very few setqs
and, god forbid, even fewer progs, you end up with about a million and
a half functions that get called from usually only one place.  Of
course LET and lambda help, but not that much.  My question is: does
anybody know of a good method for ordering and perhaps even naming the
little suckers?  In Pascal you have to define procedures before you use
them, but the lack of such restrictions in Lisp means that functions
are all over the place.  What is the cure?

                                --Shawn
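
One partial remedy -- offered here only as an illustrative sketch with
invented function names, not as anything from the original message --
is to nest single-use helpers inside their one caller with LABELS (or
FLET), so each little function is defined right where it is used and
the top level stays uncluttered:

    ;; Common Lisp sketch: single-use helpers kept local with LABELS
    ;; instead of cluttering the file with top-level DEFUNs.
    (defun average-word-length (words)
      (labels ((word-length (w)          ; used only here
                 (length (string w)))
               (sum (xs)                 ; used only here
                 (reduce #'+ xs :initial-value 0)))
        (if (null words)
            0
            (/ (sum (mapcar #'word-length words))
               (length words)))))

    ;; (average-word-length '(lisp has many little functions))  =>  26/5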

------------------------------

Date: Tue, 30 Oct 84 21:42:58 -0200
From: eyal@wisdom (Eyal mozes)
Subject: Re: different language types

  > But seriously, one can adopt a very abstract (i.e.
  > applicative/functional) programming style or a very imperative
  > (C-like) style when using Lisp.  On the other hand, adopting an
  > applicative style in C is difficult (yes, I've tried!).  So Lisp
  > is certainly more versatile.

Really!!  Every non-trivial "imperative" LISP program I've ever seen
has been impossible to read, full of bugs which nobody knows how to
correct, and horribly time-consuming (most of UNIX's VAXIMA is a good
example of what I mean).  Writing in an "imperative" style in LISP is
the programming equivalent of a "badgorithm".
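
For concreteness, here is a small invented sketch -- not drawn from
VAXIMA or any program mentioned above -- of the same computation
written in both styles in Lisp:

    ;; Imperative style: explicit SETQ and mutable loop state.
    (defun sum-squares-imperative (numbers)
      (let ((total 0))
        (dolist (n numbers total)
          (setq total (+ total (* n n))))))

    ;; Applicative style: no assignment, just function application.
    (defun sum-squares-applicative (numbers)
      (reduce #'+ (mapcar #'(lambda (n) (* n n)) numbers)))

    ;; Both return 14 for the list (1 2 3).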

You can be as abstract as you want to be in C or Pascal.  I don't think
there is anything for which you can't come up with a good program in C,
if your writing style is good (and if it isn't, no language will help).
Of course, there are some activities, especially in some areas of AI,
which are made much easier by the functional style of LISP, and its
representation of programs as data - but even this wouldn't be true for
*all* AI systems. But in terms of versatility, I don't think there can
be much question about the big advantage of C, Pascal, and languages of
this type.

------------------------------

Date: 29 Oct 84 14:41 PST
From: JonL.pa@XEROX.ARPA
Subject: Pessimal Algorithms ("Badgorithms"?)

The following "badgorithm" comes from a practical joke, perpretated many
years ago; although it is not a naturally occuring "badgorithm", it does
have a humorous side.

In high school, I worked part-time for a university computing center,
programming on an IBM 650 (don't ask how many years ago!).  [One really
shouldn't pooh-pooh the 650 -- it was the world's first List processing
computer!  Explication follows.]  Its main memory consisted of 2000
10-digit decimal words, stored on a rotating magnetic drum; there were
40 rotational channels on the drum, with each channel holding 50 words
(one complete revolution).  Since the various instructions took a
variable amount of time, it would be most unwise to sequence the
instructions by merely incrementing each instruction address by 1: an
instruction that took longer than the interval between two successive
words in a "channel" would then have to wait one full drum rotation for
its successor.  An instruction consisted of an operator, an operand
address, and a "next instruction address" (i.e., a CDR link in Lisp
parlance); thus one could assign the sequencing of instructions to be
"optimal", in that the successor of an instruction at word offset A
(mod 50) would be at A+n (mod 50), where n is the time, in fiftieths of
a drum rotation, required to execute the operator stored at A.
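
A toy sketch of this placement rule (the offsets and timings below are
invented for illustration, not actual IBM 650 figures):

    ;; Sketch of SOAP-style "optimal" address assignment.  With 50 words
    ;; per channel, the best successor of an instruction at offset A
    ;; that takes N word-times to execute is (A + N) mod 50: the drum
    ;; has turned just far enough for that word to arrive under the
    ;; read head as the instruction finishes.
    (defun optimal-next-offset (offset exec-time &optional (channel-size 50))
      (mod (+ offset exec-time) channel-size))

    ;; Example: an instruction at offset 12 taking 5 word-times should
    ;; have its successor placed at offset 17.
    ;; (optimal-next-offset 12 5)  =>  17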

The IBM 704/709 series had a machine language assembler called SAP, for
"Symbolic Assembly Program"; the 650 had SOAP, for "Symbolic Optimal
Assembly Program".  One would speak of "Soaping" a program, meaning to
assemble a symbolic deck of cards into a self-loading machine-language
deck.  My cohorts and I dubbed the obvious pessimizing version of SOAP
as SUDS, for "Symbolic Un-optimal Disassembly System" (it would assign
the "next instruction" to a word offset just 1 short of optimal, and
thus would slow down the resultant object code by up to a factor of 50).
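
The pessimizing counterpart, again only a sketch with invented numbers:

    ;; SUDS-style "pessimal" assignment: place the successor one word
    ;; short of the optimal offset, so the drum must turn almost a full
    ;; revolution before it comes around again.
    (defun pessimal-next-offset (offset exec-time &optional (channel-size 50))
      (mod (+ offset exec-time -1) channel-size))

    ;; Word-times wasted waiting for the successor, beyond execution time.
    (defun extra-wait (offset exec-time next-offset &optional (channel-size 50))
      (mod (- next-offset (+ offset exec-time)) channel-size))

    ;; Using OPTIMAL-NEXT-OFFSET from the sketch above:
    ;; (extra-wait 12 5 (optimal-next-offset 12 5))   =>  0
    ;; (extra-wait 12 5 (pessimal-next-offset 12 5))  =>  49  (worst case)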

As a gag, we SUDS'd the SOAP deck, and left it for others to use.
Imagine the consternation when a program that normally took 10 minutes
to assemble suddenly began taking over an hour!  Of course, we were
quickly found out, and SUDS was relegated to the circular hacks file.

-- Jon L White --

------------------------------

Date: 22 Oct 84 16:18:41 EDT
From: Michael.Jones@CMU-CS-SPICE
Subject: Real Programmers

           [Excerpted from the CMU bboard by Laws@SRI-AI.]

[I regret having to truncate this, but the original was too long to
distribute on AIList.  I have decided to proceed with the following
because it fits in with other recent AIList messages.  -- KIL]


     A recent article devoted to the *macho* side of programming
     made the bald and unvarnished statement:

                Real Programmers write in Fortran.

     [...]
     I feel duty-bound to describe,
     as best I can through the generation gap,
     how a Real Programmer wrote code.
     I'll call him Mel,
     because that was his name.

     I first met Mel when I went to work for Royal McBee Computer Corp.,
     a now-defunct subsidiary of the typewriter company.  [...]
     Mel's job was to re-write
     the blackjack program for the RPC-4000.  [...]
     The new computer had a one-plus-one
     addressing scheme,
     in which each machine instruction,
     in addition to the operation code
     and the address of the needed operand,
     had a second address that indicated where, on the revolving drum,
     the next instruction was located.
     In modern parlance,
     every single instruction was followed by a GO TO!  [...]

     Since Mel knew the numerical value
     of every operation code,
     and assigned his own drum addresses,
     every instruction he wrote could also be considered
     a numerical constant.
     He could pick up an earlier "add" instruction, say,
     and multiply by it,
     if it had the right numeric value.
     His code was not easy for someone else to modify.

     I compared Mel's hand-optimized programs
     with the same code massaged by the optimizing assembler program,
     and Mel's always ran faster.
     That was because the "top-down" method of program design
     hadn't been invented yet,
     and Mel wouldn't have used it anyway.
     He wrote the innermost parts of his program loops first,
     so they would get first choice
     of the optimum address locations on the drum.
     The optimizing assembler wasn't smart enough to do it that way.

     Mel never wrote time-delay loops, either,
     even when the balky Flexowriter
     required a delay between output characters to work right.
     He just located instructions on the drum
     so each successive one was just *past* the read head
     when it was needed;
     the drum had to execute another complete revolution
     to find the next instruction.  [...]
     Mel called the maximum time-delay locations
     the "most pessimum".  [...]

     Perhaps my greatest shock came
     when I found an innocent loop that had no test in it.
     No test. *None*.
     Common sense said it had to be a closed loop,
     where the program would circle, forever, endlessly.
     Program control passed right through it, however,
     and safely out the other side.
     It took me two weeks to figure it out.

     The RPC-4000 computer had a really modern facility
     called an index register.
     It allowed the programmer to write a program loop
     that used an indexed instruction inside;
     each time through,
     the number in the index register
     was added to the address of that instruction,
     so it would refer
     to the next datum in a series.
     He had only to increment the index register
     each time through.
     Mel never used it.

     Instead, he would pull the instruction into a machine register,
     add one to its address,
     and store it back.
     He would then execute the modified instruction
     right from the register.
     The loop was written so this additional execution time
     was taken into account --
     just as this instruction finished,
     the next one was right under the drum's read head,
     ready to go.
     But the loop had no test in it.

     The vital clue came when I noticed
     the index register bit,
     the bit that lay between the address
     and the operation code in the instruction word,
     was turned on--
     yet Mel never used the index register,
     leaving it zero all the time.
     When the light went on it nearly blinded me.

     He had located the data he was working on
     near the top of memory --
     the largest locations the instructions could address --
     so, after the last datum was handled,
     incrementing the instruction address
     would make it overflow.
     The carry would add one to the
     operation code, changing it to the next one in the instruction set:
     a jump instruction.
     Sure enough, the next program instruction was
     in address location zero,
     and the program went happily on its way.

     I haven't kept in touch with Mel,
     so I don't know if he ever gave in to the flood of
     change that has washed over programming techniques
     since those long-gone days.
     I like to think he didn't.
     In any event,
     I was impressed enough that I quit looking for the
     offending test,
     telling the Big Boss I couldn't find it.  [...]
     I didn't feel comfortable
     hacking up the code of a Real Programmer.


         -- Source: usenet: utastro!nather, May 21, 1983.


[The Cray is so fast it can execute an infinite loop in three minutes?
This machine might beat it!  -- KIL]
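
Mel's overflow trick is easier to see in a toy model.  The sketch below
uses an invented instruction layout (a two-digit operation code packed
in front of a three-digit address) rather than the real RPC-4000 word
format, and the opcode values are made up:

    ;; Toy model of the carry-into-the-opcode trick.
    (defconstant +address-limit+ 1000)      ; addresses run 0..999

    (defun make-instruction (opcode address)
      (+ (* opcode +address-limit+) address))

    (defun op-field   (instruction) (floor instruction +address-limit+))
    (defun addr-field (instruction) (mod instruction +address-limit+))

    ;; Incrementing the address field of an instruction that already
    ;; points at the top of memory (address 999) carries into the
    ;; operation-code field: opcode 41 becomes 42 and the address wraps
    ;; to 0.  If 42 happens to be a jump opcode, the loop "falls off the
    ;; end" of its data straight into a jump to location zero.
    ;; (op-field   (1+ (make-instruction 41 999)))  =>  42
    ;; (addr-field (1+ (make-instruction 41 999)))  =>  0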

------------------------------

Date: 28 Oct 1984  14:43 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Robot Navigation

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


Wednesday, October 31;      4:00pm;     8th Floor Playroom

Navigation for Mobile Robots

Rodney A. Brooks

There are a large number of interesting questions in how to build a
mobile robot capable of navigating through unknown surroundings in
order to complete some desired task. Issues include obstacle avoidance
using local observations, overall path planning, registration with a
map and building a map from observations. There is a lot of ongoing
and promising work on the first two of these problems. Less has been
done on the last two.  Registration work has been most successful with
detailed a priori maps in two domains: (1) indoor, uncluttered areas
with flat walls giving unambiguous geometric clues, and (2) areas with
reliably identifiable and accurately locatable landmarks visible over
a large area.  Re-registration with maps generated from a robot's own
observations has mainly been successful in two modes: (1) incremental
re-registration involving small motions from a known location, or (2)
in an environment with active beacons providing reliably identifiable
and locatable landmarks.

This talk focuses on some of the issues in building a map from
unreliable observations and in re-registering the robot to that map
much later, again using unreliable observations. In particular we
consider a new map representation, the requirements on the
representations of the world produced by vision, the role of
landmarks, and whether other sensors such as compasses or inertial
navigation systems are needed.

COMING SOON: Kent Pitman [Nov 7], Ryszard Michalski [Nov 14],
             Phil Agre   [Nov 28]

------------------------------

Date: 29 Oct 1984 14:10-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Accessibility of Analogies

           [Forwarded from the MIT bboard by SASW@MIT-MC.]


"Mental Models of Electricity"

Yvette Tenney, BBN Laboratories
Hermann Hartel, University of Kiel, West Germany

BBN Laboratories, 10 Moulton St, Cambridge.
Third floor large conference room, 10:30 AM.
Monday November 5th.


The presentation will consist of two short talks that were part of a
conference on Representations of Students' Knowledge in Electricity and
the Improvement of Teaching, held in Ludwigsburg, Germany this fall.

Talk 1:  Yvette Tenney (in collaboration with Dedre Gentner)
         "What makes analogies accessible:  Experiments on the
         water-flow analogy for electricity."

         In analogy, knowledge can be transferred from a known
         (base) domain to a target domain, provided the learner
         accesses the analogy.  We used the water-electric current
         analogy to test the hypothesis that prior familiarity
         with the base domain (Experiment 1) and pre-training
         on the base domain (Experiment 2) increase the
         likelihood of noticing the analogy.  Results showed
         that greater knowledge of the base domain did not
         increase accessibility, although it did increase the
         power of the analogy if detected.

Talk 2:  Hermann Hartel
         "The electric circuit as a system:  A new approach."
         [...]

------------------------------

Date: Tue 30 Oct 84 09:28:59-PST
From: Paula Edmisten <Edmisten@SUMEX-AIM.ARPA>
Subject: Seminar - Student Models

 [Forwarded from the Stanford SIGLUNCH distribution by Laws@SRI-AI.]

DATE:        Friday, November 2, 1984
LOCATION:    Chemistry Gazebo, between Physical and Organic Chemistry
TIME:        12:05

SPEAKER:     Derek Sleeman
             School of Education & HPP

ABSTRACT:    The PIXIE Project: The Inference and Use of Student
             (user) Models

For a decade or more the importance of having accurate student models
to guide Intelligent Tutoring Systems (ITSs) has been stressed.  I
will give an overview of the several types of models which have been
inferred, and will talk in some detail about a system which infers
overlay models and about PIXIE, which uses process-orientated models.
Currently, all these techniques effectively determine whether the
current user's behaviour falls within a previously defined
model-space.  The focus of some current work is to see whether these
techniques can be extended to be more data-sensitive.  (Analogous
issues arise when an ITS or ES is attempting to reason with an
incomplete database.)

Issues which arise in the use of models to control (remedial)
dialogues will be addressed.

The seminar will conclude with an overview of the fieldwork shortly to
be undertaken.  PIXIE now runs on a PC (in LISP) and several of these
machines will be used to "diagnose" the difficulties which high school
students have with Algebra and maybe Arithmetic.  It is envisaged that
PIXIE will be used to screen several classes, and that the class
teachers will remediate students on the basis of the diagnostic
information provided by PIXIE.  These sessions will then be analyzed
to determine how "real" teachers remediate; remedial subsystem(s) for
PIXIE will then be implemented.




Paula

------------------------------

End of AIList Digest
********************