[net.ai] Notes from a talk by Alan Kay

malis%BBN-UNIX@sri-unix.UUCP (03/31/84)

From:  Andrew Malis <malis@BBN-UNIX>

        [Forwarded from the Info-Atari list by Tyson@SRI-AI.]

  Date: 23 Mar 1984 1214-EST (Friday)
  From: mit-athena!dm@mit-eddie (Dave Mankins )
  Subject: Notes from talk by Alan Kay at MIT

Dr. Alan Kay, one of the developers of Smalltalk and the Xerox Alto, and
currently a Vice President and Chief Scientist at Atari, gave a talk at
MIT yesterday (22 March 1984) titled: "Too many smart people: a personal
view of design in the computer field"

The abstract:

    This talk is about the battle between Form and Content in Design and
    why "being smart" usually causes content to lose.  "Insightful
    laziness" is better because (1) it takes maximum advantage of others
    work and (2) it encourages "rotating" the problem into its simplest
    essence -- often by changing it completely.  In other words: Point
    of view is worth 80 IQ points!

Here are some tidbits gleaned from my notes:

One of the problems with smart people is that they deal with
difficulties by fixing them, rather than taking the difficulty as a
symptom of a flaw in the design, and noticing "a rotation into a new
simplicity."

When preparing his talk he realized that what he wanted to say was
basically inconsistent, that

    1) You should do things over, and
    2) You shouldn't do things over.

"Both of these are true as long as you get the boundary conditions
right."  (There ensues an anecdote about working with Seymour Cray to
get an early CDC6500 up at NCAR.  The 6500 hardware did not normalize
its floating point operations, but that was "okay" because "any sensible
model will converge".  When the NCAR meteorologists (who answer the
question "what will the weather be like?" by looking out the window)
tried to put their models up on the CDC6500, they didn't work.  They
insisted that the Fortran compiler do the normalization for them.  Kay
cited this as evidence that their model was wrong.  Hmph, it's easy to
make fun of meteorologists...)
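
[An aside for context: "normalizing" a floating point result means
shifting the mantissa until its leading digit is nonzero, so that none
of the fixed-width mantissa field is wasted on leading zeros.  Here is
a toy decimal sketch in Python of why skipping that step costs
precision; the field width and the numbers are invented for
illustration, and this is not a model of CDC arithmetic.]

    MANTISSA_DIGITS = 4   # pretend the hardware keeps 4 decimal mantissa digits

    def store(value, exponent):
        """Store value as mantissa * 10**exponent, truncating the
        mantissa to MANTISSA_DIGITS digits (one before the point)."""
        mantissa = int(value / 10**exponent * 10**(MANTISSA_DIGITS - 1))
        return mantissa / 10**(MANTISSA_DIGITS - 1) * 10**exponent

    x = 0.0012345

    normalized   = store(x, -3)  # exponent chosen so the leading digit is nonzero
    unnormalized = store(x, 0)   # exponent left at 0; mantissa begins with zeros

    print(f"{normalized:.7f}")    # 0.0012340 -- four significant digits survive
    print(f"{unnormalized:.7f}")  # 0.0010000 -- only one digit survives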

Kay cited Minsky's Turing Award lecture, "Form and Content in Computer
Science" (JACM, April 1970): "Form and content aren't enough."  What has
happened to the computer science field over the last twenty years is
myopia:  "a myopia so acute that only the very brilliant can achieve
it."

As an example of this, Kay cited the decline from the SDS 940 in 1965 to
UNIX ("a mere shadow of what an operating system should be") to CP/M.  The
myopia in question is best illustrated by a failure of Kay's own: "When
we got our first IMSAI (mumble) we put Smalltalk up on it.  We had to do
a lot of machine coding on it, and we thought that wasn't right.  And it
performed about as well as BASIC does today.  We said 'This is clearly
inadequate.  What we need is 2Mb of memory and a fast disk.'  Thus we
left the door open for BASIC to crawl back out of its crypt."

He should be lynched.  At least he realizes the error of his ways.

He cited an article by Vannevar Bush in the Atlantic Monthly in 1945,
titled "As We May Think", in which Bush described a multi-screened,
pointer-based system with access to the world's libraries, drawing
programs, etc.  Bush, of course, thought it was just a few years away
(he called it "Memex").

He alluded to Minsky's notion of "science-envy": Natural scientists look
at the universe and discover its laws.  Computer scientists make up
their universes.  "What we do is more like an art."  "You can judge
whether or not a field is overcome by science-envy if it sticks the word
'science' into its name: 'computer science', 'cognitive science',
'political science'..."

He talked about some of his early work, with Ed Teitel, developing an
early personal computer (ca. 1965) with a calligraphic display and a
pointer.  It had "a wonderful language I developed, influenced by
Sutherland's Sketchpad (the best thesis ever done in computer science)
and Simula--everything I've ever done has been influenced by Sketchpad
and Simula.  Everyone who tried to use it hated it.  They all had about the
same reaction to it that everyone has to APL today."  Shortly after
working on this he saw Papert's work with LOGO and children, and
resolved that everything he did from that day forth would be
programmable by children.

Part of the machine's problem stemmed from the fact that it didn't have
enough memory.  This in turn stems from the fact that we cast hardware
in concrete before we know what we're going to do with it.

Some relevant maxims from my notes:

    "Hardware is software crysallized early."
    "We shouldn't try to build a supercomputer until we have something
        to compute."

His point in these two maxims was, I think, that we're very good at
building hardware before we really know what we're going to do with it.
(Is there a lesson here for Project Athena, with its tons of Ethernetted
VAXes "which will be used for undergraduate education" but a lack of
vision when it comes to educational software?)

He then described the Dynabook: a note-book sized interactive computer,
with about the same kind of interface as a notebook: you can doodle with
it, scribble, but it can also peruse the whole Library of Congress, as
well as past doodles.  "So portable you can carry something else, too."
[For a more complete description of Dynabook, see ``Fanatic Life and
Symbolic Death among the Computer Bums'', in "Two Cybernetic Frontiers"
by Stewart Brand.]

[An aside: one of the proposed forms of the Dynabook was a Walkman with
eyeglass flat-screen stereoptic displays (real 3-d complete with hidden
surfaces!).  This was punted because "no one would want to put something
on their head."  (Times change.)  Kay asserted that such displays ought
to be easier to produce than a note-book sized display, since there
would be fewer picture-elements required (a notebook would require maybe
1.5M pixels, while "the human eye can resolve only 140,000 points, so
you'd only have to put 140,000 pixels into your eyeglasses".  The flaw
in this argument is that most of those points the eye can resolve are in
the fovea, and you would have to put foveal resolution over the entire
field of the glasses, meaning more pixels, not fewer.  Such a display is
the opposite of window-oriented displays: instead of a cluttered desk you
have an orderly bulletin board, with everything displayed at once so the
user can look around the room at all the stuff.  If this room isn't
enough, you can walk into the next room and look at more stuff.]

More maxims:
    "Great ideas are better than good ones because they both take about
    the same amount of time to develop and the great ideas aren't
    obsolete when you're done."

An observation:
    "In all the years that we had the Altos no one at Xerox ever
    designed anything by starting with a drawing on an Alto.  They
    always started with a sketch on the back of an envelope."
    In 1970, Nicholas Negroponte and the Architecture Machine (ArcMac)
    group did the only study of what sketching is and what really goes
    on when you sketch, in a project called "Architecture by yourself",
    but their funding dried up and no one remembers that work now.

    [An aside: the Macintosh's MacPaint program is the best drawing
    program that Kay has ever seen.  (The Macintosh people called him
    up one day and said, "Come on over, we have a present for you.")
    When he started playing with it he had a two-fold reaction:
    "Finally", and "Why did it take 12 years?"]

Homage was paid to the Burroughs B5000, a computer developed in 1961:

    Its operating system was entirely written in a higher-level
        language (ALGOL)
    It had hardware protection (which was later recognized to be
        a capability protection system)
    It had an object-oriented virtual memory system
    It had virtual data
        (any data reference could have a procedure attached to it for
        fetching and storing the real data--a bit indicated which side
        of the assignment statement the reference was on; a rough
        sketch of the idea appears after this list)
    It was a multiprocessor (it had two processors, and much of the
        protection scheme was built in order to allow the two processors
        to work together).
    It had an integrated stack (which, sadly, is the only thing that
        people seem to remember).
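
[An aside on the "virtual data" item above: the idea of a data reference
that carries its own fetch and store procedures maps loosely onto the
Python sketch below.  The class, the names, and the temperature example
are invented for illustration--this is an analogy, not B5000
architecture.]

    class VirtualCell:
        """A data reference whose reads and writes go through attached
        fetch and store procedures instead of raw storage."""

        def __init__(self, fetch, store):
            self._fetch = fetch     # run when the reference is read
            self._store = store     # run when the reference is assigned

        def __get__(self, obj, objtype=None):   # right-hand side of "="
            return self._fetch()

        def __set__(self, obj, value):          # left-hand side of "="
            self._store(value)

    # Invented example: a "celsius" datum whose real storage is in Kelvin.
    backing = {"kelvin": 273.15}

    class Record:
        celsius = VirtualCell(
            fetch=lambda: backing["kelvin"] - 273.15,
            store=lambda v: backing.__setitem__("kelvin", v + 273.15),
        )

    r = Record()
    r.celsius = 20.0       # the store procedure runs; backing becomes 293.15
    print(r.celsius)       # the fetch procedure runs; prints roughly 20.0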

"This was twenty years ago!  What happened, people?"

The B5000 had some flaws:
    The virtual data wasn't done right: there were too many
        architectural assumptions about physical data formats.
    "Char mode," which eliminated all the protections.  This was
        provided to let programmers used to the 1401 (I think) feel
        comfortable.

User interface observations:

Piaget's three stages of development:

    Doing ----> Images -----> Symbols

doing: "a hole is to dig"
images: "getting the answer wrong in the water glass experiment"
symbols: "so we can say things that aren't true"

Bruner did a study that indicated these weren't stages but three areas
competing for dominance--as we mature, symbols begin to win out.

Ha...man did a study of inventiveness and creativity among
mathematicians and discovered that most mathematicians do their work
imagistically, very few of them work by manipulating symbols.  Some
mathematicians (notably Einstein) actually have a kinesthetic ability to
FEEL the spaces they are dealing with.

From psychology comes a principle applicable to user interfaces:

Kay's law: Doing with Images generates Symbols.

He cites Papert's "Mindstorms", where Papert describes programming a
computer to draw a circle.  A high school student working with BASIC
would have before her the dubious assertion that a circle and
x**2+y**2=C are related.  A child, instructed to "play turtle" will
close her eyes while walking in a circle and say "I move forward a
little, then I turn a little, and I keep doing that until I make a
circle".  This is how a differential geometer views a circle.  Papert's
whole book is an illustration of Kay's Law.
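
[An aside: the child's recipe translates almost directly into code.  A
minimal Python sketch, with no graphics--it just accumulates the points
the turtle would visit; the step count and step length are arbitrary:]

    import math

    def turtle_circle(steps=360, step_length=1.0):
        x, y, heading = 0.0, 0.0, 0.0
        turn = 2 * math.pi / steps     # "turn a little": an equal share of a full turn
        points = []
        for _ in range(steps):
            x += step_length * math.cos(heading)   # "I move forward a little"
            y += step_length * math.sin(heading)
            heading += turn                        # "then I turn a little"
            points.append((x, y))                  # "and I keep doing that..."
        return points

    pts = turtle_circle()
    print(pts[-1])   # the path closes: back (within rounding) at ~(0.0, 0.0)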

User interface maxims:
    Immediacy
        What you see is what you get (WYSIWYG)
    Modeless
        Always be able to start a new command without having to clean up
        after the old one.
    Generic
        What works in one place works in another
    User illusion
        Users make models of what goes on inside the machine.  Make the
        system one in which most of the user's guesses are valid.  Not "some
        of the time it's wonderful, but most of the time you get

***Sender closed connection***
