[comp.sys.next] consistency and human learnability

aarons@cvaxa.sussex.ac.uk (Aaron Sloman) (02/03/89)

[Re-posting. First attempt seems to have failed.]
Hi Don,

I've only just seen this

>From: norman@cogsci.ucsd.EDU (Donald A Norman-UCSD Cog Sci Dept)
>Subject: Re: replacing the desktop metaphor (Why any metaphor?)
>Message-ID: <673@cogsci.ucsd.EDU>
>Date: 25 Dec 88 17:34:01 GMT

>I suspect that metaphors are useful in keeping consistency.  But
>now Jonathan Grudin is about to present a paper in CHI 89 arguing about
>the foolishness of consistency: systems are often improved by
>violations.

I was pleased to see this. I have often annoyed people interested in
programming languages and learning environments, by attempting to defend
the following slogan:

	"Power is more important than consistency"

The most obvious example of the trade-off is the comparison between any
natural language (all of which, I believe, are very powerful but riddled
with inconsistencies) and either predicate calculus or any other
formalism that logicians and mathematicians have devised. For reasons
which I do not fully understand <but see below>, natural languages,
despite all their complexity and inconsistencies, seem to be things that
all (or should I say most?) human beings learn with far less resistance
than the simpler, more consistent, artificial formalisms.

Let's call the former "scruffy" formalisms, the latter "neat"
formalisms, following Bob Abelson's labelling of AI types. (Clearly
there's a whole spectrum of cases, with most programming languages a
curious mixture of neatness and scruffiness.)

Neat formalisms, including predicate calculus, BNF and number notations,
are learnt, and put to very good use, by a subset of the population, for
a subset of their activities. So this is not an all-or-nothing issue.
(I've never met a logician or mathematician who attempts to communicate
with her children solely using a neat formalism.)

Also, although it is probably clear that overall natural languages are
more powerful and general than any artificial and neat formalism so far
devised (e.g. natural languages, or at least the ones I know about,
contain their meta-languages, and allow creative deployment using
metaphor and other devices), there are specific kinds of power that they
don't have. E.g. try to explain in English what it means for something
to be increasing its speed while decreasing its acceleration, then
explain it using the notation of differential calculus. So further
development of this topic would require a taxonomy of kinds of power of
formalisms.
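For instance (my rendering, not in the original post), the calculus
version of that example is a single line:

```latex
% Speed increasing while acceleration decreases:
\frac{dv}{dt} = a(t) > 0
\quad\text{while}\quad
\frac{da}{dt} = \frac{d^{2}v}{dt^{2}} < 0
```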

I suspect that one reason why the scruffy, more powerful natural systems
are more suited to the human mind is that they handle far more special
cases directly, e.g. using particular words, phrases, idioms, etc., that
just have to be memorised, rather than interpreted on the basis of
general rules. By contrast, the neat artificial systems require you to
do some problem-solving to find the right construction, or some analysis
and interpretation to understand one produced by someone else.

(The best way to explain how 'Can you pass the salt?' is interpreted as
a request rather than a question, is probably by saying that people
simply remember that that is how it is used. Of course, it is possible
to derive the interpretation using very general principles and
assumptions, but nobody need bother to derive this if they simply learn
the usage along with all the other bizarre special forms of expression
encountered in natural languages. E.g. I can do something for your sake
whether you have a sake or not. Of course, general principles may
explain how something got into the language in the first place, even if
they play no role in the particular uses of the construct.)

A common observation may explain all this:
Human brains appear to contain very powerful and fast associative
storage mechanisms with very large storage capacity. They also appear to
have relatively slow and incomplete problem solving mechanisms. This
suits the learning and use of large numbers of particular cases, rather
than the derivation of particular cases using powerful generative
principles.

Moreover, I don't think this is simply a feature of the human brain -
pressures toward this kind of imbalance are probably a result of design
requirements for any physical implementation of an intelligent system
that generally has to act within severe time constraints. This is
because (almost) all symbolic derivational processes are inherently
combinatorially explosive.

However, any formalism that copes directly with lots of special cases,
i.e. has constructs defined specifically to deal with them, is far more
likely to exhibit inconsistencies than a formalism that has a relatively
small and powerful set of primitives which can be combined to generate
all the special cases in a principled way. This is because checking for
consistency is also an inherently combinatorially explosive process: the
number of things to check is an alarmingly fast-growing function of the
number of items in the system when the inconsistencies can involve
relationships between arbitrarily large sets of items.
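A toy count (my illustration, not Sloman's) makes the growth concrete:
if inconsistencies could only ever involve pairs of constructs, the
number of checks would grow quadratically in the size of the system, but
once any subset of constructs can jointly conflict, it grows
exponentially:

```python
from math import comb

def pairwise_checks(n):
    # Inconsistencies only between pairs: quadratic growth, n*(n-1)/2.
    return comb(n, 2)

def subset_checks(n):
    # Inconsistencies among any subset of two or more constructs:
    # exponential growth (2^n minus the n singletons and the empty set).
    return 2 ** n - n - 1

for n in (10, 20, 40):
    print(n, pairwise_checks(n), subset_checks(n))
```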


Of course there are all sorts of exceptions, including the case of
people using a system that is inherently simple and therefore needs only
a relatively simple formalism (e.g. arithmetic?) or people using a
system only infrequently, so that they can't be expected to remember all
the special cases. Perhaps if human languages were not used so
frequently in daily life they'd have evolved different characteristics?


If, for the reasons indicated, scruffy and powerful systems are
generally easier for people to learn and use (on a regular basis) than
neat consistent systems that obtain their power from generative rules,
then people designing learning environments (and increasingly ALL
computing systems will be learning environments for their users), will
be under strong pressure to sacrifice the requirement of consistency.

Dare I say QED?

Incidentally, all this is one reason why I favour Pop-11 over Lisp (or
LOGO) as a programming language for beginners. The syntax of Lisp is
elegant and very powerful if you can parse it, whereas that of Pop-11
has lots of special-case constructs and is highly redundant, yet is
apparently simpler for people to parse (though not simpler for computers
to parse). I think the redundancy helps to make it easier for human
brains to take in, despite the greater surface complexity, such as the
use of matching pairs of opening and closing keywords:
	until ... enduntil
	for ... endfor
	define ... enddefine
	if ... endif, etc.

[This needs systematic research]
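One concrete payoff of the redundancy can be sketched (my own
illustration, not from the post): distinct closing keywords let a
checker localise a nesting mistake at the offending token, whereas a
uniform closer like ')' can only reveal the problem indirectly, often
much later:

```python
# Pop-11-style matched opening/closing keywords.
OPENERS = {"if": "endif", "for": "endfor",
           "define": "enddefine", "until": "enduntil"}
CLOSERS = set(OPENERS.values())

def check(tokens):
    """Return None if nesting is consistent, else an error message."""
    stack = []
    for i, tok in enumerate(tokens):
        if tok in OPENERS:
            stack.append(tok)
        elif tok in CLOSERS:
            if not stack:
                return f"token {i}: unexpected {tok}"
            opener = stack.pop()
            if OPENERS[opener] != tok:
                # The redundant closer pinpoints the faulty construct.
                return f"token {i}: {opener} closed by {tok}"
    if stack:
        return f"unclosed {stack[-1]}"
    return None

# The mistake is reported at the offending closer, not at end of input:
print(check(["define", "if", "endfor", "enddefine"]))
```

With uniform parentheses, the same token stream would nest without
complaint, and the error would only surface as a missing bracket at the
very end, far from its cause.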

Returning to Macs and the like:
The desktop metaphor may be a simple and consistent one for a range of
relatively simple tasks. But what about:

	'Show me all the files in folders A and B that have the
	substring "prog" in their names.'

	'Move everything that I haven't looked at for at least 5 days
	to folder OLD.'

	'Whenever anyone else looks at any of my files, please add their
	names to my nosey file.'

	'When I'm getting near my disc quota, send me a mail message.'

	'If any mail message arrives mentioning grants, tell me immediately.'

"Direct manipulation" analogous to shoving things around on desktops or
in rooms is relevant to only a tiny subset of the things most of us
really want to do with information systems. Maybe only the first few
things we want to do...
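For comparison, here is a sketch (my own, in Python; the folder names
and the OLD destination are hypothetical) of how directly the first two
requests can be expressed linguistically once you step outside the
desktop metaphor:

```python
import os
import shutil
import time

def files_matching(folders, substring):
    """All files in the given folders whose names contain substring."""
    hits = []
    for folder in folders:
        for name in os.listdir(folder):
            if substring in name:
                hits.append(os.path.join(folder, name))
    return hits

def move_stale(folder, dest, days):
    """Move files not accessed for at least `days` days into dest."""
    cutoff = time.time() - days * 86400
    os.makedirs(dest, exist_ok=True)
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            shutil.move(path, dest)
```

Neither request has any natural "shove an icon around" rendering; both
are one-liners in a command language.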

>  Where consistency and metaphor and consistent
> system images-mental models help and where they hinder is not yet
> properly understood.

> Time for some more research, folks.

> don norman

I agree!

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cogs@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cogs%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cogs.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cogs@uk.ac
        or    aarons%uk.ac.sussex.cogs%ukacrl.bitnet@cunyvm.cuny.edu

    UUCP:     ...mcvax!ukc!cogs!aarons
            or aarons@cogs.uucp
IN CASE OF DIFFICULTY use "syma" instead of "cogs"

dykimber@phoenix.Princeton.EDU (Daniel Yaron Kimberg) (02/05/89)

[I've lost the attribution line, but >> prefaces the remarks of Donald Norman]
In article <548@cvaxa.sussex.ac.uk> aarons@cvaxa.sussex.ac.uk (Aaron Sloman) writes:
>>I suspect that metaphors are useful in keeping consistency.  But
>>now Jonathan Grudin is about to present a paper in CHI 89 arguing about
>>the foolishness of consistency: systems are often improved by
>>violations.

>	"Power is more important than consistency"

    I wonder if it might not also be worthwhile developing the idea that
consistency is important, but misunderstood.  Of course, I'm speaking from
relative ignorance since I haven't read Grudin's paper.  But take, for
instance, a set of options displayed to the user.  Does consistency require
that all the buttons look the same, or different?  [ugly example, but you
get the idea]  Should the user interface be internally consistent?  Or
should it be consistent in its relationship to the system as a whole?
I tend towards the idea that the interface should first be made
consistent with the functioning of the system (different-looking menus for
different sorts of options, perhaps), and only then made internally
consistent in the interest of simplicity.  Two similar-looking buttons for
different functions are a form of interface-function inconsistency.  I suspect that
interface-interface consistency (same look/feel for all buttons) is only
appropriate in situations where it doesn't reduce the salience of the
important relation between the interface and the system itself, but instead
serves to reduce the number of potentially confusing inconsistencies which
are equally salient but are not indicative in any useful (e.g. non-redundant)
way of state differences.  This is somewhat echoed in the literature in the
concern that visual representations of internal state variables be available
on-screen.
    I wonder if someone who's familiar with Grudin's work could mention
whether or not there's any indication as to which of these sorts of
consistency are covered, or if he makes this sort of distinction explicitly.

>(The best way to explain how 'Can you pass the salt?' is interpreted as
>a request rather than a question, is probably by saying that people
>simply remember that that is how it is used. Of course, it is possible
>to derive the interpretation using very general principles and...

It would be a good thing to note at this point that "remember" as you've
used it above probably doesn't mean memory in the traditional sense,
but rather (and more in the spirit of Dr. Norman's work) the
instantiation of a certain schema.  We would have no more trouble deriving
the same interpretation from "might you pass the salt?" or "do you think
you could get me a glass of water?"

>If, for the reasons indicated, scruffy and powerful systems are
>generally easier for people to learn and use (on a regular basis) than
>neat consistent systems that obtain their power from generative rules,
>then people designing learning environments (and increasingly ALL
>computing systems will be learning environments for their users), will
>be under strong pressure to sacrifice the requirement of consistency.
>
>Dare I say QED?

I hope not.  As Dr. Norman wrote:

>>  Where consistency and metaphor and consistent
>> system images-mental models help and where they hinder is not yet
>> properly understood.

And then he said something like "Time for more research" [I lost the line].
Not a bad idea.

                                                 -Dan