[comp.cog-eng] Technological overcomplexity in 1523

jbn@glacier.STANFORD.EDU (John B. Nagle) (08/01/88)

In article <585@sdics.ucsd.EDU> norman@sdics.UUCP (Donald A. Norman - 
danorman@ucsd.edu (or .bitnet)) writes about the overcomplexity of plows.
But the complexity of the problem is very real.  The difference between
a good plow and a bad one, in terms of the amount of plowing one can
do in a day with the same amount of pulling power, is huge.  Thomas
Jefferson, who, after all, made his money running a farm, worked on, and
published, the proper curves for plow blades.  The problem is comparable
to propeller design, which makes sense when you think about it.

There are interrelationships between plow design and whether a farmer should
use horses or oxen.  Horses eat more, but go faster.  On the other hand,
horses need more complex harness; a yoke will work, but the horse cuts off
his own wind when he really pulls, and a horse in a yoke does less useful
work than an ox.  It took people several thousand years to figure out
the solution to that problem.  In fact, draft horse harness wasn't perfected
until the 1930s, and only as part of a last-ditch effort to compete with
tractors.  The problems of hitching up multiple animals so that all of
them share the load equally, none of them can goof off, one man can drive
a large team, the turning radius is small enough to be useful in 
farming, the horses don't interfere with each other's movements, and
the harness can be put on in a reasonable time, are quite complicated.  See 
"The Draft Horse Primer" for details on this esoteric subject.

So the problem is inherently complicated.  The easy solution is available
today; just get a big tractor and bull your way through.  With good steels,
good plow designs, and a big engine, the technology becomes "user friendly",
because the tough design problems were solved back at the farm-equipment
factory.  

We see an exact analogy in computing.  It takes a lot of CPU power to make
a system user-friendly.  Systems such as the Macintosh use most of their
CPU power operating the user interface.  This only became feasible when
CPU power became cheap enough that it could be used as a big hammer to
hide the internal complexities of the system.

					John Nagle

bwk@mitre-bedford.ARPA (Barry W. Kort) (08/01/88)

The colloquy on complexity is most interesting.  

I saw an item in Scientific American regarding the measurement of
complexity:  The complexity of a system is measured by the amount
of information thrown away.

In a good design, many variations were tried and discarded.  An
assembled jigsaw puzzle represents an enormous amount of discarded
information.  There is a stage in the creation of a system where the
creator is manipulating a maximum amount of information.  That's
where things bog down.
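That measure can be made concrete with a back-of-the-envelope sketch (mine,
not from the Scientific American item): a solved n-piece jigsaw is one
ordering out of n! possible orderings, so assembling it discards roughly
log2(n!) bits of positional information.

```python
import math

def bits_discarded(n_pieces):
    """Information discarded by settling an n-piece jigsaw into its
    one correct ordering (ignoring rotations): log2 of the number of
    possible orderings."""
    return math.log2(math.factorial(n_pieces))

# A two-piece puzzle discards exactly one bit; a 500-piece puzzle
# discards a few thousand.
print(bits_discarded(2))
print(round(bits_discarded(500)))
```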

In the old days, complexity was symbolized by the Gordian Knot.

--Barry Kort

	"Nothing is as simple as it seems at first,

		or as hopeless as it seems in the middle,

			or as finished as it seems in the end."

nick@hp-sdd.HP.COM (Nick Flor) (08/02/88)

In article <587@sdics.ucsd.EDU> norman@sdics.UUCP (Donald A. Norman - danorman@ucsd.edu (or .bitnet)) writes:
>
>John Nagle provides a much more detailed analysis of why the plow is
>so complex a device.  Plus an analogy with modern computerware.  But
>that now lets us discuss how one should design so as to deal with this
>apparent complexity.
>
>One solution is to provide lots of options and adjustments (this was
>the solution adopted in 1523) -- this guarantees great flexibility at
>the cost of great complexity for the user. (Consider InterLisp or
>MacLisp, or vi, or emacs).
>

I think a better variation on the above is to provide the user with
a set of basis operations from which a larger set can be derived.
There are probably a number of such bases, so the best thing to do
is to select the set that results in the smallest number of
derivation steps for the more common operations, while providing
the main operations directly and efficiently.  

So, here's one design rule:

Design with complexity inversely proportional to usage.

and its corollary :-)

Design with efficiency proportional to usage.
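As a sketch of the basis idea (the command names here are invented for
illustration): bind the common operations directly to primitives, and derive
the rest by composition, so the frequent case stays a single step.

```python
# Hypothetical command basis: two primitives, plus a derived
# operation composed from them.
PRIMITIVES = {
    "compile": lambda src: "compiled " + src,
    "link":    lambda obj: "linked " + obj,
}

# The common compile-and-link case is provided directly, so the
# user pays one command rather than a derivation.
def build(src):
    return PRIMITIVES["link"](PRIMITIVES["compile"](src))

print(build("main.c"))  # -> linked compiled main.c
```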

My problem is that I'm not quite sure what is meant by "complexity".  
(And a whole bunch of other software engineers don't know either, 
 which is why we have programs that only the designers can use efficiently).

We usually say something is complex when:
a) We don't understand it
b) We can't do it
c) We understand it, and can do it, but it takes too long.  e.g. too many
   commands to type in just to compile and link a program.

The user can eliminate (a) through education, i.e. reading the manual, 
and (b) through practice.  It's the designer's responsibility to 
eliminate (c), keeping in mind the intelligence and capabilities
of the target users.

I guess the point I'm trying to make is that the reduction of complexity
is the responsibility of both the designer and the user, and the designer
alone cannot make something that is intrinsically simple or complex.


               "Yeah, that's it.  That's what I'm really trying to say.  
                Forget what I said before."
	        -- Saturday Night Live


Nick Flor
Just A Psychacker
-- 
+ Disclaimer: The above opinions are my own, not necessarily my employer's.   +
+ Oh, sure. Sure. I'm going. But I got  | Nick V. Flor           * * o * *    +
+ your number, see? And one of these    | Hewlett Packard SDD   * * /I\ * *   +
+ days, the joke's gonna be on you.     | ..hplabs!hp-sdd!nick  * * / \ * *   +

embick@tetra.NOSC.MIL (Edward M. Embick) (08/04/88)

>>John Nagle provides a much more detailed analysis of why the plow is
>>so complex a device.  Plus an analogy with modern computerware.  But
>>that now lets us discuss how one should design so as to deal with this
>>apparent complexity.
>>
>I think a better variation on the above is to provide the user with
>a set of basis operations from which a larger set can be derived.
>So, here's one design rule:
>Design with complexity inversely proportional to usage.
>and its corollary :-)
>Design with efficiency proportional to usage.
>
>My problem is that I'm not quite sure what is meant by "complexity".  
>(And a whole bunch of other software engineers don't know either, 
> which is why we have programs that only the designers can use efficiently).
>
The ideal approach is to have an underlying design that adjusts the level 
of prompts, and the macro capabilities of the user's commands, to the
perceived level of user competence.  Such indicators as the complexity of
user-entered commands, the way menus are traversed, and how often and what
type of help screens/menus the user calls up can be tracked and logged. 

The sophistication level of the user could be "automatically" acknowledged 
through terser prompts, the system "suggesting" macros for repetitious 
actions, and the allowance of more primitive (i.e. bit-picking) interaction
with the system.  The system would, of course, back off on increasing the
options if user confusion or excessive errors were detected.
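A minimal sketch of that tracking loop (class name and thresholds invented):
successful commands raise a crude competence score, errors and help requests
lower it faster, and prompt verbosity follows the score.

```python
class AdaptivePrompter:
    """Toy model of an interface that adjusts prompt verbosity to a
    perceived level of user competence.  Thresholds are arbitrary."""

    def __init__(self):
        self.score = 0  # crude competence estimate

    def record(self, event):
        if event == "ok":                 # command accepted first try
            self.score += 1
        elif event in ("error", "help"):  # confusion: back off quickly
            self.score -= 2

    def prompt(self):
        if self.score >= 5:
            return "> "                   # terse, for the expert
        if self.score >= 2:
            return "command> "
        return "Enter a command (type 'help' for a menu): "

p = AdaptivePrompter()
for _ in range(6):
    p.record("ok")
print(p.prompt())     # terse by now
p.record("error")
p.record("error")
print(p.prompt())     # verbosity backs off
```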

Our farmer would have a plough that would automatically adjust blade angle
for ox, horse, or tractor, etc., or the farmer could lock the blade and hook
onto a cow.
------------------------------------------------------------------------------
Ed Embick (If God wanted me to write legibly, He wouldn't have invented email)
Computer Sciences Corporation                               ___   ___   ___
4045 Hancock St.         MILNET:  embick@tetra.nosc.mil    /     /     /
San Diego, CA 92110                                        \___  ___/  \___
(619) 225-8401 x287

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (08/04/88)

In article <693@tetra.NOSC.MIL> embick@tetra.nosc.mil.UUCP (Edward M. Embick) writes:
>>>John Nagle provides a much more detailed analysis of why the plow is
>>>so complex a device.  Plus an anology with modern computerware.  But
>>>that now lets us discuss how one should design so as to deal with this
>>>apparent complexity.
>>>
>>I think a better variation on the above is to provide the user with
>>a set of basis operations from which a larger set can be derived.
>>		... etc. ...

>The ideal approach is to have an underlying design that adjusts the level 
>of prompts, and the macro capabilities of the user's commands, to the
>perceived level of user competence. ... 
>...  The system would, of course, back off at increasing the
>options if user confusion or excessive errors were detected.
>
>Our farmer would have a plough that would automatically adjust blade angle
>for ox, horse, or tractor, etc., or the farmer could lock the blade and hook
>onto a cow.

Having been a farmer, I can say that this is something one might have
seen back when plows were relatively new technology.  Look at the 
antiques: they sometimes have all kinds of neat features not commonly
found on current models.  Current models are well-suited to the land
upon which they are to be used, so you buy the right plow for the land.
(The same is done for automobiles; they are tuned for the area
in which they are to be sold.)  Result: a cheaper, less complex plow,
easier to use, and nothing that isn't needed has to be purchased.  
If something is needed at a later date, it *can* be added 
by adding modules (colters, shanks, etc.).

----------------------------------------------------------------------
Mark Plutowski
Department of Computer Science, C-024
University of California, San Diego
La Jolla, California 92093
INTERNET: pluto%cs@ucsd.edu	pluto@beowulf.ucsd.edu	
BITNET:	  pluto@ucsd.bitnet
UNIX:	  {...}!sdcsvax!pluto

nick@hp-sdd.HP.COM (Nick Flor) (08/04/88)

In article <693@tetra.NOSC.MIL> embick@tetra.nosc.mil.UUCP (Edward M. Embick) writes:
>The ideal approach is to have an underlying design that adjusts the level 
>of prompts, and the macro capabilities of the user's commands, to the
>perceived level of user competence.  Such indicators as the complexity of
>user-entered commands, the way menus are traversed, and how often and what
>type of help screens/menus the user calls up can be tracked and logged. 
>

You've introduced a potentially annoying variable into your tool --
time.  Your system must learn the user before it can perform efficiently
and in a less complex manner.  Like I said before, we must first define 
what is meant by complexity.  If I have to type in a whole bunch of simple 
commands because my tool hasn't figured me out yet, then I'm inclined to 
call the tool complex (i.e. too many steps to accomplish the goal).
There are other problems too, like misinterpreting actions and relearning 
for each new user.  More important, however, is the fact that a tool
designed with intelligence has the potential to make many mistakes, since
it must *predict* what the user's true intentions are.

A tool must do what the user asks of it, not what it thinks the user wants 
it to do.  

I guess an extreme example would be a "smart" gun that learns to fire
based on statistics it takes from the user.  Let's say it "figures out"
that the soldier always pulls the trigger when he aims at something moving
fast.  So, the soldier takes this gun into battle, hears a noise, quickly
points his gun in the direction of the noise.  Unfortunately, the noise
is made by one of his friends running away from the enemy.  <BANG>.
(Yeah, I know, it is an extreme example.  It does, however, illustrate
 a potentially dangerous problem with smart tools.)


Nick
-- 
+ Disclaimer: The above opinions are my own, not necessarily my employer's.   +
+ Oh, sure. Sure. I'm going. But I got  | Nick V. Flor           * * o * *    +
+ your number, see? And one of these    | Hewlett Packard SDD   * * /I\ * *   +
+ days, the joke's gonna be on you.     | ..hplabs!hp-sdd!nick  * * / \ * *   +

embick@tetra.NOSC.MIL (Edward M. Embick) (08/05/88)

In article <1391@hp-sdd.HP.COM> nick@hp-sdd.UUCP (Nick Flor) writes:
>In article <693@tetra.NOSC.MIL> embick@tetra.nosc.mil.UUCP (Edward M. Embick) writes:
>>The ideal approach is to have an underlying design that adjusts the level 
>>of prompts, etc.
>
>You've introduced a potentially annoying variable into your tool --
>time.  Your system must learn the user before it can perform efficiently
>and in a less complex manner.  
          ^^^^^^^^^^^^
>
>A tool must do what the user asks of it, not what it thinks the user wants 
>it to do.  
>
The premise I proposed is that the system will assume (yes, I know what
assuming can lead to) the user is a novice at the start.  Depending upon
the type of system, a "standard" user profile (or a choice of profiles to
determine user type) is established as the template for system interface.
Profiles could determine the command/menu/prompt terminology.  e.g. An
accountant may be comfortable referring to files, accounts, records, 
and fields, whereas a journalist may prefer stories, paragraphs, lines
and words.  Obviously the mapping of different terminologies wouldn't
necessarily be one to one.  In fact, over a period of time the system
might ask the user for preferred naming of objects.

I am also suggesting that the system not presume or preempt the user's
actions.  Rather, the system should allow the user to easily define macro
actions and also to break down system novice-level macro actions into
primitives.
This would allow the user to, over time, shape the behavior of the system
to efficiently perform the user's tasks in a manner easily comprehended
by others in the same environment performing like activities.
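In miniature, that macro facility might look like this (all names invented):
a macro is just a named list of primitive commands, so it can be run, shared,
or expanded back down into primitives for inspection.

```python
# Toy macro facility: macros are named lists of steps; expand()
# breaks any macro back down into primitive commands.
macros = {}

def define(name, steps):
    macros[name] = list(steps)

def expand(name):
    out = []
    for step in macros.get(name, [name]):
        out.extend(expand(step) if step in macros else [step])
    return out

define("build", ["compile", "link"])
define("ship", ["build", "test", "package"])
print(expand("ship"))  # -> ['compile', 'link', 'test', 'package']
```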

Some people are detail oriented and, given the choice, prefer to see and
be in control of all activity in a system.  Others just want to push
the button.  The system should accommodate both without making one or the
other have misgivings about the human/machine interface.
------------------------------------------------------------------------------
Ed Embick (If God wanted me to write legibly, He wouldn't have invented email)
Computer Sciences Corporation                               ___   ___   ___
4045 Hancock St.         MILNET:  embick@tetra.nosc.mil    /     /     /
San Diego, CA 92110                                        \___  ___/  \___
(619) 225-8401 x287

spf@whuts.UUCP (Steve Frysinger of Blue Feather Farm) (08/08/88)

>Thomas Jefferson, who, after all, made his money running a farm...

Actually, he consistently lost money running the farm; he survived
principally by selling off land periodically.  He also had a nail
factory on the place, with labor provided by some of his ~100 slaves,
and I think that operation turned a profit for a while, until nail-
cutting machines took off around the turn of the 19th century.
Don't get me wrong, there's a lot I like about Jefferson, but business
and farming sense were not his fortes, though he was indeed enthralled
with gardening.

liz@grian.UUCP (Liz Allen-Mitchell) (08/10/88)

Re Edward M. Embick's proposal about adapting an interface to the
user's skills:

It occurs to me that people do this all the time.  It is particularly
striking when it doesn't work right (e.g. when I walk into a computer
store and even though I am a lisp hacker, most sales people assume I
know nothing about computers and it is sometimes difficult to persuade
them otherwise).  It would be nice for computers to be this flexible as
well, but to do this right will require them to adjust quickly to a new
user.  If it is too slow, a sophisticated user will become impatient at
being treated as a novice and a less sophisticated user will become
confused if the interface changes too many times...  Yet, it will be
difficult for a computer to gather enough data to judge the skills of a
user in such a short time.  Maybe a more direct approach would work
better?  That is, ask the user something to try to judge his level?

Just some thoughts...
-- 
		- Liz Allen-Mitchell	grian!liz@elroy.jpl.nasa.gov
					ames!elroy!grian!liz
"God is light; in him there is no darkness at all." -- 1 John 1:5b

pete@wor-mein.UUCP (Pete Turner) (08/11/88)

In article <1664@grian.UUCP> liz@grian.UUCP (Liz Allen-Mitchell) writes:
>user in such a short time.  Maybe a more direct approach would work
>better?  That is, ask the user something to try to judge his level?

Why not start by asking what level the user is?


Pete

mdw@inf.rl.ac.uk (Mike Wilson) (08/16/88)

In article <1664@grian.UUCP> liz@grian.UUCP (Liz Allen-Mitchell) says:


>Re Edward M. Embick's proposal about adapting an interface to the
>user's skills:

>It occurs to me that people do this all the time.  It is particularly
>striking when it doesn't work right
... [example about computer store salespersons incorrectly judging the
knowledge of potential purchasers]
>Maybe a more direct approach would work
>better?  That is, ask the user something to try to judge his level?

Without stating all the arguments for and against auto-adaptive
interfaces, there are several people working on this whose work
may be of interest here. Computer store salespersons appear to
have a very small set (usually 1) of dialogue styles/scripts they
follow with potential customers, probably because they are inexperienced
both in the social skills required to generate new styles and in
confronting a range of customer types. 

Pierre Falzon at INRIA in Paris has done a series of studies looking
at the different dialogue styles/scripts in a variety of situations
of human-human communication. One example is that of medical 
receptionists answering the phone to people wishing to make appointments
with doctors. He clearly demonstrates several different scripts being
used and how the variety used by a receptionist is dependent on her/his
experience. He is using these studies as the basis for interactive
interfaces which can offer different dialogue styles/scripts for 
users with different levels of expertise in either computers or
the domain of application. Both the humans and the system interfaces
should select the appropriate dialogue styles/scripts on the basis of
the information imparted in the first few conversation turns.

Ref:

Falzon, P., Amalberti, R. and Carbonell, N. (1986) Dialogue control 
strategies in oral communication. In Foundations for Human-Computer
Communication, K. Hopper and L.A. Newman (eds.). Elsevier Science
Publishers B.V. (North-Holland), IFIP: Netherlands.

A second line of research has been into more psychological measures
of cognitive style (e.g. extraversion/introversion, visualiser/verbaliser,
field-dependent/independent) and how interfaces can be modified to 
optimise on these variables. Obviously, part of the problem here is 
selecting a suitable test to establish a categorisation of the user.

Ref: Fowler, C.J.H. and Murray, D.M. (1987) Gender and cognitive
style differences at the human-computer interface. In Proceedings
of INTERACT '87, Stuttgart, West Germany, September 1987.

mas@cs.bham.ac.uk (Angela Sasse <SasseMA>) (08/31/88)

>A second line of research has been into more psychological measures
>of cognitive style (e.g. extraversion/introversion, visualiser/verbaliser,
>field-dependent/independent) and how interfaces can be modified to 
>optimise on these variables. Obviously, part of the problem here is 
>selecting a suitable test to establish a categorisation of the user.
>
>Ref: Fowler, C.J.H. and Murray, D.M. (1987) Gender and cognitive
>style differences at the human-computer interface. In Proceedings
>of INTERACT '87, Stuttgart, West Germany, September 1987.

Wrong - a categorisation of the user won't get you anywhere.

The problem is that categorising users according to psychological
measures of style doesn't tell you anything about preferences for
different styles of interfaces or styles of learning, since the
theories behind these tests (a) are often extremely narrow in their
application; (b) in some cases there is no theory at all behind the 
measurement; and (c) these theories simply haven't got anything to
say about user-interface design.

In my personal experience, non-psychologists tend to be extremely
fond of applying psychological (or pseudo-psychological) constructs
to users. The line of thinking goes "If we could only categorise users
correctly, we could pull out the ideal interface for each user group."
But we still haven't got anywhere near compiling/integrating/developing
general principles of good (= sound in terms of cognitive ergonomics)
user interface design. This categorisation business is a gimmick -
trying to put icing on a cake that doesn't exist. Once we've developed
a science of the user interface, we can start worrying about that sort
of thing. And even then, the emphasis should be on theories that make
sense within the context of cognitive ergonomics, rather than recycling
dead ducks that have been thrown out by psychology (e.g. A and B types
and system response time, knowledge elicitation from introvert and 
extrovert experts).

Typically, Shneiderman (1987) lists 20 of these constructs, which could
potentially influence user behaviour, but has to admit that there is
no conclusive evidence to support this intuition. The only empirical
finding he cites is that females are significantly more fond of Pac-
man than males - due, he speculates, to "stronger oral tendencies"
amongst females.  What do we conclude from that - that women want an
edible interface? I rest my case.
-- 
Martina-Angela Sasse                   SasseMA@cs.bham.ac.uk
Dept. of Computer Science
The University of Birmingham
Birmingham, B15 2TT, UK