[comp.windows.misc] Sigh. Multitasking debate & research thereon

craig@unicus.UUCP (Craig D. Hubley) (03/09/88)

Well, yet another `is multitasking good' debate has flamed up
without even a good explanation of what multitasking is.
Pardon me for cross-posting this, but there's an abundance of heat
in comp.windows.misc, and usually an abundance of water in comp.cog-eng.
It would be nice if people could be pointed to the research literature
instead of being `corrected'.  On that note:

What multitasking means, to me at least, is the availability of more
than one *tool* at a time.  By availability, I mean it is visible/usable
with no more than a gesture and/or a command of some form.  No waiting,
though under some circumstances the new task may have to wait for resources
such as CPU time.  This is a from-the-hardware definition, that is,
a `task' is a computer task, not a user task: an OS definition.

Multitasking in the other sense is `human beings doing more than one thing'.
Now, `more than one thing' is also open to debate.  Suffice it to say that
usually humans have one intent, or a closely defined set of intents, which
are subject to interruption, change, etc.  Now the important thing:

		SINGLE INTENT does not equal SINGLE TOOL

			which is like saying

		a USER TASK is not a SYSTEM TASK

In order to send email, that is, to communicate with someone, I must 
specify or select his address or name, create and edit the message,
and route the message to him.  Although these can be integrated so as
to appear to be one tool, in fact they are not.  Although in this example
they act sequentially, with only the message-routing portion being
backgrounded, there are many examples, such as the ubiquitous edit-compile
cycle or the equally ubiquitous print spooler, where you want things to go
on in parallel.  Multitasking in operating systems is a *good thing*.
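The spooler case can be sketched in a few lines of Python (a hypothetical
spool_print_job standing in for real printer I/O): the system task runs in
the background while the user's single intent, editing, carries on in the
foreground.

```python
import threading
import time

def spool_print_job(document):
    # Stand-in for slow printer I/O: the spooler grinds away unattended.
    time.sleep(0.1)
    print(f"spooler: printed {document}")

# Hand the job to a background thread and return to the foreground task.
job = threading.Thread(target=spool_print_job, args=("report.txt",))
job.start()
print("editor: back to editing while the spooler runs")
job.join()
```

One user task, two system tasks, and the user never waits for the printer.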

For multi-humantasking in interfaces, a good review *with* design
implications is in the ACM SIGCHI's CHI+GI`87 Proceedings, the Rooms
paper by Card and Henderson.  Their work extends the window idea to
groups of windows in Rooms, each of which is related to a class of 
human tasks.  Their interface is most certainly multitasking.
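As a rough sketch (not Card and Henderson's actual implementation), the core
of the Rooms idea is just a mapping from a class of user tasks to a group of
windows that gets swapped in wholesale; the room names and window titles
below are made up for illustration.

```python
# Hypothetical Rooms-style model: each room groups the windows for one
# class of user tasks, and switching rooms swaps the whole group at once.
rooms = {
    "mail":   ["address book", "message editor", "outbox"],
    "coding": ["source editor", "compiler output", "debugger"],
}

def enter(room):
    # Entering a room makes its whole window group available at once.
    return rooms[room]

print(enter("mail"))
```

Note how the unit of switching is the *user* task, not any one system task.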

Another good example of multi-humantasking is driving a car.  Although
there is a single intent (getting to a particular place), and a single
tool (the car), the human being must use *two hand controls and two foot
controls*, to which most people become accustomed.  Imagine driving a car where the
dashboard was on a screen, and you had only one mouse!?!?  I think your
insurance premiums would go up.  I believe Buxton and others at U of Toronto
did some research on this, I don't know if it got formally into print.
They were experimenting with two pointers, one on each hand, for a while,
and using the head for pointing too.

I submit that limitations such as small screen size made pulldown menus
common, and that similar considerations ruled for the one-button mouse.
Both were acceptable *the first time* you used a system, and easy to 
predict. However, over longer periods of time, the Xerox STAR research
indicated that almost everyone preferred a two-button mouse, *with a
consistent protocol*.  Rules like `all selecting with the left, all
choosing with the right' turned out to matter much more than the number
of buttons.  Three did seem to be too many, though.
Of course, all this proves is that *most* people liked that system.
This is statistical evidence, and if you like one-button and find that
more consistent, fine, and you probably aren't alone.

There's another factor to that:  Why did the much-more-capable Lisa fail
and its stripped-down kid brother Mac succeed?  For that matter, why
did Multics fail and its stripped-down kid brother Unix succeed?
Cost is only part of the answer.  Unix was comprehensible to one person,
and I understand Multics was not.  That may have been true of the Lisa
as well, at least in the early `80s.  Now, of course, everyone wants
those `new' features.  Sigh.

I for one think that if you can point, you should be able to grasp,
and given a choice, I'll move my mouse a smidgeon to get the menu item I want,
rather than a screenful.  I find the great advantage to be in knowing where,
*relatively*, the menu will be.  Pulldown menus don't give me this.
Furthermore, I have to move my eyes a fair way from the area I'm concerned
with.

When it comes to pointing devices, try using a trackball sometime.  It's
harder to learn than a mouse, but *much* faster once you know it (at
least it was for me), and more intuitively suited to some protocols,
like scrolling.

When I manipulate a desktop, I have *two* hands to do it with.  When I use
a mouse, I have one.  Given that I have to move my hand off the keyboard
anyway (unless I had a board-mounted trackball), I would prefer to have
a trackball for my left hand.  Then I could scroll about the screen with
it, showing a portion perhaps of a larger workspace, and use my right hand
for selecting and grasping and all those `finer things'.  When you write,
how many of you hold or move the paper with your left hand as you write
with your right, or vice versa?  I find mouse-controlled scroll bars to
be a pretty weak piece of magic by comparison.

Well, cog-eng people, flame away.  The debate has been brought to your door.
Defend your honour with theory, please, not with guns.

This impending flame war brought to you by

	Craig Hubley, Unicus Corporation, Toronto, Ont.
	craig@Unicus.COM				(Internet)
	{uunet!mnetor, utzoo!utcsri}!unicus!craig	(dumb uucp)
	mnetor!unicus!craig@uunet.uu.net		(dumb arpa)

rjf@eagle.ukc.ac.uk (R.J.Faichney) (03/11/88)

(Followups directed to comp.cog-eng only)

In article <2313@unicus.UUCP> craig@unicus.UUCP (Craig D. Hubley) writes:
>
>..I believe Buxton and others at U of Toronto
>did some research on this, I don't know if it got formally into print.
>They were experimenting with two pointers, one on each hand, for a while,
>and using the head for pointing too.

According to Ralph Hill [1], 'Buxton and Myers [2] have demonstrated the value
of two-handed, or concurrent, input in real world tasks'. Hill goes on
to describe a UIMS which 'supports the implementation of user interfaces
where the user is free to manipulate multiple input devices and perform
several (possibly related) tasks concurrently.'

[1] Ralph D. Hill (U. Toronto), Supporting Concurrency, Communication, and
Synchronization in Human-Computer Interaction -- The Sassafras UIMS, ACM
Trans Graphics, 5(3), July 86, 179-210.

[2] W. Buxton and B. Myers, A study in two-handed input, in Proc CHI 86
Human Factors in Computing Systems, ACM, New York, 86, 321-326.

I've only read [1], not [2], so can't comment on it. The work's been done,
though, folks!

Robin Faichney