[comp.society.futures] New Input Devices

XLACHA1%WEIZMANN@BUACCA.BU.EDU (Omer Zak) (01/13/89)

A recent poster suggested that the participants of this discussion
group design a new input device which doesn't suffer from the problems
of keyboards, and then ask one of the computer companies to manufacture
it.

Modern computers (such as the IBM PC and its clones) have detachable
keyboards, i.e., you can pull the plug and substitute your own input device
for the ordinary keyboard.  So there is no need to beg IBM to produce an
alternative keyboard; any small company can do it, given the interface
specs.  Some improvements can be implemented by software alone (case
in point:  the Dvorak keyboard layout).
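
As a sketch of how a software-only remap works (hypothetical code; the
table below covers only the home row of the real QWERTY-to-Dvorak
correspondence):

```python
# Software-only keyboard remapping, as with the Dvorak layout:
# intercept each typed character and substitute the character the
# alternative layout assigns to that physical key.
# Partial table: QWERTY home row -> Dvorak home row.
QWERTY_TO_DVORAK = {
    'a': 'a', 's': 'o', 'd': 'e', 'f': 'u', 'g': 'i',
    'h': 'd', 'j': 'h', 'k': 't', 'l': 'n', ';': 's',
}

def remap(text):
    """Translate keys typed on a QWERTY board as if Dvorak were active."""
    return ''.join(QWERTY_TO_DVORAK.get(ch, ch) for ch in text)
```

Striking the physical QWERTY keys `j`, `k`, `l` would then produce the
Dvorak characters `h`, `t`, `n` -- no hardware change needed.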

It is my understanding that the real roadblock to introducing a better
keyboard is the need to re-learn to use such a new device.  The standard
keyboard is learned once and for all; for a new keyboard design, one would
need to re-learn and re-practice to use it optimally.
Perhaps the reason the companies are not exactly flooding the market with
alternative keyboards (even though several designs have been proposed by
researchers) is the rather small demand for this kind of change.

Besides the above, there is another question:  how many words (or concepts)
per minute does your brain form when you think?  If you can write or type
as fast as you think, then further improvements in input technology wouldn't
increase your productivity (except when you are trying to transcribe
a courtroom discussion into writing).
According to my own experience, my productivity would increase not through
an optimal keyboard design, but through a faster way of moving the cursor
(a touch-screen would serve this better than a mouse) and online access to
a library of phrases (cliches?) which I commonly use and would like to
enter by a single operation (such as a keypress or pointing).  Those
phrases could be commonly-used commands, long words which I use often,
often-used FORTRAN code fragments, etc.
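
Such a phrase library is essentially an abbreviation expander.  A toy
sketch in Python, with invented trigger names and entries:

```python
# A phrase library: one short trigger expands to a commonly used
# command, long word, or code fragment with a single operation.
# The triggers and phrases below are illustrative, not standard.
PHRASES = {
    'impl': 'IMPLICIT NONE',        # an often-used FORTRAN fragment
    'wrt':  'WRITE (6, *)',
    'sinc': 'Sincerely yours,',
}

def expand(token):
    """Return the stored phrase for a trigger, or the token unchanged."""
    return PHRASES.get(token, token)
```

One keypress (or one pointing gesture at a menu of these triggers) then
stands in for a whole line of text.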

Interface efficiency is not only WPM.  Let's think in terms of TPM (Thoughts
Per Minute).  And let's try to make it possible to express every idea in
as few keystrokes as possible, and to be able to figure out what keystrokes
are needed in no time (aside from relying upon one's memory).
                                                           --- Omer
"Be accessible via phone to deaf persons.  Make sure you have a Bell 103
compatible modem at home."

garye@hpdsla.HP.COM (Gary Ericson) (01/18/89)

Pardon if this is a little long.  I've been mulling it over for a while now...

> ... there is another question:  how many words (or concepts)
> per minute does your brain form when you think?  If you can write or type
> as fast as you think, then further improvements in input technology wouldn't
> increase your productivity...
> 
> Interface efficiency is not only WPM.  Let's think in terms of TPM (Thoughts
> Per Minute).  And let's try to make it possible to express every idea in
> as few keystrokes as possible, and to be able to figure out what keystrokes
> are needed at no time (aside from relying upon one's memory).
> 
> Omer Zak
> ----------

This articulates something I've been thinking about.  Taking a step back, what
is it we're trying to do by typing on a keyboard?  We're communicating, both
with the computer and with other human beings (sometimes indirectly through
written documents and directly with email, etc.).  We don't have to use a
natural-looking language to talk to the computer (in fact, a lot of what I type
for the computer looks like garbage to me, e.g., awk, yacc, csh, etc. 8^).  We
do have to use something 'human' in talking with others, though, although that
doesn't strictly have to be written language either.

Without a computer, how do I communicate with others?  By talking (English),
writing (English), gesturing (body language), and drawing pictures
(graphics[?]).  I would say that these display an inverse relationship between
precision and speed of information transmission.  That is:

	writing		highest precision	lowest speed
	talking			.		      .
	gesturing		.		      .
	drawing		lowest precision	highest speed

Gesturing might be lower precision than drawing, I'm just guessing.  But I do
think there is one fundamental difference between writing/talking and drawing
(and maybe gesturing), and that is that writing/talking is a serial exercise
while drawing conveys information in a parallel manner.

The fact that drawings convey a lot of information is, I think, reinforced
by the number of techniques that have been developed to use drawing and
sketching to help organize and express a person's thinking (techniques that
let you use symbols and figures to "brainstorm" ideas, create associations
between thoughts, do design, etc.).  This makes me think that maybe we're
too myopic
with respect to input devices, considering only textual input.  True, most
computer systems today lean heavily on textual communication (even windowing
systems with pull-down menus tend to just allow the user to enter text by
selecting words from a list - it's still text, you just are able to avoid
explicitly typing it), but I think that's just historical, caused by the lack
of alternate input devices resulting from limited technology.  I wonder if
things would change much if DataGloves and other exotic devices were cheap and 
available.

To incorporate drawings into communications, I think the interface would have
to allow you to *fluidly* (is that a word?) move between text and drawings.
My first thought on this has been a pen and tablet (where the screen is the
tablet, none of this indirect stuff) where I can make sketches and hand-write
text (and walk through menus, execute programs, etc.) without having to
change input devices.  If I wanted to write a large block of text quickly, a
keyboard (or some other text-intensive device) would be available, but I should
be able to go back and edit the text with the pen ("gestural text editing").

This last sentence acknowledges that there are times when I want to enter words
into the computer, and if there are a lot of them, then I want to do that in 
the most efficient manner.  At this point, then, we can discuss keyboards or 
their replacement.  

When you write, you record the individual letters of words, and thus we have 
invented keyboards that let you enter those letters quickly.  But when you 
talk (and when you read, especially if you're a speed-reader), you communicate
whole words/sentences/ideas - you don't spell out each letter (in talking, you
do pronounce most letters, but not all of them, e.g., "silent" letters and 
combinations like 'ch').  That's why talking is typically faster than writing.

Is there any way to come up with a new device that will allow you to input
thoughts at this kind of higher level, rather than letter-by-letter?  My
impression is that shorthand does this as does court reporting.
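
Shorthand and court reporting both rely on chorded entry: several keys
struck at once name a whole word, with unknown words spelled out letter by
letter.  A toy sketch in Python, with invented chord assignments:

```python
# Stenotype-style entry: a chord (several keys struck simultaneously,
# represented here as a frozenset) maps to a whole word; unknown
# chords fall back to letter-by-letter spelling.
# The chord-to-word assignments are invented for illustration.
CHORDS = {
    frozenset('TH'): 'the',
    frozenset('ND'): 'and',
    frozenset('NT'): 'not',
}

def stroke(keys):
    """Resolve one chord to a word, or spell the keys out."""
    chord = frozenset(keys)
    return CHORDS.get(chord, '-'.join(sorted(keys)))
```

One stroke per word, rather than one per letter, is exactly how court
reporters keep up with speech.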

Gary Ericson - Hewlett-Packard, Workstation Technology Division
               phone: (408)746-5098  mailstop: 101N  email: gary@hpdsla9.hp.com

bowles@MICA.BERKELEY.EDU (Jeff A. Bowles) (01/20/89)

"Is there any way to come up with a new device that will allow you
to input thoughts at this higher-level, rather than letter-by-letter?"

This sounds like a request to do something like what exists for
people who know sign language for the deaf, in which "most common words"
have a sign (or two or three) and others are spelled out when necessary.
Let's look at the one word I remember....

"Not" - Make a fist with your right hand, with thumb resting on top of
	index finger. Put that hand to under your chin, thumb touching
	your throat. Bring hand forward, running thumb along chin, as if
	you're making a rude gesture. (At least, that's what I remember
	'not' being, taken from a play....)

"Not" - N-O-T, signed as three letters. Harder to recognize quickly, takes
	more practice, I suspect.

One immediate difference is that the body gestures require a wider
bandwidth - more than just fingers and possibly wrists. Are we limited
to using 26 letters to talk? Similarly, are we limited to a typewriter
keyboard, QWERTY or not?

A lot has come out of the notion that, for inputting music, a music keyboard
is used, or perhaps something that looks like a MIDI sax or clarinet; the
mouse (and touchscreens) are similar examples.

We spend all this time trying to figure out how to present data to make
it as intuitive as possible, but not nearly enough trying to input it easily.

And for letters, I think that what I'd want is a dictating machine, albeit
entirely electronic. For inputting English text, there is little else that
would be analogous.

Put that on your [electronic] desktop.

	Jeff Bowles

duncan@geppetto.ctt.bellcore.com (Scott Duncan) (01/20/89)

In article <400011@hpdsla.HP.COM> garye@hpdsla.HP.COM (Gary Ericson) writes:
>
>...what is it we're trying to do by typing on a keyboard?  ... communicating,
>both with the computer and with other human beings (sometimes indirectly
>through written documents and directly with email, etc.).  We don't have to
>use a natural-looking language to talk to the computer (in fact, a lot of what
>I type for the computer looks like garbage to me
	[...]
>Without a computer, how do I communicate with others?  By talking (English),
>writing (English), gesturing (body language), and drawing pictures
>(graphics[?]).
	[...]
>                             This makes me think that maybe we're too myopic
>with respect to input devices, considering only textual input.  True, most
>computer systems today lean heavily on textual communication (even windowing
>systems with pull-down menus tend to just allow the user to enter text by
>selecting words from a list - it's still text, you just are able to avoid
>explicitly typing it), but I think that's just historical, caused by the lack
>of alternate input devices resulting from limited technology.  I wonder if
>things would change much if DataGloves and other exotic devices were cheap and 
>available.
	[...]
>To incorporate drawings into communications, I think the interface would have
>to allow you to *fluidly* (is that a word?) move between text and drawings.
	[...]
>When you write, you record the individual letters of words, and thus we have 
>invented keyboards that let you enter those letters quickly.  But when you 
>talk (and when you read, especially if you're a speed-reader), you communicate
>whole words/sentences/ideas - you don't spell out each letter (in talking, you
>do pronounce most letters, but not all of them, e.g., "silent" letters and 
>combinations like 'ch').  That's why talking is typically faster than writing.
	[...]
>Is there any way to come up with a new device that will allow you to input
>thoughts at this kind of higher level, rather than letter-by-letter?  My
>impression is that shorthand does this as does court reporting.

I think one important issue here is that we have managed to more easily come up
with computer applications which can process text more effectively than they
can process other forms of information.  It is not simply a matter of getting
computers to accept (and reproduce upon output) other forms of information than
text characters, it is also a matter of representing this data internally and
having some agreed upon rules (syntactic and semantic) for processing it.

In the above discussion, one aspect of why talking, and gesturing, and drawing
are faster than writing is that the human processor does a darn good job at
comprehending the meaning of the data coming into it!  We do NOT have widely
available, efficient software to do much of the processing of image and
voice input that the human brain does (metaphorically speaking regarding
brain software, that is).

At a local ACM meeting a few years ago, Brian Kernighan was asked about various
workstation technologies as compared to the 'little languages' approach.  He
made the point that, once you introduce the need to have a human being interact
with the computer, i.e., the computer cannot (or is not supposed to) process
on its own, then you reduce some of the power of the processing that could go
on and diminish the ability to link together tools as effectively.  (All this
is my interpretation of what I heard, so you can ignore that I claim Kernighan
said it and just accept the opinion on face value if you like.)

I guess what I'm getting at is:  talk about new input and output technology
is fine, but what do we expect the computing system, apart from human
intervention, to DO with the other kinds of data?  It will have to be
translated into something the computing system can manipulate, and that's
still a serious problem, since images, gestures, etc. are more open to
interpretation as to their meaning than text.  (Yes, I know the myriad
interpretations that can be made
of a given natural language expression, but we have, at least, agreed on some
subset of these for programming languages, etc.  People still argue over what
icons should mean and whether dragging a disk icon to a trash can is proper.
We need to agree to some of this before we can expect the computing system to
deal with raw graphic input I would imagine.)

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (201-699-3910 (w)   201-463-3683 (h))

peter@ficc.uu.net (Peter da Silva) (01/24/89)

In article <13549@bellcore.bellcore.com>, duncan@geppetto.ctt.bellcore.com
(Scott Duncan) asks what good it is to let computers accept multimedia
data (voice, imagery, video, music) when they can't really process it very
well...

[pardon if I summarised you badly, but my input device has a broken finger]

One of the things you have to be aware of when you consider how a computer is
to process data is that it's processing it for people to eventually use again.
My experience with a multimedia machine (the Amiga) has been that all sorts
of stuff becomes grist for the mill. All sorts of data... animations, images,
sampled sounds, music, and so on... are floating around. There is a pretty
good standard file format (IFF) for all these things.

I imagine the Mac people have similar experiences with images, drawings, and
so on... the Mac is stronger in those fields.

So, in practice, it does a lot of good to let your cpu chew on bitplanes and
the like...
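
For the curious, an IFF file is a "FORM" container holding chunks, each a
4-byte ASCII ID followed by a big-endian 32-bit length, the payload, and a
pad byte when the length is odd.  A minimal chunk reader in Python (a
sketch over a flat run of chunks, ignoring the enclosing FORM header):

```python
import struct

def read_chunks(data):
    """Collect (chunk_id, payload) pairs from a flat run of IFF chunks.
    Each chunk is: 4-byte ASCII ID, big-endian 32-bit length, payload,
    then a pad byte if the length is odd (IFF keeps chunks even-aligned)."""
    pos = 0
    chunks = []
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4].decode('ascii')
        (length,) = struct.unpack('>I', data[pos + 4:pos + 8])
        chunks.append((chunk_id, data[pos + 8:pos + 8 + length]))
        pos += 8 + length + (length & 1)   # skip pad byte on odd lengths
    return chunks
```

That one shared framing rule is why an animation, a sampled sound, and an
image can all float around the same machine and still be picked apart by
any program.
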
-- 
Peter da Silva, Xenix Support, Ferranti International Controls Corporation.
Work: uunet.uu.net!ficc!peter, peter@ficc.uu.net, +1 713 274 5180.   `-_-'
Home: bigtex!texbell!sugar!peter, peter@sugar.uu.net.                 'U`
Opinions may not represent the policies of FICC or the Xenix Support group.

tom@PHOENIX.PRINCETON.EDU (Thomas C Hajdu) (01/24/89)

please remove me from this group