[comp.society.futures] IEF007ACA: WRONG THOUGHT OR PRE-VOCALIZATION

bzs@BU-CS.BU.EDU (Barry Shein) (11/11/87)

Date: Tue, 10 Nov 87 22:00:17 EST
From: Michael Travers <mt@MEDIA-LAB.MEDIA.MIT.EDU>
To: viv-core@MEDIA-LAB.MEDIA.MIT.EDU, prog-d@MEDIA-LAB.MEDIA.MIT.EDU
Subject: interesting but scary interface technology

I thought of this technique a few years ago, but I was too scared by
the possible repressive uses of it to pursue it, or even mention it
much.  Now I see other people are.

-----------------------------------------------------------------

This message is aimed at you people out there in the Usenet community
who are doing (or know of someone doing) research in the area of
Electromyography and Covert Oral Behavior.

An abstract of what we are doing follows --- if you know of anyone who is
working on something similar, we would most certainly like to get in
touch.

Please contact:
	Dr. Howard I. Thorsheim
	Department of Psychology
	St. Olaf College
	Northfield, MN  55057	USA
	(507)-663-3144

---------------------------- ABSTRACT FOLLOWS -------------------------------

The Possibility of a Thought-Recognition Interface

Walter D. Poxon

Craig D. Rice

Academic Computing Center
St. Olaf College

Howard I. Thorsheim

Department of Psychology
St. Olaf College
Northfield, MN 55057

An ideal for human-computer interaction is approaching the
here-and-now.  To date, interaction between humans and computers has
taken place through keyboards or, more recently, voice recognition
hardware and other specially designed input devices.
Keyboards provide a reasonably efficient means of interacting with a
machine and allow for the entry of very complex commands and data
sets.  Voice recognition systems offer a more natural means of
communication, but have limitations in their ability to recognize
large or specialized vocabularies. The drawback common to these and
all current human-computer interaction systems is that they require
that human thoughts be transformed into overt actions in order to be
recognized by the computer; that is, users must type their thoughts,
or say them, or draw them.  A more natural human-computer interface
would have none of these drawbacks. It would be able to recognize and
respond to covert human behavior rather than requiring overt signals.
Determining the possibility and practicality of such an interface is
one of our current goals.

Through the analysis of electromyographic (EMG) signals of the covert
oral behavior phenomenon [McGuigan and Winstead, 1974], [Thorsheim,
McGuigan, and Davis, 1975], produced while a subject speaks or thinks
preassigned syllables, it is anticipated that unique, reproducible
digital patterns may be identified.  By linking these patterns to the
syllables or words that they represent, a new system of communication
may be built whereby the machine interprets these signals as its
source of input in place of the codes generated by the traditional
keyboard interface.

The current stage of our research is to characterize the EMG signals
generated during covert oral behavior and assess the feasibility of
using currently-available microcomputer resources to process and
interpret these signals using common analog-to-digital conversion
equipment and Fourier analysis techniques.  Long-term goals include
the building of subject-independent libraries of "command patterns"
and the referencing of these libraries to allow for the control of a
text editor without the use of a keyboard.  
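The pipeline the abstract outlines --- digitize the EMG signal, take its
Fourier spectrum, and match it against a library of stored "command
patterns" --- can be sketched roughly as below. This is a minimal
illustration, not the authors' actual system; the sample rate, window
length, and the toy sine-burst "syllables" are all invented for the
example, and a real EMG classifier would need far more careful feature
extraction.

```python
# Sketch of the described approach: FFT-based features from a digitized
# signal window, matched to stored command patterns by nearest distance.
# All signals and patterns here are synthetic placeholders.
import numpy as np

def spectrum(window):
    """Magnitude spectrum of one digitized signal window."""
    return np.abs(np.fft.rfft(window))

def classify(window, library):
    """Return the label of the closest stored command pattern."""
    feat = spectrum(window)
    return min(library, key=lambda label: np.linalg.norm(feat - library[label]))

# Toy "command pattern" library: each syllable is a sine burst at a
# distinct frequency standing in for a characteristic EMG signature.
fs, n = 1000, 256                      # sample rate (Hz), window length
t = np.arange(n) / fs
library = {
    "ba": spectrum(np.sin(2 * np.pi * 40 * t)),
    "da": spectrum(np.sin(2 * np.pi * 80 * t)),
}

# A noisy new observation of the "da" pattern.
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 80 * t) + 0.1 * rng.standard_normal(n)
print(classify(noisy, library))
```

The subject-independence problem mentioned as a long-term goal is
precisely what this naive nearest-pattern matching would not solve:
spectral signatures vary across subjects, so the library lookup alone
would need per-subject calibration.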


-----------------------------------------------------------------------------
Craig D. Rice   Math Department Computer Systems Manager
		Academic Computer Center Systems Programmer

  USMAIL:	St. Olaf College   Northfield, MN   55057
    UUCP:	..{ihnp4,umn-cs}!stolaf!ricec
    AT&T:	Work: (507)-663-3096	Home: (507)-663-2191
		    Data: (507)-663-2191 ("cinta" midnight-6am)

OWENSJ@VTVM1.BITNET (John Owens) (11/12/87)

>From: Michael Travers <mt@MEDIA-LAB.MEDIA.MIT.EDU>
>To: viv-core@MEDIA-LAB.MEDIA.MIT.EDU, prog-d@MEDIA-LAB.MEDIA.MIT.EDU
>Subject: interesting but scary interface technology
>Through the analysis of electromyographic (EMG) signals of the covert
>oral behavior phenomenon [McGuigan and Winstead, 1974], [Thorsheim,
>McGuigan, and Davis, 1975], produced while a subject speaks or thinks
>preassigned syllables, it is anticipated that unique, reproduceable
>digital patterns may be identified.

I remember reading a science fiction story several years ago (in
Year's Best S.F. 1955 or something like that) in which "Big Brother"
used such a recognition device to read the thoughts of the main
character without her knowing it.  The story concerned a woman
who navigated a ship through hyperspace (or somesuch) by psychic
means.  In the course of doing this, she fell in love with another
psychic navigator, and they created their own private world (which
was part of the navigation process, but took on special meaning for
them).  When she found out that they (I don't remember who "they"
were) had been monitoring her thoughts and invading the privacy
of their communications, she was crushed.  Anyway, at the time I
thought that reading thoughts by detecting "subvocalizations" was
purely an invention of the author, but perhaps not.  Does anyone
remember the story and the year so we can compare it to the
papers referenced above?  Maybe the researchers got their idea
from this story.

riddle@woton.UUCP (11/12/87)

I agree that wiring people up in such a way as to detect "covert oral
behavior" would constitute "interesting but scary interface
technology."  But how much of thought do you really believe is expressed
in syllables?  It's my opinion that only some thought is verbally (let
alone orally) based.  The Sapir-Whorf hypothesis notwithstanding, I
think that on introspection most of us would realize that there are
plenty of ideas, experiences, fantasies and other thoughts which we can
think but can't verbalize.  Visual and tactile images would be simple
examples.  Even linguistic thinking doesn't always map successfully
into words -- how many times have you had a thought "on the tip of your
tongue" but been unable to verbalize it? 

Furthermore, some of the "scarier" potential applications of this
technology might be easy to thwart, given a concerted effort to do so. 
If communities of speakers have managed to come up with cant and jargon
to avoid being understood by outsiders in the past, it should be even
easier for an individual to purposely obfuscate the language used for
his or her internal monologue. 

Reading the abstract you posted, one thing jumps out at me: I doubt
that the system described would be more "natural" than keyboard or
speech interfaces because it would avoid the drawback "that human
thoughts be transformed into overt actions in order to be recognized by
the computer."  In fact I suspect that at best it would contain all of
the problems of current speech recognition systems -- limited
vocabulary and an ability to handle only an artificial subset of speech
behavior.  If this were the case, in order to use the interface people
would have to *overtly* control their "covert oral behavior."

--- Prentiss Riddle ("Aprendiz de todo, maestro de nada.")
--- Opinions expressed are not necessarily those of Shriners Burns Institute.
--- riddle@woton.UUCP  {ihnp4,harvard}!ut-sally!im4u!woton!riddle

riddle@woton.UUCP (Prentiss Riddle ) (11/13/87)

OWENSJ@VTVM1.BITNET (John Owens) writes:
> I remember reading a science fiction story several years ago (in
> Year's Best S.F. 1955 or something like that) in which...

I read a similar story, but the twist in this one was that the
hyperspace pilot carried on a stormy dialogue with another crew member
whom he never saw, who turned out to be the personification of his own
subconscious "subvocalizations".  This was his employers' solution to
keeping pilots sane in the solitude of deep space. 

I suspect that this is well-trodden ground in sf. 

--- Prentiss Riddle ("Aprendiz de todo, maestro de nada.")
--- Opinions expressed are not necessarily those of Shriners Burns Institute.
--- riddle@woton.UUCP  {ihnp4,harvard}!ut-sally!im4u!woton!riddle