[comp.society.futures] Computer Interfaces

scratch@unix.cis.pitt.edu (Steven J Owens) (01/18/90)

A lot of the recent predictions in this newsgroup have been about the
interface between the personal computer and the user - not just the software
interface, but the hardware as well.  Discussions have covered voice I/O,
dataglove & eyephones (virtual reality), touch screens, and more.

	Something I'm curious about, and I hope the more technically minded
out there can advise me on, is the possibility of the sf-dream
"mindlink" technology.  I'm not so optimistic as to imagine the fantastic
full-bandwidth visual/kinesthetic two-way link of various sf novels, but
I think there is a strong possibility for a limited version - perhaps to be
expanded as time goes by.

	Here's a rough outline of what I have in mind:  Years and years ago
I remember seeing films in high school and later on TV about biofeedback
training involving hooking an EEG up to a computer, thence to a toy electric
train.  The subject would sit there with the EEG and think "stop" and the
train would stop.  He would think "go" and the train would go.

	Since this is possible, why isn't it being explored as a possible
replacement for the keyboard?  Voice is imperfect, keyboard and touchscreen
are slow, why not an EEG?

	Of course, I understand, there is a BIG gap to span between "stop/go"
and a full range of possibilities.  But I think these could be overcome.  To
start with, I see two basic directions.  

A)  Research and develop a helmet-style monitor that uses SQUID technology
	(far more sensitive than an EEG) to monitor the electromagnetic
	activity of the user's brain and translate those synaptic sparks into
	electronic impulses intelligible to the computer.

B)  Using similar technology, build a basic interface that recognizes (or
	will "learn" to recognize, with suitable practice) a few hundred
	basic commands (words or images, when concentrated upon by the
	user).

	Method A) would be the true way to attain that sf dream mind-
link.  It would also be damn hard, I am sure, to develop the software that
interprets raw brain activity as computer-usable information.  Still, the
spinoffs and reverse-engineering possibilities (for output, for pure
research, and for medical research on nervous disorders and similar
problems) could be tremendous.

	Method B) would be simpler and, though not as impressive as method
A), much more feasible.  In B), instead of trying to understand the
information received by scanning the EM output of the brain, the point would
be to write software that recognizes the patterns of EM activity associated
with concentrating on various keywords, images, or concepts.

	There would probably have to be some AI programming involved to
allow the software to recognize minor variations in patterns over time.  The
job would be simplified by using a unique set of patterns for each user, and
"teaching" the program a new set of patterns each time a new user starts
to use it (if the method became common, people would likely carry their set
of patterns on a disk or similar portable medium).
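To make the idea concrete, here is a crude sketch of what such per-user
pattern matching might look like in software.  Everything in it is an
assumption for illustration, not real BCI practice: the 256 Hz sampling
rate, the choice of frequency-band power as the feature, and the
nearest-template decision rule are all made up.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz -- assumed digitizing rate for the EEG signal

def band_powers(signal, bands=((4, 8), (8, 13), (13, 30))):
    """Feature vector: mean spectral power in three conventional
    EEG bands (theta, alpha, beta)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

class TemplateRecognizer:
    """A per-user command 'vocabulary': one stored feature template
    per command, matched by nearest-neighbor distance.  The template
    dictionary is exactly the sort of thing a user might carry
    around on a disk."""

    def __init__(self):
        self.templates = {}  # command name -> feature vector

    def teach(self, command, signal):
        """Record the user's EM pattern for one command."""
        self.templates[command] = band_powers(signal)

    def recognize(self, signal):
        """Return the taught command whose template is closest."""
        feats = band_powers(signal)
        return min(self.templates,
                   key=lambda c: np.linalg.norm(self.templates[c] - feats))
```

A real system would need far richer features and an adaptive matcher
to track the drift in patterns over time, but the teach/recognize
split above is the basic shape of method B.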

	I'm not sure about this, but I suspect that users could use patterns
based on feelings, emotions, and similar ephemeral items.  After a while,
the interface would function so smoothly that the user wouldn't have to
consciously "think" a command, just as a good typist doesn't consciously
select which key to strike.  That very smoothness would call for a
"confirmation" mode, in which each command produces a request for
confirmation, to prevent mental garbage from spilling over into the
work.
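The confirmation mode is simple enough to sketch independently of any
recognizer.  This is purely hypothetical filtering logic: assume some
upstream device yields a stream of recognized command names, and that
the user confirms or rejects each one by "thinking" a literal "yes" or
"no".

```python
def run_confirmed(commands):
    """Pass a stream of recognized 'thought' commands, firing only
    those the user explicitly follows with a 'yes'.  Unconfirmed
    items -- the 'mental garbage' -- are silently dropped.
    Returns the list of commands that actually fired."""
    fired = []
    it = iter(commands)
    for cmd in it:
        if cmd in ("yes", "no"):
            continue  # stray answer with nothing pending
        if next(it, None) == "yes":  # the very next pattern must confirm
            fired.append(cmd)
    return fired
```

A sketch like this assumes the user answers every prompt; a stray
command immediately followed by another command would eat the second
one as a failed confirmation, which a real interface would have to
handle more gracefully.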

	Much like voice I/O, at first there would be a limited vocabulary,
but as research progressed, several hundred different commands would be
included in the basic vocabulary (including basic cursor movement and the
alphanumeric set, perhaps even meta-keys).  Paired with a set of eyephones
and some sort of virtual reality software, it might prove to be the best
interface possible (short of method A) above).

	How about some hard data from the tech types here?  I'm just a
communications major, so while I may have a good idea of how people think
and what they'd want, I have no idea as to the technical feasibility of
this.  How much data would have to be processed for method B?  How
sophisticated (and expensive) would the EEG gear have to be?  Would an EEG
be discriminating enough to serve our purposes, or would a SQUID be
necessary?  How much software power are we talking about to discern
patterns and interpret input?

	For output, does anybody know about the research done on hooking up
cameras to the optic nerve?  This might or might not be a good idea - is it
possible to influence the nerves indirectly (by induction, for example)?

Steven J. Owens    |   Scratch@Pittvms    |   Scratch@unix.cis.pitt.edu

"There's a long hard road and a full, hard drive / And a sector there where
 I feel alive / Every bit of every byte / Is written down once on the night
 / Networking, I'm user friendly..."

	-- Warren Zevon, Networking, Transverse City

clw@headcrash.Berkeley.EDU (Nobody you know) (01/18/90)

In article <21686@unix.cis.pitt.edu> scratch@unix.cis.pitt.edu (Steven J Owens) writes:
>
>	Here's a rough outline of what I have in mind:  Years and years ago
>I remember seeing films in high school and later on TV about biofeedback
>training involving hooking an EEG up to a computer, thence to a toy electric
>train.  The subject would sit there with the EEG and think "stop" and the
>train would stop.  He would think "go" and the train would go.
>
>	Since this is possible, why isn't it being explored as a possible
>replacement for the keyboard?  Voice is imperfect, keyboard and touchscreen
>are slow, why not an EEG?

	The subject of the experiment described above was not thinking
'stop' or 'go', he was changing the frequency of his brainwaves from
beta waves to alpha waves.  This is a change of brain-state, not thought
content.  The detection technology is closely equivalent to detecting
a change in sound from a high tone to a lower tone.
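That "high tone versus low tone" distinction is in fact easy to detect in
software, which is why the toy-train demo works.  As a sketch (the 256 Hz
sampling rate and the band edges are assumptions for illustration), a
one-bit brain-state detector just compares spectral power in the alpha
and beta bands:

```python
import numpy as np

SAMPLE_RATE = 256  # Hz -- assumed digitizing rate

def brain_state(signal):
    """Classify one window of EEG as 'alpha' (relaxed) or 'beta'
    (alert) by comparing total spectral power in the two bands --
    the same one-bit state change the train demo exploits."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    alpha = spectrum[(freqs >= 8) & (freqs < 13)].sum()
    beta = spectrum[(freqs >= 13) & (freqs < 30)].sum()
    return "alpha" if alpha > beta else "beta"

# e.g. map 'alpha' -> stop the train, 'beta' -> go
```

Note that this yields exactly one bit per window, which is the point of
the objection: it distinguishes brain states, not thoughts.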
	I have not heard that a means has been developed to unambiguously
distinguish between thoughts, or even between general categories of
thoughts.  Should such a means be developed, it could conceivably be refined
into an interface.  Detecting one's integrated brainwave would be less
useful, since one has far more control over one's voice, and vocal control
is faster, more reliable, cheaper, and easier to use, imperfect as it is...

	--clw@ocf