[sci.virtual-worlds] Computer thought translation/pickup

robertc@sco.COM (Robert Chansky) (11/29/90)

        This is a request for information.  Someone here posted, not long
ago, that a simple form of thought translation was possible via EEG equipment
hooked to computer software.  If someone has any more information about this
issue I would appreciate hearing from them.  If the equipment isn't too
expensive, I'd like to fiddle with it and see if I can come up with anything,
using a neural net approach.


-- 
####  #       #   #    # #    # # #    #  ###   #  ##    #### # # ### ## # # ###
#   # #      # #  ##   # ##   # # ##   # #   #  # #      #    # # #   # # #   #
#   # #     #   # # #  # # #  # # # #  # #      #  ##    ##   # # ##  ##  #   #
####  #     ##### #  # # #  # # # #  # # # ###  #    #   #    # # #   # # #   #
#     #     #   # #   ## #   ## # #   ## #   #  #    #   #    # # #   # # #   #
#     ##### #   # #    # #    # # #    #  ###   #  ##    ####  #  ### # # #   #
Robert Chansky, smq@ucscb.ucsc.edu, robertc@sco.com, uunet!ucscc!ucscb!smq, etc.

chris@ug.cs.dal.ca (Chris Robertson) (12/01/90)

In article <11924@milton.u.washington.edu> robertc@sco.COM (Robert Chansky) writes:
>
>        This is a request for information.  Someone here posted, not long
>ago, that a simple form of thought translation was possible via EEG equipment
>hooked to computer software.  If someone has any more information about this
>issue I would appreciate hearing from them.  If the equipment isn't too
>expensive, I'd like to fiddle with it and see if I can come up with anything,
>using a neural net approach.



        I don't know if this is what you're thinking of, but I've
        recently read of something in this vein.

        Seems that at some southern US school (can't remember which)
        someone wanted a hands-free flight simulator.

        Well, the idea is so simple, you'll groan.  Certain classes
        of neurons in the early stages of the optic nerve, just 
        behind the eyeball and beneath the temple, are already
        sympathetic to a frequency somewhere around 13 Hz.

        Now the EEG alone is a mess - a DSP nightmare for analysis.
        But what they did is mount four fluorescent bulbs around
        the edges of the display and had them flickering at 13 Hz
        (or whatever the exact figure is) - anyway, this reinforced
        the 13 Hz resonance activity already occurring in the 
        optic nerve.  The signal was then quite easy to pick out
        against the background noise of the EEG.
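A minimal sketch of that pickup in software (the sample rate, signal strengths, and band edges are my assumptions, not figures from the article): a weak 13 Hz component buried in broadband EEG-like noise still stands out cleanly in a plain FFT power spectrum once the flicker reinforces it.

```python
# Hypothetical sketch (not the original researchers' code): picking a
# 13 Hz photic-driving component out of a noisy EEG-like trace.
import numpy as np

fs = 256                        # sample rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)   # 4 seconds of signal

# Simulated EEG: broadband noise plus a weak 13 Hz resonance
# reinforced by the flickering bulbs.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, t.size) + 0.8 * np.sin(2 * np.pi * 13 * t)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)

# Compare power in a narrow band around 13 Hz with the mean
# background power across the rest of the EEG range.
band = (freqs > 12) & (freqs < 14)
background = (freqs > 1) & (freqs < 40)
ratio = spectrum[band].mean() / spectrum[background].mean()
print("13 Hz band stands out by a factor of %.1f" % ratio)
```

Even with the sine at less than the noise's standard deviation, the narrow-band power dwarfs the background, which is presumably why the trick made the signal "quite easy to pick out."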

        Then apparently with a bit of training aided by a simple
        bio-feedback display, one can learn to grossly 'control'
        the strength of these resonances, as well as shift the
        frequency distribution within a very narrow band around 13 Hz.
        Like learning a new muscle.  Ever try to wiggle your ears?
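The two quantities such a bio-feedback display would have to show - resonance strength and peak frequency within the narrow band - could be estimated per window along these lines (a sketch; the function name, sample rate, and band edges are my own assumptions):

```python
# Hypothetical per-window feature extraction for a bio-feedback
# display (names and parameters invented for illustration).
import numpy as np

def resonance_features(window, fs=256, lo=11.0, hi=15.0):
    """Return (strength, peak frequency) of the ~13 Hz resonance."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, 1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    strength = spectrum[band].sum()          # total power in the band
    peak = freqs[band][np.argmax(spectrum[band])]  # dominant frequency
    return strength, peak
```

Feed this one- or two-second windows, redraw a bar for strength and a needle for peak frequency, and that pair is what the user practices moving - the "new muscle."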

        Anyway, with two parameters of the signal now user-controllable
        well enough for the computer to tell high from low on each,
        the four combinations could be translated into one of four
        signals.

        Yep, you guessed it - left, right, up, and down.  EXTRA
        cool IMHO.
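The mapping itself is just two thresholds (the threshold values below are invented for illustration): split each controllable parameter at a cut-off and the four combinations name the four directions.

```python
# Hypothetical decision rule: two user-controlled parameters, each
# split at a threshold, give four combinations mapped to the four
# directions.  Threshold values are placeholders, not from the article.
def decode(strength, peak, strength_thresh=1000.0, freq_thresh=13.0):
    if strength < strength_thresh:
        return "left" if peak < freq_thresh else "right"
    else:
        return "up" if peak < freq_thresh else "down"
```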

        So there you go - it's gross, but it's a start!


                                - Chris Robertson
                                  chris@ug.cs.dal.ca

FC137501@YSUB.YSU.EDU (Paul M. Mullins) (12/02/90)

>        This is a request for information.  Someone here posted, not long
> ago, that a simple form of thought translation was possible via EEG equipment
                             -------------------

> Robert Chansky, smq@ucscb.ucsc.edu, robertc@sco.com, uunet!ucscc!ucscb!smq   .

I believe the original posting (I don't recall the author) did imply
"thought translation"; however, my posts consisted mainly of pointers to the
literature on "evoked potentials" (and a few specific references for J. Vidal).

NONE of my research indicated the ability to perform what might (reasonably)
be called thought translation.  Rather, certain specific events caused a
reaction which could be measured (in this case by EEGs), hence the name
evoked potentials.  Interpretation of the measured response is, to my
knowledge, an open question.  It could be used for simple interface tasks,
such as controlling the display rate of text on the screen, but not to
select an icon - unless you cycle through each possibility, providing
semantic feedback, and wait for the user to "react" to one.  This might
be useful for the physically impaired, but better interface tools exist
already (for most of us).
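That cycle-and-react scheme can be sketched as a scanning loop (everything here is my illustration: `read_response` stands in for whatever evoked-potential measurement is available, and the threshold is arbitrary):

```python
# Hypothetical scanning interface: step through the choices, present
# each one, and select it when the measured evoked response crosses
# a threshold.
import itertools

def scan_select(choices, read_response, threshold=0.5):
    """Cycle through choices until the user's response exceeds threshold."""
    for choice in itertools.cycle(choices):
        # Present `choice` to the user here (e.g. highlight it on
        # screen), then measure the response evoked by that event.
        if read_response(choice) > threshold:
            return choice
```

This also makes the slowness concrete: selection time grows with the number of choices plus however long the evoked response takes to register, which is why better interface tools already exist for most users.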

My favorite comparison is voice interfaces, which are becoming quite good
but are still not widely used.

But enough cold water, I look forward to learning that you have been more
successful.  I will be happy to provide more information on request.

Paul M. Mullins
Dept. of Math. & Comp. Sci.               (216) 742-3796
Youngstown State University               mullins@macs.ysu.edu
Youngstown, OH  44555                     FC137501@YSUB