[sci.virtual-worlds] Question about Harmonization of Computer/Eye Electronic Impulses

a1860@mindlink.UUCP (B.F. Painter) (03/26/91)

        An inquiring neophyte mind would like to know. I have been unable 
to find any documentation on this subject and would appreciate any 
suggestions and directions:

        IMHO one of the main stumbling blocks of VR is the imaging quality.
Resolution and graphics are frequently cited as being poor, cartoonish, and
unable to operate in "real time," or at least 30 fps. The user is required to
suspend disbelief consciously and make the imagination leap.  My question is:
has any research been undertaken to look directly into linking computer
electric impulses with the eye's own electric impulses? Or, taking this
further, the body's?  Granted, a translator would probably be required.

        As mentioned earlier, I was unable to find any documentation addressing
this issue, so any help would be appreciated. Thanks.

isr@rodan.acs.syr.edu (Michael S. Schechter - ISR group account) (04/02/91)

In article <5282@mindlink.UUCP> a1860@mindlink.UUCP (B.F. Painter) writes:
>research been undertaken to look directly into linking computer electric
>impulses with the eye's own electric impulses? Or, taking this further, the body's?
>Granted, a translator would probably be required.

As far as I know (I'm a computer engineer, not a neuroscientist, but I do
work for/with them, studying just that field), there are three layers of
biological processing before the signal hits the brain.

1. Sensor layer: retinal sensors pick up light signals, convert to
                 impulses

2. Retinal cross-stimulation: The retina cells "horizontally" excite and
                 inhibit each other to make a muddled picture of things

3. ???         : I don't know what this layer is called, but the retinal
                 neuron cells pass their signals on to the next layer which
                 then sorts 'em all out and passes signals into the optic
                 nerve to the brain.
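
The "horizontal" excite/inhibit behaviour in layer 2 is what neuroscientists
call lateral inhibition, the mechanism Hartline and Ratliff worked out on the
horseshoe crab. A toy one-dimensional sketch of the idea (my own illustration,
not ISR's model; the constants k and radius are made up, not measured):

```python
# Toy 1-D lateral inhibition in the style of Hartline & Ratliff's
# horseshoe-crab work.  k (inhibition strength) and radius (how far
# neighbours reach) are made-up illustration values, not measured ones.
def lateral_inhibition(stimulus, k=0.1, radius=2):
    """Each sensor's response = its own excitation minus a fraction
    of its neighbours' excitation (one feed-forward pass)."""
    n = len(stimulus)
    response = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neighbours = stimulus[lo:i] + stimulus[i + 1:hi]
        response.append(stimulus[i] - k * sum(neighbours))
    return response

# A step edge in light intensity: the response overshoots on the bright
# side and undershoots on the dark side (Mach bands).
print(lateral_inhibition([1.0] * 5 + [2.0] * 5))
```

Fed a step edge in intensity, this exaggerates the edge rather than blurring
it, which is the contrast enhancement the retina buys with that wiring.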

At ISR, last year some people created a model of the 1st and 2nd layers of
a horseshoe crab eye (only 1000 retina sensors). This used a quarter of a CM-2.
Even this, I believe, didn't generate any of the pattern going towards the
brain, but that's from a lack of understanding, not CPU power.
The human eye has over 100 million sensor cells and is much more
interconnected than the horseshoe crab eye.
So we're a long way away....
But deciphering sensory coding for both vision and audition IS
being worked on.
Mike the ex-lurker
-- 
InterNet:Mike_Schechter@isr.syr.edu  BITNET: SENSORY@SUNRISE

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) (04/03/91)

a1860@mindlink.UUCP (B.F. Painter) writes:

>                                                  ...My question is: has any
>research been undertaken to look directly into linking computer electric
>impulses with the eye's own electric impulses? Or, taking this further, the body's?
>Granted, a translator would probably be required.

>        As mentioned earlier, I was unable to find any documentation addressing
>this issue.

   The reason that you haven't been able to find anything is that there has
been little direct research into precisely what you are talking about.

   However, there has been quite a bit that is related, and it would provide
enough information to give some pretty good estimates.  First, as mentioned in
another thread, reading up on neurophysiology would probably be a great place
to start.  There has been a lot of work on elucidating not only the functional
subdivisions of other parts of the brain, but *especially* the visual and
auditory tracts (and, when other senses are involved, those as well...  for
example, there has even been work done studying the electric field sensors of
electric fish).

   Now...  there has been work on interfacing directly with nerves, but I am
unclear on who is doing it and how far they have progressed.  A fairly
well-substantiated rumor reached me that quite a bit of work has been done
by a company (in the northwestern U.S., I think) on interfacing nerves with
microchips by encouraging them to grow through holes and connect with the
pads on the chip.  The rumor also said that they have been surprisingly
successful.  I have no further info at present...  although I am attempting
to find out more.

   Connect these two together and you have the basis for some really neat
experiments.

   Erich
             "I haven't lost my mind; I know exactly where it is."
     / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
    { Honorary Grad. Student (Math) }--> Internet E-mail: <erich@cs.pdx.edu>
     \  Portland State University  /        Phone #:  (503) 246-6120

mccool@dgp.toronto.edu (Michael McCool) (04/03/91)

erich@eecs.cs.pdx.edu (Erich Stefan Boleyn) writes:

>   Now...  there has been work on interfacing directly with nerves, but I am
>unclear on who is doing it and how far they have progressed.  A fairly
>well-substantiated rumor reached me that quite a bit of work has been done
>by a company (in the northwestern U.S., I think) on interfacing nerves with
>microchips by encouraging them to grow through holes and connect with the
>pads on the chip.  The rumor also said that they have been surprisingly
>successful.  I have no further info at present...  although I am attempting
>to find out more.

Well, if you read the IEEE Transactions on Biomedical Engineering, you will see
that articles on multi-contact neural probes are very popular just at the
moment.  This is not as interesting as growing neurons into a chip.  The
current state of the art is putting many contacts on a single probe fashioned
out of a sliver of silicon, so that nearby neurons can be recorded; this
should allow research into small "circuits" rather than just recordings
of single neurons followed by guesses about what circuits they could
possibly be a part of :-)

How does this relate to direct neural interfacing?  I don't know, but
one COULD imagine an array of these things inserted into a nerve bundle,
which would give a three-dimensional grid of points for recording and 
stimulation.  And the scale wouldn't necessarily be that crude.  Of course,
the main problem is the research that still has to be done to understand
the signals we would record from such an array, and the kind of stimulation
that would make sense to the system.
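
To make the geometry concrete, here is a purely hypothetical sketch of such an
array -- an 8 x 8 grid of probes with 16 contacts per shaft -- treated as one
four-dimensional block of samples. All sizes and names are invented for
illustration, not taken from any real device:

```python
import numpy as np

# Purely hypothetical geometry: an 8 x 8 grid of silicon probes, each
# carrying 16 contacts down its shaft, sampled at 10 kHz for 1 second.
# Every number and name here is invented for illustration.
n_rows, n_cols, n_contacts, n_samples = 8, 8, 16, 10_000

rng = np.random.default_rng(0)  # noise stands in for real recordings
recordings = rng.standard_normal((n_rows, n_cols, n_contacts, n_samples))

# One snapshot of the whole three-dimensional grid at a single instant:
frame = recordings[:, :, :, 0]      # shape (8, 8, 16)

# The full time course at one point inside the bundle:
trace = recordings[3, 5, 10, :]     # shape (10000,)

print(frame.shape, trace.shape)
```

Even this modest sketch is 10,240 simultaneous channels, which hints at why
interpreting such recordings, not gathering them, is the hard part.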

Somehow, though, it all seems very crude, inserting PHYSICAL objects into
a neural bundle.  Guess I'm just a soft-palmed computer engineer and
will never have the guts to handle the bloody art of wetware hacking.

Michael McCool@dgp.toronto.edu

hibbett@prcs3.decnet.philips.be (04/03/91)

>From: a1860@mindlink.UUCP (B.F. Painter)
>
>[Deleted] .. My question is: has any
>research been undertaken to look directly into linking computer electric
>impulses with the eye's own electric impulses? Or, taking this further, the body's?
>Granted, a translator would probably be required.
>

Several years ago I saw a report (on a show called Tomorrow's World) about an
experiment into giving a totally blind person sight. From what I remember,
a blind man was hooked up to a computer via a multiway socket implanted on the
back of his head. By some form of stimulation, a small matrix of light dots
(approx 5 x 8) could be caused to "appear". The demonstration showed images
in the form of single numbers being "projected".

The computer was connected directly to some part of this guy's visual system,
though where exactly I'm not sure.
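
A 5 x 8 matrix is just enough for one character cell. As a sketch (the glyph
pattern and the function name are my own invention, not details of the actual
experiment), driving such a phosphene grid reduces to choosing which
electrodes to pulse:

```python
# Hypothetical driver for a 5-wide x 8-tall phosphene grid like the one
# described above.  The glyph and the function name are invented for
# illustration, not taken from the actual experiment.
DIGIT_7 = [
    "#####",
    "....#",
    "...#.",
    "..#..",
    ".#...",
    ".#...",
    ".#...",
    ".#...",
]

def electrodes_to_fire(glyph):
    """Translate a glyph into (row, col) pairs: the contacts a driver
    would pulse to make those dots 'appear'."""
    return [(r, c)
            for r, row in enumerate(glyph)
            for c, ch in enumerate(row)
            if ch == "#"]

pairs = electrodes_to_fire(DIGIT_7)
print(len(pairs), "electrodes active out of", 5 * 8)
```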

Apologies for the vague nature of this report - I hope that it stimulates
someone's memory as to the exact nature of the system being shown. I wouldn't
mind finding out more, though I doubt if anyone would want major surgery
just to have a built-in monitor!

Mike.
--------------------------------------------------------------------------------
Mike Hibbett                      |   Philips Radio Communication Systems Ltd
Tel: INT + 44 223 358985 Ext.3310 | St Andrews Road, Cambridge, CB4 1DP, England
----------------------------------+---------------------------------------------
Philips DECnet:  PRCS3::HIBBETT   | EUnet:  hibbett@prcs3.decnet.philips.be
HAM   :          G6COQ@GB7DDX     | TCP/IP:  g6coq@g6coq.ampr.org
--------------------------------------------------------------------------------

chris@ug.cs.dal.ca (Chris Robertson) (04/05/91)

Besides an actual silicon-nerve connection, I also recall a Scientific
American story in the early eighties about a young physicist at MIT who,
quite accidentally, discovered that oscillating micro-ampere currents applied
to the temples produce an effect similar to what happens if you press your
fingers into your eyes hard for a couple of minutes.  It produces a sort of
"floating linoleum" pattern (at least, it does for me).

When you do it with pressure on the eyeballs, the pattern is a result of the
piezoelectric discharges of the ocular fluid stimulating the retinal cells
to fire.  Don't do it for more than 2 or 3 minutes .. it can damage your eyes.

Anyway, being inquisitive, this fellow began to experiment with the waveform
and frequency of the current.  With a computer and a waveform synthesis unit,
he apparently succeeded, within a year or two, in producing some
basic geometrical shapes in the viewer's field of vision.  The figures are more
or less "superimposed" on what you're already looking at, and I recall him saying
that, for this reason, it was effective to wear dark glasses while viewing.
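
The waveform-synthesis side of such an experiment is easy to sketch in
software. A plain sine is used here purely as a placeholder, since the story
doesn't say what waveforms were actually tried, and every parameter value
below is illustrative:

```python
import math

# Sketch of a waveform generator for an oscillating micro-ampere
# current.  A plain sine is a placeholder for whatever waveforms the
# experimenter actually tried; all parameter values are illustrative.
def synthesize(freq_hz, amp_ua, duration_s, sample_rate=1000):
    """Return duration_s seconds of an oscillating current waveform
    (micro-amperes) as a list of samples."""
    n = int(duration_s * sample_rate)
    return [amp_ua * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# Sweeping frequency the way the story describes experimenting:
for f in (5, 10, 20):
    wave = synthesize(freq_hz=f, amp_ua=50, duration_s=0.1)
    print(f, "Hz:", len(wave), "samples")
```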

But that was the last of that I ever heard.  What became of it?  Anyone know?

I dunno, maybe it gave you eye cancer, or something ....


 - chris

szabo@RELAY.CS.NET (Nick Szabo) (04/07/91)

In article <5282@mindlink.UUCP> a1860@mindlink.UUCP (B.F. Painter) writes:
>
>
>...    IMHO one of the main stumbling blocks of VR is the imaging quality.
>Resolution and graphics are frequently cited as being poor, cartoonish, and
>unable to operate in "real time," or at least 30 fps. The user is required to
>suspend disbelief consciously and make the imagination leap.  My question is:
>has any research been undertaken to look directly into linking computer
>electric impulses with the eye's own electric impulses?

This seems to assume that the bottleneck is in the peripheral resolution.
The current limits of 3D graphics are set by computing power, not
the display.  Since CPUs seem to be getting faster faster than
displays are getting sharper, the display may eventually become the
bottleneck.
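
A back-of-the-envelope number for the display side (resolution and frame rate
chosen purely for illustration):

```python
# Illustrative numbers only: pixels a renderer must produce per second
# to hit "real time" rates on a modest display.
width, height, fps = 640, 480, 30
pixels_per_second = width * height * fps
print(pixels_per_second)  # 9216000 -- roughly nine million pixels/second
```

Filling (let alone shading) on that order of pixels per second is why the
computing side, not the display, is where today's systems run out of steam.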

Even then, IMHO visual displays and the human eye will be superior to a
more direct hookup that delivers impulses that the human brain is not 
optimized to process.  This may be true for other sensory input as well.


-- 
Nick Szabo                      szabo@sequent.com
"If you want oil, drill lots of wells" -- J. Paul Getty
The above opinions are my own and not related to those of any
organization I may be affiliated with.