[comp.ai.neural-nets] Cyberspace Implementation Issues

jdb9608@ultb.UUCP (J.D. Beutel ) (10/17/88)

In article <10044@srcsip.UUCP> lowry@srcsip.UUCP () writes:
>
>
>There's been a lot of discussion recently on how something (kind of) like 
>c-space might be implemented.  The conventional wisdom seems to be that 
>you'd need super-high res color monitors and a graphics supercomputer
>providing real-time images.  
>
>It seems to me that kind of equipment would only be needed if you were
>going to funnel all the info into an eye.  I recall reading somewhere
>that the nerves behind the retina do preprocessing on images before
>sending the data down the optic nerve.  If you could "tee" into the
>optic nerve, it seems like you could feed in pre-digested data at
>a much lower rate.  
>
>Apologies if this idea/topic has been previously beaten to death.


Beaten to death?  Nonsense!

I've heard a lot about neural networks, artificial retinas in particular.
Research is producing, on the dry side, theories about how machines
can see, and conversely, on the wet side, theories about how we ourselves see.
All the theories I've heard concur that the neurons which react
directly to light feed, in groups, into other neurons which react
to higher-order features like lines and dots and movement.
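To make the wiring idea concrete, here's a toy sketch in Python (my own
illustration, not anybody's actual model -- the grid size and feature are
made up): overlapping groups of "photoreceptors" feed units that respond
to a higher-order feature, in this case a vertical line.

    def photoreceptors(image):
        """Raw light levels, one value per receptor."""
        return image  # identity here; real receptors adapt, compress, etc.

    def vertical_line_unit(patch):
        """Fires when the center column of a 3x3 patch outshines its neighbors."""
        left   = sum(row[0] for row in patch)
        center = sum(row[1] for row in patch)
        right  = sum(row[2] for row in patch)
        return center - (left + right) / 2.0

    def line_layer(image):
        """Each unit pools a 3x3 group of receptors -- the groups overlap."""
        h, w = len(image), len(image[0])
        return [[vertical_line_unit([row[x:x+3] for row in image[y:y+3]])
                 for x in range(w - 2)]
                for y in range(h - 2)]

    # A bright vertical stripe:
    img = [[0, 9, 0, 0],
           [0, 9, 0, 0],
           [0, 9, 0, 0]]
    print(line_layer(photoreceptors(img)))  # strong response at the stripe

Note that each receptor is read by several line units at once, which is
where the next point comes from.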

But while I think the resulting information is more useful,
I'd also guess that there is more of it than there was of the
raw information it was derived from.

For example, take the silicon retina that some company (I don't remember
the name) is working on with Carver Mead: every 6 light-sensitive neurons
are sampled as a group by 3 edge-sensitive neurons (up:down, (PI/4):(5PI/4),
and (3PI/4):(7PI/4)).  However, the light-sensitive neurons are arranged in
a hexagonal tessellation such that each neuron is part of 3 hexagons.
Count it out: each hexagon of 6 shared receptors drives 3 edge units, and
since each receptor belongs to 3 hexagons there is roughly one hexagon for
every 2 receptors.  Therefore, as the number of light-sensitive neurons
increases, the ratio of edge-sensitive to light-sensitive neurons
approaches 3/2.  Additionally, there are other higher forms, like dots and
spots and motion in various directions, that will all be using those same
light-sensitive neurons as input.
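If you don't trust that counting argument, here's a quick Python check.
This is an idealized flat patch of n-by-n hexagons (nothing to do with the
real chip's layout): build the hexagons, dedupe the shared corner sites,
and watch the edge-unit:receptor ratio climb toward 3/2 (boundary effects
keep small patches below it).

    import math

    def hex_ratio(n):
        """Ratio of edge units (3 per hexagon) to unique receptor sites
        (hexagon vertices, shared by up to 3 hexagons) in an n x n patch."""
        verts = set()
        for q in range(n):
            for r in range(n):
                cx = 1.5 * q                      # flat-top hexagon centers
                cy = math.sqrt(3) * (r + q / 2.0)
                for k in range(6):                # 6 corners per hexagon
                    a = math.pi / 3 * k
                    verts.add((round(cx + math.cos(a), 3),
                               round(cy + math.sin(a), 3)))
        return 3 * n * n / float(len(verts))

    for n in (2, 5, 20, 100):
        print(n, round(hex_ratio(n), 3))  # tends toward 1.5 as n grows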

That's why I think that "pre-digested data" might be 10 times more massive
than the raw visual input.  Of course, one could try to digest the data
further, transmitting boxes and circles and motion paths as gestalts
instead of transmitting the lines and corners that make them up.
But the further you digest the data, the deeper into the brain you must go.
Pixels make up lines; lines make up corners; lines and corners make up
squares and triangles; squares and triangles make up the picture of a house.
The theories I've heard agree that we are all born with the neurons
pre-wired (or nearly so) to recognize lines, but I've heard of none
that suggests we are pre-wired with neurons that recognize a box with
a triangle on top.  Instead, we've learned to recognize a "house" because
we saw a lot of them when we were young.  The problem is that
the way I learn "house" might be different from the way you learn "house."
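Here's a hypothetical toy in Python of what I mean (the names and tables
are mine, not from any theory): the low-level detectors are fixed, but the
top level is a learned table, so two people's "house" circuits can disagree
about the very same scene.

    # Low levels fixed ("pre-wired"); the top level is a learned lookup
    # that could differ from person to person.  All names are made up.

    learned_me  = {"house": {"square", "triangle"}}
    learned_you = {"house": {"square", "triangle", "chimney"}}

    def recognize(scene, learned):
        """Report every learned concept whose parts are all in the scene."""
        return [name for name, parts in learned.items() if parts <= scene]

    scene = {"square", "triangle"}        # output of the pre-wired layers
    print(recognize(scene, learned_me))   # ['house']
    print(recognize(scene, learned_you))  # []  -- your 'house' needs more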

So a video screen is a video screen to two different people, but a
"tee into the optic nerve" would have to be very different for two different
people, depending on how far back into the brain you jacked in.
The system would also have to be dynamic, since people learn as they age:
what a house is to you at age 10 is not what a house is to you at age 20.
Simstim and consensual hallucinations are taken for granted in cyberpunk,
and I took them for granted too.  The more I think about it, however,
the less probable it seems.

I'm cross-posting my lame followup to the neural-nets group in the
hope that someone there will have some comment or idea on how
a computer could possibly generate a consensual hallucination for
its operator, ideally entirely within the operator's mind, as
opposed to controlling holograms and force fields which the operator
would 'really' see around him.

11011011