[sci.virtual-worlds] Brain/Environment "bottleneck"

jtm@cs.cornell.edu (Jan Thomas Miksovsky) (03/01/90)

I keep hearing things like, "Won't it be great when we have a direct 
neuron-level interface to a digital system so that we won't be slowed 
down by things like typing."  But how wide would the bandwidth between 
our minds and a virtual reality system really be if there were no 
"bottlenecks" like hands and keyboards?

Think for a minute about how fast you can type or manipulate a mouse 
when you're processing at top speed.  When I'm really flying, I can keep 
maybe eight or ten operations in my Mac's buffer: I'm pounding away on 
command keys, and dialog boxes that begin to pop up on the screen are 
blown away before they can ever be completely drawn.  I can keep this up 
for maybe a minute or so; then I have to step back and figure out what 
I'm going to do next.

If we could directly manipulate the computer with neuron pulses, yeah, 
sure, we could cruise along at light-speed.  But if we're performing any 
kind of analytical task, wouldn't we still be limited by the speed of 
our consciousness "processor"?  We might think something like, "Load up 
the simulation of the house I'm building," and even if WHAM the 
simulation appears right away, it's going to take us a second or two to 
figure out how we next want to manipulate the environment.

The real speedup will probably occur in tasks that are repeated so 
often, and learned so well, that conscious thought isn't required.  
Think about when you play some sport that you're good at: balls and 
sticks or whatever are flying around, and you can just move around, 
reacting to the environment without having to spend a lot of high-level 
processing time considering your next move.

                              *  *  *  *  *

A related issue:  What research has been done in this area?  There's 
been some talk on the net recently about research involving direct 
stimulation of the visual processing areas of the brain to provide blind 
people with sight.  Does the opposite work -- with prosthetic limbs, for 
instance... Can brains learn to trigger neurons in order to activate a 
foreign object?

-- 

Jan Miksovsky                                ...uunet!applelink.apple.com!D4710

cphoenix@csli.Stanford.EDU (Chris Phoenix) (03/01/90)

In article <2193@milton.acs.washington.edu> jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:
>If we could directly manipulate the computer with neuron pulses, yeah, 
>sure, we could cruise along at light-speed.  But if we're performing any 
>kind of analytical task, wouldn't we still be limited by the speed of 
>our consciousness "processor"?

If we could directly manipulate the computer with neuron pulses, and it could
manipulate our neurons, surely someone would develop a brain-accelerator.  It
could be as simple as a math coprocessor, or maybe a memory.  I don't know 
how much we could be sped up while still staying human (followups to 
alt.philosophy.humanist.computer.flame :-) ) but it's too early to say we
won't be able to keep up with computers.

>The real speedup will probably occur in tasks that are repeated so 
>often, and learned so well, that conscious thought isn't required.  

For tasks like that, have an AI do it!  No, this isn't facetious, though it
may be naive.  It shouldn't be too hard to build a program that could learn
repetitive tasks, and do them at the proper times and in the proper ways...
or something... 
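
The dumbest version of this wouldn't even need much AI: a watcher that
counts adjacent command pairs in the event stream and flags any pair seen
often enough to be worth automating.  A toy sketch in C -- purely
illustrative, with the command names and threshold invented:

    /* Toy repetition-spotter: counts adjacent command pairs and flags
     * any pair frequent enough to be worth turning into a macro.
     * Illustrative only -- a real one would watch live events. */
    #include <stdio.h>
    #include <string.h>

    #define MAXPAIRS  64
    #define THRESHOLD 3

    struct pair { char a[16], b[16]; int count; };

    int main(void)
    {
        /* a pretend log of user commands (names are invented) */
        const char *log[] = { "open", "format", "save", "open", "format",
                              "save", "open", "format", "save", "quit" };
        int n = sizeof log / sizeof log[0];
        struct pair pairs[MAXPAIRS];
        int npairs = 0;
        int i, j;

        for (i = 0; i + 1 < n; i++) {
            for (j = 0; j < npairs; j++)
                if (strcmp(pairs[j].a, log[i]) == 0 &&
                    strcmp(pairs[j].b, log[i + 1]) == 0) {
                    pairs[j].count++;
                    break;
                }
            if (j == npairs && npairs < MAXPAIRS) {
                strcpy(pairs[npairs].a, log[i]);
                strcpy(pairs[npairs].b, log[i + 1]);
                pairs[npairs].count = 1;
                npairs++;
            }
        }
        for (j = 0; j < npairs; j++)
            if (pairs[j].count >= THRESHOLD)
                printf("candidate macro: \"%s %s\" (seen %d times)\n",
                       pairs[j].a, pairs[j].b, pairs[j].count);
        return 0;
    }
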
To tie this in with VR:  I recently called up VPL to ask about summer 
internship opportunities.  I told them I was interested in AI, and the person
I talked to (sorry, I didn't get the name) told me that they were trying to
get away from AI, and have it more user-directed.
This surprised me.  I would have thought that AI would be a pretty important
part of VR, if only to choose which things to display and how to display them.
Can anyone comment on whether AI is being used at all in VR, and if so, how?
(and if there are any AI-VR jobs available in the Bay Area? :-) )

-- 
Chris Phoenix               |  "I've spent the last nine years structuring my
cphoenix@csli.Stanford.EDU  |      life so that this couldn't happen."
...And I only kiss your shadow, I cannot see your hand, you're a stranger 
now unto me, lost in the dangling conversation, and the superficial sighs...

peb@tma1.Eng.Sun.COM (Paul Baclaski) (03/02/90)

In article <2193@milton.acs.washington.edu> jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:
>If we could directly manipulate the computer with neuron pulses, yeah, 
>sure, we could cruise along at light-speed.  

The human brain does a lot of distributed processing in the eyes and
ears and the "neural protocol" between the brain and these input
devices differs from person to person.  This is probably also the
case with motor nerves, olfactory bulb and tactile nerves (I've heard
that one of the major problems with heart transplants is that the 
nerve signals from the brain to the foreign heart are not quite right).

Because of this, any system that interfaces directly to nerves is
going to need lots of training and adjustment.  To use standard hardware,
one would need a personal interface machine to transform to and from 
your personal nerves.  The training time on such an interface could be
considerable--I wonder if Hans Moravec has any ideas in this area
(Mind Children does not mention problems of this sort).
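
To make the "personal interface machine" concrete: the box would have to
learn a mapping from your particular nerve signals to standard outputs,
much the way a modem trains against a particular line.  A minimal sketch
in C of one such calibration scheme -- a single-output adaptive filter
trained with the LMS rule; the channel count and signal values are
invented for illustration:

    /* Sketch of per-user calibration: learn a linear mapping from raw
     * "nerve" readings to a standard command signal via the LMS rule. */
    #include <stdio.h>

    #define CHANNELS 3
    #define TRIALS   200

    int main(void)
    {
        /* pretend this user's nerves encode "move forward" as this mix */
        double user_code[CHANNELS] = { 0.9, -0.3, 0.5 };
        double weights[CHANNELS]   = { 0.0, 0.0, 0.0 };
        double rate = 0.1;
        int t, c;

        for (t = 0; t < TRIALS; t++) {
            /* one calibration trial: user thinks "forward"; target 1.0 */
            double output = 0.0, target = 1.0, err;
            for (c = 0; c < CHANNELS; c++)
                output += weights[c] * user_code[c];
            err = target - output;              /* LMS weight update */
            for (c = 0; c < CHANNELS; c++)
                weights[c] += rate * err * user_code[c];
        }
        printf("learned weights: %.3f %.3f %.3f\n",
               weights[0], weights[1], weights[2]);
        return 0;
    }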

A direct nerve interface is probably a very invasive procedure, on the
order of big-time brain surgery, so I would expect it to be pretty far out
there on the time scale (e.g., you either need safe brain surgery 
(nanotech to the rescue) or volunteer terminal patients).


In article <2205@milton.acs.washington.edu>, cphoenix@csli.Stanford.EDU (Chris Phoenix) writes:
> If we could directly manipulate the computer with neuron pulses, and it could
> manipulate our neurons, surely someone would develop a brain-accelerator.  

How about this: we can talk in our heads without moving our mouths.
This means that it would be possible to tap into the inside voice and
use it as an output device.  Then, the coprocessor module (off-the-shelf
expert systems, personified) could listen to the voice and talk 
to you or display information "inside" your head.
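
In software terms that module is just an event loop sitting on the
inner-voice channel: read a phrase, decide whether it is addressed to you,
answer.  A toy sketch in C, with stdin/stdout standing in for the entirely
fictional neural link:

    /* Toy "inner voice" coprocessor: watches the (simulated) inner-voice
     * channel for requests addressed to it and speaks answers back. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char phrase[256];

        /* try: echo "coproc: what is 6 times 7" | ./a.out */
        while (fgets(phrase, sizeof phrase, stdin)) {
            if (strncmp(phrase, "coproc:", 7) != 0)
                continue;             /* a private thought -- ignore it */
            /* a real module would parse the request; we fake one answer */
            if (strstr(phrase, "6 times 7") != NULL)
                printf("[inside your head] 42\n");
            else
                printf("[inside your head] I don't know that one.\n");
        }
        return 0;
    }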

This assumes a neural interface, but it keeps the brain intact.
Accelerating thought by improving the brain is tantamount to
downloading your consciousness into a machine.  Are you the same
person after such a transformation?  How could we tell?

OVERALL, I think direct neural interfaces are in the realm of fantasy
and are not worth discussing too much (unless you want to write
Science Fiction).

The above brain augmentation could also be done with other technologies,
except that the "inside voice" could not be used.

> To tie this in with VR:  I recently called up VPL to ask about summer 
> internship opportunities.  I told them I was interested in AI, and the person
> I talked to (sorry, I didn't get the name) told me that they were trying to
> get away from AI, and have it more user-directed.

AI is orthogonal to VR.  I would guess that VPL's strategy is to empower 
people--it sounds like you are suggesting making things automatic,
which is great for making people lazy (e.g., "wouldn't it be nice to have
some AI slaves to order around?...").  This is enticing, but in a bad way.
In my opinion, the purpose of AI is to understand Intelligence and the
Universe.

Put another way:  Jaron Lanier stated in the Whole Earth Review interview
(Fall 1989) that technology that simply increases the amount of power 
people have is inherently bad (because we have all sorts of social 
problems that just get amplified).  He then stated that a technology
that increases human communication is good--how could anyone say that
the telephone was a bad idea?  He also mentioned that people who watch
TV appear to be uninvolved, but people talking on the telephone are 
very much involved.  Jaron wants to encourage involvement, not uninvolvement.


Paul E. Baclaski
Sun Microsystems
peb@sun.com

aipdc@castle.edinburgh.ac.uk (Paul D. Crowley) (03/02/90)

In article <2193@milton.acs.washington.edu> jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:
>Think for a minute about how fast you can type or manipulate a mouse 
>when you're processing at top speed.  When I'm really flying, I can keep 
>maybe eight or ten operations in my Mac's buffer: I'm pounding away on 
>command keys, and dialog boxes that begin to pop up on the screen are 
>blown away before they can ever be completely drawn.  I can keep this up 
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>for maybe a minute or so; then I have to step back and figure out what 
>I'm going to do next.

I too find that desktops work too slowly, and that this affects how
useful they are. Has anyone out there used a desktop so fast that they
wouldn't notice if it got ten times faster most of the time (i.e., one that
is already so quick that the duration of most tasks goes unnoticed)?  What
difference does it make?

I ask because I once had the opportunity to word-process on a 386
machine. The wp loaded within half a second and remembered where it was
before. It was also called "m" so it didn't take much effort to call up.

-- 
\/ o\ "I say we grease this rat-fuck son-of-a-bitch Paul D Crowley
/\__/ right now. No offense." - Aliens.             aipdc@uk.ac.ed.castle

almquist@cis.udel.edu (03/02/90)

In article <2193@milton.acs.washington.edu> jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:
>If we could directly manipulate the computer with neuron pulses, yeah, 
>sure, we could cruise along at light-speed.  But if we're performing any 
>kind of analytical task, wouldn't we still be limited by the speed of 
>our consciousness "processor"?  We might think something like, "Load up 
>the simulation of the house I'm building," and even if WHAM the 
>simulation appears right away, it's going to take us a second or two to 
>figure how we next want to manipulate the environment.

YES, valid points.  BUT, perhaps by then we shall have an AI/Expert System
friend/co-pilot.  If not a co-pilot then perhaps advanced MACROs.  Instead
of having to say, "Load up the simulation of the house I'm building," we
could instead say "Simulate, House, Earthquake".  YES, poor example BUT,
if you stop to think about it, the next generation might be drastically
different.  What would happen to the thinking processes of this generation?
Would they be modified, simplified, extended, etc.?  Perhaps the next
generation will not be limited by our learned limitations (i.e., our mental
blocks).  They will not remember the age of keyboards.  They will begin to
develop drastically different memory structures, fetch/recall routines, etc.
If we were to tap directly into their brains at an early age (like the future
Gibson wove for us), their mental development, I feel, would become
drastically different from that of someone who doesn't plug in.  Perhaps this
increased stimulation of the brain MIGHT increase one's total brain usage.
As the saying goes, "we currently use less than 25% of our brain's total
power."
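
To make the advanced-MACRO idea concrete: "Simulate, House, Earthquake" is
really just a lookup from a few tokens to the long-winded command they
abbreviate.  A trivial sketch in C (the entries are invented examples):

    /* Trivial macro expander: short spoken/thought tokens map onto the
     * full commands they abbreviate. */
    #include <stdio.h>
    #include <string.h>

    struct macro { const char *tokens; const char *expansion; };

    static const struct macro table[] = {
        { "simulate house",
          "load the simulation of the house I'm building" },
        { "simulate house earthquake",
          "load the house simulation and run the earthquake test" },
    };

    int main(void)
    {
        const char *request = "simulate house earthquake";
        int i, n = sizeof table / sizeof table[0];

        for (i = 0; i < n; i++)
            if (strcmp(request, table[i].tokens) == 0) {
                printf("\"%s\" -> \"%s\"\n", request, table[i].expansion);
                return 0;
            }
        printf("no macro for \"%s\"\n", request);
        return 0;
    }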

- Michael Almquist

wex@sitting.pws.bull.com (Alan Wexelblat) (03/04/90)

Jan Miksovsky was asking about the ability to do routine things or
well-practiced things without having to expend higher-order brain power.
Look up research on "muscle memory."  That's the term I've seen used to
describe the way non-obvious physical actions are mapped into semantic
actions (such as touch-typing).

One of the cyberspace navigation techniques invented by Kim Fairchild and
myself depends on muscle memory.  It works moderately well.

--
--Alan Wexelblat		internet: wex@pws.bull.com
Bull HN Information Systems	Usenet: spdcc.com!know!wex
phone: (508) 671-7485
	Adapt, adopt, improvise!

bro@eunomia.rice.edu (Douglas Monk) (03/04/90)

In article <2193@milton.acs.washington.edu> jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:

>A related issue:  What research has been done in this area? [...]
>Can brains learn to trigger neurons in order to activate a foreign object?

Short answer: yes. Back in the '70s, I recall reading some experimental
reports on implanting fine wires as sensors into muscle fibers in the wrists
of subjects. After biofeedback training, they could learn to activate
individual muscle CELLS on command. The drawbacks that I recall are a long
training period, occasional poor performance rates (such as when fatigued,
etc.), and the fact that the wires would eventually kill the fibers in which
they were implanted, thus requiring reimplantation and a new training period.
I believe they had mapped out strategies around the first two problems, and
the last was the hard part. I recall thinking at the time that typewriters
would soon come with a cable to plug into sockets wired into our wrists,
and we would just have to visualize to type. Looking at my wrists, I see it
hasn't happened yet, despite being technically feasible for 20 years. :-)
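
The software side of that setup is the easy part, which is what made the
biology so frustrating: once the wires are in and the thresholds are
calibrated, "typing by visualizing" reduces to watching each channel for a
burst above its threshold.  A sketch in C (channel count, thresholds, and
readings all invented for illustration):

    /* Sketch of wired-wrist typing: each implanted sensor is a channel;
     * a reading above its calibrated threshold counts as that muscle
     * cell firing, which maps to a keystroke. */
    #include <stdio.h>

    #define CHANNELS 4
    #define SWEEPS   3

    int main(void)
    {
        /* per-channel thresholds, as learned in biofeedback training */
        double threshold[CHANNELS] = { 0.50, 0.55, 0.45, 0.60 };
        char   key[CHANNELS]       = { 'a', 's', 'd', 'f' };

        /* pretend sensor readings, one sweep per row */
        double sweep[SWEEPS][CHANNELS] = {
            { 0.70, 0.10, 0.05, 0.12 },   /* cell 0 fires -> 'a' */
            { 0.08, 0.11, 0.62, 0.09 },   /* cell 2 fires -> 'd' */
            { 0.12, 0.81, 0.07, 0.14 },   /* cell 1 fires -> 's' */
        };
        int s, c;

        for (s = 0; s < SWEEPS; s++)
            for (c = 0; c < CHANNELS; c++)
                if (sweep[s][c] > threshold[c])
                    putchar(key[c]); /* prints "ads" */
        putchar('\n');
        return 0;
    }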

Doug Monk (bro@rice.edu)

Disclaimer: These views are mine, not necessarily my organization's.

wcs@erebus.att.com (Bill Stewart) (03/05/90)

In article <WEX.90Mar2090441@sitting.pws.bull.com> wex@sitting.pws.bull.com (Alan Wexelblat) writes:
]Jan Miksovsky was asking about the ability to do routine things or
]well-practiced things without having to expend higher-order brain power.

	Some of the people I've talked to at LLNL are speculating
	about what they'd like, and they'd really like to have two-eye
	1Kx1Kx24bit-color at 30 frames/second, because that's about
	the most data input they think they can use - about 1Gbit/sec.
	It needs to be attached to the back-side of a Cray,
	because what they'd really like is for a couple of physicists
	to be able to walk around in a simulated nuclear reaction,
	having conversations about "what if it were a couple hundred
	degrees hotter?" or "let's make that wall a little thinner"
	"No, not that thin", or "magnify that spot there 100X".

] One of the cyberspace navigation techniques invented by Kim Fairchild
] and myself depends on muscle memory.  It works moderately well.

	How do you attach to it - position-detection gloves, or
	head/eye motion detectors, or other stuff?

[ Disclaimer: I don't do this stuff professionally (my contact with the
LLNL people was through datacomm work.)  The .signature was appropriate for
a random discussion and I haven't found anything sufficiently weird to
replace it.]

-- 
# Bill Stewart AT&T Bell Labs 4M312 Holmdel NJ 201-949-0705 erebus.att.com!wcs
# Fax 949-4876.  Sometimes found at Somerset 201-271-4712
# He put on the goggles, waved his data glove, and walked off into cyberspace.
# Wasn't seen again for days.

jeffj@cbnewsm.ATT.COM (jeffrey.n.jones) (03/06/90)

jtm@cs.cornell.edu (Jan Thomas Miksovsky) writes:
>A related issue:  What research has been done in this area?  There's 
>been some talk on the net recently about research involving direct 
>stimulating the visual processing areas of the brain to provide blind 
>people with sight.  Does the opposite work -- with prosthetic limbs, for 
>instance... Can brains learn to trigger neurons in order to activate a 
>foreign object?

I have read about and seen prosthetic limbs that were crudely controlled
and activated by direct connections to nerves.  At a virtual reality seminar
I just attended, there was mention of a chip under development that would
interface to the nerves and act as a simple switch.

[Moderator's Note:]

Dr. Joseph Rosen at Stanford University Medical Center has been conducting
research into this area.  The goal of one of his projects is to
develop a "chip prosthesis" that will enable specific axon-to-axon
reconnection.

-- 
         Jeff Jones        | Prediction is very difficult, especially 
 UUCP   uunet!seeker!jeffj | about the future.
 Infolinc BBS 415-778-5929 |                   Niels Bohr