[alt.cyberpunk] who does it... // State of the art today?

gnu@hoptoad.uucp (John Gilmore) (09/24/87)

Then there's also Walter Jon Williams' _HardWired_ (ISBN 0-812-55796-4).
I liked it, as a good old charismatic outlaws versus the government story.
You can tell it's punk from the leathers worn by the characters on the
front cover, not to mention the cut of their hair.  You can tell it's
cyber because they have chips stuck to their temples.

But Seriously, Folks, I don't want to get too hung up in definitions
and review of the literature; I'd rather talk about how we get there
from here (or avoid getting there, if that's better).

Much of the cyberpunk literature seems to assume direct brain/chip
connections, without even the benefit of an RS232 port (;_).  Is there
anyone here who can speak knowledgeably about the current state of
human-nerve-to-electronics interfaces?  My impression is that nobody
knows how to get more than a few bits per second through them, in
either direction, and that nobody has trained themselves to send or
receive language through such an interface.  Or, in other words, CRTs
and keyboards are the state of the art in man/machine interfaces, and
there is no clear direction for improving on that.
-- 
{dasys1,ncoast,well,sun,ihnp4}!hoptoad!gnu			  gnu@toad.com

tim@hoptoad.uucp (Tim Maroney) (09/24/87)

John, you're pretty much right about the state of the art.  I wouldn't bet
on any serious neuro-electric interfacing being done in the next two
decades.  There are a *lot* of problems.  You can't just stick wires
into neurons.

First, sensory and muscular axons are often very long - some are several
feet long - and it is hard to see how many of the cyberpunk wonder toys
could avoid cutting neurons.  After cutting the neuron, you have to jam some
kind of genetically engineered substitute against the raw end (and fast;
cells are not very happy when they don't have any nuclei), and the
techniques for making something to do this just don't exist.

Second, neural connections are complicated, with amazingly small features
determining their topology.  What's more, the features change over time, and
any artificial add-on would have to do the same.

Third, neurons are not the same from person to person.  The nervous system
works out its own codes and patterns in the course of development, and it is
well established that small differences during development blossom into
large differences later.  Albinism is particularly interesting in this
respect.  Somehow the add-on systems would have to adapt to the
idiosyncrasies of each particular nervous system.

Fourth, the synaptic junction is still fairly mysterious.  A lot of the
physical chemistry and physics has been worked out; a lot hasn't.

My understanding, such as it is, comes from a few semesters of
graduate-level courses on sensory processes and physiological psychology,
and a good deal of outside reading.  For anyone who wants to get a good,
reasonably current, and well-balanced treatment of the human nervous system,
I would recommend Jean-Pierre Changeux, "Neuronal Man", Oxford Paperbacks,
1985 (Dr. Laurence Garey, translator).

And before closing, I'd like to make it clear that though the obstacles are
formidable, I expect to see amazing breakthroughs before the end of my
natural lifespan (I'm 25).
-- 
Tim Maroney, {ihnp4,sun,well,ptsfa,lll-crg}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)

gary@sdcsvax.UUCP (09/24/87)

In article <3048@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes:
>
>But Seriously, Folks, I don't want to get too hung up in definitions
>and review of the literature; I'd rather talk about how we get there
>from here (or avoid getting there, if that's better).
>
>Much of the cyberpunk literature seems to assume direct brain/chip
>connections, without even the benefit of an RS232 port (;_).  Is there
>anyone here who can speak knowledgeably about the current state of
>human-nerve-to-electronics interfaces?  My impression is that nobody
>knows how to get more than a few bits per second through them, in
>either direction, and that nobody has trained themselves to send or
>receive language through such an interface.  Or, in other words, CRTs
>and keyboards are the state of the art in man/machine interfaces, and
>there is no clear direction for improving on that.
>-- 
>{dasys1,ncoast,well,sun,ihnp4}!hoptoad!gnu			  gnu@toad.com

Au contraire, here is my own work on the output end:



                                  SEMINAR

                      _The Connectionist Air Guitar:
                            A Dream Come True_

                          Garrison W. Cottrell
                        Department of Air Science
          Condominium Community College of Southern California


     A major problem faced by many Cognitive Scientists has been the
latent desire to be a rock'n'roll star, without the requisite
talent[1].  Recent advances in connectionist learning mechanisms
(Sutton, 1987) have obviated this need.  In this work we present the
design for the _connectionist air guitar_[2] - the first air guitar
to actually produce the notes played.

     This work was motivated by the observation that it is not hard
for people to play the songs of their favorite groups on their
_internal phonograph_[3] (Kosslyn, 1977).  Thus the problem may simply
be one of poor mapping hardware.  This suggests that augmentation by
cognitive models may be useful.  PDP models are the obvious candidate
for this task, given that they are "neurally-inspired", or
"brain-like"[4].  In this talk we present the first true augmentation
of the mind by a connectionist model, called Neuro-Acoustic
Programming.

     We use a three-layer system as follows: Electrodes are placed on
the subject's scalp using the International 10-20 system and amplified
by Grass 7P511 preamplifiers[5].  These are the inputs to the hidden
units.  The output layer is simply a localist representation of the
notes.  These are then interfaced with a standard guitar synthesizer.

     In training, the subject listens to Springsteen while "air
guitaring" the lead.  The EEG drives the network, resulting in a set
of outputs.  This result is then compared to the correct output (the
_music teacher_ signal) at small delta t's using Sutton's temporal
difference method, and the errors are back-propagated in the usual
way.  After two albums, the network learns to produce the desired
notes from the EEG.  Of note here is that the hidden units develop a
distributed encoding of the _qualia_ of the notes, including
coarsely-coded features sufficient to distinguish Jerry Garcia from
Conway Twitty[6].  However, myogram noise in the EEG often leads to
noise in the output, so it appears necessary to implant arrays of
silicon electrodes (developed by Jim Bower at CalTech) directly into
the temporal lobes, eliminating interference from muscle signals.  In
this case, the network must actually be borne to run.

____________________
   [1] One approach is to ignore this and form a band anyway.  People
who took this tack started the punk movement.
   [2] An _air guitar_ is a conceptual representation of a guitar,
played in synchrony with actual music.  A cult has formed around this
endeavor, with many contests currently being held in local bars.
   [3] Some people claim that they actually _can't_ play the songs
internally as well as they hear them.  This is the "bad cognitive
needle" problem, or, in the case of Kosslyn's more advanced _internal
cassette player_ model, "air heads."  As long as the signal uniquely
specifies the song, it still maps to the right notes, so this
technique is useful for the hard of thinking.
   [4] This is to be contrasted with "neurally-expired", or
"brain-dead" models.
   [5] Other types of Grass amplifiers produce a more "sixties-like"
sound.
   [6] Some hidden units convert six into nine, the so-called _Jimi
Hendrix_ units (Easy Rider, 1969).
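
     (For the terminally literal-minded, a minimal sketch in C of the
forward pass described above.  The layer sizes, the weights, and the
logistic squashing function are all assumptions; the real system would
of course be trained on two albums of Springsteen first.)

#include <math.h>

#define N_ELECTRODES 20   /* roughly the 10-20 montage's scalp sites */
#define N_HIDDEN     16   /* hidden layer size: pure assumption */
#define N_NOTES      12   /* localist output: one unit per note */

/* the usual PDP squashing function */
static double squash(double x) { return 1.0 / (1.0 + exp(-x)); }

/* One forward pass: amplified EEG in, one activation per note out.
   w1 and w2 are weights assumed already learned by back-propagating
   the music-teacher error, as in the talk. */
void air_guitar(const double eeg[N_ELECTRODES],
                const double w1[N_HIDDEN][N_ELECTRODES],
                const double w2[N_NOTES][N_HIDDEN],
                double notes[N_NOTES])
{
    double hidden[N_HIDDEN];
    for (int h = 0; h < N_HIDDEN; h++) {
        double sum = 0.0;
        for (int i = 0; i < N_ELECTRODES; i++)
            sum += w1[h][i] * eeg[i];
        hidden[h] = squash(sum);  /* distributed encoding of the qualia */
    }
    for (int n = 0; n < N_NOTES; n++) {
        double sum = 0.0;
        for (int h = 0; h < N_HIDDEN; h++)
            sum += w2[n][h] * hidden[h];
        notes[n] = squash(sum);   /* drives the guitar synthesizer */
    }
}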

chris@pyramid.UUCP (09/24/87)

I feel like the state-of-the-art has more to do with how we
view the computer world than what we view it with.

One of the major predictions in the Gibson books is of the
network having grown into a world of its own. In Neuromancer,
et al., this space is entered through cybernetic connections
called decks.  Tron had the same kind of innerspace based
on the inside of the computer.

Gibson is very cautious about the appearance of the world
inside the net.  Tron couldn't be that cautious.  Disney
Productions had to use pictures that we could understand
and objects that we see everyday.  Gibson adds to the mystique
of his novels by only allowing us small views into the minds
of his cowboys.

All this leads up to our network -- how we use it, and how
we view it.  I'll use the Internet for my example since it's
likely someday to become part of the larger net which Gibson
has turned into a world of his own.

Today, our use of the Internet involves getting from one point
to another -- computer to computer.  Few people use it solely
for the ride (even though I'm sure many have gotten a kick out
of telneting to another continent).  Very few users care about
how they get to another site and what sites they pass through.
When people consider the two sites they are using, they picture
them geographically rather than according to net topology.

In Gibson's network you can go out on the net and look for things,
namely protected corporate and military computing hubs.  You know
where you are and have some idea of where you want to go.  You can
stop, take your bearings, and explore.

Only the rudiments of this exist on the Internet today.  So much
effort has gone into making the network transparent that today
few people understand it, and even fewer can understand what it's
doing at any given time.  For example, at Berkeley a program was
written to send out ICMP packets that would collect data about
the gateways they had gone through.  Unfortunately, many vendors
haven't bothered to support this in their networking code.  Some
gateways even crash when queried, making the program dangerous
to use on the net.  However, this kind of software would form a
base for future software that can 'ride' the net.
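
(As a sketch of the idea, here is the hop-by-hop trick in C.  This
isn't the Berkeley program -- it just steps the IP time-to-live on a
UDP socket so that each gateway along the path announces itself with
an ICMP time-exceeded message.  Reading those replies takes a raw
ICMP socket and privilege, which I've left out; the port number and
the 30-hop cap are arbitrary.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s target-ip\n", argv[0]);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(33434);     /* an unlikely port, so the end
                                        host answers port-unreachable */
    dst.sin_addr.s_addr = inet_addr(argv[1]);

    for (int ttl = 1; ttl <= 30; ttl++) {
        /* each gateway that decrements TTL to zero names itself */
        if (setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl) < 0) {
            perror("setsockopt");
            return 1;
        }
        sendto(fd, "ride the net", 12, 0,
               (struct sockaddr *)&dst, sizeof dst);
        printf("probe sent with ttl %d\n", ttl);
        sleep(1);
    }
    close(fd);
    return 0;
}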

The second problem with 'riding' the net is that of the geographical
vs. topological view of the network.  The Network Information
Center will send you neat little maps of the various networks
which show the connections and the locations of the sites on the
various networks they support.  All are printed on nice maps of the
US or the world.  A tremendous amount of useful information is not
provided, though.  Network speeds, local area networks, gateways
to other networks, machine types -- this information is all
essential to knowing where you are and where you're going.

What I expect to see in the near future are programs that can
graphically represent the network using color workstations.  These
programs will be able to dynamically display the network using
a direct connection to the network and querying various gateways
and machines for information on the network's behavior.  This
kind of information (to the best of my knowledge) isn't currently
available, but I expect that as the networks grow it will become
important enough to those administering them to make these hooks
necessary.
The administrators will doubtless be followed onto the net by next
year's cowboys and hackers.
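
(A toy of what such a program's core might look like: collect
(gateway, neighbor, speed) records from whatever query hooks exist
and keep them as a topological edge list rather than a map.  The
struct, the names, and the sample data below are all invented.)

#include <stdio.h>

/* a link as the display tool would model it: topology plus the
   attributes the NIC's paper maps leave out */
struct link {
    char from[32];
    char to[32];
    long bits_per_sec;   /* line speed */
};

/* in a real tool these would come from querying the gateways;
   here they are hard-wired sample data */
static struct link net[] = {
    { "campus-gw",   "regional-gw", 10000000 },  /* an Ethernet */
    { "regional-gw", "core-gw",        56000 },  /* a long-haul line */
    { "core-gw",     "far-site",       56000 },
};

int main(void)
{
    /* the simplest possible 'display': print the edges; a color
       workstation version would lay them out graphically and
       re-query periodically to animate the net's behavior */
    int n = sizeof net / sizeof net[0];
    for (int i = 0; i < n; i++)
        printf("%-12s -> %-12s  %8ld b/s\n",
               net[i].from, net[i].to, net[i].bits_per_sec);
    return 0;
}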

At this point we will start looking at the network less as a bunch
of wires strung together and more as a computer in its own right.
After all, hasn't Sun been using the motto "The network is the
computer" for a while now?

laura@hoptoad.uucp (Laura Creighton) (09/25/87)

Well, I don't know about any bbn-nerve connectors for your ethernet either,
but I do know that voice recognition software is getting there.  Once
we get computers we can talk to, we can probably get computers we can
subvocalize to.  People are building speech boxes for people who
have ruined larynxes, so I think that a fair bit of that technology must
be there already.

Me, all I want to do is to be able to talk to you by packet radio that
way.  Then the alt.cyberpunks can muscle into misc.psi and show those
wimps how telepathy is really done.  
-- 
It's the things that are useful in slaves that computers are really bad at.

Laura Creighton	
ihnp4!hoptoad!laura  utzoo!hoptoad!laura  sun!hoptoad!laura

maiden@sdcsvax.UUCP (09/25/87)

In article <3050@hoptoad.uucp> tim@hoptoad.UUCP (Tim Maroney) writes:
>John, you're pretty much right about the state of the art.  I wouldn't bet
>on any serious neuro-electric interfacing being done in the next two
>decades.  There are a *lot* of problems.  You can't just stick wires
>into neurons.

Yes, but that is not the only way to get this information.
For a good example of the state of the art, take a look at _Science_
(the Journal of the American Association for the Advancement of Science)
last month (I am unsure at the moment of the exact issue) where a
private lab called EEG Systems Laboratory published a paper about
measuring performance expectation through cortical EEG.

Unlike the connectionist air guitar, this is actual fact; EEGSL has
done other things with EEG deconvolution (according to them, another
paper is due to be published in _Nature_).

Some references: you might start with some of the books edited
by Klix (there is only one Klix on the Melvyl library search in
the UC system).

Although I am not an expert in human NMR, preliminary results from
this sector of clinical medicine seem to make the system a promising
vector for real-time analysis of behavior (witness what has been
done, albeit very crudely, with the PET scanner).

The points made about the dynamic changes in the neural system are
all correct; however, an extrinsic monitoring system would sidestep
most of these concerns.  A caveat: it is still unclear exactly what
sort of information _cannot_ be derived from electrical signals
(see Woody: Electrical Fields of the Brain); it is thought, however,
that the information lost would be primarily long-term potentiation.

>And before closing, I'd like to make it clear that though the obstacles are
>formidable, I expect to see amazing breakthroughs before the end of my
>natural lifespan (I'm 25).

I echo this sentiment.

Edward K. Y. Jung
------------------------------------------------------------------------
UUCP: {seismo|decwrl}!sdcsvax!maiden     ARPA: maiden@sdcsvax.ucsd.edu

oster@dewey.soe.berkeley.edu.UUCP (09/26/87)

Well, Professor Thomas Cheatham, senior faculty in Computer Science
at Harvard, used to say that if you want to do direct brain input, just
use the optic nerves: they are a piece of brain tissue that has pushed
its way through the skull to get a better view.

On this subject, Air & Space magazine (a Smithsonian subsidiary) this
past summer had an article on state-of-the-art heads-up displays and
the future cockpit.  The idea, common in flight simulators, is to track
the position of the eyes, and use your computer's processing power to
fill in the details of the image where the user is actually looking.
Combine this with a good pair of Sony WatchMans, and a decent
inertial tracking system, and you've got a consensual illusion
generator: The computer re-projects your environment onto your video
shades with whatever insertions (or deletions) you've programmed. As
long ago as the early '60s you could wear a helmet on an articulated
arm and walk around a room populated by 3-d computer graphics. As your
head and eyes move, the computer just reprojects the graphics onto the
screens in front of your eyes. (This was described in Bertram
Raphael's history of AI (called "The Thinking Computer" (I think.)))
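
(To show how little magic is in the eye-tracking trick, here's a
minimal sketch in C: pick rendering detail as a function of angular
distance from wherever the tracker says you're looking.  The
thresholds are made up, not physiological gospel.)

#include <math.h>

/* level of detail for a screen point, given the tracked gaze;
   all coordinates are in degrees of visual angle */
int detail_level(double px, double py,   /* point being rendered */
                 double gx, double gy)   /* current gaze direction */
{
    double dx = px - gx, dy = py - gy;
    double ecc = sqrt(dx * dx + dy * dy);  /* eccentricity from gaze */

    if (ecc < 2.0)  return 3;   /* fovea: full resolution */
    if (ecc < 10.0) return 2;   /* near periphery */
    if (ecc < 30.0) return 1;
    return 0;   /* far periphery: a few big polygons will do */
}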

Our Air Force is considering using a version of this technology for
pilots targeting "fire and forget" missiles: The image on the
plexiglass cockpit gets superimposed with telemetry data. Just look at
a target, press the fire button, and hunt for the next target.

For the home, this idea is best combined with the 3-d "gloves"
described in the current SigCHI proceedings. These gloves measure the
bending of the joints of the hand and send the data to your computer.
They also have little piezo-electric effectors on the fingertips.
Imagine it: your computer projects the master synthesizer keyboard
in the air in front of you (it is really left and right video images
on the heads up display glasses you are wearing, but it looks like
it's just labeled buttons glowing in the air in front of you.)

As you punch the buttons, the effectors in your gloves push back just
a bit, so it feels like you've made contact. You punch a few buttons
and whole sections of the synthesizer pop into existence. (with sound
effects if you want them.)
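
(The glove side of this is mostly thresholding.  A sketch in C, with
an invented sensor model: decide when a fingertip has crossed the
plane of a virtual button, and pulse the piezo effector so it feels
like contact.)

#define PRESS_THRESHOLD 0.8   /* normalized fingertip travel: invented */

struct finger {
    double bend[3];      /* joint bends, 0.0 = straight, 1.0 = curled */
    int    was_pressed;  /* debounce: fire only on the transition */
};

/* crude forward model of how far the fingertip has traveled */
static double fingertip_travel(const struct finger *f)
{
    return (f->bend[0] + f->bend[1] + f->bend[2]) / 3.0;
}

/* returns 1 exactly once per press; pulses the effector on contact */
int poll_finger(struct finger *f, void (*pulse_effector)(void))
{
    double travel = fingertip_travel(f);
    if (travel >= PRESS_THRESHOLD && !f->was_pressed) {
        f->was_pressed = 1;
        pulse_effector();   /* push back a bit: feels like contact */
        return 1;           /* button event: play the note, etc. */
    }
    if (travel < PRESS_THRESHOLD)
        f->was_pressed = 0;
    return 0;
}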

Or, imagine the 3-d analog of the Macintosh desktop, with window cubes
floating in space for you to walk around, drag into place, or squeeze
and stretch to more convenient sizes to watch the processes going on
within them (run your finger toward yourself along the left top edge
of the "window cube" to zoom in (magnify) the image; run your finger
away from you along the edge to zoom it out.)
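
(That zoom gesture reduces to one line of arithmetic: exponential in
the drag distance, so a one-inch pull always zooms by the same ratio
no matter how big the cube already is.  The feel constant is
arbitrary.)

#include <math.h>

double zoom_scale(double old_scale, double drag_inches)
{
    const double k = 0.5;   /* gesture "feel" constant: arbitrary */
    return old_scale * exp(k * drag_inches);  /* toward you = zoom in */
}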

A group of people with networked computers can share the same computer
space.  (If I'm in the same room with you, there is no problem; if I'm
in a different room you might see a computer-generated silhouette of
me with my digitized face.)

We have the technology, we can do all of the above today. Now for the future:

All of this is possible today and cheap tomorrow. To take it really
far out, if you combine this with the emerging nano-technology
outlined in Drexler's book "Engines of Creation" then imagine this:

Already, I can go into "computer clay mode" and shape light with my
fingers. The computer is watching my gestures and leaving a glowing
sculpture in the air as I move. Couple that with "zoom" and "shrink"
and all the tools of a standard CAD system and you've got a great 3-d
design system. That's all standard "in the labs today, the office
tomorrow, and the closet the day after" technology. Add Drexler's
nano-tech and I can say to that glowing sculpture: "be steel", "be
polystyrene" "be glass", "be foam with properties I'm selecting from
this menu floating by my left shoulder" and it is that. Form a chair
out of light with your fingers, and then sit in it.

If I've sparked your interest a little, well keep in touch:

--- David Phillip Oster            --A Sun 3/60 makes a poor Macintosh II.
Arpa: oster@dewey.soe.berkeley.edu --A Macintosh II makes a poor Sun 3/60.
Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu

hugh@mit-eddie.UUCP (09/26/87)

  The last thing I found on machine/wetthings interfaces was a note about
some research in Denver where a blind person had a 4x4 grid of wires
strapped to (his?) vision center (in the back of the head).  You got it,
a 16 BIT display.  After a lot of work and fiddling with voltages this
person kind of saw something, enough to encourage the author of the article.
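  (The software side of a 16 BIT display is humblingly small -- a
sketch in C, with a made-up stimulus level standing in for all that
voltage fiddling:)

#define GRID    4
#define STIM_MV 300   /* per-electrode stimulus level: invented */

/* image is 16 bits, one per electrode, row-major */
void show_frame(unsigned short image, int millivolts[GRID][GRID])
{
    for (int row = 0; row < GRID; row++)
        for (int col = 0; col < GRID; col++) {
            int bit = (image >> (row * GRID + col)) & 1;
            millivolts[row][col] = bit ? STIM_MV : 0;
        }
}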
  Another lost lead in my connectionsconnectionsconnections machine was
someone in southern California who got the DOD to pay for a 1500
element SQUID helmet (and the 5 or 10,000 transputers to do the FFT's).
This was for looking in and trying to see where a person's thoughts moved
around in their wetware.  (This was, say, 3 years ago, before high-temp
superconductors, so imagine a helmet (for your not-thick-enough skull) that
has liquid He in it!)
  Both of these notes are lost fragments that I would like pointers toward
more info for, if you got them.
  Most of the rest of what is going on is sensors for the arm to tell how it
is moving in 3-space.  So digittoids, if you know more, flow your info down
my light pipe...
		||ugh Daniel
hugh@hop.toad.com hugh@eddie.mit.edu hugh@m-net.uucp hugh@well.uucp ...gac...

hobie@sq.UUCP (09/27/87)

Chris Guthrie (chris@pyramid.UUCP) writes:
>Gibson is very cautious about the appearance of the world
>inside the net.  Tron couldn't be that cautious.  Disney
>Productions had to use pictures that we could understand
>and objects that we see everyday.  Gibson adds to the mystique
>of his novels by only allowing us small views into the minds
>of his cowboys.

I think of cyberspace as looking like the game Marble Madness.  Black
ICE is those little green carnivorous slinkies and the black marbles.

"Case dodged past the ominously circling black sphere and dashed for the
green funnel. Just ahead he could see the goal flags fluttering in an 
electronic breeze. He accelerated, only a few metres from safety. Suddenly,
the music stopped and giant red letters ahead announced "GAME OVER". He was
flatlined.

 Hobie Orris			 	| 	
 guest of SoftQuad Inc., Toronto, Ont.	|"There'll be no more giant leeches
 {ihnp4 | decvax | ? }!utzoo!sq!hobie	| When you find the good Lord Jesus"

stadler@apple.UUCP (09/29/87)

In article <3055@hoptoad.uucp> laura@hoptoad.UUCP (Laura Creighton) writes:

>Me, all I want to do is to be able to talk to you by packet radio that
>way.  Then the alt.cyberpunks can muscle into misc.psi and show those
>wimps how telepathy is really done.  

This brings to mind another book...  I don't think you could call it
cyberpunk but it does have direct brain-cpu connectivity.  That would be 
"Oath of Fealty" by Niven & Pournelle.  The Todos Santos high-mucky-mucks 
had special neural transcievers.  Actually, anybody could get one, but
(a) they were bery-bery expensive, and (b) use of the mainframes was limited
by a fairly straightforward access control scheme.  (like getting hold
of a password).

Occasionally the protagonists would use it for rudimentary telepathy:

"Computer, tell so-and-so that I want to see them" 

and the computer would forward the message.  The interesting part was when 
two "wired" people put their transceivers in "forwarding" mode and proceeded
to make love by way of telepathic communication!!!!

--Andy Stadler

jojo@speedy.UUCP (09/29/87)

In article <3055@hoptoad.uucp> laura@hoptoad.UUCP (Laura Creighton) writes:
>Well, I don't know about any bbn-nerve connectors for your ethernet either,
>but I do know that voice recognition software is getting there.  Once
>we get computers we can talk to, we can probably get computers we can
>subvocalize to.  People are building speech boxes for people who
>have ruined larynxes, so I think that a fair bit of that technology must
>be there already.
>
	Actually, this whole area of technology would/will probably explode if
	the business world saw a profit coming out of it.  I see the biggest,
	although maybe the dumbest, uses of this technology being aimed at
	children in the form of Teddy Ruxpin bears.  Then again, take the
	technology further and have the damn bear Really parse English, give
	it access to large information databases, make it more mobile, and
	you have a private tutor for your child which could probably answer
	your child's questions better than you could, adapting to the level
	of information he/she is capable of.  Now that would be interesting.

	--j
jon wesener
jojo@speedy.wisc.edu
	"If you like ASTROTIT, you should see what's coming! ;-)"

maddox@ernie.Berkeley.EDU.UUCP (10/04/87)

As far as the interfacing goes, I was at the Superconductors panel at
Westercon 40.  One of the things brought up was SQUIDs:  Superconducting
Quantum Interference Device, I think it is.  The thing is a VERY sensitive
instrument for detecting electrical activity through the usual induction
principles.  They found that you get around 200 milliseconds of free will
between when you decide to perform an action and when it happens.  A monkey
with a SQUID headset was punching numbers randomly on a keypad, and a computer
correlating the input was able to tell which button it would push.  I assume
it would be different for everyone to get things mapped out, though...  And,
of course, once they can read what's going on in your mind, they can then
write using induction:  ZAM!  Your optic nerve is overridden and your visual
centre gets the direct input.  Imagine suntools where you just think at each
window to come forward, close, et cetera..  (I'm still slobbering over the
Sun, given that I usually work with 80286's..)  If anyone has any more recent
data than this July, post!
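
(The correlating part needs nothing exotic once the signal is
digitized.  A toy version in C: keep an average readiness waveform
per button from a calibration run -- the "getting things mapped out"
step, which would indeed differ for everyone -- then classify each
new window by whichever template it's closest to.  The dimensions are
invented.)

#include <float.h>

#define N_BUTTONS 10   /* keypad 0-9 */
#define N_SAMPLES 64   /* samples in the ~200 ms window: invented */

int predict_button(const double window[N_SAMPLES],
                   const double templates[N_BUTTONS][N_SAMPLES])
{
    int best = 0;
    double best_dist = DBL_MAX;

    for (int b = 0; b < N_BUTTONS; b++) {
        double dist = 0.0;
        for (int t = 0; t < N_SAMPLES; t++) {
            double d = window[t] - templates[b][t];
            dist += d * d;   /* squared distance to this template */
        }
        if (dist < best_dist) { best_dist = dist; best = b; }
    }
    return best;   /* the guess, ~200 ms before the finger moves */
}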

I feel it worthwhile to mention an organization:  Cyberpunk International.
Send $3.00 to 	Cyberpunk International
		P.O. Box 2187
		Sunnyvale, CA  94087

Dr. Odd, our founding member, will be glad to have more members.  I
suppose I should type up all the signup stuff but I'm too tired.  So far
we've got our second newsletter out.  Cyberpunk International is basically
a link-up place for cyberpunks to put articles and things..  It's mainly
in the South Bay with a few of us here in Berkeley and San Francisco, but
the faster it spreads the better.  Well, I gotta get some down time.  If
anyone knows any substitutes for sleep besides caffeine or uppers, I'd love
to know...  (60-hour work weeks, yum!)
								Carl
/----------------------------------v------------------------------------------\
| Carl Greenberg, guest here       | "I have a very firm grasp on reality!  I |
| ARPA:  maddox@ernie.berkeley.edu | can reach out and strangle it any time I |
| UUCP: ...ucbvax!ucbernie!maddox  | want!"                         - Me      |

rhorn@infinet.UUCP (10/17/87)

In article <21130@ucbvax.BERKELEY.EDU> maddox@ernie.Berkeley.EDU.UUCP (Carl Greenberg (guest)) writes:
>  Imagine suntools where you just think at each
>window to come forward, close, et cetera..  (I'm still slobbering over the
>Sun, given that I usually work with 80286's..)  If anyone has any more recent
>data than this July, post!
>
You've never played ground support games in your local helicopter
gunship :-).  This is an old established technology.  Watch the
eyeballs with a laser to establish where you are looking and integrate
with the fire control system.  Want to zap something?  Look at it
and twitch the correct finger.  Need to track in a missile?  Keep
looking at it and hold down the tracking trigger.  SQUID's would avoid
needing a trigger finger, but people are very comfortable with
triggers and finger control is a lot more reliable than SQUID
detection.

-- 
				Rob  Horn
	UUCP:	...harvard!adelie!infinet!rhorn
	Snail:	Infinet,  40 High St., North Andover, MA
	(Note: harvard!infinet path is in maps but not working yet)

bart@speedy.UUCP (10/18/87)

In article <977@infinet.UUCP> rhorn@infinet.UUCP (Rob Horn) writes:

    >                                           SQUID's would avoid
    >needing a trigger finger, but people are very comfortable with
    >triggers and finger control is a lot more reliable than SQUID
    >detection.

If you've ever examined the cyclic (control stick) or collective (RPM
control and throttle) of a helicopter, especially a gunship, you will
notice that it has more buttons than a frog has warts.  And if you have
to look at the labels to figure out which is which, it's probably too
late.

One rule in flying heli-choppers is to NEVER take your hands off the
controls unnecessarily.  So, the pilot is given a large number of controls
at his/her finger tips.

Now weapons usually have trigger-like buttons and other devices are located
more for thumb control.  But hitting the wrong button does happen...
"Damn, I just wanted to say `roger', but I just launched an air-to-air
at the control tower.  I'm so embarrassed".

So, non-mechanical controls are much desired.  Though a good, passionate
daydream could have all sorts of interesting side effects.

						--bart miller
						  uw-madison cs dept
						  bart@cs.wisc.edu
						  ...!uwvax!bart