[sci.virtual-worlds] Privacy

craig@utcs.utoronto.ca (Craig Hubley) (02/15/91)

In article <16410@milton.u.washington.edu> matth@mars.njit.edu (Matthew
Harelick) writes:
>
> Wouldn't the development of neural interfaces be dangerous? If you develop
>equipment that can "read minds", you could develop equipment that can read
>specific parts of a person's brain, thereby invading his privacy etc.

Could be.  Depends on who controls the information.  I, for one, would have
no qualms whatsoever about each and every detectable aspect of my body+head+
hand motion/voice/brainwaves/EKG/galvanic-skin-response/body-temperature/
heartbeat/bloodstream chemical content being detected and recorded by
microsensors that *I* own, *I* control, and *I* (to whatever degree is
practical) configure and program, so long as this information goes only
into, say, a Walkman I'm carrying, and is used for *my* purposes.  It goes
without saying that I don't want every room I walk into gathering
this information for me, which is why I don't believe in the "media room"
interface - not only is it non-portable, it's almost impossible to keep
private.

If the room needs to know anything about me, it can query my Walkman, which
will tell the room as much/as little as I think is necessary to configure
the room to my liking.
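That query-and-disclose idea can be sketched in a few lines of code.  This
is purely illustrative - all the names, fields, and the policy structure are
invented here, not any real protocol - but it shows the key point: the data
lives on *my* device, and an owner-configured policy decides what any given
requester actually receives.

```python
# Hypothetical sketch of the "room queries my Walkman" idea.
# The device holds the sensor readings; a disclosure policy,
# configured by the owner, filters every incoming query.

SENSOR_DATA = {
    "body_temperature": 36.8,
    "heartbeat": 72,
    "preferred_lighting": "dim",
    "brainwaves": "<raw EEG stream>",
}

# Owner-configured policy: which fields each class of requester may see.
DISCLOSURE_POLICY = {
    "room": {"preferred_lighting"},              # enough to configure the room
    "my_doctor": {"heartbeat", "body_temperature"},
}

def query(requester: str, field: str):
    """Answer a query only if the owner's policy allows it."""
    allowed = DISCLOSURE_POLICY.get(requester, set())
    if field in allowed:
        return SENSOR_DATA.get(field)
    return None  # as much/as little as the owner thinks necessary

print(query("room", "preferred_lighting"))  # -> dim
print(query("room", "brainwaves"))          # -> None: not the room's business
```

The room never sees the raw stream; it sees only what the policy releases,
and the policy is mine to change.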

If we break down the physical privacy barriers that have existed by default
due to our limited sight/hearing/memory etc., we must erect new, consensual,
electronic ones.  Already, with urine tests etc., we are being forced to
restate exactly what we mean by "privacy".

It is true that the collection of information tends, in time, to expose it.
Imagine if everyone's wristwatch recorded their pulse, for perfectly legit
reasons like telling them they had reached their aerobic target zone, or
cellular-phoning the nearest hospital if they had a heart attack.  Now imagine
that an insurance company offers very cheap life insurance to anyone proving,
via randomly transmitted pulse-checks, that they are in good cardiovascular
health.  Soon there would be two groups: those who were willing to trade
their privacy in exchange for lower premiums, and another group that
consisted of those in poor cardiovascular health plus those who refused to
give up their privacy for other reasons.  So long as that second group
remained large enough, there would be economic incentive to continue to
serve it.  However, if it gets too small, then the price of insurance for
that group goes through the roof, and the premium of privacy gets very high.
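You can work that spiral through with toy numbers (all figures invented
purely to show the mechanism, not actuarial data): as healthy privacy-holdouts
defect to the cheap monitored plan, the break-even premium for the remaining
pool climbs toward the cost of insuring only the unhealthy.

```python
# Toy illustration of the adverse-selection spiral described above.
# Assume each unhealthy person costs the insurer an expected $1000/yr
# in claims and each healthy person $100/yr.  Figures are invented.

HEALTHY_COST, UNHEALTHY_COST = 100.0, 1000.0
UNHEALTHY = 100  # people in poor cardiovascular health, stuck in the pool

def premium(privacy_holdouts: int) -> float:
    """Break-even premium for the non-monitored pool: the unhealthy
    plus however many healthy people still refuse pulse-checks."""
    pool = UNHEALTHY + privacy_holdouts
    claims = UNHEALTHY * UNHEALTHY_COST + privacy_holdouts * HEALTHY_COST
    return claims / pool

# As holdouts leave for cheap monitored insurance, premiums climb:
for holdouts in (900, 300, 50, 0):
    print(f"{holdouts:4d} holdouts -> ${premium(holdouts):7.2f}/yr")
```

With 900 healthy holdouts the premium is $190/yr; with none left it hits the
full $1000/yr - the "premium of privacy" in the most literal sense.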

Like most things, it is a question of social checks and balances, and who
has enough of an interest in the information to apply economic leverage to
dig it out.  Gathering a lot more information on people, as we do today,
simply means that we can no longer rely on default physical definitions
of privacy (e.g. I'm alone in my room, therefore what I do is private)
and have to develop some new ones.  This, in my opinion, is what groups
like EFF are trying to do, based on non-physical models of privacy rights
that already seem to work, like the Bill of Rights.

Of course, there is a lot to this topic.  That is why conferences on the
topic of privacy alone have been held, why there are discussion groups for
it on the net, etc.  I think in general people have become aware of the
problem in recent years.

  Craig Hubley   "...get rid of a man as soon as he thinks himself an expert."
  Craig Hubley & Associates------------------------------------Henry Ford Sr.
  craig@gpu.utcs.Utoronto.CA   UUNET!utai!utgpu!craig   craig@utorgpu.BITNET
  craig@gpu.utcs.toronto.EDU   {allegra,bnr-vpa,decvax}!utcsri!utgpu!craig
  28 First Avenue, Toronto, Ontario M4M 1W8 Canada     Voice: (416) 466-4097