[net.games.pbm] The cost of moderating satellite News

robison@eosp1.UUCP (Tobias D. Robison) (01/04/85)

(10-line quote at end)

The suggestion to screen net software for obscene words
comes from me, and is part of a larger, more
interesting problem that may be unsolvable at
the present time.  I still think it is worth research.
Behind my argument lie these assumptions:

(1) In the future, moderation to avoid legal
liability is inevitable.

(2) Moderation will slow the flow of news
and should be avoided wherever possible.

From a new perspective:

Imagine that you are about to submit an article to the
future net.  You may write about anything you please,
but you know that any article that might conceivably
be libellous or illegal will be scanned by a human
moderator.  Your article will be screened by a
computer program to determine whether moderation
is necessary.  For the sake of this discussion
I assume that a moderator never edits your text,
but simply determines whether it is legally safe to
broadcast it.  You can write about anything you like,
but you have two choices:

   (1) Write an article that certainly deserves to
   pass the computer screening.  It will be posted
   to the net relatively quickly.

   (2) Write an article including anything you like.
You will accept the delay required for human
   over-reading.

In the specific case of Tim Maroney's concern, you
may include obscene language if you feel this
is appropriate, but of course your note will be
screened by a moderator.

The PROBLEM is to write software that can distinguish
between the two types of articles as accurately as
a human reader.  Bear in mind that a human
reader will not be perfect either.

The program that does the screening should be very
conservative in what it will pass.
Most of its algorithm should be public knowledge.
The algorithm will simply establish a style
that is acceptable for quick-distribution-notes.
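As a rough illustration of what such a conservative screen might look like, here is a minimal sketch.  The word list and the obfuscation check are purely hypothetical stand-ins, not part of any actual proposal; a real screen would publish a much larger list, per the public-algorithm suggestion above.

```python
import re

# Illustrative deny-list only -- stand-ins for genuinely
# objectionable terms, chosen here to keep the example printable.
FLAGGED = {"damn", "hell"}

def needs_moderation(article: str) -> bool:
    """Return True if the article should be routed to a human moderator.

    The screen is deliberately conservative: any hit on the list,
    or any digit mixed into a word (possible obfuscation such as
    "d4mn"), triggers human review.
    """
    for token in re.findall(r"[a-z0-9']+", article.lower()):
        letters_only = re.sub(r"[^a-z]", "", token)
        if letters_only in FLAGGED:
            return True
        # A digit adjacent to a letter inside one token may be an
        # attempt to sneak a word past the screen; flag it anyway.
        if re.search(r"[a-z][0-9]|[0-9][a-z]", token):
            return True
    return False
```

An article that trips either rule waits for a human; everything else goes out quickly, which is exactly the two-track flow described above.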

Now while someone (I hope) thinks about the AI
implications of this screening algorithm,
I invite net.games.pbm subscribers to propose
pathological cases that will defeat it; that is,
how easy would it be to write a nasty, scurrilous
note that would sneak past the software screen?
If such notes are very hard to write, the
existence of software screening in the future
can greatly reduce our reliance on
human moderation.
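One pathological case comes to mind immediately: a screen that checks one token at a time can be defeated by splitting a flagged word across a line break.  A sketch, again using the hypothetical flagged word "damn" as a printable stand-in:

```python
import re

FLAGGED = {"damn"}

def naive_screen(article: str) -> bool:
    """Token-by-token check, as a naive screen might do it."""
    return any(t in FLAGGED
               for t in re.findall(r"[a-z]+", article.lower()))

# Hyphenating across a line break splits the flagged word into
# two innocent-looking tokens, so the naive screen passes it.
plain  = "What a damn shame."
sneaky = "What a da-\nmn shame."
assert naive_screen(plain) is True
assert naive_screen(sneaky) is False  # slips past the screen
```

Defending against this would require rejoining hyphenated line breaks before scanning, and similar tricks (spacing out letters, simple substitution ciphers) suggest the screen must err toward sending doubtful text to a human, as argued above.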

  - Toby Robison (not Robinson!)
  {allegra, decvax!ittvax, fisher, princeton}!eosp1!robison

In article <20980040@cmu-cs-k.ARPA> tim@cmu-cs-k.ARPA
(Tim Maroney) writes:
>What is this nonsense about screening out "swear words"
>from satellite news?
>I doubt that the law requires this, considering that
>uncensored movies are
>transmitted via satellite all the time.
>Let's not introduce such juvenile
>foolishness into the news system unless the law mandates it.

tim@cmu-cs-k.ARPA (Tim Maroney) (01/05/85)

Toby Robison is interesting as usual, but I feel the point of my concern has
not been addressed directly.  Is there some legal requirement that
satellite-broadcast USENET messages not contain words which some people call
"obscene"?  If not, then I strongly suggest that such words not be used as a
criterion for rejection of an article by satellite article screeners.

This objection is not made on personal grounds -- anyone who follows my
messages knows I very rarely use such words myself (since there are usually
more expressive ways to communicate).  The objection is that ANY unnecessary
censorship is to be avoided at all costs, and this should be considered a
general ethical principle.
-=-
Tim Maroney, Carnegie-Mellon University Computation Center
ARPA:	Tim.Maroney@CMU-CS-K	uucp:	seismo!cmu-cs-k!tim
CompuServe:	74176,1360	audio:	shout "Hey, Tim!"

"Remember all ye that existence is pure joy; that all the sorrows are
but as shadows; they pass & are done; but there is that which remains."
Liber AL, II:9.