[net.ai] The cost of moderating satellite news

robison@eosp1.UUCP (Tobias D. Robison) (01/04/85)

(10-line quote at end)

The suggestion to screen net software for obscene words
comes from me, and is part of a larger, more
interesting problem that may be unsolvable at
the present time.  I still think it is worth researching.
Behind my argument lie these assumptions:

(1) In the future, moderation to avoid legal
liability is inevitable.

(2) Moderation will slow the flow of news
and should be avoided wherever possible.

From a new perspective:

Imagine that you are about to submit an article to the
future net.  You may write about anything you please,
but you know that any article that might conceivably
be libellous or illegal will be scanned by a human
moderator.  Your article will be screened by a
computer program to determine whether moderation
is necessary.  For the sake of this discussion
I assume that a moderator never edits your text,
but simply determines whether it is legally safe to
broadcast it.  You can write about anything you like,
but you have two choices:

   (1) Write an article that certainly deserves to
   pass the computer screening.  It will be posted
   to the net relatively quickly.

   (2) Write an article including anything you like.
   You will accept the delay required for human
   over-reading.

In the specific case of Tim Maroney's concern, you
may include obscene language if you feel this
is appropriate, but of course your note will be
screened by a moderator.

The PROBLEM is to write software that can distinguish
between the two types of articles as accurately as
a human reader.  Bear in mind that a human
reader will not be perfect either.

The program that does the screening should be very
conservative in what it will pass.
Most of its algorithm should be public knowledge.
The algorithm will simply establish a style
that is acceptable for quick-distribution notes.
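
As a rough illustration, here is a minimal sketch of such a
conservative screen, written in Python.  The word list and the
flag-on-any-hit rule are my own illustrative assumptions, not a
proposal for the real algorithm:

    # Minimal sketch of a conservative article screen.  FLAG_WORDS is
    # a tiny placeholder; per the proposal, the real list and most of
    # the algorithm would be public knowledge.
    import re

    FLAG_WORDS = {"damn", "hell"}

    def needs_moderation(article: str) -> bool:
        """True if the article must wait for a human moderator.
        Conservative: any hit on the word list flags the article."""
        words = re.findall(r"[a-z']+", article.lower())
        return any(w in FLAG_WORDS for w in words)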

Now while someone (I hope) thinks about the AI
implications of this screening algorithm,
I invite net.games.pbm subscribers to propose
pathological cases on which it will fail; that is,
how easy would it be to write a nasty, scurrilous
note that would sneak past the software screen?
If such notes are very hard to write, the
existence of software screening in the future
can greatly reduce our reliance on
human moderation.
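
To start things off, two trivial pathological cases against the sketch
above: simple respelling already defeats word-list matching.

    assert needs_moderation("oh hell")          # caught as intended
    assert not needs_moderation("oh h e l l")   # spacing slips through
    assert not needs_moderation("oh he11")      # digit substitution, too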

  - Toby Robison (not Robinson!)
  {allegra, decvax!ittvax, fisher, princeton}!eosp1!robison

In article <20980040@cmu-cs-k.ARPA> tim@cmu-cs-k.ARPA
(Tim Maroney) writes:
>What is this nonsense about screening out "swear words"
>from satellite news?
>I doubt that the law requires this, considering that
>uncensored movies are
>transmitted via satellite all the time.
>Let's not introduce such juvenile
>foolishness into the news system unless the law mandates it.

tim@cmu-cs-k.ARPA (Tim Maroney) (01/05/85)

Toby Robison is interesting as usual, but I feel the point of my concern has
not been addressed directly.  Is there some legal requirement that
satellite-broadcast USENET messages not contain words which some people call
"obscene"?  If not, then I strongly suggest that such words not be used as a
criterion for rejection of an article by satellite article screeners.

This objection is not made on personal grounds -- anyone who follows my
messages knows I very rarely use such words myself (since there are usually
more expressive ways to communicate).  The objection is that ANY unnecessary
censorship is to be avoided at all costs, and this should be considered a
general ethical principle.
-=-
Tim Maroney, Carnegie-Mellon University Computation Center
ARPA:	Tim.Maroney@CMU-CS-K	uucp:	seismo!cmu-cs-k!tim
CompuServe:	74176,1360	audio:	shout "Hey, Tim!"

"Remember all ye that existence is pure joy; that all the sorrows are
but as shadows; they pass & are done; but there is that which remains."
Liber AL, II:9.

kay@flame.UUCP (Kay Dekker) (01/07/85)

>.........  You may write about anything you please,
>but you know that any article that might conceivably
>be libellous or illegal will be scanned by a human
>moderator.  Your article will be screened by a
>computer program to determine whether moderation
>is necessary.  For the sake of this discussion
>I assume that a moderator never edits your text,
>but simply determines whether it is legally safe to
>broadcast it.  

Excuse me, but I think there may be a problem here.  Both obscenity and
libellousness are rather difficult to screen for.
1) According to English law, 'obscene' is defined as 'having a tendency to
deprave and corrupt'.  This is extremely knotty: the 'Lady Chatterley' and
'OZ' cases illustrate this.
2) There are cases where seemingly-libellous material may in fact not be so.
For example, if the publication is 'in the public interest', or is 'fair
comment'.

I cannot see software (or even moderators) being able to screen articles for
'obscenity' or 'libellousness': it has taken juries many days to argue over
these points.

Furthermore, I gather that the laws which govern permissible public utterances
vary wildly between countries.  The screening rules must then have knowledge
of the different regulations that apply in the various countries into which
net-contents enter.  For example, in England, we have a law which makes illegal
'Blasphemous Libel'.  Prosecutions for this offence are extremely rare:  it was
last trundled out in 197[67] by our protector of public propriety, Mrs. Mary
Whitehouse.  She was offended by a poem by James Kirkup, "The love that dares 
to speak its name", which appeared in the British gay newspaper, "Gay News".
The prosecution was successful: the paper and its editor were fined heavily,
and the editor was given a suspended prison sentence.

How many other archaic laws and regulations would this screening software have
to know about?

							Kay.
-- 
"But what we need to know is, do people want nasally-insertable computers?"
			... mcvax!ukc!flame!kay

rrizzo@bbncca.ARPA (Ron Rizzo) (01/10/85)

Screened by a computer program to decide if a human moderator (=censor)
is required?  Either the prudes are light-years ahead of us in AI, or
they'll simply force us to use euphemisms to frustrate searches for
"keywords" (locutions like "the love that dare not speak its name" for
homosexuality); or they'll dump The Quean's Vernacular into their
database, accelerating the creation of new slanguage.

The only way to defeat such a counterreaction is to proscribe entire
classes of nouns, verbs, etc. (OED goes into the database).  This is
precisely what happens in Orwell's 1984.  
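
A self-contained toy in Python (the word set is hypothetical) shows
where that arms race ends: once the keyword database swallows whole
word classes, the screen flags nearly everything, innocent text
included.

    import re

    flag_words = {"love", "name", "speak"}   # word classes, not slurs

    def flagged(text: str) -> bool:
        return any(w in flag_words
                   for w in re.findall(r"[a-z']+", text.lower()))

    assert flagged("the love that dare not speak its name")  # the target
    assert flagged("I love assembly language")    # innocent false positive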

				Nicefeels doublegood,
				Ron Rizzo (This ISN'T my real name!)

pgp@hou2h.UUCP (P.PALMER) (01/11/85)

  I think this discussion should be moved IMMEDIATELY to something like
  net.security.  The whole idea of "moderating", which is a gross euphemism
  for censoring, is obnoxious (and unlikely to be accepted) anyway.

smh@mit-eddie.UUCP (Steven M. Haflich) (01/12/85)

Have I missed something?  The ongoing discussion on detecting `libelous'
postings addresses only certain kinds of libel -- scurrilous or obscene
descriptions of persons with defamatory intent -- but misses entirely
kinds of libel rather more likely in this environment.

Suppose I were to write:
	In his recent posting, Toby Robinson (not Robison!) wrote:
		I feel the future of AI programming lies in assembly
		language, since only by using assembly language can
		the careful programmer attain those important last
		few percent of available machine performance, so
		important to successful AI applications.  I would
		not work for any company that insisted on my writing
		code in inefficient languages like Lisp or Prolog.
	I cannot agree with Toby on this point. ...

Note that my `posting' is about a valid technical subject and is written
in neutral terms of the technical field.  Unless the fictional Robinson
had actually made such a statement, such a posting would (I believe) be
libel.  With flagrant disregard for the truth, it clearly damages
Robinson's reputation and presumably could also damage his employment
opportunities.  It is *not* necessary for me to claim someone practices
nonconsensual sex with laser printers in order to libel him.  He would
have legal recourse against me and my employer.

It might be possible, I suppose, for the automatic censor to verify
quoted inclusions against the article database, but what about:
	At the recent SIGAI meeting in Nepal Toby Robinson (not
	Robison!) told me he felt the future of AI ... ... ...
	I cannot agree with Toby on this point. ...

There is no way for a machine to verify this one.  If the automatic
censor must kick out any quoted or paraphrased citation for review by a
human, almost *everything* will have to be reviewed!  So why bother?
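
For what it is worth, here is a minimal Python sketch of that
verification step, with a hypothetical in-memory list standing in for
the real article database.  It makes the limitation plain: only a
verbatim quote can ever be confirmed, never a paraphrase.

    def quote_checks_out(quoted: str, archive: list[str]) -> bool:
        """True iff the quoted text appears verbatim in some archived
        article, after normalizing whitespace."""
        needle = " ".join(quoted.split())
        return any(needle in " ".join(a.split()) for a in archive)

    # "Told me he felt..." appears in no archived article, so every
    # paraphrased citation must be kicked to a human reviewer anyway.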

Steve Haflich, MIT

sdyer@bbncca.ARPA (Steve Dyer) (01/13/85)

Please remove the newsgroup reference to net.motss on any subsequent
discussion of this topic.  It's hard to imagine a less appropriate
newsgroup.
-- 
/Steve Dyer
{decvax,linus,ima,ihnp4}!bbncca!sdyer
sdyer@bbncca.ARPA