[net.news] The cost of moderating satellite news

robison@eosp1.UUCP (Tobias D. Robison) (12/26/84)

This memo discusses the cost of moderating most news, and a
possible way (requiring "AI" software?) to avoid human moderation.

Assuming that sites receive news via satellite, and feed
news articles generated at their site via the existing news
network, there will be great savings in use of that network if
we establish a system of "roots" feeding from small capillaries
up to trunk feeder lines to the site(s) that feed the satellite.

If it is necessary to moderate everything that goes via satellite,
there will be considerable extra costs:

(1) Just to overread everything.  Even if people donate their
services, that is a lot of donated labor.

(2) Moderation will slow the timeliness of responses, damping many
useful discussions.

(3) The access of moderators to the net has cost implications:

  - If moderators are spread all over, then news must be fed to
    them first, rather than directly to the trunk satellite feeder
    sites.  That type of feeding will be more complicated and costly.

  - If the moderators are physically spread out, but make long distance
    phone calls to moderate news at the satellite feeder sites,
    these phone calls will be expensive.

  - If moderators are required to live at the trunk feeder sites,
    so they can pre-check all news via local connections, the result
    could be elitist control of the news.

It is possible that federal laws will make moderation necessary.
But I think that most of it could be done via software!  Consider:

(0) We would continue to distinguish between groups that need a
moderator just to keep the net from sending illegal stuff, and groups
that are moderated in order to achieve filtered/summarized discussion.

(1) Human moderators will not be perfect;  they will occasionally let
something slip through that shouldn't.  Software need not be "perfect"
to replace or assist them, just very good.

(2) Software can scan for swearwords, suggestive language, expletives,
phone numbers, and credit card numbers faster than humans can (a
sketch of such a scanner follows this list).

(3) Software might be able to detect cases where hundreds of people
send similar short messages (such as "yes, my Byte magazine was
delivered late too"); a sketch of that follows as well.
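
To make (2) concrete, here is a minimal sketch of the kind of
scanner I mean.  Everything about it -- the two stand-in words, the
seven-digit threshold -- is invented for illustration, not a
proposed policy.  Feed it an article on standard input:

/* screen.c -- sketch of a mechanical article screener.
 * Flags an article containing a listed word or a long run of
 * digits (a possible phone or credit card number), and asks for
 * a human moderator; otherwise it passes the article.
 */
#include <stdio.h>
#include <ctype.h>
#include <string.h>

static const char *flagged[] = { "damn", "hell", NULL };  /* stand-ins */

static int listed(const char *w)
{
    int i;
    for (i = 0; flagged[i] != NULL; i++)
        if (strcmp(w, flagged[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    char word[128];
    int c, len = 0, digits = 0, flag = 0;

    while ((c = getchar()) != EOF) {
        if (isalpha(c)) {                  /* accumulate a word */
            if (len < (int)sizeof word - 1)
                word[len++] = tolower(c);
            digits = 0;
        } else {
            word[len] = '\0';
            if (len > 0 && listed(word))
                flag = 1;
            len = 0;
            if (isdigit(c)) {
                /* '-' and ' ' below keep a run alive, so that
                 * 555-123-4567 counts as ten digits */
                if (++digits >= 7)
                    flag = 1;
            } else if (c != '-' && c != ' ')
                digits = 0;
        }
    }
    word[len] = '\0';                      /* check the final word */
    if (len > 0 && listed(word))
        flag = 1;
    puts(flag ? "HOLD FOR A MODERATOR" : "PASS");
    return flag;
}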

A reasonable procedure would be for human moderators to read anything
caught by the software checkers, and to let the rest go through.
Obviously, while a procedure like this was being installed, everything
would be overread for a while.
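
As for point (3) above, detecting the flood of similar messages
might be nothing fancier than hashing a normalized copy of each
short article and counting collisions.  Another sketch, where each
input line stands in for one article body and the threshold of five
is arbitrary:

/* metoo.c -- sketch of catching floods of near-identical messages.
 * Each input line stands in for one article body.  Articles whose
 * normalized text has already been seen THRESHOLD times are flagged.
 * Unrelated articles can collide in a table this small and be
 * flagged wrongly -- the screen need not be perfect, only cautious.
 */
#include <stdio.h>
#include <ctype.h>

#define TABSIZE   4096
#define THRESHOLD 5

static int count[TABSIZE];

/* hash ignoring case, spacing, and punctuation, so that trivially
 * reworded copies of "my Byte was late too" still collide */
static unsigned hash(const char *s)
{
    unsigned h = 0;
    for (; *s; s++)
        if (isalnum((unsigned char)*s))
            h = h * 31 + (unsigned)tolower((unsigned char)*s);
    return h % TABSIZE;
}

int main(void)
{
    char line[1024];
    while (fgets(line, sizeof line, stdin) != NULL) {
        unsigned h = hash(line);
        if (++count[h] > THRESHOLD)
            printf("FLOOD?  %s", line);
        else
            printf("PASS    %s", line);
    }
    return 0;
}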

Analyzing news to detect inappropriate material is an interesting
challenge for the AI community, but I think it would not be that
difficult to do fairly well.

  - Toby Robison (not Robinson!)
  {allegra, decvax!ittvax, fisher, princeton}!eosp1!robison

steiny@scc.UUCP (Don Steiny) (12/28/84)

	Maybe we could get the National Security Agency's list
of key words that they look for when monitoring phone conversations.
Anything that was flagged could be checked to keep the net out
of trouble.
-- 
scc!steiny
Don Steiny - Personetics @ (408) 425-0382
109 Torrey Pine Terr.
Santa Cruz, Calif. 95060
ihnp4!pesnta  -\
fortune!idsvax -> scc!steiny
ucbvax!twg    -/

lauren@vortex.UUCP (Lauren Weinstein) (12/28/84)

I don't consider software to be adequate or desirable for netnews
screening.  From the standpoint of avoiding the transmission of
libelous, copyrighted, or otherwise unsuitable materials, no
software could be designed that would handle such tasks except in the
most obvious of cases.  Such software could also be easily
circumvented through techniques that should be obvious to all of us.

From a legal standpoint, even if human moderators occasionally
let things slip through, we would at least have shown we made 
a good faith effort to do things right if we had people watching
over the material.  If we had some silly software doing it, any
court would laugh itself sick over the premise that THAT, given the
state of the art, represented any real sort of screening.

Apart from screening for unsuitable materials, it is my hope that the
groups sent by satellite will eventually represent a better quality
of material.  And just as the editor of Time magazine doesn't
publish every piece of material that crosses his desk or that people
send in, this service doesn't need to either.  In fact, nobody
would read Time if he did.  This service is not to REPLACE Usenet,
but rather is to provide an alternative for people who do not have
the time, inclination, or money to handle the ever-increasing
volume of calls (which will get far worse as the net grows) with
a smaller and smaller percentage of messages representing useful
information to them.  People who want to carry on their rapid-fire
discussions in such groups as net.religion and net.singles can go
ahead -- but there are quite a few people who could live quite
nicely without those groups (and some other groups like them)
and would really like to spend their time reading material with
a higher percentage of usefulness.  The idea is to give these
people a choice -- the full, growing dialup network for those
who want it (sort of analogous to standing at a sewer outfall),
and something a little more controlled and filtered for people
who can't afford the time or money to wade through all that.

One point is certainly true -- careful consideration must be
given to the flow paths toward Stargate to avoid undesirable
delays.  However, my own concept is that most of these materials
would be MAILED directly to the moderator, not passed slowly
through the netnews links.  The current experimental model does
not represent the long-term picture that would be necessary to
make things really work.  Also, it would seem reasonable that,
ultimately, moderators/screeners/editors would be compensated
in some way for their time.  I don't think a nationwide news
broadcasting service can operate totally on volunteer labor
forever!

Remember, what you see right now is an experiment, not
the shape of any possible future production system.

--Lauren--

tim@cmu-cs-k.ARPA (Tim Maroney) (12/29/84)

What is this nonsense about screening out "swear words" from satellite news?
I doubt that the law requires this, considering that uncensored movies are
transmitted via satellite all the time.  Let's not introduce such juvenile
foolishness into the news system unless the law mandates it.

There is no reason to think that an article containing words some consider
"obscene" could not be well worth reading.
-=-
Tim Maroney, Carnegie-Mellon University Computation Center
ARPA:	Tim.Maroney@CMU-CS-K	uucp:	seismo!cmu-cs-k!tim
CompuServe:	74176,1360	audio:	shout "Hey, Tim!"

"Remember all ye that existence is pure joy; that all the sorrows are
but as shadows; they pass & are done; but there is that which remains."
Liber AL, II:9.

robison@eosp1.UUCP (Tobias D. Robison) (01/04/85)

(10-line quote at end)

The suggestion to screen net articles for obscene words
comes from me, and is part of a larger, more
interesting problem that may be unsolvable at
the present time.  I still think it is worth research.
Behind my argument lie these assumptions:

(1) In the future, moderation to avoid legal
liability is inevitable.

(2) Moderation will slow the flow of news
and should be avoided wherever possible.

From a new perspective:

Imagine that you are about to submit an article to the
future net.  You may write about anything you please,
but you know that any article that might conceivably
be libellous or illegal will be scanned by a human
moderator.  Your article will be screened by a
computer program to determine whether moderation
is necessary.  For the sake of this discussion
I assume that a moderator never edits your text,
but simply determines whether it is legally safe to
broadcast it.  You can write about anything you like,
but you have two choices:

   (1) Write an article that certainly deserves to
   pass the computer screening.  It will be posted
   to the net relatively quickly.

   (2) Write an article including anything you like.
   You will accept the delay required for human
   over-reading.

In the specific case of Tim Maroney's concern, you
may include obscene language if you feel this
is appropriate, but of course your note will be
screened by a moderator.

The PROBLEM is to write software that can distinguish
between the two types of articles as accurately as
a human reader.  Bear in mind that a human
reader will not be perfect either.

The program that does the screening should be very
conservative in what it will pass.
Most of its algorithm should be public knowledge.
The algorithm will simply establish a style
that is acceptable for quick-distribution notes.
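
To show the shape I intend, rather than the actual checks: in the
skeleton below, an article is posted quickly only if every test
passes, and any doubt sends it to a human.  The two tests are crude
placeholders for the public "quick-distribution style" rules.

/* quickpass.c -- skeleton of a deliberately conservative screen.
 * An article is posted quickly only if EVERY check passes; any
 * doubt sends it to a human moderator.  The two checks below are
 * placeholder rules, not a worked-out policy.
 */
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* rule 1: no runs of seven or more digits (phone or card numbers) */
static int no_number_strings(const char *t)
{
    int run = 0;
    for (; *t; t++) {
        run = isdigit((unsigned char)*t) ? run + 1 : 0;
        if (run >= 7)
            return 0;
    }
    return 1;
}

/* rule 2: nothing that looks like quoting a named person; "said"
 * also rejects "aforesaid" -- over-caution is the point here */
static int no_attributed_quotes(const char *t)
{
    return strstr(t, "writes:") == NULL && strstr(t, "said") == NULL;
}

int main(void)
{
    static char text[65536];
    size_t n = fread(text, 1, sizeof text - 1, stdin);
    text[n] = '\0';

    if (no_number_strings(text) && no_attributed_quotes(text)) {
        puts("PASS: quick distribution");
        return 0;
    }
    puts("HOLD: human moderation");    /* the conservative default */
    return 1;
}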

Now while someone (I hope) thinks about the AI
implications of this screening algorithm,
I invite net.games.pbm subscribers to propose
pathological cases that will defeat it;  that is,
how easy would it be to write a nasty, scurrilous
note that would sneak past the software screen?
If such notes are very hard to write, the
existence of software screening in the future
can greatly reduce our reliance on
human moderation.

  - Toby Robison (not Robinson!)
  {allegra, decvax!ittvax, fisher, princeton}!eosp1!robison

In article <20980040@cmu-cs-k.ARPA> tim@cmu-cs-k.ARPA
(Tim Maroney) writes:
>What is this nonsense about screening out "swear words"
>from satellite news?
>I doubt that the law requires this, considering that
>uncensored movies are
>transmitted via satellite all the time.
>Let's not introduce such juvenile
>foolishness into the news system unless the law mandates it.

tim@cmu-cs-k.ARPA (Tim Maroney) (01/05/85)

Toby Robison is interesting as usual, but I feel the point of my concern has
not been addressed directly.  Is there some legal requirement that
satellite-broadcast USENET messages not contain words which some people call
"obscene"?  If not, then I strongly suggest that such words not be used as a
criterion for rejection of an article by satellite article screeners.

This objection is not made on personal grounds -- anyone who follows my
messages knows I very rarely use such words myself (since there are usually
more expressive ways to communicate).  The objection is that ANY unnecessary
censorship is to be avoided at all costs, and this should be considered a
general ethical principle.
-=-
Tim Maroney, Carnegie-Mellon University Computation Center
ARPA:	Tim.Maroney@CMU-CS-K	uucp:	seismo!cmu-cs-k!tim
CompuServe:	74176,1360	audio:	shout "Hey, Tim!"

"Remember all ye that existence is pure joy; that all the sorrows are
but as shadows; they pass & are done; but there is that which remains."
Liber AL, II:9.

mark@cbosgd.UUCP (Mark Horton) (01/06/85)

In article <20980049@cmu-cs-k.ARPA> tim@cmu-cs-k.ARPA (Tim Maroney) writes:
>Toby Robison is interesting as usual, but I feel the point of my concern has
>not been addressed directly.  Is there some legal requirement that
>satellite-broadcast USENET messages not contain words which some people call
>"obscene"?  If not, then I strongly suggest that such words not be used as a
>criterion for rejection of an article by satellite article screeners.

Obscene words are not the issue.  The problem is another type of message:
the one that encourages and assists illegal behavior.  Such as
	A working telephone credit card number is xxx-xxx-xxxx.
	Have fun!
or
	I have proof that <insert name of person> has embezzled
	large sums of money from <insert name of company>.

In cases like this, someone gets hurt.  That someone could be looking
for someone to sue, and the company with the transmission facility is
an obvious target.  The recent bboard case, where the computer on which
the bboard resided was confiscated, sets a precedent.  We have to take
whatever measures we reasonably can to prevent such things from happening.
I understand that the company in question has specifically insisted that
everything they broadcast be screened.

I can't imagine how an AI program could be expected to detect something
like this.  Besides, if such a program were put into place, it would have
bugs that would quickly become well known, and it would become easy to
fool it.
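
(The digit strings themselves are the tractable half, by the way:
bank card numbers carry a Luhn check digit, so a program can at
least tell a plausible card number from a random string of digits.
A sketch follows; whether telephone credit cards use the same check
is more than I know, and, as I said, any such test would be trivial
to evade.)

/* luhn.c -- card-number plausibility test (illustration only).
 * Bank card numbers carry a Luhn check digit; random digit strings
 * fail it nine times out of ten.  Knowing what a message MEANS by
 * the number is, of course, another matter entirely.
 */
#include <stdio.h>
#include <string.h>

/* returns 1 if the digit string s passes the Luhn checksum */
static int luhn_ok(const char *s)
{
    int sum = 0, dbl = 0, i;
    for (i = (int)strlen(s) - 1; i >= 0; i--) {
        int d = s[i] - '0';
        if (d < 0 || d > 9)
            return 0;              /* not all digits */
        if (dbl) {
            d *= 2;
            if (d > 9)
                d -= 9;
        }
        sum += d;
        dbl = !dbl;
    }
    return sum % 10 == 0;
}

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++)
        printf("%s: %s\n", argv[i],
               luhn_ok(argv[i]) ? "plausible card number"
                                : "fails the checksum");
    return 0;
}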

	Mark Horton

kay@flame.UUCP (Kay Dekker) (01/07/85)


>.........  You may write about anything you please,
>but you know that any article that might conceivably
>be libellous or illegal will be scanned by a human
>moderator.  Your article will be screened by a
>computer program to determine whether moderation
>is necessary.  For the sake of this discussion
>I assume that a moderator never edits your text,
>but simply determines whether it is legally safe to
>broadcast it.  

Excuse me, but I think there may be a problem here.  Both obscenity and
libellousness are rather difficult to screen for.
1) According to English law, 'obscene' is defined as 'having a tendency to
deprave and corrupt'.  This is extremely knotty: the 'Lady Chatterley' and
'OZ' cases illustrate this.
2) There are cases where seemingly libellous material may in fact not be so:
for example, if the publication is 'in the public interest', or is 'fair
comment'.

I cannot see software (or even moderators) being able to screen articles for
'obscenity' or 'libellousness': it has taken juries many days to argue over
these points.

Furthermore, I gather that the laws which govern permissible public utterances
vary wildly between countries.  The screening rules must then have knowledge
of the different regulations that apply in the various countries into which
net contents enter.  For example, in England we have a law which makes
'Blasphemous Libel' illegal.  Prosecutions for this offence are extremely
rare:  it was
last trundled out in 197[67] by our protector of public propriety, Mrs. Mary
Whitehouse.  She was offended by a poem by James Kirkup, "The love that dares 
to speak its name", which appeared in the British gay newspaper, "Gay News".
The prosecution was successful: the paper and its editor were fined heavily,
and the editor was given a suspended prison sentence.

How many other archaic laws and regulations would this screening software have
to know about?

							Kay.
-- 
"But what we need to know is, do people want nasally-insertable computers?"
			... mcvax!ukc!flame!kay

rrizzo@bbncca.ARPA (Ron Rizzo) (01/10/85)

Screened by a computer program to decide if a human moderator (= censor)
is required?  Either the prudes are light-years ahead of us in AI, or
they'll simply force us to use euphemisms to frustrate searches for
"keywords" (locutions like "the love that dare not speak its name" for
homosexuality); or perhaps they'll dump The Quean's Vernacular into their
database, accelerating the creation of new slanguage.

The only way to defeat such a counterreaction is to proscribe entire
classes of nouns, verbs, etc. (OED goes into the database).  This is
precisely what happens in Orwell's 1984.  

				Nicefeels doublegood,
				Ron Rizzo (This ISN'T my real name!)

pgp@hou2h.UUCP (P.PALMER) (01/11/85)

  I think this discussion should be moved IMMEDIATELY to something like
  net.security.  The whole idea of "moderating", which is a gross euphemism
  for censoring, is obnoxious (and unlikely to be accepted) anyway.

smh@mit-eddie.UUCP (Steven M. Haflich) (01/12/85)

Have I missed something?  The ongoing discussion on detecting `libelous'
postings addresses only certain kinds of libel -- scurrilous or obscene
descriptions of persons with defamatory intent -- but entirely misses
the kinds of libel rather more likely in this environment.

Suppose I were to write:
	In his recent posting, Toby Robinson (not Robison!) wrote:
		I feel the future of AI programming lies in assembly
		language, since only by using assembly language can
		the careful programmer attain those important last
		few percent of available machine performance, so
		important to successful AI applications.  I would
		not work for any company that insisted on my writing
		code in inefficient languages like Lisp or Prolog.
	I cannot agree with Toby on this point. ...

Note that my `posting' is about a valid technical subject and is written
in neutral terms of the technical field.  Unless the fictional Robinson
had actually made such a statement, such a posting would (I believe) be
libel.  With flagrant disregard for the truth, it clearly damages
Robinson's reputation and presumably could also damage his employment
opportunities.  It is *not* necessary for me to claim someone practices
nonconsensual sex with laser printers in order to libel him.  He would
have legal recourse against me and my employer.

It might be possible, I suppose, for the automatic censor to verify
quoted inclusions against the article database, but what about:
	At the recent SIGAI meeting in Nepal Toby Robinson (not
	Robison!) told me he felt the future of AI ... ... ...
	I cannot agree with Toby on this point. ...

There is no way for a machine to verify this one.  If the automatic
censor must kick out any quoted or paraphrased citation for review by a
human, almost *everything* will have to be reviewed!  So why bother?
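
For the record, the verifiable half might look like the sketch
below: hash every line the net has carried into a table, and kick
out any ">" line that is not in it.  Everything here -- the
normalization, the fixed-size table, pretending argv[1] holds the
whole archive -- is invented for illustration, and of course it
does nothing at all about a paraphrase.

/* quotecheck.c -- sketch of verifying quoted inclusions against an
 * archive.  Lines of the archive file (argv[1]) go into a hash
 * table; quoted (">") lines of the article on stdin are then looked
 * up.  A collision can wrongly "verify" a fabricated quote, and a
 * paraphrase sails through untouched -- which is the point above.
 */
#include <stdio.h>
#include <ctype.h>

#define TABSIZE 65536
static unsigned char seen[TABSIZE];

/* hash ignoring case, spacing, and punctuation */
static unsigned hash(const char *s)
{
    unsigned h = 0;
    for (; *s; s++)
        if (isalnum((unsigned char)*s))
            h = h * 31 + (unsigned)tolower((unsigned char)*s);
    return h % TABSIZE;
}

int main(int argc, char **argv)
{
    char line[1024];
    FILE *arch;

    if (argc != 2 || (arch = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: quotecheck archive-file < article\n");
        return 2;
    }
    while (fgets(line, sizeof line, arch) != NULL)
        seen[hash(line)] = 1;
    fclose(arch);

    while (fgets(line, sizeof line, stdin) != NULL)
        if (line[0] == '>' && !seen[hash(line + 1)])
            printf("UNVERIFIED QUOTE: %s", line);
    return 0;
}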

Steve Haflich, MIT

sdyer@bbncca.ARPA (Steve Dyer) (01/13/85)

Please remove the newsgroup reference to net.motss on any subsequent
discussion of this topic.  It's hard to imagine a less appropriate
newsgroup.
-- 
/Steve Dyer
{decvax,linus,ima,ihnp4}!bbncca!sdyer
sdyer@bbncca.ARPA