[net.news] AI

lauren@vortex.UUCP (Lauren Weinstein) (01/05/85)

If anyone can come up with an AI program that can determine whether
or not a piece of text contains potentially libelous or copyrighted
materials, not only would I be personally interested in it, but
I suspect that every magazine and newspaper publisher in the world would
also be ready to snap it up.  I won't be holding my breath....

Without addressing in detail here the issue of who does or does not wish
to broadcast obscene language, I will point out that the individual
publishers of any materials decide what they feel is appropriate for
their medium.  Time Magazine and Hustler are different sorts of
publications.  Presumably the people buying Hustler expect and desire
a different "type" of material from those buying Time, to say the least.

"HBO," late at night, might not mind running movies with occasionally
dirty words.  But you will not find a hard X film on HBO, and you
won't find any obscene language on Nickelodeon -- regardless of any
"theoretical" rights to run such materials.  They don't WISH to run them.

The important point, however, is that ALL broadcast materials and
publications that have mass distribution screen all input material
to meet legal, ethical, topic, and quality standards.  Does 
"Time Magazine" run every article sent in to them?  Of course not.
Does HBO (or even the "Playboy Channel") run every movie that shows
up without screening for suitability?  Not on your life.
Forgetting the practical considerations, nobody would read/watch these
services if they operated in such a manner.  Nobody would have the
time and few would have the inclination.  They'd turn into cesspools
in short order.  More cesspools in the world we don't need.

--Lauren--

P.S.  To the AI screening advocates among you, I can assure you
      that there are innumerable means to bypass any screening software,
      even if the algorithms were not known by the public.
      I was going to demonstrate some of them here, but I've
      decided not to bother since they are pretty damn obvious.

      You can say a lot to libel or injure people without
      saying anything that software could detect!  I would think
      that this would be totally obvious to everyone.  And until
      software has the cognitive powers of the human mind, human
      screening will be needed by all publications that wish to 
      maintain any sort of quality, legal considerations aside.
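
      Just to make that concrete, here is the sort of trivial keyword
      screen people seem to have in mind, and how little it takes to
      walk around it.  (A minimal sketch only -- the word list and the
      "naive_screen" routine are invented for illustration, not
      anyone's actual software.)

      import re

      # Hypothetical banned-word list; stand-ins for the real thing.
      BANNED = {"darn", "heck"}

      def naive_screen(text):
          """Accept a posting only if no banned word appears verbatim."""
          words = re.findall(r"[a-z']+", text.lower())
          return not BANNED.intersection(words)

      print(naive_screen("What the heck is going on?"))    # False: caught
      print(naive_screen("What the h*ck is going on?"))    # True: sails through
      print(naive_screen("What the h e c k is going on?")) # True: sails through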

--LW--

pritch@osu-eddie.UUCP (Norman Pritchett) (01/07/85)

> If anyone can come up with an AI program that can determine whether
> or not a piece of text contains potentially libelous or copyrighted
> materials, not only would I be personally interested in it, but
> I suspect that every magazine and newspaper publisher in the world would
> also be ready to snap it up.  I won't be holding my breath....
> 
I have a question as to how such a program would handle just simple naughty
words.  It seems that robustness in getting your article through would be a
nice trait: for example, replacing the naughty words with "D***".
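
A screen could of course try to catch starred-out words as well, but it
starts flagging innocent text almost immediately.  (A minimal sketch, purely
hypothetical -- the pattern and the "looks_masked" routine are invented for
illustration, not any real news software.)

    import re

    # Hypothetical pattern for "masked" words such as "D***": a letter
    # followed by a run of punctuation.  Illustrative only.
    MASKED = re.compile(r"\b[a-z][^\w\s]{2,}", re.IGNORECASE)

    def looks_masked(text):
        """Guess whether a posting contains a starred-out word."""
        return MASKED.search(text) is not None

    print(looks_masked("Well, D*** it all."))   # True: flagged
    print(looks_masked("Well, d@mn it all."))   # False: slips past
    print(looks_masked("See option B...?"))     # True: a false alarm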

-- 
----------------------------------------------------------------------
Norm Pritchett - The Generic Hacker
UUCP: ...!cbosgd!osu-eddie!pritch