robison@eosp1.UUCP (Tobias D. Robison) (01/08/85)
I've received some private mail with suggested ways to trick a software moderator. Based upon this I am getting MORE hopeful that software moderation can really be done. Maybe we don't even need sophisticated AI software to do it. The following argument has some unexpected curveballs in it, so please follow carefully. We have two prerequisites:

(1) The software moderator (I'll call it "sofref") is a CONVENIENCE for people who want to avoid human moderation. It can be very restrictive. If you submit mail that doesn't follow its rules, you simply have to accept human moderation. Failure to follow sofref's rules doesn't cause you to get censored, just delayed. Some perfectly sensible types of mail could well be flunked by sofref. It doesn't have to accept all types of OK mail.

(2) Sofref tries to accept material that is not libellous. If you make your pseudo-libellous remarks sufficiently cryptic, or well-hidden, THEY ARE NOT LIBELLOUS. There is an enormous body of law devoted to failed legal actions in which printed matter avoided libel by thinly disguising its intent.

Sofref can accept mail provided it consists only of words known to it (except for the signature of the sender). To play it safe, it can flunk obscenities and aggressive words. (NOTE AGAIN, THIS DOES NOT MEAN SUCH MAIL WILL BE CENSORED BY COMPUTER, BUT ONLY THAT A HUMAN MODERATOR WILL DECIDE WHETHER IT IS ACTIONABLE OR INAPPROPRIATE TO THE NET.)

People have suggested fooling sofref by misspelling people's names, e.g.: rreeaaggaaxnx. But sofref does not have to accept words it does not recognize. Several people have suggested tricking sofref by including vertical messages, or collections of letters that spell out cursewords pictographically. But sofref can flunk ALL pictures, and it can randomly rejustify all paragraphs to ruin vertical tricks.
(If you need your message to be sent without re-justification, send it to the human moderator; this is not a new problem: telegrams used to be universally "shaped" by the telegraph company.) There may still be ways to sneak a curseword past sofref, BUT A SINGLE OBSCENITY DOES NOT MAKE WRITING ACTIONABLE! In order to be libellous, one must say something in a number of words, and say it pretty clearly.

- Toby Robison (not Robinson!)
{allegra, decvax!ittvax, fisher, princeton}!eosp1!robison
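[Editorial note: Robison's sofref policy -- accept only mail made of known, inoffensive words, and randomly rejustify paragraphs to break vertical tricks -- can be sketched in a few lines. This is a minimal, hypothetical illustration; the word lists, tokenizer, function names, and width range are all assumptions, not part of the original proposal.]

```python
import random
import re
import textwrap

def sofref(message, known_words, flagged_words):
    """Return True to auto-accept; False routes the mail to a human moderator.

    Flunking is not censorship: a flunked message is merely delayed while a
    human decides whether it is actionable or inappropriate.
    """
    words = re.findall(r"[a-z']+", message.lower())
    for w in words:
        if w in flagged_words:      # obscenities, aggressive words: play it safe
            return False
        if w not in known_words:    # unrecognized tokens like "rreeaaggaaxnx"
            return False
    return True

def rejustify(message, width=None):
    """Randomly re-fill each paragraph to ruin vertical/pictographic tricks."""
    width = width or random.randint(50, 72)
    paragraphs = message.split("\n\n")
    return "\n\n".join(textwrap.fill(" ".join(p.split()), width)
                       for p in paragraphs)
```

Note that the whole design leans on prerequisite (1): sofref may flunk perfectly sensible mail, so the unknown-word test can be as strict as it likes.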
macrakis@harvard.ARPA (Stavros Macrakis) (01/09/85)
`Software moderators' seem like a non-solution to a non-problem. What I want for bulletin boards is some better software for reading them, allowing for `reviewing'. As each user reads the news, hse types mini-comments, like: 0-9 (quality), s (scurrilous), o (obscene), h (attempt at humor), r (redundant), x (irrelevant to this group). Each user would also have a database defining hirs tastes vis-a-vis others' judgements.

I might consider most material on some group to be garbage, so I only see it if someone I trust considers it interesting. Conversely, I might expect another group's messages to be interesting until proven otherwise, and so I might exclude them only if someone I trust reviews them badly. I might want to delay judgement until a day has passed or a trusted review has come in.

Anyone want to try to implement this? I suspect this conventional programming task would produce something much more useful than some pseudo-AI pseudo-moderator.
lauren@vortex.UUCP (Lauren Weinstein) (01/09/85)
I don't have any intention of letting software take the place of human screeners in any system that I have anything to do with. All it takes is one slip and problems could result. If people are doing the screening, you can at least show that you made reasonable attempts to provide protection. If you rely on software, you are just asking to be laughed out of court. I'd be amused if someone could find a SINGLE national publication or news organization that would be willing to put material on a national network, when it was submitted anonymously by the public and only screened by software. GOOD LUCK.

The whole concept of having AI software try to detect things like even OBVIOUS libel is ridiculous in any case. I'd sure like to see the software that could detect the potential trouble in the following...

"Yes, the diode ratios are indeed negatively biased, but remember that flow control can be inactive in areas of high gain. By the way, does everyone out there know about the guy who runs the computer over at the big diode company on the net? Yeah, you know the one, the one that posted that message about skinning chipmunks to the net last week. Well, I hope you all realize that he does terrible things to young people. Yes, he has a long record of acts that would certainly make him unsuitable for employment by any company with any sense. He doesn't even really deserve to be alive. I hope his boss fires him, and nobody else will hire him. Anyway, the diode matrices can be best determined by..."

----

Now, if this had been a real message, enough was said that could result in the person being spoken about (who, even though not named, was clearly indicated in a manner that most net people could understand) getting VERY upset, especially if he lost his job as a result of the message. This is only a trivial example. I submit that designing messages that could bypass automatic non-human screening would be exceedingly trivial in nearly all cases, given the current state of the art.
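[Editorial note: Lauren's point is easy to demonstrate mechanically. Every individual word in the sample message is ordinary, inoffensive English, so a word-level screener of the kind discussed earlier in this thread waves it through. The filter and word list below are invented for illustration.]

```python
import re

BLOCKED = {"damn", "hell"}  # stand-in for any list of obscenities/aggressive words

def naive_screen(message, blocked):
    """Auto-accept unless some individually offensive word appears."""
    words = re.findall(r"[a-z]+", message.lower())
    return not any(w in blocked for w in words)

libel = ("I hope you all realize that he does terrible things to young "
         "people. He doesn't even really deserve to be alive.")

# Every word taken alone is innocuous, so the screener accepts the message;
# the libel lives entirely in the combination, which it cannot see.
assert naive_screen(libel, BLOCKED)
```

This is exactly the gap between Robison's word-list defense and Lauren's example: the former catches tokens, the latter is made of nothing but clean tokens.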
However, this discussion is purely an academic exercise in AI as far as I am concerned. So dream on...

--Lauren--
geb@cadre.UUCP (01/09/85)
Regarding the proposed censorship of net news: could someone please outline the following:

1. The need for such. I haven't seen much that is overtly offensive on the net. As far as libel goes, why not let the poster worry about that? Has there been a court case that holds the entire network responsible for libel? If not, why not show a little backbone instead of knuckling under to hypothetical threats? I'm not saying someone might not get sued, but this cringing in fear of the parasitical elements of our society (lawyers) is destroying freedom of speech.

2. The authority for such. Who has the right to censor net postings? I suppose whoever owns the hardware at a particular node MAY have the legal right to do it, but elsewhere, I think it's doubtful. Does AT&T own the net, or what?
sean@ukma.UUCP (Sean Casey) (01/12/85)
Big Brother, here we come.

Sean
liz@tove.UUCP (Liz Allen) (01/12/85)
In article <483@ukma.UUCP> sean@ukma.UUCP (Sean Casey) writes:
>Big Brother, here we come.

Would you folks relax? We're not talking about screening all net news -- only what will be coming over stargate! USENET, as it is, will continue to bring all the messages that you could possibly want...
--
-Liz Allen
Univ of Maryland, College Park MD
Usenet:  ...!seismo!umcp-cs!liz
Arpanet: liz@tove (or liz@maryland)

"This is the message we have heard from him and declare to you: God is light; in him there is no darkness at all" -- 1 John 1:5