bstempleton (12/05/82)
My points about net traffic refer not so much to the volume handled by the phone lines and software as to the volume of crap that reaches people. Mind you, fixing uucp to make it faster is a very good idea. Batching news is an incredible kludge when the real problem is in uucico. Such a fix, however, is only temporary, and we'd run into the problem again soon, especially since anybody can now get a unix box for under $20,000. Huffman encoding news and mail would save more in phone bills than anything else. Again, though, these are only stopgap fixes. Moderators are the only lasting fix I can think of.
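Templeton's Huffman suggestion is easy to put rough numbers on. A minimal sketch (in modern Python, purely illustrative; the sample text is invented, and real news traffic would compress differently):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code (symbol -> bitstring) from character frequencies."""
    freq = Counter(text)
    # Heap entries are (weight, tiebreak, tree); a tree is either a symbol
    # or a (left, right) pair.  The unique tiebreak keeps heapq from ever
    # comparing two trees directly.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix or "0"   # degenerate one-symbol alphabet
    walk(heap[0][2], "")
    return code

sample = "the quick brown fox jumps over the lazy dog " * 40
code = huffman_code(sample)
encoded_bits = sum(len(code[c]) for c in sample)
plain_bits = 8 * len(sample)
print("plain: %d bits, huffman: %d bits (%.0f%% of original)"
      % (plain_bits, encoded_bits, 100.0 * encoded_bits / plain_bits))
```

English text over 8-bit bytes typically shrinks by 40% or so under a per-character Huffman code, which over a 1200-baud phone call is a real saving.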
soreff (12/14/82)
There are options for keeping junk off the net other than using moderators. Using a moderator to maintain average article quality is roughly equivalent to giving the moderator veto power over submissions to the group. A different approach might be to require that an article be "seconded" before it can be widely distributed. This, of course, would require that some distribution to potential seconders occur with a lower level of control. One way of controlling this might be to count the number of nodes an article has gone through. An unseconded article could go only to readers on the home site or an immediately adjacent site. Any of these readers (any reader, that is, except the author) could "second" the article, allowing its unrestricted transmission to the rest of the net. The "seconding" feature would have to be added to the news programs; it would alter a flag in the copy of the article being read and would cause the article to be retransmitted. -Jeffrey Soreff hplabs!soreff
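Soreff's hop-count rule can be modeled in a few lines. The names here (Article, forward, the one-hop threshold) are assumptions for illustration, not the actual news code, which would have to carry the flag in the article header:

```python
class Article:
    def __init__(self, author):
        self.author = author
        self.hops = 0          # sites this copy has travelled through
        self.seconded = False  # set once someone other than the author approves

    def second(self, reader):
        # Any reader except the author may second an article.
        if reader != self.author:
            self.seconded = True

def forward(article):
    # An unseconded article may travel one hop (home site to an
    # immediately adjacent site); beyond that it needs a second.
    if article.seconded or article.hops < 1:
        article.hops += 1
        return True
    return False
```

A usage sketch: an unseconded article reaches the adjacent site and stops; once any non-author reader there seconds it, forwarding resumes.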
rick (12/17/82)
I *really* like the idea of requiring any article that gets more than local distribution to first obtain a 'second'. The name of the seconder should be posted as part of the header along with that of the originator. This is right in keeping with the principle of distributed control that has guided the net from the beginning. Rick Thomas houx*!u1100s!rick
sjb (12/18/82)
To what policy of distributed control are you referring? At present, there is NO central control (or any real type of control, for that matter) on USENET; and I hope it stays that way forever. Second'ing articles is just another way to slow things down, put extra load on the net, and make it more difficult to get things done. After all, most people are just going to ask their friends to second the articles and barely anything will be rejected.
martin (12/20/82)
Jeffrey Soreff's suggestion that articles be seconded before they are released to the net as a whole is the most refreshing comment on network administration to date. This network's problems are more social than technical. It is an exciting event and forum largely because it is unrestrained, decentralized, and democratic. Jeffrey's suggestion imposes minimal control on an article, while not imposing power or responsibility on an individual editor. Bravo! Perhaps the idea can be modified, such that an article need not have a seconder to be broadcast, but that the seconding be simply an attribute that an article carries. Readers who want to read only 'approved' (seconded) articles could filter out the rest. This approval procedure need not stop at one seconder; each reader could be given the right to vote + or - on any article, so that subsequent readers could filter out articles with low ratings. This flow of approval is dependent on network architecture, and therefore not a network-wide rating; but anything that helps us filter the vast volume of news would be welcome. Martin Tuori
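Tuori's reader-side filter is simple to sketch. The field names here (rating, the threshold) are invented for illustration; a real mechanism would have to carry the tally in the article header as it propagates:

```python
def vote(article, delta):
    """Record a reader's +1 or -1 rating on an article."""
    article["rating"] = article.get("rating", 0) + delta

def visible(articles, threshold=0):
    """A reader's filter: show only articles rated at or above threshold."""
    return [a for a in articles if a.get("rating", 0) >= threshold]

news = [{"title": "uucp rewrite notes"}, {"title": "yet another flame"}]
vote(news[0], +1)
vote(news[0], +1)
vote(news[1], -1)
print([a["title"] for a in visible(news, threshold=1)])  # -> ['uucp rewrite notes']
```

Note the tally is per-path, as Tuori says: an article's rating depends on which sites it has already passed through, so two readers may see different scores for the same article.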
mmt (12/20/82)
The idea of "seconded" news is acceptable only if, as Martin Tuori suggests, it is shown as a mark on the title line, not if used as a "veto by silence". If nobody at the neighbour site likes the article, this is no reason to believe that it will be disliked elsewhere. The freedom (and consequent occasional stupidity) of the news is what makes it worthwhile, and if someone wants their ignorance displayed, so be it. There is plenty of good stuff to make up for it. Mail sharing within a site might help with the person-hours spent reading the news: several readers select newsgroups of primary interest to themselves, and take responsibility for passing on interesting news to others at the site.
rick (12/21/82)
> Second'ing articles is just another way to slow things down, put extra load on the net, and make it more difficult to get things done. After all, most people are just going to ask their friends to second the articles and barely anything will be rejected.
Even if they ask their friends to second their articles for them, at least there will have been somebody else who read it before it went out on the net. And there will have been an opportunity to correct the errors of grammar and spelling (not to mention the outright lies) that occur because of ignorance or laziness.
bstempleton (12/21/82)
This +/- voting idea has some real problems, the worst of which is unconscious censorship. People may get into the habit of voting "-" on an article because they don't like the philosophy in it rather than because they don't think it belongs on the net. It's also quite hard to see how you could ever implement this. You can't have an article spend a day at each host getting votes - that's just not practical. Seconding by a local user has some merit, but has three flaws. 1) Anybody can get a friend to second it. 2) Anybody with a second userid or on a no-password system can second their own article. (If news does not get security implemented, then anybody can do it from their own account.) 3) Under all circumstances, anybody with the root password can second their own stuff. Don't laugh; it is root people who are a large portion of the problem. Sending on unseconded stuff does not reduce the net load; it just provides a better filter for what people see, and there are other ways to do this. A moderator, as I propose it, does NOT have a veto, and I can't understand why so many people think so. The legal problem is another one, though.
sjb (12/21/82)
I fail to see the point of making someone read an article before it goes out if they're just going to 'second' it anyway. Like I said, it's just going to slow things down. The people who don't want to go through the hassle are just going to forge things, and if you think forgery is bad now, just wait until then! The only point I see is that maybe there will then be 'someone else' to blame along with the author for its content. If that's the case, then maybe NOTHING will go out, since nobody wants to take the blame for someone else (NOW we get into forgery!). You're either just going to slow things down or stop things completely.
presley (12/22/82)
I don't think having a moderator read your article before allowing it on the net is the best idea; it doesn't go far enough. Each system should have a moderator (preferably self-appointed -- usually the loudest user) who will be responsible for every article which passes through his machine. As an article arrives at his machine (posted locally or from another), he will read it, correct punctuation and spelling, delete anything which is false or offends his own feelings, add his own lies and prejudices, and then allow it to leave his machine. Sites which can't agree on a moderator could uucp rabbit!~/mh-ai/jj.cpio and add to their sys entries something similar to: site:net.all::/usr/bin/jj site Flames to /dev/null, please.
trt (12/22/82)
'Seconding' an article provides a grace period in which one can have second thoughts. Besides (or perhaps instead of) seconding, one might require that articles be held on the local machine for a suitable period. Again, this gives the submitter or someone else time to reflect on just how relevant his offering is. An 'immediate broadcast' option would mollify those who consider this a violation of their civil rights. If the 'grace period' system is sufficiently painless, then no one will bother to override it. Tom Truscott
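Truscott's grace period reduces to a timestamp check in the transmission queue. A sketch, with an assumed one-day hold and invented helper names (ready_to_send, flush_queue):

```python
import time

HOLD_SECONDS = 24 * 60 * 60   # assumed grace period of one day

def ready_to_send(article, now=None):
    """An article leaves the local machine only after its grace period,
    unless the submitter asked for immediate broadcast."""
    now = time.time() if now is None else now
    if article.get("immediate"):
        return True
    return now - article["posted_at"] >= HOLD_SECONDS

def flush_queue(queue, now=None):
    """Return the articles due for transmission; the rest stay queued."""
    return [a for a in queue if ready_to_send(a, now)]
```

The 'immediate' flag is the civil-rights escape hatch; everything else simply waits out the hold on the next scheduled uucp poll.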
smk (12/22/82)
I have to reply to this. I don't want any requests I need right away to be slowed down by seconding. Leave it the way it is, but fix up those nasty net.ctl messages by checking them against the history file. Normally, I wouldn't reply to the whole newsgroup about this, but there is a tendency to go with an idea supported by 10 people. On USENET, this is NOT significant at all. With 372 sites now on the net, any major change should be supported by at least the system administrators of half of these machines. Let's not be too hasty in counting votes for changes or anything else. If by some misfortune we have to have seconding, I'm sure I would want to bypass it by either: 1. using my accts on other local machines to second my own articles, or 2. modifying news so that any article I post automatically looks like it has been seconded. There have been many good suggestions out there, but please be CAREFUL when implementing a net-wide change! --steve
mclure (03/09/83)
#R:watmath:-396000:sri-unix:8200002:000:1329
sri-unix!mclure Nov 30 22:38:00 1982
I don't like the proposed solution to the ever-increasing news overload
and its effect on Usenet sites. The proposed solution increases the
hair considerably without addressing the real problem. I think there
is a *much* better alternative for easing the load caused by news
transmission:
>>> REWRITE UUCP!!! <<<
Someone should do a complete rewrite of that code, while
at the same time considering what sort of organization will
produce a more streamlined and efficient transmission
facility for news. It would avoid creating massive spool
directories and would handle the batching of news more
intelligently. It would need fewer calls to transmit large
amounts of information by preventing the overly frequent
time-outs we see now. I could go on...
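The batching half of that wish list is concrete enough to sketch. Framing each article with a byte-count header (in the spirit of the '#! rnews <count>' convention the batched-news tools used) lets a single call and a single spool file carry many articles; the helper names here are invented for illustration:

```python
def batch(articles):
    """Pack several articles into one stream so a single uucp transfer
    (one call, one spool file) carries them all."""
    out = []
    for text in articles:
        out.append("#! rnews %d\n" % len(text))   # byte count of next article
        out.append(text)
    return "".join(out)

def unbatch(stream):
    """Split a batch back into individual articles at the receiving end."""
    articles = []
    i = 0
    while i < len(stream):
        assert stream.startswith("#! rnews ", i)
        header_end = stream.index("\n", i)
        size = int(stream[i + len("#! rnews "):header_end])
        body_start = header_end + 1
        articles.append(stream[body_start:body_start + size])
        i = body_start + size
    return articles
```

Because the framing uses explicit byte counts rather than separator lines, an article that happens to contain a '#! rnews' line of its own is carried through undamaged.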
Our site, ucbvax!menlo70!sri-unix, was recently subjected to incredibly
high loads caused by the interaction of the various uucp software, two
receiving sites, and news transmission. Fortunately we're out of
that thicket for now, but only in a hackish sort of way. We trickle
articles into inews and have terminated one of our neighbors. The
other two possibilities (news batching and uucp sub-dirs) were
considered and will probably be adopted at some point.
Of all the Unix software I've seen, uucp seems to be by far the most
poorly designed.
Stuart