[net.news] The politics of groups

gam@amdahl.UUCP (G A Moffett) (09/07/85)

Once a group reaches a certain size, it ceases to be practical to make
decisions by consensus (and I am claiming that ``consensus'' was the
old-style method of decision-making on Usenet).

A democracy can make decisions, but since the vote-counting is
so important, the verification of one-person-one-vote becomes
costly (as we are seeing by the 'vote fraud' discussion).

A way of bypassing these political processes, since we are admitting to
ourselves that they will not work for Usenet anyway, is to allow a
free marketplace to decide.  That is, as I said in my followup
to the ``Doomsday Cometh'' article, each site decides what it
can carry, and the aggregate of those individual decisions will
determine which direction Usenet goes.
-- 
Gordon A. Moffett		...!{ihnp4,cbosgd,hplabs}!amdahl!gam

lauren@vortex.UUCP (Lauren Weinstein) (09/08/85)

I agree that letting individual sites make decisions regarding
what groups they should carry will help the current situation.
Obviously, they have that right now, but few exercise it.

I doubt, however, that geographic distributions will be of much
use.  People want to ask their questions to, and have their 
comments heard by, the widest audience possible.  If we try
to enforce distributions on topics that aren't "naturally"
regional (like Calif. politics) I think we'll find gateways
and other mechanisms popping up to spread the stuff all over the
net.  I just don't see distributions as a long-term control
technique.  Even the topology of the net tends to cause
local distributions to have only limited value.

--Lauren--

root@bu-cs.UUCP (Barry Shein) (09/09/85)

Just a few thoughts.

First, I think Lauren was a little gloomy the day he announced the death
of the USENET.  It has certainly reached an annoying level in many ways,
but, as the old expression goes, 'the reports of my death have been
greatly exaggerated'.

Second, I think a major problem we are facing is the technology we are
using, that is, 1200 baud links.  Networking technology is progressing
quickly enough that I doubt this will be a problem for long.  A simple
thing that would help would be to get the net a little more 'star'
configured, with two sites setting up a fast link, like a 9600B leased
line, and feeding their local sites.  I think this kind of thing is not
explored enough, certainly not by me, but I will (the B.U. == Harvard
link is still 1200b).  Processors are getting so cheap that I doubt it
will be long before active sites can just absorb the cost of a
dedicated box for feeding/receiving news.  I think the current cost to
us for an AT&T UNIX/PC (nee 7300) with SYSV, 1MB mem, 20MB disk is now
about $4-5,000, and for this kind of problem it will run neck and neck
with a VAX 750.  Once we have these silent little servants sitting in
the corner taking care of things, and higher baud rates, how many
problems are left?  (I know, wading thru the stuff!)  What I really
need to do this is ethernet+tcp/ip for internal distribution (or
better, NFS, and just leave it on the little thing).

I still like some way of adding automatic feedback to a system, like
readnews somehow recording and collecting whether people are actually
reading the stuff and using that as at least a partial factor in the
worthiness of a group.
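
Something like the sketch below is what I have in mind, if only as a
minimal strawman: the log path and the log_read() hook are pure
invention, and the real work is teaching readnews to call it whenever
it actually displays an article.

    /* Sketch: append one line per article actually read, so a
     * nightly job can tally readership per group.  The log path
     * and record format here are hypothetical. */
    #include <stdio.h>

    #define READLOG "/usr/spool/news/.readlog"

    void
    log_read(group, artnum)
    char *group;
    long artnum;
    {
        FILE *fp = fopen(READLOG, "a");

        if (fp == NULL)
            return;         /* logging is strictly best-effort */
        fprintf(fp, "%s %ld\n", group, artnum);
        (void) fclose(fp);
    }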

Hell, what would someone deduce if I wrote a little 'find' thingy that
went around and reported the subscribed/unsubscribed newsgroups at my
sites?  What if we had those numbers for lots of sites?

I know, a list of exceptions will follow (if ya got lots of
non-programmers they probably all unsubscribed net.sources, does that
mean...)  We assume this is just info for human beings with common
sense at this point.
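
For the curious, the dumbest version of that thingy might look like the
sketch below: feed it .newsrc files and it tallies overall totals,
relying on the convention that a subscribed group ends in ':' and an
unsubscribed one in '!'.  (A real one would tally per group rather than
in total, but that's a hash table, not an evening.)

    /* Sketch: count subscribed vs. unsubscribed entries across the
     * .newsrc files named on the command line.  A .newsrc line reads
     * "group: ranges" if subscribed, "group! ranges" if not. */
    #include <stdio.h>
    #include <string.h>

    int
    main(argc, argv)
    int argc;
    char **argv;
    {
        char line[BUFSIZ], *p;
        int i, sub = 0, unsub = 0;
        FILE *fp;

        for (i = 1; i < argc; i++) {
            if ((fp = fopen(argv[i], "r")) == NULL)
                continue;                   /* unreadable; skip */
            while (fgets(line, sizeof line, fp) != NULL) {
                if ((p = strpbrk(line, ":!")) == NULL)
                    continue;
                if (*p == ':')
                    sub++;
                else
                    unsub++;
            }
            (void) fclose(fp);
        }
        printf("subscribed %d, unsubscribed %d\n", sub, unsub);
        return 0;
    }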

	-Barry Shein, Boston University

I know, talk is cheap.

lauren@vortex.UUCP (Lauren Weinstein) (09/10/85)

The little issue of "wading through the stuff" remains in any
unmoderated environment, regardless of the transmission technology.
More and more people have been dropping off of netnews, not because
they can't get ENOUGH articles, but because they get TOO MANY
articles already.

Finding the occasional gems among the repetitious and "nonsense"
messages is getting increasingly difficult and fewer and fewer
people have time to try!  As the net grows, the number of such
"noise" messages will increase.  Wait until you ask a simple
question and get 10K replies, including a range of incorrect,
correct, and harassment replies ("why did you bother asking
such a question, you dummy?")  Even assuming all the replies are
correct and/or useful (which they won't be) it's still a pain.
We're reaching a point where people are rather reluctant to post
questions, due to the flood of replies that will come pouring
in, sometimes for WEEKS!

At the main session of the last Usenix conference, I asked the
full auditorium, "How many of you find that the sheer VOLUME
of 'less than useful and/or repetitious, etc.' material on Usenet 
has become unmanageable?  Have you found yourselves thinking about 
wading through netnews as much more of a pain than a pleasure?"  The number 
of hands that went up astounded even me.  It must have been well over 90%.

It's not MORE articles we need.  It's BETTER ones!

--Lauren--

smb@ulysses.UUCP (Steven Bellovin) (09/10/85)

> Networking technology is progressing
> quickly enough that I doubt this will be a problem for long.  A simple
> thing that would help would be to get the net a little more 'star'
> configured, with two sites setting up a fast link, like a 9600B leased
> line, and feeding their local sites.  I think this kind of thing is not
> explored enough, certainly not by me, but I will (the B.U. == Harvard
> link is still 1200b).  Processors are getting so cheap that I doubt it
> will be long before active sites can just absorb the cost of a
> dedicated box for feeding/receiving news.  I think the current cost to
> us for an AT&T UNIX/PC (nee 7300) with SYSV, 1MB mem, 20MB disk is now
> about $4-5,000, and for this kind of problem it will run neck and neck
> with a VAX 750.

I'm afraid you're grossly underestimating the CPU and disk throughput
requirements to put netnews on a machine.  2 of our 5 outbound links, on
a 750 used solely as a communications server (Ethernets, DMR-11s, laser
printers, etc.), are at high speed.  Guess what -- one uuxqt running rnews
and there are no cycles left.  If there are two, reading news will become
unpleasant.  Those two plus a print job will totally kill the machine.

henry@utzoo.UUCP (Henry Spencer) (09/10/85)

> ... I think a major problem we are facing is the technology we are
> using, that is, 1200 baud links. Networking technology is progressing
> quickly enough that I doubt this will be a problem for long...
> ... Processors are getting so
> cheap that I doubt it will be long before active sites can just absorb
> the cost of a dedicated box for feeding/receiving news...

I think you're forgetting something:  at many sites (mine, for example)
news basically gets a "free ride" on equipment bought for other reasons.
Their budget for news-dedicated equipment is, and will remain, precisely
zero.  Also, some of us are getting increasingly unwilling to spend more
money on providing yet more bandwidth for (say) net.politics.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

gds@mit-eddie.UUCP (Greg Skinner) (09/11/85)

All this talk about improving news transmission by using higher-speed
modems is very nice, but what people seem to be forgetting is that
higher-speed modems cost significantly more money, and companies just
aren't going to fork over that money for the sake of news transmission.
For example, if I had 9600 baud modems at my company, I'd be using them
to log in from home, and expecting them to be used by others doing the
same.

The only way I can see higher-speed modems becoming a standard item for
news sites is if news readers were asked to contribute a portion of
their salary toward the purchase and maintenance of faster modems.  As
chuq says, you get what you pay for.  I wouldn't mind paying a nominal
fee for a fast modem -- think of it like paying for cable.
-- 
Do not meddle in the affairs of wizards,
for they are subtle and quick to anger.

Greg Skinner (gregbo)
{decvax!genrad, allegra, ihnp4}!mit-eddie!gds
gds@mit-eddie.mit.edu

david@ukma.UUCP (David Herron, NPR Lover) (09/12/85)

In article <1090@ulysses.UUCP> smb@ulysses.UUCP (Steven Bellovin) writes:
>I'm afraid you're grossly underestimating the CPU and disk throughput
>requirements to put netnews on a machine.  2 of our 5 outbound links, on
>a 750 used solely as a communications server (Ethernets, DMR-11s, laser
>printers, etc.), are at high speed.  Guess what -- one uuxqt running rnews
>and there are no cycles left.  If there are two, reading news will become
>unpleasant.  Those two plus a print job will totally kill the machine.

This one is easy to fix.  Simply have rnews do the unbatching itself.
It won't be very hard (I think; I haven't really gotten into the code
with this in mind just yet), but it would save those thousands of
execs (one for every article).  And since each exec has to spend a
certain amount of time initializing itself so it can properly insert
the article, there's an even greater savings.
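
(To make the idea concrete: a batch is just articles glued together
behind "#! rnews <count>" marker lines, so one process can peel them
apart itself instead of paying an exec per article.  A rough sketch
follows; insert_article() here merely swallows the bytes, standing in
for the real per-article filing code.)

    #include <stdio.h>

    /* Stand-in for the real insertion code: consume the article's
     * <count> bytes so the next "#! rnews" marker lines up. */
    static void
    insert_article(fp, count)
    FILE *fp;
    long count;
    {
        int c;

        while (count-- > 0 && (c = getc(fp)) != EOF)
            ;
    }

    int
    main()
    {
        char line[BUFSIZ];
        long count, narticles = 0;

        while (fgets(line, sizeof line, stdin) != NULL) {
            if (sscanf(line, "#! rnews %ld", &count) != 1)
                continue;               /* not a batch marker */
            insert_article(stdin, count);
            narticles++;
        }
        fprintf(stderr, "%ld articles, one exec\n", narticles);
        return 0;
    }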

-- 
--- David Herron
--- ARPA-> ukma!david@ANL-MCS.ARPA
--- UUCP-> {ucbvax,unmvax,boulder,oddjob}!anlams!ukma!david
---        {ihnp4,decvax,ucbvax}!cbosgd!ukma!david

Hackin's in me blood.  My mother was known as Miss Hacker before she married!

drews@utrc-2at.UUCP (Drew Sullivan) (09/12/85)

> I'm afraid you're grossly underestimating the CPU and disk throughput
> requirements to put netnews on a machine.  2 of our 5 outbound links, on
> a 750 used solely as a communications server (Ethernets, DMR-11s, laser
> printers, etc.), are at high speed.  Guess what -- one uuxqt running rnews
> and there are no cycles left.  If there are two, reading news will become
> unpleasant.  Those two plus a print job will totally kill the machine.

Here at UTRC we are running two dedicated IBM ATs.  One is planned to be
a printer server with various laser printers hung off of it, and the
other is the news machine.  With about 40 megs of disk and 2 megs of
memory, I have found no problem with both vnews and rnews running at the
same time.  What is planned is to have lots of cut-down PCs connected
via a network as user machines that take/put files to the AT servers; in
this way we always have cycles to spare.  The cost of the user interface
(and hence response time) is borne by the PCs, and the backbone machines
are tuned for other requirements.  Our biggest problem now is setting up
the network.

 -- Drew.

preece@ccvaxa.UUCP (09/13/85)

> I still like some way of adding automatic feedback to a system, like
> readnews somehow recording and collecting whether people are actually
> reading the stuff and using that as at least a partial factor in the
> worthiness of a group.  /* Written  8:37 pm  Sep  8, 1985 by
> root@bu-cs.UUCP in ccvaxa:net.news */
----------
The notes system already has that capability.  It keeps statistics
on use of the files, local contributions to the files, amount of time
people spend in files, etc.

-- 
scott preece
gould/csd - urbana
ihnp4!uiucdcs!ccvaxa!preece

chuqui@nsc.UUCP (Chuq Von Rospach) (09/15/85)

In article <2167@ukma.UUCP> david@ukma.UUCP (David Herron, NPR Lover) writes:
>This one is easy to fix.  Simply have rnews do the unbatching itself.
>It won't be very hard (I think; I haven't really gotten into the code
>with this in mind just yet), but it would save those thousands of
>execs (one for every article).

I looked at this in 2.10.1 and it unfortunately is (or was; I haven't
looked to see whether 2.10.3 has really been cleaned up) non-trivial.
inews/rnews was written under the assumption that a lot of the code
would only be executed once, and it didn't translate well at all to
running multiple messages through it.

There were two major performance problems.  The first was the
multitude of fork() calls.  By default it cost you two fork() calls per
message, because someone got lazy and used a 'system()' call.  The
other, rather more insidious, was that inews did a lot of writing and
re-reading of the message.  The basic path for the data was:

	stdin -> inews -> file in /tmp -> back into inews -> files in
	/usr/spool -> back into inews -> back into /usr/spool -> linked
	into final location.

The reason for this was that everything read from stdin went into a
holding file; then the file was re-read, the header stripped and parsed,
and the message stored again.  Then inews created a new file, wrote the
new header, copied the message from the storage file to the final file,
and linked it into place.  You'd probably save a LOT of processing if
you could teach inews to simply write things once, but it was set up
this way because it was the 'easy' way to handle the headers and
things...
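
(For reference, the lazy call and the hand-rolled one look roughly like
this.  system("uux ...") forks and execs a shell, and the shell forks
again to run the command -- that's the two fork() calls per message.
Doing it by hand costs one:)

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run one command directly: a single fork(), no shell. */
    int
    run(path, arg)
    char *path, *arg;
    {
        int pid, w, status;

        if ((pid = fork()) == 0) {      /* child becomes the command */
            execl(path, path, arg, (char *)0);
            _exit(127);                 /* exec failed */
        }
        if (pid == -1)
            return -1;
        while ((w = wait(&status)) != pid && w != -1)
            ;                           /* parent reaps the child */
        return status;
    }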

chuq

-- 
Chuq Von Rospach nsc!chuqui@decwrl.ARPA {decwrl,hplabs,ihnp4}!nsc!chuqui

An uninformed opinion is no opinion at all. If you don't know what you're
talking about, please try to do it quietly.

root@bu-cs.UUCP (Barry Shein) (09/20/85)

[have patience folks, if the first part bores you I get completely
 sidetracked fast enough]

> I'm afraid you're grossly underestimating the CPU and disk throughput
> requirements to put netnews on a machine.  2 of our 5 outbound links, on
> a 750 used solely as a communications server (Ethernets, DMR-11s, laser
> printers, etc.), are at high speed.  Guess what -- one uuxqt running rnews
> and there are no cycles left.  If there are two, reading news will become
> unpleasant.  Those two plus a print job will totally kill the machine.

(* This was in response to my suggestion to commit an inexpensive box as
a usenet server at your site, possibly combined with a faster transport,
as a way to alleviate one aspect of the news problem -- a small one
admittedly, but one that comes up frequently *)

No, I am the one who is afraid (:-)

I am afraid you are grossly confusing price with cpu power by posing your
750 as an example.

These days it is nearly impossible to spend more than a few thousand
dollars and get anything much slower than a 750.

My ~$5,000 AT&T 7300 (UNIX/SYSV) apparently will run neck and neck on
your favorite benchmarks with your $200,000 750* (I own a 750 also;
without formal benchmarks of my own, I believe the ones that show this,
just from using each.)  And *THE POINT*: for $5,000 I can use it as an
intelligent USENET modem without much justification (I suspect you use
your 750 for much more than a usenet device.)  Even if you believe my
7300 is, what, 25% slower, 40% slower, my argument is still not as
unreasonable as you seem to think (and it ain't that much slower.)

Wake up: you, like me, own a curious paperweight of a past age.  Sell it
to a VMS user (who doesn't have much choice) at 20c or so on the dollar
and buy something 3-5X the speed (one of the new 68020 boxes with a good
winch.)  [I will, any VMS users interested?  Unlike a uvax, it has real
periphs...]

Obsolescence hurts, ouch! Hey, it was a fine box in its time...(vax.)

	-Barry Shein, Boston University

P.S. Some of my remarks are hypothetical at this point, but not without
rationale. To make this work for us I still need an ethernet interface I
think, or some such, but that probably just means a 4.2 box rather than
a SYSV box, as SYSV still doesn't support any useful high-speed
networking (soon, soon, I know.) Have I annoyed 1/2 the net yet :-)?

*NOT floating point, but I don't think that is at issue here.

P.P.S. (sorry) This is more relevant to the discussion than may meet
the eye (yes, I considered appropriateness).  A *lot* of people (me
too) are suffering bad future shock and missing some viable solutions;
a lot of people sound like the backbone sites must be 1200B PDP11/34's
with RL02's and any solution has to satisfy that configuration.  How
about CD's?  uWave links?  T1?  9600B async modems w/ error correction?
Build 'modems' with 680x0's in them?  That would let us consider more
cpu-intensive compression algorithms, more intelligence in general, and
network file systems into your news system.  I already have a version
of readnews here in test that accesses a remote /usr/spool/news via
TCP/IP; gee, it took me most of an evening to get working!  It's not
ready for distribution, don't ask yet, but Hint: I just intercept
open(), read() etc. before libc.a and look for 'machine:path' in
open(), mark the fd (fd | SOMEBIT), and send requests as a struct to a
remote daemon for read() etc. on fd's with that bit set...  You can
probably do that too, esp. if your clients and hosts are very similar
(within an ntohx() of each other.)
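
(Roughly, the client half looks like the sketch below, assuming a
4.2-style syscall() and <syscall.h>.  SOMEBIT, struct rreq, and
rmt_call() are my own inventions -- rmt_call() is stubbed out here, and
the real thing treats write(), close(), lseek() etc. the same way:)

    #include <string.h>
    #include <syscall.h>

    #define SOMEBIT 0x4000          /* flags "this fd is remote" */
    #define R_OPEN  1
    #define R_READ  2

    struct rreq {
        int     r_op;               /* R_OPEN, R_READ, ... */
        int     r_fd;
        int     r_count;
        char    r_path[256];
    };

    /* Stub: the real one ships the struct down a TCP connection to
     * the remote daemon and reads the reply (and any data) back. */
    static int
    rmt_call(rq, buf)
    struct rreq *rq;
    char *buf;
    {
        return -1;
    }

    int
    open(path, flags, mode)
    char *path;
    int flags, mode;
    {
        struct rreq rq;

        if (strchr(path, ':') != NULL) {    /* "machine:path" */
            rq.r_op = R_OPEN;
            (void) strcpy(rq.r_path, path);
            return rmt_call(&rq, (char *)0) | SOMEBIT;
        }
        return syscall(SYS_open, path, flags, mode);
    }

    int
    read(fd, buf, count)
    int fd, count;
    char *buf;
    {
        struct rreq rq;

        if (fd & SOMEBIT) {             /* remote: ask the daemon */
            rq.r_op = R_READ;
            rq.r_fd = fd & ~SOMEBIT;
            rq.r_count = count;
            return rmt_call(&rq, buf);  /* data comes back in buf */
        }
        return syscall(SYS_read, fd, buf, count);
    }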

I know: a) a lot of these suggestions have their problems (I know the
problems); b) people like Lauren W. are hard at work on just these
types of solutions (and probably better ones than I have listed).
It's just that, well, I'm an incurable technologist.  I think this
whole damn death-of-the-net discussion *should* be cross-posted to
human-nets, net.dcom, net.lan, net.ai, net.telecom and a few other
places, as that's where a lot of the problem solvers are!  This group
is becoming incredibly pessimistic when there is no need to be.

What is that quote?  Some look at what is and ask 'why?'; others dream
of what could be and ask 'why not?'  (MLK I believe, BU grad? (D.D.?))

henry@utzoo.UUCP (Henry Spencer) (09/26/85)

> ... a lot of
> people sound like the backbone sites must be 1200B PDP11/34's with
> RL02's and any solution has to satisfy that configuration.

At least one backbone site (utzoo) *is* a PDP11/44 with Fujitsu Eagles.

> How about
> CD's? uWave links?  T1? 9600B async modems w/ error correction?  build
> 'modems' with 680x0's in them...

If you pay for it, we'll use it.  But it took a damn long time just getting
this place equipped with 1200-baud modems.  I can just hear the reactions
if I ask to spend N thousand dollars (even for not-too-large N) on something
dedicated solely to netnews.  Remember that news generally gets a free ride
on equipment bought for other things -- the budget for news itself is ZERO
at many places.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry