[news.misc] "NNTP has had a number of very bad effects on the net..."

webber@aramis.rutgers.edu (Bob Webber) (07/04/88)

In article <4244@pasteur.Berkeley.Edu>, fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
> In the referenced article, webber@porthos.rutgers.edu (Bob Webber) writes:
> 	NNTP has had a number of very bad effects on the net, but
> 	its security is the most minor of them.
> Bob, I'd be most interested to read a complete explication of this ...

   1) It has greatly decreased the response time of the net.  This has
      encouraged the use of news for idle chat rather than using mail
      for same.  [There are already enough people who prefer news to
      mail because it is more reliable without giving them instant
      turnaround as well.]  This also decreases the usefulness of cancel.
      
   2) It messes up the economics of the net.  Cross-country communications
      costs are now no-charge for a number of sites.  Originally, when
      nntp was installed, there was talk about using the nets wisely,
      minimizing the number of times the same link would carry the
      same news; instead, what happens is that people just randomly feed
      off whatever site seems well connected.  The impact on the arpanet
      is one example of this -- although it is difficult to separate
      out all the factors leading up to the current arpanet disintegration.
      
   3) It has increased the centralization of the backbone.  Now that they
      don't have to talk over phone lines, they directly handle more of
      the ``local'' traffic.  For example, rutgers, gatech, ncar, ames,
      bellcore, mcnc, mit-eddie, purdue, ucbvax, ucsd, uw-beaver,
      ukma, att, husc6, cmc12, hplabs, decwrl, decvax all interconnect
      over no-charge communication networks.  This means that most of the
      link restrictions on the US backbone map are software.

So, to sum up, I am not criticizing nntp as software per se, but for
its social impact on the net.  Communications costs were once a major
feedback control on the flow of the net.  Removing them was not a 
clever idea.

----- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)

REASON: 
> Bob, I'd be most interested to read a complete explication of this
> remark, posted to the net, or Emailed to me; your choice. What are
> the effects that NNTP has had on the network, and which of them
> are, in your opinion, bad (and why are they bad?)?

  Since it was posted on the net and questioned on the net by more than
  one person within a day, I might as well reply on the net as well.
  The normal followup-to-news.misc maneuver was applied, with a cross-posting
  to warn them that we were on the way.

jbuck@epimass.EPI.COM (Joe Buck) (07/06/88)

Bob Webber's gripes about NNTP:
>
>   1) It has greatly decreased the response time of the net.
It's a matter of taste whether that is a problem.
>  This has encouraged the use of news for idle chat rather than using mail
>  for same.

The main reason is the unreliability of mail these days, not NNTP.
      
>   2) It messes up the economics of the net.  Cross country communications
>      costs are now no-charge for a number of sites.

That's been true all along; quite a few of the long-distance net
hops were always somebody's fixed-cost leased line.

>      The impact on the arpanet
>      being one example of this -- although it is difficult to separate
>      out all the factors leading up to the current arpanet disintegration.

The Arpanet disintegration is easy to understand: insufficient
bandwidth.  Hell, about ten PCs pumping data full blast can use up
all available bandwidth.  To the extent that mailing lists become
newsgroups, NNTP should decrease traffic.

>   3) It has increased the centralization of the backbone.

This is flat-out wrong.  It LESSENS the clout of the backbone.  Most
long distance NNTP traffic is by NON-backbone machines.  Every single
site on the official US backbone, other than UUNET (which really is vital),
could quit, and thanks to NNTP, we could rebuild the whole thing in a
couple of weeks (except for a few sparsely populated areas), since low-cost
replacements can quickly be found.  Whiny backbone admins can now
politely be told, "Thank you for all the contributions you've made in
the past, but if you can no longer tolerate the problems, the net can
get along just fine without you."

> For example, rutgers,...decwrl, decvax all interconnect
>      over non-charge communication networks.  Meaning that most of the
>      link restrictions on the US backbone map are software.

The US backbone map has almost nothing to do with the way news
actually flows within the US.
-- 
- Joe Buck  {uunet,ucbvax,pyramid,<smart-site>}!epimass.epi.com!jbuck
jbuck@epimass.epi.com	Old Arpa mailers: jbuck%epimass.epi.com@uunet.uu.net
	If you leave your fate in the hands of the gods, don't be 
	surprised if they have a few grins at your expense.	- Tom Robbins

fair@ucbarpa.Berkeley.EDU (Erik E. Fair) (07/06/88)

I concede point 1 about speed of article propagation. This was, in
fact, one of my goals. Netnews propagation across the Internet is
nearly as fast as an Internet mailing list.

You assert that this is bad, because it has increased the level of
idle chit-chat in netnews. It is not clear to me at all that the
effects are related. The ARPANET (and the Internet) have had much
faster message propagation in their mailing lists for many years,
and those mailing lists haven't suffered this effect, with the
exception of those mailing lists whose purpose is idle chit-chat
in the first place. Also, is there any particular newsgroup or
class of newsgroups that exemplify the effect you cite?

The secondary effect of people using netnews in place of mail is
not related to NNTP either; that has been going on for a very long
time, because the UUCP network doesn't have a universally distributed
reliable mail system (Thanks AT&T!). This problem has gotten worse
due mostly to the increasing size of the UUCP network, rather than
to any improvements that may have been realized in speed of netnews
propagation. Also, I don't see people on the Internet engaging in
this particular breach of network etiquette, except to reach people
on the UUCP network. I assert that this is because the Internet
has a real, working electronic mail system.

How has increased speed of article propagation reduced the
effectiveness of the cancel control message?


In the second point, I quarrel with your wording; NNTP has to some
extent *changed* the economics of the network, but I don't think
they're messed up, although I'm sure that the Europeans would say
that the U.S. USENET has had its economics screwed up since the
word "go."

I was the one who suggested that the NNTP distribution network
should be set up to strictly reflect the physical connectivity of
the ARPANET; in this way you can make the guarantee that no article
will cross a physical link more than once, modulo IMP routing
irregularities. If one were then to convert all the public Internet
mailing lists to netnews newsgroups transmitted by NNTP, there
would be a number of very interesting effects, not the least of
which would be lower aggregate ARPANET traffic.

Unfortunately, I don't have the power to declare my views as
regulation; all I can do is persuade, cajole, and jaw-bone, and
my efforts in this regard have had little effect. It seems that
netnews links are built more on personal contacts than anything
else, and most of the system administrators I know are very reluctant
to take the initiative to contact an obvious neighbor and set up
the links on a more rational basis if they don't already know the
individual on the other side. Part of this is a trust problem -
can I trust the guy on the other end to provide reliable service,
and respond promptly to problem reports?

Also, be aware that Internet links cost too, just in different
geld. The ARPANET is being partially dismantled because DARPA got
its funding cut, they apparently see themselves as a research
organization, and the ARPANET is an operational network (the last
"experiment" that I know of that used the whole ARPANET as a test
bed was the beta-test of PSN release 7, and I think we're both
cognizant of the troubles THAT caused) which was a tremendous drain
on their resources.

The single biggest reason why the ARPANET is so congested today is
actually quite easy to identify: on January 1, 1983 it converted
protocol from NCP to IP/TCP, and thus was the Internet created.
The Internet has since seen explosive growth - it is estimated that
there are between 50,000 and 100,000 hosts on the Internet today.
The vast majority of the systems on the ARPANET today are actually
gateways to other organizational networks, and the offered load
from these other networks to the ARPANET is staggering. No wonder
the ARPANET is staggering under it. It also doesn't help that the
LANs connected to the ARPANET are primarily 10Mbit/sec Ethernets,
and the ARPANET IMP-IMP trunks are all 56Kbaud DDS circuits.

Prior to the release of NNTP, a number of organizations were
exchanging netnews over the Internet, either by batching with SMTP,
or by using UUCP over IP/TCP, and the numbers were increasing.
NNTP's release accelerated the exchange of netnews over the Internet
enormously, by using a much more efficient protocol, and by solving
the problem of local netnews access for sites without distributed
filesystems or the resources to run netnews on all their systems.
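The efficiency Erik refers to comes largely from NNTP's IHAVE exchange (RFC 977): the sending site offers each article by message-ID, and the receiver accepts only the ones it has not already seen, so duplicates are refused before they cross the wire. Here is a minimal sketch of the receiving side, in Python for brevity; the function names and in-memory set are invented for illustration (real servers of the era kept an on-disk history file), though the 335/435/235 status codes are the real ones:

```python
# Sketch of the receiving side of NNTP's IHAVE exchange.  A site keeps
# a history of message-IDs it has already stored; when a peer offers an
# article ("IHAVE <msg-id>"), the receiver answers 335 (send it) only
# for unseen IDs, and 435 (not wanted) for duplicates, so a duplicate
# article is never transmitted at all.

def ihave_response(history, message_id):
    """Status line answering an IHAVE offer for message_id."""
    if message_id in history:
        return "435 article not wanted - do not send it"
    return "335 send article to be transferred"

def accept_article(history, message_id):
    """Record a completed transfer in the history."""
    history.add(message_id)
    return "235 article transferred ok"

history = set()
print(ihave_response(history, "<4244@pasteur.Berkeley.Edu>"))  # 335: unseen
accept_article(history, "<4244@pasteur.Berkeley.Edu>")
print(ihave_response(history, "<4244@pasteur.Berkeley.Edu>"))  # 435: duplicate
```

Compare UUCP batching, where whole articles are shipped first and duplicates are discarded only after arrival.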

NNTP changed the economics from one of communications cost, to one
of CPU and disk cost. The limits are still there, they're just
different for the Internet sites. NNTP hasn't changed the economics
for a UUCP site, save that they might be able, due to the relative
ubiquitousness of the Internet, to get a full netnews feed physically
closer to where they are (and thus cheaper in phone bills). These
changes would have happened anyway for all organizations that have
internal networks or access to the Internet; NNTP just speeded it up.


I didn't understand what you meant in point three, can you expand it?
Particularly the part about backbone sites handling more "local"
traffic?

The effect of NNTP on the USENET backbone should be fairly evident;
if you look at a backbone map from anytime in 1985, almost all the
links are UUCP/Phone based, with a little TCP/UUCP going on. Now,
three quarters of the backbone links are NNTP-based. In effect, the
USENET has a new backbone that is 56Kbaud or better bandwidth, with
very low message-switching time (10 minutes maximum at sites that
follow the recommendations in the NNTP release documentation), and
much more efficient in its use of transmission bandwidth between
sites (if a pure NNTP site is seeing more than 10% duplicates in
its news log, they've got a problem somewhere; typical UUCP sites
with redundant links should consider themselves lucky if they get
the aggregate duplicate rejection rate under 40%). Granted that
the new backbone may not be using the bandwidth of the underlying
network as well as it should be; this is not something I can fix.
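Erik's 10%-versus-40% comparison is just the share of arriving articles that a site's news software rejects as already-seen. A back-of-the-envelope check, with a made-up log line format (real news logs differ from site to site):

```python
# Estimate the duplicate rejection rate Erik quotes: the fraction of
# arriving articles a site's news log records as duplicates rather
# than accepts.  The log line format below is invented for the sketch.

def duplicate_rate(log_lines):
    accepted = sum(1 for line in log_lines if " accepted " in line)
    duplicates = sum(1 for line in log_lines if " duplicate " in line)
    total = accepted + duplicates
    return duplicates / total if total else 0.0

log = [
    "Jul 10 ucbvax accepted <1@a>",
    "Jul 10 rutgers duplicate <1@a>",   # same article, redundant feed
    "Jul 10 ucbvax accepted <2@b>",
    "Jul 10 ucbvax accepted <3@c>",
]
print(duplicate_rate(log))  # 0.25
```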

I don't see the point of your comment about restricted NNTP links -
the only restricted backbone link in the U.S. (to/thru hao, now
ncar) went unrestricted again when they joined the Internet and
converted to NNTP.

Some people are probably asking themselves what the Internet is
getting out of the deal. Answer: a computer conferencing system that
beats all hell out of mailing lists. When a mailing list converts
completely to a netnews newsgroup, you get:

1. reliability - no central distribution point because postings
	radiate out from their origin. Also, netnews revels in
	redundancy; in a properly set up netnews distribution
	network there are no single points of failure. Mailing lists
	cannot do this because there is no duplicate control system
	for mail.

2. simplicity of administration - once netnews is set up, you have a
	distribution channel, and new newsgroups are very easy to
	add. The software automatically notifies everyone reading
	netnews that there is a new newsgroup, and would they like
	to read it?

	No more mailing list administration - readers of a newsgroup
	can come and go without notifying anyone. Mailing list
	administrators have to be notified when users appear and
	disappear.

	No more cryptic errors for the users to interpret - did that
	last mailing get to everyone on the list? What does this
	bounce mean?
	
	centralized information access - can you find every mailing
	list on the Internet?

3. efficiency - netnews by NNTP is delivered on a per-network basis,
	not a per-host basis, which means fewer long-haul connections
	to make in distributing the messages.

The only thing we can't really give people is privacy - so private
membership mailing lists will continue to exist.


In summary, I fail to understand your objections to the effects of
NNTP, given that you are a proponent of the free flow of information.
NNTP has made it possible for the network to grow much more quickly
and painlessly than it otherwise might have, and I think it has
staved off (albeit temporarily) the coming balkanization of the
USENET. There is more information flow through USENET because of
NNTP than there would have been without it.

In some sense, by turning up the volume, I'm hoping that I've made
the problem of information overload much more obvious to the software
writers of the network, and that they will come up with some attacks
on the problem that I haven't thought of yet.

	your rebuttal, sir?

	Erik E. Fair	ucbvax!fair	fair@ucbarpa.berkeley.edu

webber@aramis.rutgers.edu (Bob Webber) (07/10/88)

In article <2263@epimass.EPI.COM>, jbuck@epimass.EPI.COM (Joe Buck) writes:
< Bob Webber's gripes about NNTP:
< <   1) It has greatly decreased the response time of the net.
< It's a matter of taste whether that is a problem.

Yes.  The whole question of what is ``bad'' and what is ``good'' could
be said to be a ``matter of taste.''  But so what?

< <  This has encouraged the use of news for idle chat rather than using mail
< <  for same.
< The main reason is the unreliability of mail these days, not NNTP.

Mail is very reliable these days.   If you send out a message and it doesn't
make it and then you resend it, it won't make it again.  What could be more
reliable?  What the net is in desperate need of is a list of ``dumb'' mail
sites that are willing to send messages where the sender asks rather
than play their own silly games.  [Incidentally, rutgers is, alas, not such
a site -- personally, I would rather that a site cut back on its connections
when it finds too much mail going thru it rather than screwing around
with the mail address (of course, this should be balanced with the
load of mail the site itself generates -- certainly any site should be
willing to handle as much mail of other peoples as it handles for its
own (counting both local receive and local send as being mail service
for ``its own.'').]

< <   2) It messes up the economics of the net.  Cross country communications
< <      costs are now no-charge for a number of sites.
< 
< That's been true all along; quite a few of the long-distance net
< hops were always somebody's fixed-cost leased line.

If I add a fixed-cost leased line to site X, I have added just one
such connection to the overall cost-flow of the net.  If I add myself
to something like nsfnet, it is equivalent to setting up hundreds of
direct connect fixed-cost leased lines simultaneously.  The difference
is orders of magnitude.

< all available bandwidth.  To the extent that mailing lists become
< newsgroups, NNTP should decrease traffic.

Hardly.  Most arpanet mailing lists are near dead because of the nuisance
of managing them (both for reader and sender).  Converting them to newsgroups
just gave them new life and greatly increased their volume.  Replacing them
with newsgroups might have been interesting at a time when newsgroup creation
was more liberal, but maintaining backward compatibility with their mail
connections has been silly and led to many misunderstandings on the net.

< <   3) It has increased the centralization of the backbone.
< This is flat-out wrong.  It LESSENS the clout of the backbone.  Most
< long distance NNTP traffic is by NON-backbone machines.  Every single
< site on the official US backbone, other than UUNET (which really is vital),
< could quit, and thanks to NNTP, we could rebuild the whole thing in a
< couple of weeks (except for a few sparsely populated areas), since low-cost
< replacements can quickly be found.  Whiny backbone admins can now
< politely be told, "Thank you for all the contributions you've made in
< the past, but if you can no longer tolerate the problems, the net can
< get along just fine without you."

This is FLAT-OUT WRONG.  The problem has NEVER been that the backbone
had any REAL clout as far as being ``needed'' for the net.  The problem
is that sites run by administrative types orient themselves behind authority
figures and invest anyone claiming to be responsible leadership with
de facto control.  NNTP traffic increases the number of such sites controlling
communications of the net.  Just at a time when individual home users could
afford to be news and mail traffickers, you have all these major institutions
saying ``Don't bother, we can use arpanet or nsfnet or what have you and
ship it all around cheaper and faster than you could ever hope to.''  


<  For example, rutgers,...decwrl, decvax all interconnect
< <      over non-charge communication networks.  Meaning that most of the
< <      link restrictions on the US backbone map are software.
< 
< The US backbone map has almost nothing to do with the way news
< actually flows within the US.

On this we agree.

------ BOB (webber@athos.rutgers.edu ; rutgers.edu!athos.rutgers.edu!webber)

< 	If you leave your fate in the hands of the gods, don't be 
< 	surprised if they have a few grins at your expense.	- Tom Robbins

Who cares that it was Tom Robbins who said it?  I WANT A FULL CITATION!
WHAT BOOK!  Who knows, if he said something as interesting as this, he
might have something else worth reading to say.

webber@aramis.rutgers.edu.UUCP (07/10/88)

In article <4277@pasteur.Berkeley.Edu>, fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
< I concede point 1 about speed of article propagation. This was, in
< fact, one of my goals. Netnews propagation across the Internet is
< nearly as fast as an Internet mailing list.
<
< You assert that this is bad, because it has increased the level of
< idle chit-chat in netnews. It is not clear to me at all that the
< effects are related. The ARPANET (and the Internet) have had much
< faster message propagation in their mailing lists for many years,
< and those mailing lists haven't suffered this effect, with the
< exception of those mailing lists whose purpose is idle chit-chat
< in the first place. 

Do you count the AI digest, RISKS, and SF-LOVERS as mailing lists whose
purpose was idle chit-chat in the first place?  For the most part, ARPANET
mailing lists are a real loser.  Generally someone has a neat idea of what
they would like to see a discussion of and a bunch of people subscribe
and then find out they are all waiting for someone else to shed wisdom.
When this fails to happen, they revert to discussing how neat it would
be if they knew something about the topic at hand.  After a few months
this generally dies down and then a year later you get a message in your
box saying ``hey is anyone out there?''

The only exceptions to this have been groups specifically dedicated to
the maintenance of some specific software, e.g., the networking
discussion lists and the gnu list.

<                                Also, is there any particular newsgroup or
< class of newsgroups that exemplify the effect you cite?

No -- the effect is rather evenly spread throughout the net.  If you supply
me with the last 5 years archives in a handful of technical groups such
as comp.arch, I will be more than glad to give you a detailed analysis
demonstrating the thesis.

< The secondary effect of people using netnews in place of mail is
< not related to NNTP either; that has been going on for a very long
< time, because the UUCP network doesn't have a universally distributed
< reliable mail system (Thanks AT&T!). 

Well, after all, they made money on both the sending and bouncing so you
really can't look to them for help.  

Six or seven years ago, mail worked very reliably.  Not all mail paths
would work, but those that did tended to continue to.  So while contacting
random people on the net might have required more effort, once contacted
it was easier to maintain the link.  Merging the UUCP network into the
ARPA network has been a mixed blessing headed toward unmixed regret of
which nntp is just one more nail in the coffin.  Sticking to the
original technology, by now UUCP could have been entirely free of
institutional connection, instead being run by unix hackers on their
home machines.

< on the UUCP network. I assert that this is because the Internet
< has a real, working electronic mail system.

Giggle.  Where have you been for the last few years as the Internet changed
the way it handled address lookup?

< How has increased speed of article propagation reduced the
< effectiveness of the cancel control message?

The farther a message gets before the cancel message is issued, the
less likely the cancel message will find all the places it went.
A lot of this has to do with most of the news flowing over slower links
while nntp increases the number of points at which the original message
might enter; hence a cancel message issued 15 minutes after the original
can end up traveling a day or more behind the original in some portions
of the net.

< In the second point, I quarrel with your wording; NNTP has to some
< extent *changed* the economics of the network, but I don't think
< they're messed up, although I'm sure that the Europeans would say
< that the U.S. USENET has had its economics screwed up since the
< word "go."

Well, the Europeans are just as messed up, so there is no problem
there.  Originally the net was small enough that although it was
``paid for'' by institutions, they seldom noticed it.  Now it has
grown big enough that it is coming more and more to the attention of
these institutions and so things are getting shakier.  The problem is
more that the original economic structure just doesn't scale up
well and that nntp has hastened the attempt to scale it up while
decreasing the ability to convert to something more solid.

< I was the one who suggested that the NNTP distribution network
< should be set up to strictly reflect the physical connectivity of
< the ARPANET; in this way you can make the guarantee that no article
< will cross a physical link more than once, modulo IMP routing
< irregularities. If one were then to convert all the public Internet
< mailing lists to netnews newsgroups transmitted by NNTP, there
< would be a number of very interesting effects, not the least of
< which would be lower aggregate ARPANET traffic.
<
< Unfortunately, I don't have the power to declare my views as
< regulation; all I can do is persuade, cajole, and jaw-bone, and
< my efforts in this regard have had little effect. It seems that
< netnews links are built more on personal contacts than anything
< else, and most of the system administrators I know are very reluctant
< to take the initiative to contact an obvious neighbor and set up
< the links on a more rational basis if they don't already know the
< individual on the other side. Part of this is a trust problem -
< can I trust the guy on the other end to provide reliable service,
< and respond promptly to problem reports?

If you had set up nntp so that it dynamically figured out who its optimal
news neighbors were instead of relying on a fixed file that an
administrator would have to continually monitor, you might have had
a chance.  But leaving it up to hundreds of new news administrators to
worry about this along with all the other problems of bringing up new
software was obviously [20-20 hindsight] too much to expect.  Of course,
once something is set up, only breaking it will get it to change.
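What Bob is asking for never existed in the NNTP of the day, but the idea is easy to sketch: each site measures how quickly candidate neighbors deliver fresh articles and keeps feeds only from the fastest few. Everything here (the host names, the lag figures, the ranking rule) is hypothetical:

```python
# Hypothetical sketch of the dynamic feed selection Bob wishes NNTP
# had shipped with: rank candidate neighbors by observed article lag
# and feed from the best few, instead of a hand-maintained fixed file.

def pick_feeds(observed_lag_minutes, keep=2):
    """Return the `keep` neighbors with the lowest measured lag."""
    ranked = sorted(observed_lag_minutes.items(), key=lambda kv: kv[1])
    return [host for host, lag in ranked[:keep]]

lags = {"ucbvax": 12, "rutgers": 45, "decwrl": 8, "att": 240}
print(pick_feeds(lags))  # ['decwrl', 'ucbvax']
```

A real version would have to re-measure continually and damp flapping between feeds, which is part of why nobody built it.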

< Also, be aware that Internet links cost too, just in different
< geld.

But not in a manner where the person using them feels that they
personally have any influence over the availability of the service.
I believe in economics this is referred to as the tragedy of the commons
(after similar problems with shared grazing pastures in the British Isles).

<...
< gateways to other organizational networks, and the offered load
< from these other networks to the ARPANET is staggering. No wonder
< the ARPANET is staggering under it. It also doesn't help that the
< LANs connected to the ARPANET are primarily 10Mbit/sec Ethernets,
< and the ARPANET IMP-IMP trunks are all 56Kbaud DDS circuits.

And to this you thought it wise to add NNTP???????????

< Prior to the release of NNTP, a number of organizations were
< exchanging netnews over the Internet, either by batching with SMTP,
< or by using UUCP over IP/TCP, and the numbers were increasing.
< NNTP's release accelerated the exchange of netnews over the Internet
< enormously, by using a much more efficient protocol, and by solving
< the problem of local netnews access for sites without distributed
< filesystems or the resources to run netnews on all their systems.

So, what is the bottom line?  Was the efficiency enough to compensate for
the extra usage or did it just dig the grave faster?

< NNTP changed the economics from one of communications cost, to one
< of CPU and disk cost. The limits are still there, they're just
< different for the Internet sites. NNTP hasn't changed the economics
< for a UUCP site, save that they might be able, due to the relative
< ubiquitousness of the Internet, to get a full netnews feed physically
< closer to where they are (and thus cheaper in phone bills). These
< changes would have happened anyway for all organizations that have
< internal networks or access to the Internet; NNTP just speeded it up.

Hardly.  The major economic significance of NNTP communications is that
it combines cheapness with institutionalism.  While it may increase the
number of readers, all of the new readers are dependent on the old
institutions because the old institutions interconnect whereas the new
readers just connect up to the old institutions.  

CPU and disk cost have ALWAYS been the bottleneck in effective usage of
the net.  Why do you think there are no archives?  Why do you think most
people can't afford to keyword search incoming news?

< I didn't understand what you meant in point three, can you expand it?
< Particularly the part about backbone sites handling more "local"
< traffic?

I think I just did above (as well as in my reply to Joe Buck).

< the aggregate duplicate rejection rate under 40%). Granted that
< the new backbone may not be using the bandwidth of the underlying
< network as well as it should be; this is not something I can fix.

Actually it is something you COULD fix, but I am sure you have other
things to do.  For that matter, it is something I COULD fix as well --
although it is not clear how many people would be interested in me
mucking around with their net connections.

< Some people are probably asking themselves what the Internet is
< getting out of the deal. Answer: a computer conferencing system that
< beats all hell out of mailing lists. When a mailing list converts
< completely to a netnews newsgroup, you get:
< 
< 1. reliability - no central distribution point because postings
< 	radiate out from their origin. Also, netnews revels in

Hmmm, I guess you haven't noticed the tendency toward moderated lists.

< 2. simplicity of administration - once netnews is set up, you have a
< 	distribution channel, and new newsgroups are very easy to
< 	add. The software automatically notifies everyone reading
< 	netnews that there is a new newsgroup, and would they like
< 	to read it?

At which point it becomes a major administrative decision.  Whereas with
a mailing list, anyone can decide at any time to ask to be added without
getting it cleared thru the site admin, who may or may not have any
interest in the topic of the particular group.

< 	No more mailing list administration - readers of a newsgroup

Granted it is easier on the mailing list maintainers to not have to
maintain a mailing list, but it is not clear that it is easier on
the random user to have to deal with their local news administrator
and those they connect to than to just send a note to the mailing list
maintainer.

< 	No more cryptic errors for the users to interpret - did that
< 	last mailing get to everyone on the list? What does this
< 	bounce mean?

Well, if they are going to be using mail, it is best they learn
sometime -- I particularly love the AT&T messages that don't include
any identification of the mail message but just tell you some uux
process failed.

< 	centralized information access - can you find every mailing
< 	list on the Internet?

Well, actually if I wanted to I could (it would be a rare mailing list
that didn't send traffic thru rutgers).

< 3. efficiency - netnews by NNTP is delivered on a per-network basis,
< 	not a per-host basis, which means fewer long-haul connections
< 	to make in distributing the messages.

Actually, the newer mail software bundles messages too,
cutting down on this problem.

< The only thing we can't really give people is privacy - so private
< membership mailing lists will continue to exist.

Oh, privacy is no problem.  You can forge messages to protect your
identity (e.g., the recent friend of Mel's who posted the sendsys
request) and you can always post encrypted messages to binary groups.
Rot13 is just the tip of the iceberg.
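For readers who haven't met it, rot13 is the trivial cipher Bob names: rotate each letter 13 places, so applying it twice restores the plaintext. A minimal version (Python's standard codecs module also ships it as the "rot_13" codec):

```python
# Rot13: shift each ASCII letter 13 places, leaving everything else
# alone.  Since 13 + 13 = 26, the function is its own inverse.
import codecs

def rot13(text):
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(rot13("NNTP"))  # AAGC
assert rot13(rot13("the tip of the iceberg")) == "the tip of the iceberg"
assert rot13("NNTP") == codecs.encode("NNTP", "rot_13")
```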

< In summary, I fail to understand your objections to the effects of
< NNTP, given that you are a proponent of the free flow of information.

Does it make more sense to you now?

< NNTP has made it possible for the network to grow much more quickly
< and painlessly than it otherwise might have, and I think it has
< staved off (albeit temporarily) the coming balkanization of the
< USENET. There is more information flow through USENET because of
< NNTP than there would have been without it.

Currently our library has a shelving problem.  In order to take in new
books they have to get rid of old books.  Similarly, from the very beginning
the net has had to throw away old information to give new information
a chance.  For the most part, this has not increased the amount of available
information, it has just made for more confusion.  If there were solid
archives and constant access and review of all this information, then
I would say sure, go ahead and increase things as much as the system will
bear; but given that any new information comes at the expense of the old
(a few private treasure collections notwithstanding), I can't.

< In some sense, by turning up the volume, I'm hoping that I've made
< the problem of information overload much more obvious to the software
< writers of the network, and that they will come up with some attacks
< on the problem that I haven't thought of yet.

I assure you it was quite obvious 7 years ago.  It is amusing that while
everyone agrees that the net exists solely on the basis of trust, here
you are purposely stress testing it.  Perhaps as your next act you should
take nntp away and see how they handle that.

< 	your rebuttal, sir?

Ditto?

---- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)

brad@looking.UUCP (Brad Templeton) (07/10/88)

I have to agree with Bob on point one.  I think I'm well connected, but my
site is here in the Northeastern end of the backbone and nowhere near NNTP.

Today, even when I read news frequently, it arrives in batches that already
contain long conversations (usually involving Matthew "Whiner") which
should have been carried on via mail.  It's annoying.

As for mail routing, I think it's bad that almost every moderately connected
site has to keep around a huge database.  I think this endless duplication
is what is bound to cause mail unreliability.

Of course it's more expensive to have a smaller number of maintained paths, but
it would be better.  UUNET would get rich as a probable top level site,
but someday people may just not bother to send mail to anybody who isn't
reliably connected.
-- 
Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473

jbuck@epimass.EPI.COM (Joe Buck) (07/11/88)

In article <Jul.9.21.22.40.1988.17447@aramis.rutgers.edu> webber@aramis.rutgers.edu (Bob Webber) writes:
>Mail is very reliable these days.   If you send out a message and it doesn't
>make it and then you resend it, it won't make it again.  What could be more
>reliable?  What the net is in desperate need of is a list of ``dumb'' mail
>sites that are willing to send messages where the sender asks rather
>than play their own silly games.

Most sites running routing mailers have them configured not to
reroute all-bang paths.  Unfortunately, you're at Rutgers, so it's
impossible for you to specify exactly where mail should go, since
Rutgers does such aggressive rerouting.  That is, even if I told you
a working path, your local mailers might decide to rewrite it for
you.  I can avoid mailing through Rutgers; you can't.  Sorry.
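
Joe's distinction here -- routers that honor explicit all-bang paths versus
aggressive rerouters like the one at Rutgers -- can be sketched roughly.
This is illustrative modern Python, not anything from an actual routing
mailer; real routers (smail, pathalias-driven sendmail configs) are far
more involved:

```python
def is_all_bang_path(addr):
    """A pure UUCP bang path (host!host!user) names every hop
    explicitly and has no '@' domain part."""
    return "!" in addr and "@" not in addr

def should_reroute(addr, aggressive=False):
    """A politely configured routing mailer leaves explicit bang
    paths alone; an aggressive one rewrites them anyway."""
    if is_all_bang_path(addr) and not aggressive:
        return False  # the sender chose the route; honor it
    return True
```

With `aggressive=True` the working path you hand the mailer gets rewritten
out from under you, which is exactly the complaint about Rutgers.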

>If I add a fixed-cost leased line to site X, I have added just one
>such connection to the overall cost-flow of the net.  If I add myself
>to something like nsfnet, it is equivalent to setting up hundreds of
>direct connect fixed-cost leased lines simultaneously.  The difference
>is orders of magnitude.

Yep.  Ain't it great?  Of course there are a lot of things that are
still variable-cost: CPU cycles, disk space.

>< <   3) It has increased the centralization of the backbone.
>< This is flat-out wrong.  It LESSENS the clout of the backbone.  ...
>
>This is FLAT-OUT WRONG.  The problem has NEVER been that the backbone
>had any REAL clout as far as being ``needed'' for the net.  The problem
>is that sites run by administrative types orient themselves behind authority
>figures and invest anyone claiming to be responsible leadership with
>de facto control.

Oh, give me a break.  Do you mean that a lot of sys admins take Gene
Spafford's list as the official one?  This was true before NNTP came
around.  Are you claiming sys admins are more willing to accept the
concept of the "backbone veto?"  Not the ones I talk to, quite the
contrary.  A lot of us are rather pissed off by some of the recent
assertions of backbone power.

>  NNTP traffic increases the number of such sites controlling
>communications of the net.  Just at a time when individual home users could
>afford to be news and mail trafficers, you have all these major institutions
>saying ``Don't bother, we can use arpanet or nsfnet or what have you and
>ship it all around cheaper and faster than you could ever hope to.''  

There are quite a few individual or small users shipping lots of news
and mail around, especially in the SF Bay Area.  And no backbone site
can stop me from setting up a news feed.  The net isn't run by "major
institutions" anyway; connections are set up and maintained by
individuals, using resources of these institutions.

>[ see .signature below ]
>Who cares that it was Tom Robbins who said it?  I WANT A FULL CITATION!

It's from "Jitterbug Perfume", a book with enough pithy .signature
quotes to keep me going for quite a while.  You'll have to locate the
page number yourself.


-- 
- Joe Buck  {uunet,ucbvax,pyramid,<smart-site>}!epimass.epi.com!jbuck
jbuck@epimass.epi.com	Old Arpa mailers: jbuck%epimass.epi.com@uunet.uu.net
	If you leave your fate in the hands of the gods, don't be 
	surprised if they have a few grins at your expense.	- Tom Robbins

faustus@ic.Berkeley.EDU (Wayne A. Christopher) (07/12/88)

It seems like NNTP has had two effects -- it has made news cheaper,
because it's more efficient and it makes it easier for news to go over
the internet, but on the other hand it's made news a lot quicker, thus
increasing volume.  The first is good, obviously, but the second is
bad.  I've posted things, seen followups from the East Coast, posted
followups to them, and seen more followups to my followups, all in one
night.  While this may make discussions a lot easier, it is very bad
for volume.

Since we don't have the old bandwidth constraints (uucp batching, etc),
I think we should create some new ones.  How about adding a
re-transmission delay into the news software, so that an article would
wait at a particular site for at least a few hours (say) before being
sent out again?  That way, the high-bandwidth newsgroups would have
discussions with delays of a few days or so, instead of an hour or
two.  We could set different delays for different newsgroups, so that
cancel messages and "timely" groups like comp.risks and *.announce
would move quickly, whereas soc.singles and comp.lang.c could move more
slowly.  For binary and source groups it wouldn't make much of a
difference, since they don't contain followups.
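
Wayne's scheme amounts to a per-group holding queue at each relay site.
A minimal sketch in modern Python (the group names and delay values below
are made up for illustration; the original software was C and any real
values would be per-site configuration):

```python
import heapq

# Hypothetical per-group holding times, in seconds.
GROUP_DELAYS = {
    "comp.risks": 0,           # "timely" groups pass straight through
    "news.announce.important": 0,
    "soc.singles": 2 * 86400,  # chatty groups held a couple of days
}
DEFAULT_DELAY = 4 * 3600       # everything else waits a few hours

class HoldingQueue:
    """Hold incoming articles until their per-group delay has elapsed,
    then release them for retransmission to neighbor sites."""

    def __init__(self):
        self._heap = []  # (release_time, article_id) pairs

    def enqueue(self, article_id, group, arrived):
        release = arrived + GROUP_DELAYS.get(group, DEFAULT_DELAY)
        heapq.heappush(self._heap, (release, article_id))

    def ready(self, now):
        """Pop and return every article whose delay has expired."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out
```

Note the delay compounds per hop, which is what produces the days-long
round trips in high-volume groups (and the objection Jerry raises below
about end-to-end transit time).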

The problem with this is that discussions would arrive at people at
widely different times, and there would be a lot more followups about
things that for most people are weeks old.

Does this sound sensible?

	Wayne

andy@carcoar.Stanford.EDU (Andy Freeman) (07/12/88)

In article <4414@pasteur.Berkeley.Edu> faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
>It seems like NNTP has had two effects -- it has made news cheaper,
>because it's more efficient and it makes it easier for news to go over
>the internet, but on the other hand it's made news a lot quicker, thus
>increasing volume.

It isn't clear that faster propagation causes more volume.  For one
thing, it decreases the interval for redundant followups, which reduces
their number.  (Of course, faster propagation only affects followups from
people who read all of the news in a group before responding to a
message, but those people aren't as rare as they seem.)  Increasing
the propagation time guarantees more followups, because people
can't see followups that haven't reached them yet.
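
Andy's argument can be made concrete with a toy model (my own
construction, not anything in the news software): readers check news at
given times, an article takes `prop_delay` to reach them, and each reader
posts a followup at their first read unless someone else's followup has
already propagated back to them.

```python
def followups_posted(check_times, prop_delay):
    """Toy model: an article posted at t=0 reaches each reader
    prop_delay later.  Each reader answers at their first read after
    that, unless an earlier answer has already propagated to them."""
    reads = sorted(max(c, prop_delay) for c in check_times)
    first = reads[0]  # the earliest reader always responds
    return sum(1 for t in reads if t == first or t < first + prop_delay)
```

With instant propagation only the first reader answers; as the delay
grows, more readers respond before any answer can reach them, which is
the redundancy Andy is describing.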

-andy
----
UUCP:  {arpa gateways, decwrl, uunet, rutgers}!polya.stanford.edu!andy
ARPA:  andy@polya.stanford.edu
(415) 329-1718/723-3088 home/cubicle

webber@aramis.rutgers.edu.UUCP (07/13/88)

In article <2293@epimass.EPI.COM>, jbuck@epimass.EPI.COM (Joe Buck) writes:
> In article <Jul.9.21.22.40.1988.17447@aramis.rutgers.edu> webber@aramis.rutgers.edu (Bob Webber) writes:
< <reliable?  What the net is in desperate need of is a list of ``dumb'' mail
< <sites that are willing to send messages where the sender asks rather
< <than play their own silly games.
< Most sites running routing mailers have them configured not to
< reroute all-bang paths.  Unfortunately, you're at Rutgers, so it's
< impossible for you to specify exactly where mail should go, since
  ^^^^^^^^^^ 
< Rutgers does such aggressive rerouting.  That is, even if I told you
< a working path, your local mailers might decide to rewrite it for
< you.  I can avoid mailing through Rutgers; you can't.  Sorry.

Actually I can.  All I have to do is bounce the message off some other 
``arpanet'' site.  However, it is not easy to know which sites are
best to bounce it off of to avoid passing through a rerouter.

< << <   3) It has increased the centralization of the backbone.
< << This is flat-out wrong.  It LESSENS the clout of the backbone.  ...
< <
< <This is FLAT-OUT WRONG.  The problem has NEVER been that the backbone
< <had any REAL clout as far as being ``needed'' for the net.  The problem
< <is that sites run by administrative types orient themselves behind authority
< <figures and invest anyone claiming to be responsible leadership with
< <de facto control.
< 
< Oh, give me a break.  Do you mean that a lot of sys admins take Gene
< Spafford's list as the official one?  This was true before NNTP came
< around.  Are you claiming sys admins are more willing to accept the
< concept of the "backbone veto?"  Not the ones I talk to, quite the
< contrary.  A lot of us are rather pissed off by some of the recent
< assertions of backbone power.

So?  Of course there will always be Usenet sites administered by people
with some understanding of the software, but with nntp ``you'' are bringing
in a lot of people who have no feel for the tradition of unix, usenet, ...

Glad you are pissed off -- now tell me about how nntp is helping you
do something about it.

< There are quite a few individual or small users shipping lots of news
< and mail around, especially in the SF Bay Area.  And no backbone site
< can stop me from setting up a news feed.  The net isn't run by "major

Actually they could do a pretty good job of it if they wanted to (and if
they still remember how the software works).

< institutions" anyway; connections are set up and maintained by
< individuals, using resources of these institutions.

Not saying they are run by institutions -- saying they are run by yuppies.

< It's from "Jitterbug Perfume", a book with enough pithy .signature
< quotes to keep me going for quite a while.  You'll have to locate the
< page number yourself.

Thanks, sounds interesting. Am still ticked off with the people who
contributed quotes to the fortune files without including references
(anyone know the source of Hanlon's Razor?).

---- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)

webber@aramis.rutgers.edu.UUCP (07/13/88)

In article <4414@pasteur.Berkeley.Edu>, faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
< ...
< Since we don't have the old bandwidth constraints (uucp batching, etc),
< I think we should create some new ones.  How about adding a
< re-transmission delay into the news software, so that an article would
< wait at a particular site for at least a few hours (say) before being
< sent out again?  That way, the high-bandwidth newsgroups would have
< discussions with delays of a few days or so, instead of an hour or
< two.  We could set different delays for different newsgroups, so that
< cancel messages and "timely" groups like comp.risks and *.announce
< would move quickly, whereas soc.singles and comp.lang.c could move more
< slowly.  For binary and source groups it wouldn't make much of a
< difference, since they don't contain followups.

I hardly see anything timely about comp.risks.  It seems that waiting
until the facts were in could only help the group.  However, whenever
anyone starts saying ``this is how we should handle all the groups
except for a few,'' it is difficult to tell whether they mean this is
how we should handle the groups they don't like, or how even the
groups they like should be handled.

< The problem with this is that discussions would arrive at people at
< widely different times, and there would be a lot more followups about
< things that for most people are weeks old.
< 
< Does this sound sensible?

Yep.  It was.

---- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)

jerry@oliveb.olivetti.com (Jerry Aguirre) (07/13/88)

In article <4414@pasteur.Berkeley.Edu> faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
>Since we don't have the old bandwidth constraints (uucp batching, etc),
>I think we should create some new ones.  How about adding a
>re-transmission delay into the news software, so that an article would
>wait at a particular site for at least a few hours (say) before being
>sent out again?  That way, the high-bandwidth newsgroups would have
>discussions with delays of a few days or so, instead of an hour or
>two.  We could set different delays for different newsgroups, so that

Since we don't want the article to take 14 days to travel from one end
of the net to the other, how about putting the delay in the news reader?
You could receive the article, forward it, but not allow anyone to read
it until two days after it was posted.  That way everyone would see it
at the same time.
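
Jerry's variant moves the embargo from the transport to the reading
interface.  A rough sketch (illustrative Python only; the two-day figure
is his, everything else here is invented for the example):

```python
EMBARGO = 2 * 86400  # two days, per the suggestion

def visible(posted_at, now, embargo=EMBARGO):
    """True once an article may be shown to readers.  Articles are
    still forwarded to neighbor sites immediately on arrival; only
    the reading interface applies this check."""
    return now - posted_at >= embargo

def readable(articles, now):
    # articles: iterable of (message_id, posted_at) pairs
    return [mid for mid, t in articles if visible(t, now)]
```

Because transit is not delayed, everyone's embargo expires at roughly
the same wall-clock time, which fixes the skew problem in the
transport-delay scheme.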

Of course I won't be willing to wait the two days so I will disable this
feature at my site...  I can't believe that people are complaining about
news transmission being too fast!  It wasn't that long ago that a large
portion of the articles would take more than the default expiration time
to reach everyone.

The software is also a lot more reliable and there are more redundant
paths now.  Perhaps we should also add a random junker that corrupts or
eliminates some of the articles.  With a little coding we could be back
where we were years ago.

peter@ficc.UUCP (Peter da Silva) (07/13/88)

Here's an idea... when you quit out of your newsreader, it presents all the
messages you have posted in that session and asks if you still want to send
them.  At least it'd cut down on all the RTFM messages.
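
The mechanism is just a review pass over the session's outgoing queue
before anything is handed to the transport.  A minimal sketch (my own
Python illustration, not an actual reader feature of the time):

```python
def review_session_posts(posts, confirm):
    """At quit time, replay each message queued during the session and
    ask whether it should really go out.  `confirm` is any yes/no
    callback -- in a real reader, an interactive prompt."""
    return [post for post in posts if confirm(post)]
```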
-- 
-- `-_-' Peter (have you hugged your wolf today) da Silva.
--   U   Ferranti International Controls Corporation.
-- Phone: 713-274-5180. CI$: 70216,1076. ICBM: 29 37 N / 95 36 W.
-- UUCP: {uunet,academ!uhnix1,bellcore!tness1}!sugar!ficc!peter.

faustus@ic.Berkeley.EDU (Wayne A. Christopher) (07/14/88)

In article <Jul.12.13.26.21.1988.25310@aramis.rutgers.edu>, webber@aramis.rutgers.edu (Bob Webber) writes:
> I hardly see anything timely about comp.risks.

The major reason I don't think it needs artificial delays is that it's
moderated.  Moderated groups are limited in bandwidth by the moderators
(except for sources and binaries).  Any idea how many of the top volume
groups are moderated?

	Wayne

webber@aramis.rutgers.edu (Bob Webber) (07/14/88)

In article <4456@pasteur.Berkeley.Edu>, faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
> In article <Jul.12.13.26.21.1988.25310@aramis.rutgers.edu>, webber@aramis.rutgers.edu (Bob Webber) writes:
< < I hardly see anything timely about comp.risks.
< 
< The major reason I don't think it needs artificial delays is that it's
< moderated.  Moderated groups are limited in bandwidth by the moderators
< (except for sources and binaries).  Any idea how many of the top volume
< groups are moderated?

Try this one on:  the high volume groups are kept artificially high by
difficulty in creating new unmoderated groups.  Split the net up into
a million groups -- all low volume -- will you then be happy?  The
size of the flow does not belong to any one group -- it belongs to the
net as a whole.  The boundaries are completely artificial.  

---- BOB (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)

peter@ficc.UUCP (Peter da Silva) (07/16/88)

In article <4456@pasteur.Berkeley.Edu>, faustus@ic.Berkeley.EDU.UUCP writes:
> The major reason I don't think it needs artificial delays is that it's
> moderated.  Moderated groups are limited in bandwidth by the moderators
> (except for sources and binaries).

The sources and binaries groups are limited in volume, at least as compared
with the unmoderated equivalents. Some of them have quite high standards
(for example, but not limited to, comp.sources.unix). They generally
feed out large postings over a period of time to cut down on the load
on the net. They avoid repostings by making archives available.

Some of the stuff that gets posted to binaries groups (and even some of the
stuff that gets posted to sources groups) is a bit dubious. But it *is*
limited and managed, pretty effectively, by the moderators.

Now if only there was a comp.sources.ibm-pc.
-- 
Peter da Silva  `-_-'  Ferranti International Controls Corporation.
"Have you hugged  U  your wolf today?" (uunet,tness1)!sugar!ficc!peter.

mangler@cit-vax.Caltech.Edu (Don Speck) (07/24/88)

In article <4277@pasteur.Berkeley.Edu>, fair@ucbarpa.Berkeley.EDU (Erik E. Fair) writes:
>				    NNTP hasn't changed the economics
> for a UUCP site, save that they might be able, due to the relative
> ubiquitousness of the Internet, to get a full netnews feed physically
> closer to where they are (and thus cheaper in phone bills).

From what I've seen, tcp/uucp and NNTP have made ARPAnet sites *less*
receptive to dialup UUCP, so it's likely to be *harder* to get a phone
feed from any given ARPAnet gateway, not easier.  "We have the ARPAnet;
why do we need UUCP?"  Well, before NNTP, one needed UUCP to get Usenet.
But now individual ARPA sites don't need the UUCP sites anymore.

NNTP has introduced a strong economy of scale, leading to a much wider
disparity in readers per machine - or more to the point, in *posters*
per machine.  A typical college Usenet machine now serves hundreds of
posters.  That's why Usenet now has a hundred thousand undergraduates
posting idle chit-chat.

NNTP promotes consolidation, not the decentralization that is Usenet's
hallmark.  Unlike mailing lists, which can be received by any machine,
NNTP servers are one per department, with the trend being toward one
per campus.  This extrapolates to one per regional network.  Do you
see much rrn'ing across BARRNET, yet?  You will... and caching slave
servers will hasten the day.

As something becomes more and more centralized, it falls under the
purview of ever higher levels in the bureaucracy.  It becomes *very*
visible, to more conservative people, something we never had to
worry about with mailing lists, because mail was private.  Usenet
has a pretty radical reputation.  Do you think it could withstand
a DARPA review?

Centralization brings us back to having critical failure points,
where loss of a single machine knocks out a department, a campus,
perhaps a regional net.  I'm not talking about machine failures;
I'm talking about *administrative* shutdowns.

For all the fuss made about the AT&T gateway consolidation announcement,
the thing that worried me most about it was when our operations manager
(who dislikes Usenet) read about the fuss in InfoWorld.  Usenet is
beginning to come under public scrutiny, like it or not.

The loss of AT&T is nothing compared to what would happen if the
Internet were kicked out from under Usenet.  The FCC's proposed
Enhanced Service Provider surcharge could have mostly done it.
Usage-sensitive billing on the ARPAnet would kill off many links.
Usenet could be completely banned from the Internet by a serious
charge of impropriety, such as if MCImail, ATTmail, and Telemail
complained about unfair government-subsidized competition, or if
misc.forsale were to get a Golden Fleece.

By giving away Usenet to the Internet, it now has to answer to a
new master, one that can be a lot harsher if crossed.

Don Speck   speck@vlsi.caltech.edu  {amdahl,ames!elroy}!cit-vax!speck

gds@spam.istc.sri.com (Greg Skinner) (07/24/88)

In article <7397@cit-vax.Caltech.Edu> mangler@cit-vax.Caltech.Edu (Don Speck) writes:
>Well, before NNTP, one needed UUCP to get Usenet.
>But now individual ARPA sites don't need the UUCP sites anymore.

Before nntp, there were arpa sites that received Usenet from uucp
neighbors.  However, they were at liberty to use any file
transfer-type protocol (smtp, rcp, etc.) to move news to arpa sites
that didn't want uucp phone connections.  With nntp, a standard was
provided for news exchange among arpa neighbors.

>NNTP has introduced a strong economy of scale, leading to a much wider
>disparity in readers per machine - or more to the point, in *posters*
>per machine.  A typical college Usenet machine now serves hundreds of
>posters.  That's why Usenet now has a hundred thousand undergraduates
>posting idle chit-chat.

This has nothing to do with nntp.  What about all the AT&T and DEC
posters?  They don't use nntp internally that I know of (perhaps not
externally either, but I am not sure).  Yet they are voluminous due to
sheer numbers.

>Unlike mailing lists, which can be received by any machine,
>NNTP servers are one per department, with the trend being toward one
>per campus.  This extrapolates to one per regional network.  Do you
>see much rrn'ing across BARRNET, yet?  You will... and caching slave
>servers will hasten the day.

NNTP was not designed for news to be *read* across gateway boundaries.
As for your previous points, I've seen no trends towards one nntp
server per campus.  There are many reasons why this is so -- single
point of failure, disk and cpu charges unfairly incurred on a single
machine, for example.

>As something becomes more and more centralized, it falls under the
>purview of ever higher levels in the bureaucracy.  It becomes *very*
>visible, to more conservative people, something we never had to
>worry about with mailing lists, because mail was private.  Usenet
>has a pretty radical reputation.  Do you think it could withstand
>a DARPA review?

Are you implying that nntp is at fault for making Usenet more visible
than the existing mailing lists?  I disagree (although I agree in
principle that as something becomes more centralized it becomes more
visible).  *Personal* mail is as private as the system administration
permits it to be.  Mailing lists, despite attempts to keep them secret
or restrict information flow, eventually come under the scrutiny of the
administrators through whose hosts list mail passes.  Lists I've known
of (even those I have not been a member of) have caused far more
damage than Usenet has, and there was no DARPA involvement to squelch
those lists.  More often than not, a word from the offended
administrators to the list maintainers was enough to resolve any
difficulties.

>Centralization brings us back to having critical failure points,
>where loss of a single machine knocks out a department, a campus,
>perhaps a regional net.  I'm not talking about machine failures;
>I'm talking about *administrative* shutdowns.

Again, nntp was not designed to serve large organizations (such as
huge campuses or regional nets).  If it's being used in that manner,
its limitations -- not to mention the effect of that service being
discontinued -- will naturally be felt throughout the
organization.

>Usenet is beginning to come under public scrutiny, like it or not.

Usenet came under public scrutiny long before nntp.

>By giving away Usenet to the Internet, it now has to answer to a
>new master, one that can be a lot harsher if crossed.

Certainly this is the case with running any new service over an
existing one.  However, nntp will be no more at fault for causing problems
than the mailing lists which existed before it.

--gregbo

p.s.  It might be more appropriate to title this discussion "Usenet
has had some ill effects on the Internet" but that is also highly
debatable.