[net.news] Fear and Loathing...

guest@ccivax.UUCP (What's in a name ?) (11/14/85)

A few quick questions:

What type of phone service is being used to handle net traffic?

Why are cross-postings not handled as symbolic links or some
similar method, rather than sending (in this case) 3 copies
of an identical file?  Perhaps cross-postings could be handled
more efficiently.

Would removing those groups cause large quantities of traffic to be
posted to an even larger set of groups?

Isn't a net group better than some of the "underground" mailings
that get sent through the same channels?  One site may carry
100 "letters" that are identical except for the recipient.

Would it be more practical to "spread" the backbone a little wider
in order to get better distribution of long-distance costs?
Or maybe concentrate it into lower-cost, higher-speed lines?

I know that broadcast/satellite distribution has been discussed;
are there ways of getting around the feds' concerns (encryption, perhaps)?

My main concern about deleting high-traffic groups is that
I read only a few of them regularly.
If all that creative effort is diverted into the groups I do read,
it becomes harder to pick and choose what I want to read.
I might do a one-time scan of a group to get background info,
but not read it regularly.

Basically, the traffic problem is the same as the "better funnel"
problem.  Deleting groups makes the neck of the funnel smaller,
but the traffic going into it doesn't change.  In other words,
it becomes more difficult for a subscriber to filter out the
information he wants.  Also, more issues and articles draw
responses, leading to more, not less, traffic.

Cross-posting and Re-Re-Re-posting of already long articles are
legitimate concerns, but maybe we need to find better ways of
reducing the physical traffic associated with these practices.

Examples:

tar new files together, preserving linkages so that

net/news/mcvax.867 net/news/net.news.group/mcvax.867 net.flame/mcvax.867

are links to the same file rather than three identical copies of the same
file.  Or perhaps use a "receive news" command that could scan a single
copy of an article and generate the appropriate linkages.  This way,
original postings and postings to be forwarded could be managed in a
single directory.
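
Something like this might do it (a rough sketch only; the spool
layout, the "incoming" directory, and the crosslink routine are
assumptions of mine, not real news code):

/* rcvnews.c - sketch of a "receive news" linker: keep ONE stored
 * copy of an article and hard-link it into each crossposted
 * group's directory instead of keeping three identical files.
 */
#include <stdio.h>
#include <unistd.h>

#define SPOOL "/usr/spool/news"

/* link the single stored copy into each group directory */
int crosslink(char *src, char *artname, char *groups[], int ngroups)
{
    char dest[1024];
    int i, failed = 0;

    for (i = 0; i < ngroups; i++) {
        sprintf(dest, "%s/%s/%s", SPOOL, groups[i], artname);
        if (link(src, dest) != 0) {    /* hard link, not a copy */
            perror(dest);
            failed++;
        }
    }
    return failed;
}

int main(void)
{
    char *groups[] = { "net.news", "net.news.group", "net.flame" };

    /* one stored copy of mcvax.867, visible in three groups */
    return crosslink(SPOOL "/incoming/mcvax.867", "mcvax.867",
                     groups, 3);
}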

By using this system.article_number convention, references can be quickly
found and read. Perhaps even add a "see reference" command to readnews.
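
The lookup itself could be trivial (again a sketch; the directory
layout is an assumption, not the actual spool format):

/* seeref.c - sketch: turn a group plus a "system.article_number"
 * reference into a spool path and open it for the reader.
 */
#include <stdio.h>

#define SPOOL "/usr/spool/news"

FILE *see_reference(char *group, char *ref)
{
    char path[1024];

    sprintf(path, "%s/%s/%s", SPOOL, group, ref);
    return fopen(path, "r");    /* NULL if this site never got it */
}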

In essence, the idea is to reduce traffic by using the file system
as a database cross-referencing system, while sending and forwarding
only one copy of each new posting.

Also, when digests are produced, rather than sending the articles
themselves across the backbone, send a "digest script file" that could
pass these "highlights" to digest subscribers by building the digest
on the machine most local to the digest machine.  If a digest
machine didn't have the files it needed to complete the job,
it would ask its "feeder" to create the digest, or just request
the needed articles.
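
The script file could be no more than a list of references, one per
line.  A sketch of the assembly step (the format and names here are
invented for illustration):

/* mkdigest.c - sketch: build a digest from a script file listing
 * one reference per line (e.g. "net.flame/mcvax.867").  Articles
 * we have locally are copied into the digest; missing ones go on
 * a "wanted" list to be requested from the feeder.
 */
#include <stdio.h>
#include <string.h>

#define SPOOL "/usr/spool/news"

int build_digest(FILE *script, FILE *digest, FILE *wanted)
{
    char ref[512], path[1024], buf[BUFSIZ];
    FILE *art;
    size_t n;
    int missing = 0;

    while (fgets(ref, sizeof ref, script) != NULL) {
        ref[strcspn(ref, "\n")] = '\0';        /* strip newline */
        sprintf(path, "%s/%s", SPOOL, ref);
        if ((art = fopen(path, "r")) == NULL) {
            fprintf(wanted, "%s\n", ref);      /* ask the feeder */
            missing++;
            continue;
        }
        while ((n = fread(buf, 1, sizeof buf, art)) > 0)
            fwrite(buf, 1, n, digest);
        fclose(art);
    }
    return missing;    /* 0 means the digest was built entirely locally */
}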

woods@hao.UUCP (Greg Woods) (11/14/85)

> Why are cross postings not handled as symbolic links 

  They are, in fact, hard links.
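
  Easy to verify: two spool entries for a crossposted article share
a device and inode, i.e. there is one copy on disk.  A quick check
(the paths here are made up):

/* samefile.c - confirm two spool entries are hard links */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    struct stat a, b;

    if (stat("/usr/spool/news/net.news/102", &a) != 0 ||
        stat("/usr/spool/news/net.flame/87", &b) != 0)
        return 1;
    printf(a.st_dev == b.st_dev && a.st_ino == b.st_ino
        ? "hard links: one copy on disk\n"
        : "separate files\n");
    return 0;
}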

> Would it be more practical to "spread" the backbone a little wider
> in order to get better distribution of long distance costs?

  If there were sites willing to share the cost, sure. Any volunteers?

> tar new files together, preserving linkages so that
> 
> net/news/mcvax.867 net/news/net.news.group/mcvax.867 net.flame/mcvax.867
> 
> are links to the same file rather than three identical copies of the same
> file. 

  As above, this is already in place.

> By using this system.article_number convention, references can be quickly
> found and read. Perhaps even put a "see reference" command on readnews.

  This too is already there. Just typing <####@site.UUCP> will get you
the referenced article.

  Good ideas, but that wheel has already been invented and there is STILL
too much traffic. We need more than that. Whatever we come up with (length
limits, moderation, etc.), it must be enforceable by the software or by a small
group of people. Experience has CLEARLY shown that schemes that depend on
netwide cooperation of users (or even administrators) are doomed to failure.

--Greg