[news.newusers.questions] news.announce.newusers

rodney@sun.ipl.rpi.edu (Rodney Peck II) (08/22/89)

I think maybe there is an even bigger problem with news.  Most people
sort of stumble across it or have it pointed out by a friend.  The
group with the stuff everyone is supposed to read
(news.announce.newusers) gets subscribed along with everything else on
the net; then you weed through it all and keep what you want.

Any ideas about how to make sure that new people get the hook for the
n.a.n group so they can at least find the f-ing manual?  

I don't like the idea of "ask your system administrator".  I am my
system administrator and I've had to teach all this to myself.  (That's
part of the reason it's so irritating when someone asks a really goofy
question.)  But then, most of Unix is like that.  People ask me in a
really accusing tone how they are supposed to know about `man' and I
stare back at them in disbelief.

My site (ipl.rpi.edu) is a very well connected large sun4 (if you
wanna know how big, send mail, it's big).  There's also the
information tech. services group here at rpi which is very helpful
when I get really stuck.  If we're having trouble figuring out what to
do, what about the poor little guy with a uucp connection over a phone
line?

--
Rodney

arromdee@crabcake.cs.jhu.edu (Kenneth Arromdee) (11/11/89)

In article <110@toaster.SFSU.EDU> eps@cs.SFSU.EDU (Eric P. Scott) writes:
>>How is a new user supposed to even learn of the EXISTENCE of those
>>news.announce.newusers articles in the first place?  (given that many sites
>>expire them at a rate more frequent than they arrive)
>Each article in n.a.n carries an explicit 3-month expiration to
>exempt it from the normal system expiration, and a Supersedes:
>header to expire-on-demand the previous edition.  Any news
>administrator who overrides this (and this takes special effort),
>is, IMHO, not merely incompetent, but malicious.  ...

I just logged onto jhunix and read news.  No articles in
news.announce.newusers.

It _does_ happen.
--
"The workers ceased to be afraid of the bosses.  It's as if they suddenly
 threw off their chains." -- a Soviet journalist, about the Donbass coal strike

Kenneth Arromdee (UUCP: ....!jhunix!arromdee; BITNET: arromdee@jhuvm;
     INTERNET: arromdee@crabcake.cs.jhu.edu)

jay@splut.conmicro.com (Jay Maynard) (11/11/89)

In article <110@toaster.SFSU.EDU> eps@cs.SFSU.EDU (Eric P. Scott) writes:
>>How is a new user supposed to even learn of the EXISTENCE of those
>>news.announce.newusers articles in the first place?  (given that many sites
>>expire them at a rate more frequent than they arrive)
>Each article in n.a.n carries an explicit 3-month expiration to
>exempt it from the normal system expiration, and a Supersedes:
>header to expire-on-demand the previous edition.  Any news
>administrator who overrides this (and this takes special effort),
>is, IMHO, not merely incompetent, but malicious.  ...

Right. All it takes is a line in crontab:
0 2 * * * /usr/lib/news/expire -e2 -i
Real special effort, that. (Yes, I expire news here after 2 days.) Any
news administrator who doesn't do this leaves himself open to the moron
who discovers the Expires: header and posts his jokes to rec.humor with
an expiration date in 2004.
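
[For reference, the mechanism Eric describes works through headers
carried on each n.a.n posting, along these lines (the date and
message-ID below are made up for illustration):

```
Expires: Thu, 15 Feb 90 00:00:00 GMT
Supersedes: <previous-edition-id@site.domain>
```

As I understand it, the -i flag in the crontab line above is exactly
the override Eric is talking about: it makes expire ignore Expires:
lines and apply the -e default to everything.]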

Until I get news 3.0 running here, or figure out a way to expire
news.announce.newusers under different rules than all other groups
without running expire twice, it'll stay that way, too. (For those who
are new to this, expire is an expensive program to run, both in terms of
time and CPU.) As I write this, expire has been running over 2 hours and
9:43 of CPU, and isn't halfway through. I'm not about to run something
like that twice a day unless I really need to for some reason like being
tight on disk space.

-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL   | Never ascribe to malice that which can
jay@splut.conmicro.com       (eieio)| adequately be explained by stupidity.
{attctc,bellcore}!texbell!splut!jay +----------------------------------------
Shall we try for comp.protocols.tcp-ip.eniac next, Richard? - Brandon Allbery

bdb@becker.UUCP (Bruce Becker) (11/12/89)

In article <3032@splut.conmicro.com> jay@splut.conmicro.com (Jay "you ignorant splut!" Maynard) writes:
|In article <110@toaster.SFSU.EDU> eps@cs.SFSU.EDU (Eric P. Scott) writes:
|[...]
|Until I get news 3.0 running here, or figure out a way to expire
|news.announce.newusers under different rules than all other groups
|without running expire twice, it'll stay that way, too. (For those who
|are new to this, expire is an expensive program to run, both in terms of
|time and CPU.) As I write this, expire has been running over 2 hours and
|9:43 of CPU, and isn't halfway through. I'm not about to run something
|like that twice a day unless I really need to for some reason like being
|tight on disk space.

	Doctor! Quick! Administer a dose of
	C news! This patient has all the symptoms...

Cheers,
-- 
  .::.	 Bruce Becker	Toronto, Ont.
w \@@/	 Internet: bdb@becker.UUCP, bruce@gpu.utcs.toronto.edu
 `/c/-e	 BitNet:   BECKER@HUMBER.BITNET
_/  \_	 Your Agrarian Distress Card - Don't heave loam without it...

bill@netagw.uu.net (Bill Aten) (11/12/89)

>
>Right. All it takes is a line in crontab:
>0 2 * * * /usr/lib/news/expire -e2 -i
>
>Until I get news 3.0 running here, or figure out a way to expire
>news.announce.newusers under different rules than all other groups
>without running expire twice, it'll stay that way, too.

You might use the following crontab entry instead:
0 2 * * * /usr/lib/news/expire -n !news.announce.newusers -e2 -i

One run of 'expire', but it doesn't touch 'news.announce.newusers'.

-- 
=============================================================================
Bill Aten                             |   Internet:  bill@netagw.uu.net
UUCP:  ...!uunet!netagw!bill          | Compuserve:  70270.451@compuserve.com
=============================================================================

gary@sci34hub.UUCP (Gary Heston) (11/14/89)

In article <3032@splut.conmicro.com>, jay@splut.conmicro.com (Jay Maynard) writes:
> In article <110@toaster.SFSU.EDU> eps@cs.SFSU.EDU (Eric P. Scott) writes:
> >>How is a new user supposed to even learn of the EXISTENCE of those
> >>news.announce.newusers articles in the first place?  (given that many sites
> >>expire them at a rate more frequent than they arrive)
 
> Right. All it takes is a line in crontab:
> 0 2 * * * /usr/lib/news/expire -e2 -i
> Real special effort, that. (Yes, I expire news here after 2 days.)
 	[.....]
> Until I get news 3.0 running here, or figure out a way to expire
> news.announce.newusers under different rules than all other groups
> without running expire twice, it'll stay that way, too. 

OK, Jay, you asked for it....

0 2 * * * /usr/lib/news/expire -n all !news.announce.newusers -e2 -i

You can also run on a specific newsgroup, with this syntax:

0 1 * * * /usr/lib/news/expire -n news.announce.newusers -e100

to prevent things from hanging around beyond the three-month 
reposting (with a 10-day cushion, to allow for net.propagation).
Running on a single group like that should take a lot less time.
You also need only do it weekly, say, on Monday morning when you
don't want to get up early.  :-)

I believe you will find info on this in the man page for expire.

To quote several other net.persons, and assuming that by now 
you're groaning and hiding your face, Read The Fine Manual!!

:-)

Also, I suspect you're a fairly small site in terms of user numbers.
In a case like yours (and this goes for the rest of the inexperienced
admins and users), archiving the articles would be sensible, so you
wouldn't have to deal with expiring them. Where there's lots 
of newuser activity (college systems, etc.) keeping them
online is better (of course, such sites generally have far more 
resources to put to this use than we do).

When you get a chance, look at your newsdir and see if you have 
some rather large directory entries. That will slow things down
a whole lot; it can be fixed by replicating with cpio -p or 
by dumping to tape and DELETING the structure (so that dirs 
are re-created with the minimum number of entries necessary),
then restoring. My major expire run takes about 35 minutes, 
on a 386 multibus machine with a 150MB SCSI drive dedicated to
news, about 100MB used. I think your expire should run faster.
I don't use the -i option on mine, either.
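
[The compaction trick described above can be sketched roughly like
this -- paths, group names, and the tape device are illustrative, and
you'd want news processing shut down first, keeping the old tree until
the copy is verified:

```
# Replicate the spool with cpio -p; each directory in the copy is
# recreated at the minimum size needed for its current entries.
cd /usr/spool/news
find comp -depth -print | cpio -pdm /usr/spool/news.new

# Or dump to tape, DELETE the bloated tree, and restore in place:
#   find comp -depth -print | cpio -o > /dev/rmt0
#   rm -rf comp
#   cpio -idm < /dev/rmt0

# After verifying the copy, swap the new tree into place.
```

The point is that old Unix filesystems never shrink a directory once
it has grown, so a freshly created copy is the only way to get the
entries back down to size.]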

> Jay Maynard, EMT-P, K5ZC, PP-ASEL   | Never ascribe to malice that which can
> jay@splut.conmicro.com       (eieio)| adequately be explained by stupidity.
> {attctc,bellcore}!texbell!splut!jay +----------------------------------------
> Shall we try for comp.protocols.tcp-ip.eniac next, Richard? - Brandon Allbery

P.S. I passed your last posting about the ISC 386/ix inode eating
bug to our Technical Support people. They appreciated it. Thanks!

-- 
    Gary Heston     { uunet!sci34hub!gary  }    System Mismanager
   SCI Technology, Inc.  OEM Products Department  (i.e., computers)
      Hestons' First Law: I qualify virtually everything I say.

" Maynard) (11/15/89)

In article <407@sci34hub.UUCP> gary@sci34hub.UUCP (Gary Heston), among
others, writes:
>0 2 * * * /usr/lib/news/expire -n all !news.announce.newusers -e2 -i
>You can also run on a specific newsgroup, with this syntax:
>0 1 * * * /usr/lib/news/expire -n news.announce.newusers -e100
>I believe you will find info on this in the man page for expire.
>To quote several other net.persons, and assuming that by now 
>you're groaning and hiding your face, Read The Fine Manual!!

Groooan. OK, OK...I plead ignorance, and inability to RTFMs for B news:
I don't have them any more, having lost them to the Microport SV/AT fsck
bug.

Guess I'll have to look at C news - that is, as soon as I locate a copy
I can get via uucp. (No FTP here.)

>When you get a chance, look at your newsdir and see if you have 
>some rather large directory entries. That will slow things down
>a whole lot; it can be fixed by replicating with cpio -p or 
>by dumping to tape and DELETING the structure (so that dirs 
>are re-created with the minimum number of entries necessary),
>then restoring. My major expire run takes about 35 minutes, 
>on a 386 multibus machine with a 150MB SCSI drive dedicated to
>news, about 100MB used. I think your expire should run faster.
>I don't use the -i option on mine, either.

Actually, I think my problem is a horribly scrambled free list on /usr;
fsanalyze claims that the average seek distance on my history file is
something like 700 cylinders (on an ST251!). Yes, I am using dbz. I have
packdisk, but am scared to death of it.
My expire used to run faster, but I suspect that my scrambled disk has
slowed it down.

>P.S. I passed your last posting about the ISC 386/ix inode eating
>bug to our Technical Support people. They appreciated it. Thanks!

Uh, you're welcome, I think...but I don't think that was mine. (Most of
that work has been done by T. William Wells.)

-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL   | Never ascribe to malice that which can
jay@splut.conmicro.com       (eieio)| adequately be explained by stupidity.
{attctc,bellcore}!texbell!splut!jay +----------------------------------------
Shall we try for comp.protocols.tcp-ip.eniac next, Richard? - Brandon Allbery

rickh@lancelot (Rick Hung) (11/15/89)

Hi...I'm new to USENET...I'm trying to learn all the commands and
features of Unix, and how to use it well.  I just thought I'd try and
enter a message.

Rick H.

bill@twwells.com (T. William Wells) (11/16/89)

In article <3042@splut.conmicro.com> jay@splut.conmicro.com (Jay "you ignorant splut!" Maynard) writes:
: Guess I'll have to look at C news - that is, as soon as I locate a copy
: I can get via uucp. (No FTP here.)

You can get it from uunet, among other places. Or, if you are
desperate, I can send it to you.

: >P.S. I passed your last posting about the ISC 386/ix inode eating
: >bug to our Technical Support people. They appreciated it. Thanks!
:
: Uh, you're welcome, I think...but I don't think that was mine. (Most of
: that work has been done by T. William Wells.)

BTW, I have a binary patch for 2.0.2 if anyone wants it.

Replies to this message should be via e-mail, rather than posting.

---
Bill                    { uunet | novavax | ankh | sunvice } !twwells!bill
bill@twwells.com