yost@esquire.UUCP (David A. Yost) (09/12/89)
All I know about Cnews is that it exists and is being worked on, so the
following suggestion comes from left (Bnews) field.  I would be pleased
to read the following (or equivalent) about Cnews:

Cnews automatically manages the disk space used by news articles in a
FIFO fashion.  Thus, you don't have to keep tweaking expire times to
make sure you don't run out of space; articles will stay around as long
as possible, being deleted automatically only as space is needed for new
articles.  (Cnews will ensure that the free space on a given file system
used for articles always exceeds the disk-reserve configuration
parameter for that file system.)

Articles are removed in the order specified in the "remove-list", a FIFO
list of article file names.  The remove-list is accessed by a pair of
routines (also accessible through programs) that push and pull files
from the list (which is implemented as a table in the Cnews article
database).

The new Cnews version of the expire program is normally used only to add
filenames to the remove-list, rather than deleting them immediately as
was the case in Bnews.  The purpose of Cnews expire is only to set the
lifetimes of different newsgroups relative to each other.  Any
particular article will live only until it is its turn to be deleted.
You can also use the new Cnews expire to delete some number of articles
(or amount of disk space) from the head of the remove-list.

 --dave yost
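The push/pull interface wished for above can be prototyped in a few
lines of Bourne shell.  This is a hypothetical sketch, not anything that
ships with Bnews or Cnews: the names rl_push and rl_pull, and the choice
of a flat file rather than a table in the article database, are
inventions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of the proposed remove-list: a plain file of
# article pathnames, oldest first.  The names rl_push and rl_pull and
# the flat-file representation (rather than a database table) are
# stand-ins for illustration only.
RLIST=${RLIST-/usr/lib/news/removelist}

rl_push() {                     # append article filenames to the tail
    for f in "$@"; do
        echo "$f" >>"$RLIST"
    done
}

rl_pull() {                     # print and drop the oldest $1 entries
    n=$1
    head -n "$n" "$RLIST"
    sed "1,${n}d" "$RLIST" >"$RLIST.new" && mv "$RLIST.new" "$RLIST"
}
```

A space-managing inews would then rl_pull (and unlink) entries until df
showed enough room, while expire would only rl_push.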
henry@utzoo.uucp (Henry Spencer) (09/13/89)
In article <1401@esquire.UUCP> yost@esquire.UUCP (David A. Yost) writes:
>I would be pleased to read the following
>(or equivalent) about Cnews:
> ...
> articles will stay around as long as
> possible, being deleted automatically only
> as space is needed for new articles...
> The new Cnews version of the expire program
> is normally used only to add filenames to
> the remove-list, rather than deleting them
> immediately as was the case in Bnews.  The
> purpose of Cnews expire is only to set the
> lifetimes of different newsgroups...

Well, this may half-please you. :-)  It's not set up that way out of the
box.  But at least part of it -- expiry only on space shortage -- can be
done by minor changes to shell files.  People have done it.  We don't
have a "remove list" or anything like that, but you could get much the
same effect by doing iterative expiry, generating expire control files
specifying successively tighter expiry times until you had enough free
space to suit you.  The overhead needn't be very high if it's managed
intelligently.  Again, we don't provide the solution, but I think most
of the tools are there to build your own.
-- 
V7 /bin/mail source: 554 lines.  | Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages.   | uunet!attcan!utzoo!henry  henry@zoo.toronto.edu
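"Generating expire control files specifying successively tighter expiry
times" might be sketched like this.  The control-line syntax shown
("pattern modflag days archive") is only schematic -- check the expire
documentation in your C news before trusting it -- and doexpire is an
invented stand-in for however your site actually runs expire.

```shell
#!/bin/sh
# Sketch only: mkexplist emits a one-line expire control file keeping
# everything $1 days.  The line format is schematic, not guaranteed to
# match your C news, and doexpire is a made-up local wrapper name.
mkexplist() {
    echo "all x $1 -"
}

# A driver might look like this (not run here, since it would touch
# the real spool):
#   for days in 14 10 7 5 3 2 1; do
#       free=`df /usr/spool/news | awk 'NR==2 {print $4}'`
#       [ "$free" -ge 5000 ] && break      # 5000 free blocks: a guess
#       mkexplist $days >/tmp/explist && doexpire /tmp/explist
#   done
```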
brad@looking.on.ca (Brad Templeton) (09/13/89)
In article <1401@esquire.UUCP> yost@esquire.UUCP (David A. Yost) writes:
>Cnews automatically manages the disk space
>used by news articles in a FIFO fashion.
>Thus, you don't have to keep tweaking expire
>times to make sure you don't run out of space;
>articles will stay around as long as
>possible, being deleted automatically only
>as space is needed for new articles.

C news doesn't do that.  I suggested it to them a while ago, but they
were already in the "Please, NO MORE FEATURES!" mode.

In fact, when I had smaller disk packs I wrote a very simple expire
program that was space based.  Instead of keeping the last N days of
articles, you said keep the last N *bytes* of articles.  Much nicer.  It
could be tuned per group, too.  It also ran when space got low.

But it just removed files.  It did not update the database.  It left
that to regular expire every night or couple of days, as you wish.

It's short; I will post it if need be.
-- 
Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473
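Brad's program isn't posted here, but the keep-the-last-N-bytes idea can
be approximated as below.  trim_group is an invented name, not his
actual code: it only *prints* the surplus files (newest kept, oldest
listed), leaving removal and database cleanup to the caller, and it
assumes article names contain no whitespace (true of numeric names).

```shell
#!/bin/sh
# Approximation of a space-based expire: keep the newest $2 bytes of
# articles in directory $1 and print the rest for someone else to
# remove.  Invented sketch, not Brad's program.
trim_group() {
    dir=$1
    limit=$2                        # byte budget for this group
    total=0
    for f in `ls -t "$dir"`; do     # newest first; numeric names only
        bytes=`wc -c <"$dir/$f"`
        total=`expr $total + $bytes`
        if [ "$total" -gt "$limit" ]; then
            echo "$dir/$f"          # over budget: removal candidate
        fi
    done
}
```

Per-group tuning then just means calling it with a different budget for
each group directory.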
yost@esquire.UUCP (David A. Yost) (09/13/89)
In article <1989Sep12.230820.2296@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>Well, this may half-please you. :-)  It's not set up that way out of the
>box.  But at least part of it -- expiry only on space shortage -- can be
>done by minor changes to shell files.  People have done it.  We don't
>have a "remove list" or anything like that, but you could get much the
>same effect by doing iterative expiry, generating expire control files
>specifying successively tighter expiry times until you had enough free
>space to suit you.  The overhead needn't be very high if it's managed
>intelligently.  Again, we don't provide the solution, but I think most
>of the tools are there to build your own.

Too bad.  I was hoping for something automatic, out of the box.  One of
the worst things about Bnews is the amount of nursing it needs, and the
amount of gory detail you need to know about it.

One thing I'm not clear on: using your suggested fix, does inews make
room for itself as it goes along, or do I still have to guess how much
room to leave for tonight's feed?

 --dave

P.S. Yup, "expiry" is in my dictionary.
henry@utzoo.uucp (Henry Spencer) (09/14/89)
In article <1403@esquire.UUCP> yost@esquire.UUCP (David A. Yost) writes:
>One thing I'm not clear on: using your suggested
>fix, does inews make room for itself as it goes
>along, or do I still have to guess how much
>room to leave for tonight's feed?

No, you don't have to guess.  The (shell) program that works through
spooled input checks disk space before proceeding.  It's pretty simple
to have it run an expire if space is inadequate.  We don't do it that
way ourselves, but a first-cut version should be a one-line change.
-- 
V7 /bin/mail source: 554 lines.  | Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages.   | uunet!attcan!utzoo!henry  henry@zoo.toronto.edu
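The "one-line change" might look roughly like this.  space_low is an
invented helper, doexpire a made-up name for the local expire wrapper,
and the df field position and 5000-block threshold are assumptions that
vary by system.

```shell
#!/bin/sh
# Sketch of checking free space before unbatching spooled input.
space_low() {                # usage: space_low FREEBLOCKS MINBLOCKS
    [ "$1" -lt "$2" ]
}

# In a newsrun-style script, before processing the spool directory:
#   free=`df /usr/spool/news | awk 'NR==2 {print $4}'`
#   space_low "$free" 5000 && doexpire
```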
sysruth@helios.physics.utoronto.ca (Ruth Milner) (09/15/89)
In article <1989Sep12.230820.2296@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>In article <1401@esquire.UUCP> yost@esquire.UUCP (David A. Yost) writes:
>>I would be pleased to read the following
>>(or equivalent) about Cnews:
>> ...
>> articles will stay around as long as
>> possible, being deleted automatically only
>> as space is needed for new articles...
>
>Well, this may half-please you. :-)  It's not set up that way out of the
>box.  But at least part of it -- expiry only on space shortage -- can be
>done by minor changes to shell files.  People have done it.

If you're planning to do it, remember that since news articles tend to
be small individual files, you are likely to run out of inodes long
before you run out of space, even if you do some tweaking with the inode
parameters in mkfs.  The exceptions, in my experience anyway, tend to be
when you run into problems batching or unbatching, so you get huge files
created and left around for a while.
-- 
Ruth Milner                    UUCP     - {uunet,pyramid}!utai!helios.physics!sysruth
Systems Manager                BITNET   - sysruth@utorphys
U. of Toronto                  INTERNET - sysruth@helios.physics.toronto.edu
Physics/Astronomy/CITA Computing Consortium
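Given this point, a free-space test for a news filesystem arguably ought
to watch inodes as well as blocks.  A sketch with invented names and
thresholds; df and df -i output layouts differ enough between systems
that the parsing is left to the reader.

```shell
#!/bin/sh
# fs_ok succeeds only when both free blocks and free inodes clear
# their minimums.  The name, and the idea of feeding it numbers parsed
# out of df/df -i, are illustrative assumptions.
fs_ok() {      # usage: fs_ok FREEBLOCKS MINBLOCKS FREEINODES MININODES
    [ "$1" -ge "$2" ] && [ "$3" -ge "$4" ]
}
```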
henry@utzoo.uucp (Henry Spencer) (09/15/89)
In article <1989Sep14.182604.15992@helios.physics.utoronto.ca> sysruth@helios.physics.utoronto.ca (Ruth Milner) writes:
>... remember that since news articles tend to
>be small individual files, you are likely to run out of inodes long before
>you run out of space...

This is *very* dependent on system configuration details.  Exhausting
inodes is one problem that utzoo has never, ever, had, although we tend
to run with 90%-full filesystems most of the time.  We've never even
been close.  So I think it is fairer to say that it *might* be a
problem, depending on your system.
-- 
V7 /bin/mail source: 554 lines.  | Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages.   | uunet!attcan!utzoo!henry  henry@zoo.toronto.edu
allbery@NCoast.ORG (Brandon S. Allbery) (09/16/89)
As quoted from <1989Sep14.161548.26094@utzoo.uucp> by henry@utzoo.uucp (Henry Spencer):
+---------------
| In article <1403@esquire.UUCP> yost@esquire.UUCP (David A. Yost) writes:
| >One thing I'm not clear on: using your suggested
| >fix, does inews make room for itself as it goes
| >along, or do I still have to guess how much
| >room to leave for tonight's feed?
| 
| No, you don't have to guess.  The (shell) program that works through spooled
| input checks disk space before proceeding.  It's pretty simple to have it run
| an expire if space is inadequate.  We don't do it that way ourselves, but a
| first-cut version should be a one-line change.
+---------------

As the news maintainer (sometimes, at least) on ncoast, I've taken a
pretty close look at C news.  The good news is that it's quite usable
without any handholding; the better news is that much of it is *simple*
shell scripts.  If you can program the Bourne shell, you can change many
things in C news quite easily.

For example, I modified the batcher to support the ability to batch for
certain sites only at certain times (more specifically, *not* to batch
during working hours on weekdays), so those sites can poll during the
day just to get mail.  It was fairly trivial shell hacking.

++Brandon
-- 
Brandon S. Allbery, moderator of comp.sources.misc       allbery@NCoast.ORG
uunet!hal.cwru.edu!ncoast!allbery             ncoast!allbery@hal.cwru.edu
bsa@telotech.uucp, 161-7070 BALLBERY (MCI), ALLBERY (Delphi), B.ALLBERY (GEnie)
Is that enough addresses for you?   no?   then: allbery@uunet.UU.NET (c.s.misc)
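Brandon doesn't show his change, but a time-window predicate the batcher
loop could consult might look like this.  The function name, the
8-to-6 working-hours window, and the weekday rule are made-up examples
of the idea, not his actual code.

```shell
#!/bin/sh
# batch_ok: succeed when batching is allowed for a daytime-poll site.
# Hours and weekday rule are invented illustrations, not Brandon's
# actual modification.
batch_ok() {           # usage: batch_ok HOUR DAYOFWEEK (0=Sun..6=Sat)
    case "$2" in
    0|6) return 0 ;;                     # weekends: always batch
    esac
    [ "$1" -lt 8 ] || [ "$1" -ge 18 ]    # weekdays: outside 8am-6pm
}

# In the batcher, per site:  batch_ok `date +%H` `date +%w` || exit 0
```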
gary@sci34hub.UUCP (Gary Heston) (09/16/89)
[ wishlist about features deleted ]

In article <1989Sep14.182604.15992@helios.physics.utoronto.ca>, sysruth@helios.physics.utoronto.ca (Ruth Milner) writes:
> If you're planning to do it, remember that since news articles tend to
> be small individual files, you are likely to run out of inodes long before
> you run out of space, even if you do some tweaking with the inode
> parameters in mkfs.  The exceptions, in my experience anyway, tend to

I don't find it necessary to "tweak"; I just looked at the typical size
of postings and calculated an appropriate value.  A "df -t /news"
returns the following:

/news      (/dev/dsk/1s0 ):    95420 blocks    30760 i-nodes
                      total:  304254 blocks    65488 i-nodes

The worst problem I've had was when permissions problems caused batched
files to build up in the /news partition.  This caused some of the
directory entries to get stretched out over 50,000+ inodes, which KILLED
performance.  (Expire was taking 6 hours to run, for example; an ls took
5 minutes to come back, and rnews was running all day -- this on a 16MHz
386 Multibus machine I use as a mail/news hub.)  The only way to cure it
was to dump everything to tape, recreate the directory, and restore.
-- 
Gary Heston     { uunet!gary@sci34hub }     System Mismanager
SCI Technology, Inc.  OEM Products Department  (i.e., computers)
Hestons' First Law: I qualify virtually everything I say.
edhew@xenitec.uucp (Ed Hew) (09/18/89)
In article <1989Sep14.182604.15992@helios.physics.utoronto.ca> sysruth@helios.physics.utoronto.ca (Ruth Milner) writes:
>If you're planning to do it, remember that since news articles tend to
>be small individual files, you are likely to run out of inodes long before
>you run out of space, even if you do some tweaking with the inode
>parameters in mkfs.

The following seems to be an optimum space/inode ratio (at least here):

Mount Dir    Filesystem  blocks   used   free  % used  iused  ifree  %iused
/usr/spool/  /dev/news   140000  91186  48814    65%   16911   8081    68%

>The exceptions, in my experience anyway, tend to
>be when you run into problems batching or unbatching so you get huge
>files created and left around for a while.

I like to have that happen in another (stable) filesystem with more
room.

Ed. A. Hew    Authorized Technical Trainer    Xeni/Con Corporation
work: edhew@xenicon.uucp  -or-  ..!{uunet!}utai!lsuc!xenicon!edhew
->home: edhew@xenitec.uucp -or- ..!{uunet!}watmath!xenitec!edhew
->home: previously: edhew@egvideo.uucp  [for people with old maps]
perry@ccssrv.UUCP (Perry Hutchison) (09/20/89)
In article <321@sci34hub.UUCP> gary@sci34hub.UUCP (Gary Heston) writes:
>directory entries ... stretched out over 50,000+ inodes, which KILLED
>performance ... the only way to cure it was to dump everything to tape,
>recreate the directory, and restore.

Another approach: create a new tree in which each directory is new but
each leaf is a link to the existing file.  Then rm -rf the old tree and
rename the new one.  This should be faster, since only directories get
messed with and the data doesn't have to be dumped and restored.  (If
you're short on i-nodes, this might have to be done in parts.)
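Perry's trick, sketched for a single flat directory (real article trees
have subdirectories, which this version ignores).  rebuild_dir is an
invented name, and there is no locking, so it shouldn't be pointed at a
live spool.

```shell
#!/bin/sh
# Rebuild a bloated directory: hard-link every file into a fresh
# directory, then swap it into place.  Flat directories only; no
# locking; the name rebuild_dir is invented for this sketch.
rebuild_dir() {
    old=$1
    new=$old.new.$$
    mkdir "$new" || return 1
    for f in `ls "$old"`; do
        ln "$old/$f" "$new/$f" || return 1   # link, don't copy data
    done
    rm -rf "$old" && mv "$new" "$old"
}
```

Since each leaf is a hard link, removing the old tree frees only the
bloated directory blocks; the article data is never copied.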