hirai@swatsun.uucp (Eiji "A.G." Hirai) (03/30/88)
	Sometimes, our unbatch files for news are mangled and not
accepted by rnews.  They then get put in news/880329whatever for later
perusal by anyone who's interested.  Well, what can we do with them?
Is there a way to recover partially mangled unbatch files, or are the
articles in the file lost forever?  It seems like there's no
alternative except to set up a costly ihave/sendme type of newsfeed if
we want to avoid losing articles like this.

	If I've made any mistakes or if I'm missing some vital info
somewhere, please excuse me.  I'm a novice at certain things, but
please help us out!  I help out other people on the net whenever I can
in areas I know about...  Thanks for reading this!

	-a.g. hirai
	hirai@swatsun.uucp
--
Eiji "A.G." Hirai @ Swarthmore College, Swarthmore PA 19081 | 215-543-9855
UUCP: {rutgers, ihnp4, cbosgd}!bpa!swatsun!hirai    | "All Cretans are liars."
Bitnet: vu-vlsi!swatsun!hirai@psuvax1.bitnet        |     -Epimenides
Internet: swatsun!hirai@bpa.bell-atl.com            |      of Cnossus, Crete
clewis@spectrix.UUCP (Chris R. Lewis) (04/05/88)
In article <1718@ilium.UUCP> hirai@swatsun.uucp (Eiji "A.G." Hirai) writes:
>	Sometimes, our unbatch files for news are mangled and not
>accepted by rnews. Then it gets put in news/880329whatever for later
>perusal of anyone who's interested. Well, what can we do with it? Is
>there a way to recover partially mangled unbatch files or are the
>articles in the file lost forever?

These files are the incoming articles, untouched.  Presumably somewhere
along the processing (uncompress, unbatch, whatever) some sort of error
occurred.  If the thing is a batch, then most likely all of the articles
up to a given point have been properly received, one article caused
rnews to complain, and the remaining articles may or may not have been
extracted.  So you might want to recover the missing ones.

What you can do is this: use "od -c" to look at the beginning of these
files.  There are quite a few things you could find.  This is not
intended to be an exhaustive list, nor precisely exact, but it should
give you some hints about how to look at the 88* files:

    1)	#! cunbatch<cr><then trailing binary goop>

	This is a 2.11 compressed batch.  The trailing goop, when
	split off and uncompressed, will probably look like (2) below.

    2)	#! rnews <number><cr><then trailing ASCII goop - looks like
	normal articles>

	This is a 2.10 or 2.11 uncompressed batch.

    3)	Headers for what looks like a normal article.

To recover the data from (1), write a small C program (you might be
able to do this with sed or something too) that simply copies the file
to somewhere and IGNORES the characters up to and including the first
newline.  Then run the resultant file thru uncompress, e.g.:

	stripcunbatch < 88<whatever> | compress -d > result

The result should look *just* like (2) - i.e., a series of articles
where each one is prefixed with something like "rnews <number>".
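A minimal sketch of such a "stripcunbatch" program follows.  The function
name and layout are my own invention; only the behavior - drop everything
up to and including the first newline, pass the rest through untouched -
comes from the description above:

```c
/* Sketch of a cunbatch header stripper: discard everything up to
 * and including the first newline (the "#! cunbatch" line), then
 * copy the remaining compressed bytes through unmodified.
 * Illustrative code, not from any actual news distribution.
 */
#include <stdio.h>

/* Returns the number of bytes copied after the header line. */
long strip_header(FILE *in, FILE *out)
{
    int c;
    long copied = 0;

    /* Skip the header line, up to and including the newline. */
    while ((c = getc(in)) != EOF && c != '\n')
        ;

    /* Pass the remaining (binary) data through untouched. */
    while ((c = getc(in)) != EOF) {
        putc(c, out);
        copied++;
    }
    return copied;
}
```

Wired to stdin and stdout by a two-line main(), this would be invoked
exactly as shown above: stripcunbatch < 88<whatever> | compress -d > result.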
What this is saying is that there is an article of <number> characters
immediately following in the batch; after it there might be another
"rnews <number>" with another article.

Then what?  Well, you could run the batch into rnews again, e.g.:

	rnews < result; rnews -U

(You might be able to issue "rnews -S < result" and defeat the
batching, but the docs say "this should never be done directly".)

But this will most likely give you exactly the same results again.
What you should do is take a look at your log or errlog file and try
to see which article in the batch rnews blew up on.  Then, break the
batch apart and manually feed the articles after the barf into rnews
directly.  If you can't find an error, try feeding the whole batch;
rnews should then give you the errors again - and create a new 88*
file again (throw it away).  Don't worry: rnews won't reinstall
articles it got successfully the first time around.

By judicious fiddling with the "rnews <number>" header or other things
you may be able to recover everything.  You'll have to experiment.
Precise recovery depends on what sort of error caused the problem in
the first place.
--
Chris Lewis, Spectrix Microsystems Inc,
UUCP: {uunet!mnetor, utcsri!utzoo, lsuc, yunexus}!spectrix!clewis
Phone: (416)-474-1955
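Breaking the batch apart can itself be automated.  The sketch below is my
own code, assuming only the batch format described above - a
"#! rnews <count>" (or plain "rnews <count>") line followed by exactly
<count> bytes of article.  It writes each article to its own file so the
ones after the bad spot can be fed to rnews one at a time:

```c
/* Sketch of a batch splitter: read "#! rnews <count>" headers from
 * `in` and write each <count>-byte article to <prefix>.0, <prefix>.1,
 * and so on.  Illustrative code, not from any actual news distribution.
 */
#include <stdio.h>

/* Returns how many articles were written out. */
int split_batch(FILE *in, const char *prefix)
{
    char line[256], name[300];
    long count;
    int c, n = 0;

    while (fgets(line, sizeof line, in) != NULL) {
        /* Accept both "#! rnews <count>" and "rnews <count>" headers. */
        if (sscanf(line, "#! rnews %ld", &count) != 1 &&
            sscanf(line, "rnews %ld", &count) != 1) {
            fprintf(stderr, "unexpected batch line: %s", line);
            break;
        }
        sprintf(name, "%s.%d", prefix, n);
        FILE *out = fopen(name, "w");
        if (out == NULL)
            break;
        /* Copy exactly <count> bytes of article text. */
        while (count-- > 0 && (c = getc(in)) != EOF)
            putc(c, out);
        fclose(out);
        n++;
    }
    return n;
}
```

With a trivial main() calling split_batch(stdin, "art"), the surviving
articles could then be fed back individually, along the lines of
"for f in art.*; do rnews < $f; done" (untested - adjust to taste).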
scott@zorch.UU.NET (Scott Hazen Mueller) (04/06/88)
In article <535@spectrix.UUCP> clewis@spectrix.UUCP (Chris R. Lewis) writes:
|	To recover the data from (1), write a small C program (you might
|	be able to do this with sed or something too) that simply
|	copies the file to somewhere and IGNORES the characters
|	up to and including the first newline.  Then run the resultant
|	file thru uncompress. eg:

Use dd(1); e.g.:

	dd if=88blea of=foo bs=1 skip=13

Trims that leading garbage off just fine.  I should know; I've done it
this way...

|Chris Lewis, Spectrix Microsystems Inc,
--
Scott Hazen Mueller   scott@zorch.UU.NET   (408) 245-9461
(pyramid|tolerant|uunet)!zorch!scott
michael@stb.UUCP (Michael) (04/10/88)
Then there are ways simpler than dd:

	tail +2 88whatever | uncompress | rnews

For some reason, every time I patch news I need to do that for a few
days.

		Michael
--
: Michael Gersten		uunet.uu.net!ucla-an.ANES\
:	ihnp4!hermix!ucla-an!denwa!stb!michael
:	sdcsvax!crash!gryphon!denwa!stb!michael
: "A hacker lives forever, but not so his free time"