de@helios.ucsc.edu (De Clarke) (03/24/91)
I have two questions for the Net. One is this: consider the case where a host (helios, for example) recognizes several remote printers:

	lwp|lw3|Printer on 3rd floor:\
		:lp=:rm=phaeton:rp=lw:
	lpp|lp3|Printer on 3rd floor:\
		:lp=:rm=phaeton:rp=lp:
	p3|Printronix P300 on loa:\
		:lp=:rm=ucscloa:rp=p3:
	lwt|lpt|Printer on thaler:\
		:lp=:rm=thaler:rp=lw:
	lws|lps|Printer on soleil:\
		:lp=:rm=soleil:rp=lw:
	lwe|lpe|Printer on eos:\
		:lp=:rm=eos:rp=lw:
	sundog|HPLJII w/ PP PS:\
		:lp=:rm=sundog:rp=lw:
	sh|printer in shops:\
		:lp=:rm=loel:rp=lp:
	int|the intex talker:\
		:lp=:rm=ninja:rp=int:

and one of those hosts is for some reason down, or the remote printer on that host is misconfigured. For example, the "int" pseudo-printer on ninja was not properly configured in ninja's printcap after a reconfiguration. Then, many weeks later, someone innocently tried to print something on that "int" device. As a result:

(1) no one can print anything on lpe from helios, and an lpq -Plpe produces a message about waiting for the queue on ninja (!);
(2) removing the job that is stuck waiting for -Pint does not solve the problem; the lpe queue is still waiting on ninja;
(3) recreating the printcap entry for int on ninja does not solve the problem; the lpe queue on helios is still stuck;
(4) killing both child lpds and their parent on helios and restarting lpd does get the queue to move, BUT
(5) the jobs which users had submitted to -Plpe then get sent, neatly and without explanation, to -Plp3.

Which means a bunch of users have had to go to a different floor to get their output, and they are not amused. We have noticed this madness several times, but this is the first time I have been able to note the details for you. It seems to us that lpd gets incredibly confused if it encounters an error during remote printing. This is an incredibly non-robust lpd.
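Incidentally, for anyone following along at home, here is what one of those entries would look like with a per-printer spool directory (:sd=) and error log (:lf=) added -- the paths here are illustrative, not our actual setup:

```
lwe|lpe|Printer on eos:\
	:lp=:rm=eos:rp=lw:\
	:sd=/usr/spool/lpd/lwe:\
	:lf=/usr/adm/lpd-errs:
```

With :sd= set, each remote queue spools into its own directory instead of sharing one, which might at least keep a stuck int queue from sitting on lpe's jobs.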
When you have 40 hosts and 10 printers situated hither and yon, you cannot have your remote printing get randomized whenever a print host goes down or someone makes a mistake with a printcap file somewhere! Any comments? Has anyone else noticed this?

(I sent this question to the Sun Hotline and got a phone message -- which I haven't yet been able to answer; *why* don't they use email???? -- which didn't make much sense to me. The guy said that we should provide a separate spool directory for each printer in the printcap file. Maybe he is right, but I didn't think you needed separate spool directories for remote printers. Am I in left field?)

---------- Second and more interesting question -------------

Is anyone out there using C2 security with the *automounter* mounting the audit log partitions? I asked Sun if I could do this (we love the automounter, 'cause we hate long NFS timeouts when a host goes away), and they said, more or less, "There's no reason why you can't do that, but we don't support it, and don't call us if you get into trouble." So if there's no reason not to do it, why are they scared to support it?

The log host needs to be the big server, but the big server doesn't necessarily have the disk needed for the audit log (I know, buy more disk... have you heard about the CA state budget lately?), so I'd like to borrow disk from less populated stations here and there. The automounter is treating us very well, and I would really like to know if there are any war stories which would indicate that using it for this purpose is unwisdom (or suicide).

If you send mail I'll get the answer sooner, and I would like that. Thanks.
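P.S. In case it helps anyone comment on the automounter question, the arrangement I have in mind is roughly the following direct map -- the map name, mount options, hostnames, and paths here are invented for illustration, not our real config:

```
# hypothetical direct automounter map (e.g. /etc/auto.audit, listed in
# the master map); each line borrows a remote partition for audit logs,
# hard-mounted so audit writes block rather than fail if the host is slow
/etc/security/audit/loa		-rw,hard	ucscloa:/export/audit
/etc/security/audit/thaler	-rw,hard	thaler:/export/audit
```

The audit daemon would then write through the automounted paths on demand instead of depending on static NFS mounts to every lender host.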