[comp.sys.apollo] Need help with lpr

kerr@tron.UUCP (Dave Kerr) (07/17/90)

Fellow apollo users:

I'm trying to set up lpr/lpd on our apollo nodes and have a few
questions that I'm hoping somebody out there can help me with.

First some background:
  Most nodes are running sr10.1, but we have some 9.7 nodes too.

  Our local printers are LZR26 postscript printers.

  I'm using the :pc: option in /etc/printcap to pass off the files
  to /com/prf for printing to local apollo printers (since apollo 
  doesn't supply filters for lpr to work with postscript printers).

  The reason I want lpr is so that I can print to printers on remote
  (non-apollo) unix machines, and print from remote machines to our
  printers.
 
  The plan was to have one node per printer acting as a spooling node
for lpr. Other nodes in the general area of the printer would link their
/usr/spool/lpd directory to the spooling node for that printer. This is
the way we've done it for Aegis in the past.

   The lpr manuals indicate that this is an acceptable configuration, but
at 10.1 lpr is broken! The problem is that if you run lpr on a node
that isn't running lpd, you get an error that it can't start the daemon.
Apollo's *workaround* is to run lpd on every node (their solution is
"fixed at 10.2").

I've thought about running lpd on every node with the spool directories
linked back to a master node, and it seems that will cause problems.
I'm not sure, but I'd like to head off trouble before the fact.  My
concern is that there is a lockfile created in /usr/spool/lpd/<printer>
that contains the PID of an lpd daemon. If I've got several nodes running 
lpd, I'm bound to get two lpd's (on different nodes) that end up with the 
same PID. That seems like a problem to me. 

Can anybody shed some light on the proper configuration of lpr/lpd on 
an sr10.1 apollo system?

Thanks in advance,
Dave Kerr

-- 
Dave Kerr (301) 765-4453 (WIN)765-4453
tron::kerr                 Internal WEC vax mail
kerr@tron.bwi.wec.com      from an Internet site
kerr@tron.UUCP             from a smart uucp mailer

pcc@apollo.HP.COM (Peter Craine) (07/18/90)

In article <613@tron.UUCP>, kerr@tron.UUCP (Dave Kerr) writes:
|> 
|> I'm trying to set up lpr/lpd on our apollo nodes and have a few
|> questions that I'm hoping somebody out there can help me with.
|> 
|> First some background:
|>   Most nodes are running sr10.1, but we have some 9.7 nodes too.
|> 
|> ( . . . Stuff deleted . . . )
|>
|> Can anybody shed some light on the proper configuration of lpr/lpd on 
|> an sr10.1 apollo system?
|> 
|> Thanks in advance,
|> Dave Kerr

Let's see if I can get you on your way.  The general rules of running lpd at
sr10.1:

    1) every node has its own local spooling directory

    2) every node runs lpd

    3) you need tcp/ip configured properly for all the nodes in question.

    4) every node needs the "right" /etc/printcap.


The first condition is fairly simple, provided you don't have diskless nodes.
If you have all disked nodes, then just make sure that they all have their own
/usr/spool/lpd tree.  I'll go over the files/directories later.  If you have
diskless nodes, /usr/spool/lpd will have to be a link to
`node_data/usr.spool.lpd (or some such place) and all `node_datas have to have
the appropriate trees.  I leave the actual work here as an exercise for the
student.
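
For a diskless node, that works out to something roughly like this (I'm
assuming the BSD environment's ln -s, and the `node_data subdirectory name
is only an example -- use whatever per-node spot you like):

    # Point /usr/spool/lpd at a per-node area.  This assumes /usr/spool/lpd
    # isn't already a real directory; move it out of the way first if it is.
    # The single quotes keep the shell from eating the backquote.
    ln -s '`node_data/usr.spool.lpd' /usr/spool/lpd

    # Then, on each diskless node, create its own tree.
    mkdir '`node_data/usr.spool.lpd'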

The second condition is equally simple.  touch /etc/daemons/lpd (once the
printcap file is in place), then reboot (or start it by hand, if you're daring)
and away you go.
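
In other words (I'm assuming lpd lives in the usual BSD place,
/usr/lib/lpd -- check where it is on your system):

    # have lpd start at every boot
    touch /etc/daemons/lpd

    # or, for the daring, start it by hand right now
    /usr/lib/lpd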

From here, the conditions become a bit more complex.  I won't begin to describe
how to set up tcp/ip.  Let's just say that it must be done first.  On your sr9.7
nodes, make sure that you're running version 3.1 of tcp/ip.  **NOTE** For lpd
to work properly, every host that will run lpd must be "equivalenced".  If this
isn't done, lpd's won't talk to each other.
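
If "equivalenced" means the usual BSD host-equivalence files (that's my
assumption -- check your tcp/ip documentation), then every node running lpd
lists the others, something like:

    # /etc/hosts.equiv on each node that runs lpd -- one host name per line
    # (the host names here are made up)
    # /etc/hosts.lpd may also do, if you only want to grant printing access
    node1
    node2
    node3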

Because all the printing is really done by prf, there are two ways of setting
up printcap.  One technique allows you to use the same printcap file on
every node (probably the easier way).  The other technique is a bit more
flexible and is JLRU (Just Like Real Unix), but it's a royal pain: every host
uses its own printcap.

We'll examine both of these techniques with the following example.  Consider
a network with three nodes:  //foo, //bar, and //mung.  //foo and //bar each
have one printer, and //mung doesn't have any.  For the sake of simplicity,
we'll call the printer on //foo foo, and the printer on //bar bar.

Technique # 1:
  /etc/printcap contains something like the following:

lp|foo|printer on //foo:\
	:pc=/usr/apollo/bin/prf -banner off -headers off -site //foo -pr foo:\
	:lp=/dev/null:\
	:sd=/usr/spool/lpd/lp:\
	:lf=/usr/adm/lpd-errs:
bar|printer on //bar:\
	:pc=/usr/apollo/bin/prf -banner off -headers off -site //bar -pr bar:\
	:lp=/dev/null:\
	:sd=/usr/spool/lpd/bar:\
	:lf=/usr/adm/lpd-errs:

Only one node requires the physical printcap file (let's say, for the sake of
argument, that it lives on //foo), and the other two nodes (//bar and
//mung) have links pointing to the real printcap file (/etc/printcap ->
//foo/etc/printcap).
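
The link itself is just something like (assuming symbolic links; remove any
existing /etc/printcap on those nodes first):

    # on //bar and //mung only -- //foo keeps the real file
    ln -s //foo/etc/printcap /etc/printcap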

Please note that the default printer destination for EVERY node is //foo,
because printer name 'lp' (the default for lpr) is assigned to //foo.


Technique # 2:

In this case, each node will need its own printcap file.  All the lpd's will
talk to each other over TCP/IP.  For example, if someone on //foo wanted to
print something on //bar's printer, lpr on //foo places an entry in its local
queue; the lpd on //foo sends the request to the lpd on //bar; and the lpd on
//bar hands the request to prf to print the job.

Why go through all this extra work?  Well, technique # 1 works ONLY if prf
is doing all the work.  If any printer is actually being serviced by lpd
itself, then technique # 1 won't work (because the lpd doing the work is the
only one that can talk to the device).  Also, technique # 2 allows some
abstraction: each lpd doesn't need to know what's actually talking to the
printer.

Here are the /etc/printcap files for the above example:

printcap on //foo:

lp|foo|printer on //foo:\
	:pc=/usr/apollo/bin/prf -banner off -headers off -site //foo -pr foo:\
	:lp=/dev/null:\
	:sd=/usr/spool/lpd/lp:\
	:lf=/usr/adm/lpd-errs:
bar|printer on //bar:\
	:lp=:rm=bar:rp=bar:\
	:sd=/usr/spool/lpd/bar:\
	:lf=/usr/adm/lpd-errs:


printcap on //bar:

foo|printer on //foo:\
	:lp=:rm=foo:rp=foo:\
	:sd=/usr/spool/lpd/foo:\
	:lf=/usr/adm/lpd-errs:
lp|bar|printer on //bar:\
	:pc=/usr/apollo/bin/prf -banner off -headers off -site //bar -pr bar:\
	:lp=/dev/null:\
	:sd=/usr/spool/lpd/lp:\
	:lf=/usr/adm/lpd-errs:


printcap on //mung:

foo|printer on //foo:\
	:lp=:rm=foo:rp=foo:\
	:sd=/usr/spool/lpd/foo:\
	:lf=/usr/adm/lpd-errs:
lp|bar|printer on //bar:\
	:lp=:rm=bar:rp=bar:\
	:sd=/usr/spool/lpd/bar:\
	:lf=/usr/adm/lpd-errs:


Here we see that each node knows about both printers.  The printer on
//foo is the default printer only for //foo; the printer on //bar is the
default printer for both //bar and //mung.
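
So, sitting on //mung, you'd get something like this (the file name is just
an example):

    lpr report.ps            # goes to 'lp', i.e. the printer on //bar
    lpr -Pfoo report.ps      # explicitly asks for the printer on //foo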


Contents of /usr/spool/lpd:

When all is said and done, /usr/spool/lpd needs to contain the directories
mentioned in all the "sd" entries in the printcap used by that node.
The file "servername" had better not be there at all.
Files will be created in /usr/spool/lpd/<whatever> dynamically.  Everything
in /usr/spool/lpd/... should be owned by user daemon, group daemon.  Otherwise,
things may not work as expected.  The initial acl's should be set for BSD
inheritance (chacl -R -B /usr/spool/lpd).
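
As a concrete (made-up) example, setting up //mung for the technique # 2
printcap above might look like:

    cd /usr/spool/lpd
    mkdir foo bar                    # one directory per "sd" entry
    chown daemon . foo bar           # everything owned by user daemon...
    chgrp daemon . foo bar           # ...and group daemon
    chacl -R -B /usr/spool/lpd       # BSD inheritance for the initial acl's
    rm -f servername                 # this file must NOT exist at sr10.1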


** SR10.2:

The world has been made easier at SR10.2.  Just link all the /usr/spool/lpd's
to one node, create /usr/spool/lpd/servername, which contains the name of the
ONE node running lpd -- ** NO LEADING // ** -- make sure that lpd runs on
that one node, and you're all set.  Doing this places restrictions on how you
can run multiple lpd's, should you ever need to, but I won't go into that here
(especially since I stopped supporting lpd about 6 months ago).
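
For example (//foo as the one spooling node is, again, just an illustration):

    # on every node EXCEPT //foo: point the spool area at the server node
    # (move any existing local spool directory aside first)
    ln -s //foo/usr/spool/lpd /usr/spool/lpd

    # on //foo only: record the server's name (no leading //) and run lpd
    echo foo > /usr/spool/lpd/servername
    touch /etc/daemons/lpd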


That should get you going.  I hope I've not made any big typos/mistakes.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Peter Craine                +  "Sometimes you have to slap them in the face
    Hewlett-Packard             +       to get their attention."
    Chelmsford Response Center  +  *I* don't want my opinions.  Why would HP?

mike@tuvie (Inst.f.Techn.Informatik) (07/21/90)

In article <4ba5d449.20b6d@apollo.HP.COM> pcc@APOLLO.COM writes:
>** SR10.2:
>
>The world has been made easier at SR10.2.  Just link all the /usr/spool/lpd's
>to one node, create /usr/spool/lpd/servername, which contains the name of the
>ONE node running lpd -- ** NO LEADING // ** -- make sure that lpd runs on
>that one node, and you're all set.  Doing this places restrictions on using
>multiple LPD's (how you can have multiple LPD's, if necessary), but I won't
>go into that here  (especially since I stopped supporting lpd about 6 months
>ago).
>
I think this is *NOT* the best solution to this problem: what happens
if the server node is down?  With the standard lpr/lpd setup the job is
queued locally, but this way you'll get an error and that's that.
I wish Apollos could run vanilla BSD UNIX without screwing up
UNIX features. I do not want an improved UNIX, I WANT UNIX!

                                bye,
                                        mike
       ____  ____
      /   / / / /   Michael K. Gschwind             mike@vlsivie.at
     /   / / / /    Technical University, Vienna    mike@vlsivie.uucp
     ---/           Voice: (++43).1.58801 8144      e182202@awituw01.bitnet
       /            Fax:   (++43).1.569697
   ___/