Sun-Spots-Request@RICE.EDU (William LeFebvre) (03/21/88)
SUN-SPOTS DIGEST          Saturday, 19 March 1988      Volume 6 : Issue 33

Today's Topics:
                     Multi-processing on Sun systems
           More information on 892 MB drives (and a big problem)
                   ping with subnets under SunOS 3.[45]
                  Data Representation Problems and NFS
                       Missing font for vt100 tool
                  yet another "How many users?" question
                      University Ingres on sun 4's ?
                Alternative Fixed Width Fonts for Sun 3s?
             How to get FF between jobs on the HPLJ and HPLJ+?

Send contributions to:  sun-spots@rice.edu
Send subscription add/delete requests to:  sun-spots-request@rice.edu
Bitnet readers can subscribe directly with the CMS command:
    TELL LISTSERV AT RICE SUBSCRIBE SUNSPOTS My Full Name
Recent backissues are stored on "titan.rice.edu".  For volume X, issue Y,
"get sun-spots/vXnY".  They are also accessible through the archive
server:  mail the word "help" to "archive-server@rice.edu".

----------------------------------------------------------------------

Date:    Tue, 8 Mar 88 15:48:35 EST
From:    kc@rna.rockefeller.edu (Kaare Christian)
Subject: Multi-processing on Sun systems

I have developed a set of hardware/software/firmware tools that enable me
to put additional 68020 CPUs into VME-based Sun 3 systems, such as the
3/160.  If there is general interest, I will make the extra effort in
documentation and packaging that will allow others to easily :-) use
these tools.  If, as I expect, the interest is tiny, I will arrange things
individually with those who are interested.  If you would like additional
information or a copy of the software (when it is available, real soon
now), please write to me.

Kaare Christian
kc@rna.rockefeller.edu
cmcl2!rna!kc

------------------------------

Date:    Tue, 8 Mar 88 20:53:38 EST
From:    Steve M. Burinsky <smb@mimsy.umd.edu>
Subject: More information on 892 MB drives (and a big problem)

I have been able to dig up some information on Sun's new drives that may
be of interest.  I only have experience with one of them.  And, of course,
there are some problems...

Sun has selected and qualified the NEC D2363 and the Hitachi DK815-10.
For interchangeability reasons, Sun formats them both to 892 MB.  More on
this later.  The DK815-10 has 1737 cylinders, 15 heads, and 68 sectors per
track, at 600 bytes per sector (including overhead).  The D2363 has 1024
cylinders, 27 heads, and 68 sectors per track, at 600 bytes per sector
(including overhead).

The SunOS 3.5 diag program formats the DK815-10 with 1735 cylinders, 2
alternates, 15 heads, 67 sectors per track (one reserved for slipping),
600 bytes per sector (including overhead), interleave 1, and type 1.  The
D2363 is formatted with 964 cylinders, 2 alternates, 27 heads, 67 sectors
per track (one reserved for slipping), 600 bytes per sector (including
overhead), interleave 1, and type 2.  The reason Sun uses only 964
cylinders on the D2363 is to make it (approximately) the same size as the
DK815-10.  However, this throws away about 50 MB.  We use 1022 cylinders,
since (1) we buy our disks straight from NEC and (2) we only have Sun
supporting the controllers.

All the messages I've seen about these drives (including the ones from
Sun) call them 892 MB drives.  Maybe I forgot how to multiply, but I come
up with the following formatted capacities:

    D2363    (1022 cyl):  902.7 MB
    D2363    ( 964 cyl):  851.5 MB
    DK815-10 (1735 cyl):  851.4 MB
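
[[ A quick check of the arithmetic above, assuming each 600-byte formatted
sector carries 512 bytes of user data and that "MB" here means 2^20 bytes;
this sketch reproduces the figures to within rounding in the last digit. ]]

    #include <stdio.h>

    /* formatted capacity = cylinders * heads * (data sectors/track) * 512 */
    static void capacity(const char *name, long cyls, long heads, long sectors)
    {
        double bytes = (double)cyls * heads * sectors * 512.0;
        printf("%-10s (%4ld cyl): %7.1f MB\n",
               name, cyls, bytes / (1024.0 * 1024.0));
    }

    int main(void)
    {
        capacity("D2363",    1022, 27, 67);     /* as we format it   */
        capacity("D2363",     964, 27, 67);     /* as Sun formats it */
        capacity("DK815-10", 1735, 15, 67);     /* as Sun formats it */
        return 0;
    }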
And now the problems.  When we first formatted our D2363's, we used 68
sectors per track.  We thought diag was smart enough to leave one sector
for slipping, which it doesn't.  So we ended up with no slip sectors.
Here's how a couple of the file systems worked out:

    xy0a:   9 cyl, 68 sec,   3840 inodes,   7717 kB  -->  2.01 kB/inode  (good)
    xy0g: 495 cyl, 68 sec, 214272 inodes, 425624 kB  -->  1.99 kB/inode  (good)

These kB/inode numbers are typical.  Mkfs defaults to 2048 bytes per inode
if it can.  I've seen kB/inode ratios as high as 2.42 on SCSI disks
(sd0a).  DEC RM05's max out at about 2.24; DEC RA81's max out at about
2.66.  The numbers seem to get worse with bigger drives.

When we noticed the missing slip sectors, we reformatted with 67 sectors
per track.  Now those file systems looked like this:

    xy1a:   9 cyl, 67 sec,   2048 inodes,   7851 kB  -->  3.83 kB/inode  (bad!)
    xy1g: 493 cyl, 67 sec,  63488 inodes, 437469 kB  -->  6.89 kB/inode  (very bad!!)

We didn't bother to check the number of inodes in the file systems until
we ran out while loading the SunOS distribution tape.  We tried virtually
every possible combination of mkfs options to try to raise the number of
inodes, but to no avail.

The problem is in mkfs.  In the 68-sector case, we specified 4 cylinders
per cylinder group with no problems.  The default is 16, but that would
have left us with too few inodes.  In the 67-sector case, mkfs would not
allow anything less than 16 cylinders per cylinder group; hence the result
shown above.  I stared at the mkfs code until I went cross-eyed, and the
fix is not apparent.  The problem has to do with how mkfs chooses the
number of cylinders per cylinder group.  Since 68 = 2*2*17, mkfs allows
the number of cylinders per cylinder group to be a multiple of 16/(2*2) =
4.  But 67 is odd, so mkfs forces multiples of 16 cylinders per cylinder
group.

It seems to me that all big drives will have this problem.  As the
capacity of a cylinder increases, so must the number of inodes; but mkfs
lower-bounds the number of cylinders per cylinder group, which
upper-bounds the number of cylinder groups.  Mkfs also upper-bounds the
number of inodes per cylinder group.  The result is an upper bound on the
number of inodes in the file system.

We contacted Sun about this, and they agree it is a problem.  Two weeks
later we have heard no reply.  Mark Weiser at Xerox PARC tells me that
Super Eagles also have 67 sectors per track, and mkfs works fine on them.
I don't know the answer to that one; I have no Super Eagles and don't
understand mkfs well enough.  I'd be interested to hear from anyone who
has run into this problem before or has other advice to offer.  I have
forwarded this message to sunbugs.

Steve
smb@mimsy.umd.edu

------------------------------

Date:    Tue, 8 Mar 88 08:18:13 EST
From:    steve@cs.umd.edu (Steven D. Miller)
Subject: ping with subnets under SunOS 3.[45]

Now that my 3/60 has arrived, I've had to wade in and deal with SunOS 3.5.
It (and, I think, everything since SunOS 3.4) exhibits two bugs when it
comes to raw IP sockets:

1) Regardless of whether or not SO_DONTROUTE is set in the socket options
   field, the rip_output() routine forces ip_output() to avoid the full
   routing table lookup.

2) Ip_output(), when called with SO_DONTROUTE/IP_ROUTETOIF (as
   rip_output() does), does what all the 4.2BSD implementations before it
   have done: it looks for an interface with the same net number as the
   destination, and if it can't find one, it returns ENETUNREACH.  That's
   all fine and good, but even on a subnetted network it is still looking
   for an interface with the same *net number* (not net+subnet) as the IP
   destination.  Of course, it finds an interface, and it proceeds to ARP
   for the destination address... even though that address is not on the
   local subnet.
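
[[ To illustrate the distinction, here is a purely user-level sketch; the
addresses, the mask, and the net_of() helper are invented for the example
and are not taken from the SunOS sources.  A check on the class-derived
net number alone says the destination is local (so the kernel ARPs for
it), while a check against the full net+subnet mask says it is not (so
the packet should go to a gateway). ]]

    #include <stdio.h>
    #include <arpa/inet.h>          /* inet_addr() */
    #include <netinet/in.h>         /* ntohl()     */

    /* Class-derived net number, the way pre-subnet 4.2BSD code sees it. */
    static unsigned long net_of(unsigned long a)    /* host byte order */
    {
        if ((a & 0x80000000UL) == 0)
            return a & 0xff000000UL;                /* class A */
        if ((a & 0xc0000000UL) == 0x80000000UL)
            return a & 0xffff0000UL;                /* class B */
        return a & 0xffffff00UL;                    /* class C */
    }

    int main(void)
    {
        unsigned long ifaddr  = ntohl(inet_addr("128.8.128.8"));  /* local interface */
        unsigned long dst     = ntohl(inet_addr("128.8.130.3"));  /* ping target     */
        unsigned long netmask = 0xffffff00UL;                     /* net+subnet mask */

        printf("same net number?  %s\n",
               net_of(dst) == net_of(ifaddr) ? "yes (so ARP directly)" : "no");
        printf("same net+subnet?  %s\n",
               (dst & netmask) == (ifaddr & netmask) ? "yes" : "no (should be routed)");
        return 0;
    }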
It should be noted that both of these bugs are more-or-less conjecture; I
don't have 3.4 or 3.5 sources yet, but I do know that I fixed the 3.2
rip_output() routine, dropped it in place, and ping started to work across
subnet gateways.  Fixing the other bug requires more sources than I have,
and my suggested fix for bug #2 is really just an educated guess.

I've got a call in to Sun about this problem.  If I'm given a bug fix
number (or whatever it is they give out), I'll pass it along to the net.
Also note that routing for everything other than raw IP sockets (i.e.,
TCP, UDP, NFS, and even kernel-generated ICMP) basically works.

	-Steve

Spoken: Steve Miller    Domain: steve@mimsy.umd.edu   UUCP: uunet!mimsy!steve
Phone: +1-301-454-1808  USPS: UMIACS, Univ. of Maryland, College Park, MD 20742

[[ I believe that this has been noted before on Sun-Spots.  Refer to
volume 6, issue 25.  --wnl ]]

------------------------------

Date:    8 Mar 88 21:59:14 GMT
From:    boulder!halls@hao.ucar.edu (Andy Halls)
Subject: Data Representation Problems and NFS

We have a set of applications that save data to files in "binary" form;
that is, numbers are stored in files in the native representation used by
the computer.  We wish to share these files in a heterogeneous environment
via NFS.

My understanding is that NFS provides no support for handling this
problem.  Any knowledge about what kind of data is in the file is squished
out by the byte-stream abstraction that NFS does support.  It appears that
the applications will have to handle the differences themselves.  XDR
provides the tools, but it would require a significant effort to add it to
the applications.  Besides, the applications save in binary format for
performance reasons, so a "receiver makes right" technique might be more
effective.

I understand that Sun is coming out with a 386-based machine.  As I
recall, the byte order on the 386 is the opposite of the 68xxx.  Is Sun
providing any tools to support "binary" data files on heterogeneous
networks?

I'm soliciting general comments on my observations: do I have the right
idea?  Do you have a slick method for solving this problem, and if so,
care to share it?

Thanks,
Andy Halls
phone: home (303) 455-9139   work (303) 282-2166
uucp: {cires | hao | nbires}boulder!halls
internet: halls@boulder.colorado.edu

[[ The approved solution adopted by the Internet is to always write binary
numbers in "network byte order".  Every BSD Unix system (4.2 and newer)
defines routines to translate between local and network byte order: htonl,
htons, ntohl, and ntohs.  Look at the manual page "byteorder(3N)".  I'm
not sure how to handle machines whose word size is not a power of two
(e.g., 36 bits).  By the way, network byte order is big-endian, the same
as the Sun.  --wnl ]]
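
[[ A minimal sketch of that approach (the value and the 32-bit assumption
are illustrative only): the writer converts to network order with htonl()
before the bytes go into the file, and the reader converts back with
ntohl() after reading them, so big- and little-endian machines see the
same number. ]]

    /* Sketch: store a 32-bit number in network byte order.  The 4-byte
     * buffer stands in for the bytes actually written to and read from
     * the shared file.  Assumes unsigned int is 32 bits. */
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>         /* htonl(), ntohl() */

    int main(void)
    {
        unsigned char buf[4];       /* what would go into the file */
        unsigned int  value = 123456789;
        unsigned int  wire;

        wire = htonl(value);        /* writer: host -> network order */
        memcpy(buf, &wire, 4);

        memcpy(&wire, buf, 4);      /* reader, possibly opposite byte order */
        printf("read back %u\n", ntohl(wire));
        return 0;
    }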
------------------------------

Date:    Wed, 2 Mar 88 18:53:05 EST
From:    eric@eddie.mit.edu (Eric Van Tassell)
Subject: Missing font for vt100 tool

Hi,

I ftp'ed the vt100 tool and it pisses and moans about not having
"vt100font".  Where do I get said font?  TIA

eric@eddie.mit.edu

------------------------------

Date:    Mon, 07 Mar 88 17:42:36 EST
From:    Preston Mullen <mullen@nrl-css.arpa>
Subject: yet another "How many users?" question

In v6n24, wucs1!br@uunet.uu.net (Bill Ross) asked for an assessment of how
many users a Sun-4 could support for timesharing.  I have been asked to
pass on a more specialized question, since I can't answer it:  How many
simultaneous users can a Sun-4 support as a domain server, mail relay, and
host for generating and reading electronic mail and news?

o  As a domain server, it would be authoritative for about 250 hosts and
   about 1500 mailboxes.

o  As an SMTP mail relay, it would receive perhaps 8 messages per minute
   in peak periods, about 2000 messages per day.  More than half of the
   incoming messages would be relayed to other Internet hosts.

o  A variety of editors (including GNU Emacs) and mail systems (Berkeley,
   MH) would be used.  News might be read with the standard news program
   or with MH.  Perhaps two thirds of the mail sent by users of the mail
   machine would go out to other machines.

o  Most of the users would be scientist-managers (not students or
   hackers); all would access the machine over a local network using
   telnet or rlogin.  Good interactive response time is important;
   adequate performance is defined as enough to avoid complaints from
   users accustomed to doing the same things on their own Sun 3/50
   workstations.

o  Users will not be compiling or doing arbitrary computations, but
   occasional use of grep or equivalents (MH pick) is not ruled out.

My own wild guess is that a 4/280 with 32 MB (or more) of memory and fast
1 Gbyte disks could support between 100 and 200 such simultaneous users,
but I am not sure which extreme is more likely.  I'd also be interested in
a comparison of the 4/110 and the 4/260.  Would it make more sense to
spread this load over 2 or 3 4/110 servers, each with a couple of the 327
Mbyte SCSI disks?  Finally, how would something like this scale down to
3/60s filled with memory?  10 users?  20?

I am especially interested in hearing from people who have already done
something like this, on any scale.  Thanks.

------------------------------

Date:    Tue, 8 Mar 88 13:23:44 EST
From:    Robin Rohlicek <rohlicek@bbn.com>
Subject: University Ingres on sun 4's ?

Has anyone successfully compiled University Ingres from the SUG tape on a
Sun 4?  My experience is that it compiles but then doesn't function
properly, whereas on a Sun 3 everything seems fine.  I haven't had a
chance to track down the problem yet...

------------------------------

Date:    Tue, 8 Mar 88 16:50 EDT
From:    VERMETTE@sdr.slb.com
Subject: Alternative Fixed Width Fonts for Sun 3s?

Anybody out there know of any alternative fonts for Sun 3s?  I'm part of a
gang porting a large VMS Fortran code that formerly used a Raster
Technologies Model One/80 for display, and on that device we used fonts
from a collection called (I think!) the Stanford KST font set.  Ideally,
I'd like THOSE fonts on my Sun.  If I can't have that, I'd settle for two
fonts: one good 19 pt italic font and one very large font.  Can any of you
good people help?

And while I'm asking questions: it seems, from looking at the man page for
fontedit, that there's a size limit of 24 points for a font.  Anyone know
a way around this?

If you've got answers, please email me at vermette@slb-sdr.  If there are
multiple responses I'll summarize them for this forum at a later date.

adTHANKSvance,
Mark Vermette
(203) 431 - 5555

------------------------------

Date:    Tue, 8 Mar 88 17:34:58 MST
From:    dbd%benden@lanl.gov (Dan Davison)
Subject: How to get FF between jobs on the HPLJ and HPLJ+?

We've got two HPLJ+ printers attached to our Sun network, and they have a
problem: we occasionally get more than one print job on a page.  The
brain-damaged HPLJ doesn't dump its buffer for printing until it gets a
FF.  Our printcap entry looks like this:

# Printcap entry for HP laserjet on /dev/ttyb
# The system manual recommends
#	fs#6020:fc#0300:xs#040
# That combination does not work.  The one below does.
#
south|hp|lp|HP LaserJet next to Christian's office:\
	:lp=/dev/ttyb:sd=/usr/spool/south:lf=/usr/adm/hp.errs:\
	:br#9600:rw:fc#0300:mx#0:tr=\f:sb:sf:fo:

Note the "fo": the manual (printcap(5)) says "print a form feed when the
device is *opened*" (emphasis added).  This clearly works, because if a
few seconds (how many depends, apparently, on the phase of the moon and
the local high tide) elapse between *printing* of jobs, we get blank
pages.  But if you "lpr" three files in a row, they are frequently
concatenated; obviously the device is not closed and reopened while there
are jobs waiting in the print queue.  [We'd also like to get tabs
expanded; they aren't now.]

Someone must have dealt with this problem before.  Is there any way to
make sure each job gets printed on a separate piece of paper?  Please
e-mail replies to me and I will summarize.  Thanks muchly.

dan davison / theoretical biology / los alamos national laboratory
dbd@benden.lanl.gov     dd@lanl.gov

[[ You might start by removing "sf" and seeing what effect that has.  You
may also have to write a simple "input filter" that does little more than
write a form feed and copy its input to its output.  As for expanding
tabs, you only want that done when you are using the printer as a line
printer; if you are printing a troff-ed document or graphics, you
absolutely do not want automatic tab expansion.  So tab expansion is best
handled before the job is queued.  --wnl ]]
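
[[ For what it's worth, here is a minimal sketch of the kind of input
filter described above; the name hpff and the "if=" path are made up.  It
simply copies the job through and appends a form feed so the LaserJet
ejects the last page of every job.  It would be hooked into the printcap
entry with something like ":if=/usr/local/lib/hpff:". ]]

    /*
     * hpff.c -- minimal lpd input filter sketch: copy the job from
     * stdin to stdout (the printer), then append a form feed so the
     * LaserJet prints whatever is left in its buffer.
     */
    #include <stdio.h>

    int main(void)
    {
        int c;

        while ((c = getchar()) != EOF)
            putchar(c);
        putchar('\f');      /* eject the final page of this job */
        return 0;           /* exit 0 tells lpd the job printed OK */
    }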
------------------------------

End of SUN-Spots Digest
***********************