jeremym@syma.sussex.ac.uk (Jeremy Maris) (01/15/91)
We used macdump with Ultrix 3.1 + UAB + CAP with no problems.  It took
about 25 minutes to back up a 40Mb disk on a MacII via Ethernet/UAB,
~25k/sec.

I recently upgraded to Ultrix 4.1, built the ru-cap2 distribution and
recompiled/linked uab+.  Now it takes about 5 HOURS to back up 40Mb, or
~2k/sec.  AUFS is also much slower - a 630K file takes about 2.5 minutes
to copy, or ~4k/sec - ftp across the Atlantic is quicker than this!
DECnet and ftp transfers are unaffected at ~50K/sec, so it isn't a bad
ethernet segment.

Has anyone come across similar problems?
--
Jeremy Maris, Experimental Psychology, University of Sussex, Brighton, England.
Janet:  jeremym@uk.ac.sussex.syma
Nsfnet: jeremym@syma.sussex.ac.uk
Bitnet: jeremym%sussex.syma@ukacrl.bitnet
UUCP:   ...ukc!syma!jeremym
tih@barsoom.nhh.no (Tom Ivar Helbekkmo) (01/16/91)
jeremym@syma.sussex.ac.uk (Jeremy Maris) writes:

> [ru-cap2/uab+ is *much* slower on Ultrix 4.1 than Ultrix 3.1]
>
> Has anyone come across similar problems?

Yes, I'm seeing the same thing here.  Enormous performance loss after I
upgraded to Ultrix 4.1 on my DECstation 2100.  I've recompiled with the
version 2.1 cc, but to no avail.  Seems to me the problem is in uab+,
since that's what's using the most CPU time here.

Any pointers from someone with more knowledge than myself would be
appreciated!

-tih
--
Tom Ivar Helbekkmo, NHH, Bergen, Norway.  Telephone: +47-5-959205
tih@barsoom.nhh.no, thelbekk@norunit.bitnet, edb_tom@debet.nhh.no
hedrick@athos.rutgers.edu (Charles Hedrick) (01/17/91)
We think UAB is a bad idea.  When you use it, every packet must be
routed through a UAB, which runs as a separate process.  This doubles
the number of process activations needed to do a given task, and
increases the amount of I/O.  If a new release slows down performance
using UAB, a good bet is that they've either increased the overhead of
process activations (a common side effect of creeping featurism in Unix
implementations) or done something that increases the latency of
activating processes.  (You could at least experiment with running UAB
with nice -8 or something like that.)

This is why we moved Ethertalk into the CAP code.  Unfortunately we
only did the implementation for Sun and Pyramid, because that's what
we've got.  But I believe Ultrix has the facilities needed for a port.
(The critical resource is /dev/enet, which supposedly is in the newest
Ultrix.  Even if it's not, Berkeley just released a successor to
/dev/enet that is known to run on Ultrix.)  Thus I strongly recommend
that anyone concerned about CAP performance on Ultrix port the
Ethertalk code to Ultrix.  We'd be happy to take back the changes.

Originally the code was fairly unportable, as we used Sun-specific
features.  But now that it's been ported to the Pyramid, I suspect an
Ultrix port would be fairly easy.  The main problem was the use of
mmap.  However, for the Pyramid port we've replaced that with System V
shared memory, which I believe Ultrix supports.  We also use Sun RPC,
but Ultrix definitely has that.

Generally the problem in porting our Ethertalk support to new systems
is finding a way to get the kernel to hand you incoming Ethertalk
packets.  Any system which will use /dev/enet, or Berkeley's new BPF,
should work.  I've looked at a few other systems we have around, and
concluded that (at least in the releases we have) Pyramid (unless you
have source, as we do), Convex and NeXT will not allow Ethertalk to be
implemented, while Ultrix and SGI will.  SGI doesn't have /dev/enet,
but it does have a pseudo-device intended for network monitoring
software that lets you specify which packets you want to see by giving
a packet prototype and masks.  That should be good enough.  If you have
source, /dev/enet is quite simple to install in any Unix system I can
imagine.  (Probably even multiprocessor systems -- we have a version of
/dev/enet that runs on a Pyramid multiprocessor, so we've had to
identify where locks are needed on data structures.)
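[Editor's note: to make the "get the kernel to hand you incoming Ethertalk
packets" idea concrete, here is a minimal sketch of receiving frames through
Berkeley's BPF.  This is NOT the CAP code; the device path /dev/bpf0, the
interface name ln0, and the use of the Phase 1 EtherTalk Ethernet type 0x809b
are assumptions for illustration.  A real port would hand each captured frame
to CAP's Ethertalk layer instead of just counting bytes.]

	/*
	 * Sketch: open a BPF device, bind it to an interface, and install
	 * a filter that passes only frames whose Ethernet type field
	 * (offset 12) is 0x809b (Phase 1 EtherTalk).
	 */
	#include <sys/types.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <net/bpf.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int
	main(void)
	{
		static struct bpf_insn insns[] = {
			BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),
			BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 0x809b, 0, 1),
			BPF_STMT(BPF_RET + BPF_K, (u_int)-1),	/* accept packet */
			BPF_STMT(BPF_RET + BPF_K, 0),		/* drop packet */
		};
		struct bpf_program prog;
		struct ifreq ifr;
		u_int blen;
		char *buf;
		int fd, n;

		fd = open("/dev/bpf0", O_RDWR);		/* assumed device node */
		if (fd < 0) {
			perror("/dev/bpf0");
			return 1;
		}

		strcpy(ifr.ifr_name, "ln0");		/* assumed interface name */
		if (ioctl(fd, BIOCSETIF, &ifr) < 0) {
			perror("BIOCSETIF");
			return 1;
		}

		prog.bf_len = sizeof(insns) / sizeof(insns[0]);
		prog.bf_insns = insns;
		if (ioctl(fd, BIOCSETF, &prog) < 0) {
			perror("BIOCSETF");
			return 1;
		}

		/* BPF reads must use the kernel's buffer size. */
		if (ioctl(fd, BIOCGBLEN, &blen) < 0 ||
		    (buf = malloc(blen)) == NULL) {
			perror("BIOCGBLEN/malloc");
			return 1;
		}

		/* Each read returns one or more bpf_hdr-prefixed frames. */
		while ((n = read(fd, buf, blen)) > 0)
			printf("got %d bytes of EtherTalk\n", n);

		return 0;
	}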