[comp.unix.wizards] Is anyone using DMR-11's under 4.3 or Ultrix?

arnold@emory.uucp (Arnold D. Robbins {EUCC}) (07/23/87)

We are trying to connect two vaxen using DMR-11s. One is a 780 running
Mt. Xinu's 4.3 + NFS, the other is a 750 running Ultrix 1.2.

The two machines don't want to talk to each other; the 780, at least, will
eventually crash from a lack of mbufs. Here is the relevant part of the
/etc/rc.local file:

	# set hostname and config interlan and s/w loopback
	hostname emoryu2
	ifconfig il0 netmask 0xffff0000 `hostname` broadcast 128.140.0.0
	ifconfig lo0 localhost
	route add `hostname` localhost 0
	hostid `hostname`
	
	# set up dmr
	/etc/ifconfig dmc0 inet emoryu2-dmr emcard-dmr netmask 0xffff0000
	/etc/route add emcard emoryu2-dmr 0
	
	/etc/route add 0 emoryu1 1	# for our X.25net csnet connection
	
With this setup, neither side of the DMR responds to a ping. I can do a

	route add emoryu2-dmr localhost 0

and then the local side of the dmr responds, but the remote does not.
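
For anyone comparing notes, here is roughly what I know to check (stock
netstat and ping, with the same host names as in the rc.local above):

	netstat -in		# is dmc0 up, and are packets moving, or just Ierrs/Oerrs?
	netstat -rn		# is there a route to emcard-dmr over the dmc0 address?
	ping emcard-dmr		# the far side of the link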

We have a Class B network number, 128.140, if that makes any difference.
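
One thing I'm not sure about: 4.3 defaults to the all-ones broadcast
form, while the il0 line above uses the old 4.2-style all-zeros form.
If that matters here, the line would presumably become something like

	ifconfig il0 `hostname` netmask 0xffff0000 broadcast 128.140.255.255

though I have no evidence yet that this is related to the DMR problem.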

Anyway, if you are successfully using a DMR-11 to talk to another vax
under 4.3 and/or Ultrix, please let me know what you're doing right and
what I'm doing wrong. (There are days when I'd rather use a tin can and string...)

Thanks,
-- 
Arnold Robbins
ARPA, CSNET:	arnold@emory.ARPA	BITNET: arnold@emory
UUCP:	{ decvax, gatech, sun!sunatl }!emory!arnold
ONE-OF-THESE-DAYS:	arnold@emory.mathcs.emory.edu

dyer@spdcc.COM (Steve Dyer) (07/24/87)

I've installed and used DMR-11s at megabit speeds (T1 lines to JVNC
and between Harvard and MIT) and at 9600 baud (leased analog lines
between Mass Micro, UMass Amherst, and the CSNET CIC).

I've never ever managed to get two DMR-11s running in DDCMP mode talking
to each other.  Somehow, neither end would sync up, and sooner or later
one end or the other's output queue would fill up so that further
attempts would give "no buffer space available."  I never bothered to
chase this further, because the solution is so easy: just select "maintenance
mode" in your config file (from memory, I believe it's "flags 1") and
relink your kernel.
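
Concretely, that's the flags field on the dmc entry in your 4.3 config
file; the csr and vector below are only placeholders (use whatever your
site already has), so the "flags 1" at the end is the only change:

	device dmc0 at uba0 csr 0160100 vector dmcrint dmcxint flags 1

Then run config, do a "make depend", and build and install the new
vmunix as usual.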

In "maintenance mode", packets are sent without any link-level protocol
beyond a simple checksum encapsulation (a received packet with a bad
checksum is discarded and doesn't generate an input interrupt).  This is
arguably what you want for a TCP link anyway: a reliable link-level
protocol makes the point-to-point link look as if the "network" were
suffering from variable delays, instead of feeding the loss back to the
TCP, which can presumably do something interesting with the information.

I might mention that the error rates on T1 lines are incredibly low (at
least in my experience) and that relying on maintenance mode has worked
successfully in both high- and low-speed environments.  In both cases,
the usage was primarily TCP-based; I could imagine that you might run
into trouble with UDP- or raw IP-based applications without some sort of
reliable transport layer on top.
-- 
Steve Dyer
dyer@harvard.harvard.edu
dyer@spdcc.COM aka {ihnp4,harvard,linus,ima,bbn,m2c}!spdcc!dyer