ABSTINE@CLVMS.BITNET (AB Stine) (01/07/87)
Does anybody know if there is any way for SET HOST on VMS to use LAT instead
of CTERM to perform a remote login to another VMS system on Ethernet?
DECnet-DOS SETHOST allows both, so I was wondering if DECnet-VMS might be able
to do it as well... thanks

Art Stine
Systems Programmer
Clarkson University
LEICHTER-JERRY@YALE.ARPA (01/12/87)
> Does anybody know if there is any way for SET HOST on VMS to use LAT
> instead of CTERM to perform a remote login to another VMS system on
> Ethernet?  DECnet-DOS SETHOST allows both, I was wondering if DECnet-VMS
> might be able to do it as well...

As far as I know, no such program exists.  Sorry.

BTW, why would you want it?  I can understand using LAT between different
kinds of systems, but VMS to VMS, CTERM should do better.

							-- Jerry
-------
ted@cgl.ucsf.edu@blia.UUCP (01/12/87)
In article <8701120516.AA08397@ucbvax.Berkeley.EDU>, LEICHTER-JERRY@YALE.ARPA writes:
> BTW, why would you want it?  I can understand using LAT between different
> kinds of systems, but VMS to VMS, CTERM should do better.

There is a great deal more overhead in using CTERM because it goes through
all of the standard DECnet protocol layers.  LAT uses its own Ethernet
protocol type value and bypasses all of this.  That's why DEC implemented LAT
for the terminal servers.

Of course, if you use LAT, the host must be on the same Ethernet.  Can't get
through a router.

===============================================================================
Ted Marshall					Britton Lee, Inc.
p-mail: 14600 Winchester Blvd, Los Gatos, Ca 95030	voice: (408)378-7000
uucp: ...!ucbvax!mtxinu!blia!ted	ARPA: mtxinu!blia!ted@Berkeley.EDU
disclaimer: These opinions are my own and may not reflect those of my
	    employer; I leave them alone and they leave me alone.
fortune for today:
	Ten years of rejection slips is nature's way of telling you to stop
	writing.
		-- R. Geis
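To make "its own Ethernet protocol type value" concrete: every Ethernet frame
carries a 16-bit type field right after the two address fields, and LAT
traffic is identified by its own value there rather than by DECnet's.  Below
is a minimal Python sketch, assuming the commonly cited type assignments
(0x6003 for DECnet Phase IV routing, which carries CTERM traffic, and 0x6004
for LAT; worth double-checking against DEC's documentation), that just parses
that field from a raw frame and labels it.

    import struct

    # Assumed Ethernet protocol type values (check the DEC documentation):
    # DECnet Phase IV routing carries NSP/CTERM traffic inside it, while LAT
    # has a type value of its own and never enters those layers.
    PROTOCOL_NAMES = {
        0x6003: "DECnet Phase IV routing (CTERM rides inside this)",
        0x6004: "LAT",
    }

    def classify_frame(frame):
        """Label an Ethernet frame by the 16-bit type field after the addresses."""
        if len(frame) < 14:
            raise ValueError("frame shorter than an Ethernet header")
        dest, src, ether_type = struct.unpack("!6s6sH", frame[:14])
        return PROTOCOL_NAMES.get(ether_type, "other (0x%04x)" % ether_type)

    if __name__ == "__main__":
        # Two dummy frames: same (zeroed) addresses, different type fields.
        addresses = bytes(12)
        lat_frame = addresses + struct.pack("!H", 0x6004) + bytes(46)
        decnet_frame = addresses + struct.pack("!H", 0x6003) + bytes(46)
        for f in (lat_frame, decnet_frame):
            print(classify_frame(f))

Because the type field alone distinguishes the two, a host can hand LAT
frames straight to its terminal-class driver without touching the DECnet
routing, NSP, or session layers; the flip side is that a pure bridge will
forward such frames but a DECnet router will not, which is why LAT stays on
one extended Ethernet.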
dstevens@sitvxb.BITNET.UUCP (01/13/87)
What about the overhead caused by DECnet on a VMS system?  When you SET HOST
from any system to a VMS system, the amount of overhead caused by CTERM is a
pain (to say the least), but there is very little overhead when you LAT into
a VAX.

Also consider: if you have a memory-poor VAX and a non-memory-poor VAX in a
cluster, and users can only connect directly to the poor VAX, what do they
do?  They SET HOST to the other (faster) VAX.  Now your fast VAX is being
bogged down by all these remote host connections from the other cluster node,
and it's no longer fast.

David L. Stevens
Stevens Institute of Technology
[ No, I'm not related to the founders of the college... I just work here. ]
BITNET: DSTEVENS@SITVXA.BITNET		CCnet: DSTEVENS@SITVXA

ps:  I would really love to see SET HOST/LAT/CTERM in VMS 5.0 (hope-hope beg-beg)
pps: I also am planning on SPRing it as a suggested enhancement
LEICHTER-JERRY@YALE.ARPA (01/14/87)
> What about the overhead caused by DECnet on a VMS system?  When you SET
> HOST from any system to a VMS system, the amount of overhead caused by
> CTERM is a pain (to say the least), but there is very little overhead when
> you LAT into a VAX.

I find this puzzling.  We run a fair number of incoming SET HOST connections
(from VAXstations) here, and I've never noticed any particularly high cost.
How are you measuring the cost of a CTERM connection, and the cost of a LAT
connection?

In typical usage, much of the work on a CTERM connection is done by the local
machine (the one the terminal is actually attached to) rather than the remote
one - echoing, deletions, and so on.  Certainly this means more overhead at
the local end, but it's hard to see how it can result in more overhead at the
remote end.  (Screen editors are another case, but even that isn't so simple,
since most typing into screen editors occurs at the end of a line, and
editors these days are usually clever enough to let the OS get them a bunch
of characters and do the echoing itself in this situation.)  Please don't
take this as doubting your observations, but I AM trying to understand them.

> Also consider: if you have a memory-poor VAX and a non-memory-poor VAX in a
> cluster, and users can only connect directly to the poor VAX, what do they
> do?  They SET HOST to the other (faster) VAX.  Now your fast VAX is being
> bogged down by all these remote host connections from the other cluster
> node, and it's no longer fast.

How would this differ if SET HOST/LAT existed?  Wouldn't the users still do
the same thing?  Maybe it's your whole setup that needs reworking (moving
lines, moving memory)?

							-- Jerry
-------
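A toy model of the line-mode behaviour described above - local echo and local
rubout handling, with only the finished line shipped to the remote host -
might look like the Python sketch below.  It is an illustration of the idea
only, not the CTERM protocol itself; the DEL and CR handling is deliberately
simplified.

    # Toy model of local line editing: echo and rubouts are handled on the
    # local system, and only the completed line crosses the network.
    DEL = "\x7f"
    CR = "\r"

    def collect_line_locally(keystrokes):
        """Return (finished_line, remote_messages_sent) for a keystroke stream."""
        buffer = []
        for ch in keystrokes:
            if ch == DEL:            # rubout handled locally, nothing on the wire
                if buffer:
                    buffer.pop()
            elif ch == CR:           # end of line: one message to the remote host
                return "".join(buffer), 1
            else:                    # ordinary character: buffered and echoed locally
                buffer.append(ch)
        return "".join(buffer), 0    # line not finished yet, nothing sent

    if __name__ == "__main__":
        typed = "dir/sizee" + DEL + CR   # user mistypes one character, rubs it out
        line, messages = collect_line_locally(typed)
        print(line, messages)            # -> dir/size 1

In this model the keystrokes, the echoes, and the correction never leave the
local machine; the remote host sees one message per completed line, which is
why the remote-end cost of a line-mode connection stays small.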
LEICHTER-JERRY@YALE.ARPA.UUCP (01/16/87)
> > BTW, why would you want it?  I can understand using LAT between different
> > kinds of systems, but VMS to VMS, CTERM should do better.
>
> There is a great deal more overhead in using CTERM because it goes through
> all of the standard DECnet protocol layers.  LAT uses its own Ethernet
> protocol type value and bypasses all of this.  That's why DEC implemented
> LAT for the terminal servers.  Of course, if you use LAT, the host must be
> on the same Ethernet.  Can't get through a router.

In principle, what you say is true.  In practice, do you have any evidence
for it?  The difference in protocol complexity actually used over an Ethernet
is not all that great; the actual load offered is small compared to capacity
(so that differences are likely to be lost in the noise even if there are
any); and, in any case, it all depends on what you are doing.

Let's take a simple case: typing a 30-character command line to VMS.  With
LAT, echoing is done by the VMS system, and given user typing speeds, each
character typed will be transmitted in its own packet, and each echo
likewise.  That's around 60 packets.  With CTERM, on the other hand, the
local system handles echoes, deletions, and so on, sends nothing until the
entire line is collected, then sends all 30 characters in one packet.  So
which is "more efficient"?

VMS and CTERM share a model of terminal interaction that is actually quite
complex and sophisticated.  The more the application involved uses the more
sophisticated parts of the model (field editing and so on), the bigger the
advantage CTERM has.  CTERM loses out when it is reduced to imitating LAT,
either because the application uses the interface at a very low level
(character at a time, application echoing) or because it is being used
between systems that don't share this model - RSTS and Unix systems, for
example.  (Actually, in this case you have to trade off efficiency and
"transparency" - fidelity in reproducing the non-matching system's
interaction model.)

BTW, you might think that screen editors would fall into the very-low-level
class; but most typing in screen editors occurs at the ends of lines, and
modern screen editors are generally clever enough to recognize this and
simply ask for as many characters as will fit on the line, with the OS (on
the remote system) doing the echoing (and waking up the application if a
control key is struck, for example).  In fact, most decent terminals today
have an insert mode, which allows this kind of input to be done even in
mid-line, in many cases.

LAT was designed to be efficient for terminal servers, which connect a bunch
of terminals to a relatively small number of hosts.  Each packet between a
server and a host has space for data for multiple terminal sessions; the LAT
protocol gains efficiency when it can actually pack such data together.
Conversely, the "single session from a single host to a single host" is the
worst configuration for the LAT protocol - though it will work.

Using the LAT protocol instead of CTERM wins:

    - If you are using applications that use very low level interfaces to
      the terminal driver;

    - If you want to talk to systems with incompatible terminal interaction
      models, and you insist on complete transparency;

    - If you want to talk to systems that don't have DECnet (via "reverse
      LAT" connections);

    - If you want to build a terminal server which supports multiple
      terminals, wants to talk to all kinds of hosts - and runs on a machine
      without DECnet support, as building LAT support is simpler than
      building DECnet support.
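As a back-of-the-envelope check of the 30-character command line arithmetic
above, here is a minimal Python sketch.  The "one packet per keystroke plus
one per echo" figure assumes a lone, slow typist; a real LAT server can pack
data for several sessions into one frame, so treat the remote-echo number as
an upper bound.

    # Rough packet counts for a 30-character command line under the two
    # interaction models discussed above.
    def packets_remote_echo(chars):
        return chars * 2          # each keystroke out, each echo back

    def packets_line_mode(chars):
        return 1                  # the finished line travels in one packet
                                  # (transport acknowledgements ignored)

    if __name__ == "__main__":
        line_length = 30
        print("per-character remote echo:", packets_remote_echo(line_length), "packets")
        print("local line mode:          ", packets_line_mode(line_length), "packet")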
CTERM PROBABLY wins in other cases - though I'd still like to see someone
measure this, rather than just speculate.

The actual VMS LAT implementation also has one major advantage over the
actual VMS CTERM implementation: An LTA can be disconnectable (or connect to
a previously disconnected process), while an RTA cannot.  This has nothing to
do with the protocols or their relative efficiencies - it reflects an
(unfortunate) series of VMS design decisions.

Finally, one suggestion I saw for why LAT would be better is that SET HOST
only supports one session, while terminal servers support multiple sessions.
This, however, is an issue quite independent of the protocol used.  It would
be quite possible to write an LTPAD that only supported one session, and it
would also be quite possible to write an RTPAD that supported many.  (In
fact, back in V3 days, I actually saw such a beast - it was a hacked-up
version of RTPAD that ran 4 sessions.  Rather unreliable, though, as it was
derived from an early, buggy version of RTPAD and didn't benefit from
numerous later revisions.  Long gone, now.)

							-- Jerry
-------