[mod.protocols.tcp-ip] Port Collisions

Margulies@SCRC-YUKON.ARPA.UUCP (05/13/86)

Folks,

This mail is a follow-up to a discussion I had with Jon some weeks ago.

We here at Symbolics are concerned with the process of assigning TCP/UDP
port numbers.  It is not always appropriate for us (and other vendors)
to apply for ports in the Czar-controlled first 256 ports.  Either
because of time constraints or issues of proprietary information, we
cannot always write and distribute an RFC for each of our protocols.

We have had complaints from customers that we are not the only vendor
using the `any private file protocol' port.

We need a way to define new protocols without fear of collision with
other people.

We see two possibilities:

1) A registry of ports for private protocols.  Symbolics would be
willing to administer a simple registry for a group of ports outside the
first 256. We (or someone else, it matters not) would keep a list, and
anyone from an identifiable organization could ask for ports.  The
registrar's only function would be to hand out each such port only once.
It would be helpful, of course, if the official RFC's for TCP/UDP would
designate the group of ports involved as reserved for use through this
registry.

2) A new protocol that would vastly increase the effective namespace by
multiplexing ports.  For example, a port that the user side connects to
and sends an arbitrary (or at least a reasonably long) character string
specifying the service desired.  Some obvious use of prefixes or
suffixes would suffice to avoid collisions.
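
To make the idea concrete, here is a minimal sketch of the user side in
C with 4.2BSD sockets.  The multiplexing port number (71), the server
address, and the service string are illustrative assumptions, not
proposals for assigned values.

    /* Hypothetical client side of the proposed multiplexing port. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main()
    {
        struct sockaddr_in sin;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(71);                       /* assumed mux port */
        sin.sin_addr.s_addr = inet_addr("192.0.2.1");   /* example server   */

        if (s < 0 || connect(s, (struct sockaddr *)&sin, sizeof sin) < 0) {
            perror("connect");
            return 1;
        }

        /* Name the desired service; a long vendor-prefixed string avoids
         * collisions without a central registry. */
        write(s, "SYMBOLICS-NFILE\r\n", 17);

        /* ... from here on, speak the named protocol over this connection ... */
        close(s);
        return 0;
    }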

--benson

STJOHNS@SRI-NIC.ARPA (05/14/86)

I  am not sure I see the problem here.  A "private file protocol"
is just that - PRIVATE.  It is run between machines that make the
assumption  that  they are all running the same private protocol.
Or is there the possibility that one machine is running  multiple
PRIVATE file protocols?

Either  a  protocol is a network wide standard - implying that it
is documented, and that it is designed for at least a minimum  of
interoperability  -  or  it  is private, with little or no public
documentation, and with no designed  interoperability.   In  this
context,  I  am  talking  about global interoperability, not just
interoperability between UNIX systems for example.

I can see  some  advantage  though  in  providing  some  sort  of
sentinel  as  part  of the PRIVATE protocols to say "I am running
FOO as my private protocol, go away if you don't talk FOO".   But
wouldn't  this  more  properly  be  part  of  the protocol?  Each
protocol should do some  confirmation  for  robustness  purposes,
right?

Mike

craig@LOKI.BBN.COM (Craig Partridge) (05/14/86)

> We here at Symbolics are concerned with the process of assigning TCP/UDP
> port numbers.  It is not always appropriate for us (and other vendors)
> to apply for ports in the Czar-controlled first 256 ports.  Either
> because of time constraints or issues of proprietary information, we
> cannot always write and distribute an RFC for each of our protocols.

Why not make the port numbers used user/site configurable?  Berkeley
actually did this quite nicely with a services list, which mapped
a service name/protocol pair to a port number.  Since programs
use this database (or are supposed to) to find out what port they
are supposed to use, one could run SMTP on TCP port 25 on the Internet
but port 243 on some private network if one so chose.
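
For what it is worth, the lookup Craig describes is only a few lines of
C.  The "nfile" name and port in this sketch are assumptions; the entry
would have to be added to /etc/services on each cooperating host.

    /* Looking up a locally configured service, Berkeley style.
     * "nfile" is an assumed entry in /etc/services, e.g.:
     *     nfile   259/tcp
     * The port can differ from site to site without changing the program. */
    #include <stdio.h>
    #include <netdb.h>
    #include <netinet/in.h>

    int main()
    {
        struct servent *sp = getservbyname("nfile", "tcp");

        if (sp == NULL) {
            fprintf(stderr, "nfile/tcp: unknown service\n");
            return 1;
        }
        printf("nfile is TCP port %d at this site\n", ntohs(sp->s_port));
        return 0;
    }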

The advantage is the vendor need not necessarily worry about what
port you pick for your special application -- it can always be
changed among cooperating machines.

Craig Partridge
CSNET Technical Staff

Murray.pa@XEROX.COM.UUCP (05/14/86)

One word of caution.... Xerox managed to get off on the wrong foot in
the Ethernet packet type assignment business.

If you administer a group of ports, I suggest that the ground rules
include publishing a who-to-contact as well as a simple (one line?)
description of its function. Otherwise people will flame at you when you
won't answer questions.

The proprietary aspect might complicate that attitude. I don't think
that objection really holds water. It's too easy to look at the bits on
an ethernet.

I like your second suggestion, but somebody is bound to ignore it
because it takes an extra packet to get started.

Margulies@SCRC-YUKON.ARPA.UUCP (05/14/86)

    Date: 13 May 1986 18:38-PDT
    From: STJOHNS@SRI-NIC.ARPA

    I  am not sure I see the problem here.  A "private file protocol"
    is just that - PRIVATE.  It is run between machines that make the
    assumption  that  they are all running the same private protocol.

Wrong meaning of private.  Try the definition `A protocol not published
as an RFC, for any reason whatsoever.'

    Or is there the possibility that one machine is running  multiple
    PRIVATE file protocols?

Exactly. Symbolics has a 'private' file protocol.  One of our customers
wanted to teach their lisp machine to talk someone else's 'private' file
protocol.  Unfortunately, they were on the same port.  This situation is
typical, and likely to happen again and again.


    Either  a  protocol is a network wide standard - implying that it
    is documented, and that it is designed for at least a minimum  of
    interoperability  -  or  it  is private, with little or no public
    documentation, and with no designed  interoperability.   In  this
    context,  I  am  talking  about global interoperability, not just
    interoperability between UNIX systems for example.

The problem is with the phrase `network-wide standard'.   The number
czar only designates protocols as `network-wide standards' as part of
the activity of the internet research community (I'm paraphrasing Jon
Postel, and I hope that I'm getting it right). That is not to say that
vendors cannot offer protocols for inclusion in the global set. However,
it is often not practical for us to do so.  We can't always afford the
time to document the protocol for the community, which is a pretty-near
necessity for inclusion in the global protocol set.

Anyway, I don't think that your simple division of the world into Public
and Private is good enough.  I think that things are more complex.

First, there is an extensive gray area between officially blessed
protocols (global interoperability) and protocols that are private hacks
amongst a small number of cooperative machines (no interoperability).
As a commercial vendor, we are concerned with orderly partial
inter-operability.  If Berkeley has established a 'private' Unix TCP
protocol, it is very likely that sooner or later someone with a lisp
machine (or something else non-Unix) will want to talk that protocol to
a Unix.  That doesn't necessarily qualify the protocol as a network-wide
standard, but it gives a good reason for us to avoid port collisions.

Second, even if I were implementing a protocol that would only be used
at one site amongst three machines,  I would like some assurance that I
won't find out next week that some vendor is using the port that I chose
for some protocol that I am interested in using.

    I can see  some  advantage  though  in  providing  some  sort  of
    sentinel  as  part  of the PRIVATE protocols to say "I am running
    FOO as my private protocol, go away if you don't talk FOO".   But
    wouldn't  this  more  properly  be  part  of  the protocol?  Each
    protocol should do some  confirmation  for  robustness  purposes,
    right?

Indeed, it's not hard to say `sorry, you can't talk to him, because he's
doing something else over that port.'  Cold comfort if your goal is to
talk to him.

    Mike

Margulies@SCRC-YUKON.ARPA.UUCP (05/14/86)

    Date: Wed, 14 May 86 02:37:25 PDT
    From: Murray.pa@Xerox.COM

    One word of caution.... Xerox managed to get off on the wrong foot in
    the Ethernet packet type assignment business.

    If you administer a group of ports, I suggest that the ground rules
    include publishing a who-to-contact as well as a simple (one line?)
    description of its function. Otherwise people will flame at you when you
    won't answer questions.

Fine with me.  

    The proprietary aspect might complicate that attitude. I don't think
    that objection really holds water. It's too easy to look at the bits on
    an ethernet.

I expect that proprietary protocols will be fairly rare.  Mostly, I
expect that people just won't have the time or inclination to document.
I really cannot see why someone would insist on a listing of `Foobar
Company proprietary protocol 1'.


    I like your second suggestion, but somebody is bound to ignore it
    because it takes an extra packet to get started.

DCP@SCRC-QUABBIN.ARPA.UUCP (05/14/86)

    Date: Wed, 14 May 86 02:37:25 PDT
    From: Murray.pa@Xerox.COM

    I like your second suggestion, but somebody is bound to ignore it
    because it takes an extra packet to get started.


For reference, I suggest people read the documentation for the CHAOSNET
protocol, MIT A.I. Memo 628.  Basically, CHAOSNET wins in the connection
initiation methodology and the IP-class of protocols lose.  Small fields
(read: numbers) that require a number czar make extension very
difficult.  In Chaos it is easy to say "I want to invent a protocol
called RESET-TIME-SERVER" because the name of the protocol can also be
the contact name used in the RFC (Request for Connection; the SYN, for
TCP folks).

I have a feeling TP4 loses in this respect as well.

PATTERSON@BLUE.RUTGERS.EDU.UUCP (05/14/86)

Yes, but then you have collision problems with protocol names. Most people
would use acronyms, not word-by-word forms. You still need a Socket Czar,
only now sockets have a (reasonably) human-understandable format (i.e.,
TELNET instead of 23).
 
Ross Patterson
Center for Computer and Information Services
Rutgers University
-------

braden@ISI-BRADEN.ARPA.UUCP (05/14/86)

It sounds like another version of the SNA/DECNET free-enterprise protocol
wars.

Do you think we should encourage the proliferation of private protocols,
many of them doing the same things?  It is clearly in the national 
interest (that's us, friends) to promote maximal interconnection of
heterogeneous systems.  That is what standards are for.

Until recently, in England there were several different standards for
electric plugs, because each of the 19th century power barons designed
their own.  So you bought an appliance with a cord but no plug on the
end, and added the plug necessary for your outlet. Rather like a
configuration file, isn't it?

As a customer, do you think I should buy a software system from a vendor
that did not have the resources to properly document its internal function?
I wonder what kind of maintenance and support I will get with that product.

DCP@SCRC-QUABBIN.ARPA.UUCP (05/14/86)

    Date: 14 May 86 13:01:46 EDT
    From: Ross Patterson <PATTERSON@BLUE.RUTGERS.EDU>

    Yes, but then you have collision problems with protocol names. Most people
    would use acronyms, not word-by-word forms. You still need a Socket Czar,
    only now sockets have a (reasonably) human-understandable format (i.e.,
    TELNET instead of 23).
 
Go read Benson's message again.  He said that private protocols would use
rather long contact names, possibly including the vendor/entity that
implemented them as part of the name.  People who use short names or
acronyms are anti-social.  Standard contact-names that correspond to
RFCs could still be administered by a Czar, e.g., TELNET, SUPDUP, ECHO,
but private protocols would have names like SYMBOLICS-NFILE.

Benson and I have already collided, AND WE'RE ON THE SAME FLOOR OF THE
SAME BUILDING OF THE SAME COMPANY.  Since numbers don't mean anything,
we both happened to pick 666 for our private port number.

Margulies@SCRC-YUKON.ARPA (Benson I. Margulies) (05/14/86)

    Date: Tue, 13 May 86 21:22:49 -0400
    From: Craig Partridge <craig@loki.bbn.COM>


    > We here at Symbolics are concerned with the process of assigning TCP/UDP
    > port numbers.  It is not always appropriate for us (and other vendors)
    > to apply for ports in the Czar-controlled first 256 ports.  Either
    > because of time constraints or issues of proprietary information, we
    > cannot always write and distribute an RFC for each of our protocols.

    Why not make the port numbers used user/site configurable?  Berkeley
    actually did this quite nicely with a services list, which mapped
    a service name/protocol pair to a port number.  Since programs
    use this database (or are supposed to) to find out what port they
    are supposed to use, one could run SMTP on TCP port 25 on the Internet
    but port 243 on some private network if one so chose.

We believe in 'plug-in-and-play' software.  Expecting not-very-savvy
users to figure out the port usage of all the various programs they use
is asking a lot.

What if two sites disagree on the port for a protocol? They will never
be able to inter-operate!

We have logical services and protocols that are mapped to ports. The
only way to make that do what you want is to use a different protocol
name for each port assignment in use anyplace, so that the database of
hosts can indicate which hosts are going to talk over which ports.  This
is an unreasonable burden.

    The advantage is the vendor need not necessarily worry about what
    port you pick for your special application -- it can always be
    changed among cooperating machines.

It's the vendor's job to worry so the user doesn't have to.

    Craig Partridge
    CSNET Technical Staff

Margulies@SCRC-YUKON.ARPA (Benson I. Margulies) (05/14/86)

    Date: Wed, 14 May 86 10:47:27 PDT
    From: braden@isi-braden.arpa (Bob Braden)

    It sounds like another version of the SNA/DECNET free-enterprise protocol
    wars.

    Do you think we should encourage the proliferation of private protocols,
    many of them doing the same things?  It is clearly in the national 
    interest (that's us, friends) to promote maximal interconnection of
    heterogeneous systems.  That is what standards are for.

Not all protocols are suitable for standardization.  Sometimes, there
isn't enough knowledge in the industry to settle on a standard.  We
can't be expected to wait around.

Some protocols perform very specific functions that are pointless to
standardize.  Heavily networked products are very likely to involve
network protocols that are not useful outside a particular application.
Yet these have to have ports, and those ports can't conflict with other,
interoperating protocols.

    Until recently, in England there were several different standards for
    electric plugs, because each of the 19th century power barons designed
    their own.  So you bought an appliance with a cord but no plug on the
    end, and added the plug necessary for your outlet. Rather like a
    configuration file, isn't it?

This is why I'm opposed to the configuration file solution.


    As a customer, do you think I should buy a software system from a vendor
    that did not have the resources to properly document its internal function?
    I wonder what kind of maintenance and support I will get with that product.

The statement `did not have the resources' is not a realistic view. As a
customer, I'd rather my vendor spent their (my) money on supporting me,
not informing the internet community about the network protocols in
their product.

POSTEL@USC-ISIB.ARPA (05/15/86)

Hi.

The number czar is primarily interested in having the protocols
assigned ports from the reserved space (numbers 0 through 255) be
documented and public.  It does not really matter much if the protocol
is developed for academic internet research or practical commercial
use.  It does matter that there be some good reason for the protocol
and that there be some evidence that some work was actually done on
the protocol.  The number czar dislikes assigning numbers on
speculation (that is, tends to turn down requests of the form "I might
write some protocols, give me a bunch of port numbers").  There are
already a few assignments for essentially private protocols (some that
would not be made today), but in most of these recent cases the
developer was able to send the czar some description of the protocol.

One of the great things about the Internet is that it is an "open
system", and what that means is that the protocols are public.  Over
and over the czar has seen people develop some little hack protocol
for their own private use that turns out to be a neat thing and other
people want to use it too but it is not documented.

If there really is a need for a set of port numbers to be assigned to
individuals or companies for private use, the number czar is perfectly
happy to do the work of keeping track of who owns which numbers and
giving the next available number to the next requestor.  If this is a
thing we want to do what part of the port number space should be
allocated to this?  How many numbers should be set aside for this type
of assignment?  What is the impact on existing programs and systems?
Is anyone using the numbers that will suddenly be off limits by this
reservation of part of the port number space?

Or if the other suggestion is followed (to have a port assigned for a
multiplexing service based on a character string argument), the number
czar is happy to keep the list of unique (initial) strings and who is
the contact person.

--jon.
-------

SRA@XX.LCS.MIT.EDU (Rob Austein) (05/15/86)

The collision problem with protocol names is not as severe as the one
(small) protocol numbers.  People implementing private applications
tend to use ports in the low 500s (or working down from 1024), so
there is a fair chance that there will eventually be conflicts.
Whereas I doubt that Symbolics would have much trouble testing a new
service named "SYMBOLICS-PRIVATE-PROTOCOL-XYZ-PRIME".  When a protocol
becomes well enough known and widely enough used that it needs a short
snappy name, -then- you go talk to the Socket Czar.

--Rob

JNC@XX.LCS.MIT.EDU ("J. Noel Chiappa") (05/15/86)

	Right. What happens if you pick a name for your private
protocol that happens to be exactly the same as the name someone else
picked for their private protocol? You are merely using strings
instead of numbers; the same problems can happen. The set of possible
identifiers is still pretty small, if people use descriptive english
words for services.
	Now, if you preface all your private services with some
personal string, e.g. 'SYMBOLICS-', as in 'SYMBOLICS-NEW-FILE' then
maybe you have a valid point.

	Noel
-------

DCP@SCRC-QUABBIN.ARPA (David C. Plummer) (05/15/86)

    Date: Wed, 14 May 86 10:47:27 PDT
    From: braden@isi-braden.arpa (Bob Braden)

    It sounds like another version of the SNA/DECNET free-enterprise protocol
    wars.

    Do you think we should encourage the proliferation of private protocols,
    many of them doing the same things?  It is clearly in the national 
    interest (that's us, friends) to promote maximal interconnection of
    heterogeneous systems.  That is what standards are for.

    Until recently, in England there were several different standards for
    electric plugs, because each of the 19th century power barons designed
    their own.  So you bought an appliance with a cord but no plug on the
    end, and added the plug necessary for your outlet. Rather like a
    configuration file, isn't it?

    As a customer, do you think I should buy a software system from a vendor
    that did not have the resources to properly document its internal function?
    I wonder what kind of maintenance and support I will get with that product.

Example.  Symbolics calls its current "private file protocol" NFILE.  It
is currently documented in the Release 6.1 Release Notes.  We know it to
be far superior to TCP/FTP and previous file protocols on Lisp Machines.
NFILE is a true file ACCESS protocol as opposed to FTP which is a
transfer protocol.  Back then we weren't ready to tout it to the rest of
the world.  I'm still not sure we are.  The reasons are many.  We aren't
sure it is really sufficient.  We do know that there are some very
complex operations in it because of the need to handle aborting out of
operations correctly.  The generic-file-system model it assumes may not
fit in well with all known systems.  We also fear that many people will
not understand the need for some of the complexity and will either want
it thrown out because they can't figure out how to implement it, or will
leave parts of it out without telling people.  (Need I remind people
that Unix TCP/FTP still sends the wrong code for the DELETE operation, I
believe.)  Companies may be willing to document their protocols to
let others experiment with them.  They may not be prepared to solidify
on it or to change it to the whims of the masses.  If NFILE replaced
TCP/FTP as the Internet standard, and if we discovered some other
feature that is rather crucial to our system, we would be in the same
ballpark as we are now: a private protocol.

There are sometimes some good reasons not to push private protocols as
standards until all the conditions are right, even though those private
protocols are documented and provide superior functionality.

mrose@NRTC-GREMLIN.ARPA (Marshall Rose) (05/15/86)

    If your argument is:  

	 "having a port space of 512 distinct addresses is too small" 

    then I doubt anyone disagrees in principle.  On the other hand, if
    your argument is:  

	 "Benson and I, working on the floor for the same company,
	  collided, so the port space is too small"

    Then your argument points more to a possible (mis)management problem
    in your company than a port scarcity problem!  

    Let's face it, 512 distinct addresses for ports probably is too
    small for the totality of applications that can/could use TCP.  I
    don't think it's too small for any given group of co-operating sites
    using TCP, though I could be convinced otherwise.  

    Personally, I like the Berkeley method since you can just define
    some numbers to meet your needs.  I'm not so keen on using strings
    or datastructures as port identifiers, though I'm sure, when ISO
    gets around to defining the port space for TS-users, they'll be sure
    to add an option for regular expressions, mandelbrots, etc.  (-:

/mtr

dpk@mcvax.UUCP.UUCP (05/15/86)

I like the idea of the "multiplexing port".  Consider this one
vote for it.

-Doug-

Margulies@SCRC-YUKON.ARPA.UUCP (05/15/86)

    Date: Wed 14 May 86 19:54:03-EDT
    From: "J. Noel Chiappa" <JNC@XX.LCS.MIT.EDU>

	    Right. What happens if you pick a name for your private
    protocol that happens to be exactly the same as the name someone else
    picked for their private protocol? You are merely using strings
    instead of numbers; the same problems can happen. The set of possible
    identifiers is still pretty small, if people use descriptive english
    words for services.
	    Now, if you preface all your private services with some
    personal string, e.g. 'SYMBOLICS-', as in 'SYMBOLICS-NEW-FILE' then
    maybe you have a valid point.

That is exactly my plan.


	    Noel
    -------

Margulies@SCRC-YUKON.ARPA.UUCP (05/15/86)

    From: Doug Kingston <mcvax!dpk@seismo.CSS.GOV>

    I like the idea of the "multiplexing port".  Consider this one
    vote for it.

    -Doug-

Well, so do I.  However, for it to be a workable solution, there has to
be some commitment that it won't be a member of the large `ignored RFC'
collection.  I think that we need a registry to tide us over in the
interim.

DCP@SCRC-QUABBIN.ARPA.UUCP (05/15/86)

    Date: Wed, 14 May 86 17:46:32 -0700
    From: Marshall Rose <mrose@nrtc-gremlin>

	If your argument is:  

	     "having a port space of 512 distinct addresses is too small" 

	then I doubt anyone dis-agrees in principle.  On the other hand, if
	your argument is:  

	     "Benson and I, working on the floor for the same company,
	      collided, so the port space is too small"

	Then your argument points more to a possible (mis)management problem
	in your company than a port scarcity problem!  

We were both doing independent, exploratory research of completely
unrelated problem domains.  Maybe we should have a number czar that is
on call 24 hours a day in case somebody has a hack attack at 9:30pm (or
3am) and needs a protocol number in 5 minutes.  Because my application
was stream oriented it works over TCP and CHAOS.  I didn't have any
problem with Chaos because my protocol was called MANDELBROT and
Benson's was completely different.

	Let's face it, 512 distinct addresses for ports probably is too
	small for the totality of applications that can/could use TCP.  I
	don't think it's too small for any given group of co-operating sites
	using TCP, though I could be convinced otherwise.  

Here is a list of protocols that we use at Symbolics.  Some are trivial.
Some work over TCP and have RFCs.  All are (or were at one time or
another) useful, but not critical.  Sorry to bother everybody with this
list, but it shows that one company has several protocols that are not
all RFCs.  I don't think any of the ones below are considered
proprietary.  That's 32+ protocols we have use for.  If 16 companies or
groups or implementations each have 32 different protocols, we would (a)
have a tower of Babel, and (b) run out of the 512 port numbers.  The
tower of Babel might be needed.  For example, what machines other than
Suns can use the Sun net-paging protocol?

 "MANDELBROT"		My network mandelbrot protocol and is Lisp only
			(it actually sends Lisp forms)
 "SEND"			Interactive messages (ancient)
 "CONVERSE"		Interactive messages, done much better
 "SMTP"			(It's a byte stream protocol)
 "EXPAND-MAILING-LIST"	(ditto)

 "ECHO"			I think these have TCP equivalents
 "BABEL"
 "TIME"
 "NAME"

 "MAIL"			Chaos version
 "UPTIME"		Similar to TIME
 "QFILE"		(old) LispM file protocol
 "NFILE"		new one
 "FINGER"		Similar to NAME, but simpler in ways

 "DOMAIN"		The internet domain resolver stuff

 "NAMESPACE"		Our own namespace stuff, written before the RFC
 "NAMESPACE-TIMESTAMP"	for domains came out, and in some of our
 "WHO-AM-I"		opinions, superior 

 "TELNET"		Specified in an RFC
 "SUPDUP"		Specified in an RFC
 "TELSUP"		A mixture of the two
 "TTYLINK"		"raw" linking; no protocol.  useful for connecting to Unix
 "3600-LOGIN"		Better than all the above for talking to 3600s

 "PRINTER-QUEUE"	Various printer queue operations
 "LGP-STATUS"
 "LGP"
 "LGP-QUEUE"
 "DOVER"
 "DOVER-STATUS"

 "CONFIGURATION"	Various other useful protocols
 "RESET-TIME-SERVER"	For machines whose clocks are off
 "TCP"			To connect to TCP streams through a protocol gateway
 "NOTIFY"		Information dispersal
 "STATUS"		Network status
 "DUMP-ROUTING-TABLE"	Topology debugging
 "RTAPE"		Remote Tape facility

	Personally, I like the Berkeley method since you can just define
	some numbers to meet your needs.  I'm not so keen on using strings
	or datastructures as port identifiers, though I'm sure, when ISO
	gets around to defining the port space for TS-users, they'll be sure
	to add an option for regular expressions, mandelbrots, etc.  (-:

    /mtr

root@BU-CS.BU.EDU.UUCP (05/15/86)

I think there is a problem here. Due to obvious security reasons
many of these protocols will need a low socket number and there are
only 255 of those guaranteed to be available (UNIX extends that to
1023 but I don't think everyone can honor that.) Some small portion
of that is already in use and if commercial trends continue I suspect
many, many vendors will need a *secure* port for their products.

The only solution I can think of is a super-server protocol which
would listen on a single secure port and map strings to available non-secure
ports indirectly (I believe the TOPS-20 IPC works something like this.)
I've thought about it a few hours and it's not easy, especially when
you start to take different TCP implementations into account (how exactly
do I reserve and "hand-off" a port number between two unrelated processes?
that's an O/S issue but I fear not generally easy.)
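
For the hand-off itself, one plausible shape on a Unix system is the
fork-and-exec trick sketched below; the mapping from string to program
and the calling convention are assumptions, not a description of any
existing implementation.

    /* Sketch of the hand-off step only.  'conn' is the already-accepted
     * connection and 'server_path' is whatever program the service string
     * mapped to. */
    #include <stdio.h>
    #include <unistd.h>

    void hand_off(int conn, char *server_path)
    {
        if (fork() == 0) {
            /* Child: the connection becomes stdin/stdout of the service,
             * so the service never needs to know which port was used. */
            dup2(conn, 0);
            dup2(conn, 1);
            close(conn);
            execl(server_path, server_path, (char *)0);
            perror("execl");
            _exit(1);
        }
        close(conn);        /* parent goes back to listening */
    }

This does not solve the cross-implementation problem Barry raises; it
only shows that on one operating system the hand-off need not involve a
second port at all.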

Doesn't SUN's RPC (which I believe is in the public domain) address this
issue (no pun intended)? If you say "do you seriously expect me to implement
RPC and XDR just to get a port number?", I sympathize, I was only trying
to find analogues for enlightenment.

	-Barry Shein, Boston University

dab@BORAX.LCS.MIT.EDU.UUCP (05/15/86)

   Date: Thu, 15 May 86 12:04:14 EDT
   From: Ra <root@BU-CS.BU.EDU>

   I think there is a problem here. Due to obvious security reasons
   many of these protocols will need a low socket number and there are
   only 255 of those guaranteed to be available (UNIX extends that to
   1023 but I don't think everyone can honor that.) Some small portion
   of that is already in use and if commercial trends continue I suspect
   many, many vendors will need a *secure* port for their products.

	   -Barry Shein, Boston University

As far as I've found, this belief that some ports are secure while
others aren't is only implemented by Berkekley Unix.  Since other IP
implementations do not necessarily honor this belief, there is no
security in using *secure* ports unless your network consists
exclusively of machines running Berkelely Unix.
					David Bridgham

mrose@nrtc-gremlin.UUCP (05/16/86)

    The obvious problem with a "multiplexing port" is that you can no
    longer tell by looking at the TCB what protocol is being spoken.
    This renders programs like netstat on BSD UNIX, et al., pretty
    much worthless.  If we're going to expand the port space somehow, I
    vote we do it explicitly in the TCP headers, so it's part of the
    information in the TCB, rather than expand the port space covertly
    by exchanging the information in the TCP data.  

/mtr

    ps:  of course, this is the exact opposite of what we did in rfc973
    (iso transport on top of the tcp), primarily because I thought 1)
    keeping track of the numbers, if there ever were numbers, should be
    separate from the tcp port space, since the protocols probably
    weren't going to look anything like our good old ARM-style protocols;
    and 2) there's a good chance that we'd need more than 512 port
    numbers in the next three to five years.  To postpone that problem, 4K
    port numbers were reserved; presumably, though I didn't think about
    it, 2K of those are for "private" use.  

MULHERN@SRI-KL.ARPA.UUCP (05/16/86)

I did work for Jon Brommeland at NAVELEX Vallejo which was sponsored
by PDW-120.  He did not let us contact 120 directly, and the Vallejo
involvement did not turn out well, but I (and several others here)
learned much about 120's systems -- NWSS, FHLT,HLT,OBS, OBU, IID, 
STT/TDCS and ISE.  Our task was to design a local network for
the 120 testbed at Patuxent River.  We also reviewed the initial
specifications for OBU  (non-black world).  This work was completed
in 82/83.
-------

JSLove@MIT-MULTICS.ARPA.UUCP (05/16/86)

I think symbolic port names are a great idea.  There are several
approaches which could be taken.  Possibly implementations of some of
these already exist, but I don't know about them.

A service multiplexing protocol:  this applies to TCP but not to UDP
because it relies on connections.  First, reserve one low numbered TCP
port for this protocol.  When a user establishes a connection to this
server, the user sends the contact name, and optional contact arguments,
in ASCII, terminated by a network newline (CRLF).  The contact name is
always required, and should either be registered, or long and self
explanatory.  CHAOS protocol allows arguments to follow the contact
name, and I think we should allow this also, but they depend on the
contact name and the only specification here is that they be delimited
from the contact name by a space.  The protocol server which listens on
this TCP port should parse the contact line, look it up in the available
service list, and pass off that connection as if it had just been
established to the service on a reserved port.  If the service is
unavailable, the connection should be aborted.
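
A minimal sketch of the server's contact-line handling follows.  The
dispatch() routine stands for whatever local hand-off mechanism a host
uses; it, and all the names here, are invented for illustration.

    /* Read the contact line "NAME [arguments]" CRLF from a freshly
     * accepted connection 'conn', split it, and hand it off. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    extern void dispatch(int conn, char *name, char *args);   /* assumed */

    void serve_mux(int conn)
    {
        char line[256], *args, c;
        int i = 0;

        /* Read up to CRLF; one byte at a time keeps the sketch simple. */
        while (i < sizeof line - 1 && read(conn, &c, 1) == 1) {
            if (c == '\n')
                break;
            if (c != '\r')
                line[i++] = c;
        }
        line[i] = '\0';

        /* The contact name is everything up to the first space; anything
         * after it is passed through to the service untouched. */
        args = strchr(line, ' ');
        if (args != NULL)
            *args++ = '\0';

        dispatch(conn, line, args);   /* abort the connection if unknown */
    }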

Note that "multiplexing" in this case is not used in the sense that it
is used in IEN 90, the MUX protocol.

TCBs on such systems should contain a comment field which is filled in
with the service name.  Then listing the TCBs can show the comment, and
it won't matter that all the multiplexed services are to the multiplexed
port.  (Multics already does this; it makes the TCB list much easier to
read.)

There should be no significant performance impact in using this
mechanism for TELNET, SUPDUP, SMTP, FINGER, or FTP control connections.
While services used for metering, like TIME, ECHO, DISCARD, etc., can be
expected to require dedicated port numbers, there should be no need to
dedicate port numbers for private services thereafter.  Even DOMAIN TCP
connections could use multiplexing.

There should be a reserved contact name which lists the available
contact names.  The WKS domain RR would be useful only in showing that
multiplexed service is available.  Perhaps this list operation should
include port numbers for contacting the services, when available.  A
standard format should be used, so that the list is machine-readable as
well as human-readable.

A fly in the ointment is that some services reserve port numbers in ways
that can't be multiplexed.  For example, the FTP data port is port 20.
A replacement for the FTP "PORT" command would be needed to eliminate
this assumption.  This could be addressed by modifying the FTP
specification and all similar protocols if it is ever needed.  But since
we are doing this primarily for new protocols, they can be devised with
some mechanism for negotiating additional connections, as needed, which
does not require reserving ports.

There are other advantages within operating systems.  Services other
than TCP which support streams could hand off, for example, SMTP
connections in a similar manner.  If an X.25 network like TYMNET is
connected to two sites, the operating system might permit an X.25 level 3
connection to be passed off to the SMTP server.  The contact name
arguments in that case might include a host name and associated password
since you couldn't verify the host in the same way that you might using
the Internet.

An alternative or additional protocol could use a UDP service to map
contact names to ports.  You would send the contact name to the port as
a datagram, and receive back a datagram which would indicate that 1) the
service is available on port P, 2) the service is available on the
multiplexed port M, or 3) that the service is unavailable.  Choice #2
only applies to connection based services, but #1 and #3 could apply to
either TCP or UDP, and perhaps other lower level services as well.  If
the query packet also contained a transport ID (e.g., 6 for TCP) or name
(e.g., "TCP") then the service could be more generally applicable.

This permits the equivalent of multiplexed service on UDP, at a minimum
cost of an extra datagram per host per service per bootload to find out
what port number is assigned to a service.  It wouldn't matter that
one Symbolics 3600 was running NFILE on port 57, and another on port
666, as long as they both call the service "NFILE".

The UDP service could not be expected to be able to construct packets
large enough to contain the names of all the available services.  While
in many cases they would fit, it would be unreasonable to expect that
this always be the case.  Rather than add complexity to this protocol it
would be better to require contacting the TCP port to get the service
list.

The number czar (tsar?)  would still be needed to hand out contact names
like "TELNET" and "FINGER", and prefixes like "SYMBOLICS-".  Once Benson
has the "SYMBOLICS-" prefix he can hand out SYMBOLICS-PRIVATE-1,
SYMBOLICS-NFILE, and so on without having to bother the czar or fear of
conflict with the rest of the world.

The server port number returned by UDP can have any value.  The number
needn't be less than 256 or 1024 to provide security.  The 255 well
known ports are just that:  well known.  If a host implements security
based on port number, then it is only necessary to ensure that the port
number given out by the UDP multiplexing port is secured, even if it is
port FFFF (hex).  The UNIX port scheme depends on the user-side port
being less than 1024 to ensure that the connection is valid, not the
server side port number.

It is unnecessary to encode contact names in every packet, especially
for TCP.  A port number is sufficient to keep track of packets, and only
the two parties to a conversation need know how the numbers map to
services.  This may not be strictly true for gateways that restrict
services.  Schemes that I have heard of restrict the SYN-bearing TCP
packets.  I'm afraid that they would have to prohibit the multiplex
port.  However, the UDP port translation could be used to avoid using
the multiplex port for all contact names which don't require arguments.

It would be nice to see an alternative to UDP that was in most ways
identical to it, but which had a contact name in the packet instead of
one of the port numbers.  For simple transactions this would avoid the
complexity of exchanging packets to get the port number before anything
could be accomplished.  The format of the packet would be as follows:
The length would be in the pseudo header.  The source port would be
replaced by a transaction ID (15 bit number) and a query/response flag
(one bit).  The destination port would be replaced by an ASCII character
string, terminated by one or two nulls to take an even number of
characters.  Two bytes would be reserved for an optional checksum.  The
rest would be data octets.  If the contact name is long, the number of
available data octets would be fewer than for UDP, but if the response
flag were set, perhaps the contact name could be omitted from the
response packet, permitting longer responses.  Otherwise, the
query/response flag would indicate which of the ID and the contact name
is the local "port".
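
Read literally, such a header could be assembled along the lines below.
This is only one reading of the paragraph above; in particular, which
bit carries the query/response flag is my own guess.

    /* One reading of the proposed header, as a packet builder.  Field
     * order, padding rule, and flag position are inferred, not specified.
     * Returns the header length written into buf, or -1 if it won't fit;
     * data octets would follow. */
    #include <string.h>

    int build_header(unsigned char *buf, int buflen,
                     unsigned int xid,      /* 15-bit transaction ID */
                     int is_response,       /* query/response flag   */
                     const char *contact)   /* ASCII contact name    */
    {
        int namelen = strlen(contact) + 1;  /* one terminating NUL...  */
        int i = 0;

        if (namelen & 1)
            namelen++;                      /* ...or two, to stay even */
        if (2 + namelen + 2 > buflen)       /* ID+flag, name, checksum */
            return -1;

        buf[i++] = ((xid >> 8) & 0x7f) | (is_response ? 0x80 : 0);
        buf[i++] = xid & 0xff;

        memset(buf + i, 0, namelen);        /* zero fill supplies NULs */
        memcpy(buf + i, contact, strlen(contact));
        i += namelen;

        buf[i++] = 0;                       /* optional checksum left  */
        buf[i++] = 0;                       /* zero (unused) here      */

        return i;
    }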

If there already exist RFCs describing protocols which implement any of
these schemes, please point them out.  Otherwise, someone with immediate
need for these services could write an RFC or two.

All that is necessary for the Internet to generally convert to this mode
of operation is that a version of Berkeley Unix get into the field which
supports multiplexed service and which has user programs which attempt
multiplexed operation if it is available.  I think the advantages of
this mode of operation will become so apparent that the rest of the
network will be converted within a year or two.

The time will come when both TCP and TP4 are considered hopelessly
outmoded antiques.  By making TCP services more flexible in this way,
perhaps that can be postponed.

Acuff@SUMEX-AIM.ARPA.UUCP (05/16/86)

   I like the idea of having a "port assignment service" that would
map a very large space of names into port numbers for a particular
host, but I would like to see "mature" protocols that have gotten to
the stage of being documented in RFCs be assigned ports, thus freeing
them from the extra connection overhead.  Then it would be easy for
protocol developers to experiment, and to make quick hacks, but the
most used protocols would not be affected.

	-- Rich
-------

tom@LOGICON.ARPA.UUCP (05/16/86)

On the subject of picking port numbers:

>Benson and I have already collided, AND WE'RE ON THE SAME FLOOR OF THE
>SAME BUILDING OF THE SAME COMPANY.  Since numbers don't mean anything,
>we both happened to pick 666 for our private port number.

I wonder if that's because they both knew that the protocols they were
working on would be "devilishly difficult" to debug :-) 

(Sorry, I just couldn't resist...)

Tom

chris@GYRE.UMD.EDU.UUCP (05/17/86)

Just to avert any confusion before it turns into a flame war:

	From: dab@mit-borax.arpa (David A. Bridgham)

	As far as I've found, this belief that some ports are secure
	while others aren't is only implemented by Berkekley [sic]
	Unix.  Since other IP implementations do not necessarily honor
	this belief, there is no security in using *secure* ports
	unless your network consists exclusively of machines running
	Berkelely Unix.

This is true, but not important.  The `proper' authorisation protocol,
as implemented by rcmd(), is to look for the host name in a list
of `trusted' hosts first.  Only after the host has been declared
`trusted' is the user name considered.  As long as one declares
only trust*worthy* hosts (specifically those that restrict access
to said ports) as trust*ed*, the protocol works.

For anything more complex, of course, a public-key cryptosystem or
other `better' authentication scheme is required.

Chris

JSLove@MIT-MULTICS.ARPA.UUCP (05/18/86)

Apparently the most controversial thing that I said in my last message
was that I thought all the TCP services should be supported by the
service multiplexor.  The service side of the multiplexor could offer
two types of service:  service associated with a port number and service
not associated with a port number.

If no multiplexor service is associated with a port number, then the
services are divided into two groups:  those available by contact name,
and those available by local port.  It would probably be simpler to
implement the service side of the multiplexor this way.  However, from
my point of view, if ANY service is available by both mechanisms, then
the simplest way to implement it is to have the multiplexor able to
access ALL well known TCP services which are available by port number.
By "well known" services, I mean services where one or more server
processes will accept any number of connections for the service.

This has several advantages.  First, I can easily imagine circumstances
where for some service, access by both methods is desired, perhaps
temporarily during cutover.  Second, the multiplexor service list can
easily list all the TCP services offered, not just those offered through
the multiplexor.  I would like to be able to obtain complete symbolic
service lists.

The biggest and most controversial advantage is that I think the
Internet made a big mistake in not using contact names in general in
this way.  There is a simple way of proving me wrong:  implement the
service multiplexor and see if, once they become fairly widespread,
there does not appear a definite preference for using the multiplexor
for new applications indefinitely.

I am not suggesting that existing user programs like TELNET be
converted.  I am noting my belief that this will seem like a good idea
later.  If this belief is mistaken, the multiplexor will still be useful
for addressing the needs of stream protocol developers.

Rather than implement TCBs which have no local port number associated
with them, I expect that the service port multiplexor would be simplest
to implement by having the server scan a list of listening TCBs.  This
means that every service would be available by both a port number and a
multiplexor name.  The services like TELNET would have "well known" port
numbers from 1 to 255, while private services could be available on more
obscure or even randomly chosen higher numbered ports.

If all services are available both ways, it is true that there is some
duplication of effort.  However, the user sides need not change.  In the
near term, we can't expect every host to run the multiplexor, so user
programs should still contact services with assigned numbers by the port
numbers.  Changing user programs may be considered only if the
multiplexor servers of this type become nearly universal.

Special operations like the service list might be implemented by passing
off the connection to a service which lists the services, or by handling
them in the multiplexor server itself.  I would prefer to implement certain
standard reserved operations like the service list as part of the
multiplexor protocol specification, so this choice may be covered by an
eventual specification.

I'd rather see symbolic names than a service which maps 32 bit service
numbers onto 16 bit service numbers, with the mappings from symbolic
names to 32 bit numbers done at the user host, and the mapping from 32
bit to 16 bit service identifiers done at the server host.  Perhaps this
description of the Sun protocol is defective, but if not, why have the
32 bit numbers at all?

mark@cbosgd.ATT.COM.UUCP (05/18/86)

>As far as I've found, this belief that some ports are secure while
>others aren't is only implemented by Berkekley Unix.  Since other IP
>implementations do not necessarily honor this belief, there is no
>security in using *secure* ports unless your network consists
>exclusively of machines running Berkelely Unix.

I wouldn't even go that far.  Even if your network is all based on
the UNIX conventions (the System V product is the same as Berkeley's)
you still don't really have much security.  You have to trust the
super users of all the systems on your network, and keep the cable
physically secure.  There are enough cheap PCs running UNIX these
days that any user with a PC can break in with a little cleverness.

Many protocols depend on higher levels of security, e.g. FTP uses
a password on every connection.  I won't claim that there aren't
security problems here, either, but the point is that for many
applications, magic numbers like 255 or 1024 don't mean much.
As far as I'm concerned, I can choose any 16 bit number.  In fact,
our current protocol being developed uses port 1624 and we're
quite happy.  Nonetheless, I hope to reserve the port number
to avoid a possible random future collision.  Of course, we will
have some sort of management decision about publishing our protocol
before we can publish it.

	Mark

craig@LOKI.BBN.COM.UUCP (05/19/86)

    Just a side note to these discussions.  Some IP protocols have 
only 8 bit port numbers (for example, RDP and HMP).  Any general
solution might think a bit about how to handle these beasties
(RDP was spec'd after TCP and UDP, so there is no guarantee that
new protocols will abide by the 16 bit port size unless Jon
has decided to be more demanding on protocol designers).

    You can't casually give away many ports in a 256 port space
for private use.

Craig Partridge
CSNET Technical Staff

Margulies@SCRC-YUKON.ARPA.UUCP (05/19/86)

    Date: Thu, 15 May 86 22:23 PDT
    From: Provan@LLL-MFE.ARPA


    just out of curiosity, why is it so important that you not publish your
    protocol as an RFC?  is it just secret and you don't want it to be copied
    or is there some other reason?

    just so my cards are on the table, i do work for a private, for profit,
    TCP protocol development firm, so be sure not to give me any secrets.

At this time, we have no protocols that we wish to treat as proprietary.

We do have protocols that are complex and not separately documented.
Commented Lisp code is clear to those who have to maintain the works.

To go back and write an RFC just to get a port number is a lot of work,
since we don't need it and I imagine that no one else will ever use
these protocols.

The importance of avoiding RFC's is that as a vendor it makes us
uncomfortable to have to interact with something like a number czar to
extend our product.  It introduces unpredictable timing and adds
work-load.

mrose@nrtc-gremlin.UUCP (05/19/86)

    I'm not going to get into an argument about how people doing
    "independent, exploratory research of completely unrelated problem
    domains" should interact with a numbers czar on call 24-hours a day.
    That's silly, right?  But wouldn't it be okay to have a tacit
    agreement between people doing development work about how the port
    space being used for "independent, exploratory research of completely
    unrelated problem domains" is divided up?  (i.e., Benson's group
    gets 600-699, David's group gets 700-799, etc., etc.).

    No matter how complex you make the port space (e.g., go from 16-bit
    integers to null-terminated strings of arbitrary length), you will
    still run into collisions.  There has to be some authority somewhere
    which maintains a registry, binding on all parties, which assigns
    protocols to the port space.

/mtr

MULHERN@SRI-KL.ARPA.UUCP (05/20/86)

sorry for my previous message -- please ignore it
-------