[comp.protocols.tcp-ip] network password protection/TCP spec

richb.UUCP@dartvax.UUCP (Richard E. Brown) (04/20/87)

TO THE MODERATOR:

I tried to post this a while ago, but received a reply from
some mailer daemon that this message should be mailed to you, not
posted.  I never received any responses, so I'm suspicious:  did
it arrive and was it posted?  If not, would you post it now?  If
it was, thanks.

Rich Brown

------------ MY MESSAGE FOLLOWS -------

  A while back, there was a discussion of protecting passwords,
  which led to a discussion of taking over someone's TCP
  connection.  One person noted that if a spoofer simply started
  sending in-sequence messages, they could take over the session
  and the victim would be relatively helpless.  Another person
  responded that he thought TCP specified that an ACK with a
  sequence number that was too high would result in a RST to clean
  out the connection.  (Further discussion revealed that TCP does
  *not* specify this -- in fact, it allows the session to
  continue.)

  My question:  Is this behavior (sending RST if the ACKs get too
  high) desirable?  Are there any pitfalls to doing this?
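
  For concreteness, here is a minimal C sketch of the check in
  question, using the RFC 793 send-state name SND.NXT as a field of
  a hypothetical control-block structure.  It is only an
  illustration, not code from any real TCP:

      #include <stdint.h>
      #include <stdio.h>

      struct tcb {
          uint32_t snd_una;   /* oldest unacknowledged sequence number */
          uint32_t snd_nxt;   /* next sequence number we will send     */
      };

      /* Nonzero if the segment acknowledges data we have never sent
         (an ACK that is "too high"), using modulo-2^32 comparison.   */
      static int ack_too_high(const struct tcb *tp, uint32_t seg_ack)
      {
          return (int32_t)(seg_ack - tp->snd_nxt) > 0;
      }

      int main(void)
      {
          struct tcb tp = { .snd_una = 1000, .snd_nxt = 1500 };

          /* RFC 793 says: drop such a segment and send an ACK.
             The behavior being asked about: send a RST instead.      */
          printf("ack 1400 too high? %d\n", ack_too_high(&tp, 1400));
          printf("ack 2000 too high? %d\n", ack_too_high(&tp, 2000));
          return 0;
      }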

  Here at Dartmouth, we have developed a stream protocol which
  runs over AppleTalk.  It is in use throughout the campus with
  our Macintosh terminal emulator, and several commercial vendors
  are also implementing this protocol.  If this RST behavior is
  useful, a stroke of a pen will add it to our protocol (well, you
  know what I mean).  Thanks.

  Rich Brown
  Dartmouth College
  Kiewit Computer Center
  Hanover, NH 03755
  603/646-3648

  richb@dartmouth.edu
  richb@dartmth.bitnet
  richb@dartvax.UUCP
  A0183 on AppleLink

JSLove@MIT-MULTICS.ARPA ("J. Spencer Love") (04/21/87)

I sent some of the messages on this topic, in particular the claim that
Multics sends RST segments when it receives precognitive
acknowledgements, and a subsequent justification of this behavior when
it turned out to be contrary to RFC 793.

If a precognitive acknowledgement is received, it comes from one of
three sources:  1) it is a paleolithogram which has been lying around in
the network exceeding its time-to-live, perhaps because of gateway
problems or sick hardware; 2) it indicates that the other end of the
connection has accepted data which you have not sent; or 3) someone is
trying to get you to reset the connection or update your window values.

The reason RFC 793 says that precognitive acknowledgements should be
ignored is that the authors considered only case 1 interesting.  Case 3
exists only in violation of the spec (as a way of closing windows), or
else falls into the same category as case 2: deliberate tampering with
the connection.

My argument was that case 2 was possible and that handling it by sending
a RST was sensible.  The argument against it is that this might cause
case 1 to abort connections spuriously.  My counterargument is that RST
packets generated via case 1 are very unlikely to abort the connection.
This is because the RST packet's sequence number must be in the window
acceptable to the receiving TCP.  This sequence number is the
precognitive ACK which caused the RST.  If the sequence number is
out-of-window, then the RST packet is ignored.  This is described on
page 37 of RFC 793.
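
As an illustrative sketch only (the field names are hypothetical, not
taken from Multics or any other implementation), the page-37 validity
test amounts to a modulo-2^32 window check in C:

    #include <stdint.h>
    #include <stdio.h>

    struct rcv_state {
        uint32_t rcv_nxt;   /* next sequence number expected      */
        uint32_t rcv_wnd;   /* receive window size in octets      */
    };

    /* Modulo-2^32 test for RCV.NXT <= seq < RCV.NXT + RCV.WND.    */
    static int rst_seq_acceptable(const struct rcv_state *rp, uint32_t seg_seq)
    {
        return (seg_seq - rp->rcv_nxt) < rp->rcv_wnd;
    }

    int main(void)
    {
        struct rcv_state rp = { .rcv_nxt = 5000, .rcv_wnd = 4096 };

        printf("RST with seq 6000 honored?   %d\n",
               rst_seq_acceptable(&rp, 6000));
        printf("RST with seq 100000 honored? %d\n",
               rst_seq_acceptable(&rp, 100000));
        return 0;
    }

A RST whose sequence number misses this range is simply dropped.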

The range of valid sequence numbers for the RST extends from the highest
acknowledged sequence number to the window edge, which is determined by
the available buffer space.  This window can never contain more than
65,535 acceptable values, and is usually much smaller, whereas the
sequence number space contains 4,294,967,296 possible values.
Thus, the probability that a paleolithogram-induced RST will be in the
window is actually quite small.  This is generally the case when initial
sequence numbers are chosen by a clock as described in RFC 793; it is
less true for implementations where all sequence numbers start at zero.
See pages 26 and 27 of the RFC.
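
(As a rough upper bound, assuming the maximum 65,535-octet window:
65,535 out of 4,294,967,296 values is roughly one chance in 65,000, a
probability of about 1.5 x 10^-5, and a more typical window shrinks
that further.)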

If your protocol can sort out acceptable RST packets from a much larger
set of possible RST packets, then case 1 above won't be very likely to
destroy connections.  However, if your protocol doesn't have this
property, then perhaps some other mechanism for detecting and defeating
subversion attempts would be more appropriate.

This mechanism isn't really good enough to be a dependable hacker trap.
If you really want protection you should try using encryption.  For
example, you could encrypt TCP packets as announced by an extended
security option (as yet unspecified) which appears in the IP header and
is thus in the clear.  The cipher chaining could use some function of
the IP header's fragment reassembly identifier field so that each TCP
packet would start out differently even for equivalent port values and
sequence numbers (assuming DES).  Key selection would be on a host-host
basis with regard to security level and perhaps type-of-service but not
with regard to port numbers.  This might make it very difficult to
subvert connections, but I know of no research to test the robustness of
this scheme.
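
To make the chaining idea concrete, here is a toy C sketch of seeding a
CBC-style chain from the IP identification field, so that two packets
with identical TCP headers and data encrypt differently.  The block
cipher below is a trivial stand-in, NOT DES, and nothing here comes
from any real or proposed implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8   /* DES block size in octets */

    /* Trivial stand-in for encrypting one block under a key.  This is
       NOT DES; it exists only so the chaining structure is runnable. */
    static void toy_block_encrypt(uint8_t blk[BLOCK], const uint8_t key[BLOCK])
    {
        for (int i = 0; i < BLOCK; i++) {
            uint8_t x = blk[i] ^ key[i];
            blk[i] = (uint8_t)((x << 1) | (x >> 7));   /* rotate left 1 */
        }
    }

    /* Encrypt a segment in place, CBC-style, seeding the chain from
       the 16-bit IP identification field.                            */
    static void encrypt_segment(uint8_t *seg, size_t len,
                                const uint8_t key[BLOCK], uint16_t ip_id)
    {
        uint8_t chain[BLOCK] = {0};
        chain[0] = (uint8_t)(ip_id >> 8);
        chain[1] = (uint8_t)(ip_id & 0xff);

        for (size_t off = 0; off + BLOCK <= len; off += BLOCK) {
            for (int i = 0; i < BLOCK; i++)
                seg[off + i] ^= chain[i];          /* chain previous block */
            toy_block_encrypt(seg + off, key);     /* stand-in for DES     */
            memcpy(chain, seg + off, BLOCK);       /* new chaining value   */
        }
    }

    int main(void)
    {
        uint8_t key[BLOCK] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        uint8_t seg_a[16] = "identical data!";
        uint8_t seg_b[16] = "identical data!";

        /* Same key, same data, different IP identification fields. */
        encrypt_segment(seg_a, sizeof seg_a, key, 0x1234);
        encrypt_segment(seg_b, sizeof seg_b, key, 0x4321);

        printf("ciphertexts differ? %d\n",
               memcmp(seg_a, seg_b, sizeof seg_a) != 0);
        return 0;
    }

The hard parts (key selection, negotiating the security option) are of
course not shown; the sketch only illustrates how the IP identifier
could perturb the chaining.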

For AppleTalk, how big are your acknowledgement fields?  Can packets get
queued in gateways or device drivers from which they might emerge later
and cause trouble?  Is any of this TCP technology applicable at all?
TCP and IP have huge headers which contain most commonly used fields in
fixed places to minimize packet processing overhead, so that their users
can pass many many packets per second through very high speed
interfaces.  A more byte-oriented protocol with many optional fields
might have very different design constraints.  For example, if there
were no gateways, then perhaps case 1 could never arise, so precognitive
acks would always indicate something very wrong.  On the other hand, some
protocols might consider any RST (abort) packet valid, so that you
should be very careful about possibly generating spurious aborts.

                    -- Spencer Love (617-253-2091 if email fails)