mckee@MITRE.ARPA (H. Craig McKee) (10/29/86)
Bob Knight: Back on 25 Feb 86 you noted a large FTP load when new host tables were released.  The responses seemed to concentrate on what might be done with the host tables.  A more fundamental issue is the behavior of FTP servers.  I suggest that network performance would be improved if FTP were revised as follows:

1. FTP should reject a request for a non-local transfer if "M" incoming or "N" outgoing jobs are already in progress.

   Rationale: When many concurrent FTP jobs are permitted, long connection times are necessary and everyone gets slow service.  Long connection times increase the probability that an equipment failure or a transitory glitch will cause the connection to be aborted; in the absence of a restart marker, the user then has to start from the beginning of the file.  I estimate that sites with a single 56 Kbps PSN circuit should limit concurrent FTP jobs to two incoming and two outgoing.

Once a request is rejected, what's a user to do? Sit around and harass the FTP server?  I suggest:

2. The FTP rejection message should allow the user to queue the request for later execution.

   Rationale: FTP jobs are not urgent.  So long as the user knows the job will eventually be completed, it doesn't matter if it takes several hours.

Clark, Mills, and Nagle, among others, have offered good advice on techniques to improve the performance of TCP and avoid network congestion.  The unstated, perhaps obvious, objective follows from queuing theory: never drive any path through the network at more than 70% of capacity.  Being able to reject a request will smooth the FTP load and should improve network throughput; being able to queue jobs will maintain the operational utility of FTP.

H. Craig McKee
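A minimal sketch of the limit-and-queue behavior proposed above, not drawn from any real FTP server: the limits of two incoming and two outgoing transfers follow the suggestion for a site with a single 56 Kbps PSN circuit, and all names here (request_transfer, transfer_finished, deferred) are hypothetical.

    from collections import deque

    MAX_INCOMING = 2               # "M": concurrent incoming transfers
    MAX_OUTGOING = 2               # "N": concurrent outgoing transfers

    active = {"in": 0, "out": 0}   # transfers currently in progress
    deferred = deque()             # requests queued for later execution

    def limit_for(direction):
        return MAX_INCOMING if direction == "in" else MAX_OUTGOING

    def request_transfer(direction, job):
        """Admit the transfer if a slot is free; otherwise queue it."""
        if active[direction] < limit_for(direction):
            active[direction] += 1
            return "started"
        deferred.append((direction, job))   # rejected now, run later
        return "queued"

    def transfer_finished(direction):
        """Release the slot, then start the oldest deferred job that fits."""
        active[direction] -= 1
        for i, (d, job) in enumerate(deferred):
            if active[d] < limit_for(d):
                del deferred[i]
                active[d] += 1
                return job        # caller would now begin this transfer
        return None

With two slots in each direction, a fifth simultaneous request is deferred rather than competing for the circuit, which is the load-smoothing effect the message argues for.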