drlove@well.sf.ca.us (David R. Love) (07/05/90)
What units of measure should one use in evaluating an OSI implementation? Are the old units, such as elapsed time and bits per second, sufficient? I've read some of the derisive comments here about the expected poor performance, and I'd be very interested in any actual experiences and examples of performance measurement. Does anyone know whether any of the voluntary test labs or interoperability test facilities provide this kind of information?

Is there any hard data comparing X.400 and FTAM in moving files of different sizes? I've heard people comment that for short messages the overhead of X.400 is extreme. Does it remain so regardless of file/message size, or is there a break-even point?

This is my first posting here, so I apologize in advance if I've committed any network transgression. At any rate, I would appreciate anyone's assistance, and I thank you in advance for same.

Best Regards,
David Love
kit@GATEWAY.MITRE.ORG (07/06/90)
The basis for evaluating an OSI implementation will vary depending upon whether you are measuring a store-and-forward implementation (such as X.400 messaging) or an interactive implementation (such as virtual terminal).

For the former, probably the most important measure is total throughput (for example, the average number of messages relayed per unit time). I have seen X.400 implementations that were able to relay more than one message per second, and other implementations that took on the order of seven seconds per message! I think it is clear that TCP/IP messaging systems have better performance, as do proprietary implementations (as opposed to X.400-based ones).

Benchmarks are hard to do, since you cannot be certain that the bottleneck is in the transit system under test rather than in the test driver system (the one that generates or receives the messages to run the benchmark). The only way to be certain you are testing the transit system is to have four drivers for one test system...ouch... Note that I have seen that submission and delivery carry higher execution overhead than the transit/relay case. I am not sure of the reason for this, though more checking/validation, acknowledgement, and safe-storage occur at the end systems.

For the latter (interactive) systems, probably the most important measure is the maximum number of concurrent users supported before performance is "unacceptably" degraded (assuming that it has "acceptable" performance for the first user). Again, driver computer systems can be used which build X.25 (or your favorite protocol) virtual circuit connections, drive the emulation of multiple concurrent interactive sessions, and journal the timestamps and session info (though the data collected may be difficult to summarize and tabulate into performance numbers).

Though I am not an FTAM expert, I think FTAM falls somewhere in between the two categories above.
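The throughput measure described above can be summarized with a short script. This is a generic sketch of my own, not tied to any particular MTA: it assumes only that a driver system has journaled one timestamp (in seconds) per message relayed, and computes the average relay rate over the run.

```python
# Sketch: summarize a store-and-forward benchmark from relay timestamps.
# The timestamp lists below are hypothetical, standing in for what a
# driver system might journal during a benchmark run.

def relay_throughput(timestamps):
    """Average messages relayed per second over the whole run."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    elapsed = max(timestamps) - min(timestamps)
    return (len(timestamps) - 1) / elapsed

# A fast MTA relaying about one message per second:
fast = [float(t) for t in range(0, 10)]      # 10 messages over 9 s
# A slow MTA taking about seven seconds per message:
slow = [float(t) for t in range(0, 70, 7)]   # 10 messages over 63 s

print(round(relay_throughput(fast), 2))   # → 1.0
print(round(relay_throughput(slow), 3))   # → 0.143
```

Note that this measures the transit system and driver together; as the post says, separating the two is the hard part of the benchmark.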
Some activities, such as remote access of a file directory to scan the files, are interactive. Other activities, such as sending a file to a remote system, may look more like a batch environment (though it is not necessarily store-and-forward). I would be interested in other people's comments on this.

Any benchmark is likely to be unrealistic for reasons that are hard to control. For example, disk and directory accesses and memory paging may be unrealistically reduced if you send a stream of activities that invoke the same OSI applications for the same originator and/or recipients. After the first one, subsequent repetitions of an activity may already have everything they need in memory, without having to go to the disk or directory.

Another important measure is the extent to which the implementation supports the features you want (which is a static evaluation rather than an executable one), such as the maximum transmission size, or any other feature...

FYI, on your query about message overhead for X.400: I measured the number of octets (bytes) transmitted out the comm-line from one X.400 MTA implementation, and saw that for a 1000-octet text message with one recipient, a total of about 1550 octets was sent (i.e., 550 octets of total overhead); that does not seem outrageous to me. If you were sending a 10-octet or a 10000-octet message, you would probably still have the same 550-octet overhead (depending upon whether the ASN.1 compiler uses definite or indefinite length string encoding). I also saw that each additional recipient in the header resulted in an additional 150 to 250 octets of overhead; it varies depending upon the size of the recipient name, and whether the name appears in both P1 and P2 or just in P1. (I believe the above numbers included the upper-layer OSI overheads, but not the lower-layer packet/frame overheads.) This was for a 1984-X.400 MTA over X.25.

Kit Lueder, MITRE Corporation.
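The overhead figures above can be turned into a rough answer to the break-even question. The model below is my own simplification: a fixed 550-octet envelope (the one-recipient figure quoted) plus roughly 200 octets per additional recipient (the midpoint of the 150-250 range); real encodings vary with name sizes and ASN.1 length forms, so treat these as illustrative numbers, not properties of any particular implementation.

```python
# Rough model of 1984-X.400 per-message overhead, using the figures
# quoted above. Illustrative approximation only.

FIXED_OVERHEAD = 550       # octets, for a one-recipient text message
PER_EXTRA_RECIPIENT = 200  # octets, midpoint of the 150-250 range

def octets_on_the_wire(body_octets, recipients=1):
    """Estimated total octets sent for one message."""
    return body_octets + FIXED_OVERHEAD + PER_EXTRA_RECIPIENT * (recipients - 1)

def overhead_fraction(body_octets, recipients=1):
    """Overhead as a fraction of the body size."""
    return (octets_on_the_wire(body_octets, recipients) - body_octets) / body_octets

for size in (10, 1000, 10000):
    print(size, round(overhead_fraction(size), 3))
# → 10 55.0        (overhead is 55x a tiny message)
# → 1000 0.55      (the 1550-octet measurement quoted above)
# → 10000 0.055    (overhead fades for large messages)
```

On this model there is no sharp break-even point: the overhead stays roughly constant in octets, so its relative cost simply shrinks as the message grows, which is why X.400 looks so expensive for short messages in particular.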