jackg@tekchips.UUCP (Jack Gjovaag) (02/17/84)
The problem Gene Spafford brings up about the possibility of a truly random key failing to encrypt a significant portion of a message doesn't seem to me to be a problem at all.  The probability of a random XOR key generating a long string of zeros, and thereby leaving the cleartext unencrypted, is no greater than the probability of producing a non-zero string of bits that encrypts the text into something that *appears* to be unencrypted.  Therefore, someone trying to decipher an encrypted message should take little comfort if he sees a meaningful string of characters in the encrypted text.

In fact, if it isn't inconvenient to generate the key and the encrypted text simultaneously, the key can be chosen to be a string of readable cleartext.  It is then sent over the insecure communication channel and the encrypted text is sent over the secure channel.  Anyone tapping the insecure channel will *always* see readable stuff, but what looks like cleartext will have no discoverable relation to the actual message (unless he can tap the secure channel as well).  Clearly, it isn't always convenient to use this scheme, but it does illustrate the fact that the distinction between key and encrypted text is artificial: they can be viewed as simply two components of a message, neither of which can be assumed to make any sense without the other.

But I digress.  It seems to me that it is always better to strive for maximum randomness in the generation of the key, even if that leaves open the possibility of significant parts of the cleartext passing through untouched, because an intruder cannot know whether that has happened.

Jack Gjovaag
Computer Research Lab
Tektronix, Inc.
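[A minimal sketch, not part of the original posting, of the XOR symmetry described above: plaintext, key, and ciphertext are three equal-length strings, any two of which determine the third.  The message and key strings below are invented purely for illustration.]

/* Sketch of XOR encryption with a readable key.  Any two of the three
 * strings (plaintext, key, ciphertext) recover the remaining one.
 */
#include <stdio.h>
#include <string.h>

static void xor_buf(const char *a, const char *b, char *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] ^ b[i];            /* bytewise exclusive-or */
}

int main(void)
{
    const char plain[] = "ATTACK AT DAWN";   /* the real message        */
    const char key[]   = "MEET ME TONITE";   /* a *readable* key text   */
    size_t n = strlen(plain);                /* both strings are 14 bytes */
    char cipher[sizeof plain], recovered[sizeof plain];

    xor_buf(plain, key, cipher, n);          /* encrypt                 */
    xor_buf(cipher, key, recovered, n);      /* decrypt with the key    */
    recovered[n] = '\0';
    printf("recovered: %s\n", recovered);

    /* Symmetrically, cipher XOR plain gives back the key, showing that
     * the two components are interchangeable: */
    xor_buf(cipher, plain, recovered, n);
    recovered[n] = '\0';
    printf("key:       %s\n", recovered);
    return 0;
}

[Running this prints the recovered message and then the key, so whichever component travels over the tapped channel reveals nothing about the message without the other.]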
henry@utzoo.UUCP (Henry Spencer) (02/21/84)
In a discussion of one-time pads and such, Jack Gjovaag suggests:

    In fact, if it isn't inconvenient to generate the key and the
    encrypted text simultaneously, the key can be chosen to be a
    string of readable cleartext...

NO!  A one-time pad is truly unbreakable -- insufficient information available even in theory -- only if the key is truly random.  Readable cleartext is not random!

It is true that the redundancies introduced into the ciphertext by a nonrepeating but nonrandom key are much more subtle than those that are introduced by a random but repeating key.  They nevertheless are there, and methods exist for attacking such a cipher by exploiting those redundancies.  Using (say) the text of a book as the key to a cipher is a very old idea.  It's not useful for military field communications, but it is *very* attractive to spies because it eliminates the need for key listings that are blatantly ciphering aids.  This attractiveness to a very undesirable class of people (if you are the ones being spied on, that is!) has meant considerable effort invested in techniques for cryptanalysis of such ciphers.  Successful attacks were devised a long time ago.

That aside, Jack's basic point is correct: you can view ciphertext and keytext symmetrically, as two sequences of bits that need to be combined to yield a message.  The pure form of this is the one-time pad, which achieves absolute secrecy by having one of the two bit sequences transmitted by a completely secure means.  (Please, no quibbles about "completely secure" -- incomplete security of key transmission simply means less-than-absolute secrecy of message.)  The problem is the sheer volume of key needed.

Practically all other cipher systems can be viewed as ways to reduce the volume of key transmission by generating the "real" key from a smaller distributed key.  Cryptanalysis becomes possible because this generation process inevitably introduces redundancies; the goal of the cipher designer is to make these redundancies too subtle to be exploited effectively.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
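[A minimal sketch, not part of the original posting, of the last point: a short distributed key expanded into the "real" keystream.  The keystream() expander below uses a toy linear congruential step chosen only for illustration; its obvious regularities are exactly the kind of redundancy a cryptanalyst exploits, which is why real designs work much harder at this step.]

/* Toy stream cipher: a small shared key is expanded into a long
 * keystream, which is XORed with the message.  Deliberately weak.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Expand a short seed into a byte keystream (toy LCG expander). */
static void keystream(uint32_t seed, unsigned char *out, size_t n)
{
    uint32_t state = seed;
    for (size_t i = 0; i < n; i++) {
        state = state * 1103515245u + 12345u;   /* LCG step            */
        out[i] = (unsigned char)(state >> 16);  /* take some middle bits */
    }
}

int main(void)
{
    const char plain[] = "ATTACK AT DAWN";
    size_t n = strlen(plain);
    unsigned char ks[64], cipher[64], recovered[64];
    uint32_t shared_key = 0xC0FFEE;             /* small distributed key */

    keystream(shared_key, ks, n);               /* sender expands key    */
    for (size_t i = 0; i < n; i++)
        cipher[i] = plain[i] ^ ks[i];           /* encrypt               */

    keystream(shared_key, ks, n);               /* receiver regenerates  */
    for (size_t i = 0; i < n; i++)
        recovered[i] = cipher[i] ^ ks[i];       /* decrypt               */
    recovered[n] = '\0';
    printf("recovered: %s\n", (char *)recovered);
    return 0;
}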