derek (04/14/83)
/*
 * I have stumbled upon an algorithm which generates all numbers between
 * 1 and 32767 inclusive in a seemingly random order.  The idea is to
 * start with a seed, then generate the next number in the sequence by
 * the following steps:
 *
 *	(1) exclusive-or bits 0 and 1
 *	(2) right shift by one position
 *	(3) if step (1) yielded TRUE, set bit 14
 *
 * I am submitting this here in hope of learning whether the sequence
 * can be classified as a pseudo-random sequence suitable for use in
 * random number generators, and to see if anyone can tell me why it
 * works (I do not know).
 *
 *	Derek Andrew	utah-cs!sask!derek	University of Saskatchewan
 */

#include <stdio.h>

#define random(x) (((x) >> 1) | ((((x) & 1) ^ (((x) >> 1) & 1)) << 14))

int main(void)
{
	register int x, count, seed;

	x = seed = 3;
	for (count = 0; count < 32767; count++) {
		x = random(x);
		if (x == seed) {
			printf("We have looped to original seed.\n");
			break;
		}
	}
	printf("count = %d value = %d\n", count, x);
	return 0;
}
leichter (04/15/83)
The algorithm you describe is a particular case of a GFSR (Generalized Feedback Shift Register). To make a GFSR: take a shift register, shifting right one bit every "tick". Add on XOR gates that cause the value moved into any cell of the register to be the XOR of the value shifting in from the left and the value shifting out of any other cell.

The resulting operation can be used in a number of ways. If you feed a value into the leftmost cell, and keep shifting as long as you have new input bits, what ends up in the register is some CRC - i.e. the remainder of the input, considered as a polynomial over the integers mod 2, when divided by some fixed polynomial. (There is an XOR gate for each term of the fixed polynomial; the degree of the fixed polynomial equals the number of cells in the register.)

With no input, successive shifts cycle through some subset of the possible n-bit binary numbers. The length of the cycle depends on the polynomial chosen, and can be up to 2^n - 1. (0 stays 0 no matter what the polynomial, so it can't occur in a maximum-length cycle; some starting values and polynomials produce 0 after a while.) The values in the cycle are quite random, for appropriate values of the polynomial; random number generators based on this technique have been studied; I believe they are called Tausworthe generators. An article appeared in Computing Surveys about 4 years ago on this - specifically, it discussed the use of such generators to produce encryption devices.

Interesting side note: there is a really clever use for GFSR's in LSI design. Suppose you need a piece of LSI circuitry that counts n clock ticks. You don't need the intermediate values for anything; you just need to know when n ticks have gone by, where n is some fixed constant. You could build a register, load it with n, and decrement it on every clock tick until you get 0; but decrementing takes a LOT of hardware compared to a GFSR, which is very little more than the flip-flops for the register and some wires. So... you find some GFSR and an initial value k such that putting k into the GFSR and shifting n times gives you 0 (or any other very-cheap-to-test value - 0 is hard to beat). Your count loop is now: load k in the register; shift on each clock tick; done when the register is 0. (Finding such GFSR's and k's isn't, I gather, too hard, and in any case I'm sure tables of small GFSR's and k's for all small values of n are available by now. A sketch follows below.)

-- Jerry
decvax!yale-comix!leichter leichter@yale
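To make the counter trick concrete, here is a sketch (mine, not from the posting above) using the taps from Derek's generator. One caveat: the all-zero state is a fixed point of a purely linear register, so it can never be reached from a nonzero state; the sketch therefore counts down to 1 instead of 0. Since each tick is invertible on the nonzero states, the load value k can be found by simply running the register backwards n steps from the target.

	/* counter.c -- the GFSR counter trick, using the taps from
	 * Derek's generator (bit0 XOR bit1 fed back into bit 14).
	 */
	#include <stdio.h>

	/* one tick: shift right, feed bit0 ^ bit1 into bit 14 */
	static unsigned tick(unsigned x)
	{
		return (x >> 1) | (((x ^ (x >> 1)) & 1) << 14);
	}

	/* one tick backwards: recover the state that tick() maps to y */
	static unsigned untick(unsigned y)
	{
		return ((y << 1) & 0x7fff) | (((y >> 14) ^ y) & 1);
	}

	int main(void)
	{
		unsigned n = 1000;	/* we want to count n clock ticks */
		unsigned k = 1, x, ticks;

		/* find the load value k by running backwards n steps from 1 */
		for (ticks = 0; ticks < n; ticks++)
			k = untick(k);

		/* the "counter": load k, shift every tick, done at 1 */
		for (ticks = 0, x = k; x != 1; ticks++)
			x = tick(x);

		printf("loaded 0x%04x, reached 1 after %u ticks\n", k, ticks);
		return 0;
	}

In hardware the backward run is done once, offline; the circuit itself is only the register, the feedback wire, and a compare-against-1.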
soreff (04/15/83)
There is a class of pseudorandom number generators which use shifting and exclusive-ORing of an n-bit shift register. They will generate all 2^n states of the n-bit register except 00...0. I seem to recall that they are connected to the "primitive polynomials" in the theory of Galois fields. Look at any good book on error-correcting codes for more information.

-Jeffrey Soreff (hplabs!soreff)
lew (04/15/83)
Jerry Leichter should have added that Derek Andrew's generator is a VERY particular case of the generalized shift register, since bit 14 is the only bit which is not a simple shift of a previous bit. Note that:

	b14[n] = b14[n-14] ^ b14[n-15]		( ^ == xor )

The seed is just a specification of the initial conditions, b14[0] thru b14[-14]. By recursion, you can easily write b14[n] as an xor sum of these bits, for any n.

This is really just a one-bit random number generator. Bits 13 thru 0 are just a trace of the last 14 values of bit 14. This disqualifies it for practical use.

There is an article in the November 1970 Bell System Technical Journal entitled "A Fast Method of Generating Digital Random Numbers", by C.M. Rader, L.R. Rabiner, and R.W. Schafer. It describes generators of the form:

	x[n] = rot( x[n-1] ^ x[n-2] )

In a footnote, they mention that each bit can be defined as an xor sum of previous values of the same bit, so that the effect is that of a set of one-bit generators running in parallel.

Lew Mammel, Jr. ihuxr!lew
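Both of Lew's claims are easy to check numerically. The following sketch (mine, using the seed and update rule from Derek's program) verifies that each low bit is a delayed copy of bit 14, and that bit 14 obeys the stated recurrence:

	/* trace.c -- check that bits 13..0 are delayed copies of bit 14,
	 * and that bit 14 satisfies b[n] = b[n-14] ^ b[n-15].
	 */
	#include <stdio.h>

	static unsigned step(unsigned x)
	{
		return (x >> 1) | (((x ^ (x >> 1)) & 1) << 14);
	}

	int main(void)
	{
		unsigned x = 3;		/* Derek's seed */
		int b[1000];		/* history of bit 14 */
		int n, k, ok = 1;

		for (n = 0; n < 1000; n++) {
			b[n] = (x >> 14) & 1;
			/* bit k now is bit 14 as it was (14 - k) steps ago */
			for (k = 0; k < 14; k++)
				if (n >= 14 - k && ((x >> k) & 1) != b[n - (14 - k)])
					ok = 0;
			/* the one-bit recurrence */
			if (n >= 15 && b[n] != (b[n - 14] ^ b[n - 15]))
				ok = 0;
			x = step(x);
		}
		printf(ok ? "both identities hold\n" : "mismatch!\n");
		return 0;
	}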
brunton (04/15/83)
Further to Jerry's note on GFSR's (Generalized Feedback Shift Registers), this subject is at the heart of finite field arithmetic. In particular, the example of XORing bits 0 and 1, doing a shift on a 15-bit shift register, and stuffing the XOR result into the MSB position is a specific example of a Galois field generating circuit over the field GF(2^15).

As Jerry also mentioned, a circuit (or field generator) of this type is capable of developing up to (2^15 - 1) different 15-bit combinations. This will be true only if the feedback polynomial (or feedforward polynomial in this case) is first irreducible over the finite field being generated, and second primitive. Peterson and Weldon's book on Error-Correcting Codes gives an introduction to this subject and a table of feedback polynomials that give maximal length.

As an aside, it is also true of the particular polynomial used in the example that, besides its roots being linearly independent, the roots of the reciprocal polynomial are also linearly independent. This implies that the reciprocal polynomial is also irreducible, and primitive.

Scott Brunton ucbvax!hplabs!brunton Hewlett-Packard Labs.
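Both periods are easy to confirm by brute force. Derek's taps at bits 0 and 1 correspond to the polynomial x^15 + x + 1, and the reciprocal x^15 + x^14 + 1 corresponds to taps at bits 0 and 14 (this tap-to-polynomial correspondence is my reading of the recurrence, not spelled out in the posting). A sketch; note it verifies maximal period empirically rather than testing primitivity algebraically:

	/* period.c -- count steps until the register returns to its
	 * starting state, for a given tap mask.
	 */
	#include <stdio.h>

	static unsigned period(unsigned taps)
	{
		unsigned x = 1, count = 0;
		do {
			/* feedback = XOR (parity) of the tapped bits */
			unsigned t = x & taps, fb = 0;
			while (t) { fb ^= t & 1; t >>= 1; }
			x = (x >> 1) | (fb << 14);
			count++;
		} while (x != 1);
		return count;
	}

	int main(void)
	{
		printf("x^15 + x    + 1: period %u\n", period(0x0003));
		printf("x^15 + x^14 + 1: period %u\n", period(0x4001));
		return 0;
	}

Both lines print 32767, i.e. both registers run through every nonzero 15-bit state.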
don (04/23/83)
Subject: random numbers

Volume 2 of Knuth should be consulted if you want random number generators. Random bit munging is not likely to be good.

If you want to use pseudo-random numbers for cryptography, beware! Linear shift registers are known to be worthless for cryptography nowadays. Non-linear congruential methods might be good, but a lot of the work on them is probably secret.

Since the UNIX crypt command was broken recently, I have started using DES to encrypt files. It is unfortunate that people have not taken to heart the issue of information privacy enough to agree on secure file encryption and encrypted mail protocols. The UNIX secretmail is also dubious, since the knapsack method it uses is crumbling rapidly under the assault of modern cryptographers.
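The standard reason linear shift registers are cryptographically worthless - not named in the posting above - is the Berlekamp-Massey algorithm: from only 2n bits of output it recovers the shortest n-stage register generating the stream. A sketch, run here against the bit-14 stream of Derek's generator:

	/* bm.c -- Berlekamp-Massey over GF(2): recover the shortest LFSR
	 * that produces an observed keystream.
	 */
	#include <stdio.h>
	#include <string.h>

	#define NBITS 64

	int main(void)
	{
		int s[NBITS];		/* observed keystream bits */
		unsigned x = 3;
		int i, n;

		for (n = 0; n < NBITS; n++) {	/* bit 14 of Derek's generator */
			s[n] = (x >> 14) & 1;
			x = (x >> 1) | (((x ^ (x >> 1)) & 1) << 14);
		}

		int c[NBITS] = {0}, b[NBITS] = {0}, t[NBITS];
		int L = 0, m = 1;
		c[0] = b[0] = 1;
		for (n = 0; n < NBITS; n++) {
			int d = s[n];		/* discrepancy */
			for (i = 1; i <= L; i++)
				d ^= c[i] & s[n - i];
			if (d == 0) {
				m++;
			} else if (2 * L <= n) {
				memcpy(t, c, sizeof c);
				for (i = 0; i + m < NBITS; i++)
					c[i + m] ^= b[i];
				L = n + 1 - L;
				memcpy(b, t, sizeof t);
				m = 1;
			} else {
				for (i = 0; i + m < NBITS; i++)
					c[i + m] ^= b[i];
				m++;
			}
		}

		printf("LFSR length L = %d, taps:", L);
		for (i = 1; i <= L; i++)
			if (c[i])
				printf(" %d", i);
		printf("\n");
		return 0;
	}

It prints L = 15 with taps 14 and 15 - exactly Lew's recurrence, recovered from 64 observed bits. Used as a keystream, such a generator gives itself away almost immediately.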
mat (04/23/83)
hplabs!hansen has stated that the UN*X crypt(I) has been broken. Can anyone give more info on this? On how it is broken? By whom the work was done? And, for that matter, how it worked in the first place? I know it was originally developed from a ``wheel-and-key'' machine, but I don't know how that works either.

Come on, fellas, there's a newsgroup out here! Can't someone post copies of announcements? Maybe if I see enough references to useful journals, etc., I won't need to ask eventually!

And the knapsack method is crumbling? Is factoring of large numbers moving ahead that fast?

-!hou5e!mat Mark Terribile (Duke of deNet) Spiridellis and Associates at American Bell, Holmdel
leichter (04/23/83)
Don remarks that the knapsack method used in Unix secret mail "is crumbling rapidly". Just to provide a little further data: the basic knapsack system has completely crumbled, in the sense that there is a polynomial-time algorithm that will break it (although it's a slow algorithm...). A reference is:

	"A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem", Adi Shamir, Proceedings of the 23rd FOCS (roughly December 82, I don't have a copy), pp. 145-152.

Shamir has proposed some variations on the basic scheme, including an "ultimate" variation that appears in some sense to be the strongest one possible ("Embedding Cryptographic Trapdoors in Arbitrary Knapsack Systems" - I can't provide a reference, I'm afraid - all I have is a 6-month-or-so-old preprint). The security of these variations is not known with certainty, but at best one can say that no one can yet see an attack - they don't look too healthy.

-- Jerry
decvax!yale-comix!leichter leichter@yale
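For readers who haven't seen it, the "basic" system in question works as follows: the private key is a superincreasing sequence, disguised by modular multiplication into the public weights. A toy sketch (mine, with small textbook-style parameters of my choosing; real keys used a few hundred weights, but the trapdoor structure Shamir's attack exploits is the same):

	/* mh.c -- toy basic Merkle-Hellman knapsack. */
	#include <stdio.h>

	#define NW 8
	/* private: superincreasing weights (each exceeds the sum before it) */
	static const unsigned w[NW] = { 2, 7, 11, 21, 42, 89, 180, 354 };
	#define Q 881u		/* private modulus, > sum of all w */
	#define R 588u		/* private multiplier, coprime to Q */

	int main(void)
	{
		unsigned b[NW], rinv, c = 0, cp, rec = 0;
		unsigned msg = 0x5b;	/* 8 message bits, one per weight */
		int i;

		/* public key: disguise the easy knapsack */
		for (i = 0; i < NW; i++)
			b[i] = w[i] * R % Q;

		/* encrypt: sum the public weights selected by the bits */
		for (i = 0; i < NW; i++)
			if (msg >> i & 1)
				c += b[i];
		printf("ciphertext = %u\n", c);

		/* decrypt: undo the multiplier (brute-force the inverse,
		 * fine at this size), then solve the superincreasing
		 * knapsack greedily from the largest weight down */
		for (rinv = 1; rinv * R % Q != 1; rinv++)
			;
		cp = c * rinv % Q;
		for (i = NW - 1; i >= 0; i--)
			if (cp >= w[i]) {
				cp -= w[i];
				rec |= 1u << i;
			}
		printf("recovered message = 0x%02x\n", rec);	/* 0x5b */
		return 0;
	}

The whole trapdoor is the pair (Q, R); Shamir's result is that an attacker can find a working equivalent of it from the public weights alone, in polynomial time.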
leichter (04/23/83)
In answer to Mark Terribile:

Re knapsack systems: see my other submission to this newsgroup a couple of minutes ago.

Re factoring large numbers: this has nothing to do with the knapsack system. As a separate problem, though, progress is being made, although all known algorithms are still super-polynomial - although they are sub-exponential. (Current best is something like exp(sqrt(log n log log n)) or some crazy thing like that.) My personal feeling is that factoring MAY be very hard, but we really don't have much good reason to believe it at this point. (The strong argument FOR hardness has always been that hundreds of years of effort by the best mathematicians hadn't cracked the problem. Unfortunately, the efforts had as a goal a deterministic factoring method for all numbers. The notion of probabilistic algorithms is only a couple of years old, and thus we have almost no evidence that there is not a fast probabilistic algorithm. Note that the Solovay-Strassen probabilistic primality tester would have been perfectly understandable to Fermat - but it wasn't thought of until very recently. Also note that, inspired by the probabilistic algorithms, deterministic algorithms have now been devised.)

The reason factoring is interesting, of course, is that the other major public-key cryptosystem, RSA, depends on its being hard. It turns out that there is an attack on RSA - and, in fact, on ALL public-key systems - that has nothing to do with factoring. It's a bit long to go into here, but the basic argument goes like this: Think about, say, RSA. Suppose I had some way to read just the bottom bit of an encrypted message. Could I do anything? In fact, one can show that the ability to determine the bottom bit in polynomial time makes it possible to read the whole message in polynomial time. (There is a clever trick that lets you essentially "shift the message right one bit" once you know the bottom bit.) Now how do I read the bottom bit? I - the interceptor - don't. Instead, I trick you (the recipient) into reading it for me. The particular way in which I trick you depends on the protocol we use on our network, and it MAY be possible to come up with a safe protocol - but no one knows how, at the moment.

Here is an example of an unsafe protocol: in response to a yes-or-no question, send (the encryption of) an even number for yes, or an odd number for no. (Looks innocent, no?) I wait until you send me a message asking if I want to join you for pizza. I send you the message I intercepted. You decrypt, and check the bottom bit. If it's 0, you think I said yes and go to the pizza place; otherwise, you don't go. All I have to do is watch what you do...

The reference for this work is: "Why and How to Establish a Private Code on a Public Network", by Goldwasser, Micali, and Tong, Proceedings of the 23rd FOCS, pp. 134-144. Warning: it's a pretty technical article, and slow going.

-- Jerry
decvax!yale-comix!leichter leichter@yale
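The two pieces of Jerry's argument can be combined into a working miniature. Multiplying an RSA ciphertext by the encryption of 2 re-encrypts twice the plaintext, and because the modulus is odd, the parity of 2m mod n tells you which half of the current interval m lies in - so the bottom-bit oracle drives a binary search. A sketch with toy parameters of my choosing (p = 61, q = 53, e = 17; real moduli are of course far larger):

	/* lsb.c -- the "read one bit, read them all" attack, in
	 * miniature.  The oracle stands in for a recipient tricked into
	 * leaking the low bit of a decryption.
	 */
	#include <stdio.h>
	#include <math.h>

	typedef unsigned long long u64;

	#define N 3233ULL	/* public modulus, 61 * 53 */
	#define E 17ULL		/* public exponent */
	#define D 2753ULL	/* private exponent, known only to the oracle */

	/* modular exponentiation by repeated squaring */
	static u64 modpow(u64 b, u64 e, u64 n)
	{
		u64 r = 1;
		for (b %= n; e; e >>= 1) {
			if (e & 1)
				r = r * b % n;
			b = b * b % n;
		}
		return r;
	}

	/* the duped recipient: decrypts, reveals only the bottom bit */
	static int oracle(u64 c)
	{
		return (int)(modpow(c, D, N) & 1);
	}

	int main(void)
	{
		u64 m = 1234;			/* the secret message */
		u64 c = modpow(m, E, N);	/* what the interceptor sees */
		u64 two_e = modpow(2, E, N);	/* encryption of 2 */
		double lo = 0, hi = N;
		int i;

		/* Invariant: m lies in [lo, hi).  Each round doubles the
		 * hidden plaintext mod N; its parity says which half of
		 * the interval m is in, since reducing mod the odd N
		 * flips the parity exactly when the doubling wraps. */
		for (i = 0; i < 12; i++) {	/* 2^12 > N: 12 queries do it */
			c = c * two_e % N;
			if (oracle(c) == 0)
				hi = (lo + hi) / 2;
			else
				lo = (lo + hi) / 2;
		}
		printf("recovered m = %.0f (true m = %llu)\n", ceil(lo), m);
		return 0;
	}

After 12 queries the interval is narrower than 1, so the single integer left in it is the message - without touching the factoring problem at all, which is exactly Jerry's point.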
pn (04/23/83)
Could you say more about the UNIX crypt command and how it was broken?