wlp@calmasd.Prime.COM (Walter Peterson) (09/11/89)
Perhaps someone can answer a question that has bugged me since I first
learned C almost 10 years ago.

C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
operators for AND (&&) and OR (||), but not for XOR (^^).  Why?  What
happened to the boolean XOR operator?  If && makes sense for the boolean
AND and || makes sense for the boolean OR, why doesn't ^^ make sense for
the boolean XOR?

Most assemblers that I know have XOR as a single instruction, so why make
people go to the trouble of writing something like (a || b) && (!(a && b))
when a ^^ b is so much "cleaner"?

Can anyone tell me why this was left out of the language?  Is there any
chance that some future version of ANSI C will have it?
-- 
Walt Peterson.  Prime - San Diego R&D (Object and Data Management Group)
"The opinions expressed here are my own."
ok@cs.mu.oz.au (Richard O'Keefe) (09/11/89)
In article <575@calmasd.Prime.COM>, wlp@calmasd.Prime.COM (Walter Peterson) writes:
> C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
> operator for AND (&&) and OR (||), but not for XOR (^^). Why?

No, && is *not* a "boolean" operator, it is a "SHORT-CIRCUIT" operator.
&& and || are in fact dispensable:

	(a) && (b)	has the same effect as	(a) ? !!(b) : 0
	(a) || (b)	has the same effect as	(a) ? 1 : !!(b)

On a surprising number of machines, short-circuit operators are _slower_
than boolean operators for simple expressions, typically because
conditional branches do nasty things to pipelines.  If you want

	and(x,y) = 1 if x != 0 and y != 0, 0 otherwise
	or(x,y)  = 1 if x != 0 or  y != 0, 0 otherwise
	xor(x,y) = (x != 0) != (y != 0)

just

	#define and(x,y) (!!(x) & !!(y))
	#define or( x,y) (!!(x) | !!(y))
	#define xor(x,y) (!!(x) ^ !!(y))

or, for an even prettier hack,

	#define xor(x,y) (!(x) != !(y))

> What happened to the boolean XOR operator?

Nothing happened to it.  C inherited short-circuiting operators from B,
which copied the idea from BCPL.

> If && makes sense for the boolean AND and || makes sense for the
> boolean OR, why doesn't ^^ make sense for the boolean XOR ?

Because && is not boolean AND but short-circuiting AND.  The point of
(a) && (b) is to skip evaluating (b) when (a) is false; that lets you do
things like (i >= 0 && a[i] != x) safely.  But in exclusive or, what can
you skip?

> Most assemblers that I know have XOR as a single instruction so why
> make people go to the trouble of writing something like
> (a || b) && (!(a && b)) when a ^^ b is so much "cleaner".

Most computers have an AND or AND-NOT instruction.  But && doesn't
generate that instruction, it generates branches.  AND comes from the &
operator.  Most computers have an OR instruction.  But || doesn't
generate that instruction, it generates branches.  OR comes from the |
operator.  In just the same way, if you want to generate an XOR
instruction, use ^.  [A really good compiler might well notice that an
expression involving && or || is simple and generate an instruction
sequence using AND or OR.]

Nobody has to write (a || b) && !(a && b).  As I said above, just

	#define xor(x,y) (!(x) != !(y))

and then write xor(a,b), which is IMHO rather clearer than a^^b.
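To make the behaviour concrete, here is a minimal test program using those
macros (the operand values 4 and 2 are arbitrary; any non-zero operand
counts as true):

        #include <stdio.h>

        /* Normalising macros as above: !! maps any non-zero value to 1
           and zero to 0, so the bitwise operators then behave as
           boolean ones. */
        #define and(x,y) (!!(x) & !!(y))
        #define or( x,y) (!!(x) | !!(y))
        #define xor(x,y) (!!(x) ^ !!(y))

        int main(void)
        {
            int a = 4, b = 2;                     /* non-0/1 "true" values */

            printf("a & b    = %d\n", a & b);     /* 0: bitwise AND of 4 and 2 */
            printf("a && b   = %d\n", a && b);    /* 1: short-circuit AND */
            printf("and(a,b) = %d\n", and(a, b)); /* 1 */
            printf("xor(a,b) = %d\n", xor(a, b)); /* 0: both operands are true */
            printf("xor(a,0) = %d\n", xor(a, 0)); /* 1 */
            return 0;
        }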
johnl@esegue.segue.boston.ma.us (John R. Levine) (09/11/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?

Because it's nowhere near as useful.  The && and || operators are useful
because they guarantee left to right short circuit evaluation.  You can't
find the value of an XOR without evaluating both operands, so short
circuit evaluation is meaningless.

>[Why have to write] (a || b) && (!(a && b)) when a ^^ b is so much "cleaner"?

If you know that a and b actually have the values 0 or 1, typically
because they are comparison expressions, you can write either of:

	a ^ b
	a != b

and get exactly what you want.  If they're not strictly 1 or 0, these
will do:

	!a ^ !b
	!a != !b

If this is too ugly, you can always write an XOR(a,b) macro.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 492 3869
johnl@esegue.segue.boston.ma.us, {ima|lotus}!esegue!johnl, Levine@YALE.edu
Massachusetts has 64 licensed drivers who are over 100 years old.  -The Globe
henry@utzoo.uucp (Henry Spencer) (09/11/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?
>What happened to the boolean XOR operator ? ...

Groan.  This comes up regularly.  The ^^ operator would in fact be of
very limited use.  The big point of && and || is their sequencing
properties, e.g. the second operand of && is evaluated only if the first
is true.  This cannot be done with ^^, which would inherently need to
evaluate both operands.

>(a || b) && (!(a && b)) when a ^^ b is so much "cleaner".

Try "!a != !b", which has the same effect with not many more symbols.
Or if your operands are boolean (0 or 1) to begin with, "a ^ b" or
"a != b".

>Can anyone tell me why this was left out of the language ?

Because nobody considered it useful enough to put it in.

>Is there any chance that some future version of ANSI-C will have it ?

It's very unlikely.
-- 
V7 /bin/mail source: 554 lines.|  Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages. |  uunet!attcan!utzoo!henry henry@zoo.toronto.edu
richard@aiai.ed.ac.uk (Richard Tobin) (09/12/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?

Well, && differs from & in two ways: it treats its arguments just as
zero or non-zero (rather than bitwise) and it does "short-circuit"
evaluation (that is, in A && B, if A is false B will not be evaluated).
Certainly a non-bitwise xor makes sense, but a short-circuit one doesn't.

>Most assemblers that I know have XOR as a single instruction

Ah, but what they have is a bitwise-xor instruction, which is what C
*does* provide.

>so why make people go to the trouble of writing something like
>(a || b) && (!(a && b)) when a ^^ b is so much "cleaner".

I'd guess it's fairly rare compared with && and ||.  If you want it,
((a==0) != (b==0)) will give the result you want, and probably not be
any less efficient than it should be.

Note that C also doesn't provide a bitwise equivalence operator
("exclusive nor").

As an aside, I was amused by the code produced by gcc for ~(a^b) - while
cc produces (on a Sparc) the xnor instruction, gcc produces this:

	xor	%i0,%i1,%i0
	xnor	%g0,%i0,%i0

(it's using xnor just to do the negation).

-- Richard
-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin
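For illustration, a tiny sketch of the two flavours (the values here are
picked arbitrarily): ~(a ^ b) is how C spells bitwise equivalence, and one
way to write the boolean version is !a == !b.

        #include <stdio.h>

        int main(void)
        {
            unsigned a = 0x0F, b = 0x3C;

            /* bitwise equivalence ("exclusive nor"): complement of xor */
            unsigned bit_eqv = ~(a ^ b);

            /* boolean equivalence: 1 iff a and b are both zero or both non-zero */
            int bool_eqv = (!a == !b);

            printf("~(a ^ b) = %#x\n", bit_eqv);
            printf("!a == !b = %d\n", bool_eqv);
            return 0;
        }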
chris@mimsy.UUCP (Chris Torek) (09/12/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?

The best explanation I have ever heard is that the important property of
&& and || is not their `booleanness', but rather that they are
short-circuit operators (they avoid evaluating the right hand side
whenever the left hand side gives the answer by itself).  By definition,
exclusive or cannot short circuit, so the `^^' operator would be
misleading.

>... why make people go to the trouble of writing something like
>(a || b) && (!(a && b)) when a ^^ b is so much "cleaner".

The simplest expression for `boolean xor' is

	!a != !b

since `!' is a boolean-normalising `not' operator and inverted inputs to
xor still yield xor.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris@mimsy.umd.edu	Path: uunet!mimsy!chris
gwyn@smoke.BRL.MIL (Doug Gwyn) (09/12/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?
>What happened to the boolean XOR operator ? If && makes sense for the
>boolean AND and || makes sense for the boolean OR, why doesn't ^^ make
>sense for the boolean XOR ?

There is no need for ^^.  The only reason for && and || is for the
short-circuit property, which obviously ^^ could not have.

For Boolean values a and b, a!=b is equivalent to a^^b and takes no more
characters to code, besides being more readable for most people.  For
arbitrary values a and b (which I do NOT recommend in Boolean contexts),
!a!=!b is one of many equivalent ways to express a^^b.

>Is there any chance that some future version of ANSI-C will have it ?

I hope not.
pmontgom@sonia.math.ucla.edu (Peter Montgomery) (09/13/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?
>What happened to the boolean XOR operator ? If && makes sense for the
>boolean AND and || makes sense for the boolean OR, why doesn't ^^ make
>sense for the boolean XOR ?

	I ask why C lacks &&= and ||=.  In FORTRAN, I often write code like

	      allok = allok .and. a(i).gt.b(i)

C will let me write

	      allok = allok && a[i] > b[i];

but it seems in the language spirit to avoid repeating "allok";
shouldn't we be allowed to abbreviate this to

	      allok &&= a[i] > b[i];
--------
        Peter Montgomery
        pmontgom@MATH.UCLA.EDU
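For what it's worth, a hedged sketch of how that effect can be had today
with a macro (ANDAND_EQ is an invented name, not anything standard).  Note
that &&'s short-circuiting carries over: the right-hand operand is
evaluated only when the left-hand one is non-zero.

        #include <stdio.h>

        /* Gives the effect of the wished-for &&= while keeping &&'s
           short-circuit: y is evaluated only if x is non-zero.
           (x appears twice textually, so keep side effects out of x.) */
        #define ANDAND_EQ(x, y)  ((x) = (x) && (y))

        int main(void)
        {
            int a[] = { 3, 5, 2 }, b[] = { 1, 4, 6 };
            int n = 3, i, allok = 1;

            for (i = 0; i < n; i++)
                ANDAND_EQ(allok, a[i] > b[i]);   /* like allok &&= a[i] > b[i]; */

            printf("allok = %d\n", allok);       /* 0: a[2] > b[2] fails */
            return 0;
        }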
tbrakitz@phoenix.Princeton.EDU (Byron Rakitzis) (09/13/89)
In article <1687@sunset.MATH.UCLA.EDU> pmontgom@math.ucla.edu (Peter Montgomery) writes:
> I ask why C lacks &&= and ||=.  In FORTRAN, I often write code like
> .......
>
>but it seems in the language spirit to avoid repeating "allok";
>shouldn't we be allowed to abbreviate this to
>
>	allok &&= a[i] > b[i];
>--------
>	Peter Montgomery
>	pmontgom@MATH.UCLA.EDU

Why not

	allok &= (a[i] > b[i]);

In this case, the expression on the right will evaluate to either 0 or 1,
and you can AND this with the previous value of allok.

(line fodder)
-- 
"C Code."  "C Code run."  "Run, Code, run!"
Byron Rakitzis.  (tbrakitz@phoenix.princeton.edu ---- tbrakitz@pucc.bitnet)
gwyn@smoke.BRL.MIL (Doug Gwyn) (09/13/89)
In article <1687@sunset.MATH.UCLA.EDU> pmontgom@math.ucla.edu (Peter Montgomery) writes:
>shouldn't we be allowed to abbreviate this to
>	allok &&= a[i] > b[i];

If allok is being used as a Boolean, as it should be in such a context,
you can use &= instead.  One less character to type, too.

Now I suppose someone will point out that == does the wrong thing for the
high-order 0 bits in a C pseudo-Boolean (represented as int 0 or 1), so
they "need" an === assignment-operator.  This can clearly get out of hand
quickly.  Suffice it to say that the lack of the full set of 10
non-trivial binary Boolean operators and their corresponding assignment
versions hasn't seemed to be a significant practical problem to me, and
I'm more "Boolean aware" than most.
diamond@csl.sony.co.jp (Norman Diamond) (09/14/89)
In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>operator for AND (&&) and OR (||), but not for XOR (^^). Why?

In A && B, if A is false (0) then B is not evaluated.
In A || B, if A is true (non-0) then B is not evaluated.
In A ^^ B, for which value(s) of A do you skip B?
-- 
Norman Diamond, Sony Corporation (diamond@ws.sony.junet)
The above opinions are inherited by your machine's init process (pid 1),
after being disowned and orphaned.  However, if you see this at Waterloo
or Anterior, then their administrators must have approved of these opinions.
bph@buengc.BU.EDU (Blair P. Houghton) (09/15/89)
In article <10839@riks.csl.sony.co.jp> diamond@riks. (Norman Diamond) writes:
>In article <575@calmasd.Prime.COM> wlp@calmasd.Prime.COM (Walter Peterson) writes:
>
>>C has bitwise operators for AND (&), OR (|) and XOR (^) and boolean
>>operator for AND (&&) and OR (||), but not for XOR (^^). Why?
>
>In A && B, if A is false (0) then B is not evaluated.
>In A || B, if A is true (non-0) then B is not evaluated.
>In A ^^ B, for which value(s) of A do you skip B?

You never do.  So what?  So it's a boolean xor, instead of a
short-circuiting operator.  It returns 1 or 0 depending on the result of
the boolean comparison.

Try getting that out of (A ^ B) every time.  Without doing (!!(A ^ B)),
I mean.

I mean, what's the biggie about having &&?  We _could_ just do
(A ? !!B : 0), but would you want to?

				--Blair
				  "I mean, I suh-wear.."
rhg@cpsolv.UUCP (Richard H. Gumpertz) (09/15/89)
The && and || operators differ from the & and | operators in two respects:

1) They do NOT perform bitwise operations; 4 && 2 is 1 while 4 & 2 is 0.

2) They are guaranteed to perform short-circuited evaluations: the
   right-hand operand will NOT be evaluated if the left-hand operand
   determines the value of the expression.

An ^^ operator would maintain the first property of not being bitwise,
which would be useful for non-0/1 booleans; on the other hand, no
short-circuit evaluation is possible with XOR because BOTH operands must
always be evaluated.

&& and || also have straightforward implementations in control-flow
(e.g. inside an IF) contexts; ^^ is not as straightforward in most
architectures.  Most compilers would have to implement X ^^ Y as
(X != 0) ^ (Y != 0) or as X ? (Y == 0) : (Y != 0) or such.  Might the
lack of short-circuiting and the lack of any code advantage over the
equivalents given above have caused K&R to decide the ^^ operator was
not worthwhile?

As for &&=, ||=, and ^^=, might the lack of trivial implementation (few
if any architectures directly implement the non-bitwise booleans; only
flow contexts allow "better" code to be produced for && and ||; &&= and
||= are by definition not flow contexts!) have affected K&R similarly?
They might have reasoned that X = (X != 0) & (Y != 0) or
X = (X != 0) && (Y != 0) will generate code as good as X &&= Y and so
left them out.

I hate resorting to implementation arguments when discussing the design
of a language, but after all C does very much reflect its history as an
implementation language with few "expensive-to-implement" operations
hidden behind seemingly simple operators.

As long as we are discussing missing "assignment" operators, you might
ponder the lack of unary assignment operators.  Why should I have to say
X = -X or X = ~X?  Why not have unary assignment operators (a la ++ and
--) for negation and complement?  I suppose a new syntax would have to
be invented, but it might be useful at times.  Remember that mentioning
X only once guarantees that its side-effects, if any, will happen only
once!

Maybe K or R would care to chime in?

		Richard H. Gumpertz  (913) 642-1777
		...uunet!amgraf!cpsolv!rhg
barmar@think.COM (Barry Margolin) (09/16/89)
Regarding the lack of the ability to abbreviate

	allok = allok && (a[i] > b[i])

with

	allok &&= a[i] > b[i]

In article <10390@phoenix.Princeton.EDU> tbrakitz@phoenix.Princeton.EDU (Byron Rakitzis) writes:
>Why not
>	allok &= (a[i] > b[i]);
>In this case, the expression on the right will evaluate to either 0
>or 1, and you can AND this with the previous value of allok.

That's not a good substitute, for two reasons.

First, && and !! treat all non-zero values as boolean truth, but & does
bit-wise AND.  If allok contained 2 before the above expression, it
would end up with 0 (whether the comparison is true or false!).  I admit
that such code is not a good idea, and your version may actually work in
the poster's case.

Second, and more importantly, your version doesn't retain the
short-circuit operation.  If the expression were something like

	allok = allok && a[i++] > b[j++]

then the side effects on i and j would be different from

	allok &= a[i++] > b[j++]

The first must NOT increment i and j when allok is true, while the
second MUST increment them in any case.

Barry Margolin
Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar
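A small test program makes the difference concrete (the array contents
are invented; with allok initially false, the && form skips the
increments while the &= form performs them):

        #include <stdio.h>

        int main(void)
        {
            int a[] = { 5, 1 }, b[] = { 3, 9 };
            int i, j, allok;

            /* short-circuit form: the comparison (and its increments)
               run only when allok is still true */
            allok = 0; i = j = 0;
            allok = allok && a[i++] > b[j++];
            printf("&& form: allok=%d i=%d j=%d\n", allok, i, j); /* i and j stay 0 */

            /* &= form: the right-hand side is always evaluated,
               so i and j are always incremented */
            allok = 0; i = j = 0;
            allok &= a[i++] > b[j++];
            printf("&= form: allok=%d i=%d j=%d\n", allok, i, j); /* i and j become 1 */

            return 0;
        }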
ok@cs.mu.oz.au (Richard O'Keefe) (09/16/89)
In article <174@cpsolv.UUCP>, rhg@cpsolv.UUCP (Richard H. Gumpertz) writes:
> I hate resorting to implementation arguments when discussing the design of a
> language, but after all C does very much reflect its history as an
> implementation language with few "expensive-to-implement" operations hidden
> behind seemingly simple operators.

This is not entirely true.  For example, C inherited ++ and -- from B,
which ran on a machine which didn't have auto-{in,de}crement.  C and B
both inherited their set of test&branch operations from BCPL, which was
an implementation language for word-addressed machines (not even 360s!).
BCPL had test&branch versions of AND and OR, but no test&branch version
of XOR.  There are quite a lot of cheap hardware instructions which C
doesn't reflect at all (Rubin's complaint): e.g. PDP-11s have rotate
instructions, but C has no "rotate" operator.  Subscripts are a
seemingly simple operator, but can involve shifts and multiplies which
can be quite hairy, and floating-point arithmetic can hide very
expensive operations behind seemingly simple operators.

> As long as we are discussing missing "assignment" operators, you might ponder
> the lack of unary assignment operators.  Why should I have to say X = -X or
> X = ~X?

If you really don't want side-effects in X to be repeated,

	#define NegateVar(X)     ((X) *= -1)
	#define ComplementVar(X) ((X) ^= -1)

and then

	NegateVar(fred);	/* has effect of fred = -fred; */
	ComplementVar(jim);	/* has effect of jim = ~jim; */

This is not hard.
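In the same do-it-yourself spirit, the missing "rotate" mentioned above is
usually faked with shifts.  A sketch, assuming a 32-bit unsigned int
(ROTL32 is an invented name, and n must stay in 1..31 to avoid an
undefined shift count):

        #include <stdio.h>

        /* Rotate x left by n bits, written with shifts and OR; a good
           compiler may recognise the pattern and emit a rotate
           instruction where the hardware has one. */
        #define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

        int main(void)
        {
            unsigned x = 0x80000001u;
            printf("%#x\n", ROTL32(x, 4));   /* prints 0x18 */
            return 0;
        }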
bengsig@oracle.nl (Bjorn Engsig) (09/18/89)
|In article <1687@sunset.MATH.UCLA.EDU> pmontgom@math.ucla.edu (Peter Montgomery) writes:
|>shouldn't we be allowed to abbreviate this to
|>	allok &&= a[i] > b[i];

Article <11046@smoke.BRL.MIL> by gwyn@brl.arpa (Doug Gwyn) says:
|If allok is being used as a Boolean, as it should be in such a context,
|you can use &= instead.  One less character to type, too.

No.  I assume that &&= should only evaluate its right-hand side if the
left-hand side is nonzero.

|[Doug continues with other points about boolean (assignment) operators]

I agree with these; it would be too much for a very limited use.  I
would write the above as

	if (allok)
		allok = a[i] > b[i];
-- 
Bjorn Engsig, bengsig@oracle.nl, bengsig@oracle.com, mcvax!orcenl!bengsig
roelof@idca.tds.PHILIPS.nl (R. Vuurboom) (09/20/89)
In article <2120@munnari.oz.au> ok@cs.mu.oz.au (Richard O'Keefe) writes:
>of XOR.  There are quite a lot of cheap hardware instructions which C
>doesn't reflect at all (Rubin's complaint): e.g. PDP-11s have rotate
>instructions, but C has no "rotate" operator.

I've been pondering this point for a while now.  Every time Rubin brings
up this complaint (give him an A for perseverance :-) various friendly
and not so friendly counter-arguments are proposed.  These
counter-arguments appear to me to fall into three categories:

The "feel free" category (if you want to clutter the language with all
sorts of (marginally) useless constructs, feel free to design a new
language).

The "C is not assembler" category.

The "besides you can" category (...besides, you can easily implement it
as <follows more or less nifty macro>).

To me there's an underlying principle (paradigm, dogma if you will :-)
to all three of these arguments, i.e. language design (evolution) ought
essentially to be a top-down activity where concepts such as consistency
and orthogonality (for example) should play an important role.  I think
most people would agree that the Pascal school strongly embodies this
language paradigm.  Without a doubt this has been the leading paradigm
of the seventies and eighties.  So much so, in fact, that this top-down
approach crossed over from software to hardware with the increasing
implementation of high-level language support in the increasingly CISC
architectures.

First point to note is that this trend is being reversed in RISC
architectures.  Second point to note is that this was not always the
case.  I remember reading an interview (I think) with one of the
designers of Fortran.  His or her statement was something like:

"Language design?  What language design?  We made it up as we went
along.  All we were interested in was proving that compiled code could
be just as fast as hand-assembled.  If some machine instruction was
easily representable as a high-level construct then sometimes we went
ahead and stuck it in.  That's how Fortran's multi-way branch came into
existence, for example."

Now I'm sure that C was not defined quite so willy-nilly.  However, C's
roots go back into the sixties, before the top-down paradigm was
reigning.  If you compare C to Pascal, it's obvious that someone along
the line of C's heritage took a long hard look at the underlying machine
architecture and instruction set.  Again, here it is obvious that a
number of constructs in C found their way into the language simply
because of the existence of the associated machine instruction.

This entire discussion would be neither here nor there were it not for
the glaring fact that both C and Fortran are still very healthy
languages today, and both threaten to outlive Pascal in usefulness.
Which I believe shows that, to a certain extent, bottom-up language
design (or at least allowing for bottom-up design principles) has turned
out to be a very effective design method which is standing the test of
time.

I have little doubt that if this group (with its current paradigms)
existed some 20 years ago and someone proposed a new construct, answers
would have been like this:

A ++ operator?  Look, Ritchie, if you want to go ahead and clutter up
the language, feel free to design your own.  Besides, you can implement
it as

	#define PLUSPLUS(x) x=x+1

which is almost as fast.  A good optimizer...

It's the very fact that 20 years ago bottom-up design paradigms were not
considered invalid (heretical) that helped give C its lean and fast
feeling.  (I say "helped" since top-down principles (block structure,
case in point) also gave C its power and flexibility.)

To summarize: The (bottom-up) language evolution/design principle that
Rubin advocates was not considered heretical 20 years ago; in fact it
was considered a valid paradigm.  There is a good case to be made that
this very principle has made C (and Fortran) stronger languages than
Pascal, for example, where this principle was generally avoided.  RISC
architectures have shown a swing in emphasis from top-down to bottom-up
design techniques, with successful results.  The question is: will the
renewed introduction of this paradigm continue into language design of
the nineties?  And in particular: into the language evolution of C and
its derivatives?
-- 
wiskunde: Dutch for mathematics.  Literally: knowledge of certainty
          wis: certainty   kunde: knowledge
Roelof Vuurboom  SSP/V3  Philips TDS Apeldoorn, The Netherlands  +31 55 432226
domain: roelof@idca.tds.philips.nl   uucp: ...!mcvax!philapd!roelof
matthew@sunpix.UUCP ( Sun Visualization Products) (09/27/89)
In article <29557@news.Think.COM> barmar@think.COM (Barry Margolin) writes:
|
|	allok = allok && a[i++] > b[j++]
|
|then the side effects on i and j would be different from
|
|	allok &= a[i++] > b[j++]
|
|The first must NOT increment i and j when allok is true, while the
|second MUST increment them in any case.
|
|Barry Margolin
|Thinking Machines Corp.

Would you like to reword your last sentence?
-- 
Matthew Lee Stier                            |
Sun Microsystems ---  RTP, NC  27709-3447    |    "Wisconsin   Escapee"
uucp:  sun!mstier or mcnc!rti!sunpix!matthew |
phone: (919) 469-8300    fax: (919) 460-8355 |
barmar@kulla (Barry Margolin) (09/28/89)
In article <883@friar-taac.UUCP> matthew@friar-taac.UUCP (Matthew Stier - Sun Visualization Products) writes:
>In article <29557@news.Think.COM> barmar@think.COM (Barry Margolin) writes:
>|	allok = allok && a[i++] > b[j++]
>|then the side effects on i and j would be different from
>|	allok &= a[i++] > b[j++]
>|The first must NOT increment i and j when allok is true, while the
>|second MUST increment them in any case.
>Would you like to reword your last sentence?

I'll try.  I see I got the first clause backward, as well as using
confusing wording.  In the first case, i and j are incremented IF AND
ONLY IF allok was true.  In the second case, i and j must ALWAYS be
incremented, regardless of the old value of allok.

Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar
davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (09/28/89)
In article <29557@news.Think.COM> barmar@think.COM (Barry Margolin) writes:
|
|	allok = allok && a[i++] > b[j++]
|
|then the side effects on i and j would be different from
|
|	allok &= a[i++] > b[j++]
|
|The first must NOT increment i and j when allok is true, while the
|second MUST increment them in any case.

What are you saying here?  The & and && operators work differently.

I also think the *effect* of the first example would be easier to
understand if you wrote:

	if (allok) allok = a[i++] > b[j++]

but that's a matter of preference.  Yes, I know, some people like
(allok != 0) better.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
"The world is filled with fools.  They blindly follow their so-called
'reason' in the face of the church and common sense.  Any fool can see
that the world is flat!" - anon
barmar@kulla (Barry Margolin) (09/28/89)
In article <596@crdos1.crd.ge.COM> davidsen@crdos1.UUCP (bill davidsen) writes:
>In article <29557@news.Think.COM> barmar@think.COM (Barry Margolin) writes:
>|	allok = allok && a[i++] > b[j++]
>|then the side effects on i and j would be different from
>|	allok &= a[i++] > b[j++]
> What are you saying here?  The & and && operators work differently.

It started with someone asking why there are no &&= or ||= operators in
C.  Someone suggested that &= could be used whenever you would have
wanted to use &&=.  My posting shows that they are different, precisely
because & and && are different.

Barry Margolin, Thinking Machines Corp.

barmar@think.com
{uunet,harvard}!think!barmar