[comp.lang.c] Optimization

rsalz@bbn.com (Rich Salz) (09/27/87)

Warning!  Generalizations abound, nit-pickers beware (and go away :-)

Someone writes:
> ...
>	float junk[3], *junk_ptr;
> ...
>	junk_ptr = (float *) junk;
>	xfpt(*junk_ptr++, *junk_ptr++, *junk_ptr);
Everyone's jumped in about how this is broken code -- the order of
evaluation isn't defined.  My point is different:  isn't this code more
inefficient, anyhow?  On all machines where X is a compile-time constant,
&junk[X] is a compile-time constant.  That's often faster for limited
cases than going through a pointer indirection:  C compilers have to be
very conservative about the way they handle pointers and the things
they're pointed to.

In general, optimization tricks should only be done after the entire
program is working and stable.  Not to cast aspersions, but if this is the
first time the original poster has come across the evaluation-order
gotcha, then the code is not at that stage yet.  Remember Knuth:
"premature optimization is the root of all evil."

Interested parties should get a copy of "Writing Efficient Programs," by
Jon Bentley.  Prentice-Hall, ISBN 0-13-970244-X.
-- 
For comp.sources.unix stuff, mail to sources@uunet.uu.net.
And if C is the assembly language of the 80's, why is ACP written in MIX?

barmar@think.COM (Barry Margolin) (04/21/88)

In article <488@wsccs.UUCP> terry@wsccs.UUCP (Every system needs one) writes:
>Basically, if it works without -O, it should work with -O, regardless of what
>the compiler writer's optimization does to achieve its goal.

This is unreasonable, and probably rules out almost all optimizations
but the simplest peephole optimizations, and even some of them.
Consider the statements

	a = b;
	a = b;

It doesn't take much of a flow analysis to determine that the second
statement is redundant.  Most peephole optimizers will hack this.

But my main objection to your blanket statement above is that it
requires code that ACCIDENTALLY works when unoptimized to continue
working when optimized.  For example, let's postulate an environment
with two mechanisms for creating a new activation record: a slow one
that zeros the record, and a fast one that doesn't.  Unoptimized code
might use the slow one, while the optimizer might replace this with
the fast one.  They are both compatible with C, because it specifies
that the initial contents of auto variables are undefined unless there
is an explicit initializer.  But this means that a program that is
missing an "= 0" initializer on a variable would work unoptimized but
fail when optimized.
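
A minimal C sketch of that hazard (the function names are mine): the
first version only "works" if the activation record happens to be
zeroed, exactly the accident described above; the second says what it
means and works in any conforming implementation, optimized or not.

```c
/* count_up_broken() relies on "count" starting at zero, which the
 * language does not promise for an auto variable; it may appear to
 * work unoptimized and fail when a faster, non-zeroing activation
 * record is used. */
static int count_up_broken(int n)
{
    int count;                  /* BUG: missing "= 0" initializer */
    int i;

    for (i = 0; i < n; i++)
        count++;
    return count;               /* garbage unless the stack was zeroed */
}

/* The correct version initializes explicitly. */
static int count_up_fixed(int n)
{
    int count = 0;
    int i;

    for (i = 0; i < n; i++)
        count++;
    return count;
}
```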

Optimizable programs have many qualities in common with portable
programs.  Both require that the programmer be more careful and pay
attention to the standard.  Also, just because a program works doesn't
mean that it is correct.  In fact, assuming correct compilers, it is
the converse that is true: a conforming program should work in any
given implementation, and should work when optimized.  Unfortunately,
it is not possible to statically determine whether a program is
conforming; you can show that it is nonconforming by finding a
conforming implementation on which it fails.  Unfortunately, the same
problem exists when trying to determine whether an implementation is
conforming; all you can do is disprove it.


Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

dsill@NSWC-OAS.arpa (Dave Sill) (04/22/88)

Terry Lambert writes:
>Basically, if it works without -O, it should work with -O, regardless of what
>the compiler writer's optimization does to achieve its goal.  If this makes
>writing compilers harder, so what?

This bears repeating.  There should be no circumstances under which
the semantics of the language are changed by a flag to the compiler.

=========
The opinions expressed above are mine.

"com pile ... vt ... com piled; com piling [ME compilen, fr. MF
compiler, fr. L compilare to plunder] ..."
					-- Webster's Ninth New
					   Collegiate Dictionary

So *that's* where these gung-ho optimizer-writers get their
justification...

mcdonald@uxe.cso.uiuc.edu (04/22/88)

>This is unreasonable, and probably rules out almost all optimizations
>but the simplest peephole optimizations, and even some of them.
>Consider the statements

>	a = b;
>	a = b;

>It doesn't take much of a flow analysis to determine that the second
>statement is redundant.  Most peephole optimizers will hack this.

First remembering that the substantial majority of code that will use
"volatile" is not ever going to be intended to be portable, I'll vote
FOR volatile on the grounds that on the (violently non-UNIX; it's a
genuine PDP-11/03 running RT-11) computer I'm sitting at now, I have lots of code
that does exactly that. "a" is a very special memory address, connected
to an external connector. Normally it is zero. If I copy a non-zero
value it outputs that value for exactly 1 microsecond, then returns to
zero. Lots of people do this in the real-time control world. I think
that having "volatile" is a useful thing, and not THAT hard for
compiler vendors to implement. Just think, a lot of them are going to
have to deal with Fortran 8X :-)  !
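
In C terms (with the register address passed in as a pointer parameter
rather than Doug's real hardware address), 'volatile' is what guarantees
the store is actually performed:

```c
/* "reg" stands for a memory-mapped output connector like Doug's;
 * because it is volatile, the compiler must perform this store even
 * though the value is never read back by the program. */
static void pulse(volatile unsigned short *reg, unsigned short value)
{
    *reg = value;   /* the line carries "value" for 1 microsecond,
                     * then the hardware returns it to zero itself */
}
```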

Doug McDonald (mcdonald@uiucuxe)

ljz@fxgrp.uucp (Lloyd Zusman) (04/23/88)

In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
>Terry Lambert writes:
>>Basically, if it works without -O, it should work with -O, regardless of what
>>the compiler writer's optimization does to achieve its goal.  If this makes
>>writing compilers harder, so what?
>
>This bears repeating.  There should be no circumstances under which
>the semantics of the language are changed by a flag to the compiler.
>
> ...

Wholehearted agreement!

This seems to imply ...

1)  No optimizer for C should be considered as conforming to ANSI standards
    unless, in so many words, "... if it works without -O, it should work
    with -O ..."

-or-

2)  Something like 'volatile' must exist.

Now, if the first option comes to pass, then 'volatile' is a moot point.
Methinks this is rather unlikely.

In the case of the second option, we still don't necessarily need 'volatile'
as currently specified.  Since the purpose of 'volatile' is to tell the
compiler something it needs to know about how to handle its optimization,
how about using

    #pragma volatile ...

instead?  This would satisfy those people who don't want the name space
cluttered with new keywords and also those people who need to protect
certain variables from over-zealous optimizers.

The ANSI committee could specify that the 'volatile' #pragma option either
does nothing (without generating an error) or else works as 'volatile'
is now specified.  Wouldn't this be more satisfactory than the current
proposal?

By the way, 'noalias' could be handled similarly.

--
    Lloyd Zusman
    Master Byte Software
    Los Gatos, California		Internet:   ljz@fxgrp.fx.com
    "We take things well in hand."	UUCP:	    ...!ames!fxgrp!ljz

bts@sas.UUCP (Brian T. Schellenberger) (04/24/88)

In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
|Terry Lambert writes:
|>Basically, if it works without -O, it should work with -O, regardless of what
|>the compiler writer's optimization does to achieve its goal.  If this makes
|>writing compilers harder, so what?
|
|This bears repeating.  There should be no circumstances under which
|the semantics of the language are changed by a flag to the compiler.

These two things are by no means the same.  Yes, turning on optimization should
not change any of the *defined* semantics of the language.  On the other hand,
it must change *some* behavior.  I mean, if you happen to know that your 
compiler stores constants > 255 in a certain place, and you change that place
to change the value of "1024", then I think it reasonable for your code to
break when optimized (by, say, shifting) even if it works when not optimized.

Similarly, timing loops will surely break with optimization; who could expect
otherwise?  If the optimizer must keep the time, space, and memory layout the
same, it's not able to do much.

The question then becomes one of degree:  I consider it unreasonable for the
compiler to optimize my entire program to a no-op because it concludes the whole
thing is pointless (the Apollo 9.7 C compiler does this for some code!),
whereas I consider it quite reasonable for it to speed up timing loops.

ANSI is trying to define exactly where to draw those lines.  Those things 
guaranteed in ANSI must work regardless of optimization or target machine;
those things not defined may break in some places.  If you don't like where
the line is drawn, complain, but arguing that the line should not exist is 
silly.

And anyway, you can always compile with minimal optimization if you find doing
really unreasonable things makes you happy.
-- 
                                                         --Brian.
(Brian T. Schellenberger)				 ...!mcnc!rti!sas!bts

. . . now at 2400 baud, so maybe I'll stop bothering to flame long includes.

wes@obie.UUCP (Barnacle Wes) (04/25/88)

In article <20065@think.UUCP>, barmar@think.COM (Barry Margolin) writes:
> This is unreasonable, and probably rules out almost all optimizations
> but the simplest peephole optimizations, and even some of them.
> Consider the statements
> 
> 	a = b;
> 	a = b;
> 
> It doesn't take much of a flow analysis to determine that the second
> statement is redundant.  Most peephole optimizers will hack this.
> 
> But my main objection to your blanket statement above is that it
> requires code that ACCIDENTALLY works when unoptimized to continue
> working when optimized.

Aha!  But this code is not always redundant!  I have worked on a
system (admittedly a WEIRD system) that had a particular I/O device
with a 16-bit status register hung off an 8-bit bus.  You accessed the
register by two successive reads of the register address, getting the
high byte first and the low byte second.  Guess what?  We never used
any of the bits in the high byte!  So our code basically looked
something like:

	mdmstatus: byte @ address;
	status: word;

	status := mdmstatus;	! discard first byte
	status := mdmstatus;	! get second byte
	if (status AND H0020) .....
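
The same trick in C (with a made-up mask value and the register passed
in as a pointer) relies on 'volatile' to keep both reads:

```c
/* Both reads of *reg must happen: the first pops the high byte off the
 * 8-bit bus, the second yields the low byte we actually test.  Without
 * volatile an optimizer would be entitled to drop one of them. */
static unsigned int modem_ready(volatile unsigned char *reg)
{
    unsigned int status;

    status = *reg;              /* discard first byte (high) */
    status = *reg;              /* keep second byte (low) */
    return (status & 0x20u) != 0;
}
```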

-- 
    /\              -  "Against Stupidity,  -    {backbones}!
   /\/\  .    /\    -  The Gods Themselves  -  utah-cs!uplherc!
  /    \/ \/\/  \   -   Contend in Vain."   -   sp7040!obie!
 / U i n T e c h \  -       Schiller        -        wes

sarima@gryphon.CTS.COM (Stan Friesen) (04/25/88)

In article <20065@think.UUCP> barmar@fafnir.think.com.UUCP (Barry Margolin) writes:
>
>But my main objection to your blanket statement above is that it
>requires code that ACCIDENTALLY works when unoptimized to continue
>working when optimized.
>
	I want to add my affirmation of this! Code that ACCIDENTALLY works
is more common than many people realize. One of the worst cases was in a
program I was maintaining (written by someone else) that had something
like the following:

func(args)
type args;
{
	type .... /* whole bunch of auto variable declarations */
	.
	.
	int cur_pos;	/* Almost the last auto declared	*/
	.
	.
	.

	Code that uses cur_pos as if it were static;
}

	And it WORKED, at least until I made some minor changes elsewhere.
You see, the auto variables were "pushed" on the stack in the order of
declaration, and no other function had nearly as many auto variables as
func(). Thus the value in cur_pos was left undisturbed on the stack! Talk
about working *accidentally*!!!! (The programmer in question had already
been terminated for incompetence).
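
Stripped to its essentials (names mine), the bug looks like this, and
the cure is one keyword:

```c
/* Works only by the stack-layout accident Stan describes: "pos" is
 * auto, so nothing guarantees it survives between calls. */
static int next_accidental(void)
{
    int pos;        /* BUG: should be static (or initialized) */
    return ++pos;   /* undefined: pos's initial value is garbage */
}

/* What the original programmer presumably meant: */
static int next_correct(void)
{
    static int pos = 0;   /* static storage really does persist */
    return ++pos;
}
```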
-- 
Sarima Carandolandion			sarima@gryphon.CTS.COM
aka Stanley Friesen			rutgers!marque!gryphon!sarima
					Sherman Oaks, CA

ok@quintus.UUCP (Richard A. O'Keefe) (04/25/88)

Having expressed vehement opposition to 'noalias', I'd like to balance
things out by supporting 'volatile'.  The effect of 'volatile' has to
be available somehow if C is to be useful for writing device drivers &
so on, and it shouldn't be a #pragma because #pragma isn't supposed to
change what anything _means_.  The very worst thing about 'volatile'
is that on the evidence of this newsgroup, a lot of people have trouble
spelling it (:-).

limes@sun.uucp (Greg Limes) (04/26/88)

In article <280@fxgrp.UUCP> ljz@fx.com (Lloyd Zusman) writes:
>The ANSI committee could specify that the 'volatile' #pragma option either
>does nothing (without generating an error) or else work as 'volatile'
>is now specified.  Wouldn't this be more satisfactory than the current
>proposal?

<risking flames, floods, pestilence, creme pies, and easy chairs>

Who do I send my vote for "#pragma volatile" to? Methinks this is an
implementation issue, for which the manual would read:

  -O#	Optimises uses of external variables. Any external variables
	that may be modified asynchronously, i.e. external device
	registers or variables modified in signal handlers, should be
	marked as "volatile" by use of the statement
		#pragma volatile(foo, bar)

since this is not part of the language. Currently, SUN has a warning in
the manual that the higher levels of optimization should not be used
when compiling programs that modify external variables from within
signal handlers.

Can you say "implementation issue"? I knew you could.

DISCLAIMER: I am not in the compiler group, and do not speak for them.
-- 
   Greg Limes [limes@sun.com]				frames to /dev/fb

tneff@atpal.UUCP (Tom Neff) (04/26/88)

In article <895@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
> ... [volatile] shouldn't be a #pragma because #pragma isn't supposed to
>change what anything _means_.  

I don't see volatile as changing what anything means, except to the optimizer.
Forbidding the code generator to cache a variable's contents should have
no effect on a program's behavior with 'classic' variables like counters in
ordinary RAM.  Indeed, in a compiler with full control over optimization,
setting a /NOOPTIMIZE switch ought to make volatile effectively a no-op,
since the elementary optimization of caching values within an expression,
for instance, would be turned off and the code would in fact go "to the
metal" every time a variable was referenced -- which is all volatile is
trying to do.  My claim is that volatile essentially says nothing
about a program except what's legal for the optimizer to play with -- thus 
#pragma is appropriate.  However I will not kick if it stays a keyword, as
I would have with the late unlamented noalias, simply because it would be
unnecessary to sprinkle new alien syntax into the familiar library header
files -- only those who need it would ever have to see it.


-- 
Tom Neff			UUCP: ...uunet!pwcmrd!skipnyc!atpal!tneff
	"None of your toys	CIS: 76556,2536		MCI: TNEFF
	 will function..."	GEnie: TOMNEFF		BIX: are you kidding?

barmar@think.COM (Barry Margolin) (04/27/88)

In article <174@obie.UUCP> wes@obie.UUCP (Barnacle Wes) writes:
>In article <20065@think.UUCP>, barmar@think.COM (Barry Margolin) writes:
>> 	a = b;
>> 	a = b;
>> It doesn't take much of a flow analysis to determine that the second
>> statement is redundant.  Most peephole optimizers will hack this.
>Aha!  But this code is not always redundant!

I know, but it usually is.  And that's the whole point of "volatile":
it tells the compiler not to assume that such code is redundant, among
other things.  You certainly wouldn't want a compiler to generate
double accesses for all such cases, just the ones where a or b is
volatile.

Proponents of restricting the compiler might claim that programmers
rarely write such things unless they actually mean it.  However,
programmers often write macros, and it's quite possible that such a
redundancy would result from macros that happen to be adjacent.
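
A small made-up illustration of how innocent macro use yields exactly
the "a = b; a = b;" pattern (SYNC is a hypothetical convenience macro):

```c
/* Two adjacent uses of SYNC expand to the textbook redundant pair
 * a = b; a = b; which an optimizer may safely collapse as long as
 * neither variable is volatile. */
#define SYNC(dst, src)  ((dst) = (src))

static int demo(int b)
{
    int a;

    SYNC(a, b);   /* expands to a = b */
    SYNC(a, b);   /* adjacent expansion: a second, redundant a = b */
    return a;
}
```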


Barry Margolin
Thinking Machines Corp.

barmar@think.com
uunet!think!barmar

henry@utzoo.uucp (Henry Spencer) (04/27/88)

> Who do I send my vote for "#pragma volatile" to?

Your implementor.  Notwithstanding some of the criticism of "volatile", it
is moderately well entrenched in X3J11 now... especially since the public-
comment period that has just closed was the last serious chance to get rid
of it.  If you really dislike it (I personally don't, I just think it's
a bit overdone), you've missed your chance.
-- 
NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

chasm@killer.UUCP (Charles Marslett) (04/27/88)

In article <13074@brl-adm.ARPA>, dsill@NSWC-OAS.arpa (Dave Sill) writes:
> Terry Lambert writes:
> >Basically, if it works without -O, it should work with -O, regardless of what
> >the compiler writer's optimization does to achieve its goal.  If this makes
> >writing compilers harder, so what?
> 
> This bears repeating.  There should be no circumstances under which
> the semantics of the language are changed by a flag to the compiler.

Actually, I have written code that broke with optimization turned on
on every machine I've found with a Unix-derived C compiler (and at least
two that are not Unix-derived).  Memory mapped I/O drivers almost always
blow up when you have sequences like

      port->data = 0;
      port->data = 0;
      port->data = 0;
      port->data = 2;

So are we to say that no one has yet written a C compiler?  At least for
Unices?
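
The usual cure for sequences like the one above (sketched here with a
made-up device struct) is to declare the register volatile, which
obliges any conforming compiler to emit all four stores, in order:

```c
/* Hypothetical device layout; the volatile qualifier on "data" is what
 * keeps the optimizer from collapsing the four stores into one. */
struct device {
    volatile unsigned char data;
};

static void reset_port(struct device *port)
{
    port->data = 0;
    port->data = 0;
    port->data = 0;
    port->data = 2;
}
```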

I may be picky, but if flags to the compiler should not change the semantics
of the language (in the broadest sense of the definition of semantics) -- what
good are they?  Why type the things in if they don't affect the generated
program (unless they trigger off a pretty listing or play music while we
wait for errors, I guess ({:)  ).

Charles Marslett
chasm@killer.UUCP

dhesi@bsu-cs.UUCP (Rahul Dhesi) (04/28/88)

In article <3938@killer.UUCP> chasm@killer.UUCP (Charles Marslett) writes:
>I may be picky, but if flags to the compiler should not change the semantics
>of the language (in the broadest sense of the definition of semantics) -- what
>good are they. 

The sentiment behind this rhetorical question is precisely why it is
dangerous that ANSI-conformant compilers will silently ignore
unrecognized #pragmas.
-- 
Rahul Dhesi         UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi

terry@wsccs.UUCP (Every system needs one) (04/28/88)

In article <20065@think.UUCP>, barmar@think.COM (Barry Margolin) writes:
> In article <488@wsccs.UUCP> I write:
> >Basically, if it works without -O, it should work with -O, regardless of what
> >the compiler writer's optimization does to achieve its goal.
> 
> This is unreasonable, and probably rules out almost all optimizations
> but the simplest peephole optimizations, and even some of them.

I don't believe this.  Optimization based on correct language structure as
currently practiced should be able to reduce bogus constructs just as well
as optimization based on correct language structure as proposed.  It's just
harder to implement.  I say "so what?" to that.

> But my main objection to your blanket statement above is that it
> requires code that ACCIDENTALLY works when unoptimized to continue
> working when optimized.
[Example of an incorrect algorithm deleted]

Your example was apparently correct, but a wee bit long.  Let me back up
a little and get to the main point I was trying to convey: good code that
works will be broken by the new standard.  This code is good from both the
standpoint of K&R and the standpoint of 'standard common practice'.  I would
not expect

	#define EOF (-1)
	unsigned char getachar();
	main()
	{
		while( getachar() != EOF);
		...
		...
		...
	}

to not be optimized to the equivalent of

	unsigned char getachar();
	main()
	{
		for( ;;)
			getachar();
	}

In fact, a good optimizer would do just that, as an unsigned char can never
be negative, by definition.


> Optimizable programs have many qualities in common with portable
> programs.  Both require that the programmer be more careful

This I most certainly agree with.  In general, I write in a subset of the
"allowable" C syntax to avoid common stupidities in C implementations.

> and pay attention to the standard.

Again I agree; but K&R, not ANSI; most K&R compliant compilers I know are
either all derived from the same source (You know who that is ...the guys
with the "deathstar" logo), or written to be compatible with those from
that source or their derivatives.  Why should I have to work harder on all
my programs, making them cryptic in the process, to allow some C "stud" to 
work less on his (the compiler)?

> Also, just because a program works doesn't mean that it is correct.

No, but it's one hell of a plus :-)

The main thrust of my argument was to point out that all of the "nifty"
speed-ups you can get via optimization should be applicable to ANY assembly
code.  If you want to optimize higher, use quads and optimize constants in
expressions via binary tree insertion at associative and commutative property
boundaries in the expressions.  We all know (except, of course, that joker
Trebmal Yrret) that it is easier to write a top-down parser than a bottom up
parser without program generators.  Human generated code is much better than
machine generated code (yacc, lex).  I know it's a bitch to optimize and it
would be a lot easier with ANSI C.  Tough.  Don't break everybody's code to
make your job easier, and don't stick me with it.  I have my own work to do.


PS: The 'you' is not meant as an indictment of Barry.  It refers to optimizer
writers.


| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'There are monkey boys in the facility.  Do not be alarmed; you are secure' |

terry@wsccs.UUCP (Every system needs one) (04/28/88)

In article <474@sas.UUCP>, bts@sas.UUCP (Brian T. Schellenberger) writes:
| In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
| |I (Terry) write:
| |>Basically, if it works without -O, it should work with -O, regardless of what
| |>the compiler writer's optimization does to achieve its goal.  If this makes
| |>writing compilers harder, so what?
| |
| |This bears repeating.  There should be no circumstances under which
| |the semantics of the language are changed by a flag to the compiler.
| 
| And anyway, you can always compile with minimal optimization if you find doing
| really unreasonable things makes you happy.

I find that compiling my K&R code, which runs on many machines today, on
different machines in 10 years, at an inconvenience to optimizer writers,
makes me happy.  It makes the optimizer writers unhappy.  I am more
deserving of happy ;-).

Is that unreasonable?  Being a compiler user, rather than a compiler writer
(at least not a 'C' compiler writer), there are more of me, and more is
better.  I know this is true, otherwise what would the compilers be written
in, given that there are more compilers being written than there are, in this
case.


| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'Admit it!  You're just harrasing me because of the quote in my signature!' |

limes@sun.uucp (Greg Limes) (04/30/88)

In article <500@wsccs.UUCP> terry@wsccs.UUCP (Every system needs one) writes:
>I would not expect
>
>	#define EOF (-1)
>	unsigned char getachar();
>	main()
>	{
>		while( getachar() != EOF);
>		...
>		...
>		...
>	}
>
>to not be optimized to the equivalent of
>
>	unsigned char getachar();
>	main()
>	{
>		for( ;;)
>			getachar();
>	}
>
>In fact, a good optimizer would do just that, as an unsigned char can never
>be negative, by definition.

In fact, I would expect the first to be optimised exactly into
the second, for the precise reason you mention. When designing
an input function, you MUST take boundary conditions into account;
in this case, since the input stream consists of eight bit
characters, the value EOF must be (by definition) out-of-band data,
and therefore will not be representable in an eight bit unsigned
value. Change your declaration for getachar(); it will not work even
for many compilers that do absolutely no optimisation at all. It
returns something larger than an unsigned char, probably an "int".
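
A sketch of the shape Greg describes, using a hypothetical buffer-backed
reader (MY_EOF and the toy input are mine): returning int keeps the
end-of-file value out-of-band relative to the 0..255 character values.

```c
#define MY_EOF (-1)   /* hypothetical stand-in for stdio's EOF */

static const char *input = "ab";   /* toy input stream */

/* Returns each character as an unsigned char value widened to int,
 * and MY_EOF once the input is exhausted; because the function's type
 * is int, -1 can never collide with a real character. */
static int getachar(void)
{
    return *input ? (unsigned char)*input++ : MY_EOF;
}
```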
-- 
   Greg Limes [limes@sun.com]				frames to /dev/fb

terry@wsccs.UUCP (Every system needs one) (04/30/88)

In article <3938@killer.UUCP>, chasm@killer.UUCP (Charles Marslett) writes:
> In article <13074@brl-adm.ARPA>, dsill@NSWC-OAS.arpa (Dave Sill) writes:
> > Terry Lambert writes:
> > >Basically, if it works without -O, it should work with -O, regardless of what
> > >the compiler writer's optimization does to achieve its goal.  If this makes
> > >writing compilers harder, so what?
> > 
> > This bears repeating.  There should be no circumstances under which
> > the semantics of the language are changed by a flag to the compiler.
> 
> Actually, I have written code the broke with optimization turned on
> on every machine I've found with a Unix-derived C compiler (and at least
> two that are not Unix-derived).  Memory mapped I/O drivers almost always
> blow up when you have sequences like
> 
>       port->data = 0;
>       port->data = 0;
>       port->data = 0;
>       port->data = 2;

They also blow up 1) when they are written wrong, 2) the memory mapping is
changed without corresponding changes to the code, or 3) they are trying
to talk to a VME bus when running on your QBUS machine.

> So are we to say that no one has yet written a C compiler?

No.  We say your code is broken and/or non-portable.  Strictly speaking, if
it is non-portable, it is broken.  It's one of the reasons I've come around
to volatile for some few cases.  I'd support it a lot better if it were
possible to declare functions this way to avoid problems with shareable
library routine updates.

> At least for Unices?

This is laughable, considering where C came from.

> I may be picky, but if flags to the compiler should not change the semantics
> of the language (in the broadest sense of the definition of semantics) -- what
> good are they.

I *never* wait for errors!  If they want me, they know where to find me ;-)!

The reason is simple.  Say I want a source-level debugger and have the stupid
insight to turn off the optimizer so I have assembly code which corresponds
to source on a line-by-line basis.  This is one hell of an incentive to have
code behave the same way with or without -O!  How do I debug something if
my debugger is only useful when there isn't a problem?  GAAAK!  You mean I
was right when I said before that looking at assembly was still useful?!?
(Henry, eat your hat!;-)

> Why type the things in if they don't affect the generated
> program (unless they triger off a pretty listing or play music while we
> wait for errors, I guess ({:)  ).

I didn't say they shouldn't affect the generated program, I said that they
shouldn't affect the OPERATION of the generated program.


| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'Admit it!  You're just harrasing me because of the quote in my signature!' |

root@mfci.UUCP (SuperUser) (05/01/88)

In article <500@wsccs.UUCP> terry@wsccs.UUCP (Every system needs one) writes:
}Your example was apparently correct, but a wee bit long.  Let me back up
}a little and get to the main point I was trying to convey: good code that
}works will be broken by the new standard.  This code is good from both the
}standpoint of K&R and the standpoint of 'standard common practice'.  I would
}not expect
}
}	#define EOF (-1)
}	unsigned char getachar();
}	main()
}	{
}		while( getachar() != EOF);
}		...
}		...
}		...
}	}
}
}to not be optimized to the equivalent of
}
}	unsigned char getachar();
}	main()
}	{
}		for( ;;)
}			getachar();
}	}
}
}In fact, a good optimizer would do just that, as an unsigned char can never
}be negative, by definition.

No, these two programs are not equivalent.  When comparing an unsigned to
a signed integer, the signed integer is first cast to unsigned (which results
in no change in the bit pattern), then the comparison is performed.  In fact,
since octal and hex constants are signed in C, on a machine with 4 byte two's
complement integers 0xffffffff is equivalent to -1, and people compare these
signed constants to unsigned values all the time.  Most people probably think
they're unsigned to begin with.  People are often surprised by the fact that
an expression like (u > -1) is always false when u is unsigned, since the -1
is first cast to unsigned, whereupon it becomes the largest possible
unsigned.
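
That last surprise is easy to demonstrate; it is u's type, not its
value, that decides the outcome:

```c
#include <limits.h>

/* When u is unsigned int, the -1 is converted to unsigned before the
 * comparison, becoming UINT_MAX; no unsigned value exceeds UINT_MAX,
 * so the expression is false for every u. */
static int greater_than_minus_one(unsigned int u)
{
    return u > -1;
}
```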

mem@zinn.MV.COM (Mark E. Mallett) (05/03/88)

In article <3938@killer.UUCP>, chasm@killer.UUCP (Charles Marslett) writes:
> In article <13074@brl-adm.ARPA>, dsill@NSWC-OAS.arpa (Dave Sill) writes:
> > Terry Lambert writes:
> > 
> > This bears repeating.  There should be no circumstances under which
> > the semantics of the language are changed by a flag to the compiler.

Avoiding the word "semantics", -O can change the way the program
operates, which is what is important in this discussion.

And if you don't think that the semantics of the language can be changed
by a flag to the compiler, well... what about "-D" ??

-mm-
-- 
Mark E. Mallett  PO Box 4188/ Manchester NH/ 03103 
Bus. Phone: 603 645 5069    Home: 603 424 8129
uucp: mem@zinn.MV.COM  (...decvax!elrond!zinn!mem   or   ...sii!zinn!mem)
BIX: mmallett

gwyn@brl-smoke.ARPA (Doug Gwyn ) (05/03/88)

In article <487@sas.UUCP> bts@sas.UUCP (Brian T. Schellenberger) writes:
>In any case, any quality implementation allows you turn off all optimization.

You've taken a somewhat simplistic view of optimization.  There is no well-
defined meaning for "turning off all optimization", since the C language
does not map one-to-one unambiguously onto machine language.  All C
compilers with which I am familiar, and very likely all serious compilers,
automatically perform certain code generation "optimizations" (these should
be called "improvements") that save both time and space over naive code
generation.  One virtually never has the option of disabling these.

ray@micomvax.UUCP (Ray Dunn) (05/04/88)

In article <2758@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>In article <3938@killer.UUCP> chasm@killer.UUCP (Charles Marslett) writes:
>>I may be picky, but if flags to the compiler should not change the semantics
>>of the language (in the broadest sense of the definition of semantics) -- what
>>good are they. 
>
>The sentiment behind this rhetorical question is precisely why it is
>dangerous that ANSI-conformant compilers will silently ignore
>unrecognized #pragmas.

What?  Huh?  I had almost fallen asleep hitting ^N, then this appeared!

Dangerous indeed.  Could this in fact not become the portability nightmare
of the future, and is this perhaps something that we should try to nip in
the bud as quickly as possible?

Forgive me if this has been mentioned previously; #pragma certainly hadn't
registered on me before.

It seems to me that the compilers (not a job for Lint this time I think)
should, by ANSI definition, supply a warning when ignoring a pragma.  That
doesn't necessarily provide much help to the programmer in unravelling the
effects however.

These animals are dangeroos.  Please do not feed.
-- 
Ray Dunn.                      |   UUCP: ..!{philabs, mnetor}!micomvax!ray
Philips Electronics Ltd.       |   TEL : (514) 744-8200   Ext: 2347
600 Dr Frederik Philips Blvd   |   FAX : (514) 744-6455
St Laurent. Quebec.  H4M 2S9   |   TLX : 05-824090

nevin1@ihlpf.ATT.COM (00704a-Liber) (05/04/88)

In article <511@wsccs.UUCP> terry@wsccs.UUCP (Every system needs one) writes:

>I didn't say they shouldn't affect the generated program, I said that they
>shouldn't affect the OPERATION of the generated program.

What's the difference between your two statements?  Optimizers, almost by
definition, do affect the operation of the generated program.

Take the following example:

	int bar;
	int foo;
	int temp;
	...
	bar = temp;
	foo = bar;
	...

Unoptimized, what should this produce?

Answer:  something like:

	Take the value stored in temp
	Convert it to int
	Store it in bar
	Return the value stored in bar

	Take the value stored in bar
	Convert it to int
	Store it in foo
	Return the value stored in foo

If *any* of these steps are skipped then I would consider that code to be
optimized.

Here is what optimized code might be:

	Take the value stored in temp
	Store it in foo
	Store it in bar

Should this be an illegal optimization?  No!  

Optimizers, in a nutshell, figure out the *results* of what you coded and
then generate the best possible code to obtain those results.  Optimizers
need not generate code that performs all of the operations in the order that
you requested them, as long as the results, for a correct program, are
unchanged.

Volatile is needed to stop those optimizations since, in some circumstances,
it is required that none of the intermediate operations be skipped, because
outside forces may change some of the variables between operations.


The problem I have with volatile, as it is defined now, is that it doesn't
really say which optimizations are allowed and which aren't (and yes, I
have read footnote 52 in the Jan 88 dpANS; this statement is not good
enough).  In my unoptimized example, assuming all the variables were
declared volatile, which statements can be legally taken out?  Am I allowed
to get rid of the 'return the value stored in xxx' since this value is
never used?  I don't know.  Also, the augmented assignment operators are a
problem when used with volatile variables, since in order to use them properly
I need to know how many real machine-level references are made to the volatile
variable, and whether the whole variable is accessed at once (in the case of a
volatile struct, for instance).  The semantics of volatile seem to be
ill-defined, at best.
-- 
 _ __			NEVIN J. LIBER	..!ihnp4!ihlpf!nevin1	(312) 510-6194
' )  )				"The secret compartment of my ring I fill
 /  / _ , __o  ____		 with an Underdog super-energy pill."
/  (_</_\/ <__/ / <_	These are solely MY opinions, not AT&T's, blah blah blah

chip@ateng.UUCP (Chip Salzenberg) (05/04/88)

In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
>This bears repeating.  There should be no circumstances under which
>the semantics of the language are changed by a flag to the compiler.

This statement is, in my opinion, usually true.  However, the semantics of
C are not as simple as Dave Sill seems to think they are.  For example,
Dave may consider:

	a = b;
	a = b;

to mean "move data from b to a twice", which is what a simple C compiler
might do.  On the other hand, some people -- myself included -- consider
that same C fragment to mean "assign b's value to a, then assign b's value
to a", which is redundant and subject to optimization.

Many people who complain about optimizers ass_u_me that they know what the
output of the compiler should look like.  They are not really writing C,
they are writing for an assembler pre-processor which happens to accept C
syntax.  This method of programming is a Bad Thing, because it produces
non-portable code.

Aggressive optimizers, on the other hand, are a Good Thing.  They allow
programmers to write code which is fast _and_ portable.  Putting up with
"volatile" and other optimizer controls is a small price to pay.
-- 
Chip Salzenberg                "chip@ateng.UU.NET" or "codas!ateng!chip"
A T Engineering                My employer may or may not agree with me.
  "I must create a system or be enslaved by another man's." -- Blake

henry@utzoo.uucp (Henry Spencer) (05/05/88)

> ... This is one hell of an incentive to have
> code behave the same way with or without -O!  How do I debug something if
> my debugger is only useful when there isn't a problem?  GAAAK!  You mean I
> was right when I said before that looking at assembly was still useful?!?
> (Henry, eat your hat!;-)

[begin snotty tone :-)] Well, I suppose people who make errors, and thus
need debuggers, would have to worry about such things, but I fail to see
any reason why *I* should eat my hat...  [end tone] :-) :-)

More seriously, yes, debugging optimized code can be a real pain.  I don't
think even the Mips people, who put in a lot of effort on things like this,
have a debugger that can explain to you what the optimized code is doing.
You just have to tone down the optimization for debugging, and pray that
there aren't any serious differences of opinion between you and the optimizer
when it comes time to "compile for production".  If that doesn't work, you
are indeed reduced to reading the assembler, which could be, um, interesting
on something like one of the Multiflow machines.  (Heaven knows it's no
picnic even on more orthodox hardware.)

Most anybody who's programmed in C for a long time will have run into things
like storage-management bugs that "go away" when you put in debugging...
-- 
NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

friedl@vsi.UUCP (Stephen J. Friedl) (05/05/88)

In article <1025@micomvax.UUCP>, ray@micomvax.UUCP (Ray Dunn) writes:
< In article <2758@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
< >The sentiment behind this [deleted] rhetorical question is precisely
< >why it is dangerous that ANSI-conformant compilers will silently ignore
< >unrecognized #pragmas.
< 
< What?  Huh?  I had almost fallen asleep hitting ^N, then this appeared!
< 
< Dangerous indeed.  Could this in fact not become the portability nightmare
< of the future, and is this perhaps something that we should try to nip in
< the bud as quickly as possible?

Sorry, no more buds will be nipped unless X3J11 decides to go
one more round with us ha ha ha ha :-).

Actually, I would imagine that most compilers will provide some kind of
switch to turn warnings about unrecognized #pragmas on and off.

-- 
Steve Friedl    V-Systems, Inc. (714) 545-6442    3B2-kind-of-guy
friedl@vsi.com    {backbones}!vsi.com!friedl   attmail!vsi!friedl

dkc@hotlr.ATT (Dave Cornutt) (05/06/88)

In article <1988May4.195636.1801@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
 > > ... This is one hell of an incentive to have
 > > code behave the same way with or without -O!  How do I debug something if
 > > my debugger is only useful when there isn't a problem?  GAAAK!  You mean I
 > > was right when I said before that looking at assembly was still useful?!?
[snotty tone omitted :-)]
 > More seriously, yes, debugging optimized code can be a real pain.  I don't
 > think even the Mips people, who put in a lot of effort on things like this,
 > have a debugger that can explain to you what the optimized code is doing.
 > You just have to tone down the optimization for debugging, and pray that
 > there aren't any serious differences of opinion between you and the optimizer
 > when it comes time to "compile for production".  If that doesn't work, you
 > are indeed reduced to reading the assembler, which could be, um, interesting
 > on something like one of the Multiflow machines.  (Heaven knows it's no
 > picnic even on more orthodox hardware.)
 > 
 > Most anybody who's programmed in C for a long time will have run into things
 > like storage-management bugs that "go away" when you put in debugging...

This got me thinking.  (I know, doesn't happen every day, but...)
"Hmmm", I hmmmed.  "What if the compiler had a 'simulated optimization'
option?  That is, don't actually optimize, but see to it that all of the
little side effects that occur when optimizing are simulated.  For instance,
arrange for all uninitialized variables to contain some garbage value instead
of nice convenient zeros; initialize ints to 666, floats to Planck's
constant, character arrays to 'KILROY WAS HERE', and pointers to some
value (not the nil value) that will cause a segmentation violation or
bus error when dereferenced."

Too farfetched?  Or can it be fetched a little nearer?
-- 
Dave Cornutt, AT&T Bell Labs (rm 4A406,x1088), Holmdel, NJ
UUCP:{ihnp4,allegra,cbosgd}!hotly!dkc
"The opinions expressed herein are not necessarily my employer's, not
necessarily mine, and probably not necessary"

dsill@nswc-oas.arpa (Dave Sill) (05/11/88)

>NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
>the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry

I think it's indecorous for a Canadian to criticize the U.S. space
program in an international forum.  And besides, people who live in
glass houses... 

=========
The opinions expressed above are mine.

walter@garth.UUCP (05/12/88)

Dave Sill writes:
>>NASA is to spaceflight as            |  Henry Spencer @ U of Toronto Zoology
>>the Post Office is to mail.          | {ihnp4,decvax,uunet!mnetor}!utzoo!henry
>
>I think it's indecorous for a Canadian to criticize the U.S. space
>program in an international forum.  And besides, people who live in
>glass houses... 
>
You're assuming he has a low opinion of the Post Office :-)
Perhaps, having tried to 'reply' to postings, he is amazed by the
efficiency with which letters are rushed to their _correct_ destination :-)
-- 
------------------------------------------------------------------------------
The U.S. is to space exploration	Any similarities between my opinions
as Portugal was to sea exploration.	and those of the person who signs my
					paychecks is purely coincidental.
E-Mail route: ...!pyramid!garth!walter		(415) 852-2384
USPS: Intergraph APD, 2400 Geng Road, Palo Alto, California 94303
------------------------------------------------------------------------------

dsill@nswc-oas.arpa (Dave Sill) (05/12/88)

Chip Salzenberg <chip@ateng.uucp> writes:
>In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
>>This bears repeating.  There should be no circumstances under which
>>the semantics of the language are changed by a flag to the compiler.
>
>This statement is, in my opinion, usually true.  However, the semantics of
>C are not as simple as Dave Sill seems to think they are.  For example,
>Dave may consider:
>
>	a = b;
>	a = b;
>
>to mean "move data from b to a twice", which is what a simple C compiler
>might do.  On the other hand, some people -- myself included -- consider
>that same C fragment to mean "assign b's value to a, then assign b's value
>to a", which is redundant and subject to optimization.

I believe the semantics of C are not as complex as Chip Salzenberg
seems to think they are.  I don't remember anything in K&R that says
assignment means "assign b's value to a unless a probably already has
b's value, as far as the compiler can tell."

>Many people who complain about optimizers ass_u_me that they know what the
>output of the compiler should look like.  They are not really writing C,
>they are writing for an assembler pre-processor which happens to accept C
>syntax.  This method of programming is a Bad Thing, because it produces
>non-portable code.

There may be people like that, but I'm not one of them.  I don't mind
optimization AS LONG AS THE CODE WORKS THE WAY IT'S SUPPOSED TO.  In
the case of redundant assignments and such, I think a warning,
preferably from lint, is a better approach than assuming the
programmer made a mistake.

>Aggressive optimizers, on the other hand, are a Good Thing.  They allow
>programmers to write code which is fast _and_ portable.  Putting up with
>"volatile" and other optimizer controls is a small price to pay.

Sometimes any price is too high.

=========
The opinions expressed above are mine.

"We must remove the TV-induced stupor that lies like a fog across the
 land."
					-- Ted Nelson

throopw@xyzzy.UUCP (Wayne A. Throop) (05/14/88)

) dsill@nswc-oas.arpa (Dave Sill)
)>NASA is to spaceflight as      |  Henry Spencer @ U of Toronto Zoology
)>the Post Office is to mail.    | {ihnp4,decvax,uunet!mnetor}!utzoo!henry
) I think it's indecorous for a Canadian to criticize the U.S. space
) program in an international forum.  And besides, people who live in
) glass houses...

I don't know from indecorousness, but I DO know that I'd rather have my
mail delivered by NASA than ride in a spacecraft built by the Post Office.

        (apologies to Bartholomew Gimble 
                (if our quote system is attributed correctly))
--
You will not see a monster {at Loch Ness}
just as millions before you have not.
                        --- Charles Kuralt
-- 
Wayne Throop      <the-known-world>!mcnc!rti!xyzzy!throopw

terry@wsccs.UUCP (Every system needs one) (05/14/88)

In article <375@m3.mfci.UUCP>, root@mfci.UUCP (SuperUser) writes:
> No, these two programs are not equivalent.  When comparing an unsigned to
> a signed integer, the signed integer is first cast to unsigned (which results
> in no change in the bit pattern), then the comparison is performed.  In fact,
> since octal and hex constants are signed in C, on a machine with 4 byte two's
> complement integers 0xffffffff is equivalent to -1, and people compare these
> signed constants to unsigned values all the same.  Most people probably think
> they're unsigned to begin with.  People are often surprised by the fact that
> an expression like (u > -1) is always false when u is unsigned, since the -1
> is first cast to unsigned, whereupon it becomes the largest possible
> unsigned.

I beg to differ.  Not only are programmers not surprised, they expect it
(unless they are so cursed as to be working on a trinary or one's complement
machine).  The programs ARE equivalent to a good optimizer.  In addition,
since the size of int is defined to be the register size, unless the size of
int is 8 bits, the unsigned char will be sign-extended to int to make the
comparison; in this process, everything will be converted to size of int
before the comparison.  The -1 will overflow, losing the sign bit, and you will
come up with something large.  It generally will not compare equal, unless you are very
lucky.  Then you have badly written code that happens to work.

			terry@wsccs

mouse@mcgill-vision.UUCP (der Mouse) (05/16/88)

In article <258@ateng.UUCP>, chip@ateng.UUCP (Chip Salzenberg) writes:
> In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
>> This bears repeating.  There should be no circumstances under which
>> the semantics of the language are changed by a flag to the compiler.

Like -D?  -I?  -R?  -fsingle?

> This statement is, in my opinion, usually true.  However, [the
> semantics of C aren't simple].  For example, Dave may consider:

> 	a = b;
> 	a = b;

> to mean "move data from b to a twice", which is what a simple C
> compiler might do.  On the other hand, some people -- myself included
> -- consider that same C fragment to mean "assign b's value to a, then
> assign b's value to a", which is redundant and subject to
> optimization.

As far as I can tell, those two meanings are exactly the same thing,
stated two slightly different ways.  The second one is not redundant if
either a or b is volatile; and if neither is volatile, then the first
one *is* redundant and subject to optimization.

					der Mouse

			uucp: mouse@mcgill-vision.uucp
			arpa: mouse@larry.mcrcim.mcgill.edu

terry@wsccs.UUCP (Every system needs one) (05/17/88)

In article <310@zinn.MV.COM>, mem@zinn.MV.COM (Mark E. Mallett) writes:
| In article <3938@killer.UUCP>, chasm@killer.UUCP (Charles Marslett) writes:
| > In article <13074@brl-adm.ARPA>, dsill@NSWC-OAS.arpa (Dave Sill) writes:
| > > Terry Lambert writes:
| > > 
| > > This bears repeating.  There should be no circumstances under which
| > > the semantics of the language are changed by a flag to the compiler.
| 
| Avoiding the word "semantics", -O can change the way the program
| operates, which is what is important in this discussion.
| 
| And if you don't think that the semantics of the language can be changed
| by a flag to the compiler, well... what about "-D" ??

Mark:

	First:	I am the one being repeated, not doing the repeating.
	Second:	The statement being repeated did NOT deal with all
		options; only -O... it was:

|Basically, if it works without -O, it should work with -O, regardless of what
|the compiler writer's optimization does to achieve its goal.  If this makes
|writing compilers harder, so what?

	Third:	The point was that while a transform-based -O is most likely
		a non-reversible one, no information should be lost due to
		bad assumptions made by the compiler writer, and that ANSI
		C was a cop-out by compiler writers who wanted the hard
		work of determining volatility and aliases, as well as
		other information useful for optimization, done by the
		programmer.  I simply suggested that I could not see one
		instance of where it was conceptually impossible for a
		compiler writer to determine these things; just damn hard.
		My point was "better the compiler writer than the compiler
		user".


| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'Admit it!  You're just harrasing me because of the quote in my signature!' |

terry@wsccs.UUCP (Every system needs one) (05/17/88)

In article <4628@ihlpf.ATT.COM>, nevin1@ihlpf.ATT.COM (00704a-Liber) writes:
> In article <511@wsccs.UUCP> terry@wsccs.UUCP (Every system needs one) writes:
> 
> >I didn't say they shouldn't affect the generated program, I said that they
> >shouldn't affect the OPERATION of the generated program.
> 
> What's the difference between your two statements?  Optimizers, almost by
> definition, do affect the operation of the generated program.

Not from a black-box point of view.  The only thing that's affected from the
user's point of view is execution speed and/or resource usage, not what the
user thinks of as "operation".  The same information in produces the same
information out.

My qualm is that badly written optimizers operating on "good code" (code not
dependent on side effects) sometimes change the shape of the black box, and
they should not.

				terry@wsccs

terry@wsccs.UUCP (Every system needs one) (05/17/88)

In article <1988May4.195636.1801@utzoo.uucp>, henry@utzoo.uucp (Henry Spencer) writes:
| [begin snotty tone :-)] Well, I suppose people who make errors, and thus
| need debuggers, would have to worry about such things, but I fail to see
| any reason why *I* should eat my hat...  [end tone] :-) :-)
| 
| More seriously, yes, debugging optimized code can be a real pain.  I don't
| think even the Mips people, who put in a lot of effort on things like this,
| have a debugger that can explain to you what the optimized code is doing.
| You just have to tone down the optimization for debugging, and pray that
| there aren't any serious differences of opinion between you and the optimizer
| when it comes time to "compile for production".

Ok.  You can wash the condiments off your hat.  ;-)

The point is that there shouldn't be differences of opinion in "good code",
and anywhere that such a thing is possible is an example of a bad assumption,
given that the code conforms to the same standard as the compiler.

A bad assumption is a compiler bug.  If a compiler can't compile conforming
code, it isn't a conforming compiler.

				terry@wsccs

terry@wsccs.UUCP (Every system needs one) (05/17/88)

In article <871@xyzzy.UUCP>, throopw@xyzzy.UUCP (Wayne A. Throop) writes:
) ) dsill@nswc-oas.arpa (Dave Sill)
) )>NASA is to spaceflight as      |  Henry Spencer @ U of Toronto Zoology
) )>the Post Office is to mail.    | {ihnp4,decvax,uunet!mnetor}!utzoo!henry
) ) I think it's indecorous for a Canadian to criticize the U.S. space
) ) program in an international forum.  And besides, people who live in
) ) glass houses...
) 
) I don't know from indecorousness, but I DO know that I'd rather have my
) mail delivered by NASA than ride in a spacecraft built by the Post Office.

	Henry Spencer is to international relations as
	The Canadian Post Office is to mail in semi trucks

| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| 'Admit it!  You're just harrasing me because of the quote in my signature!' |

mouse@mcgill-vision.UUCP (der Mouse) (05/18/88)

In article <529@wsccs.UUCP>, terry@wsccs.UUCP (Every system needs one) writes:
> In article <375@m3.mfci.UUCP>, root@mfci.UUCP (SuperUser) writes:
>> No, these two programs are not equivalent.  When comparing an
>> unsigned to a signed integer, the signed integer is first cast to
>> unsigned [...]
> In addition, since the size of int is defined to be the register
> size,

The size of an int is defined to be whatever the compiler designer
wants it to be.  Generally, this will be a `natural' size of the
machine, such as the register size or the bus width.

> unless the size of int is 8 bits,

Who says a char is 8 bits?  Suppose char and int are both 16 bits?  (As
far as I can tell, the programs are equivalent exactly when char is
narrower than int.)

> the unsigned char will be sign-extended to int

If it gets extended at all, it will be zero-extended.  Pre-ANSI, it
would be zero-extended to unsigned int (generally); post-ANSI (and
probably in a few pre-ANSI compilers) it will be zero-extended to
signed int.

					der Mouse

			uucp: mouse@mcgill-vision.uucp
			arpa: mouse@larry.mcrcim.mcgill.edu

dsill@nswc-oas.arpa (Dave Sill) (05/23/88)

der Mouse writes:
>In article <258@ateng.UUCP>, chip@ateng.UUCP (Chip Salzenberg) writes:
>> In article <13074@brl-adm.ARPA> dsill@NSWC-OAS.arpa (Dave Sill) writes:
>>> This bears repeating.  There should be no circumstances under which
>>> the semantics of the language are changed by a flag to the compiler.
>       ~~~~~~~~~~~~~~~~~~~~~~~~~
>Like -D?  -I?  -R?  -fsingle?

If you're able to change C's semantics with a -D or -I option,
something's wrong.  -R does indeed change semantics, and for that
reason I would try to avoid using it (I've never seen it used,
myself).  Same for -f, it's not even in my cc man page.

=========
The opinions expressed above are mine.

"If I learn from my mistakes, pretty soon I'll know everything."	
					-- Richard O'Keefe

stu@linus.UUCP (Stuart A. Werbner) (05/27/88)

	I have something as useful to contribute to this subject as everyone
	else:

	"Back before the war...
	Shimon Peres...
	What about me?
	Blahblahblah
	What about me? 
	Blah Blah Blah?"
	..." 		- Iggy Pop "Blah Blah Blah"

	As a concluding remark, I would like to add the following:

	Blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
	blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah.

	Maybe now comp.lang.c can move on to a new topic!!!!!!

	Happy Memorial Day!!



						Stuart A. Werbner

tneff@bfmny0.UUCP (Tom Neff) (08/16/89)

In article <1613@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:
>In article <14523@bfmny0.UUCP>, tneff@bfmny0.UUCP (Tom Neff) writes:
>> <Burp> The fact is that every extra hour the applications wonk spends
>> trying to get the #*$&@# compiler or linker or OS loader to work, or
>> on the phone to some consultant, is worth billions of instructions on
>> any processor of his choice.
>
>Yeah, so what?  If I can shave 10% off of my compiles, that's a lot of
>time.  Suppose it takes me a week to save that 10%: then after I spend
>a total of nine weeks waiting for compiles, I've broken even: that
>would have been ten weeks.  After that it's clear gain. ...
>
>Now, if the compiler is sold to eight thousand customers...I'll let you
>work out the arithmetic for yourself. :-)

There is a fallacy in here somewhere.  Presumably we are now talking
about the application in question (to be optimized) being a compiler
itself, right?  So, if you speed the compiler up 10% and sell it to
8,000 customers, you can extrapolate truly impressive time savings on
paper, BUT none of that translates directly back to you, AND it's
running on 1,000 < N < 8,000 separate machines (in parallel as it were)
so much of the resource savings is spurious.  On an ideal, single
central, heavily loaded machine, you can reach the theoretical point
where X% of optimization yields X% more work done.  But that
configuration is a rarity in computing today.

Having said that, it's nice to have a faster compiler.  But there is no
such thing as a free lunch.  Was the release of the compiler delayed
three or four months while customers needing new features fretted, so
that this 10% optimization could go in?  Were two or three followup
patch levels required, with attendant customer confusion and frustration
and damage to the image of reliability, as latent bugs introduced by
this 10% optimization revealed themselves?  Did the competition see its
latest and greatest compiler version tested against some embarrassing
old swaybacked rev of your product in a leading computer magazine
review, because your own turbocharged 10%-optimized wonder beast was
still held up in the shop being cleaned up instead of loaded on the
reviewer's system?

This is what I mean when I talk about the distinction between
optimization for a purpose, versus optimization for its own sake.
Optimization is not some mystical state of grace, it is an intricate act
of human labor which carries real costs and real risks.  And it is
frequently resorted to for irrational reasons; computer wonks are proud
of being able to do it, so when they are not physically prevented from
optimizing, they tend to optimize.
-- 
"We walked on the moon --	((	Tom Neff
	you be polite"		 )) 	tneff@bfmny0.UU.NET

flint@gistdev.UUCP (Flint Pellett) (08/17/89)

In article <1613@mcgill-vision.UUCP> mouse@mcgill-vision.UUCP (der Mouse) writes:
>In article <14523@bfmny0.UUCP>, tneff@bfmny0.UUCP (Tom Neff) writes:
>> <Burp> The fact is that every extra hour the applications wonk spends
>> trying to get the #*$&@# compiler or linker or OS loader to work, or
>> on the phone to some consultant, is worth billions of instructions on
>> any processor of his choice.
>
>Yeah, so what?  If I can shave 10% off of my compiles, that's a lot of
>time.  Suppose it takes me a week to save that 10%: then after I spend
>a total of nine weeks waiting for compiles, I've broken even: that
>would have been ten weeks.  After that it's clear gain. ...
>
>Now, if the compiler is sold to eight thousand customers...I'll let you
>work out the arithmetic for yourself. :-)

There are a couple sides to this, and learning not just when to optimize,
but WHAT to optimize, is a large part of growing up as a programmer.  As an
example, I once had a "consultant" forced on me by a client, who was to
look a system of ours and tell us how to speed it up.  (We'd just finished
developing the thing following a "make it work first, then make it work
fast" philosophy, so I felt I knew what to do to speed it up already, but
sometimes it's hard to refuse "help".)  After several days of poking
around, the consultant came back and proudly displayed a routine he had
found and rewritten: he said he had tested the routine and had achieved
about 90% improvement: now it ran in about 10% of the time it used to use. 
I asked what the effect of that was on the overall system performance, and
he replied that he had not measured that.  So, we did, and there was no
noticeable effect: the routine he'd spent a day improving was being
executed about once a day.  Meanwhile, I had spent my 2 days working on a
routine which I knew was being run constantly: I managed to squeeze a mere
30% improvement out of that routine, with a resulting improvement of about
15% in overall system performance.

In the other example you cite, with the compiler: it is often better to
have the faster product, but the simple arithmetic of saying that if it
takes 10% off the time to do the compiles that there will be a result of
10% higher productivity doesn't follow.  The problem is that any system is
no faster than the slowest part, and eventually the slowest part ends up
being the human part: a certain amount of time is required for a person to
think and plan, and often times the forced "break" in the action caused by
waiting for a compile is used for that purpose.  (I haven't studied this
myself, but my theory is that if you could plot a graph of productivity vs
time needed to compile, that curve would be bell shaped for many people:
toward the bottom where compiles are instantaneous, productivity would be
less because the programmer would be charging ahead without ever stopping
to think about what they are doing, and would end up wasting time doing
things that some pre-planning would have eliminated.  I don't think this
would be true for every programmer though, only some of them: for some
people, who don't even go to the terminal until they have figured out
exactly what they are going to do, this does not apply: but my guess is
that these people are a small minority of all programmers.)

-- 
Flint Pellett, Global Information Systems Technology, Inc.
1800 Woodfield Drive, Savoy, IL  61874     (217) 352-1165
INTERNET: flint%gistdev@uxc.cso.uiuc.edu
UUCP:     uunet!gistdev!flint

jeffrey@algor2.uu.net (Jeffrey Kegler) (08/18/89)

Flint Pellett (flint%gistdev@uxc.cso.uiuc.edu) writes:
=> I haven't studied this myself, but my theory is that if you could
=> plot a graph of productivity vs time needed to compile, that curve
=> would be bell shaped for many people: toward the bottom where
=> compiles are instantaneous, productivity would be less because the
=> programmer would be charging ahead without ever stopping to think
=> about what they are doing, and would end up wasting time doing
=> things that some pre-planning would have eliminated.

A lot of my compiles, and I suspect many other people's, are
recompiles of large systems where we are testing the result of
changing a single define.  The change and test require next to no
thought but the recompile involves tens of thousands of lines.  As
often as not, the change is to a configuration file which is included
by *.c, so make does not speed things up--just saves a lot of typing.
Face it, compile speed is A Good Thing for productivity.  If you get
me a compiler that will recompile GNU Emacs in 30 seconds, I may waste
some time at first, but I will learn to live with it, I promise you.
I'm looking forward to the problem, in fact.
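To make the scale concrete, the sort of change involved is often a single
line in a shared header (config.h and MAX_USERS here are hypothetical
names, just a sketch of the situation described):

```c
/* config.h -- included by every *.c file in the system.
 * Changing this one define forces make to recompile tens of
 * thousands of lines, even though the edit itself took seconds. */
#define MAX_USERS 64    /* was 32; the whole point of the rebuild */
```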

The productivity versus runtime efficiency dilemma, of course, still
remains in full force.  An interesting wrinkle is when the things you
are compiling are productivity tools themselves--do you want an editor
that compiles 5% faster or runs 5% faster?
-- 

Jeffrey Kegler, Independent UNIX Consultant, Algorists, Inc.
jeffrey@algor2.ALGORISTS.COM or uunet!algor2!jeffrey
1762 Wainwright DR, Reston VA 22090

tneff@bfmny0.UUCP (Tom Neff) (08/19/89)

In article <1989Aug18.152547.10774@algor2.uu.net> jeffrey@algor2.UUCP (Jeffrey Kegler) writes:
>A lot of my compiles, and I suspect many other people's, are
>recompiles of large systems where we are testing the result of
>changing a single define.  The change and test require next to no
>thought but the recompile involves tens of thousands of lines.  

To paraphrase my previous award winning aphorism about optimization:
Knowing WHEN to compile is as important as knowing HOW to compile. :-)

The most annoying single creature in programming's Wild Kingdom is the
raging Recompilasaurus.  Bellowing its characteristic mating call -- "OH
*THAT* MUST BE IT" -- this amazing creature snarls and stomps its way
through the Thrashed Systemian jungles, changing one line and then
recompiling!  changing one line and then recompiling! -- leaving the
twisted, still-moving hulks of object and listing files littered across
the forest floor.

When cornered by enraged packs of other programmer species, but especially
by the dreaded Managatherium, its cry changes to the well-known "BUT I
ONLY CHANGED ONE THING!"

Well, I'll stay here at Base Camp and radio our position while Jim and
Bob descend into the valley and put this radio collar on the raging
Recompilasaurus... :-)
-- 
"We walked on the moon --	((	Tom Neff
	you be polite"		 )) 	tneff@bfmny0.UU.NET

henry@utzoo.uucp (Henry Spencer) (08/20/89)

Flint Pellett (flint%gistdev@uxc.cso.uiuc.edu) writes:
=> I haven't studied this myself, but my theory is that if you could
=> plot a graph of productivity vs time needed to compile, that curve
=> would be bell shaped for many people: toward the bottom where
=> compiles are instantaneous, productivity would be less because the
=> programmer would be charging ahead without ever stopping to think
=> about what they are doing, and would end up wasting time doing
=> things that some pre-planning would have eliminated.

This same "making programming easier will make programmers sloppy"
argument has been used against everything from timesharing to high
baud rates on terminals.  It's been nonsense every time, and I think
it's nonsense this time too.  Giving programmers more powerful tools
lets bad programmers make bigger messes, but it also liberates good
ones from productivity-reducing hassles.  Fast compiles, in particular,
encourage bad programmers to be sloppy, but also encourage good ones
to get things *right* rather than saying "oh plotz, it works well enough
and I can't stomach compiling it again to fix the little blemishes".
A good programmer will stop to plan when it's desirable, but *doesn't*
need to do that every time.

Flint, have you ever used a punchcard-based batch environment with a
turnaround measured in hours?  I have.  I'll take timesharing, thank you.
And I'll take the fastest compiles I can get.
-- 
V7 /bin/mail source: 554 lines.|     Henry Spencer at U of Toronto Zoology
1989 X.400 specs: 2200+ pages. | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

scs@adam.pika.mit.edu (Steve Summit) (08/20/89)

In article <484@gistdev.UUCP> flint@gistdev.UUCP (Flint Pellett) writes:
>...the simple arithmetic of saying that if it
>takes 10% off the time to do the compiles that there will be a result of
>10% higher productivity doesn't follow.  The problem is that any system is
>no faster than the slowest part, and eventually the slowest part ends up
>being the human part: a certain amount of time is required for a person to
>think and plan...

Just so.  The other thing that bugs me about requests for, or
claims of, faster compilers is that any compiler you have to wait
for will seem too slow.  Except when I'm tracking down exactly
one bug, I try to put all of my compiles in the background, and
work on another module in the meantime.  Under MS-DOS, I'd gladly
take a compiler two times *slower* if I could just put it in the
background.  (I'm not sure why I've put up with a job involving
DOS for as long as I have.)

                                            Steve Summit
                                            scs@adam.pika.mit.edu

cck@deneb.ucdavis.edu (Earl H. Kinmonth) (08/20/89)

In article <1989Aug20.024207.29079@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>Flint Pellett (flint%gistdev@uxc.cso.uiuc.edu) writes:
>
>Flint, have you ever used a punchcard-based batch environment with a
>turnaround measured in hours?  I have.  I'll take timesharing, thank you.
>And I'll take the fastest compiles I can get.

It's not quite that simple. I started programming in 1968 (yes, Bunky,
they had computers back in the dark ages). Sometimes the turnaround was
more than hours. A hardware failure could easily make it two or three
days.  On the other hand, the usual two-, three-, or four-hour turnaround
was just right for a starving graduate student. I could honestly bill
the waiting time as part of my research assistantship at the same time
I was reading in my primary field (Japanese history).

With faster processors, but not really fast processors, waiting time
has dropped to the point where it is just about right for picking your
nose, but not for heavyweight intellectual activity such as reading a
chapter in a non-computer book.

I know that the faster the processor, the more prone I am to rely on
the compiler for syntax checking.  When the turnaround was in hours or
days, I really checked, and in the course of doing so, often found
logical errors.

I too dislike arguments that suggest that faster processors lead to
sloppy code. On the other hand, I would like to see some honest
research on the subject, so that it is not just a matter of rump
ideological assertions....

mcdonald@uxe.cso.uiuc.edu (08/21/89)

>Flint, have you ever used a punchcard-based batch environment with a
>turnaround measured in hours?  I have.  
Hours?  I had to put up with days - plus floor sorts!  (A floor sort,
for those who don't know, is O(1), not O(n*log(n)).)

>I'll take timesharing, thank you.

And you can have it. I'll take a standalone machine!!!

>And I'll take the fastest compiles I can get.
I'll agree to that.

Doug McDonald

vlcek@mit-caf.MIT.EDU (Jim Vlcek) (08/22/89)

Henry Spencer takes issue, in <1989Aug20.024207.29079@utzoo.uucp>,
with another poster who theorizes that too-fast compiles make for
worse programmers:

``This same "making programming easier will make programmers sloppy"
  argument has been used against everything from timesharing to high
  baud rates on terminals.  It's been nonsense every time, and I think
  it's nonsense this time too.

``... have you ever used a punchcard-based batch environment with a
  turnaround measured in hours?  I have.  I'll take timesharing, thank
  you.  And I'll take the fastest compiles I can get.''

This recalls for me a comical episode nine years ago, when I left St.
Petersburg Jr. College in Clearwater, FL, to come to MIT.  Many of my
friends at the JC went on to University of Florida at Gainesville.
During my first term at MIT, I took a course called ``Structure and
Interpretation of Computer Programs'' which was quite good to its
word.  It involved a lot of Lisp programming (some Algol in the
middle), and was done on 9600 baud CRTs talking to a DEC-20.  It was
the first course I ever took which taught real _programming_, as
opposed to mere _coding_ (which is what a lot of ``programming''
courses actually teach), and it taught it exceedingly well.

At the same time, my friends from the JC were taking a course in
Fortran (required for all engineering majors) at UF which involved,
you guessed it, punching up card decks and then submitting them for
compilation and execution.  Usually, when you came back the next day,
you received a listing of syntax errors - most often words misspelled
by one letter, etc.

What amused me no end was that my friends did not envy me for having
had access to more powerful resources; no, they genuinely pitied me!
They honestly felt that I had been robbed, in that I had not been
forced to develop the ``careful programming practices'' that batch
environments demanded.  And, given their druthers, they would choose
to do things the way they had...  I wonder, even now, whether many of
these people _ever_ got the chance to see what programming is really
about.

My experience has been that the quality of my programming style is
relatively independent of the resources available to me.  What _is_ a
function of the resources available is the size and sophistication
of the applications which can be developed.  For this reason, I
disagree with those who hold the 11th commandment to be ``Always
develop your application on the machine upon which it will be run.''
(Notice I say develop; validation is yet another thing.)  I could
_never_ have gotten my PhD thesis work, a control system for an
automated crystal grower, done on the PC-XT upon which it runs.  And,
in developing it elsewhere, I probably learned a lot about portable
coding (which I think is an almost unqualified ``good thing'') which I
wouldn't have otherwise.

Jim Vlcek (vlcek@caf.mit.edu  uunet!mit-caf!vlcek)

scs@hstbme.mit.edu (Steve Summit) (09/17/89)

In article <4151@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>>And by the way: why do you need an operator for swapping?
>Because if a machine can do it with a coupla gates and half a cycle,
>I'd like to do it with an operator.
>
>Some machines (we've already seen an example involving a DG) have
>opcodes to swap two values, and it seems a bit ludicrous to code
>some elaborate swapping routine when the optimizer is just going
>to throw it all out and insert the single instruction.
>
>Still, any good optimizer will catch all the obvious cases, but it's
>gotta be harder to write a compiler to do it that way than to implement
>just another canned routine.

I'd much, much rather that the optimizer bend 'way over backwards
to detect cases which reduce to simple opcodes, than have the
language cluttered up with features corresponding to every
"useful" operation any architecture has ever offered, with the
resulting requirement that every compiler writer implement
emulations for all those operations not directly supported by his
particular machine.  (True, as Blair seems to suggest, the
emulations could be portable, "canned" routines.)

Someone has already posted an example of a compiler/optimizer
which translated the dumb, obvious, temporary-variable-using
exchange into a single EXCH instruction.  I cheered when I saw
that -- it's the right way to do optimizations.

From time to time the relative merits of power-of-two
multiplicative arithmetic vs. shift instructions are discussed.
x <<= 2 is ugly if x *= 4 is what is really meant, and it's
completely unnecessary -- any compiler worth its salt will
generate the left shift anyway.  (Ritchie's original PDP11
compiler did so even without the optimizer.)

People always say "but I have to do source-level optimizations,
because I don't have control over the optimizer and it might not
make them."  If the optimizer isn't making the optimizations you
want it to, remember that it may also not be making optimizations
that you don't know about, or have no control over.  If highly
optimized code is that important to you, you'd be much better off
buying a better compiler than spending time, introducing bugs,
and compromising maintainability by cluttering your code with
mechanical, source-level optimizations.

The beauty of low-level optimization (even peephole optimization
following code generation) is that it is automatic and comes in
to play even if you forget to apply your source-level
optimization, and on aspects of the code (such as scaled pointer
arithmetic and subscript calculation) for which you can't.  Every
time you discover a new machine instruction or sequence which
could better implement some C fragment, figure out a way to have
the optimizer recognize the equivalent, obvious C code and
generate the optimized sequence, rather than figuring out some
obfuscated C code which will happen to generate the sequence
using the existing code generator.  The payoff is much greater.

meissner@tiktok.dg.com (Michael Meissner) (09/19/89)

In article <14357@bloom-beacon.MIT.EDU> scs@adam.pika.mit.edu (Steve Summit) writes:
| In article <4151@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
| >>And by the way: why do you need an operator for swapping?
| >Because if a machine can do it with a coupla gates and half a cycle,
| >I'd like to do it with an operator.
| >
| >Some machines (we've already seen an example involving a DG) have
| >opcodes to swap two values, and it seems a bit ludicrous to code
| >some elaborate swapping routine when the optimizer is just going
| >to throw it all out and insert the single instruction.
| >
| >Still, any good optimizer will catch all the obvious cases, but it's
| >gotta be harder to write a compiler to do it that way than to implement
| >just another canned routine.
| 
| I'd much, much rather that the optimizer bend 'way over backwards
| to detect cases which reduce to simple opcodes, than have the
| language cluttered up with features corresponding to every
| "useful" operation any architecture has ever offered, with the
| resulting requirement that every compiler writer implement
| emulations for all those operations not directly supported by his
| particular machine.  (True, as Blair seems to suggest, the
| emulations could be portable, "canned" routines.)
| 
| Someone has already posted an example of a compiler/optimizer
| which translated the dumb, obvious, temporary-variable-using
| exchange into a single EXCH instruction.  I cheered when I saw
| that -- it's the right way to do optimizations.

Even though the DG MV and Eclipse computers support a SWAP
instruction, none of the compilers that I'm aware of use it (and I
have worked on two different compilers for the DG systems).  About the
only time I can recall using it in assembly language was to work with
instructions that expect things in different fixed registers.  But
then, when you only have 4 integer registers (and of those registers,
only 2 can be used for word memory references), the number of times
you need to swap registers is vanishingly small.

--
Michael Meissner, Data General.
Uucp:		...!mcnc!rti!xyzzy!meissner		If compiles were much
Internet:	meissner@dg-rtp.DG.COM			faster, when would we
Old Internet:	meissner%dg-rtp.DG.COM@relay.cs.net	have time for netnews?

kenny@m.cs.uiuc.edu (09/23/89)

>(I wonder if SIGN is used in any FORTRAN program outside a test suite :-)

You betcha, and it's used outside Fortran, also.  There's no nice way
to do this one simply, either; you *don't* want to do a multiply just
to copy a sign.

I just grepped for copysign in the 4.3BSD math library, and found
fifty-odd uses there alone.

| /         o            Kevin Kenny                             (217) 333-5821
|<  /) |  | | |/\        Department of Computer Science           o  ,    o  ,
| \ X_  \/  | | |        University of Illinois                 40 07 N 88 13 W
kenny@cs.uiuc.edu        1304 W. Springfield Ave.       
uunet!uiucdcs!kenny      Urbana, IL   61801                  AD ASTRA PER ARDUA
k-kenny@uiuc.edu
kenny%cs@uiucvmd.bitnet

tomas@u30003.rsv.svskt.se (Tomas Ruden) (03/22/91)

In article <18401@lanl.gov> jlg@cochiti.lanl.gov (Jim Giles) writes:
>In article <11109@dog.ee.lbl.gov>, torek@elf.ee.lbl.gov (Chris Torek) writes:
>|> [...]
>|> Smaller source code, yes.  Smaller or faster object code---well, if
>|> your compiler generates different code for
>|> 
>|> 	if (a = b)
>|> 
>|> than for
>|> 
>|> 	a = b;
>|> 	if (a)
>|> 
>|> then your compiler is not worth what you paid for it.  [...]
>
>Even if it was _free_!  The above 'optimization' was present
>in Fortran compilers more than 30 years ago - and their _fastest_
>machines were slow compared to today's micros (so the excuse that
>doing the above analysis makes the compiler too slow or complex is
>garbage).
>
>J. Giles

I agree!  A smart programmer who knows the architecture of a CPU can
write code that is fast for that particular CPU.  Those who write the
code generator for a compiler should have that knowledge and make use
of it.  When I am writing C code, now on a RISC computer, I can't
write code that is 'smart' for that particular architecture, because
then it's certainly not 'smart' for a VAX or a 386.

I use these "if( a=b )" kinds of constructions because what they mean
is clearly defined in C, and I don't think they obscure anything.  I
especially use them in "while( c=nextchar() )" kinds of constructions.
I believe that when I have to think more carefully about what I have
between the ()'s, I don't tend to make the mistake of using '='
instead of '==' as often.
-- 
Tomas Ruden,  ...!sunic!u30003!tomas or tomas@u30003.rsv.svskt.se
Don't blame the Swedish Tax      !  I wish I had an English
Administration for my opinions   !  spellingchecker