[comp.lang.c++] Generating temporaries

ericf@persoft.UUCP (Eric R. Feigenson) (04/04/89)

[I'm rather new to C++, so this may be a rather naive query.  Please bear
 with me]

I have a class M for which I want to define operator+.  My question(s) have
to do with how I generate the temporary that contains the result, and how
such a temporary gets destroyed.

First, if my operator+ returns an object of class M by value, it has to
create a local object to return.  If I understand things right, an
assignment like "c = a + b" will call operator+, which returns an object
of class M (pushed onto the stack), and then a structure copy is done
into c.  All the destructors get called just fine, and no stray memory is
left around, but there's the overhead of the structure copy.
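
Roughly, using a toy M that holds a single double just to show the
mechanics:

    class M {
        double v;
    public:
        M(double x = 0) : v(x) { }
        M operator+(M& y)           // return by value
        {
            M result(v + y.v);      // local temporary...
            return result;          // ...structure-copied back to the caller
        }
    };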

Now I've also fiddled around with trying to return a reference to an object
of class M, in order to avoid the structure copy.  The code for operator+()
then has to allocate the result object of type M on the free store, using
the "new" operator.
So, if we have some code that looks like:

    M a, b, c;

    // some code to give a and b values

    c = a + b;  // calls "M& M::operator+(M& x)"

and we define operator=(M&) to do the assignment, how can we free the
space allocated by operator+() for the temporary result?  If operator=()
does the "delete" then it may destroy some space that we meant to keep
around.  Also, in an expression such as "a + b + c", how can the
intermediate temporaries be freed?  The destructors get called, but how
can they know to delete the space on the free store?
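
Sketched with the same sort of toy M, the reference-returning version
looks something like:

    class M {
        double v;
    public:
        M(double x = 0) : v(x) { }
        M& operator+(M& y)
        {
            M* result = new M(v + y.v);   // temporary lives on the free store
            return *result;               // ...but who deletes it, and when?
        }
    };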

To summarize:  I want to be able to define operator+ (or any other operator
for that matter) using references (pointers) to avoid the overhead of
structure copy.  The problem seems to be knowing when and how to free the
space that the operator must dynamically allocate in order to generate a
result.

I hope this makes sense.  Thanks in advance for your help.  Please e-mail
any responses.

                        -Eric


---------------------------------------------------------------------------
                        -Eric Feigenson
                        Persoft, Inc.
            (Internet)  ericf@Persoft.COM
            (UUCP)      {ihnp4, seismo, uunet, allegra}!uwvax!persoft!ericf

hearn@claris.com (Bob Hearn) (04/05/89)

From article <7.UUL1.2#261@persoft.UUCP>, by ericf@persoft.UUCP (Eric R. Feigenson):
> [I'm rather new to C++, so this may be a rather naive query.  Please bear
>  with me]

Nope, good question.

> I have a class M for which I want to define operator+.  My question(s) have
> to do with how I generate the temporary that contains the result, and how
> such a temporary gets destroyed.
> 
> First, if my operator+ returns an object of class M, it has to create
> an object to return. If I understand things right, this will call
> operator+, which will return an object of class M (which was
> pushed onto the stack), and do a structure copy into c.  All the destructors
> get called just fine, and no stray memory is left around, but there's the
> overhead of the structure copy.
> 
> Now I've also fiddled around with trying to return a reference to an object
> M, in order to avoid the structure copy.  The code for operator+() has to
> allocate a pointer to an object of type M using the "new" operator.
> So, if we have some code that looks like:
> 
>     M a, b, c;
> 
>     // some code to give a and b values
> 
>     c = a + b;  // calls "M& M::operator+(M& x)"
> 
> and we define operator=(M&) to do the assignment, how can we free the
> space allocated by operator+() for the temporary result?  If operator=()
> does the "delete" then it may destroy some space that we meant to keep
> around.  Also, in an expression such as "a + b + c", how can the
> intermediate temporaries be freed?  The destructors get called, but how
> can they know to delete the space on the free store?
> 
> To summarize:  I want to be able to define operator+ (or any other operator
> for that matter) using references (pointers) to avoid the overhead of
> structure copy.  The problem seems to be knowing when and how to free the
> space that the operator must dynamically allocate in order to generate a
> result.
> 
> I hope this makes sense.  Thanks in advance for your help.  Please e-mail
> any responses.

I think this is a good question whose answer should be posted rather than
just emailed.

One thing you can do is give operator+ a static local object of class
M that it puts the result into.  You can return a reference to this, and
after the copy in

a = b + c

from the temp into a, the temp is not referenced anymore, so it is free to be
used again whenever it is needed.  You don't have to use new or delete,
and in fact the storage for the temp never has to get thrown away.
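
A minimal sketch of the idea, using a toy M with a single double member
just to show the shape:

class M {
    double v;
public:
    M(double x = 0) : v(x) { }
    M& operator+(M& y)
    {
        static M temp;          // one result slot, reused on every call
        temp.v = v + y.v;
        return temp;            // no new, no delete, no copy made here
    }
};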

Bob Hearn
hearn@claris.com

scp@raven.lanl.gov (Stephen Pope) (04/05/89)

In article <7.UUL1.2#261@persoft.UUCP> ericf@persoft.UUCP (Eric R. Feigenson) writes:

   [I'm rather new to C++, so this may be a rather naive query.  Please bear
    with me]

   I have a class M for which I want to define operator+.  My question(s) have
   to do with how I generate the temporary that contains the result, and how
   such a temporary gets destroyed.


Not at all trivial.  Any and all who have tried to create a usable
Vector/Matrix library can tell you so.  If you have followed the
recent discussions of Doug Lea et al. here concerning constructor
idempotency and the "X x ( f() );" vs "X x = f();" problem, you
will find that there is no *robust* solution, whether or not you
use reference counting.

The problem is that your object (class X) needs to identify itself
as being "bound" (serving as the storage for a symbol, like X x), or
unbound (a function return value, for example, the result of
operator+()).  This can be handled in two ways:

1) Create a parallel "tmpX" class, define all functions normally
returning an X as returning a tmpX, and then supply the
appropriate X::X(tmpX&) and X::operator=(tmpX&) routines which
understand how to reuse storage space and data.  This
actually works well, but with a class hierarchy of more than
two or three derived classes and a small handful of object-
returning functions/operators, the combinatorics quickly get
out of hand.  Besides, who wants to write:

	tmpMatrix Matrix::operator+(Matrix&);

2) Include a "temporary" flag in each object, along with constructors
and assignment operators (and any other "smart" operators) that
check this flag and steal data/storage when appropriate.  Unfortunately,
because constructor semantics are still marginally defined (or,
to avoid 3rd degree burns, should I say "defined by their
implementation"), it is not possible to "count constructors"
on the way out of class-object-returning functions and through the
succeeding constructor(s)/assignments.  Furthermore, you cannot
avoid having to use some manner of "smart" return statement from
such functions, which marks the returned value as being reusable
(temporary); a sketch of this follows.
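
Concretely (invented names and a toy data layout; the real bookkeeping
is messier, as the caveats above suggest):

class X {
    int     is_temp;            // set on values produced by operators
    int     n;
    double* data;
public:
    X(int sz) : is_temp(0), n(sz), data(new double[sz]) { }

    // "smart" copy: steal the storage of a marked temporary, otherwise
    // make an honest copy.  Deciding whether the copy itself should
    // then count as a temporary is exactly the hard part noted above.
    X(X& y) : is_temp(0), n(y.n)
    {
        if (y.is_temp) { data = y.data; y.data = 0; }
        else {
            data = new double[n];
            for (int i = 0; i < n; i++) data[i] = y.data[i];
        }
    }

    ~X() { delete [] data; }    // deleting a null pointer is harmless

    X operator+(X& y)           // the "smart return": mark the result
    {
        X res(n);
        for (int i = 0; i < n; i++) res.data[i] = data[i] + y.data[i];
        res.is_temp = 1;
        return res;
    }
};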

Straight reference counting schemes, which actually pass around
some sort of handle object pointing at another object that holds
the data storage, with built-in reference counting (and suicide
for garbage collection), fall short whenever assignment is an
important operation on class objects, as it is in Matrix classes.
Furthermore, because cfront 1.2 delays the destruction of *hidden*
temporaries (used for function return values) until the close
of the innermost enclosing block, you'll discover some
surprising inefficiencies: references you thought were gone are
actually still fully alive, forcing unnecessary copying of data
and/or allocation of new storage space.
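
For concreteness, the kind of handle-plus-representation scheme I mean
looks roughly like this (names made up):

struct MatRep {                 // the shared representation
    int     refs;
    int     n;
    double* elem;
    MatRep(int sz) : refs(1), n(sz), elem(new double[sz]) { }
    ~MatRep() { delete [] elem; }
};

class MatHandle {               // what the user actually declares
    MatRep* rep;
public:
    MatHandle(int sz) : rep(new MatRep(sz)) { }
    MatHandle(const MatHandle& h) : rep(h.rep) { rep->refs++; }
    ~MatHandle() { if (--rep->refs == 0) delete rep; }   // "suicide"
    MatHandle& operator=(const MatHandle& h)
    {
        h.rep->refs++;          // increment first, so self-assignment is safe
        if (--rep->refs == 0) delete rep;
        rep = h.rep;            // note: this merely rebinds the handle; it
        return *this;           // does not give Matrix-style value semantics
    }
};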

If you have watched the evolution of Doug Lea's Matrix
package (still an *experimental* baby), you will have seen
much of this hashed out.  The problems have proven so
acute that for the time being, responsibility for temporary
control and storage reuse has been largely pushed back
onto the class user, an unfortunate but seemingly necessary
evil (at least until constructor semantics are a little
better defined/implemented).

Stephen C. Pope                      Disclaimer:
The Santa Fe Institute                  "Employer?  What employer?"
scp@sfi.santafe.edu

bs@alice.UUCP (Bjarne Stroustrup) (04/05/89)

YAHI.STANFORD.EDU (Michael Tiemann) writes:

 > As a general rule of thumb, you can safely assume that no construct was
 > added to C++ which would not have good efficiency.  Return-by-value is
 > no exception.

Thanks, Michael.

richard@pantor.UUCP (Richard Sargent) (04/05/89)

In article <9431@claris.com>, hearn@claris.com (Bob Hearn) writes:
> 
> I think this is a good question whose answer should be posted rather than
> just emailed.
> 
> One thing you can do is have operator+ have a static local object of class
> M that it puts the result into.  You can return a reference to this, and
> after you do the copy
> 
> a = b + c
> 
> to a from the temp the temp is not referenced anymore, so it is free to be
> used again whenever it is needed.  You don't have to use new or delete,
> and in fact the storage for the temp never has to get thrown away.
> 
> Bob Hearn
> hearn@claris.com
I too am new to C++.  I found the answer informative, but it failed to
clarify what the solution is when the expression is "a = b + c + d" or
any other more complex expression.

Thanks.
Richard Sargent
Systems Analyst

mecklen%gr.utah.edu@wasatch.utah.edu (Robert Mecklenburg) (04/05/89)

>In article <7.UUL1.2#261@persoft.UUCP> ericf@persoft.UUCP (Eric R. Feigenson) writes:
>
>   [I'm rather new to C++, so this may be a rather naive query.  Please bear
>    with me]
>
>   I have a class M for which I want to define operator+.  My question(s) have
>   to do with how I generate the temporary that contains the result, and how
>   such a temporary gets destroyed.

We have a points package (xyz coordinates) which I have just converted
to use '+', '-', and '*' operators.  The basic method I used for
generating temporaries was to allocate a permanent pool of points and
treat it as a circular list.  Points are declared as

	struct point_type { 
	    real_type xyz[3];
	    point_type &operator+( point_type &p );
	 };

And the pool looks like

	extern const int MAX_POINT_POOL;
	extern point_type point_pool[];
	extern point_type *pt_ptr;

When I need another temporary I call the allocation function

	inline point_type *
	pt_nextpt()
	{
	    if ( ++pt_ptr >= &point_pool[MAX_POINT_POOL] )
		pt_ptr = point_pool;
	    return pt_ptr;
	}
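
For illustration, a sketch of how an operator body can then use
pt_nextpt() (not necessarily our exact code):

        inline point_type &
        point_type::operator+( point_type &p )
        {
            point_type *t = pt_nextpt();    // next slot in the circular pool
            for ( int i = 0; i < 3; i++ )
                t->xyz[i] = xyz[i] + p.xyz[i];
            return *t;
        }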

So the following code allocates two temporaries and performs one
structure copy at the end

	point_type p;
	p = a + b + c;

The advantages of this are:

	1.  easy to implement and understand
	2.  efficient (overhead of an increment and test-and-branch)

The disadvantages are:

	1.  breaks easily if programmer holds on to a temporary too
	    long, the circular queue will eventually trash his
	    temporary with no warning;
	2.  to be reasonably safe the pool must be reasonably large
	    (we use MAX_POINT_POOL = 64).

Comments?
Robert Mecklenburg

dl@rocky.oswego.edu (Doug Lea) (04/05/89)

Managing temporaries in expression-oriented C++ classes is hard.  
None of the kinds of solutions I know of works in a way that everyone
thinks is entirely satisfactory.

I believe the reason for this lies in the fact that C++ does not (and
perhaps can not (and perhaps should not!)) support the kinds of
programmer-specified expression analyses required in order to translate
`functional' construction-oriented code dealing with arbitrary
programmer-defined classes into the object-oriented programming model
that C++ otherwise emphasizes.

Let me illustrate with a mostly `generic' example. (Pretending that
`M' stands for `Matrix' would not hurt here though!)

class M
{
   desc_type  desc;
   data_type* data;
public:
              M()               { quickly_get_into_a_legal_default_state(); }
              M(M& y)           { copy_desc_and_data(y); }

        void  operator = (M& y) { kill_data(); copy_desc_and_data(y); }
   
        void  add(M& a, M& b);  // add a and b into *this (i.e., *this=a+b)
        void  mul(M& a, M& b);  // similarly multiply a by b into *this

          M   operator + (M& b) { M res; res.add(*this, b); return res; }
          M   operator * (M& b) { M res; res.mul(*this, b); return res; }
};


void print(M&); // just to have a fn that uses M's

Now consider the user code,

void f()
{
    M a, b, c, d, e;

    // ...
    // do some things with a, b, c, d, e
    // ...

    a = b + c * d + e;
    print(a);
    print(d);
}

This is compiled into something equivalent to

void f2()
{
    // ...
    M t1; t1.mul(c, d); M t2; t2.add(b, t1); M t3; t3.add(t2, e); a = t3;
    // ...
}

(It is not exactly equivalent to f(), since t1, t2, and t3 have
lifetimes to the end of f2(), but would only live to the end of the
expression in f().)

However, anyone who wanted to write a substantially faster version
of this could do it via

void f3()
{
    // ...
    a.mul(c, d); c.add(b, a); a.add(c, e);
    // ...
}

which takes advantage of 
    (1) The fact that `a' can be used to hold partial calculations
         during the course of evaluation, since it is not otherwise 
         used in the expression
    (2) The fact that `c' can be reused as a temporary, since it is not
        used subsequently in f();
    (3) The fact that the final copy into `a' can be avoided by using
        it as the destination of the final operation.

In other examples, additional, similar analyses could have been
performed in order to further simplify things, by e.g., reusing common
subexpressions, taking advantage of the commutativity of addition, and
so on.

Now, if f() were dealing with simple builtin int's rather than objects
of type M, a good optimizing compiler generating 3-address assembly
code (like for a Vax) would have discovered all of this on its own,
and would have translated f() into the assembler analog of f3().
However, there is no way to tell C++ about such optimizations,
because, among many other things, M::operator +(M&) is not obligated
to behave `plus-like' at all.

Here are some things you can do about all this:

First is to encourage programmers to program in the idiom that is best
supported by the language, i.e., in an object-oriented, procedural
style. A good-enough caricature of object-oriented programming is one
in which relatively few objects are `born'; they undergo a series of
mutations during their lifetimes, then they die. This notion is less
than fully compatible with the caricature of expression oriented
programming in which values are created and passed around, without
ever being mutated, in order to construct new values.

Programming in this fashion has all the disadvantages and advantages
of programming in assembly language versus programming in, say, `pure'
Lisp or ML: It is less aesthetically pleasing to most people, places
more responsibility for temporary management, evaluation order, etc.,
on the programmer, and encourages trickery (or at least non-obvious,
difficult-to-verify coding techniques), but on the other hand, allows
much fuller specification and control, and generates code that can be
almost arbitrarily more time and space efficient.

In a sense, designing one of these `arithmetic-type' classes in a way
that supports this style is like designing your own 1-, 2-, and
3-address special-purpose assembly language. One can even go a bit
further by supporting a mechanism for programmers to specifically
indicate that variables can be `scrapped' and reused as temporaries
inside procedures (So, for example, `M r; r.add(a, scrap(b));' could
then be performed by allowing r to `steal' b's data if necessary
during the calculation), along with a few similar kinds of
constructs.
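
A rough sketch of the shape such a scrap() mechanism might take, grafted
onto a stripped-down variant of the M above (the flag and its handling
are invented; aliasing between the destination and the operands, size
mismatches, and so on are ignored):

class M
{
    int     scrapped;           // `my storage may be stolen'
    int     n;
    double* data;
public:
    M(int sz) : scrapped(0), n(sz), data(new double[sz]) { }
    ~M()    { delete [] data; }

    friend M& scrap(M& x) { x.scrapped = 1; return x; }

    // *this = a + b; if b has been scrapped, take over its storage
    // instead of allocating fresh space for the result.
    void add(M& a, M& b)
    {
        double* bdat = b.data;
        delete [] data;
        if (b.scrapped) { data = b.data; b.data = 0; b.scrapped = 0; }
        else            data = new double[a.n];
        n = a.n;
        for (int i = 0; i < n; i++) data[i] = a.data[i] + bdat[i];
    }
};

// usage:  M r(100), a(100), b(100);
//         r.add(a, scrap(b));  // r may cannibalize b's storage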

On the other hand, there is no reason for a class designer *not* to
provide operator syntax as `syntactic sugar' on top of the procedural
base: It allows programmers to write code in a form more closely
related to their conceptual framework, but at the same time providing
a mechanism (via the procedural versions) to fine-tune such code if
they so desire (`Make it right, *then* make it fast', as they say). It
is hard to ask for too much more, except to try to penalize all
programmers as little as possible for the mere act of constructing new
objects via functions, by fine-tuning C++ itself to more gracefully
and efficiently handle such constructions. Things like taking
advantage of X(X&) constructor idempotency via elision, supporting
named returned values, and allowing programmers to declare functions
as `const' to indicate that they are free of side-effects all help
here (and are all implemented as `experimental extensions' in
g++-1.34.2). 

It is also possible to use variations of shadow `temp' classes and/or
reference counting techniques -- each can eliminate some degree of
redundant copying and promote space reuse in the course of computing
expressions. While I have used several such strategies, my current
feeling is that they are rarely worth doing: these techniques cannot be
implemented so as to guarantee anything close to optimal expression
evaluation, and thus lull users into the false impression that they
need not worry about such things; their use makes otherwise transparent
class designs utterly baffling; and they have occasionally surprising
side-effects for the unwary. (Oh, one
thing that does not work at all is to have something like
M::operator+(M&) return a reference to a single static `temp'
variable. This would fail for `a = (b + c) * (d + e);' which requires
2 temporaries.)

Finally, one could really solve the whole problem by establishing a
zillion #pragma's in C++ that could tell a C++ compiler everything it
might ever need to know about `M' and its constituent operations so
that it could perform expression analysis on its own. This looks very
difficult, but is perhaps worth contemplating in a slightly more
tractable guise: Those people who do not wish to break from
expression-oriented programming styles in C++ might want to try
writing special preprocessors for these kinds of classes, that
translate expression code into procedural/oo code, in order to scope out
the kinds of issues involved.

As Stephen Pope indicated, many of these ideas can someday be seen in
the libg++ Matrix package, which will be ready RSN. (Thanks to Stephen,
as well as others, for helping me get some of these things straight!)

-Doug


Doug Lea, Computer Science Dept., SUNY Oswego, Oswego, NY, 13126 (315)341-2367
email: dl@rocky.oswego.edu        or dl%rocky.oswego.edu@nisc.nyser.net
UUCP :...cornell!devvax!oswego!dl or ...rutgers!sunybcs!oswego!dl

joel@dtscp1.UUCP (Joel Rives) (04/06/89)

In article <1531@wasatch.utah.edu> mecklen%gr.utah.edu.UUCP@wasatch.utah.edu (Robert Mecklenburg) writes:
>
>The disadvantages are:
>
>	1.  breaks easily if programmer holds on to a temporary too
>	    long, the circular queue will eventually trash his
>	    temporary with no warning;
>	2.  to be reasonably safe the pool must be reasonably large
>	    (we use MAX_POINT_POOL = 64).
>
>Comments?

Though i admit your method is clever, the primary disadvantage that you 
mention above is one which i -- as a developer -- would not want to live
with. It introduces the possibility of a side-effect which could (and
probably would) create hard-to-track-down bugs.

Joel

jima@hplsla.HP.COM (Jim Adcock) (04/13/89)

Hm, tiemann and bs seem to be saying they know
a good, fast, easy way to handle the 
"binary operator for matrices" class of 
problem.  If so, I'd sure like to know what
the simple solution is, 'cuz I can't figure it out!

I assume that matrices consist of a moderate and
fixed size chunk of store representing 
standard properties of the matrix: number
of rows and columns, zero or non-zero,
symmetric or non-symmetric, etc, plus a 
pointer to a chunk of heap to hold the
n by m numbers that make up that matrix.

So both the fixed part and the variable
part of the matrix are bigger than one
would like to copy.  And one doesn't
want to introduce temporaries unnecessarily
since that also requires initialization and
a call to the heap manager to allocate
space for the variable-sized part of the
matrix.

Certainly small, and/or fixed sized objects
are not too hard to handle.  For example,
implementing a class of 3x3 matrices would
seem to be pretty straight-forward.

Has anybody found a simple, fast, easy way
to handle these large, variable sized
objects with binary operators?

----

While far from simple, or easy, I have been
playing around with a Matrix approach that
avoids generating unnecessary copies or
temporaries, and generates fast, compact
code.

The general approach is to have the binary
operators, instead of returning a "Matrix",
return a linked structure that represents
the operations that need to be performed,
along with the addresses of the Matrices
on which to perform the operations.  The
operations are not actually performed until
their result is assigned to a matrix, at
which point in time the operation can be
performed "in place."  Since the intermediate
structure that is formed is made of small, fixed
size objects, the intermediate class can
be entirely "inlined", resulting in surprisingly
good efficiencies when followed by a good,
optimizing backend C compiler.

For example:

a = b + c + d;

where a, b, c, and d are matrices, results
in 11 machine code instructions that set
up a linked structure saying what needs to
get added to what, and where to put the 
results, followed by one call to a routine
that copies b into a (resizing if necessary),
and then adds c to a, then adds d to a.
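
A rough sketch of the general shape (this is not my actual code, which
deals with the full Matrix machinery; the names Matrix and MatSum here
are just for illustration):

struct MatSum;                          // proxy for a pending sum

struct Matrix {
    int     n;                          // pretend 1-D, just for the sketch
    double* elem;
    Matrix(int sz) : n(sz), elem(new double[sz]) { }
    ~Matrix() { delete [] elem; }
    void operator=(const MatSum& s);    // evaluate the pending sum in place
};

struct MatSum {                         // small, fixed size, fully inlinable
    enum { MAXTERMS = 8 };
    const Matrix* term[MAXTERMS];
    int nterms;
    MatSum(const Matrix& a, const Matrix& b)
        { term[0] = &a; term[1] = &b; nterms = 2; }
    MatSum operator+(const Matrix& m) const     // extend the chain
    {
        MatSum s(*this);                // the proxy is tiny: cheap to copy
        s.term[s.nterms++] = &m;        // (capacity check omitted)
        return s;
    }
};

inline MatSum operator+(const Matrix& a, const Matrix& b)
    { return MatSum(a, b); }

// all the real work happens here, in place in the destination
// (aliasing of the destination with an operand is ignored in this sketch)
void Matrix::operator=(const MatSum& s)
{
    if (n != s.term[0]->n) {            // resize if necessary
        delete [] elem;
        n = s.term[0]->n;
        elem = new double[n];
    }
    for (int i = 0; i < n; i++) {
        double sum = 0;
        for (int t = 0; t < s.nterms; t++)
            sum += s.term[t]->elem[i];
        elem[i] = sum;
    }
}

// usage:  Matrix a(100), b(100), c(100), d(100);
//         a = b + c + d;   // no Matrix temporaries, one pass over the data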

It seems if you mix adds and multiplies in
an expression you are forced to generate
a temporary [on heap space] which then
must be reclaimed at destruction time.

If anybody has actually figured out a
cleaner way to handle these difficult
large, variable sized object classes, 
please tell us.

patch@hpldola.HP.COM (Pat Chkoreff) (04/15/89)

/ hpldola:comp.lang.c++ / jima@hplsla.HP.COM (Jim Adcock) /  4:51 pm  Apr 12, 1989 /
> Hm, tiemann and bs seem to be saying they know
> a good, fast, easy way to handle the 
> "binary operator for matrices" class of 
> problem.  If so, I'd sure like to know what
> the simple solution is, 'cuz I can't figure it out!

Doug Lea (response #5) speaks from experience:  C++ is object-oriented,
not value-oriented.  I'm sure there are lots of clever ways to make the
expression syntax work, but at what price?  I prefer the object-oriented
syntax, if only to avoid that queasy feeling I get when I sin against my
programming language :-).

- Patrick Chkoreff

david@smythsun.JPL.NASA.GOV (David Smyth) (04/15/89)

In article <6590094@hplsla.HP.COM> jima@hplsla.HP.COM (Jim Adcock) writes:
**************************************************************************
*> While far from simple, or easy, ... avoids generating unnecessary
*> copies or temporaries, and generates fast, compact code.
*> 
*> The general approach is to have the binary operators, instead of
*> returning a "Matrix", return a linked structure that represents ...
*> 
*> For example:
*> 
*> a = b + c + d;
*> 
*> where a, b, c, and d are matrices, results in 11 machine code
*> instructions that set up a linked structure saying what needs to get
*> added to what, and where to put the results, followed by one call to a
*> routine that copies b into a (resizing if necessary), and then adds c
*> to a, then adds d to a.
*> 
*> It seems if you mix adds and multiplies in an expression you are forced
*> to generate a temporary [on heap space] which then must be reclaimed at
*> destruction time.
*> 
*> If anybody has actually figured out a cleaner way to handle these
*> difficult large, variable sized object classes, please tell us.
**************************************************************************

One way that is easy: DO NOT DEFINE OR USE OPERATORS FOR CLASSES!

What your approach is doing is building, behind the scenes, a message
which is then handed to the assignment method.  That is, as you pointed
out, "far from simple, or easy."

**** flame on ****
Supposedly, we are using C++ to make our lives simpler so we are more
productive.  This approach, as well as other binary operator
approaches, is simply not leading to that goal.
**** flame off ****

Methods are simple, operators are hard.  Therefore, forget the
operators, don't try to write code which looks like C, and then you
WILL see productivity improvements.

pj@hrc63.co.uk (Mr P Johnson "Baddow") (04/17/89)

In article <9431@claris.com>, hearn@claris.com (Bob Hearn) writes:

> I think this is a good question whose answer should be posted rather than
> just emailed.
> 
> One thing you can do is have operator+ have a static local object of class
> M that it puts the result into.  You can return a reference to this, and
> after you do the copy
> 
> a = b + c
> 
> to a from the temp the temp is not referenced anymore, so it is free to be
> used again whenever it is needed.  You don't have to use new or delete,
> and in fact the storage for the temp never has to get thrown away.
> 
> Bob Hearn
> hearn@claris.com


What happens when you do something like

    a = (b + c) + d;

C++ first calls "b.operator+(c)".  This puts the result in your internal
static and returns a reference to it.  This reference then goes into the
outer addition, which (without realising it) will overwrite its own
argument with its result.

I know how you feel: the way C++ does this is not very efficient.  It would
be nice if operators could be told where to put their results.

My only solution is to suggest providing a "+=" operator.  This would add to
its first argument and then return a reference to that argument.  Much
better.
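
A minimal sketch, with a toy M holding a single double:

    class M {
        double v;
    public:
        M(double x = 0) : v(x) { }
        M& operator+=(M& y)
        {
            v += y.v;           // accumulate in place: no temporary at all
            return *this;
        }
    };

The caller then trades "a = b + c + d;" for "a = b; a += c; a += d;".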

It might be possible to pull some trick with the "M::M( M& )" constructor
to avoid this: if you define this constructor then it will get called for
function argument/return values instead of the default bit-copy function,
but you do have to be careful how this affects your other functions.  This
constructor is explained in Stroustrup section 6.6.

Also, remember that "=" can be defined, otherwise it is also a bit-copy.
Some compilers are intelligent and avoid doing two bit-copies.

Paul

BTW, I am not an expert, so this could be wrong.

If this has been dealt with, sorry.  We have problems with our mail feed
right now.

jima@hplsla.HP.COM (Jim Adcock) (04/27/89)

> **** flame on ****
> Supposedly, we are using C++ to make our lives simpler so we are more
> productive.  This approach, as well as other binary operator
> approaches, are simply not leading to that goal.
> **** flame off ****
> 
> Methods are simple, operators are hard.  Therefore, forget the
> operators, don't try to write code which looks like C, and then you
> WILL see productivity improvements.

There's a division of labor going on here between a class writer
and the users of that class.

Anyone who has used a language with larger "math objects" built in,
such as matrices, vectors, or complex numbers, will certainly
tell you what a pleasure and a godsend it is to be able to say:

	A = B + C*D;

rather than:

	mat temp[ROWSIZE][COLSIZE];

	matmpy(C,D,temp);
	matadd(B,temp,A);

-- and how much more readable -- and likely to be right when normal
math notation can be used!

Certainly, if I had a one-off programming task, where the code is
to be used once, then thrown away, I would not bother to use
operators.

Alternatively, if I did not want to worry about speed or storage
requirements, writing classes with operators is not too hard.

BUT, I do not see one-off, throw-away code as being the future
of C++ or object oriented programming.

I see the future of C++ and OOP being people willing to make
the investment in truly world-class, easy-to-use, "obvious"
syntax foundation classes that extend C++ [or other OOPLs]
to broader horizons than can be built into a general
purpose compiler.

So I believe it is worthwhile for a few dedicated souls to put
in the effort to do a really super job on: strings, vectors,
matrices, linked lists, trees, memory managers, etc -- and
then make these widely available to the rest of us.

So that their difficult effort makes our life much easier.

Also, these programming tasks are only really difficult until
someone finds a good solution.  Then the rest of us can apply
the same general principles in creating other "binary operator"
classes.