[comp.lang.c++] Efficiency of OOPLs

david@beowulf.JPL.NASA.GOV (David Smyth) (11/14/88)

In article <6590068@hplsla.HP.COM> jima@hplsla.HP.COM (Jim Adcock) writes:
>
>And among object oriented languages, only C++ seems to have the
>runtime efficiencies to make it viable for most commercially 
>developed code.

I hear this over and over.  And it just is NOT true.  Sit down at a
LISP workstation, and do some windowing stuff.  Sit down at a SUN, a Mac,
something running Smalltalk, Mesa, or what have you, and you will find
comparable performance.  The problems we are solving today are MUCH larger
than the problems we used to solve, when compiler efficiency mattered
so much.  Now, what matters is the efficiency of the human-to-language
communication, NOT the language-to-machine communication.

There is no way that a half million lines of code is going to be programmed
as intelligently as 22,000 lines of code (approximately UNIX + window system
+ compilers + tools + debuggers + editors + mailers + ... vs. Smalltalk).

In C++, one spends too damn much time writing garbage collectors, 
making sure constructors are being called when they should be, ...
Things which have NOTHING to do with really solving the problem.
Think how painful your C code would be if you had to set up and
destroy the stack every time.  Barf.  And think how hard it would
be to really make your stuff run fast.
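
For instance, here's a toy string class (made up for this post).  Every
line of it is storage bookkeeping that a Smalltalk or LISP programmer
never writes at all:

    // Toy example.  ALL of this is storage bookkeeping; NONE of it is
    // the problem you were actually hired to solve.
    #include <string.h>

    class String {
        char *buf;
    public:
        String(const char *s) {
            buf = new char[strlen(s) + 1];
            strcpy(buf, s);
        }
        String(const String &other) {           // copy constructor, or else
            buf = new char[strlen(other.buf) + 1]; // two objects share one buffer
            strcpy(buf, other.buf);
        }
        String &operator=(const String &other) { // assignment, or else a leak
            if (this != &other) {
                delete [] buf;
                buf = new char[strlen(other.buf) + 1];
                strcpy(buf, other.buf);
            }
            return *this;
        }
        ~String() { delete [] buf; }            // destructor, or else a leak
    };

And you get to write all of that again for every class that owns storage.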

>Part of this difference comes from the C++ implied claim that there
>is not just one best way of doing these tasks -- that in more
>critical cases programmers are going to have to make informed
>choices as to what memory management schemes to use, ...

You probably remember one of the reasons why so many of us "real
programmers" were unwilling to use any HLL about 10 years ago: you
had to use the stack so much.  If you hand coded your routines, used
macros instead of subroutines, ...

What happened then was that the assembler-coded systems were micro-optimized,
macro-stupid.  The same thing is happening, and will keep happening, with C
and C++ when they are used instead of a REAL object oriented language.
>
>Certainly, anyone who intends to become shall we say "professional"
>at writing Object Oriented code is going to have to become
>proficient in a lot of "new" issues that don't come up as often
>in C code writing -- memory management schemes, garbage collection
>schemes, multiple inheritance, container schemes, try/recover
>schemes, constructors, destructors.

Why?  Right now, people don't have to become proficient in serial port
programming, device drivers, schedulers, etc., even though we all use
them over and over, all day long.

johnr@praxis.co.uk (John Richards) (11/16/88)

In article <3514@jpl-devvax.JPL.NASA.GOV> david@beowulf.JPL.NASA.GOV (David Smyth) writes:
>In article <6590068@hplsla.HP.COM> jima@hplsla.HP.COM (Jim Adcock) writes:
>>
>>And among object oriented languages, only C++ seems to have the
>>runtime efficiencies to make it viable for most commercially 
>>developed code.
>
>I hear this over and over.  And it just is NOT true.  Sit down at a
>LISP workstation, and do some windowing stuff.  Sit down at a SUN, a Mac,
>something running Smalltalk, Mesa, or what have you, and you will find
>comparable performance.

I'll second David's message.  We are using Objective-C for user interface
work, and yes, it is for a commercial product with real live users.  The vast
majority of the code runs perfectly well with no performance problems.  At
the moment we have one or two areas where it needs to run a bit faster, but
we're doing things the proper way round: design it, code it, and
then, ONLY IF YOU HAVE TO, optimise.

There are still people who spend too much time worrying about the efficiency
of their compilers.  Okay, there are applications which require every last
ounce of speed but often this can be narrowed down to small bits of code
that are executed frequently.

I once knew a student who was extremely proud of optimising his program
for analyzing results from an experiment he was doing.  "I managed to cut
the run-time by 30 seconds!"  He was less pleased when I pointed out that
it had taken him two hours' work and he only ran the program once a day on
average.

                                      John Richards

psrc@poseidon.ATT.COM (Paul S. R. Chisholm) (11/17/88)

< I'm *not* cross-posting to comp.lang.smalltalk; they've heard this before! >

In article <3514@jpl-devvax.JPL.NASA.GOV>, david@beowulf.JPL.NASA.GOV (David Smyth) writes:
> In article <6590068@hplsla.HP.COM> jima@hplsla.HP.COM (Jim Adcock) writes:
> >And among object oriented languages, only C++ seems to have the
> >runtime efficiencies to make it viable for most commercially 
> >developed code.

(This has a benefit that's often overlooked.  If I can say to
management, with a straight face, "We can write applications in this new
language that will be just as fast as applications in the old language,
both running on the old platform", I've got a much better chance of
convincing management to let me use the new language.)

> I hear this over and over.  And it just is NOT true.  Sit down at a
> LISP workstation, and do some windowing stuff.

Excuse me.  Sit down at an IBM PC (4.77 MHz 8088, 640K RAM, two 360K
floppies), and develop and run a Smalltalk/V application.  Then develop
and run the same application in Turbo C.  Yes, the former will give you
opportunities to play around with different implementations, and you'll
resort to a three-fingered salute* a lot less often.  But the latter can
produce an application your customers will find a lot faster.

(* Ctrl-Alt-Del, the PC's reboot-when-all-else-fails sequence.)

Not a valid example?  Fine.  Give every developer a Cray 2 with a
screen the size of a living room wall, and any language he or she
chooses.  Then get them to write the software for the central office
switch you just sold to the phone company.  No, you can *not* put a
Cray 2 in the central office; it'll cost more than the phone company is
willing to pay.

One last example that removes the processor and the language:  any
competent programmer with a degree in CS could write you a Travelling
Salesman search.  Of course, it'll run in exponential time.  One *very*
bright man at Murray Hill spent a lot of time figuring out a better
algorithm.  Does speed matter?  For ten nodes, probably not.  For a
thousand, you bet it does.
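
(For the curious: the naive search really is just a page of code.  Here's
a sketch, with a made-up four-city distance table; the catch is that it
examines all (n-1)! tours:)

    // Brute-force Travelling Salesman: try every tour that starts at city 0.
    // Exponential time -- fine for four cities, hopeless for a thousand.
    #include <stdio.h>

    const int N = 4;
    int dist[N][N] = {      // made-up symmetric distances
        {  0, 10, 15, 20 },
        { 10,  0, 35, 25 },
        { 15, 35,  0, 30 },
        { 20, 25, 30,  0 }
    };

    int best = 1 << 30;

    void search(int city, int visited, int cost, int count)
    {
        if (count == N) {                   // every city seen: close the tour
            if (cost + dist[city][0] < best)
                best = cost + dist[city][0];
            return;
        }
        for (int next = 1; next < N; next++)
            if (!(visited & (1 << next)))
                search(next, visited | (1 << next),
                       cost + dist[city][next], count + 1);
    }

    int main()
    {
        search(0, 1, 0, 1);
        printf("shortest tour: %d\n", best);    // prints 80 for this table
        return 0;
    }

(For this table it prints 80.  For a thousand cities, come back after the
heat death of the universe.)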

There are tradeoffs between programmer speed, application speed,
application cost, processor speed, and processor cost.  That's not
earth-shattering.  But it's a tradeoff, not a limit.  You say the 6502
and the 8088 aren't *real* processors?  Fine.  Maybe Mitch Kapor
thought so, too, but he was willing to suffer with toys for a million
dollars (VisiPlot for the Apple II) or rather more (Lotus 1-2-3).

> In C++, one spends too damn much time writing garbage collectors, 
> making sure constructors are being called when they should be, ...

Do you really think so?  Then implement new and delete (and auto
variables, if you like) using garbage collection.  C++ doesn't *forbid*
garbage collection.  Unlike Smalltalk, it doesn't *require* it, either.
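
Here's a rough sketch of what I mean.  (The Node class and its arena are
made up, and a real collector is far more work than this; the point is
only that the language hands you the hooks.)  Allocation is taken over
per class, individual deletes become no-ops, and storage is reclaimed
wholesale:

    // Sketch only: every Node comes out of one arena, operator delete is
    // a no-op, and reclaim_all_nodes() frees everything in one sweep.  A
    // real garbage collector plugs in at exactly these two hooks.
    #include <stddef.h>
    #include <stdlib.h>

    const size_t ARENA_SIZE = 65536;
    static char arena[ARENA_SIZE];
    static size_t arena_used = 0;

    class Node {
        Node *next;
        int value;
    public:
        Node(Node *n, int v) { next = n; value = v; }
        void *operator new(size_t size)
        {
            size = (size + 7) & ~(size_t)7;     // keep allocations aligned
            if (arena_used + size > ARENA_SIZE)
                abort();                // a real scheme would grow or collect
            void *p = &arena[arena_used];
            arena_used += size;
            return p;
        }
        void operator delete(void *) { }    // individual deletes cost nothing
    };

    void reclaim_all_nodes() { arena_used = 0; }    // one sweep frees them all

Code that says "new Node(head, 42)" never knows the difference.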

> >Part of this difference comes from the C++ implied claim that there
> >is not just one best way of doing these tasks -- that in more
> >critical cases programmers are going to have to make informed
> >choices as to what memory management schemes to use, ...
> 
> You probably remember one of the reasons why so many of us "real
> programmers" were unwilling to use any HLL about 10 years ago: you
> had to use the stack so much.  If you hand coded your routines, used
> macros instead of subroutines, ...

The overhead of a function call is important in critical sections.
Inline functions are a boon for this reason.
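For instance (a made-up accessor; any decent compiler expands it in
place):

    // The inline accessor compiles down to a plain memory load; there's
    // no call/return sequence left to pay for in the loop below.
    class Complex {
        double re, im;
    public:
        Complex(double r, double i) { re = r; im = i; }
        inline double real() const { return re; }   // expanded at the call site
    };

    double sum_reals(const Complex v[], int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += v[i].real();     // no costlier than reading a struct field
        return sum;
    }

You keep the data hiding and pay nothing for it in the critical section.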
 
> What happened then was that the assembler-coded systems were micro-optimized,
> macro-stupid.  The same thing is happening, and will keep happening, with C
> and C++ when they are used instead of a REAL object oriented language.

That can happen with LISP and Smalltalk applications, too.  And fat,
sloppy C applications are all too common; not because of too much
micro-optimization, but because of too little macro-optimization
throughout the development "cycle", due to time constraints, budget
constraints, or plain carelessness.

> Why?  Right now, people don't have to become proficient in serial port
> programming, device drivers, schedulers, etc., even though we all use
> them over and over, all day long.

Would that it were so.  There are times when performance (of
communications systems, let alone real time software) demands knowing
more about the serial port device driver than the man page provides.
I've been up this creek in a former project, *after* carefully measuring
the performance of the entire system.  No, it's probably not something
that should be reflected in the design stage.  On the other hand,
I've seen designs which (over)specified behavior that was bound to be
inefficient when the code was done.

If I had to prototype a program (the user interface *or* the
implementation), I'd choose a LISP workstation with Smalltalk over a
dedicated mainframe with a C++ compiler.  If performance weren't important
to my eventual results (esthetic or otherwi$e), I'd stay with Smalltalk.
But if my application has to run on boxes that *I'm* paying for, or that
my customer is paying for, I'll write the final program in C++.

Paul S. R. Chisholm, psrc@poseidon.att.com (formerly psc@lznv.att.com)
AT&T Bell Laboratories, att!poseidon!psrc, AT&T Mail !psrchisholm
I'm not speaking for the company, I'm just speaking my mind.