[comp.sys.amiga.advocacy] Peter, can you explain to the Amigoids

melling@cs.psu.edu (Michael D Mellinger) (05/07/91)

In article <11866@uwm.edu> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:

   okay, maybe this will be a little easier to understand.  A 500k
   program on your NeXT will be faster than a 2mb program on it, no?  So
   a 500k program on a 68040 3000 will be faster than a 2mb program on
   your NeXT.

No, they will run at the same speed, unless you get swapping but that
will depend on how much memory you have.  Once the working set is in
the computer, it doesn't swap until it needs a page that isn't in
memory.

   Write a simple "Hello, World" in C, and tell us the size.  That will
   tell us quite a bit, I think.


#include <stdio.h>

int main()
{
   printf("hello world.\n");
}

compiled with cc -s -object test.c

-rwxr-xr-x  1 melling  wheel       1236 May  6 18:11 a.out*


or as compiled normally.

compiled with cc -s test.c

-rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*

gblock@csd4.csd.uwm.edu (Gregory R Block) (05/07/91)

From article <*05Gx0x&1@cs.psu.edu>, by melling@cs.psu.edu (Michael D Mellinger):
> No, they will run at the same speed, unless you get swapping but that
> will depend on how much memory you have.  Once the working set is in
> the computer, it doesn't swap until it needs a page that isn't in
> memory.

Okay, I'll try again, because you're just not seeing what I see.  Any
given processor can only run so many instructions per second, right?
Right.  So if there are lots more instructions to run, it will be
slower, right?  Right.

> -rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*

That does tell me quite a bit.  16k to simply say "Hello, World".  Oh,
boy.  That's advanced.  Even the A3000UX compiles it at something like
1100 bytes.  Maybe your programs could be a little slimmer.  Or maybe
it's just the NeXT problem in your list of many.

BTW, a C-compiled "Hello World" on the Amiga runs about 150 bytes.
-- 
All opinions are my own, and not those of my employer.
Why?  He doesn't know I'm doing this.
								-Wubba

melling@cs.psu.edu (Michael D Mellinger) (05/07/91)

In article <11877@uwm.edu> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:


   Okay, I'll try again, because you're just not seeing what I see.  Any
   given processor can only run so many instructions per second, right?
   Right.  So if there are lots more instructions to run, it will be
   slower, right?  Right.

The symbol table is included with the executable (I'm pretty sure of
this).  The NeXT compiler is GCC.  Why would it generate more code on
the NeXT than it would on the Amiga?  Do you really think that you
have the best compilers in the business on the Amiga?  Get real!


   > -rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*

   That does tell me quite a bit.  16k to simply say "Hello, World".  Oh,
   boy.  That's advanced.  Even the A3000UX compiles it at something like
   1100 bytes.  Maybe your programs could be a little slimmer.  Or maybe
   it's just the NeXT problem in your list of many.

Ok idiot.  I gave two examples.  One was only 1300 bytes, and it was
the *same* program.  Get someone else to explain why the program is
16K.  I'll get you a hint.  Internal fragmantation.

-Mike

gblock@csd4.csd.uwm.edu (Gregory R Block) (05/07/91)

From article <xn8Gty-&1@cs.psu.edu>, by melling@cs.psu.edu (Michael D Mellinger):
> The symbol table is included with the executable (I'm pretty sure of
> this).  The NeXT compiler is GCC.  Why would it generate more code on
> the NeXT than it would on the Amiga?  Do you really think that you
> have the best compilers in the business on the Amiga?  Get real!

Okay, one.  I did not know that the symbol table was included, or what
Dave said.  I agree with Dave on that.

Two.  There is such a thing as an OS, and I imagine supporting it
takes up a bit of space.  With DP to contend with, that may also say
something.  And I did not realize that both the programs you listed
were executables.  I thought just the second one was.


> Ok idiot.  I gave two examples.  One was only 1300 bytes, and it was
> the *same* program.  Get someone else to explain why the program is
> 16K.  I'll get you a hint.  Internal fragmantation.
> 

No, explain it fully; I think I may finally be interested in something
you are saying.  I want to hear your explanation, specific to the
NeXT.  If it's the reason Dave said, then I really don't think using
ObjC is worth it, and I wouldn't WANT to use it.  Standard on the NeXT
or not.  Of course, that has nothing to do with what I want to know,
but I thought I'd say what was on my mind...

Greg
-- 
All opinions are my own, and not those of my employer.
Why?  He doesn't know I'm doing this.
								-Wubba

greg@travis.cica.indiana.edu (Gregory TRAVIS) (05/07/91)

In <xn8Gty-&1@cs.psu.edu> melling@cs.psu.edu (Michael D Mellinger) writes:


>In article <11877@uwm.edu> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:


>   > -rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*

>   That does tell me quite a bit.  16k to simply say "Hello, World".  Oh,
>   boy.  That's advanced.  Even the A3000UX compiles it at something like
>   1100 bytes.  Maybe your programs could be a little slimmer.  Or maybe
>   it's just the NeXT problem in your list of many.

>Ok idiot.  I gave two examples.  One was only 1300 bytes, and it was
>the *same* program.  Get someone else to explain why the program is
>16K.  I'll get you a hint.  Internal fragmantation.

Just to add a data point.  Mach, by default, will try to separate
the different parts of an executable (text/bss/static) into individual
partitions.  By giving the loader the argument "-object", all the components
of a program will be lumped together into one (unsharable) segment.  That
is, all the components save the shared libraries.  A "size a.out" for
my version of "Hello, world" gives:

size a.out
_TEXT  __DATA  __OBJC  others  dec     hex
503     16      0       0       519     207

519 bytes isn't too bad for "Hello, world."  Ls'ing the executable is
much less reliable than "size"ing it due to file system padding and
other artifacts such as symbol tables, etc.
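
If you want to see where the ls number comes from, here is a rough sketch
in plain C (nothing NeXT-specific assumed, just stat()):  ls reports the
size of the whole file, symbol table and padding included, whereas size
adds up only the loadable segments.

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = (argc > 1) ? argv[1] : "a.out";

    if (stat(path, &st) != 0) {
        perror(path);
        return 1;
    }
    /* This is the number ls -l prints: bytes in the file on disk. */
    printf("%s: %ld bytes on disk\n", path, (long)st.st_size);
    return 0;
}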

greg
--
Gregory R. Travis                Indiana University, Bloomington IN 47405
greg@cica.indiana.edu  		 Center for Innovative Computer Applications
Not an official pinko of CICA or Indiana University

mpierce@ewu.UUCP (Mathew Pierce) (05/07/91)

In article <*05Gx0x&1@cs.psu.edu>, melling@cs.psu.edu (Michael D Mellinger) writes:
> In article <11866@uwm.edu> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:
> 
>    Write a simple "Hello, World" in C, and tell us the size.  That will
>    tell us quite a bit, I think.
> 
> int main()
> {
>    printf("hello world.\n");
> }
> compiled with cc -s -object test.c
> 
> -rwxr-xr-x  1 melling  wheel       1236 May  6 18:11 a.out*
> 
> or as compiled normally.
> 
> compiled with cc -s test.c
> 
> -rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*

Same program, using Manx Aztec C V3.6

producing only object file
cc -s test - 120
normal compilation
cc test    - 5944

Bear in mind that this is an OLD compiler.

Also, when I do cc -s blah.c, I get only object code; what does
cc -s -object blah.c do on your compiler?  Does that produce
only object code too?  The reason I ask is that it looks like
it does, but I see that your listing shows a.out* for that
compilation.

-Matt Pierce

gblock@csd4.csd.uwm.edu (Gregory R Block) (05/07/91)

From article <greg.673585288@travis>, by greg@travis.cica.indiana.edu (Gregory TRAVIS):
> 
> 519 bytes isn't too bad for "Hello, world."  Ls'ing the executable is
> much less reliable than "size"ing it due to file system padding and
> other artifacts such as symbol tables, etc.

Thanks for giving the explanation; now it makes sense...  I guess.
That's Mach, I guess.  :)  At least you had the courtesy to answer.

Greg

-- 
All opinions are my own, and not those of my employer.
Why?  He doesn't know I'm doing this.
								-Wubba

sutela@polaris.utu.fi (Kari Sutela) (05/07/91)

gblock@csd4.csd.uwm.edu (Gregory R Block) writes:

>Okay, I'll try again, because you're just not seeing what I see.  Any
>given processor can only run so many instructions per second, right?
>Right.  So if there are lots more instructions to run, it will be
>slower, right?  Right.

Let's take an example.  There are two cars: a red one and a blue one.
Both are driving at 60mph.  The red car is driving on a 1 mile track
while the blue car is on a 2 mile track.  Obviously, the red car finishes
in one minute, while it takes two minutes for the blue car to complete
the track.  So, you are seriously saying that the blue car is slower?

You are confuzed.  Pick a better argument.

-- 
Kari Sutela	sutela@polaris.utu.fi

swarren@convex.com (Steve Warren) (05/07/91)

In article <sutela.673606047@polaris> sutela@polaris.utu.fi (Kari Sutela) writes:
>Let's take an example.  There are two cars: a red one and a blue one.
>Both are driving at 60mph.  The red car is driving on a 1 mile track
>while the blue car is on a 2 mile track.  Obviously, the red car finishes
>in one minute, while it takes two minutes for the blue car to complete
>the track.  So, you are seriously saying that the blue car is slower?
>
>You are confuzed.  Pick a better argument.

Sorry, *you* are confused.

If both cars are going to the same place, but one takes the scenic route,
then the first car will be saying, "Man you are really slow."

It doesn't matter if they both drove 60 MPH.  All that matters is the
real work they get done, and how long it takes.

If I were hiring a delivery man, I would want the one who takes the direct
routes instead of driving all over town before going where I sent him.

            _.
--Steve   ._||__      DISCLAIMER: All opinions are my own.
  Warren   v\ *|     ----------------------------------------------
             V       {uunet,sun}!convex!swarren; swarren@convex.com
--

es1@cunixb.cc.columbia.edu (Ethan Solomita) (05/07/91)

In article <xn8Gty-&1@cs.psu.edu> melling@cs.psu.edu (Michael D Mellinger) writes:
>
>The symbol table is included with the executable (I'm pretty sure of
>this).  The NeXT compiler is GCC.  Why would it generate more code on
>the NeXT than it would on the Amiga?  Do you really think that you
>have the best compilers in the business on the Amiga?  Get real!
>
	Look, Mike, you combine some very valid points with the
most idiotic comments; it is hard to tell whether you are
just in this to win rather than to learn.
	WAKE UP MIKE! YOU CAN FTP GCC FROM AB20 FOR THE AMIGA! If
you feel that gcc is the ultimate in C compilers then go get it.
This is a religious issue so I won't tell you you're right/wrong.
If you want the best in C++ compilers, get Comeau C++.

	-- Ethan

"Brain! Brain! What is Brain?"

gblock@csd4.csd.uwm.edu (Gregory R Block) (05/08/91)

From article <sutela.673606047@polaris>, by sutela@polaris.utu.fi (Kari Sutela):
> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:
> 
> 
> Let's take an example.  There are two cars: a red one and a blue one.
> Both are driving at 60mph.  The red car is driving on a 1 mile track
> while the blue car is on a 2 mile track.  Obviously, the red car finishes
> in one minute, while it takes two minutes for the blue car to complete
> the track.  So, you are seriously saying that the blue car is slower?
> 
> You are confuzed.  Pick a better argument.
> 

I am confused by that.  Okay, I'll try again.

If a program (a track?) is 1/2 mile (512k) long, and another program
is 2 miles (2048k) long, and both cars (processors), both v-8 (68040),
are running at 120mph (25mhz) down the track, the one on the 1/2 mile
track will finish first...  Of course, I've completely forgotten the
point I was trying to make, but I'm sure this proved it.  :)

Now I'm curious.  Is it necessary to specifically compile a program on
the NeXT to use shared libraries?  I thought that was standard?  And
that's why the program was so big, instead of being 1000 or so bytes.
I can't imagine NOT wanting to, but isn't it bad that programs include
their own libraries and do NOT use shared libraries????

Greg

-- 
All opinions are my own, and not those of my employer.
Why?  He doesn't know I'm doing this.
								-Wubba

torrie@cs.stanford.edu (Evan Torrie) (05/08/91)

gblock@csd4.csd.uwm.edu (Gregory R Block) writes:

>From article <sutela.673606047@polaris>, by sutela@polaris.utu.fi (Kari Sutela):
>> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:
>> 
>> 
>> Let's take an example.  There are two cars: a red one and a blue one.
>> Both are driving at 60mph.  The red car is driving on a 1 mile track
>> while the blue car is on a 2 mile track.  Obviously, the red car finishes
>> in one minute, while it takes two minutes for the blue car to complete
>> the track.  So, you are seriously saying that the blue car is slower?
>> 
>> You are confuzed.  Pick a better argument.
>> 

>I am confused by that.  Okay, I'll try again.

>If a program (a track?) is 1/2 mile (512k) long, and another program
>is 2 miles (2048k) long, and both cars (processors), both v-8 (68040),
>are running at 120mph (25mhz) down the track, the one on the 1/2 mile
>track will finish first...  Of course, I've completely forgotten the
>point I was trying to make, but I'm sure this proved it.  :)

  Both of these analogies are wrong, since they assume that the
execution time of the program is directly proportional to its size.
This would be true if this were straight-line code, but programs tend
to have these funny things called "loops" and "branches" in them.
Execution time depends on the frequency with which these loops and
branches are executed.
  You cannot consider execution time based on a static analysis of the
code, you need to also consider the dynamic frequencies.  It's quite
possible (and indeed likely) that the 2MB of code actually has a core
inner loop of only 10-20K, which gets executed 90-95% of the time.
The other MB of code would be infrequently travelled, possibly only
for obscure features.

  Summary:  You can't make a prediction of dynamic execution time by
            looking at a static attribute like program size.
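
  A contrived sketch of the point in C (hypothetical numbers, nothing
measured on any real program or machine):  the rarely-used routine below
could be megabytes of such code without changing the run time noticeably,
because the little loop is where the cycles go.

#include <stdio.h>

/* Stand-in for the "other MB" of code: a large, rarely-travelled
   feature that costs disk and RAM but essentially no time. */
static void obscure_feature(void)
{
    puts("you will almost never get here");
}

int main(void)
{
    long i, sum = 0;

    /* The tiny hot inner loop: a few dozen bytes of code that
       account for essentially all of the execution time. */
    for (i = 0; i < 10000000L; i++)
        sum += i & 7;

    if (sum < 0)                /* never true; keeps the big code reachable */
        obscure_feature();

    printf("sum = %ld\n", sum);
    return 0;
}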



-- 
------------------------------------------------------------------------------
Evan Torrie.  Stanford University, Class of 199?       torrie@cs.stanford.edu  
"And in the death, as the last few corpses lay rotting in the slimy
 thoroughfare, the shutters lifted in inches, high on Poacher's Hill..."

dltaylor@cns.SanDiego.NCR.COM (Dan Taylor) (05/08/91)

In <*05Gx0x&1@cs.psu.edu> melling@cs.psu.edu (Michael D Mellinger) writes:

>No, they will run at the same speed, unless you get swapping but that
>will depend on how much memory you have.  Once the working set is in
>the computer, it doesn't swap until it needs a page that isn't in
>memory.

THAT is exactly the point.  While your program is still LOADING the
working set, the Amiga program is already running!  You don't get
your first 500K into memory, through the filesystem, any faster than
we do (if as fast).  So we get a second and a half, or more, of run-time
while the OTHER 1.5 meg loads on a NeXT.  Although, if the kernel still
has the BSD-ish features I suspect, all that happens when you start a
program is that the process table is fixed up; then EVERY page is paged in
as needed.  What does THAT do to your access time for pages 2,3,4,...?
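
If you want to put a rough number on that, here is a sketch (plain C with
BSD-style mmap() and gettimeofday(), not a measurement from either machine):
map a big file and touch one byte per page, so every touch of a page that
isn't resident yet is a page-in you get to wait for.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "a.out";
    long page = 8192;                 /* assume 8K pages, as on the NeXT */
    struct stat st;
    struct timeval t0, t1;
    volatile char sink = 0;
    char *p;
    long off, npages;
    int fd = open(path, O_RDONLY);

    if (fd < 0 || fstat(fd, &st) != 0)
        return 1;
    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    npages = (st.st_size + page - 1) / page;

    gettimeofday(&t0, NULL);
    for (off = 0; off < st.st_size; off += page)
        sink += p[off];               /* fault each page in */
    gettimeofday(&t1, NULL);

    printf("paged in %ld pages in %.3f seconds\n", npages,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
    return 0;
}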

Dan Taylor

melling@cs.psu.edu (Michael D Mellinger) (05/08/91)

In article <1569@ewu.UUCP> mpierce@ewu.UUCP (Mathew Pierce) writes:

   Same program, using Manx Aztec C V3.6

   producing only object file
   cc -s test - 120
   normal compilation
   cc test    - 5944

   Bear in mind that this is an OLD compiler.

It probably doesn't matter too much.  How much can you optimize "hello
world"?

   Also, when I do cc -s blah.c, I get only object code, what does
   cc -s -object blah.c  do on your compiler?  Does that produce 
   only object code too?  The reason I ask is that it looks like
   it does, but I see that your listing shows a.out* for that
   compilation.

-s strips the debugging info, and -object tells the compiler to use a
different object file format.  On the NeXT, as on most Unix
machines, the program is normally stored in two segments, a
data segment and a text segment (the executable code), each rounded up
to whole pages.  On the NeXT each page is 8K, so even small programs
will be 16K (8K text + 8K data).  When a segment becomes larger than 16K
it automatically grows to 24K (i.e. it grows by 8K each time).  This is
done to allow the program to be paged in (moved from disk to memory)
faster.
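
The arithmetic, as a minimal sketch (8K pages as above; the 503-byte text
and 16-byte data figures are the ones Greg Travis posted from size, not
anything re-measured here):

#include <stdio.h>

#define PAGE 8192L                    /* NeXT page size */

static long round_to_page(long n)     /* round up to the next 8K boundary */
{
    return ((n + PAGE - 1) / PAGE) * PAGE;
}

int main(void)
{
    long text = 503, data = 16;       /* segment contents reported by size */
    long padded = round_to_page(text) + round_to_page(data);

    printf("%ld text + %ld data -> %ld byte file\n", text, data, padded);
    printf("internal fragmentation: %ld bytes of padding\n",
           padded - (text + data));
    return 0;
}

Each segment rounds up to one 8K page, which is exactly the 16384 that
showed up in the ls listing.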

-Mike

jbickers@templar.actrix.gen.nz (John Bickers) (05/08/91)

Quoted from <xn8Gty-&1@cs.psu.edu> by melling@cs.psu.edu (Michael D Mellinger):
> The symbol table is include with the executable(I'm pretty sure of

    Gak. Surely this isn't the usual case?

> the *same* program.  Get someone else to explain why the program is
> 16K.  I'll get you a hint.  Internal fragmantation.

    What? Can you explain what this ridiculous "internal fragmantation
    [sic]" is?

> -Mike
--
*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
***         "Endless variations, make it all seem new" - Devo.          ***

norton@manta.NOSC.MIL (LT Scott A. Norton, USN=) (05/09/91)

Jon Bentley, in his book _Programming Pearls_, has a paragraph
that adds some perspective to the Big Code thread here.

  If you're like several people I know, your first thought on reading
  the title of this column is "How old-fashioned!"  In the bad old days
  of computing, so the story goes, programmers were constrained by small
  machines, but those days are long gone.  The new philosophy is
  "a megabyte here, a megabyte there, pretty soon you're talking 
  about real memory."  And there is truth in that view -- many
  programmers use big machines and rarely have to worry about squeezing
  space from their programs.

  But every now and then, thinking hard about compact programs can
  be profitable.  Sometimes the thought gives new insight that makes
  the program simpler.  Reducing space often has desirable side-effects on
  run time:  smaller programs are faster to load, and less data to
  manipulate usually means less time to manipulate it.  Even with
  cheap memories, space can be critical.  Many microprocessors have
  64-kilobyte address spaces, and sloppy use of virtual memory
  on a large machine can lead to disastrously slow thrashing.

In the context of this discussion, certainly it takes longer to
load 2MB of code than 512KB, but displaying animation frames
can be done quicker if they are stored uncompressed in memory.

I would dismiss the "size of Hello World" issue as irrelevant:
code size there is strongly dependent on the up-front cost of
operating system support.  My world-record short Hello World
program displays the message in 3-D rotating chrome letters,
and it's only 150 bytes long.  It does use the hello.world.library
though :^)  Before I get flamed on this statement, think
about the spectrum from OS ROM to loadable shared libraries to
linked libraries to your own code.  Also factor in the OS's
overhead for buffers, Process Control Blocks, stack space,
and other RAM costs.

I recently got a new appreciation for the word "slow" -- my
office computer at my new job is an original PC-XT, with a
genuine full height, 10MB hard drive.  It also gives
me a perspective on small programs.

----
LT Scott A. Norton, USN
JTIDS Ship Integration Officer

jbickers@templar.actrix.gen.nz (John Bickers) (05/09/91)

Quoted from <1991May8.013155.14300@neon.Stanford.EDU> by torrie@cs.stanford.edu (Evan Torrie):

>   Summary:  You can't make a prediction of dynamic execution time by
>             looking at a static attribute like program size.

    You _can_ estimate though, particularly if the programs have the
    same functionality, and if you've seen this heuristic succeed a
    few times already.

> Evan Torrie.  Stanford University, Class of 199?       torrie@cs.stanford.edu  
--
*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
***         "Endless variations, make it all seem new" - Devo.          ***

peter@sugar.hackercorp.com (Peter da Silva) (05/10/91)

In article <3320.tnews@templar.actrix.gen.nz> jbickers@templar.actrix.gen.nz (John Bickers) writes:
> Quoted from <xn8Gty-&1@cs.psu.edu> by melling@cs.psu.edu (Michael D Mellinger):
> > The symbol table is include with the executable(I'm pretty sure of

>     Gak. Surely this isn't the usual case?

Yes, but it doesn't get loaded into RAM. It's very useful, particularly on
UNIX where you can restart a crashed or killed program under the debugger.
-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

torrie@cs.stanford.edu (Evan Torrie) (05/10/91)

jbickers@templar.actrix.gen.nz (John Bickers) writes:

>Quoted from <1991May8.013155.14300@neon.Stanford.EDU> by torrie@cs.stanford.edu (Evan Torrie):

>>   Summary:  You can't make a prediction of dynamic execution time by
>>             looking at a static attribute like program size.

>    You _can_ estimate though, particularly if the programs have the
>    same functionality, and if you've seen this heuristic succeed a
     ^^^^^^^^^^^^^^^^^^

  This is the key point.  If they have the same functionality, yet one
has 4x the code of the other, you can reasonably predict that the more
massive code will run your jobs more slowly than the smaller code.
  However, there's yet to be any discussion as to whether the two
programs compared have the same functionality.  (And judging by Improv, it
would be hard to compare it with any spreadsheet on any other platform
in terms of functionality).






-- 
------------------------------------------------------------------------------
Evan Torrie.  Stanford University, Class of 199?       torrie@cs.stanford.edu   
Murphy's Law of Intelism:  Just when you thought Intel had done everything
possible to pervert the course of computer architecture, they bring out the 860

stevep@wrq.com (Steve Poole) (05/10/91)

In article <11877@uwm.edu> gblock@csd4.csd.uwm.edu writes:
>From article <*05Gx0x&1@cs.psu.edu>, by melling@cs.psu.edu (Michael D Mellinger):
>> No, they will run at the same speed, unless you get swapping but that
>> will depend on how much memory you have.  Once the working set is in
>> the computer, it doesn't swap until it needs a page that isn't in
>> memory.
>
>Okay, I'll try again, because you're just not seeing what I see.  Any
>given processor can only run so many instructions per second, right?
>Right.  So if there are lots more instructions to run, it will be
>slower, right?  Right.
>
>> -rwxr-xr-x  1 melling  wheel      16384 May  6 18:13 a.out*
>
>That does tell me quite a bit.  16k to simply say "Hello, World".  Oh,
>boy.  That's advanced.  Even the A3000UX compiles it at something like
>1100 bytes.  Maybe your programs could be a little slimmer.  Or maybe
>it's just the NeXT problem in your list of many.
>

Lord.  There are all kinds of reasons for a.out to be 16K.  It hardly
means that the machine is going to execute 16K of code.  Your 500K vs
2M argument is fundamentally flawed.  Once loaded, larger programs
may well be faster.  Many size/speed optimizations are at odds, you
know.  You're blowing smoke in general, and this is an absurd point to
rave about.
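
One ordinary example of that tension, as a sketch (generic C, not a
benchmark from either machine):  unroll a loop and you get more code but
fewer compares and branches.

#include <stdio.h>

#define N 1000000L

/* Small and simple: one compare and branch per element. */
static long sum_plain(const char *a)
{
    long i, s = 0;
    for (i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* Unrolled 4x: about four times the code for this loop, but only one
   compare and branch per four elements.  (Assumes N is a multiple of 4,
   to keep the sketch short.) */
static long sum_unrolled(const char *a)
{
    long i, s = 0;
    for (i = 0; i < N; i += 4)
        s += a[i] + a[i+1] + a[i+2] + a[i+3];
    return s;
}

int main(void)
{
    static char a[N];                 /* zero-filled; contents don't matter */
    printf("%ld %ld\n", sum_plain(a), sum_unrolled(a));
    return 0;
}

Whether the unrolled version actually wins depends on the compiler and the
machine, but it is certainly the bigger of the two.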
-- 
--------------------------------------------------------------------------
-- INTEL 80x86: Just say NOP -- Internet: stevep@wrq.com -- AOL: Spoole -- 
--------------------------------------------------------------------------

jbickers@templar.actrix.gen.nz (John Bickers) (05/10/91)

Quoted from <1991May9.172113.2468@sugar.hackercorp.com> by peter@sugar.hackercorp.com (Peter da Silva):
> In article <3320.tnews@templar.actrix.gen.nz> jbickers@templar.actrix.gen.nz (John Bickers) writes:
> > Quoted from <xn8Gty-&1@cs.psu.edu> by melling@cs.psu.edu (Michael D Mellinger):
> > > The symbol table is include with the executable(I'm pretty sure of
> 
> >     Gak. Surely this isn't the usual case?
> 
> Yes, but it doesn't get loaded into RAM. It's very useful, particularly on
> UNIX where you can restart a crashed or killed program under the debugger.

    The first thing I trim when I PowerPack a program on the Ami, if the
    programmer has been careless enough to leave this stuff in a
    distributed version of their software.

    This would certainly put off the folks who care about small things
    like excess LINK and UNLK instructions wasting HD space...

> Peter da Silva.   `-_-'
--
*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
***         "Endless variations, make it all seem new" - Devo.          ***

jbickers@templar.actrix.gen.nz (John Bickers) (05/10/91)

Quoted from <PETERM.91May10112658@kea.am.dsir.govt.nz> by peterm@am.dsir.govt.nz (Peter McGavin):
> jbickers@templar.actrix.gen.nz (John Bickers) writes:

> >    You _can_ estimate though, particularly if the programs have the
> >    same functionality, and if you've seen this heuristic succeed a

> Fast CPU-based line-drawing algorithms are often huge compared with standard
> ones.  The inner loop is replicated for every colour combination.

    Certainly. I've been doing this myself recently, for various purposes.
    That's why I added the thing about seeing this heuristic succeed a few
    times. For a number of applications you can probably tell from the
    context that the thing is larger in order to be faster, but I don't
    think the NeXT software is giving this impression at all. And it
    helps if one has seen the various sides of the argument so that one
    can recognise the context in the first place.

> Peter McGavin.   (peterm@am.dsir.govt.nz or srwmpnm@wnv.dsir.govt.nz)
--
*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
***         "Endless variations, make it all seem new" - Devo.          ***

peterm@am.dsir.govt.nz (Peter McGavin) (05/10/91)

jbickers@templar.actrix.gen.nz (John Bickers) writes:

>Quoted from <1991May8.013155.14300@neon.Stanford.EDU> by torrie@cs.stanford.edu (Evan Torrie):

>>   Summary:  You can't make a prediction of dynamic execution time by
>>             looking at a static attribute like program size.
>
>    You _can_ estimate though, particularly if the programs have the
>    same functionality, and if you've seen this heuristic succeed a

I expanded some subroutines in my Z80 emulator to macros.  It made the program
nearly twice as big and about 10% faster.

Fast CPU-based line-drawing algorithms are often huge compared with standard
ones.  The inner loop is replicated for every colour combination.
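
In C the trade-off looks something like this (a generic sketch, not code
from my actual emulator):  the macro body is duplicated at every use, so
the program grows, but each use avoids a call and return.

#include <stdio.h>

static unsigned char mem[65536];      /* pretend Z80 address space */

/* Subroutine version: one copy of the code, a call at every use. */
static unsigned char fetch_sub(unsigned short pc)
{
    return mem[pc];
}

/* Macro version: expanded in place at every use -- bigger, but no call. */
#define FETCH_MAC(pc) (mem[(unsigned short)(pc)])

int main(void)
{
    unsigned short pc = 0x100;
    mem[pc] = 0x3e;                   /* pretend opcode */

    printf("sub: %02x  mac: %02x\n",
           (unsigned)fetch_sub(pc), (unsigned)FETCH_MAC(pc));
    return 0;
}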

Peter McGavin.   (peterm@am.dsir.govt.nz or srwmpnm@wnv.dsir.govt.nz)

dtiberio@eeserv1.ic.sunysb.edu (David Tiberio) (05/11/91)

In article <*05Gx0x&1@cs.psu.edu> melling@cs.psu.edu (Michael D Mellinger) writes:
>In article <11866@uwm.edu> gblock@csd4.csd.uwm.edu (Gregory R Block) writes:
>
>   okay, maybe this will be a little easier to understand.  A 500k
>   program on your NeXT will be faster than a 2mb program on it, no?  So
>   a 500k program on a 68040 3000 will be faster than a 2mb program on
>   your nExt.
>
>No, they will run at the same speed, unless you get swapping but that
>will depend on how much memory you have.  Once the working set is in
>the computer, it doesn't swap until it needs a page that isn't in
>memory.
>

   Don't weasel out of this one. Take any source code. Then take any of the
six or so Amiga C compilers. Aztec C tends to have tighter code than Lattice,
and the same source will compile into two different programs that execute
at different speeds. The Aztec program will be smaller, and run faster
(at least in our programs, although I am sure there are other cases). Then
try compiling with DICE, PDC, or any other compiler. Here you will also see
a noticeable difference in size and speed of the same exact source.
-- 
           David Tiberio  SUNY Stony Brook 2-3481  AMIGA  DDD-MEN   
   "If you think that we're here for the money, we could live without it.
     But the world isn't too good here, and it wasn't always like that."
                   Un ragazzo di Casalbordino, Italia.

melling@cs.psu.edu (Michael D Mellinger) (05/11/91)

In article <1991May10.212453.25464@sbcs.sunysb.edu> dtiberio@eeserv1.ic.sunysb.edu (David Tiberio) writes:

      Don't weasel out of this one. Take any source code. Then take any of the
   six or so Amiga C compilers. Aztec C tends to have tighter code than Lattice,
   and the same source will compile into two different programs that execute
   at different speeds. The Aztec program will be smaller, and run faster
   (at least in our programs, although I am sure there are other cases). Then
   try compiling with DICE, PDC, or any other compiler. Here you will also see
   a noticeable difference in size and speed of the same exact source.

How good is the code produced by GCC on the Amiga?  That is the
compiler that NeXT uses.

-Mike

mykes@amiga0.SF-Bay.ORG (Mike Schwartz) (05/12/91)

In article <3505.tnews@templar.actrix.gen.nz> jbickers@templar.actrix.gen.nz (John Bickers) writes:
>Quoted from <1991May9.172113.2468@sugar.hackercorp.com> by peter@sugar.hackercorp.com (Peter da Silva):
>> In article <3320.tnews@templar.actrix.gen.nz> jbickers@templar.actrix.gen.nz (John Bickers) writes:
>> > Quoted from <xn8Gty-&1@cs.psu.edu> by melling@cs.psu.edu (Michael D Mellinger):
>> > > The symbol table is include with the executable(I'm pretty sure of
>> 
>> >     Gak. Surely this isn't the usual case?
>> 
>> Yes, but it doesn't get loaded into RAM. It's very useful, particularly on
>> UNIX where you can restart a crashed or killed program under the debugger.
>
>    The first thing I trim when I PowerPack a program on the Ami, if the
>    programmer has been careless enough to leave this stuff in a
>    distributed version of their software.
>

You don't need to link with symbols when you release software. PowerPacker
also does some real good compression besides stripping out the symbols.

Umm, using the Manx debugger, you can restart a crashed or killed program
when a software error occurs.  When you get the "software error" requestor,
just run DB and use the AP (post mortem) command and you are debugging the
crashed task.  It just happens that 95% of the software errors are stupid
things like divide by zero, illegal instruction, etc., which aren't too hard
to recover from (or at least abort cleanly).  In this case, the symbols are
just as valuable on the Amiga.

>    This would certainly put off the folks who care about small things
>    like excess LINK and UNLK instructions wasting HD space...
>

I CARE (my hard disk is 99% full, so I need all the space I can get :)  Besides,
it doesn't make sense to waste resources if it can be avoided.

>> Peter da Silva.   `-_-'
>--
>*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
>***         "Endless variations, make it all seem new" - Devo.          ***

--
****************************************************
* I want games that look like Shadow of the Beast  *
* but play like Leisure Suit Larry.                *
****************************************************

peter@sugar.hackercorp.com (Peter da Silva) (05/13/91)

In article <1970@manta.NOSC.MIL> norton@manta.NOSC.MIL (LT Scott A. Norton, USN=) writes:
> I recently got a new appreciation for the word "slow" -- my
> office computer at my new job is an original PC-XT, with a
> genuine full height, 10MB hard drive.  It also gives
> me a perspective on small programs.

See if you can get IBM Xenix 1.0 for it. It's faster than MS-DOS.

-- 
Peter da Silva.   `-_-'
<peter@sugar.hackercorp.com>.

jbickers@templar.actrix.gen.nz (John Bickers) (05/13/91)

Quoted from <mykes.2532@amiga0.SF-Bay.ORG> by mykes@amiga0.SF-Bay.ORG (Mike Schwartz):
> In article <3505.tnews@templar.actrix.gen.nz> jbickers@templar.actrix.gen.nz (John Bickers) writes:

> You don't need to link with symbols when you release software. PowerPacker
> also does some real good compression besides stripping out the symbols.

    Exactly. Even if it didn't crunch executables, though, the symbol-
    stripping is handy.

> >    This would certainly put off the folks who care about small things
> >    like excess LINK and UNLK instructions wasting HD space...

> I CARE (my hard disk is 99% full, so I need all the space I can get :)  Besides,

    I believe I was thinking of you when I wrote that... :) Or someone
    in the Assembler/C discussion a while back...

> * I want games that look like Shadow of the Beast  *
--
*** John Bickers, TAP, NZAmigaUG.        jbickers@templar.actrix.gen.nz ***
***         "Endless variations, make it all seem new" - Devo.          ***

dtiberio@eeserv1.ic.sunysb.edu (David Tiberio) (05/17/91)

In article <&u5H??_@cs.psu.edu> melling@cs.psu.edu (Michael D Mellinger) writes:
>
>In article <1991May10.212453.25464@sbcs.sunysb.edu> dtiberio@eeserv1.ic.sunysb.edu (David Tiberio) writes:
>
>      Don't weasel out of this one. Take any source code. Then take any of the
>   six or so Amiga C compilers. Aztec C tends to have tighter code than Lattice,
>   and the same source will compile into two different programs that execute
>   at different speeds. The Aztec program will be smaller, and run faster
>   (at least in our programs, although I am sure there are other cases). Then
>   try compiling with DICE, PDC, or any other compiler. Here you will also see
>   a noticeable difference in size and speed of the same exact source.
>
>How good is the code produced by GCC on the Amiga?  That is the
>compiler that nExt uses.
>
>-mIKE

  WEASEL! WEASEL! Try to stick to one variable! Same machine, same language,
same CPU! Isn't it clear that different compilers make different code from
the EXACT same source code? That was the original question.

  I am proving that it is possible to have an inefficient compiler; I have
other compilers, including two Pascal compilers, a BASIC compiler, etc., and
NONE of them make executables as big as 1500k! I doubt that GCC would either,
because it wouldn't fit on a floppy for distribution!

  GCC on the nExt is extremely inefficient. Maybe some good programmer will come
along and change that, but it hasn't happened on the Mac yet...



-- 
begin 644 dh3:uploads/killchip
M```#\P`````````"``````````$````_`````0```^D````_,_Q```#?\)HLH
M>0````1![@%"(%!*D&<``'8B:``*(`EK\`R10VAI<&;H#*@`"````!AEWB0\4
M`!```)2H`!@B/``(``"2@B)H`!`@"="I``1*D6<.#(``"```9```,B)18.B3F
MJ0`$DZ@`''`(2$`M0``^D((A0``8<@!![@`B<!?26%'(__Q&03"!?@!@!'X*D
M=`!X"$A$*@0D1-J%FH(@1'0@T<)"D"8%EH1T():"(4,`!#5\"G\`""5\`/P#I
M,@`*-7P`!0`.)40`%"5%`!@E2``0)@66A'0@EH(E0P`<(DI![@%"3J[_$#/\)
JP```W_":(`=P`$YU```````````#[`````````/R```#ZP````$```/R+
``
end
size 312

melling@cs.psu.edu (Michael D Mellinger) (05/17/91)

In article <1991May16.170000.7354@sbcs.sunysb.edu> dtiberio@eeserv1.ic.sunysb.edu (David Tiberio) writes:


     GCC on the nExt is extremely inefficient. Maybe some good programmer will come
   along and change that, but it hasn't happened on the Mac yet...


Actually, I've heard GCC generates good 68000 code because it has been
refined for the Sun 3, which has been around for a few years.  Maybe
there are special optimizations that can be made for the 68040.

-Mike