[net.lang] What language do you use for scientific programming?

paul@oddjob.UUCP (Paul Schinder) (08/13/85)

I've been curious for a while what scientist/engineering types on the net
use for scientific programming.  I know the low regard in which fortran is
held by the systems types on the net, but I haven't found anything better
than fortran to use.  I know C as well as most of you systems programmers,
but I very rarely use it in my scientific programs, mostly only if I need
pointers.  I am learning Pascal and Modula-2, but they have the major
weaknesses of no double precision data type (a question to compiler writers:
why, when you have only one real type, do you choose the least precise rather
than the most precise that the machine can handle?), clumsy i/o, and no
exponentiation operator.  C shares the last two weaknesses.  I know a little
Forth and will be learning more, but I can't see keeping track of a stack
during a major calculation.  The advantages of fortran in my opinion are 1.
at least two real precisions, 2. standard and powerful i/o routines, and 3.
very wide availability with great portability (because of the existence of a
standard for the language).  Is there any other language which shares these
properties but also has some of the constructs I would like to use (while,
do ... while, case, structures, pointers)?  Perhaps the answer is fortran
itself; what new features does the upcoming revision to the fortran standard
have?

Reply via e-mail; if there is a large enough response, I'll summarize in a
few weeks. Thanks.
-- 


				Paul Schinder
				Astronomy and Astrophysics Center
				University of Chicago
				uucp: ..!ihnp4!oddjob!paul
				arpa: oddjob!paul@lbl-csam.arpa

wcs@ho95e.UUCP (x0705) (08/15/85)

> 
> I've been curious for a while what scientist/engineering types on the net
> use for scientific programming.  I know the low regard in which fortran is
> held by the systems types on the net, but I haven't found anything better
> than fortran to use.
> [C has] clumsy i/o, and no exponentiation operator.
> The advantages of fortran in my opinion are 1. at least two real precisions,
> 2. standard and powerful i/o routines, and 3. very wide availability with
> great portability (because of the existence of a standard for the language).
> Is there any other language which shares these
> properties but also has some of the constructs I would like to use (while,
> do ... while, case, structures, pointers).  Perhaps the answer is fortran
> itself; what new features does the upcoming revision to the fortran standard
> have?
>	Paul Schinder uucp: ..!ihnp4!oddjob!paul arpa: oddjob!paul@lbl-csam.arpa

Well, at least use RATFOR (a preprocessor) or full-scale Fortran-77 instead
of generic fortran, so you can have control structures.  I find the biggest
weaknesses fortran has for scientific programming are:
	- no recursion - makes everything tough, especially multiple integration
	- no dynamically dimensioned arrays ( though C is kind of clumsy also)
	- clumsy input, though this is less important for scientific prog.
On the other hand, complex arithmetic in C is really annoying.  However,
the C++ language lets you define objects such as complex numbers, along
with exponentiation and better output routines.
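
A rough sketch of what Bill is pointing at, in later C++ (the names and
the stream library are modern and invented for illustration; the idea of
a user-defined complex type with overloaded operators is the 1985 one):

```cpp
#include <iostream>

// A user-defined complex type of the sort C++ classes allow and plain C
// of the time could not express: '+', '*', and output are all overloaded.
struct Complex {
    double re, im;
};

Complex operator+(Complex a, Complex b) { return {a.re + b.re, a.im + b.im}; }

Complex operator*(Complex a, Complex b) {
    return {a.re * b.re - a.im * b.im, a.re * b.im + a.im * b.re};
}

// The "better output routines": a print format chosen by the operand's type.
std::ostream& operator<<(std::ostream& os, Complex c) {
    return os << '(' << c.re << ", " << c.im << ')';
}
```

With these definitions, `std::cout << a * b` just works, which is the
convenience Fortran's built-in COMPLEX gives you and C withholds.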
-- 
## Bill Stewart, AT&T Bell Labs, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs

levy@ttrdc.UUCP (Daniel R. Levy) (08/15/85)

In article <163@ho95e.UUCP>, wcs@ho95e.UUCP (x0705) writes:
> ...
>I find the biggest
>weaknesses fortran has for scientific programming are:
>	- no recursion - makes everything tough, especially multiple integration

I was under the impression that Fortran-77 allows recursion in the sense that
a routine may call itself either directly or through a chain of other routines-
am I mistaken?

>	- no dynamically dimensioned arrays ( though C is kind of clumsy also)

True in the general case--some operating systems (like VMS) provide extensions
which allow dynamic memory allocation (is this what is being referred to?).

>	- clumsy input, though this is less important for scientific prog.

Amen, brother.  Can't take input as a stream of bytes, for instance.

> ...
>--
>## Bill Stewart, AT&T Bell Labs, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs
-- 
 -------------------------------    Disclaimer:  The views contained herein are
|       dan levy | yvel nad      |  my own and are not at all those of my em-
|         an engihacker @        |  ployer, my pets, my plants, my boss, or the
| at&t computer systems division |  s.a. of any computer upon which I may hack.
|        skokie, illinois        |
|          "go for it"           |  Path: ..!ihnp4!ttrdc!levy
 --------------------------------     or: ..!ihnp4!iheds!ttbcad!levy

herbie@watdcsu.UUCP (Herb Chong - DCS) (08/16/85)

In article <367@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>In article <163@ho95e.UUCP>, wcs@ho95e.UUCP (x0705) writes:
>>I find the biggest
>>weaknesses fortran has for scientific programming are:
>>	- no recursion - makes everything tough, especially multiple integration
>
>I was under the impression that Fortran-77 allows recursion in the sense that
>a routine may call itself either directly or through a chain of other routines-
>am I mistaken?

the fortran 77 standard does not disallow recursion.  whether the
particular implementation allows it is up to the implementor.  almost
all, if not all, fortran 77 compilers for IBM systems do not allow it,
and code written assuming recursion fails with mysterious infinite
loops.

>>	- no dynamically dimensioned arrays ( though C is kind of clumsy also)
>
>True in the general case--some operating systems (like VMS) provide extensions
>which allow dynamic memory allocation (is this what is being referred to?).

runtime dimensioning of subroutine parameters is allowed, which is fine
provided that you are using almost all of the allocated storage all the time.
a general way of dynamically allocating storage is messy to fit into
the fortran 77 language without making major changes in its design
philosophy.  you need an implicit pointer type or an explicit one, both
of which make the language implemented nonstandard.  there are ways of
getting around this, but no implementation i have seen really is
satisfactory.

>>	- clumsy input, though this is less important for scientific prog.
>
>Amen, brother.  Can't take input as a stream of bytes, for instance.

the whole fortran I/O design is based upon the record concept.  a
stream of bytes has no place in such a structure.  either the semantics
of a read or write need to be changed to accommodate a byte stream, or
the PL/I approach of separate statements in the language for stream and
record I/O must be adopted.  a third possibility is to have both stream-
and record-oriented I/O routines implemented on one underlying set of
I/O routines, as in the C runtime library.  any of these ways requires
an extension to the fortran 77 language standard or a nonstandard
compiler.
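
The third possibility is how C-family runtimes actually work; a small
sketch in modern C++ (names invented) of one underlying byte stream
serving both views:

```cpp
#include <fstream>
#include <string>

// One underlying byte stream, two views of it: a record-oriented read
// (a line at a time, the way Fortran thinks of I/O) and a stream-
// oriented read (a byte at a time, the way C's stdio permits).
std::string first_record(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    std::getline(in, line);             // "record": everything up to the newline
    return line;
}

char first_byte(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return static_cast<char>(in.get()); // one raw byte, no record structure
}
```

Both routines open the same file through the same library; only the
interpretation of the bytes differs.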

Herb Chong...

I'm user-friendly -- I don't byte, I nybble....

UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!water!watdcsu!herbie
CSNET: herbie%watdcsu@waterloo.csnet
ARPA:  herbie%watdcsu%waterloo.csnet@csnet-relay.arpa
NETNORTH, BITNET, EARN: herbie@watdcs, herbie@watdcsu

vollum@rtp47.UUCP (Rob Vollum) (08/16/85)

In article <909@oddjob.UUCP> paul@oddjob.UUCP (Paul Schinder) writes:
>
>I've been curious for a while what scientist/engineering types on the net
>use for scientific programming.  
>         <some editing>
>The advantages of fortran in my opinion are 1.
>at least two real precisions, 2. standard and powerful i/o routines, and 3.
>very wide availability with great portability (because of the existence of a
>standard for the language).  Is there any other language which shares these
>properties but also has some of the constructs I would like to use (while,
>do ... while, case, structures, pointers).  
>
>Reply via e-mail; if there is a large enough response, I'll summarize in a
>few weeks. Thanks.
>-- 
>				Paul Schinder

(I did reply, at length, via e-mail, but I had to post here as well. I'll
keep it short.)

I vote for Common Lisp. Beyond the excellent programming environment it
provides, it gives a rich set of numerical operations as well. It has:
infinite precision integers, complex numbers, many flavors of floating
point numbers, and for those who don't like roundoff error, true
rational numbers.
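
Common Lisp provides rationals natively; purely to illustrate why exact
rationals sidestep roundoff (and not how any Lisp implements them), a
minimal C++ sketch with an invented type name:

```cpp
#include <numeric>   // std::gcd

// A minimal exact rational, assuming a positive denominator: kept in
// lowest terms, so 1/10 + 2/10 is exactly 3/10, with none of the binary
// roundoff that makes 0.1 + 0.2 differ from 0.3 in floating point.
struct Rat {
    long n, d;
    Rat(long num, long den) : n(num), d(den) {
        long g = std::gcd(n < 0 ? -n : n, d);
        n /= g;
        d /= g;
    }
};

Rat operator+(Rat a, Rat b) { return Rat(a.n * b.d + b.n * a.d, a.d * b.d); }
bool operator==(Rat a, Rat b) { return a.n == b.n && a.d == b.d; }
```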

As for control structure, it provides iteration, conditionals, lexical 
closures of functions (i.e. functions as first class citizens), and 
recursion, to name some important ones. It also uses lexical scoping for
semantic cleanliness, and for when you need(?) global variables, it provides
"special variables", which are essentially globals where you can stack
values by binding, or assign new global values by straight assignment.

Common Lisp supports a robust structure facility for data abstraction,
and a macro-defining facility for language extension.

For I/O, the FORMAT statement and use of the abstract notion of I/O streams
provide a more robust I/O engine than most people could ever fully use
(a good point?).

For a standard language, well, it's trying. One of the ideas of Common Lisp
is to unify the Lisp community. We'll see if a 'real' standard ever happens.

Concerning efficiency, there is no reason, given improved compilers that
handle type inference and type propagation, for example, that
compiled Common Lisp can't be as fast as compiled anything else in most
cases; certainly in most 'simple' numerical applications that would
be handled by a Fortran program (simple here means 32-bit integer or
single- or double-precision arithmetic).

(Did I keep that short?)

---


-- 
Rob Vollum
Data General Corp.
Research Triangle Park, NC
<the world>!mcnc!rti-sel!rtp47!vollum

wcs@ho95e.UUCP (x0705) (08/19/85)

> dan levy ..!ihnp4!ttrdc!levy
> In article <163@ho95e.UUCP>, wcs@ho95e.UUCP (x0705) writes:
> > ...
> >I find the biggest
> >weaknesses fortran has for scientific programming are:
> >	- no recursion - makes everything tough, especially multiple integration
> I was under the impression that Fortran-77 allows recursion in the sense that
> a routine may call itself either ...... am I mistaken?

The f77 compilers provided with AT&T UNIX systems and Berkeley 4.* all allow
recursion; they're based on the C compiler and making them NON-recursive would
have taken a lot of work, as well as depriving them of a valuable feature.
However, the Fortran 77 Standard doesn't allow recursion, and I try to avoid
non-standard language extensions whenever possible - most of my fortran code
has been written in Fortran IV, though I do break down and use if-then-else's
just to keep sane.

The reason is that fortran code might get used on all kinds
of brain-damaged environments, many of which don't even support full fortran-4,
much less fortran77 with UNIX extensions.  When I do use fortran, it's mainly
because I'm working with existing, "working" code, and I want to reuse it, and
add to it in ways that can be returned to its original environment and used.
("Working" often means "this unmaintainable piece of spaghetti still runs on my
IBM 370";  C programmers have a similar disease called "doesn't *everyone* use
VAXes?")
	If I wanted to write new code that only ran on UNIX systems, you can
bet I'd rather use C than f77.
> 
> >	- no dynamically dimensioned arrays ( though C is kind of clumsy also)
> True in the general case--some operating systems (like VMS) provide extensions
> which allow dynamic memory allocation (is this what is being referred to?).

What I was referring to was the PL/I feature that lets you write code like:
	subroutine hackarray( a, b, n );	/* Please excuse the syntax; */
	declare n integer;			/*    it's been a long time. */
	declare a(n,n), b(n,n) float binary(32);
	...
Most fortran compilers at least have some syntax for saying
	subroutine HAKARR( a, b, n )
	integer n
c
c	a and b are 1-D real arrays, size n depending on the calling routine
c
	real*4  a(1), b(1)

Most modern languages at least let you pass an array to a routine, with the
array size determined at routine-execution time rather than compile-time;
it's not always easy to create ANOTHER array within the subroutine.
Unfortunately PASCAL doesn't even let you do that; if you need to hack
arrays of size N and also size M, you need to write separate subroutines to do
it.
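
For comparison, in a later language where the size travels with the array
itself, the PL/I-style adjustable parameter looks like this (a C++ sketch
with an invented routine; the point is the part Bill notes, that the
routine can even create *another* array of runtime size):

```cpp
#include <cstddef>
#include <vector>

// The PL/I "a(n,n)" idea: the routine learns the size at run time from
// the argument, and can create another array of that size locally,
// which is the part standard Fortran makes hard.
std::vector<double> row_sums(const std::vector<std::vector<double>>& a) {
    std::vector<double> sums(a.size(), 0.0);   // new array, sized at run time
    for (std::size_t i = 0; i < a.size(); ++i)
        for (double x : a[i])
            sums[i] += x;
    return sums;
}
```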
-- 
## Bill Stewart, AT&T Bell Labs, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs

gore@hpisla.UUCP (Jacob Gore) (08/19/85)

     
># Written  9:23 pm  Aug 12, 1985 by paul@oddjob.UUCP in net.lang
>
>I've been curious for a while what scientist/engineering types on the net
>use for scientific programming.  I know the low regard in which fortran is
>held by the systems types on the net, but I haven't found anything better
>than fortran to use.

I'm a "software engineering type" full-time, and "systems type" part time.
Still, you might find my reply useful.  I'm posting this instead of mailing
it to you, as you asked, because it might draw some corrections from folks
on the net.  THIS IS RATHER LONG, but I think you and people who share your
questions will find this interesting.

Sorry to throw another language at you (I see somebody's already proposed a
LISP), but if I were you, I would try using Ada.  Let me try to address
your problems a few at a time:

>I am learning Pascal and Modula-2, but they have the major
>weaknesses of no double precision data type (...), clumsy i/o, and no
>exponentiation operator.  C shares the last two weaknesses.

Ada is quite similar to Pascal and Modula-2.  It does have double-precision
real types.  In fact, you can pretty much specify arbitrary precision, as
long as you keep in mind that your program will run faster if there is a
hardware data type in your machine that can support the precision that you
asked for (otherwise, you could be executing lots of simulation code).

Ada follows the philosophy that anything that can be safely put into a
library should not be a burden to the compiler.  Following this philosophy,
I/O is not "built into the language" (which usually means that I/O routines
are in the library anyway, but the compiler has to parse calls to those
routines differently from calls to user-written routines).  DoD (who set
the requirements for the language) does require provision of library
routines that do minimal I/O, in a manner similar to that of Modula-2
but slightly more convenient.

If you are used to formatted I/O, you probably won't like them.  However,
routines for formatted I/O might be available from whoever sells you the
compiler, and if they aren't, you can write them yourself (though most
"scientific/engineering types" probably won't want to), and they will fit
right in with the supplied libraries.

There is a built-in exponentiation operator ('**'), but it is only defined
for the <number> ** <integer> operation (i.e., the power must be integer).
Again, a more general '**' operator may be provided in the libraries, and
if it's not, you can write your own.  You can even call it '**' and use it
exactly as you would use the original operator -- Ada figures out from the
types of the operands which operator to use.

>The advantages of fortran in my opinion are 1.
>at least two real precisions, 2. standard and powerful i/o routines, and 3.
>very wide availability with great portability (because of the existence of a
>standard for the language).  Is there any other language which shares these
>properties but also has some of the constructs I would like to use (while,
>do ... while, case, structures, pointers).

Ada is portable.  DoD will not let anyone call a compiler "an Ada compiler"
unless it follows the standard to the letter.  There are some provisions
for writing machine-dependent code, but you usually have to go out of your
way to use them.  These are usually of concern to us "system types."
Availability is another matter -- there are not too many compilers around,
certainly not as many as there are FORTRAN compilers.  Still, there are
quite a few, especially for larger machines (VAX and above, and I think
even some 68000's).

Ada has the programming constructs you want.  It has pointers.  It has
structures (with pitfalls of Pascal structures eliminated).  It has some
convenient features that neither Pascal nor FORTRAN (nor Modula-2, for that
matter) offer, and I'll talk about them after we're done with your
questions.

>Perhaps the answer is fortran
>itself; what new features does the upcoming revision to the fortran standard
>have?

Yes, FORTRAN-77 has a lot of what you want (perhaps everything), but
other people have already posted replies about it, so I won't repeat them.
I would still use Ada instead of FORTRAN-77.

Did you ask about complex numbers?  Most of my "scientific type" friends
want complex numbers.  Well, the story here is the same as with I/O and
exponentiation.  Complex numbers are handled by a library package which
defines representation of the numbers, as well as the '+', '-' and whatever
other operators may be applicable.  In fact, it may define a '+' to add a
complex to a complex, a complex to a real, and a real to a complex.  They
all have the same name ("+"), but Ada will figure out which one to call
from the types of the operands (during compilation).  Note that the
representation of complex numbers is quite irrelevant with this
arrangement:  if you decide that your program will run faster if they
are stored as (angle, magnitude) instead of (real, imaginary), you simply
change the inside of the library package, and the program that uses it will
not be affected.

Now for features that you probably haven't thought of since they are not in
FORTRAN.  Some of these are especially useful if you are writing
mathematical (or any other type) libraries for use by other people
(customers).

As you probably began to suspect from my replies above, it is very easy to
write self-contained, complete, and easy-to-use libraries.  The features
that help this are:

  (1) Automatic initialization of local data.  I've seen people go through
      a lot of pain to initialize data that routines in a library need
      for more than just the duration of one routine call.  The tricks
      usually involve checking if a routine is called for the first time,
      and if so, the data is initialized.  Ada does this at compile-time
      (load-time, technically).  By the time the library is loaded into
      memory, its static data is initialized.

  (2) Error signalling.  In FORTRAN, when a customer calls your routine
      with bad input parameters (or at the wrong time, etc.), you have two
      options:

	(a) Let the program die.

	(b) Detect that an error is about to occur and set some global flag
	    or a special ERROR parameter.

      I think method (a) stinks, mostly because the user gets a message
      like "Floating-point exception in <a line in your routine that the
      customer knows nothing about>", instead of something like "The second
      coefficient cannot be zero, see <a line in the customer's program>".

      However, it is used quite often because the second method is
      inconvenient to use:  after each call to your library routine, the
      customer has to check if the flag has been set.  Ada provides a third
      method, which is much more convenient:

	(c) The library routine signals an error; the user has a section in
	    his/her program that traps selected errors and handles them
	    accordingly.  If the user does not provide such an error trap,
	    it is possible to work your own trap in.  If neither you nor
	    the user provide a trap for this error, it is trapped by the
	    system and the program stops.  The result of the last situation
	    is similar to the result of method (a), but you still can signal
	    a more descriptive message than something like "Division by zero",
	    which is what you would get if the system raised the error instead
	    of your own routine.

  (3) Meaningful routine names.  FORTRAN limits all names to 6
      characters;  I don't know if FORTRAN-77 still has that
      restriction.  I do not recall reading anything about name length
      restrictions in the Ada standard, but if there is one, it is
      *very* generous.

      In addition to that advantage, Ada allows similar routines that
      take parameters of different types to be called by the same name.  This
      technique is called "overloading the name", and there is nothing
      negative about it, despite the way it sounds.  This extends to
      the ability to overload existing operators, such as '+' and '**'.
      (Unlike Algol-68, there is no way to create new operators, only
      overload the existing ones.  Overloaded operators have the same
      precedence within arithmetic expressions as their counterparts
      for integers.)

  (4) Ability to pass parameters by name (instead of just their
      relative positions within the call) or not to pass them at all.
      Many statistical library routines have a dozen or two arguments,
      most of which the user wants to default to their usual values.  In
      most languages, the best way to do this is to pass some agreed-upon
      value, usually 0, which will be replaced with the default value by
      the routine.  So the call might look like

	CALL FUNFNC (1985, 34, 0,0,0,0,0, -23, 0,0,0,0, 1, 0,0,0,0,0,0)
      
      In Ada, it would look like this:

	FUNNY_FUNCTION (YEAR => 1985, DISTRICT => 34,
			PENALTY => -23, COEFFICIENT => 1);

      The order of the parameters, when passed by name, does not
      matter.
      
      If you were a customer, which statement would you prefer to use?
      Well, if you do want to pass values for most of the parameters,
      you might prefer memorizing their positions to spelling out the
      name for each of them.  Ada lets you mix the two, passing the
      leading parameters by position and the rest by name:

	FUNNY_FUNCTION (1985, 34, PENALTY => -23, COEFFICIENT => 1);

      And there is no confusion over which parameter gets the default
      value, and which actually gets the value of zero!

  (5) Ability to pass multi-dimensional arrays, where any dimension can be
      of any size.  The bottom and top index of each dimension get passed
      automatically.  Thus, you can easily write a routine that, for
      example, multiplies two matrices, without knowing in advance the size
      of either matrix.

  (6) Source code savings through generic coding.  You can write a routine
      that operates on data of some generic type (or types), and then
      create, in a simple declaration, instances of that routine that
      operate on specific types of data.  For example, you could have a
      generic library called MATRIX_MATH that works on matrices where each
      element is of type ELEM_TYPE.  Then you can create an instance of
      this library that works on real numbers by declaring (I'm not sure if
      I remember the exact syntax)

	package REAL_MATRIX is new MATRIX_MATH(REAL);

      and an instance of the same library that works on matrices of complex
      numbers by declaring

	package COMPLEX_MATRIX is new MATRIX_MATH(COMPLEX);
      
      Meanwhile, inside the source code for your generic library, all
      matrix elements are declared to be of type ELEM_TYPE, which will be
      replaced with REAL during compilation of the first instance and with
      COMPLEX during the compilation of the second instance.  Also during
      the compilation, all references to operators and functions that take
      elements of type ELEM_TYPE will be replaced with calls to the
      appropriate routines (remember overloading?).
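
The generic-library idea in (6) has a close C++ parallel in templates;
a sketch with invented names, where a single source text is instantiated
per element type and '+' is resolved by overloading, much as Jacob
describes for Ada (vectors stand in for matrices to keep it short):

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Analogue of a generic MATRIX_MATH over ELEM_TYPE: one source text,
// one instantiation per concrete element type.
template <typename Elem>
std::vector<Elem> add(const std::vector<Elem>& a, const std::vector<Elem>& b) {
    std::vector<Elem> r(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        r[i] = a[i] + b[i];   // '+' chosen per Elem at compile time
    return r;
}
```

`add<double>` and `add<std::complex<double>>` correspond to the
REAL_MATRIX and COMPLEX_MATRIX instances above.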

Other advantages of Ada include:

  (1) Ability to write modular, yet efficient code.  If readability
      concerns warrant putting a code segment into a separate procedure,
      but the procedure is to be called from within a frequent loop, the
      compiler can be told to expand the procedure inline.  This way, you
      do not lose time on procedure calls during execution, and you don't
      have to sacrifice readability for efficiency.

  (2) Ability to write parallel programs.  Many numerical solutions can utilize
      parallel programming.  This is especially true for solutions that use
      matrices.

Again, I apologize for the length of this posting.  I welcome all arguments
against what I said (or neglected to say).  All flames concerning Ada, though,
divert to /dev/null, or its equivalent on your system.
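
The error-signalling method (c) described above also has a familiar
modern cousin: C++ exception handling.  A hypothetical sketch (routine
and message invented) of a library raising a descriptive error that the
caller may trap:

```cpp
#include <stdexcept>
#include <string>

// The library routine signals a descriptive error rather than setting
// a flag or dying with a cryptic floating-point exception.
double second_coefficient_inverse(double c2) {
    if (c2 == 0.0)
        throw std::invalid_argument("the second coefficient cannot be zero");
    return 1.0 / c2;
}

// The caller's trap: handle the error, or let it propagate and stop
// the program with the message intact.
std::string try_call(double c2) {
    try {
        second_coefficient_inverse(c2);
        return "ok";
    } catch (const std::invalid_argument& e) {
        return e.what();   // the descriptive text reaches the customer
    }
}
```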

Jacob Gore
Hewlett-Packard Instrument Systems Lab (until 9/13/85)
{ihnp4 or hplabs}!hpfcla!hpisla!gore

Northwestern Univ. Comp. Sci. Research Lab (afterwards)
2145 Sheridan Rd
Evanston, IL  60201

levy@ttrdc.UUCP (Daniel R. Levy) (08/19/85)

In article <1612@watdcsu.UUCP>, herbie@watdcsu.UUCP (Herb Chong - DCS) writes:
>In article <367@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>>In article <163@ho95e.UUCP>, wcs@ho95e.UUCP (x0705) writes:
>>>I find the biggest
>>>weaknesses fortran has for scientific programming are:
>>>	- no recursion - makes everything tough, especially multiple integration
>>
>>I was under the impression that Fortran-77 allows recursion in the sense that
>>a routine may call itself either directly or through a chain of other routines-
>>am I mistaken?
>
>the fortran 77 standard does not disallow recursion.  whether the
>particular implementation allows it is up to the implementor.  almost
>all, if not all, fortran 77 compilers for IBM systems do not allow it
>and fail for mysterious infinite loops when written assuming
>recursion.

I plead guilty to my ignorance in saying that.  Someone else also flamed
me for this.  I presume disallowing recursion adds to number-crunching
efficiency, since there need be no provision for pushing the old variables
onto the stack when an already-active procedure is called again, and
restoring them when the call returns.  Unix f77 and (I believe) VMS
Fortran do allow recursion.

>>>	- clumsy input, though this is less important for scientific prog.
>>
>>Amen, brother.  Can't take input as a stream of bytes, for instance.
>
>the whole fortran I/O design is based upon the record concept.  a
>stream of bytes has no place in such a structure.  the semantics of a
>read or write need to be changed to accommodate a byte stream, or by
>...
>requires an extension to the fortran 77 language standard or
>implementing a nonstandard compiler.
>

Actually, Unix' f77 allows byte input to be done on any file which can be
seeked upon--you do something like

      character abyte
      integer n
      open(unit=1,file='foo.bar',access='direct',form='unformatted',
     +     recl=1)
      n=[whatever]
      read [write] (unit=1,rec=n,err=nnn)abyte

but this will bomb on a system like VMS when the type of file you are trying
to input from (output to) was created with a different structure.  It is also
impossible to do this on terminal I/O, or on standard input/output.  It would
be nice if you could mix different types of access with different file struc-
tures and have the I/O routines deal with this in a sane manner.  Then you
could have your bytes and eat them too :-).
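
The recl=1 direct-access trick above is, in effect, a seek plus a
one-byte read; a sketch of the same operation in C++ (invented routine
name, byte index zero-based here where the Fortran record number is
one-based), which likewise works only on seekable files:

```cpp
#include <fstream>
#include <string>

// Analogue of direct-access recl=1 I/O: position to byte n and read it.
char byte_at(const std::string& path, long n) {
    std::ifstream in(path, std::ios::binary);
    in.seekg(n);                          // the "rec=n" of the Fortran version
    return static_cast<char>(in.get());   // one byte, no record structure
}
```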

>Herb Chong...
-- 
 -------------------------------    Disclaimer:  The views contained herein are
|       dan levy | yvel nad      |  my own and are not at all those of my em-
|         an engihacker @        |  ployer, my pets, my plants, my boss, or the
| at&t computer systems division |  s.a. of any computer upon which I may hack.
|        skokie, illinois        |
|          "go for it"           |  Path: ..!ihnp4!ttrdc!levy
 --------------------------------     or: ..!ihnp4!iheds!ttbcad!levy

herbie@watdcsu.UUCP (Herb Chong - DCS) (08/20/85)

In article <371@ttrdc.UUCP> levy@ttrdc.UUCP (Daniel R. Levy) writes:
>Actually, Unix' f77 allows byte input to be done on any file which can be
>seeked upon--you do something like
>
>      character abyte
>      integer n
>      open(unit=1,file='foo.bar',access='direct',form='unformatted',
>     +     recl=1)
>      n=[whatever]
>      read [write] (unit=1,rec=n,err=nnn)abyte
>
>but this will bomb on a system like VMS when the type of file you are trying
>to input from (output to) was created with a different structure.  It is also
>impossible to do this on terminal I/O, or on standard input/output.  It would
>be nice if you could mix different types of access with different file struc-
>tures and have the I/O routines deal with this in a sane manner.  Then you
>could have your bytes and eat them too :-).

this is only workable, as you point out, on a system where you can open
files with arbitrary attributes like unix.  on systems where the
underlying filesystem is record oriented, doing this requires a lot of
runtime library support.  many fortran 77 implementations would
disallow the open statement changing the record length at execution
time.  if portability is not an issue, then there should be no problem,
but then why not use C to begin with, if the mathematical library is
adequate?  i avoid fortran like the plague, despite its being the first
programming language i learned, and will use PL/I for the application
rather than put up with the limited control structures and I/O
facilities of fortran.  even pascal is fine if you don't need separate
compilation and the math library is adequate.

Herb Chong...

I'm user-friendly -- I don't byte, I nybble....

UUCP:  {decvax|utzoo|ihnp4|allegra|clyde}!watmath!water!watdcsu!herbie
CSNET: herbie%watdcsu@waterloo.csnet
ARPA:  herbie%watdcsu%waterloo.csnet@csnet-relay.arpa
NETNORTH, BITNET, EARN: herbie@watdcs, herbie@watdcsu

vollum@rtp47.UUCP (Rob Vollum) (08/21/85)

>> 
>> I've been curious for a while what scientist/engineering types on the net
>> use for scientific programming.  
>
>Well, at least use RATFOR ( a preprocessor ) or full-scale Fortran-77 instead
>of generic fortran, so you can have control structures.  I find the biggest
>weaknesses fortran has for scientific programming are:
>	- no recursion - makes everything tough, especially multiple integration
>	- no dynamically dimensioned arrays ( though C is kind of clumsy also)
>	- clumsy input, though this is less important for scientific prog.
>On the other hand, complex arithmetic in C is really annoying.  However,
>the C++ language lets you define objects like complex numbers, exponentiation,
>lets you define better output routines, etc.
>-- 
>## Bill Stewart, AT&T Bell Labs, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs
>

Let me start by saying that I don't do any "scientific programming". But I 
still don't know why no one but me (in a previous reply posting) has even
mentioned Lisp (in particular Common Lisp, Maclisp, Zetalisp) as an option!
Every objection that has been raised is handled naturally in Lisp.  I'm not
advocating rewriting existing code in Lisp, but for prototyping or new
development, why not?  Compilers for Lisp are getting good enough that
applications can run just as efficiently in Lisp as in C or Fortran,
with the added benefit of a robust development and debugging environment.
Also, Lisp allows natural extension into arbitrary precision integer
calculation and rational arithmetic. One example of a huge application
written in Lisp is MACSYMA, which allows engineers to do SYMBOLIC (i.e.
without annoying roundoff errors, etc) differentiation, integration,
matrix manipulation, factorizations, expansions, etc.

I guess that I have a follow-up question. Is Lisp totally unknown in the 
scientific community?


-- 
Rob Vollum
Data General Corp.
Research Triangle Park, NC
<the world>!mcnc!rti-sel!rtp47!vollum

mj@myriasb.UUCP (Michal Jaegermann) (08/21/85)

About recursion in f77 under 4.2.  We tried some small experiments and
it turned out that the following recursive calls:

	 integer function foo(....)

	    integer x, y

*         all necessary stopping conditions

	     x = foo (left )
	     y = foo (right)
	     foo = x + y
	 end

and
	 integer function foo(....)

	    integer x, y

*         all necessary stopping conditions

	     foo =  foo (left ) + foo (right)
	 end

give DIFFERENT answers. I am not 100% sure if I like that kind of
recursion.
 
Michal Jaegermann - Myrias Research Corporation
& all usual disclaimers .........             &
....ihnp4!alberta!myrias!mj

eugene@ames.UUCP (Eugene Miya) (08/21/85)

Some time ago, a person posted a small FORTRAN program, and I blew up
at a comment along the lines of "you can see why I used FORTRAN."

I tend to like C.  I try to use C on the Cray, VAX, PDP, and other machines
I use.  I am forced to use FORTRAN on some Cray applications because it
is the only language with multiprocessor support currently.  I hear
the Cray Pascal is decent, but have not used it.  C is far from
perfect [apologies to dutoit!dmr], but I like it for getting the job done.
There are features in many other languages I would not mind having:
operator overloading would be nice for complex numbers or quaternions in C,
some protection and structuring features of CLU, better interfaces to
things like graphics, monitoring, and debugging [some of this bordering
closer to hardware], but these are all syntactic or semantic sugar on top of C.

The problem is that lots of problems are not language related, or a
different smaller specialized language would do the job better.
Otherwise, if you look at it, there is very little difference whether you
program in Pascal, FORTRAN, C, ... and stretching it: even LISP and PROLOG.
You can include specialized packages in this category, too: SPSS,
BMDP, and so forth.  Just more of the same.  The problems come with
moving data around between these different systems: hence, good programming
environments.  It just so happens that FORTRAN is not enough of such
an environment to do this moving.

There is no one perfect language for `scientific programming.'  You might
argue that English or German is the perfect scientific language to discuss
this in.

--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb

warack@aero.ARPA (Chris Warack) (08/22/85)

	4]  Theoretically, CLU can handle any precision specified.  This
is implementation dependent.  In any case, if precision is a problem,
it's not too difficult to add your own type -- to the precision you need.

Other disadvantages:
	1]  Its VAX implementation is a resource hog.  Each CLU program
has a copy of the garbage collector and other 'system' functions linked
in, so executables are large.  Also, the CLU compiler and linker suck up
a good amount of the CPU.

The language is not too difficult to learn, especially if someone is
familiar with type-checked and/or object-oriented languages.

The CLU Reference Manual is Lecture Notes in Computer Science #114.
Edited by G. Goos and J. Hartmanis.  Published by Springer-Verlag.  CLU
was developed by Barbara Liskov, Toby Bloom, Robert Scheifler, J.C.
Schaffert, Lab of Comp Sci, MIT; Russell Atkinson, Xerox Research
Center; Alan Snyder, Hewlett Packard; Eliot Moss.  [taken from CLU
Reference Manual].

Chris
-- 
 _______
|/-----\|    Chris Warack			(213) 648-6617
||hello||
||     ||    warack@aerospace.ARPA
|-------|    warack@aero.UUCP
|@  ___ |       seismo!harvard!talcott!panda!genrad!decvax!ittatc!dcdwest!
|_______|         sdcsvax!sdcrdcf!trwrb!trwrba!aero!warack
  || ||  \   Aerospace Corporation, M1-117, El Segundo, CA  90245
 ^^^ ^^^  `---------(|=

chris@umcp-cs.UUCP (Chris Torek) (08/23/85)

>About recursion in f77 under 4.2.  We tried some small experiments and
>it turned out that the following recursive calls:
>	 integer function foo(....)
>	    integer x, y
>*         all necessary stopping conditions
>	     x = foo (left )
>	     y = foo (right)
>	     foo = x + y
>	 end

It just might help to declare your variables automatic, so they
won't get stomped on by the recursive calls....

For example, the following program prints "2", as it should:

	program main
	integer foo
	integer i
	i = foo(5)
	print *, i
	stop
	end

C I'm not sure what this computes, but the answer is different if
C the variables are made static....
	integer function foo(param)
	integer param
	automatic x, y, left, right
	left = param / 2
	right = param - (left * 2)
	x = left
	y = right
	if (left .gt. 1) x = foo(left)
	if (right .gt. 1) y = foo(right)
	foo = x + y
	return
	end

Change the declarations of x, y, left, and right to "integer" and
it prints -1.
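The effect Chris describes, recursive calls stomping on statically allocated locals, can be reproduced in any language. Below is a Python analogy of his program (the dict plays the role of f77's static storage; the particular wrong answer will not match f77's -1, but the static version does give a different result from the automatic one):

```python
# Automatic locals: each recursive call gets its own x, y, left, right.
def foo_auto(param):
    left = param // 2
    right = param - left * 2
    x, y = left, right
    if left > 1:
        x = foo_auto(left)
    if right > 1:
        y = foo_auto(right)
    return x + y

# "Static" locals: one shared set of variables, as in pre-F77 Fortran.
# The recursive call overwrites them, destroying the caller's state.
s = {}
def foo_static(param):
    s["left"] = param // 2
    s["right"] = param - s["left"] * 2
    s["x"], s["y"] = s["left"], s["right"]
    if s["left"] > 1:
        s["x"] = foo_static(s["left"])
    if s["right"] > 1:
        s["y"] = foo_static(s["right"])
    return s["x"] + s["y"]
```

foo_auto(5) returns 2, like Chris's automatic version; foo_static(5) returns 1, because the inner call clobbered `right` before the second test.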
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

mangoe@umcp-cs.UUCP (Charley Wingate) (08/23/85)

In article <152@rtp47.UUCP> vollum@rtp47.UUCP (Rob Vollum) writes:

>>> I've been curious for a while what scientist/engineering types on the net
>>> use for scientific programming.  

>Let me start by saying that I don't do any "scientific programming". But I 
>still don't know why no one but me (in a previous reply posting) has even
>mentioned Lisp (in particular Common Lisp, Maclisp, Zetalisp) as an option!
>Every objection that has been raised is handled naturally in Lisp. I'm not 
>advocating rewriting existing code in Lisp, but for prototyping or new
>development, why not? *** Compilers for Lisp are getting good enough so that 
>applications can run just as efficiently in Lisp as C or Fortran, ***
>with the added benefit of a robust development and debugging environment.
>Also, Lisp allows natural extension into arbitrary precision integer
>calculation and rational arithmetic. One example of a huge application
>written in Lisp is MACSYMA, which allows engineers to do SYMBOLIC (i.e.
>without annoying roundoff errors, etc) differentiation, integration,
>matrix manipulation, factorizations, expansions, etc.

Having worked as a programmer for engineers for a number of years, let me
make a few comments.  First of all, these people generally are not taught
Lisp in any flavor.  (Engineers indeed are generally not given the
opportunity.)  Almost all of them are taught Fortran as a matter of course
(frequently by engineering professors who've never bothered to learn anything
past Fortran IV).  The one thing that Fortran excels at is huge numbers of
iterative calculations, especially with the high level of optimization that
is almost always available.  Lisp, insofar as it is able to match such speed,
gives up Lisp features in order to be more like Fortran.

MACSYMA is actually a good case in point.  MACSYMA was developed as an
artificial intelligence project.  The kind of symbolic manipulation it
performs is distinctly unlike most scientific programming.

I'm afraid Fortran is here to stay.  But I agree that it is only reasonable
to ask people to at least try to use 77 (and presumably 8X, when it comes
out) rather than stay stuck in 1966.

Charley Wingate

mff@wuphys.UUCP (Swamp Thing) (08/23/85)

In article <147@rtp47.UUCP> vollum@rtp47.UUCP (Rob Vollum) writes:
>Concerning efficiency, there is no reason, given improved compilers that
>handle type-inferencing and type-propagation, for example, that
>compiled Common Lisp can't be as fast as compiled anything else in most
>cases; certainly in most 'simple' numerical applications that would
>be handled by a Fortan program (simple here means 32-bit integer or
>single- or double-precision arithmetic).
>-- 
>Rob Vollum
>Data General Corp.
>Research Triangle Park, NC
><the world>!mcnc!rti-sel!rtp47!vollum

Most people are concerned with what's available now, not what will be available
sometime, maybe.

As for the original question, most scientists I know crunch their numbers in
Fortran (myself included, although I've done some other programming in C).  Most
people who have switched to C now consider Fortran to be slime.  And most
people who refuse to program in C have never done so.  (This all applies to
people I know, not in general.)  F66 certainly had its problems, most notably
a serious lack of control structures.  F77 is much better.

As a side note, while we've all heard that the Cray-2 will be running Unix, the
one currently running at Livermore is not (it has the same operating system as
the Cray-1's there), and all the number crunching will still be done in Fortran.
There will be a C compiler for support of the Unix system.  I suspect that it
will be a long time before someone comes up with a C compiler that vectorizes
and optimizes as well as the Fortran compilers that they use.

						Mark F. Flynn
						Department of Physics
						Washington University
						St. Louis, MO  63130
						ihnp4!wuphys!mff

"There is no dark side of the moon, really.
 Matter of fact, it's all dark."

				P. Floyd

root@bu-cs.UUCP (Barry Shein) (08/24/85)

(Re: Recursion in F77)

>From: levy@ttrdc.UUCP (Daniel R. Levy)
>Subject: Re: What language do you use for scientific programming?
>I presume this adds to the number-crunching efficiency when
>recursion is disallowed since there would need be no provision for stuffing the
>old variables on the stack and reinitializing them upon calling an already
>called procedure, and restoring them when the call returns.  Unix f77 and
>(I believe) VMS Fortran allow for this.

This idea that recursion is inefficient for speed is a common and by
and large completely wrong one, given any reasonable machine architecture.
Look at the code generated by a UNIX C compiler and how it addresses
automatic variables: they are just memory references relative to the
stack.  When they are register variables, it is true, they must be stored
on *any* call if the registers might be used in the called routine(s),
recursive or not.  All stack-relative addressing requires for efficiency
is some sort of base+displacement architecture.  The IBM 360/370 series
has no hardware stack, but again this does not in any way detract from
recursive coding; a perusal of ASM code from a 370 C compiler is instructive.

One protest against recursion was the inability to predict the potential
depth of the stack; a simple compile-time analysis cannot determine this
in general, as it usually depends on termination conditions rather than
on who calls whom.

Back when cars had fins (i.e. the days of Fortran), memory was generally
allocated on a system at job initiation time (e.g. MVT), and it was important
to be able to predict the maximum memory your job needed. Things like
malloc() [via sbrk()] would have been only in the domain of true system
hackers, not application programmers. Recursion defied these schemes in
the general case; about all you could do was allocate for the worst case.

An important feature of UNIX from its V6 days (on) was its silent expansion
of the user's stack region, typically growing down towards the data area
(though not necessarily.) This was handled by recognizing that an illegal
memory reference by the user was due to a stack relative reference beyond
the currently allocated stack. Rather than raising the typical signal
(Segmentation Violation or whatever) it just allocated more stack and re-
scheduled the job. This of course did cause slight interruptions in the
running of a program, though generally the chunks allocated were big enough
that a program rarely faulted. Other systems (not IBM) at least had a
provision for requesting some amount of stack space separate from other
requests.

At any rate, the point is, the protest is not against speed, but rather
memory predictability (given enough memory to start this becomes less
important, remember, when Fortran was important time-sharing systems
had 64K of memory, big ones maybe 256K, that was for everyone!)

So, the resistance to recursion by Fortran is historical. At this point
many Fortran coders consider a reference to a function by the same function
an error, so they may see allowing recursion as removing a desired error
message. I also suspect it is just some backwards compatibility hold-out.

Oh, another important point: in most pre-F77 Fortrans variables were all
*static*, and when a routine was re-entered it was not unusual for a program
to assume the values of variables were as they were left the last time
the routine was exited. Fortran had no way of specifying a static versus
an automatic variable (except, perhaps, by abuse of common). Recursion requires
this distinction for any real usage. If all variables were suddenly made
automatic, I am sure there is a fear that all the promised forwards
compatibility from F66->F77 would be lost as many programs would mysteriously
fail (e.g. many, many user-written rand() routines).

	-Barry Shein, Boston University

joe@petsd.UUCP (Joe Orost) (08/26/85)

As an Ada implementer, I just want to correct some technical issues in the
following article:

In article <64500002@hpislb.UUCP> gore@hpisla.UUCP (Jacob Gore) writes:
>
>Ada is quite similar to Pascal and Modula-2.  It does have double-precision
>real types.  In fact, you can pretty much specify arbitrary precision, as
>long as you keep in mind that your program will run faster if there is a
>hardware data type in your machine that can support the precision that you
>asked for (otherwise, you could be executing lots of simulation code).

No simulation code is ever required.  Ada allows operations to be performed
with more precision than the user requested.  Therefore, all operations are
done with the next better floating point type in the machine.  The
"predefined types" of the implementation should correspond with supported
types on the machine.

In addition, fixed point types are defined in the language to allow
representation of fractions on machines that have no floating point
hardware.

>
>Ada follows the philosophy that anything that can be safely put into a
>library should not be a burden to the compiler.  Following this philosophy,
>I/O is not "built into the language" (which usually means that I/O routines
>are in the library anyway, but the compiler has to parse calls to those
>routines differently from calls to user-written routines).  DoD (who set
>the requirements for the language) does require provision of library
>routines that do minimal I/O, in the manner similar to that of Modula-2,
>but slightly more convenient.

No, the compiler doesn't parse I/O calls differently than non-I/O calls.

>
>  (4) Ability to pass parameters by name (instead of just their
>      relative positions within the call) or not to pass them at all.
>      Many statistical library routines have a dozen or two of
>      arguments, most of which the user wants to default to their usual
>      values.  In most languages, the best way to do this is to pass
>      some agreed-upon value, usually 0, which will be replaced with
>      the default value by the routine.  So the call might look like
>
>	CALL FUNFNC (1985, 34, 0,0,0,0,0, -23, 0,0,0,0, 1, 0,0,0,0,0,0)
>      
>      In Ada, it would like this:
>
>	FUNNY_FUNCTION (YEAR:=1985, DISTRICT:=34,
>			PENALTY:=-23, COEFFICIENT:=1);
>
>      The order of the parameters, when passed by name, does not
>      matter.

The correct syntax is:
	FUNNY_FUNCTION (YEAR => 1985,   DISTRICT => 34,
			PENALTY => -23, COEFFICIENT => 1);

>      
>      If you were a customer, which statement would you prefer to use?
>      Well, if you do want to pass values for most of the parameters,
>      you might prefer memorizing their positions to spelling out the
>      name for each of them.  But even that is easier to do in Ada:
>
>	FUNNY_FUNCTION (1985, 34, ,,,,, -23, ,,,, 1, ,,,,,);
>
>      And there is no confusion over which parameter gets the default
>      value, and which actually gets the value of zero!

Ada does not allow this!
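The named-parameter convenience being debated here is easy to demonstrate in any language that has both keyword arguments and default values. A Python sketch (`funny_function` and its parameters are invented for the example, echoing the one quoted above):

```python
# Parameters with defaults: the caller names only what it wants to
# override, and there is no confusion between "use the default" and
# "pass zero", which is exactly the problem with positional calls.
def funny_function(year, district, penalty=0, coefficient=1, offset=0):
    return (year + district) * coefficient + penalty + offset

# Named parameters may appear in any order; omitted ones get defaults.
r = funny_function(year=1985, district=34, penalty=-23, coefficient=1)
```

Note that `funny_function(district=34, year=1985, penalty=-23)` produces the same result: with named parameters, order does not matter.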


All in all, I agree with you in selecting Ada as your language of choice.
Unlike C, however, an efficient Ada program requires a damn-good optimizing
compiler.  This is good for programmers, because they can concentrate on the
algorithm, rather than on an efficient implementation.

				regards,
				joe

--

 ........        .........	Full-Name:  Joseph M. Orost
 .       .       .		UUCP:       ihnp4!vax135!petsd!joe
 . ......   ...  ........	ARPA:	    vax135!petsd!joe@BERKELEY
 .               .		Phone:      (201) 758-7284
 .               .........	Location:   40 19'49" N / 74 04'37" W
				US Mail:    MS 313; Perkin-Elmer; 106 Apple St
					    Tinton Falls, NJ 07724

ken@boring.UUCP (08/26/85)

In article <152@rtp47.UUCP> vollum@rtp47.UUCP (Rob Vollum) writes:
>One example of a huge application
>written in Lisp is MACSYMA

Huge is right. It soaks up an entire VAX. I use it now and then tho.

>I guess that I have a follow-up question. Is Lisp totally unknown in the 
>scientific community?

Not wishing to start a language debate, but I can think of a few
reasons why Fortran is preferred: (1) there are many scientific
subroutine packages available; (2) a sufficiently restricted subset of
Fortran is quite portable whereas Common Lisp is not widely available;
and what seems to me the most formidable barrier: (3) getting engineers
to think "functionally".

Don't send me flames, I have no desire to defend either language.

	Ken
-- 
UUCP: ..!{seismo,okstate,garfield,decvax,philabs}!mcvax!ken Voice: Ken!
Mail: Centrum voor Wiskunde en Informatica, Kruislaan 413, 1098 SJ, Amsterdam.

dik@zuring.UUCP (08/27/85)

In article <1350@umcp-cs.UUCP> chris@umcp-cs.UUCP (Chris Torek) writes:
>>About recursion in f77 under 4.2.  We tried some small experiments and
>...
>It just might help to declare your variables automatic, so they
>won't get stomped on by the recursive calls....
>...
>	automatic x, y, left, right
Eh? Fortran 77?
-- 
dik t. winter, cwi, amsterdam, nederland
UUCP: {seismo|decvax|philabs}!mcvax!dik

chris@umcp-cs.UUCP (Chris Torek) (08/27/85)

>In article <238@zuring.UUCP> dik@zuring.UUCP (Dik T. Winter) writes:
>In article <1350@umcp-cs.UUCP> chris@umcp-cs.UUCP (Chris Torek) writes:
>>>About recursion in f77 under 4.2.  We tried some small experiments and
>>...
>>It just might help to declare your variables automatic, so they
>>won't get stomped on by the recursive calls....
>>...
>>	automatic x, y, left, right
>Eh? Fortran 77?

Note the original text: "about recursion *in f77 under 4.2*".

(If you're going to be nonportable, at least you should do it cleanly :-) )
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

macrakis@harvard.ARPA (Stavros Macrakis) (08/27/85)

> >[Ada] does have double-precision real.... you can ... specify
> >arbitrary precision, [but if there's no appropriate] hardware data
> >type,... you could be executing... simulation code).
> 
> No simulation code is ever required.  Ada allows operations to be
> performed with more precision than the user requested.  Therefore,
> all operations are done with the next better floating point type in
> the machine.  The "predefined types" of the implementation should
> correspond with supported types on the machine.

These two notes appear to be talking to two different issues: run-time
calculations in floating types, and calculation of `universal numbers'
at compile time.

Ada allows <specification> of the precision of floating types.  An
implementation chooses which precisions it supports, and must give an
error if a program specifies a precision it does not support.
Implementations are required to support at least one floating
precision.  Precisions are specified by the number of digits rather than
as `single' or `double' precision, increasing portability.

As for compile-time constants (`universal numbers'): "The accuracy of
the evaluation of [such numbers] is at least as good as that of the
most accurate predefined floating point type...." [Ada RM 4.10/4]
Earlier versions of Ada did apparently require arbitrary-precision
arithmetic at compile time (but never at run time).

Of course, if you want run-time multiple precision, you are free to
implement it in a package and use overloading to call its operations
`+', `*', etc.  This holds true for vector-matrix, complex, and other
types as well.

> All in all, I agree with you in selecting Ada as your language of
> choice.  Unlike C, however, an efficient Ada program requires a
> damn-good optimizing compiler.	-- Joe Orost

I also agree.  Choose Ada for numeric calculations.  This year's crop
of compilers finally seems to be fulfilling Ada's promise.

	-s

franka@mmintl.UUCP (Frank Adams) (08/31/85)

This is in response to an article recommending Ada for scientific
programming.  The author stated that because Ada is rigidly standardized,
there is no portability problem.  Unfortunately, this only addresses half
of the problem.  The other half is having a compiler exist on the machine
you want to use.  Today, this is a real problem with Ada.  As time goes
on, it should become less of one.

There is one other problem that I know of with Ada, which is that many
of the compilers which do exist produce quite inefficient code.  Again,
this should improve with time.

mark@apple.UUCP (Mark Lentczner) (09/04/85)

[]
I know this may sound bizarre, but why not Smalltalk for scientific
programming?  Its class hierarchy makes numerical stuff a breeze when
it comes to working with different types of numbers.  It comes with
IEEE floating point, infinite-precision integers, fractions, efficient
small integers, and complete independence from having to know the types
of the numbers you are working with (i.e. in an array, zeros are automatically
represented by the efficient (in space and time) small integer zero, while
other values can be of the appropriate type as needed: float, large int, etc.)

Furthermore, Smalltalk provides a very interactive and fast development
environment that is perfect for doing modeling and for using programming
as a tool (vs. a production system).  Lastly, Smalltalk is inherently
graphical so that display of results is easy and natural.

Of course the only real limitation on Smalltalk as a scientific compute
engine is its speed.  Most implementations of Smalltalk tend to be slow.
On the Macintosh implementation a (not optimised but OK) complex FFT of 64
points takes 2950 milliseconds to run, for 256 points it takes 14187 millisec.
Obviously for some applications this is too slow, but a) this is on a $2000
workstation (not bad in terms of price/performance, eh?) b) there are some
implementations from Tektronix that run circles around this (for still far less
than lisp machine prices) and c) as an environment to develop and play with
scientific algorithms, I think it is hard to beat in its flexibility and
price/performance (yes, lisp machines are great, but how much did you say?)
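For reference, the benchmark Mark quotes is a complex FFT. A bare-bones recursive Cooley-Tukey transform (power-of-two lengths only) fits in a dozen lines; this Python version is purely an illustration of the workload being timed, not of any particular Smalltalk implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor e^(-2*pi*i*k/n) combines the two half-size results.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

A constant input of length 4 transforms to [4, 0, 0, 0], and a unit impulse transforms to all ones, the usual sanity checks.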

Well, THERE'S some fuel for the fire....


--Mark Lentczner
  Smalltalk Group
  Apple Computer, Inc.

  UUCP:  {nsc, dual, voder, ios}!apple!mark
  CSNET: mark@Apple.CSNET

  "All opinions are mine and/or from beings from outer space..."

jack@boring.UUCP (09/06/85)

Well, since everybody has defended their favorite language for
scientific programming already, I might as well throw in my .001 cents
worth.

I do all my scientific programming in assembler.
It has numerous advantages:
- Executes fast
- Possible to use every data structure you can imagine (for instance,
  I can implement 2d arrays as matrices in one place, and with
  dope vectors in another, in the same program).
- Very tight control over your code, so you never hit things
  like 'the funny way this compiler handles floating point rounding'.
- Not having to use stacks, jump subroutines, and all those nifty
  (but *expensive*) features you are forced to use in a hll.

And I could probably go on for hours.....
-- 
	Jack Jansen, jack@mcvax.UUCP
	The shell is my oyster.

jans@orca.UUCP (Jan Steinman) (09/06/85)

In article <29418@apple.UUCP> mark@apple.UUCP (Mark Lentczner) writes:
>I know this may sound bizarre, but why not Smalltalk for scientific
>programming?...  Of course the only real limitation on Smalltalk as a
>scientific compute engine is its speed...

It's just a matter of time before Smalltalk interfaces to compiled languages
are available.  Then the Smalltalk speed issue will be put to rest by writing
the number-crunching objects in C or Pascal (or perhaps standard scientific
routines in a FORTRAN library?).  Write it first in Smalltalk, then compile
the bottlenecks.  (Rapid prototype-driven design.)

Do YOU want scientific Smalltalk?  Contact your favorite implementor.  Special
primitive routines are not difficult to add.  All that is needed is strong
Smalltalk community demand.

>... there are some implementations from Tektronix that run circles around...
>... for still far less than lisp machine prices...

(Thanks for the plug!)  Some Tek implementations include primitives for many
trig functions that are normally performed in Smalltalk code in a "by the
book" implementation.  Floating point co-processors are also used to VASTLY
improve Smalltalk's number-crunching ability.  Research is under way
concerning primitive representation of floating point objects.  The
combination of these techniques ought to result in well-written Smalltalk
number-crunching methods that are just as fast as equivalent routines written
in a compiled language.

>Well, THERE'S some fuel for the fire....

Keep those flames a comin'!
-- 
:::::: Jan Steinman		Box 1000, MS 61-161	(w)503/685-2843 ::::::
:::::: tektronix!tekecs!jans	Wilsonville, OR 97070	(h)503/657-7703 ::::::

mff@wuphys.UUCP (Swamp Thing) (09/06/85)

In article <29418@apple.UUCP> mark@apple.UUCP (Mark Lentczner) writes:
>Of course the only real limitation on Smalltalk as a scientific compute
>engine is its speed.

One thing to keep in mind here is what type of "scientific programming" you're
talking about.  Are you concerned with analyzing some data with a few FFT's, or
with burning up a few CPU-days of Cray time?  Speed is clearly the dominant
consideration in the latter case.


						Mark F. Flynn
						Department of Physics
						Washington University
						St. Louis, MO  63130
						ihnp4!wuphys!mff

schwrtze@csd2.UUCP (Eric Schwartz group) (09/07/85)

Algol 68 is the right language for scientific programming. Rich operators,
good sizing of types (modes), i.e. long long real x; (probably H format on
a Vax). Complex data type supported. Ease of algorithm specification.
I cannot see why anyone needs anything else. It has everything C does and more.

	Hedley Rainnie.

	hedley@alaya

mjs@sfmag.UUCP (M.J.Shannon) (09/08/85)

> Algol 68 is the right language for scientific programming. Rich operators,
> good sizing of types (modes) i.e. long long real x; (probably H format on 
> a Vax). Complex data type supported. Ease of algorithm specification. 
> I cannot see why anyone needs anything else. It has everything C does and more.
> 
> 	Hedley Rainnie.
> 
> 	hedley@alaya

I dare say it's the "more" you refer to that makes it less popular than C.
PL/I has the same problem (and more :-)).
-- 
	Marty Shannon
UUCP:	ihnp4!attunix!mjs
Phone:	+1 (201) 522 6063
Disclaimer: I speak for no one.

jaap@mcvax.UUCP (Jaap Akkerhuis) (09/09/85)

In article <1714@orca.UUCP> jans@orca.UUCP (Jan Steinman) writes:
 > In article <29418@apple.UUCP> mark@apple.UUCP (Mark Lentczner) writes:
 > >I know this may sound bizarre, but why not Smalltalk for scientific

 > It's just a matter of time before Smalltalk interfaces to compiled languages
 > are available.  The the Smalltalk speed issue will be put to rest by writing

As long as nobody gives a definition for "scientific programming" this
discussion will drag on until all merits of PAL/8, ADA and COBOL have been
beaten to death.

	--jaap

mac@uvacs.UUCP (Alex Colvin) (09/17/85)

> Algol 68 is the right language for scientific programming....
Got any for VAX UN*X?

rcd@opus.UUCP (Dick Dunn) (09/18/85)

> Algol 68 is the right language for scientific programming. Rich operators,
> good sizing of types (modes) i.e. long long real x; (probably H format on 
> a Vax). Complex data type supported. Ease of algorithm specification. 
> I cannot see why anyone needs anything else. It has everything C does and more.

It has everything C does except available implementations, interfaces to
operating systems on which it exists, a user community for help, accessible
tutorial information and readable reference material...
in short, everything except contact with reality (as we know it).

Don't misunderstand me--it's a marvelous language design (except for the
`long' and `short' qualifiers, which are the same screwup as C, FOOTRAN,
etc.)  But I don't use language designs to do my work.  I use language
implementations.
-- 
Dick Dunn	{hao,ucbvax,allegra}!nbires!rcd		(303)444-5710 x3086
   ...Lately it occurs to me what a long, strange trip it's been.

dik@zuring.UUCP (09/20/85)

In article <59@opus.UUCP> rcd@opus.UUCP (Dick Dunn) writes:
(Again, he did not write this...)
>> Algol 68 is the right language for scientific programming. Rich operators,
>> good sizing of types (modes) i.e. long long real x; (probably H format on 
>> a Vax). Complex data type supported. Ease of algorithm specification. 
>> I cannot see why anyone needs anything else. It has everything C does and more.
>
>It has everything C does except available implementations, ...
Yes, but what is the reason?  Lack of imagination on the part of compiler
writers?  We have been using a truly marvellous compiler since about 1971,
supplied by CDC Nederland for CDC Cyber systems.  Also, in the UK there are
lots of compilers for other systems.  It appears to be the same as with
Algol 60: popular in Europe, not used in the US.
>
>Don't misunderstand me--it's a marvelous language design (except for the
>`long' and `short' qualifiers, which are the same screwup as C, FOOTRAN,
>etc.)
No, it's worse than C etc., because you won't know what is implemented.
>
>   ...Lately it occurs to me what a long, strange trip it's been.
Right.
-- 
dik t. winter, cwi, amsterdam, nederland
UUCP: {seismo|decvax|philabs}!mcvax!dik

dik@zuring.UUCP (09/20/85)

In article <243@zuring.UUCP> dik@zuring.UUCP (Dik T. Winter) writes:
>We have been using a truly marvellous compiler since about 1971, supplied
>by CDC Nederland for CDC Cyber systems.
Oops, 1975  is more like it.
-- 
dik t. winter, cwi, amsterdam, nederland
UUCP: {seismo|decvax|philabs}!mcvax!dik

iwm@icdoc.UUCP (Ian Moor) (09/23/85)

In article <59@opus.UUCP> rcd@opus.UUCP (Dick Dunn) writes:
>> Algol 68 is the right language for scientific programming. Rich operators,
>It has everything C does except available implementations, interfaces to

Come on!  What about:
Two portable implementations: ALGOL 68C and ALGOL 68RS, working on IBM
and DEC kit;
One manufacturer-supported full implementation (including PAR BEGIN END
& SEMAPHORES): CDC;
A usable subset on the PDP11: '68S.

-- 
Ian W Moor
                                   
 Department of Computing     Whereat a great and far-off voice was heard, saying,
 Imperial College.           Poop-poop-poopy, and it was even so; and the days
 180 Queensgate              of Poopy Panda were long in the land.
 London SW7 Uk.              Filtered for error, synod on filtration & infiltration

arnold@gatech.CSNET (Arnold Robbins) (09/27/85)

In article <59@opus.UUCP> rcd@opus.UUCP (Dick Dunn) writes:
>> Algol 68 is the right language for scientific programming. Rich operators,
>It has everything C does except available implementations, interfaces to

Rumor has it that the Amsterdam Compiler Kit people are working on an
Algol 68 front end. That'll be a nice thing to have, along with the global
optimizer they're working on too.

(See the September '83 CACM for more info on the Amsterdam Compiler Kit, and
who to write to get it; you need a Unix license, since their C compiler is
PCC. We have it, and I've been told that it generates pretty good code.)
-- 
Arnold Robbins
CSNET:	arnold@gatech	ARPA:	arnold%gatech.csnet@csnet-relay.arpa
UUCP:	{ akgua, allegra, hplabs, ihnp4, seismo, ut-sally }!gatech!arnold

Hello. You have reached the Coalition to Eliminate Answering Machines.
Unfortunately, no one can come to the phone right now....