[comp.lang.perl] Ruminations on the future of Perl

worley@compass.com (Dale Worley) (06/28/91)

I've been giving some thought to the future of Perl.  Of course, these
comments are a mite arrogant, but at least they may help Larry plan.

Probably the best thing for the long-run health of Perl (although the
worst for everybody now involved with it) is to scrap it and redesign
(and reimplement) it from the ground up.  Having implemented and used
Perl, we now know *what it should have been*.

In my opinion, Perl's biggest weaknesses are (1) its syntax is
fantastically complex (consider the multiple meanings of / and $), and
(2) it is a collection of features more than a coherent language for
expressing algorithms.

However, rebuilding Perl is quite impractical.  Not only would it
involve an enormous amount of work, but all existing Perl code would
have to be rewritten.

The other idea I have for the future of Perl is that it is now
undergoing the transition from a "program" to a "program product".
This means that the users are demanding that it be much more reliable,
understandable, and portable than ever before.  And, as Brooks noted,
it takes about three times as much effort to produce a program product
as to produce a program.  (Consider the number of bugs and
difficulties that version 4 is having, because of the number of users
exercising it.)  Where is all of this effort going to come from?  Can
we really expect Larry to devote the rest of his life to maintaining
Perl?

In order for Perl to take its rightful place as a standard Unix tool,
it needs to become a product.  However, I can't see how to finance the
large amount of work that is going to be necessary on a continuing
basis.

Dale

Dale Worley		Compass, Inc.			worley@compass.com
--
If you can't drink a lobbyist's whiskey, take his money, sleep with his
women and still vote against him in the morning, you don't belong in
politics.	-- Speaker of the California Assembly Jesse Unruh

flee@cs.psu.edu (Felix Lee) (06/28/91)

>Probably the best thing for the long-run health of Perl (although the
>worst for everybody now involved with it) is to scrap it and redesign
>(and reimplement) it from the ground up.

I started to reimplement (but not redesign) Perl a while back.  My
intent was to create a stand-alone parser (and scanner) that would
read a Perl program and output a syntax tree in lispy form.  If you
then take the syntax tree and do a huge amount of semantic analysis,
you can output, say, Scheme code.  Then you just need to implement a
Scheme engine with enough libraries to support Perl builtins.

Once you have a Scheme suitable for Perl-type programming tasks, you
can easily create a more politically correct syntax.  And then you can
automatically convert Perl to the new syntax by way of the Perl to
Scheme translator.

I halfway implemented the parser before the project fell by the
wayside.  Do people care enough about Perl that it would be worth
pursuing again?
--
Felix Lee	flee@cs.psu.edu

ddj@zardoz.club.cc.cmu.edu (Doug DeJulio) (06/28/91)

In article <WORLEY.91Jun27163439@sn1987a.compass.com> worley@compass.com (Dale Worley) writes:
>In order for Perl to take its rightful place as a standard Unix tool,
>it needs to become a product.  However, I can't see how to finance the
>large amount of work that is going to be necessary on a continuing
>basis.

It's not like it's a commercial product, it's available under
copyleft.  People *will* continue to improve it, and we'll all benefit
from the result.
-- 
Doug DeJulio
ddj@zardoz.club.cc.cmu.edu (NeXT mail)
dd26+@andrew.cmu.edu (AMS/ATK mail)

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (06/28/91)

In article <WORLEY.91Jun27163439@sn1987a.compass.com> worley@compass.com (Dale Worley) writes:
  [ on what to do in Son Of Perl ]
> In my opinion, Perl's biggest weaknesses are (1) its syntax is
> fantastically complex (consider the multiple meanings of / and $), and
> (2) it is a collection of features more than a coherent language for
> expressing algorithms.

Wait a minute. What's wrong with a complicated syntax, and what's wrong
with having a grab-bag of features?

The first thing I'd do in a rewrite is have the parser produce strictly
threaded intermediate code. Then setjmp() and longjmp() would disappear,
we could turn the optimizer back on, the intermediate code could easily
be compiled into C, and the Perl semantics (though never its syntax :-))
would be much easier to handle. The second thing I'd do is give the
program a strong sense of its internal state, so that it could properly
(and portably!) checkpoint and restart, pass state to another Perl
process even running on a different machine, and allow independent
debuggers. Third, I'd add non-preemptive threads. Finally, I'd try to
educate Perl's users into understanding when the established tools are
more appropriate for the job.

---Dan

worley@compass.com (Dale Worley) (06/28/91)

Here's the quote I was looking for:

    A programming language cooked up haphazardly as a collection of
    brilliant ideas is a menace to good programming methodology.  It is
    also more difficult to implement, so the odds favor an unreliable
    compiler, whose unreliable parts programmers learn to avoid through
    bitter experience...
    -- Harlan Mills, "Software Productivity"

(He was referring to PL/1.)

Dale

worley@compass.com (Dale Worley) (06/28/91)

In article <1991Jun28.020603.1069@zardoz.club.cc.cmu.edu> ddj@zardoz.club.cc.cmu.edu (Doug DeJulio) writes:
   It's not like it's a commercial product, it's available under
   copyleft.  People *will* continue to improve it, and we'll all benefit
   from the result.

My belief is that it needs a *lot* of improvement -- many times more
work than has been put into it so far.  (That's the screw of going
from "program" to "product".)  The present situation will only allow
for small, incremental improvements.

Of course, no one can make a commercial product out of it; the
copyleft will guarantee that anybody who tries will go broke.  The one
exception is Larry -- he can make proprietary changes and keep people
from copying them!

Dale

Dale Worley		Compass, Inc.			worley@compass.com
--
There are three kinds of people: those that can count, and those that can't.

rampson@flash (Michael T. Rampson) (06/29/91)

In article <WORLEY.91Jun27163439@sn1987a.compass.com>,
worley@compass.com (Dale Worley) writes:
> I've been giving some thought to the future of Perl.  Of course, these
> comments are a mite arrogant, but at least they may help Larry plan.
> 
> Probably the best thing for the long-run health of Perl (although the
> worst for everybody now involved with it) is to scrap it and redesign
> (and reimplement) it from the ground up.  Having implemented and used
> Perl, we now know *what it should have been*.

I don't have any major problems with Perl.  What do you envision that it
should have been?

> 
> In my opinion, Perl's biggest weaknesses are (1) its syntax is
> fantastically complex (consider the multiple meanings of / and $), and
> (2) it is a collection of features more than a coherent language for
> expressing algorithms.

I think that 'fantastically complex' is too strong (IMHO), although I
readily admit that its syntax does have a few quirks that take some
getting used to.  Its syntax is closely related to C, csh, awk and sed
(with regard to regexes), so it doesn't take too long to start doing
something useful with perl.  Of course, as you want to do more and
more complex things with perl, you have to delve deeper into perl.

> 
> However, rebuilding Perl is quite impractical.  Not only would it
> involve an enormous amount of work, all existing Perl code would have
> to be rewritten.

Which means that you are basically talking about implementing
another language (if existing code is going to have to be rewritten),
unless you're thinking about some kind of transition a la C -> ANSI C.

> 
> The other idea I have for the future of Perl is that it is now
> undergoing the transition from a "program" to a "program product".
> This means that the users are demanding that it be much more reliable,
> understandable, and portable than ever before.  And, as Brooks noted,
> it takes about three times as much effort to produce a program product
> as to produce a program.  (Consider the number of bugs and
> difficulties that version 4 is having, because of the number of users
> exercising it.)  Where is all of this effort going to come from?  Can
> we really expect Larry to devote the rest of his life to maintaining
> Perl?

The number of bugs isn't really all that surprising considering that
Larry is doing essentially all of the work and he has no real test
organization to help him out other than his unit tests and his user
community.  I don't think we can really expect Larry to devote the rest
of his life to maintaining perl.  At some point he is going to have had
enough of perl, and either someone else is going to have to pick it up,
or he is going to have to make arrangements with the FSF to pick up M&E
on perl.  As a member of this user community, I think in the near future
Larry is going to have to ask us for help, or we should volunteer our
help to Larry.

> 
> In order for Perl to take its rightful place as a standard Unix tool,
> it needs to become a product.  However, I can't see how to finance the
> large amount of work that is going to be necessary on a continuing
> basis.
> 

I believe the original intent of perl was to be more functionally
oriented instead of being some general elegant language.  The idea was
that perl could replace all of your csh, sh, awk, sed, etc. scripts with
one tool that could do all of these things, thus simplifying maintenance
by only having to know one tool, and speeding up execution by not having
to spawn off other processes to do awk, sed, etc.  To extend the
functionality even further, it includes OS calls so you don't have to
write C programs on top of writing perl, losing some speed but gaining
maintainability, since you still only need to know one tool to get even
more work accomplished, more quickly.  Perl has been a godsend to my
organization and group, where we had an SCM (Software Configuration
Management) tool set written in csh, ksh, sed, awk, etc.  By rewriting
it in perl, we have since improved maintainability of the toolset
dramatically, letting us add even more functionality.  It has also sped
up our build process dramatically, with speedups from 3 times faster on
most tools up to 30 times faster on others (especially tools that were
made up of csh and awk scripts, where all of the time was spent firing
up awk to do parsing on strings and lines).  I'd like to extend my
thanks to Larry (and Randal) for all of the hard work they have put in
to provide a tool that has saved me and my company countless hours.
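
As a small illustration of the kind of rewrite involved: a csh-plus-awk
pipeline like the one in the comment below becomes a few lines of perl
that never fork another process (the file and fields here are just an
example, not one of our actual tools):

    # replaces something like:  awk -F: '{ print $1 }' /etc/passwd | sort
    open(PW, "/etc/passwd") || die "can't open /etc/passwd: $!";
    while (<PW>) {
        ($login) = split(/:/);      # first colon-separated field of $_
        push(@logins, $login);
    }
    close(PW);
    print join("\n", sort @logins), "\n";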

			sincerely,
			Mike Rampson


__
The objective of all dedicated employees should be to thoroughly
analyze all situations, anticipate all problems prior to their
occurrence, have answers for these problems, and move swiftly to solve
these problems when called upon.

However, when you are up to your ass in alligators it is difficult to
remind yourself your initial objective was to drain the swamp.

    	    	    	    	    	    rampson@uswat

mullins@convex.com (Don Mullins) (06/29/91)

In <WORLEY.91Jun27163439@sn1987a.compass.com> worley@compass.com (Dale Worley) writes:

>I've been giving some thought to the future of Perl.  Of course, these
>comments are a mite arrogant, but at least they may help Larry plan.

    I like having Perl in its present form.  I don't think it was really
    intended to be a "programming" language as much as it was intended to be
    a "scripting" language.

    The one major improvement, IMHO, would be the ability to dump compiled
    (interpreted?) code separately from the perl core, rather than together
    with it as is done today.

Don

--
Don Mullins       mullins@convex.COM  {uiucuxc, uunet, sun, ...}!convex!mullins
Product Engineering              Convex Computer Corporation, Richardson, Texas

flee@cs.psu.edu (Felix Lee) (06/29/91)

Perhaps I should clarify.  What I have is pieces of a Perl parser.
Once that's done, the obvious next step is a semantic analysis and
optimization phase.  And then a code generator can generate whatever
sort of back-end code you like: assembler, C, Scheme, whatever.
Scheme seems to me to have several advantages over C.

If this ever gets that far and I do decide that Scheme would work
well, there will probably be a Scheme compiler and interpreter bundled
in.  One of my friends has been wishing for a lightweight Scheme that
can occupy the same niche as Perl and C as a systems programming tool.

If you're unfamiliar with Scheme, the language is something of a
descendant of Algol and Lisp.  The lispy syntax will probably turn off
anyone who doesn't like lispy parentheses, but it's pretty trivial to
paste a different syntax in front.  Existing Scheme compilers generate
code comparable in speed to that of C compilers; heavily recursive code
is usually faster in Scheme.
--
Felix Lee	flee@cs.psu.edu

tchrist@convex.COM (Tom Christiansen) (06/29/91)

From the keyboard of worley@compass.com (Dale Worley):
:   It's not like it's a commercial product, it's available under
:   copyleft.  People *will* continue to improve it, and we'll all benefit
:   from the result.
:
:My belief is that it needs a *lot* of improvement -- many times more
:work than has been put into it so far.  (That's the screw of going
:from "program" to "product".)  The present situation will only allow
:for small, incremental improvements.

:Of course, no one can make a commercial product out of it; the
:copyleft will guarantee that anybody who tries will go broke.  The one
:exception is Larry -- he can make proprietary changes and keep people
:from copying them!

This is not true: there are companies that ship perl.  Mine is one of
them.  Of course, we also ship GNU emacs.  Sigh. :-)  But I can't imagine
the goal of the copyleft is to limit accessibility to perl.  Notice the 
alternative license it's now shipped with.

What do you want added for a commercial product?  Something you can
hand over to kindergarteners?  Or do you want marketing glossies?
User manuals that teach it in a slower, more tutorial style?
Interactive learning tools?  Salesmen who push it because they get a
commission?  Some of these ideas perhaps have merit.  Some perhaps do
not.  What are you looking for?

Larry does have a test organization, and a vast one: comp.lang.perl,
distributed regression testing in action.  There aren't many people who
are up for fixing the bugs we sometimes find.  I admit that we probably
couldn't handle it without Larry's dedication.

If I had my druthers, I wouldn't mind seeing the following things in 
perl.  Some are more feasible than others.


1)  Replace the $~, $|, etc. handle-specific I/O variables with something 
    less painful.   Maybe FILE'property or such.  (A sketch follows the list.)

2)  Be able to generate C code.  Be able to easily link in C code.
    Not have to make a stab at getting your structure formats right.

3)  Real structured variables.  $struct'field'subfield'subsubfield
    looks OK.  Conversions between these and C structs.

3)  Dump internal state as Dan Bernstein mentioned.   Not rogue dumping.
    Keeping the whole thing on disk for each dump is a pain.

4)  More knowledge of touching variables outside your scope.  -w 
    should be able to know this.  This would catch more typos and
    potentially dangerous operations.
    
5)  Dynamic loading of C libraries.  [should be feasible on say Suns.]

6)  Ability to call co-routines in perl from a C program, such as
    for string routines.
	 
7)  Optimized (k)sh2p translator.  Obviously sed and awk calls are easy,
    but I mean catching `basename' and head and tail etc.

8)  Some nice way to have perl as my shell environment.

9)  Perl used as an extension language.   (For editors, newsreaders, etc.)

10) Guaranteed presence on more machines.  That means shipment by 
    more vendors.  Target the workstation vendors first for market
    penetration: sun, dec, ibm, hp, sgi, etc.

11) Shared memory variables for cooperating processes.

12) Easier IPC (or RPC) library interfaces than what's there now.

13) Lightweight threads.  fork() (or open(FOO, "-|")) costs too much.

14) Fewer surprises for new users.  See the gotchas to know what I mean.

15) Better recovery from syntax errors.

16) More fine-grained (and deterministic) signal handling.  I want
    sigblock() etc.
    
17) Being able to  know whether my read()s will restart or not instead
    of being at the mercy of the O/S I'm running under.

18) A nice, moderately-paced book for non-hackers who want to learn it.
    Don't get me wrong: I love the Camel book.  It's just not for everyone.
    Maybe such users should go back and learn C and UNIX and awk and
    all that first before they return to the Camel book, but maybe
    there should be another way.

19) More applications written in perl publicly available.   A place
    where these can be stored and inquired about.

20) More people to answer questions about it.  Maybe 18 would help.

21) More people who understand the actual code well so we don't have
    all our eggs in one basket.  Larry, look out for falling rocks.

22) A database with good access methods for all the archives of 
    comp.lang.perl.
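
To make (1) concrete, here's today's select() dance for unbuffering a
single handle, next to a made-up FILE'property spelling (the file name
is arbitrary, and the FILE'property line is hypothetical, not real perl):

    open(LOG, ">>/tmp/mylog") || die "can't append to /tmp/mylog: $!";
    $oldfh = select(LOG);   # make LOG the currently selected handle
    $| = 1;                 # $| applies only to the selected handle
    select($oldfh);         # restore the old default output handle

    # hypothetical FILE'property flavor of the same thing:
    #   $LOG'autoflush = 1;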


That's all for now.   


--tom
--
Tom Christiansen		tchrist@convex.com	convex!tchrist
		"So much mail, so little time."  

emv@msen.com (Ed Vielmetti) (06/29/91)

   19) More applications written in perl publicly available.   A place
       where these can be stored and inquired about.

   22) A database with good access methods for all the archives of 
       comp.lang.perl.

Good things to have.  Might be worth sticking the archives into WAIS?

--Ed

-- MSEN Archive Service file verification
quake.think.com
total 5561
drwxrwxrwx  2 14           1024 Jun 25 00:07 wais-discussion
-rw-rw-rw-  1 1637       635857 Jun 21 21:50 WAIStation-Canned-Demo.sit.hqx
-r--r--r--  1 14         463981 Jun 13 20:44 wais-8-b1.tar.Z
-rw-rw-r--  1 1556       475161 May 21 18:43 wais-8-a12-3.tar.Z
-rw-rw-rw-  1 1637       635225 May 16 03:01 WAIStation-0-62.sit.hqx
-rw-rw-rw-  1 999        321268 May 13 20:48 wais-ir12.ZU
-rw-rw-rw-  1 14         409388 Apr  5 00:44 wais-8-a11.tar.Z
-rw-rw-rw-  1 1637      1094536 Mar 28 00:37 WAIStation-0-62-Sources.sit.hqx
-rw-rw-rw-  1 14        1070714 Mar 23 01:24 WAIStation-0-61.sit.hqx
-rw-rw-rw-  1 14         475815 Mar 23 01:19 wais-8-a10.tar.Z
found wais ok
quake.think.com:/pub/wais/

rusty@groan.Berkeley.EDU (Rusty Wright) (06/29/91)

I suppose everyone has their list of what they'd like to see done to
improve perl.  This list is what I dislike about perl; things that I
consider to be bad design and unnecessarily confuse new (and old) perl
users.  It often seems to me that perl is one big hoary blob of
creeping featurism.  I'll reference the perl man page here as that's
what I'm using to jog my memory.

1) The man page gives too many ways to do the same thing, for no
apparent reason other than hackeritis.  For example, towards the end
of the "Data Types and Objects" section are some examples that are
preceded by the sentence "Anyway, the following lines are equivalent
to each other" and the first example is

	while ($_ = <STDIN>) { print; }

and the last is

	print while <STDIN>;

Similarly, in the "Syntax" section are examples for opening "foo" and
dying if it can't be opened, with the classic

	open(foo) || die "Can't open $foo: $!";

(but which I find to be just hackeritis show-off).

And then further on there are the various examples on the "foo:"
block.

I suppose one might argue that some of these weird constructs execute
faster than others; my response would be that programmers shouldn't be
wasting their time tweaking their perl code to try to speed it up.  In
other words, perhaps perl needs more work in optimizing when it
compiles.

2) Functions as unary operators.  Maybe I'm missing something here.
In the paragraph that precedes the list of perl functions it says "If
an operation is listed both with and without parentheses around its
arguments, it means you can use it as a unary operator or as a
function call."  Now, tell me, when would I ever be required to or
need to or want to use chdir as a unary operator?  Seems to me they
should all be functions.  Towards the end of the man page there's the
"Precedence" section which tries to explain this mess.

3) getpgrp and getppid: what's wrong with this picture?  I was once
trying to create a temporary file (wanting to use the process id for
the file's name) and was amazed that perl doesn't have a getpid
function.  Then, when I was working on another perl program I stumbled
across $$.  I was so aggravated I swore out loud.

4) last and next.  What's wrong with using break and continue?  Just
seems like a gratuitous incompatibility with C, csh, awk, etc.

5) length should accept an array as an argument and return the number
of elements in it.  See, for example, tcl.

6) open.  What a nightmare.  I simply cannot believe that crap of
putting >, <, and | into the EXPR argument for specifying redirecting
input/output or piping to/from a command.  I'd use the same syntax
that fopen(3) uses and popen for piping to/from commands.  (See the
sketch at the end of this list.)

7) Predefined names: $_, $., et al.  This is a worse nightmare than
open.  What's the point of this?  Hasn't anyone ever heard of using
DESCRIPTIVE NAMES for variables?  I like the man page's disclaimer,
"Most of them have reasonable mnemonics, or analogues in one of the
shells."  Where's my barf bag.
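
For the record, here are the forms I'm objecting to, with an
fopen(3)-style alternative sketched in comments (the alternative is
hypothetical, not real perl; the file names are placeholders):

    open(IN,   "<foo")        || die "can't read foo: $!";
    open(OUT,  ">bar")        || die "can't write bar: $!";
    open(LOG,  ">>log")       || die "can't append to log: $!";
    open(SORT, "sort foo |")  || die "can't run sort: $!";

    # something fopen/popen-like would read more like:
    #   open(IN, "foo", "r");
    #   popen(SORT, "sort foo", "r");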

jv@mh.nl (Johan Vromans) (06/29/91)

In article <2gaH&?b#@cs.psu.edu> flee@cs.psu.edu (Felix Lee) writes:

   Perhaps I should clarify.  What I have is pieces of a Perl parser.
   Once that's done, the obvious next step is a semantic analysis and
   optimization phase.  And then a code generator can generate whatever
   sort of back-end code you like: assembler, C, Scheme, whatever.
   Scheme seems to me to have several advantages over C.

How about a perl front-end for gcc?

	Johan
-- 
Johan Vromans				       jv@mh.nl via internet backbones
Multihouse Automatisering bv		       uucp: ..!{uunet,hp4nl}!mh.nl!jv
Doesburgweg 7, 2803 PL Gouda, The Netherlands  phone/fax: +31 1820 62911/62500
------------------------ "Arms are made for hugging" -------------------------

spaf@cs.purdue.EDU (Gene Spafford) (06/30/91)

I use Perl because I can quickly come up with a solution to
programming problems that need a fast, simple hack that I may need to
reconfigure later.  I used to use C, but for a long time C has had too
much overhead because of all the include files, declaring structs,
etc.   With some of the cruft in ANSI C, its worse.  I've used ksh and
awk for a while, but neither had all the functions I wanted.  Then I
found Perl.

The "hackeritis" features that Rusty complains about are *exactly* why
people like me like to use Perl -- if there was only one way to
accomplish each task, the amount of memorization and understanding
involved to accomplish any arbitrary problem would be huge.  It's the
same reason why people prefer C to Ada, for instance.  It's also why a
knowledgeable mechanic treasures his pair of Vise-Grips over his 7mm
open-end box wrench.  And it's the same principle that gives us many
hundreds of thousands of words in spoken language that few of us know
or use, but we can still communicate effectively with a smaller set of
less precise words.

Larry is a "hacker's hacker" in the sense that he knows how to produce 
working code to do useful things.  Perl is a tool that he developed to
match the way he produces code.  Not all of us have the same
programming "idioms" in mind Larry does, and that may be why we view
some of the features of Perl with curiosity, horror, or indifference.
I've found as I've used Perl, however, and read the book, that I am
developing new idioms and techniques, and the language helps me
express them without undue difficulty.

I don't mean to discourage people from talking about how Perl might be
changed to be more efficient, or add new features, or otherwise change
its nature.  However, I caution everyone to consider that moves in
that direction may lead to horrors like ANSI standards :-).  The very
concept of an ANSI committee to standardize Perl horrifies me.

Before we take anyone's comments on changes too seriously, I'd suggest
we ought to make them take this little quiz:

1) Have you read the Camel book at least twice?
    a) No
    b) Yes
    c) The what? Camels?

2) Can you understand 6 months of the various "Just another Perl hacker"
signature programs Randal has used in his postings?
    a) No way
    b) Yes, and I can optimize some of them
    c) Randal who?

3) Your favorite compiled language of the following is
    a) Pascal
    b) C++ 
    c) Ada

4) You think design-by-committee is a good thing
    a) Of course
    b) Hell no -- look at Ada
    c) Hell yes -- look at Ada

5) How many working, useful lines of Perl code have you written in the
last 6 months:
    a) less than 10,000
    b) more than 10,000
    c) I have others write my code for me

6) How many of the following languages are you familiar with (written
substantial amounts of code): APL, nawk, C++, Eiffel, Scheme, abc, Modula,
SNOBOL, ML, SmallTalk, Prolog, Icon, .....(insert other appropriate
languages here)
   a) 1 or 2
   b) most of them
   c) There are languages with names like that?

7) How many compilers/interpreters have you ever written or helped with?
   a) One in my compilers undergrad class, once
   b) Many
   c) I have others write my code for me

8) Approximately how many people are regularly using anything you designed
and/or wrote?
   a) Me, and maybe a few friends
   b) Several hundred at a minimum
   c) I use the code written by others


Anybody answering (a) to more than 2 questions is not yet qualified to
discuss the future of Perl (or pretty much any language).

Anybody answering (c) to any of them shouldn't be allowed to post to
this group! :-)

# Besides, why would you want to change a language that lets you
# write a program like this?  :-)
exit unless eval q[sub then {substr($b,1)cmp substr($a,1);} print sort then 
 split if ($\,$_,$,)=(qx,echo .,,q,Perl Just hacker another,,q, ,)];

rlk@think.com (Robert Krawitz) (06/30/91)

A couple of other things I would like in perl:

1)  True multidimensional arrays.  Associative arrays simply don't cut
it, if nothing else for the fact that looping over associative arrays
yields values in an unpredictable order, whereas I want something that
looks a bit more like a matrix.  (A sketch of the current workaround
follows this list.)  Some ideas:

rank(@a) returns the rank of a (the number of dimensions).  This enables
support for true arbitrarily (and variably) dimensioned arrays.

$a[@dims] uses the list as a group of dimensions.  Thus a list of
dimensions could be used to access an array that may be of variable
dimensionality (such as might be passed to a subroutine).  Scalar and
vector dimensions could be mixed, e.g. $a[$dim0,@dim1_n].

maxindex(@a,$dim) returns the maximum index along dimension $dim (since
the array might not be square, it would return the largest index along
any vector aligned with dimension $dim).  Similar for minindex.

foreach $i (@array) sets a variable @_DIMS to a list of the indices of
the current element of @array.

isin(@array, @indices) returns true or false (1 or 0) depending on
whether @indices represent an element of @array.

2)  A mode to catch references to an undefined variable and error out at
runtime, along with some kind of optional typing system to catch stupid
errors (for example, using a string as a number).  Also a method to
"declare" typed arguments to subroutines with runtime type checking.
-- 
ames >>>>>>>>>  |	Robert Krawitz <rlk@think.com>	245 First St.
bloom-beacon >  |think!rlk	(postmaster)		Cambridge, MA  02142
harvard >>>>>>  .	Thinking Machines Corp.		(617)234-2116

stripes@eng.umd.edu (Joshua Osborne) (07/01/91)

In article <RUSTY.91Jun28181513@groan.Berkeley.EDU>, rusty@groan.Berkeley.EDU (Rusty Wright) writes:
> 1) The man page gives too many ways to do the same thing, for no
> apparent reason other than hackeritis.

Ahh, but there is a good reason.  One way is good for programs that will
live a long time.  One way is for programs that won't.  One way is easy
for C people to use, another is for sh'ers, yet another for awk'ers.  Another
way is more general.  I don't really think we should have one way for C,
another for sh, and another for awk'ers, but that may just be because I only
know C (well, C is the one I know well), and of course I want the C version
to be taken in just about all conflicting cases...

>                                         For example, towards the end
> of the "Data Types and Objects" section are some examples that are
> preceded by the sentence "Anyway, the following lines are equivalent
> to each other" and the first example is
> 
> 	while ($_ = <STDIN>) { print; }

This is showing a "longcut"; I _like_ the way "while(<FOO>)" works, it is
very easy to use.

> and the last is
> 
> 	print while <STDIN>;

I'll admit, this may be a bit much (am I just saying that because _I_ never
use it?).

> Similarly, in the "Syntax" section are examples for opening "foo" and
> dying if it can't be opened, with the classic
> 
> 	open(foo) || die "Can't open $foo: $!";
> 
> (but which I find to be just hackeritis show-off).

This is useless for a script you will use for a while.  It is useful for
something you are going to write in 3 minutes or less (because it will only
take 4 to do by hand...).  I'll have to admit I almost never use it.

> And then further on there are the various examples on the "foo:"
> block.
> 
> I suppose one might argue that some of these weird constructs execute
> faster than others; my response would be that programmers shouldn't be
> wasting their time tweaking their perl code to try to speed it up.  In
> other words, perhaps perl needs more work in optimizing when it
> compiles.

Remember that it compiles every time you run your script.  I don't want to make
the compile much slower, do you?

> 2) Functions as unary operators.  Maybe I'm missing something here.
> In the paragraph that precedes the list of perl functions it says "If
> an operation is listed both with and without parentheses around its
> arguments, it means you can use it as a unary operator or as a
> function call."  Now, tell me, when would I ever be required to or
> need to or want to use chdir as a unary operator?  Seems to me they
> should all be functions.  Towards the end of the man page there's the
> "Precedence" section which tries to explain this mess.

chdir?  Likely never.  However sin you might; some of the string functions
would be good that way.  For once perl's syntax is regular (all foo may be
goo, not all foo except bar may be goo, unless used elsewhere in the script
as baz, or when $* is set...), and you jump on it?

> 3) getpgrp and getppid: what's wrong with this picture?  I was once
> trying to create a temporary file (wanting to use the process id for
> the file's name) and was amazed that perl doesn't have a getpid
> function.  Then, when I was working on another perl program I stumbled
> across $$.  I was so aggravated I swore out loud.

Nothing, I would like them.

> 4) last and next.  What's wrong with using break and continue?  Just
> seems like a gratuitous incompatibility with C, csh, awk, etc.

I dunno, maybe Larry wants to remind us that they are different from C's
break & continue?  Maybe because Larry just likes last and next, and chose
them before perl came out of the closet?  Maybe because perl isn't C?  Who
knows?

> 5) length should accept an array as an argument and return the number
> of elements in it.  See, for example, tcl.

This would be quite nice.

> 6) open.  What a nightmare.  I simply cannot believe that crap of
> putting >, <, and | into the EXPR argument for specifying redirecting
> input/output or piping to/from a command.  I'd use the same syntax
> that fopen(3) uses and popen for piping to/from commands.

More convenient, sometimes.  However it is (somewhat) difficult to use when
the filename is variable but the mode isn't.  I often worry about someone
passing "foo|" as a filename (what happens if I open "<foo|"?  or
"sort -n foo||"?).

I hate the fopen mode characters; I always need to use the man page to
find them.  I like perl's more.  A lot more.  But some wouldn't work well
as a 3rd argument.  What does fopen(BAZ, "cat foo", "|") do?

> 7) Predefined names: $_, $., et al.  This is a worse nightmare than
> open.  What's the point of this?  Hasn't anyone ever heard of using
> DESCRIPTIVE NAMES for variables?  I like the man page's disclaimer,
> "Most of them have reasonable mnemonics, or analogues in one of the
> shells."  Where's my barf bag.

I like $_; something more descriptive would be harder to type.  I don't like
most of the others.  $Perl'lineno or $Perl'euid would be easier to remember;
it would be nice to have them.  I wouldn't mind keeping $., and $( (or is it
$), or $<, or $>, or what?) and the rest around for both compatibility and
people with better memories.  Besides, we are running out of symbols.  (We'd
need an APL char set if we keep this up...)
-- 
           stripes@eng.umd.edu          "Security for Unix is like
      Josh_Osborne@Real_World,The          Multitasking for MS-DOS"
      "The dyslexic porgramer"                  - Kevin Lockwood
"CNN is the only nuclear capable news network..."
    - lbruck@eng.umd.edu (Lewis Bruck)

rodney@dali.ipl.rpi.edu (Rodney Peck II) (07/01/91)

In article <RUSTY.91Jun28181513@groan.Berkeley.EDU> rusty@groan.Berkeley.EDU (Rusty Wright) writes:
[...]
>Similarly, in the "Syntax" section are examples for opening "foo" and
>dying if it can't be opened, with the classic
>
>	open(foo) || die "Can't open $foo: $!";
>
>(but which I find to be just hackeritis show-off).

I don't understand -- I use this all the time.  really.  what's the
problem?

[...]

>3) getpgrp and getppid: what's wrong with this picture?  I was once
>trying to create a temporary file (wanting to use the process id for
>the file's name) and was amazed that perl doesn't have a getpid
>function.  Then, when I was working on another perl program I stumbled
>across $$.  I was so aggravated I swore out loud.

You must be new to unix -- from the csh man page:

CSH(1)                   USER COMMANDS                     CSH(1)

[...]

     $$   Substitute the process number of the (parent) shell.


Sun Release 4.1    Last change: 2 October 1989                  9

It wasn't Larry's idea.  Everyone has been using $$ as the pid since time
began.  It didn't occur to me that it might not be obvious until you
mentioned that you were swearing out loud about it.  In fact, if you say
man perl and then search for 'process number' it goes right to $$.

Please, let's not be changing perl just because some people can't read the
man page.

-- 
Rodney

Tom Christiansen <tchrist@convex.COM> (07/01/91)

From the keyboard of rodney@dali.ipl.rpi.edu (Rodney Peck II):
:In article <RUSTY.91Jun28181513@groan.Berkeley.EDU> rusty@groan.Berkeley.EDU (Rusty Wright) writes:
:>Similarly, in the "Syntax" section are examples for opening "foo" and
:>dying if it can't be opened, with the classic
:>
:>	open(foo) || die "Can't open $foo: $!";
:>
:>(but which I find to be just hackeritis show-off).
:
:I don't understand -- I use this all the time.  really.  what's the
:problem?

Perhaps those folks not heavily into shell programming do not take well 
to using && and || for flow control.  But those who are, do, and are 
glad it's there.

--tom
--
Tom Christiansen		tchrist@convex.com	convex!tchrist
		"So much mail, so little time."  

rodney@sun.ipl.rpi.edu (Rodney Peck II) (07/01/91)

In article <1991Jun30.224532.23556@convex.com> tchrist@convex.COM (Tom Christiansen) writes:
>From the keyboard of rodney@dali.ipl.rpi.edu (Rodney Peck II):
>:In article <RUSTY.91Jun28181513@groan.Berkeley.EDU> rusty@groan.Berkeley.EDU (Rusty Wright) writes:
>:>Similarly, in the "Syntax" section are examples for opening "foo" and
>:>dying if it can't be opened, with the classic
>:>
>:>	open(foo) || die "Can't open $foo: $!";
>:>
>:>(but which I find to be just hackeritis show-off).
>:
>:I don't understand -- I use this all the time.  really.  what's the
>:problem?
>
>Perhaps those folks not heavily into shell programming do not take well 
>to using && and || for flow control.  But those who are, do, and are 
>glad it's there.

hm.  I suppose that's probably what the problem is.  I find it perfectly
normal.  I don't do much with shell scripts, but I used to program in 
Lisp a lot and it's pretty common to write:
  (or (function-1)
      (fun-2)
      (fun-3))

to try a sequence of things to get something done... like:

  (or (open-local)
      (open-remote-via-tftp)
      (open-remote-via-ftp)
      (open-remote-via-rlogin)
      (open-remote-via-telnet))

so... open || die makes perfect sense to me and is very readable.

What isn't so readable is:

$number-- || print "Number is zero.";

then again, maybe it is...
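
Unrolled, that one-liner amounts to this (the decrement always happens;
the test sees the value from before the decrement):

    $number = 0;                          # example starting value
    $old = $number--;                     # post-decrement returns the old value
    print "Number is zero." unless $old;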

-- 
Rodney

stripes@eng.umd.edu (Joshua Osborne) (07/01/91)

In article <1991Jun30.224532.23556@convex.com> tchrist@convex.COM (Tom Christiansen) writes:
>From the keyboard of rodney@dali.ipl.rpi.edu (Rodney Peck II):
>:In article <RUSTY.91Jun28181513@groan.Berkeley.EDU> rusty@groan.Berkeley.EDU (Rusty Wright) writes:
>:>Similarly, in the "Syntax" section are examples for opening "foo" and
>:>dying if it can't be opened, with the classic
>:>
>:>	open(foo) || die "Can't open $foo: $!";
>:>
>:>(but which I find to be just hackeritis show-off).
>:
>:I don't understand -- I use this all the time.  really.  what's the
>:problem?
>
>Perhaps those folks not heavily into shell programming do not take well 
>to using && and || for flow control.  But those who are, do, and are 
>glad it's there.

Lots of people have assumed he is objecting to the use of || for flow
control.  However I think he is objecting to open(foo) using $foo to
find the file name to open.  It is certainly a less than oft-used feature.

I almost never use it.  I don't remember any example programs using it.
However I don't remember any JAPHs using it, so it can't be there just
to make obscure perl code :-)

(this is what I assumed he was referring to when I made my original reply)
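
For anyone who hasn't tripped over it: with no filename argument, open()
takes the name to open from the scalar variable spelled the same as the
filehandle.  That's all the man page example relies on (the file here is
just an example):

    $foo = "/etc/motd";                     # open(foo) looks in $foo
    open(foo) || die "Can't open $foo: $!";
    while (<foo>) { print; }
    close(foo);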
-- 
           stripes@eng.umd.edu          "Security for Unix is like
      Josh_Osborne@Real_World,The          Multitasking for MS-DOS"
      "The dyslexic porgramer"                  - Kevin Lockwood
"CNN is the only nuclear capable news network..."
    - lbruck@eng.umd.edu (Lewis Bruck)