mark@DRD.Com (Mark Lawrence) (06/27/90)
[I sent this via e-mail and then thought that the comments might be of general interest]

Don, I saw your post in comp.lang.perl and wanted to share our (admittedly limited) experience with Perl. We are fairly new to UNIX (I'm the senior UNIX user in-house, having used it since 1986; others are much less comfortable with it), so basic capabilities that experienced folks might take for granted -- effective use of regular expressions, awk, sed, sophisticated use of the shell and so forth -- have come very hard to us. Perl sort of tied everything together in one place, gave all these things a sense of cohesiveness, and now we understand a lot more about the features we discover in awk, sed, the shell and the like that Perl obviously derived from.

Incidentally, we use Perl to write a lot of the code that makes up the core of an application that I'm the project manager for. It involves data management (because the application deals with a lot of data from various sources), and generating code to model structures, initialize maps and so forth is a very straightforward job with perl (as it probably would be with a combination of shell, awk and sed, but as I say -- it took perl to put it all together for us).

The documentation ain't great, but I found that a single serious read-through of the notorious man page gave me enough to get going pretty well. At present, I think it is weakest in the area of packages and how to use them effectively. The reference cards that Vromans put together are an invaluable help. Of course, Schwartz and Wall claim that a book is in the works, and we'll probably purchase multiple copies when it becomes available.

The text-oriented-ness of Perl seems really logical to us, and having all the capabilities in one tool seems like it should be a performance win. Actually, the original reason I got interested in it was that awk didn't have a debugger (except: bailing out at line n :-) and perl did.

In summary, our experience with Perl has been fairly positive. Obtuse code *can* be written in Perl, but then, I've seen some obtuse shell/awk/sed scripts, too. Certainly, Larry seems to be able to top anybody in terms of reducing an algorithm to the tersest and most efficient set of statements, but then, he wrote it. Doesn't bother me. I get done what needs to get done.
-- 
mark@DRD.Com   uunet!apctrc!drd!mark   (Mark Lawrence)   (918) 743-3013
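A minimal sketch of the kind of code generation described above, with an invented input format (the post doesn't show the real data formats, so the field list, struct name and output layout here are hypothetical):

    #!/usr/bin/perl
    # Invented example: read a description with lines of the form
    #     name  c-type  count
    # and emit a C structure plus an initialized lookup map for it.

    while (<STDIN>) {
        next if /^\s*#/ || /^\s*$/;                 # skip comments and blank lines
        ($name, $type, $count) = split;
        push(@fields, sprintf("    %s %s[%d];", $type, $name, $count));
        push(@map,    sprintf("    { \"%s\", %d },", $name, $count));
    }

    print "struct record {\n", join("\n", @fields), "\n};\n\n";
    print "struct map_entry { char *name; int count; } record_map[] = {\n";
    print join("\n", @map), "\n    { 0, 0 }\n};\n";

Fed a line like "width int 4", this emits "int width[4];" inside the struct and a matching { "width", 4 } map entry.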
tchrist@convex.COM (Tom Christiansen) (06/27/90)
I suspect that most readers here have already read things I've posted extolling the virtues of perl programming over shell programming, so I'll try to skip such scintillating remarks. On the darker side, I honestly do maintain that there are several areas in which perl is weak and therefore a sub-optimal pick as the programming tool of choice.

Interacting with binary data is cumbersome and error-prone, albeit feasible. I say cumbersome and error-prone because even if you set things up to automagically rebuild the perl version of <sys/acct.h> when the C version is updated (I do), you've probably got an $acct_t variable somewhere to serve as the format for pack/unpack conversions, and this WON'T get automagically rebuilt. So you lose, and there's nothing to warn you of this. [A sketch of the kind of pack/unpack code involved follows this article.]

I'm not entirely convinced that socket-level networking is really most appropriately done in perl, although I've written some programs on the order of 500 lines that do appear easier in perl. There are no facilities for RPC calls. I'm not sure there ought to be, either. I don't know that I'd be thrilled to see Xlib built into perl, and while I know Larry's adding curses, or at least providing the ability to do so, I wonder how well this will work out. I'm concerned about efficiency and ease of coding of these things. Will the ability to patch in your own C functions cause people to turn from C in cases where this is not honestly merited?

I also wonder how well perl scales to very large applications. My largest single perl program (man) is itself a bit over 1300 lines long -- not a long program as programs go, but due to the frequency with which it is run and the annoyance factor of having to wait a couple of seconds for the parse to complete each time, I've undumped the script into an a.out, at which point it beats the C version of the man program (and does a lot more, too). But I'm sure there must be a point of diminishing returns.

I've also had plenty of experiences with bugs, although to his credit I must admit that Larry's been a lot more responsive in this arena than any software vendor I've ever had dealings with, even though THEY were getting paid for maintenance. Still, sometimes you encounter a nasty bug, get a core dump or a wrong answer, and spend hours isolating it to prove to yourself it's not your own fault. Sometimes even when I'm convinced it's not, it really is, such as a sprintf() problem I had with a %-1000s field or some such similar nonsense. The bug that bites me worst right now is that sometimes in large programs, function calls under the debugger mysteriously return 1 rather than the value they are supposed to return. This problem evaporates when subjected to intense scrutiny: if run NOT under the debugger, or reduced to a small test case, all works well.

One of the criticisms that one can make of perl is that it's prone to obfuscation, even more so than C. The regular expressions can easily become illegible, and with the ability to eval newly generated code on the fly, all things are possible. Of course, much of the guilt lies on the individual programmer for poor coding and commenting habits, but nonetheless there seems to be something in the language that too easily lends itself to obfuscation.

Don Libes, the original poster, mentions that most of what he's read in magazines and on USENIX has been over-enthusiastic, with little criticism to the contrary.
Well, if you've read Kolstad's UNIX REVIEW articles of the past three months (inspired/derived to a certain extent from my USENIX tutorials), you'll see that Rob has in several places been less than fawningly complimentary. He mentions that it's a kitchen-sink language, perhaps a little feature-heavy. He speaks of the daunting, information-dense man "page". He complains that you're somehow just supposed to "know" that to access the aliases DBM database you have to concatenate a null byte, as in $aliases{'kolstad'."\000"}. (This latter actually makes sense when you figure it out, but I won't try to explain it here; a short sketch follows this article.) So he's at least trying to acknowledge some of the difficulties people may have with it.

[Don Libes also ponders what "real computer scientists" have to say about the language. Well, what's it take to be a "real computer scientist"? Do O/S people count, or only language designers and compiler writers? Do you need certain degrees, publications, or world renown?]

It's true that I was the first here to use perl; I grabbed it when the first version came out. But unlike Don Libes's site, there are quite a lot of people using perl here. Some use it for projects purely in perl, some as auxiliary tools for major projects involving C and C++, and others for automated software-test scripts or system-administration purposes. It was for system-management purposes that it was first appreciated, but in the last year or so many others have embraced it as well. I don't really know how many perl programmers we have here now: it's well over a dozen, maybe two dozen, and the number continues to grow weekly.

So in answer to Don's question, yes, I do think that people other than Larry can program in perl. I might amend that to say that the answer is a qualified yes. The qualification is that I don't believe anyone can program quite so effectively in perl as Larry can. He of course understands not just some but each and every one of the semi- and undocumented nuances of the language. I think I'm pretty good at programming in perl, but most of what I do still comes out looking like C with embedded sed. Larry takes a problem, looks at it a different way, and often comes up with something two orders of magnitude simpler and faster because of his intimate acquaintance with the language. It's only now and then that I come up with something that doesn't look very C-like, as in:

    next if $sections && !grep($mandir =~ /man$_/, @sections);

and even then I feel somewhat guilty about it. :-)

I hope that most of the subtleties of the language will be outlined in that fabled tome, the perl book he and Randal are working on. I'm especially interested in matters of efficiency and optimization. Larry often writes things with big multi-line evals, and I'd like a better grasp of why this is so often so important for getting the promised 'faster-than-sed' performance. I think this book has the potential to make perl more accessible to the general public.

One final concern still makes me wonder, and it is not a new one: just where is this thing called perl going? Towards what is it evolving? Will it reach a point in its evolution when it is "done"? I hope so, but let it not be at the hands of some maiming standards committee. Let it be the handiwork of just one craftsman, one vision. I'd like to be fair and optimistic without an undue quantity of zeal fueling my discussions.
I, too, am very interested to hear what others who've used this tool long enough to have a balanced view of it have to say. I've heard, and myself written, plenty of the good, and I, too, would appreciate hearing about the darker experiences people have had with it.

There is no ultimate answer to anything, let alone programming. But for what it was designed for, perl is a refreshing and pleasant change of pace. I'm reminded of how very painful it was, around a decade ago on a little Z-80 running CP/M with only an assembler, to generate any program at all. When I finally got a C compiler, it was such a refreshing pleasure that I cranked out a new tool on nearly a daily basis. (Of course, some may argue that the pleasure was like that of stopping banging your head against the wall. :-)

I will dare to suggest that some of the bad experiences people may have had with perl stem from trying to use the wrong tool for the job, but I don't know that for sure. All I know is that for much of the quotidian toil that faces the tool builder and the system administrator, who often have to whip together a passably functioning piece of software in nothing at all resembling the normal, well-deliberated process of planned software development, perl is a true blessing. It is in my sincere and considered opinion the most significant piece of general-purpose software to hit the software community since awk, and in that respect it far exceeds awk's humble ambitions.

--tom
-- 
Tom Christiansen                  {uunet,uiucdcs,sun}!convex!tchrist
Convex Computer Corporation                       tchrist@convex.COM
"EMACS belongs in <sys/errno.h>: Editor too big!"
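A minimal sketch of the <sys/acct.h> pack/unpack hazard described in the article above. The record layout, field names and file path are invented for illustration; the real struct acct differs from system to system:

    #!/usr/bin/perl
    # Hypothetical illustration: reading binary accounting records with a
    # hand-written pack/unpack format string.

    $acct_t  = "A8SSSL";      # name, uid, gid, tty, elapsed -- maintained by hand
    $recsize = length(pack($acct_t, "", 0, 0, 0, 0));

    open(ACCT, "/var/adm/pacct") || die "can't open pacct: $!\n";
    while (read(ACCT, $rec, $recsize) == $recsize) {
        ($name, $uid, $gid, $tty, $elapsed) = unpack($acct_t, $rec);
        print "$name\t$uid\t$elapsed\n";
    }
    close(ACCT);

    # If the C struct grows a field, a generated perl version of the header
    # can be rebuilt automatically, but $acct_t above won't be -- and nothing
    # warns you that every record is now unpacked at the wrong offsets.

The comment at the bottom is the point: the format string is the piece nothing rebuilds for you.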
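And a minimal sketch of the aliases-DBM wrinkle mentioned above: sendmail stores its keys as C strings, trailing NUL and all, so the perl lookup key needs "\000" appended. The database path and alias names here are examples only, and may differ on your system:

    #!/usr/bin/perl
    # Minimal sketch: look up entries in the sendmail aliases DBM database.
    # Each lookup key needs a trailing NUL byte appended -- the detail
    # you're somehow supposed to just "know".

    dbmopen(%aliases, "/etc/aliases", 0444) || die "can't open aliases dbm: $!\n";

    foreach $who ("kolstad", "postmaster") {        # example names only
        $exp = $aliases{$who . "\000"};
        if (defined $exp) {
            $exp =~ s/\0$//;                        # values may carry a trailing NUL too
            print "$who: $exp\n";
        } else {
            print "$who: (no such alias)\n";
        }
    }

    dbmclose(%aliases);

Looked up with the bare key 'kolstad', the database would appear to have no such entry; only the NUL-terminated key matches.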
evans@decvaxdec.com (Marc Evans) (06/28/90)
In article <8497@jpl-devvax.JPL.NASA.GOV>, lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
|> (However, I do have a complaint against people that don't know how to
|> use the / key on a manual page--presuming their pager knows about the
|> / key. With many of the questions that people ask in comp.lang.perl,
|> I just search through the man page using the very keyword they used,
|> and find the thing right there in the manual. People really don't know
|> how to use computers yet. Sigh.)

There is a midnight effort inside of DEC ULTRIXland to convert the manual page for perl to DEC's bookreader format (kind of a hypertext reader with lots of cross-referencing). The / and ? mechanisms of more/less are great, but hypermedia is a whole lot quicker (IMHO).

- Marc
===========================================================================
Marc Evans - WB1GRH - evans@decvax.DEC.COM | Synergytics     (603)635-8876
     Unix and X Software Contractor        | 21 Hinds Ln, Pelham, NH 03076
===========================================================================
peter@ficc.ferranti.com (Peter da Silva) (06/28/90)
In article <103428@convex.convex.com> tchrist@convex.COM (Tom Christiansen) writes:
> I don't know that I'd be thrilled to see Xlib built into perl, and
> while I know Larry's adding curses, or at least providing the ability
> to do so, I wonder how well this will work out. I'm concerned about
> efficiency and ease of coding of these things. Will the ability to
> patch in your own C functions cause people to turn from C in cases
> where this is not honestly merited?

One thing I have found useful is John Ousterhout's TCL: Tool Command Language. It's designed to add an extension language to various tools and (at least in the original, and in Karl Lehenbauer's AmigaTCL version) uses an RPC mechanism to communicate between separate programs. This way no individual program becomes a kitchen sink.

I have published, to the net, a version of my "browse" directory browser with a TCL interface. It's a nice clean language (sort of like a text-oriented Lisp), and adding extensions to it is amazingly easy. Here's a section of my browse.rc:

proc key_'K' {} {
    browse message {Edit key }
    set key [get key]
    set func key_[get keyname $key]
    set file [get env HOME]/.function
    if { [length [info procs $func] ] != 0 } {
        set def [list proc $func {} [info body $func]]
    } else {
        set def [list proc $func {} { ... }]
    }
    print $def\n $file
    browse message !vi $file
    browse shell [concat vi $file]
    source $file
}

proc key_'F' {} {
    set func [get response {Edit function }]
    if { [length $func chars] == 0 } return
    set file [get env HOME]/.function
    if { [length [info procs $func] ] != 0 } {
        set def [list proc $func {} [info body $func]]
    } else {
        set def [list proc $func {} { ... }]
    }
    print $def\n $file
    browse message !vi $file
    browse shell [concat vi $file]
    source $file
}

proc key_'d' {} {
    if { [string compare d [get key -d-]] == 0 } {
        set file [get file .]
        set prompt [concat Delete $file {? }]
        if { [string match {[yY]} [get key $prompt]] } {
            if { ![eval [concat browse delete $file]] } { perror }
        }
    }
}
-- 
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>
leo@ehviea.ine.philips.nl (Leo de Wit) (06/29/90)
In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
[stuff left out...]
|(However, I do have a complaint against people that don't know how to
|use the / key on a manual page--presuming their pager knows about the
|/ key. With many of the questions that people ask in comp.lang.perl,
|I just search through the man page using the very keyword they used,
|and find the thing right there in the manual. People really don't know
|how to use computers yet. Sigh.)

Unfortunately, the very keyword you're looking for is often underlined, or typed over multiple times, so it contains embedded backspaces (and possible underscores or repetitions) in the manual text. I had a few bad experiences with that lately. There really should be an option of the pager to compare modulo underlining/fat printing.

Leo.
dstrombe@ucqais.uc.edu (pri=2 Dan Stromberg) (06/29/90)
In article <814@ehviea.ine.philips.nl>, leo@ehviea.ine.philips.nl (Leo de Wit) writes:
> Unfortunately, the very keyword you're looking for is often underlined,
> or typed over multiple times, so it contains embedded backspaces (and
> possible underscores or repetitions) in the manual text. I had a few
> bad experiences with that lately. There really should be an option of
> the pager to compare modulo underlining/fat printing.
> 
> Leo.

I don't know if this is possible on all systems, but:

    man ls | col -b | pg

seems to work nicely for me on a couple of different Sys V machines.

- Dan Stromberg    ...!tut.cis.ohio-state.edu!uccba!ucqais!dstrombe
cruff@ncar.ucar.edu (Craig Ruff) (06/29/90)
In article <SUA45BF@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>One thing I have found useful is John Ousterhout's TCL: Tool Command
>Language. ...

I used TCL as part of a library on a project, and it turned out to be useful. However, I would have liked to use a subroutine-callable version of perl instead! Then I wouldn't have had to add all sorts of additional functions to TCL.
-- 
Craig Ruff          NCAR               cruff@ncar.ucar.edu
(303) 497-1211      P.O. Box 3000      Boulder, CO  80307
schaefer@ogicse.ogc.edu (Barton E. Schaefer) (06/30/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes:
} In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
} [stuff left out...]
} |With many of the questions that people ask in comp.lang.perl,
} |I just search through the man page using the very keyword they used,
} |and find the thing right there in the manual.
}
} Unfortunately, the very keyword you're looking for is often underlined,
} or typed over multiple times, so it contains embedded backspaces (and
} possible underscores or repetitions) in the manual text. I had a few
} bad experiences with that lately. There really should be an option of
} the pager to compare modulo underlining/fat printing.

I have taken to using

    man perl | less -i

Searches in the "less" pager, at least in more recent versions, will match underlined or overstruck text when the ignore-case option is used.
-- 
Bart Schaefer                                     schaefer@cse.ogi.edu
leo@ehviea.ine.philips.nl (Leo de Wit) (06/30/90)
In article <2407@ucqais.uc.edu> dstrombe@ucqais.uc.edu (pri=2 Dan Stromberg) writes:
|In article <814@ehviea.ine.philips.nl>, leo@ehviea.ine.philips.nl (Leo de Wit) writes:
|> Unfortunately, the very keyword you're looking for is often underlined,
|> or typed over multiple times, so it contains embedded backspaces (and
|> possible underscores or repetitions) in the manual text. I had a few
|> bad experiences with that lately. There really should be an option of
|> the pager to compare modulo underlining/fat printing.
|> 
|> Leo.
|
|I don't know if this is possible on all systems, but:
|
|    man ls | col -b | pg
|
|seems to work nicely for me on a couple different Sys V machines.

Yep, works here too. Normally, using man(1) in the UCB universe (on a Pyramid), I get it for nothing, because the output is piped through ul(1); lately I did something like

    att man curses | more

(without the 'col' or 'ul'), which caused my problem. Well, I guess that's what you deserve if you want the best of two worlds 8-).

Also thanks to John Merritt, who gave me the 'ul' suggestion (mail to him bounced).

Leo.
peter@ficc.ferranti.com (Peter da Silva) (07/02/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes:
> Unfortunately, the very keyword you're looking for is often underlined,
> or typed over multiple times, so it contains embedded backspaces (and
> possible underscores or repetitions) in the manual text.

What I do is run it through a program I wrote called "strike" that converts this:

    _^Hu_^Hn_^Hd_^He_^Hr_^Hl_^Hi_^Hn_^He

into this:

    _________^M underline

It's much nicer on the printer, and you can do searches on it...
-- 
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>
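Peter's "strike" program isn't posted here; the following is an invented sketch of the same idea in perl (not his actual code), handling only the common nroff cases: bold by repeated overstriking, underlining by backspace-underscore pairs.

    #!/usr/bin/perl
    # Turn nroff overstrike sequences such as _^Hu_^Hn_^Hd into a row of
    # underscores, a carriage return, and the plain text, so a page still
    # prints underlined but can be searched as plain text.

    while (<STDIN>) {
        chop;
        1 while s/(.)\010\1/$1/;        # squash bold overstrikes: c^Hc^Hc -> c

        $plain = '';
        $marks = '';
        while ($_ ne '') {
            if (s/^_\010(.)// || s/^(.)\010_//) {   # underlined character
                $plain .= $1;
                $marks .= '_';
            } elsif (s/^(.)\010(.)//) {             # some other overstrike: keep one
                $plain .= $2;
                $marks .= ' ';
            } else {
                s/^(.)//;
                $plain .= $1;
                $marks .= ' ';
            }
        }

        if ($marks =~ /_/) {
            $marks =~ s/ +$//;
            print "$marks\r$plain\n";   # text overprints the underscores
        } else {
            print "$plain\n";
        }
    }

Piped after man, the output is searchable as plain text yet still comes out underlined on a line printer.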
logan@rockville.dg.com (James L. Logan) (07/04/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes:
# In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV
# (Larry Wall) writes:
# | [ . . . ] People really don't know
# |how to use computers yet. Sigh.)
#
# [ . . . ] There really should be an option of
# the pager to compare modulo underlining/fat printing.

Use the public-domain pager called "less". It can be configured to ignore underlining, boldfacing, etc. In fact, I use it to scan the perl man pages myself.

Just another happy hacker,
-Jim
-- 
James Logan                       UUCP: uunet!inpnms!logan
Data General Telecommunications   Inet: logan@rockville.dg.com
2098 Gaither Road                 Phone: (301) 590-3198
Rockville, MD 20850
peter@ficc.ferranti.com (Peter da Silva) (07/06/90)
In article <7825@ncar.ucar.edu> cruff@handies.UCAR.EDU (Craig Ruff) writes:
> In article <SUA45BF@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
> >One thing I have found useful is John Ousterhout's TCL: Tool Command
> >Language. ...
> I used TCL as part of a library on a project, and it turned out to be useful.
> However, I would have liked to use a subroutine callable version of perl
> instead! Then I wouldn't have had to add all sorts of additional functions
> to TCL.

Yes, TCL is sort of short in the subroutines department, but I think it makes a better extension language than, say, perl (or REXX, for that matter) because it's such a clean language... like a cross between lisp and awk. This makes it relatively easy to operate on programs as data... something I'd hate to have to do with (say) an algol-like language.

I think I'd really prefer a postscript core to the language. Anyone know how to get hold of the author of the Gosling postscript? He doesn't seem to be the Emacs Gosling, and the address in the docco is defunct.
-- 
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>
jbw@zeb.uswest.com (Joe Wells) (07/06/90)
In article <602@inpnms.ROCKVILLE.DG.COM> logan@rockville.dg.com (James L. Logan) writes:

   In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes:
   # In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV
   # (Larry Wall) writes:
   # | [ . . . ] People really don't know
   # |how to use computers yet. Sigh.)
   #
   # [ . . . ] There really should be an option of
   # the pager to compare modulo underlining/fat printing.

   Use the public-domain pager called "less". It can be configured to
   ignore underlining, boldfacing, etc. In fact, I use it to scan the
   perl man pages myself.

I like to look at the man page from inside GNU Emacs (where I can use find-tag to jump to the relevant source code with the touch of a key). So I use the Emacs function nuke-nroff-bs to clean up the man page. I've also got a version of nuke-nroff-bs that correctly strips all types of man-page headers and footers, if anyone wants one.

On a separate issue, does anyone know where less version 123 is archived? I have a copy I can email to people, but I'd prefer to refer people to a convenient archive. I looked for one a few months ago, but I couldn't find less version 123 anywhere.
-- 
Joe Wells <jbw@uswest.com>