mark@DRD.Com (Mark Lawrence) (06/27/90)
[I sent this via e-mail and then thought that the comments might be of general interest]

Don, I saw your post in comp.lang.perl and wanted to share our (admittedly limited) experience with Perl. Being fairly novice to UNIX (I'm the senior UNIX user in-house, having used it since 1986; others are much less comfortable with it), basic capabilities that experienced folks might take for granted (effective use of regexps, awk, sed, sophisticated use of the shell and so forth) have come very hard to us. Perl sort of tied everything together in one place and gave all these things a sense of cohesiveness, and now we understand a lot more about the features we discover in awk, sed, the shell and the like that Perl obviously derived from.

Incidentally, we use Perl to write a lot of the code that makes up the core of an application that I'm the project manager for. It involves data management (because the application deals with a lot of data from various sources), and generating code to model structures, initialize maps and so forth is a very straightforward job with perl (as it probably would be with a combination of shell, awk and sed, but as I say -- it took perl to put it all together for us).

The documentation ain't great, but I found that a single serious read-through of the notorious man page gave me enough to get going pretty well. At present, I think it is weakest in the area of packages and how to use them effectively. The reference cards that Vromans put together are an invaluable help. Of course, Schwartz and Wall claim that a book is in the works, and we'll probably purchase multiple copies when it becomes available.

The text-oriented-ness of Perl seems really logical to us, and having all the capabilities in one tool seems like it should be a performance win. Actually, the original reason I got interested in it was because awk didn't have a debugger (except: bailing out at line n :-) and perl did.

In summary, our experience with Perl has been fairly positive.
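To give a flavor of the kind of code generation I mean, here is a minimal sketch; the field list, type names, and struct name are made-up illustrations, not anything from our actual application:

```perl
# Hypothetical sketch: generate a C structure definition from a small
# table of field descriptions, the way a data-management front end might.
@fields = ("lat:double", "lon:double", "depth:float");

print "struct sample {\n";
foreach $field (@fields) {
    ($name, $type) = split(/:/, $field);    # "name:type" pairs
    print "    $type $name;\n";
}
print "};\n";
```

The real scripts read the field tables from our source data files, of course, but the shape is the same: a loop, a split, and some prints.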
Obtuse code *can* be written in Perl, but then, I've seen some obtuse shell/awk/sed scripts, too. Certainly, Larry seems to be able to top anybody in terms of reducing an algorithm to the tersest and most efficient set of statements, but then, he wrote it. Doesn't bother me. I get done what needs to get done.
-- 
mark@DRD.Com    uunet!apctrc!drd!mark    (918) 743-3013
tchrist@convex.COM (Tom Christiansen) (06/27/90)
I suspect that most readers here have already read things I've posted extolling the virtues of perl programming over shell programming, so I'll try to skip such scintillating remarks. On the darker side, I honestly do maintain that there are several areas in which perl is weak and therefore a sub-optimal pick as the programming tool of choice.

Interacting with binary data is cumbersome and error-prone, albeit feasible. I say cumbersome and error-prone because even if you set things up to automagically rebuild the perl version of <sys/acct.h> when the C version is updated (I do), you've probably got an $acct_t variable somewhere to serve as the format for pack/unpack conversions, and this WON'T get automagically rebuilt. So you lose, and there's nothing to warn you of this.

I'm not entirely convinced that socket-level networking is really most appropriately done in perl, although I've written some programs on the order of 500 lines that do appear easier in perl. There are no facilities for RPC calls. I'm not sure there ought to be, either. I don't know that I'd be thrilled to see Xlib built into perl, and while I know Larry's adding curses, or at least providing the ability to do so, I wonder how well this will work out. I'm concerned about efficiency and ease of coding of these things. Will the ability to patch in your own C functions cause people to turn from C in cases where this is not honestly merited?

I also wonder how well perl scales to very large applications. My largest single perl program (man) is itself a bit over 1300 lines long, not a long program as programs go, but due to the frequency with which it is run and the annoyance factor of having to wait a couple seconds for the parse to complete each time, I've undumped the script into an a.out, at which point it does beat the C version of the man program (and does a lot more, too). But I'm sure there must be a point of diminishing returns.
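The fragility I mean can be sketched in a few lines; the template and struct layout here are hypothetical stand-ins, not the real <sys/acct.h>:

```perl
# Hypothetical acct-like record: char name[8], two unsigned longs,
# and an unsigned short.  If the C header changes, this template must
# be fixed by hand, and nothing warns you when it isn't.
$acct_t = "A8 L L S";

$record = pack($acct_t, "lwall", 120, 30, 1);            # build binary record
($name, $utime, $stime, $flag) = unpack($acct_t, $record);
print "$name $utime $stime $flag\n";
```

The template is just a string, so a stale copy of it unpacks garbage silently; that is the whole problem.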
I've also had plenty of experiences with bugs, although to his credit I must admit that Larry's been a lot more responsive in this arena than any software vendor I've ever had dealings with, even though THEY were getting paid for maintenance. Still, sometimes you encounter a nasty bug and get a core dump or wrong answer and spend hours isolating it to prove to yourself it's not your own fault. Sometimes even when I'm convinced it's not, it really is, such as a sprintf() problem I had with a %-1000s field or some such similar nonsense. The bug that bites me worst right now is that sometimes in large programs, function calls under the debugger mysteriously return 1 rather than the value they are supposed to return. This problem evaporates when subjected to intense scrutiny: if run NOT under the debugger, or reduced to a small test case, all works well.

One of the criticisms that one can make of perl is that it's prone to obfuscation, even more so than C. The regular expressions can easily become illegible, and with the ability to eval newly generated code on the fly, all things are possible. Of course, much of the guilt lies on the individual programmer for poor coding and commenting habits, but nonetheless there seems to be something in the language that too easily lends itself to obfuscation.

Don Libes, the original poster, mentions that most of what he's read in magazines and at USENIX has been over-enthusiastic, with little criticism to the contrary. Well, if you've read Kolstad's UNIX REVIEW articles of the past three months (inspired/derived to a certain extent from my USENIX tutorials), you'll see that Rob has in several places been less than fawningly complimentary. He mentions that it's a kitchen-sink language, perhaps a little feature-heavy. He speaks of the daunting, information-dense man "page". He asks how you are supposed to just "know" that to access the aliases DBM database you have to concatenate a null byte, as in $aliases{'kolstad'."\000"}.
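A sketch of the quirk, with a plain hash standing in for the real dbmopen'd database and made-up alias data:

```perl
# Sendmail, being a C program, stores each key and value with its
# trailing NUL byte, so a perl lookup must supply it explicitly.
# (Plain hash here; with the real file you'd dbmopen %aliases first.)
%aliases = ( "kolstad\000" => "rbk\@hypothetical.com\000" );  # made-up data

$addr = $aliases{'kolstad' . "\000"};   # $aliases{'kolstad'} finds nothing
$addr =~ s/\000$//;                     # strip the NUL off the value, too
print "$addr\n";
```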
(This latter actually makes sense when you figure it out, but I won't try to explain it here.) So he's at least trying to acknowledge some of the difficulties people may have with it.

[Don Libes also ponders what "real computer scientists" have to say about the language. Well, what's it take to be a "real computer scientist"? Do O/S people count, or only language designers and compiler writers? Do you need certain degrees, publications, or world renown?]

It's true that I was the first here to use perl; I grabbed it when the first version came out. But unlike Don Libes's site, there are quite a lot of people using perl here. Some use it for projects purely in perl, some as auxiliary tools for major projects involving C and C++, while others use it for automated software test scripts or system administration purposes. It was originally for purposes of system management that it was first appreciated, but in the last year or so many others have embraced it as well. I don't really know how many perl programmers we have here now: it's well over a dozen, maybe two, and the number continues to grow weekly.

So in answer to Don's question, yes, I do think that people other than Larry can program in perl. I might amend that to say that the answer is a qualified yes. The qualification is that I don't believe anyone can program quite so effectively in perl as Larry can. He of course understands not just some but each and every one of the semi- and undocumented nuances of the language. I think I'm pretty good at programming in perl, but most of what I do still comes out looking like C with embedded sed. Larry takes a problem, looks at it a different way, and often comes up with something two orders of magnitude simpler and faster because of his intimate acquaintance with the language.
It's only now and then that I come up with something that doesn't look very C-like, as in:

    next if $sections && !grep($mandir =~ /man$_/, @sections);

and even then I feel somewhat guilty about it. :-)

I hope that most of the subtleties of the language will be outlined in that fabled tome, the perl book he and Randal are working on. I'm especially interested in matters of efficiency and optimization. Larry often writes things with big multi-line evals, and I'd like to have a better grasp on why this is so often so important for getting the promised 'faster-than-sed' performance. I think that this book has the potential for making perl more accessible to the general public.

One final concern still makes me wonder, and is not a new one: just where is this thing called perl going? Towards what is it evolving? Will it reach a point in its evolution when it is "done"? I hope so, but let it not be at the hands of some maiming standards committee. Let it be the handiwork of just one craftsman, one vision.

I'd like to be fair and optimistic without an undue quantity of zeal fueling my discussions. I, too, am very interested to hear what others who've used this tool long enough to have a balanced view of it have to say. I've heard, and myself written, plenty of the good, and I, too, would appreciate hearing the darker experiences people have had with it. There is no ultimate answer to anything, let alone programming. But for what it was designed for, perl is a refreshing and pleasant change of pace. I'm reminded of how very painful it was, around a decade ago on a little Z-80 running CP/M with only an assembler, to generate any program at all. When I finally got a C compiler, it was such a refreshing pleasure that I cranked out a new tool on nearly a daily basis. (Of course, some may argue that the pleasure was as that of stopping banging your head against the wall.
:-) I will dare to suggest that some of the bad experiences people may have had with perl stem from trying to use the wrong tool for the job, but I don't know that for sure. All I know is that for much of the quotidian toil that faces the tool builder and the system administrator, who often have to whip together a passably functioning piece of software in nothing at all resembling the normal, well-deliberated process of planned software development, perl is a true blessing. It is in my sincere and considered opinion the most significant piece of general-purpose software to hit the software community since awk, and in that respect it far exceeds awk's humble ambitions.

--tom
-- 
Tom Christiansen                        {uunet,uiucdcs,sun}!convex!tchrist
Convex Computer Corporation                            tchrist@convex.COM
          "EMACS belongs in <sys/errno.h>:  Editor too big!"
evans@decvaxdec.com (Marc Evans) (06/28/90)
In article <15610@bfmny0.BFM.COM>, tneff@bfmny0.BFM.COM (Tom Neff) writes:
|> If a few vendors started shipping Perl binaries with their
|> OS releases, it'd become a standard in months..

You may be glad to know that I am doing all that I can to get DEC to provide perl on the unsupported tape that comes with ULTRIX. This tape has historically contained many publicly available programs (sources and/or binaries), including GNU stuff and even Larry's rn. I'll let you know when everything is in place...

- Marc
===========================================================================
Marc Evans - WB1GRH - evans@decvax.DEC.COM  |  Synergytics   (603)635-8876
    Unix and X Software Contractor          |  21 Hinds Ln, Pelham, NH 03076
===========================================================================
evans@decvaxdec.com (Marc Evans) (06/28/90)
In article <8497@jpl-devvax.JPL.NASA.GOV>, lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
|> (However, I do have a complaint against people that don't know how to
|> use the / key on a manual page--presuming their pager knows about the
|> / key.  With many of the questions that people ask in comp.lang.perl,
|> I just search through the man page using the very keyword they used,
|> and find the thing right there in the manual.  People really don't know
|> how to use computers yet.  Sigh.)

There is a midnight effort inside of DEC ULTRIXland to convert the manual page for perl to DEC's bookreader format (kind of a hypertext reader with lots of cross-referencing). The / and ? mechanisms of more/less are great, but hypermedia is a whole lot quicker (IMHO).

- Marc
===========================================================================
Marc Evans - WB1GRH - evans@decvax.DEC.COM  |  Synergytics   (603)635-8876
    Unix and X Software Contractor          |  21 Hinds Ln, Pelham, NH 03076
===========================================================================
tony@oha.UUCP (Tony Olekshy) (06/28/90)
In <STEF.90Jun25223718@zweig.sun>, stef@zweig.sun (Stephane Payrard) writes:
-
- Perl is becoming one of my favorite tools.
- Today, I will discuss some of its limitations.
-
- ... more precisely, Perl so far
- 1/ has no clean way of: allocating, deallocating,
-    accessing data in memory (ie: no pointer)

Optimizations left as an exercise to the reader:

    $Memory{&UniqueKey($datum)} = $datum;
    push(@Pointers, &UniqueKey($datum));

Stop thinking so low level; that's not what perl is for (IMHO)...

    $C = "-?\\d+\\.?\\d*";      # Coordinate.
    $P = "p:($C):($C):";        # Point.
    $L = "l:($P)($P)b:($C):";   # Line.
    $S = "s:(($L)+)";           # Line String.

    sub GetLines   # Return list of spatial lines in $Polygon.
    {
        local($Polygon) = $_[0];
        local(@Out);
        die "Bad Polygon" unless $Polygon =~ s/^$S$/\1/;
        while ($Polygon =~ s/^($L)//) { push(@Out, $1); }
        return (@Out);
    }

Now that's a data structure (and don't laugh, it works, with a GKS child, and makes a great prototyping workbench -- so there ;-).
-- 
Yours etc., Tony Olekshy.       Internet: tony%oha@CS.UAlberta.CA
BITNET: tony%oha@UALTAMTS.BITNET        uucp: alberta!oha!tony
peter@ficc.ferranti.com (Peter da Silva) (06/28/90)
In article <103428@convex.convex.com> tchrist@convex.COM (Tom Christiansen) writes:
> I don't know that I'd be thrilled to see Xlib built into perl, and
> while I know Larry's adding curses, or at least providing the ability
> to do so, I wonder how well this will work out.  I'm concerned about
> efficiency and ease of coding of these things.  Will the ability to
> patch in your own C functions cause people to turn from C in cases
> where this is not honestly merited?

One thing I have found useful is John Ousterhout's TCL: Tool Command Language. It's designed to add an extension language to various tools and (at least in the original, and in Karl Lehenbauer's AmigaTCL version) uses an RPC mechanism to communicate between separate programs. This way no individual program becomes a kitchen-sink. I have published, to the net, a version of my "browse" directory browser with a TCL interface. It's a nice clean language (sort of like a text-oriented Lisp), and adding extensions to it is amazingly easy. Here's a section of my browse.rc:

    proc key_'K' {} {
        browse message {Edit key }
        set key [get key]
        set func key_[get keyname $key]
        set file [get env HOME]/.function
        if { [length [info procs $func] ] != 0 } {
            set def [list proc $func {} [info body $func]]
        } else {
            set def [list proc $func {} { ... }]
        }
        print $def\n $file
        browse message !vi $file
        browse shell [concat vi $file]
        source $file
    }

    proc key_'F' {} {
        set func [get response {Edit function }]
        if { [length $func chars] == 0 } return
        set file [get env HOME]/.function
        if { [length [info procs $func] ] != 0 } {
            set def [list proc $func {} [info body $func]]
        } else {
            set def [list proc $func {} { ... }]
        }
        print $def\n $file
        browse message !vi $file
        browse shell [concat vi $file]
        source $file
    }

    proc key_'d' {} {
        if { [string compare d [get key -d-]] == 0 } {
            set file [get file .]
            set prompt [concat Delete $file {?
    }]
            if { [string match {[yY]} [get key $prompt]] } {
                if { ![eval [concat browse delete $file]] } { perror }
            }
        }
    }
-- 
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>
leo@ehviea.ine.philips.nl (Leo de Wit) (06/29/90)
In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes:
[stuff left out...]
|(However, I do have a complaint against people that don't know how to
|use the / key on a manual page--presuming their pager knows about the
|/ key.  With many of the questions that people ask in comp.lang.perl,
|I just search through the man page using the very keyword they used,
|and find the thing right there in the manual.  People really don't know
|how to use computers yet.  Sigh.)

Unfortunately, the very keyword you're looking for is often underlined, or typed over multiple times, so it contains embedded backspaces (and possibly underscores or repetitions) in the manual text. I had a few bad experiences with that lately. There really should be an option in the pager to compare modulo underlining/fat printing.

Leo.
dstrombe@ucqais.uc.edu (pri=2 Dan Stromberg) (06/29/90)
In article <814@ehviea.ine.philips.nl>, leo@ehviea.ine.philips.nl (Leo de Wit) writes: > Unfortunately, the very keyword you're looking for is often underlined, > or typed over multiple times, so it contains embedded backspaces (and > possible underscores or repetitions) in the manual text. I had a few > bad experiences with that lately. There really should be an option of > the pager to compare modulo underlining/fat printing. > > Leo. I don't know if this is possible on all systems, but: man ls | col -b | pg seems to work nicely for me on a couple different Sys V machines. - Dan Stromberg ...!tut.cis.ohio-state.edu!uccba!ucqais!dstrombe
cruff@ncar.ucar.edu (Craig Ruff) (06/29/90)
In article <SUA45BF@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes: >One thing I have found useful is John Ousterhout's TCL: Tool Command >Language. ... I used TCL as part of a library on a project, and it turned out to be useful. However, I would have liked to use a subroutine callable version of perl instead! Then I wouldn't have had to add all sorts of additional functions to TCL. -- Craig Ruff NCAR cruff@ncar.ucar.edu (303) 497-1211 P.O. Box 3000 Boulder, CO 80307
schaefer@ogicse.ogc.edu (Barton E. Schaefer) (06/30/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes: } In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) writes: } [stuff left out...] } |With many of the questions that people ask in comp.lang.perl, } |I just search through the man page using the very keyword they used, } |and find the thing right there in the manual. } } Unfortunately, the very keyword you're looking for is often underlined, } or typed over multiple times, so it contains embedded backspaces (and } possible underscores or repetitions) in the manual text. I had a few } bad experiences with that lately. There really should be an option of } the pager to compare modulo underlining/fat printing. I have taken to using man perl | less -i Searches in the "less" pager, at least in more recent versions, will match underlined or overstruck text when the ignore-case option is used. -- Bart Schaefer schaefer@cse.ogi.edu
leo@ehviea.ine.philips.nl (Leo de Wit) (06/30/90)
In article <2407@ucqais.uc.edu> dstrombe@ucqais.uc.edu (pri=2 Dan Stromberg) writes: |In article <814@ehviea.ine.philips.nl>, leo@ehviea.ine.philips.nl (Leo de Wit) writes: |> Unfortunately, the very keyword you're looking for is often underlined, |> or typed over multiple times, so it contains embedded backspaces (and |> possible underscores or repetitions) in the manual text. I had a few |> bad experiences with that lately. There really should be an option of |> the pager to compare modulo underlining/fat printing. |> |> Leo. | |I don't know if this is possible on all systems, but: | | man ls | col -b | pg | |seems to work nicely for me on a couple different Sys V machines. Yep, works here too. Normally, using man(1) in the UCB universe (on a Pyramid), I get it for nothing, because the output is piped through ul(1); lately I did something like att man curses|more (without the 'col' or 'ul'), which caused my problem. Well, I guess that's what you deserve if you want the best of two worlds 8-). Also thanks to John Merritt, who gave me the 'ul' suggestion (mail to him bounced). Leo.
peter@ficc.ferranti.com (Peter da Silva) (07/02/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes:
> Unfortunately, the very keyword you're looking for is often underlined,
> or typed over multiple times, so it contains embedded backspaces (and
> possible underscores or repetitions) in the manual text.

What I do is run it through a program I wrote called "strike" that converts this:

    _^Hu_^Hn_^Hd_^He_^Hr_^Hl_^Hi_^Hn_^He

into this:

    _________^M
    underline

It's much nicer on the printer, and you can do searches on it...
-- 
Peter da Silva.  `-_-'  +1 713 274 5180.  <peter@ficc.ferranti.com>
inc@tc.fluke.COM (Gary Benson) (07/03/90)
The person who originally posted the request for pros and cons about perl asked for things that had been published, but this forum seems to be the major place where perl topics are published! My experiences and observations may be of interest, since I am not a programmer, but I have had to learn a little about sed and awk and shell programming, and now perl, out of necessity to support our group, a Technical Publications department.

I have written a number of filters that clean up common problems in text files. For example, our current typesetting equipment requires one space to separate sentences, not two, as most people learned in typing class. So sed was the logical choice. I did a bit of that kind of thing on and off -- it is not really my job, but the need was there, and we did not have regular access to programming expertise. Occasionally we have had to extract and rearrange some information from a database, and so I've learned a little bit about awk. Then we developed a system to "centralize" our department archives of manuals in print, and so I learned enough about shell programming to automate that process somewhat.

Then one day we decided that we had enough information to ask a "real programmer" to attack a problem we had been facing for a long time. The work of inserting typesetting codes into a document is tedious, boring, and error-prone. When the people who had been doing typesetting were no longer in the group, or were doing other work, we decided to write a software requirement for a program that would scan a text file and, based on structural clues (like the word CAUTION centered on a line by itself), insert the appropriate typesetting codes for font changes, bolding, centering, and so on. Our original hope was that perhaps 90% of a document could be auto-coded by such clues, leaving the remaining 10% for hand work, which we thought could be done by someone without typesetting expertise.
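That sentence-spacing cleanup comes down to a one-line substitution once it lives in perl; the sample line here is invented:

```perl
# Squeeze the two typing-class spaces after sentence-ending punctuation
# down to the single space the typesetter wants.
$line = "First sentence.  Second one!  Third?  Done.";
$line =~ s/([.!?])  /$1 /g;
print "$line\n";
```

As a real filter you would wrap the substitution in a `while (<>) { ... print; }` loop and run it over the whole file.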
A programmer here at Fluke by the name of Corey Satten had wanted to tackle this problem for some time, but we had never organized the boring, tedious requirements. When we did, back in April of 1988, Corey had been looking at perl and was itching to try it out. Seeing our requirements, he determined that perl would be a good language choice, and in less than a week, his perl script easily met our 90% target. When he left the company, it fell to me to maintain the script he left us; I have been able to permute it into over a dozen variations, covering all the differing formats we use for publishing our manuals. Along the way, I have managed to add a bit more functionality, and only one hurdle keeps us from 99% automatic coding of "clear text files".

We are in the process of purchasing an electronic page-makeup package, and our perl script will become the front end, interfacing on the input side with the files Technical Writers provide; it will generate SGML coding for the page-makeup program. We have come much further than we ever anticipated back in the spring of 1988. I am certain that our success is in large measure due to the many tools that perl brings under the same umbrella. Our only alternative would be a shell script calling an awk script, a few lines of sed, multiple intermediate temporary files, and all in all a generally ugly, hard-to-maintain, twistingly interactive group of programs.

From my perspective, perl is much easier to understand and learn than sed, awk, grep, and shell programming. Because it is one program, there are none of the syntax discontinuities that used to drive me up a wall (Hmmm, that sounds odd for some reason). Sure, as others have pointed out, the syntax may be weird and difficult at times, but at least it is cohesive. In fact, I have even translated my old sed filters into perl, a painless process using s2p.
Last year, we were preparing to publish a manual that required a new feature in the typesetting program, so we asked for help from an engineering group who we knew would be needing the feature in one of their upcoming manuals. They wrote an awk script, then used a2p to translate it to perl. It fit into the larger perl script as a module in need of only minor tweaking. It worked like a charm. Our programs are 200 to 400 lines long, not daunting by any means, but substantial, at least to me, whose shell scripts were usually about one-tenth that length.

It may be true that there are too many ways to do things in perl, but for me, that is a decided plus, since I know that all I really have to do is find ONE of them, and it is going to fly. Too often it has seemed that there is only one way to accomplish a task, and that way was hidden in some tricky corner. The problem I have is simply in seeing an analog between what I am trying to accomplish and any particular feature of the language. But this is a failing in me, not the language.

For example, I totally ignored the "system" command for a long time because its name didn't seem to apply to anything I needed to do ... after all, I wasn't concerned with system tasks, setuid and password files and all that... Having heard that perl is quite well suited to system administration kinds of jobs, I always just passed over "system", assuming it applied only to that kind of programming. How wrong I was! After I learned about the power of the system call, the world opened for me, literally. I finally saw that this command made my entire repertoire of system commands available, just as if I were popping out to the shell for a while.

The major drawback that non-programmers like myself face is that the manual is written at a pretty high level; the descriptions and examples assume you already know, for example, what a system call is.
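For any other non-programmers who passed over it as I did, here is a sketch of what "system" and its backtick cousin buy you; the commands are just illustrations:

```perl
# system() runs any shell command and returns its exit status;
# backticks run a command and hand you its output as a string.
$status = system("true");                 # "true" always succeeds, so 0
print "the shell is at my service\n" if $status == 0;

$words = `echo one two three | wc -w`;    # capture a pipeline's output
$words += 0;                              # numify to trim the whitespace
print "$words words\n";
```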
All indications are that the book will be written at the same level, but will have many more examples and will guide the reader up a less steep learning curve. I'm all for that. I have read the entire manual 7 times, and there are still great gaping areas that I read and about which I remain clueless. Again, this may not be a failing of the manual, but of this reader.

Occasionally, the discussion in this group turns to "perl as a job requirement". Today was the first day of work for a contract programmer whom we hired specifically to program in perl. I have more studying to do before I will be up to the task we have set him, but I know for certain that the language will bend flexibly to his will. We interviewed over a dozen programmers for this position, and the number-one requirement was familiarity with perl.

I am also pleased to note that last week, our corporate Software Technology Group announced that perl would become "officially supported software" effective immediately. Of course, I am all for official support, because as I tell my boss, I need all the gurus I can get! Perl now takes its place here at Fluke in /usr/local, alongside awk and sed and grep; it is no longer "user supported" in /usr/public. The significance of this comes through clearly in a remark I heard only a few weeks ago, to the effect that "software is not given corporate support just because it's cool". It is good to learn that support is not withheld BECAUSE of coolness, either!

I realize that most of what I have written here is anecdotal and may not provide much insight. Then again, all perspectives have merit, and everything I am learning is proving that Tom Christiansen is correct in characterizing perl as a significant and important contribution.
-- 
Gary Benson  -=[ S M I L E R ]=-  -_-_-_-inc@fluke.tc.com_-_-_-_-_-_-_-_-_-

Those who mourn for "USENET like it was" should remember the original design estimates of maximum traffic volume: 2 articles/day.  -Steven Bellovin
logan@rockville.dg.com (James L. Logan) (07/04/90)
In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes: # In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV # (Larry Wall) writes: # | [ . . . ] People really don't know # |how to use computers yet. Sigh.) # # [ . . . ] There really should be an option of # the pager to compare modulo underlining/fat printing. Use the public-domain pager called "less". It can be configured to ignore underlining, boldfacing, etc. In fact, I use it to scan the perl man pages myself. Just another happy hacker, -Jim -- James Logan UUCP: uunet!inpnms!logan Data General Telecommunications Inet: logan@rockville.dg.com 2098 Gaither Road Phone: (301) 590-3198 Rockville, MD 20850
peter@ficc.ferranti.com (Peter da Silva) (07/06/90)
In article <7825@ncar.ucar.edu> cruff@handies.UCAR.EDU (Craig Ruff) writes: > In article <SUA45BF@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes: > >One thing I have found useful is John Ousterhout's TCL: Tool Command > >Language. ... > I used TCL as part of a library on a project, and it turned out to be useful. > However, I would have liked to use a subroutine callable version of perl > instead! Then I wouldn't have had to add all sorts of additional functions > to TCL. Yes, TCL is sort of short in the subroutines department, but I think it makes a better extension language than, say, perl (or REXX, for that matter) because it's such a clean language... like a cross between lisp and awk. This makes it relatively easy to operate on programs as data... something I'd hate to have to do with (say) an algol-like language. I think I'd really prefer a postscript core to the language. Anyone know how to get hold of the author of the Gosling postscript? He doesn't seem to be the Emacs Gosling, and the address in the docco is defunct. -- Peter da Silva. `-_-' +1 713 274 5180. <peter@ficc.ferranti.com>
jbw@zeb.uswest.com (Joe Wells) (07/06/90)
In article <602@inpnms.ROCKVILLE.DG.COM> logan@rockville.dg.com (James L. Logan) writes: In article <814@ehviea.ine.philips.nl> leo@ehviea.UUCP (Leo de Wit) writes: # In article <8497@jpl-devvax.JPL.NASA.GOV> lwall@jpl-devvax.JPL.NASA.GOV # (Larry Wall) writes: # | [ . . . ] People really don't know # |how to use computers yet. Sigh.) # # [ . . . ] There really should be an option of # the pager to compare modulo underlining/fat printing. Use the public-domain pager called "less". It can be configured to ignore underlining, boldfacing, etc. In fact, I use it to scan the perl man pages myself. I like to look at the man page from inside GNU Emacs (where I can use find-tag to jump to the relevant source code with the touch of a key). So I use the Emacs function nuke-nroff-bs to clean up the man page. I've also got a version of nuke-nroff-bs that also correctly strips all types of man page headers and footers, if anyone wants one. On a separate issue, does anyone know where less version 123 is archived? I have a copy I can email to people, but I'd prefer to refer people to a convenient archive. I looked for one a few months ago, but I couldn't find less version 123 anywhere. -- Joe Wells <jbw@uswest.com>