[comp.unix.questions] VMS: logicals UNIX: links, but...

rfinch@caldwr.UUCP (Ralph Finch) (04/07/89)

VMS has logicals, and UNIX has links, but they are not quite the same.
There is one feature of VMS logicals I would like in Unix--can anyone
tell me if it's available?

We run models, and often want to run the same model, simultaneously,
in the same directory, with two different input files, and produce two
corresponding output files.  In VMS this is easy: we define logical
names which apply only to that process.  Thus, two processes don't
see each other's logical names, and don't interfere with each other.

Is this possible in Unix?  Links are nice, but since they apply to the
directory, and not just the process, they don't work in the above
situation. 

Thanks,
-- 
Ralph Finch
...ucbvax!ucdavis!caldwr!rfinch

gwyn@smoke.BRL.MIL (Doug Gwyn ) (04/08/89)

In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) writes:
>Is this possible in Unix?

Probably using an environment variable would suffice.
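
For instance (a minimal sketch; the variable and file names are made up,
and the program would have to be written to consult the variables, e.g.
via getenv()):

	INFILE=case1.dat OUTFILE=case1.out model &
	INFILE=case2.dat OUTFILE=case2.out model &

Each instance sees only its own bindings, so the runs don't collide.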

ekrell@hector.UUCP (Eduardo Krell) (04/08/89)

In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) writes:

>We run models, and often want to run the same model, simultaneously,
>in the same directory, with two different input files, and produce two
>corresponding output files.

>Is this possible in Unix?

If that is all you want, what's wrong with

command < input1 > output1

and

command < input2 > output2

?
    
Eduardo Krell                   AT&T Bell Laboratories, Murray Hill, NJ

UUCP: {att,decvax,ucbvax}!ulysses!ekrell  Internet: ekrell@ulysses.att.com

chris@mimsy.UUCP (Chris Torek) (04/08/89)

In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) writes:
>We run models, and often want to run the same model, simultaneously,
>in the same directory, with two different input files, and produce two
>corresponding output files.

Instead of using `logicals' or links or symlinks or . . ., just:

	model < input1 > output1 &
	model < input2 > output2 &

If the program uses several input files and creates several output
files, and if it cannot be made to name its files based on a single
argument, either of these more complex (and therefore less desirable)
approaches will work:

subdirectories:
	mkdir subdir1
	mkdir subdir2
		[ put specific inputs into subdirectories ]
	...
		[ put common inputs into both subdirectories ]
	ln common_files subdir1
	ln common_files subdir2
	(cd subdir1; ../model in1 out1 in2 out2) &
	(cd subdir2; ../model in1 out1 in2 out2) &

or (csh syntax) environment variables:
	(setenv INPUT1 inA1; setenv INPUT2 inA2; model &)
	(setenv INPUT1 inB1; setenv INPUT2 inB2; model &)

If you MUST emulate VMS logical symbols, use environment variables.
They are not identical, but they are usually close enough.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

gph@hpsemc.HP.COM (Paul Houtz) (04/11/89)

chris@mimsy.UUCP (Chris Torek) writes:
>In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) writes:
>>We run models, and often want to run the same model, simultaneously,
>>in the same directory, with two different input files, and produce two
>>corresponding output files.
>
>Instead of using `logicals' or links or symlinks or . . ., just:
>
>	model < input1 > output1 &
>	model < input2 > output2 &

   The solutions you recommend are workable in only the simplest 
situation.  If you have a system that uses many programs interacting
with many different data files or data bases, this approach is not
practical.

   If you ask me, this is another area where UNIX is deficient.  Once
you get used to being able to move large complex systems that use hundreds
of files around at will, as you can on VMS and even MPE systems, you run
into significant problems on UNIX.

   By specifying a simple target file name, such as "journ_entries" or
"time_card_edited", you can make a command file that puts logicals in
place to point all these files at the appropriate directories.   If you 
need to run the program multiple times, or against different data (say
test data versus real data) or if you need to run a benchmark and have
multiple instances of the software running at the same time, it becomes
VERY difficult on UNIX.   I have had to change the SOURCE CODE of    
programs in order to get them to open different files.   This is really
a pain.   If you don't happen to have the source (say you bought object
but not source rights) then you are very limited as to where you can
put the program, and what files it accesses.  

   A good example is a program that creates a report destined for a 
printer named, for example "edit.listing".  One may not like the fact
that a "named file" is used instead of stdout, but many such programs
exist in the real world.   So what do you do to run 20 of these programs 
simultaneously?  On a 20 user system, you may have to.   The only way 
I know of is to change the program to make it write to stdout.   This 
isn't too bad if it's only one file, but what if that same program 
reads from 10 files that are unique to each user?   Here's a real world
example:   many businesses run 2 or 3 general ledgers at the same time.
They would probably want to use the same objects, but run them against
different files.   They would probably all run at the same time, say
at month-end.

    I really don't know a good way to solve this problem, except to 
run the three programs on three different machines.

    Does anyone out there know a better solution, or if any vendor has
decided to add this flexibility to UNIX?

jes@mbio.med.upenn.edu (Joe Smith) (04/12/89)

I've also wondered if VMS-style logicals are something missing from
UNIX.  But I came to the conclusion that environment variables could
function almost as well, given a program that was _designed_ to use
them.  Are you sure that there are situations in which a properly designed
program CAN'T work with environment variables?  Aren't there examples
of UNIX programs/systems as complex as you describe that work (I'm
asking, I don't know the answer)?

If it's program design that's the problem, let's not discuss "fixing"
UNIX.

<Joe
--
 Joe Smith
 University of Pennsylvania                    jes@mbio.med.upenn.edu
 Dept. of Biochemistry and Biophysics          (215) 898-8348
 Philadelphia, PA 19104-6059

tj@mks.UUCP (T. J. Thompson) (04/12/89)

In article <810035@hpsemc.HP.COM>, gph@hpsemc.HP.COM (Paul Houtz) writes:
> chris@mimsy.UUCP (Chris Torek) writes:
> >In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) writes:
> >>We run models, and often want to run the same model, simultaneously,
> >>in the same directory, with two different input files, and produce two
> >>corresponding output files.
> >
> >Instead of using `logicals' or links or symlinks or . . ., just:
> >
> >	model < input1 > output1 &
> >	model < input2 > output2 &
> 
>    The solutions you recommend are workable in only the simplest 
> situation.  If you have a system that uses many programs interacting
> with many different data files or data bases, this approach is not
> practical.

The UNIX paradigm is for programs taking input from several files to
have these enumerated on the command line, or listed in a file.
Programs generating several output streams choose output file names
which are:
- constructed to be unique (temp files);
- simple transforms of input file names (cc -c x.c --> x.o);
- specified via command line options (cc -o x x.c);
- (other strategies i have overlooked or not encountered).
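
For instance, from the calling side (a rough sketch; the program name
and its option are invented):

	in=case1.dat				# some input file
	tmp=/tmp/model$$			# unique temporary name
	out=`basename $in .dat`.out		# simple transform: case1.out
	model -o $out $in			# output named by an option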

>    If you ask me, this is another area where UNIX is deficient.  Once
> you get used to being able to move large complex systems that use hundreds
> of files around at will, as you can on VMS and even MPE systems, you run
> into significant problems on UNIX.
[...]
>    By specifying a simple target file name, such as "journ_entries" or
> "time_card_edited", you can make a command file that puts logicals in
> place to point all these files at the appropriate directories.

Programs that read and write ``known'' filenames like this are simply
mis-designed. This practice is a relic of JCL. It is assumed that there
will only be one instance of the ``job'' at a time, and that the user
will tolerate tedious job preparation procedures. Now it is true that
some UNIX programs nonetheless do have built-in file names
(yacc, for example; i know not why).
It is still a bad design practice.

> If you 
> need to run the program multiple times, or against different data (say
> test data versus real data) or if you need to run a benchmark and have
> multiple instances of the software running at the same time, it becomes
> VERY difficult on UNIX.

I have some experience with ``large complex systems'' under VMS, with
attendant command files to set hundreds of logicals. The exercise of
changing the files for some of the programs is fraught with error and
frustration, because it is never obvious which programs use which files.

I suspect that most programs using built-in names on UNIX were ported
(or reimplemented) from some JCL-like system, with no appreciation of
the above-mentioned paradigm. For UNIX programs respecting the paradigm,
retargetting the I/O to different files is trivial.

> [...]
>    A good example is a program that creates a report destined for a 
> printer named, for example "edit.listing".  One may not like the fact
> that a "named file" is used instead of stdout, but many such programs
> exist in the real world.

Many? On UNIX? Not in my experience.

> So what do you do to run 20 of these programs 
> simultaneously?  On a 20 user system, you may have to.   The only way 
> I know of is to change the program to make it write to stdout.

As it should have been in the first place.

> This 
> isn't too bad if it's only one file, but what if that same program 
> reads from 10 files that are unique to each user?

Use command line arguments!
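
For example (a sketch only; the program and file names are made up):

	ledger -m gl1/master -j gl1/journal -r gl1/report &
	ledger -m gl2/master -j gl2/journal -r gl2/report &

Each instance names its own private files, so any number can run at once.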

> Here's a real world
> example:   many businesses run 2 or 3 general ledgers at the same time.
> They would probably want to use the same objects, but run them against
> different files.   They would probably all run at the same time, say
> at month-end.

See above.
-- 
     ||  // // ,'/~~\'   T. J. Thompson              uunet!watmath!mks!tj
    /||/// //|' `\\\     Mortice Kern Systems Inc.         (519) 884-2251
   / | //_// ||\___/     35 King St. N., Waterloo, Ont., Can. N2J 2W9
O_/                                long time(); /* know C */

chris@mimsy.UUCP (Chris Torek) (04/12/89)

>chris@mimsy.UUCP (Chris Torek) writes:
>>	model < input1 > output1 &
>>	model < input2 > output2 &

In article <810035@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>   The solutions you recommend are workable in only the simplest 
>situation.  If you have a system that uses many programs interacting
>with many different data files or data bases, this approach is not
>practical.

Had you read the rest of my article, you would have seen that I
acknowledged this.

>   If you ask me, this is another area where UNIX is deficient.

Hardly.

>   By specifying a simple target file name, such as "journ_entries" or
>"time_card_edited", you can make a command file that puts logicals in
>place to point all these files at the appropriate directories.   If you 
>need to run the program multiple times, or against different data (say
>test data versus real data) or if you need to run a benchmark and have
>multiple instances of the software running at the same time, it becomes
>VERY difficult on UNIX.

Not so.  In fact, I claim it is *easier* on Unix than on VMS, because
a Unix machine comes with the tools you need to automate the creation
of such scripts (awk, grep, sed, lex, yacc).
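
For instance, a run script per data set can be stamped out from a
template (a rough sketch; run.template and its @DATA@ placeholder are
invented for the example):

	for d in test real
	do
		sed "s;@DATA@;/usr/gl/$d;g" run.template > run.$d
		chmod +x run.$d
	done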

>I have had to change the SOURCE CODE of programs in order to get them
>to open different files.

If the source is poorly written, or not adapted to the Unix environment,
you may have this sort of problem.  (As long as the code does not attempt
to use pathnames, the solution---which I already gave you---to this
sort of problem is to use subdirectories.)

>   A good example is a program that creates a report destined for a 
>printer named, for example "edit.listing".  One may not like the fact
>that a "named file" is used instead of stdout, but many such programs
>exist in the real world.

Many other idiocies exist in the real world; most of them also have
workarounds.

>So what do you do to run 20 of these programs simultaneously?

	for i in user1 user2 user3 ... user20; do
		mkdir $i; ln ../common/* $i; (cd $i; ../../common/prog) &
	done

>    Does anyone out there know a better solution, or if any vendor has
>decided to add this flexibility to UNIX?

It is already there.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

chris@mimsy.UUCP (Chris Torek) (04/12/89)

In article <757@mks.UUCP> tj@mks.UUCP (T. J. Thompson) writes:
>Programs that read and write ``known'' filenames like this are simply
>mis-designed.

Quite so.

>... it is true that some UNIX programs nonetheless do have built-in
>file names (yacc, for example; i know not why).

Because their authors goofed.  `Well known' names and defaults are
fine, as long as they can be overridden%.  But in particular, if the
program has a single input and a single output (as does yacc), the
input should come from stdin and the output go to stdout, unless
overridden with command line arguments.  (Yacc has reason to want
its input to come from a file, so that it can pass information to
the C compiler in case the C code in the parser is incorrect.  But
it really should be used as `yacc foo.y > foo.c'.)
-----
% Make's `makefiles' are a good example: make reads [Mm]akefile, or
  the file(s) specified via -f arguments.  Typically one uses only a
  single makefile, but the tool is not crippled, merely convenient.
-----

>I have some experience with ``large complex systems'' under VMS, with
>attendant command files to set hundreds of logicals. The exercise of
>changing the files for some of the programs is fraught with error and
>frustration, because it is never obvious which programs use which files.

`Just change 'em all' :-) :-(

>>... One may not like the fact that a "named file" is used instead
>>of stdout, but many such programs exist in the real world.

>Many? On UNIX? Not in my experience.

A number of commercial systems available for Unix machines *are*
remarkably poorly thought out.  Most of the tools that come with the
system do not have this sort of trouble, but some of the ones you can
buy do.  Fortunately, they can be worked around with links or symlinks
or the like.  If the author hard codes full pathnames *and* provides no
way to override them, you may be out of luck, but this situation is
comparable to someone hard coding specific file ID numbers in a VMS
program.  Such a program should not be put aside lightly: it should be
thrown with great force.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

jdc@naucse.UUCP (John Campbell) (04/12/89)

From article <16880@mimsy.UUCP>, by chris@mimsy.UUCP (Chris Torek):
: In article <757@mks.UUCP> tj@mks.UUCP (T. J. Thompson) writes:
:>Programs that read and write ``known'' filenames like this are simply
:>mis-designed.
: 
: Quite so.
: 
:>... it is true that some UNIX programs nonetheless do have built-in
:>file names (yacc, for example; i know not why).
: 
: Because their authors goofed.  `Well known' names and defaults are
:
: or the like.  If the author hard codes full pathnames *and* provides no
: way to override them, you may be out of luck, but this situation is
: comparable to someone hard coding specific file ID numbers in a VMS
: program.  Such a program should not be put aside lightly: it should be
: thrown with great force.

I remember being quite surprised to discover (as I was learning unix)
that common practice was to open full pathname entities on unix.  Instead
of the nice abstraction that I was used to on VMS, I found references to
/usr/dict/words, /usr/lib/tex/fonts, /usr/spool/uucp, etc.  This is
so common and insidious that I have both /usr/local and /local on my
3b1 at home because some silly-assed program referenced some data file
it needed as /local (whereas I like everything in /usr/local).

If you happen to work on a unix system without source you must adhere
to full pathnames (with *no* way to override them) for much of your
system.  /lib, /bin, /usr/ucb (why is this one on my SysV machine!!!)
etc. all seemed, when I started out, to violate my sensibilities.  Now
I'm afraid I've just given up and joined the "unix" camp...
-- 
	John Campbell               ...!arizona!naucse!jdc
                                    CAMPBELL@NAUVAX.bitnet
	unix?  Sure send me a dozen, all different colors.

ok@quintus.UUCP (Richard A. O'Keefe) (04/14/89)

In article <475@caldwr.UUCP> rfinch@caldwr.UUCP (Ralph Finch) asks:
> [How to simulate VMS "Logical Names"]
chris@mimsy.UUCP (Chris Torek) replies:
> [redirection, symlinks, logical variables]
In article <810035@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) says:
> [That's not good enough.]

Houtz's basic point is that there are lots of programs with hardwired
file names in them, and that there is a practical need for something
much more like logical names than Torek's recommended substitutes.

I take leave to doubt this.  It has always been good style in UNIX
for programs to read from standard input and write to standard output,
and to take other file names either from the command line, from an
environment variable, or from a data set.  Anyone who, or any company
that, offers for sale a program to be used under UNIX which uses hard-wired
file names for the user's input and output data sets is so incompetent
(or so contemptuous of the UNIX environment) that you would be foolish
to trust any other aspect of the program.

Houtz says:
>I have had to change the SOURCE CODE of programs in order to get them
>to open different files.   This is really a pain.

Yes it is.  But it isn't UNIX's fault.  It's the fault of a programmer
who couldn't give a damn about his customers.  UNIX provides all the
tools you need to write programs which do not have this problem.

Chris Torek showed how to use environment variables in the C shell.
In the Bourne shell it is even easier:
 % master=db.mst update=todays.upd log=error.log my-db-update-prog

Another common practice for UNIX programs is to have a command line
argument (or environment variable) which specifies one or more
directories where relative file names are to be looked for.
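
A sketch of that convention from inside a script (the variable name
DATAPATH and the file names are invented):

	DATAPATH=${DATAPATH:-"$HOME/gl/test /usr/lib/gl"}
	for d in $DATAPATH
	do
		if test -r $d/journal
		then
			journal=$d/journal
			break
		fi
	done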

I would like to point out that there are severe conceptual problems
with VMS logical names.  Amongst other things, there can be file
names X, Y such that X translates to Y (and translation terminates
at that point), while Y translates to X (and translation terminates
at that point).

There is no difficulty in running any number of instances of a
well-written general ledger program under UNIX.  Why buy a bad one?

We have a library package which lets a program apply variable and
tilde expansion to a file name if the programmer so chooses.  It was
easy to write.  (About 80 lines of C, excluding comments.)  We have
another package which lets you search through a list of directories
for a file (like logical names with multiple alternatives).

If a program contains hard-wired file names, then you have to know
what those file names are in order to use it.  Suppose for argument's
sake that they are called tom, dick, and harry.

	#!/bin/sh
	# shoddy RealTom RealDick RealHarry
	# invokes the really-shoddy program with files tom, dick,
	# and harry associated with $RealTom, $RealDick, $RealHarry.
	# You can pass the real file names as arguments; if they are
	# omitted they will be taken from the environment variables
	# of the same name.
	case $# in
	  0) ;;				# use $tom, $dick, $harry
	  1) tom=$1 ;;			# use $1,   $dick, $harry
	  2) tom=$1 dick=$2 ;;		# use $1,   $2,    $harry
	  3) tom=$1 dick=$2 harry=$3 ;;	# use $1,   $2,    $3
	  *) echo "Usage: shoddy tom dick harry" >&2 ; exit 1 ;;
	esac
	here=`pwd`			# current directory
	case ${tom:?} in		# error if tom is unbound
	  /*) t=$tom ;;			# tom is already absolute
	  *)  t=$here/$tom ;;		# prepend current directory
	esac				# t is absolute version of tom
	case ${dick:?} in		# error if dick is unbound
	  /*) d=$dick ;;		# dick is already absolute
	  *)  d=$here/$dick ;;		# prepend current directory
	esac				# d is absolute version of dick
	case ${harry:?} in		# error if harry is unbound
	  /*) h=$harry ;;		# harry is already absolute
	  *)  h=$here/$harry ;;		# prepend current directory
	esac				# h is absolute version of harry
	WorkingDir=/usr/tmp/shoddy$$	# $$ identifies this process
	mkdir $WorkingDir
	cd $WorkingDir
	ln -s $t tom			# ./tom points to $tom
	ln -s $d dick			# ./dick points to $dick
	ln -s $h harry			# ./harry points to $harry
	really-shoddy			# run the really shoddy program
	s=$?				# save its status code
	rm tom dick harry		# clean up files
	cd $here			# just to be at a definite place
	rmdir $WorkingDir		# clean up directory
	exit $s

It's possible to simplify this:  I have a tool 'sl' which converts its
file name argument to an absolute file name and then makes a symbolic
link, so I could just do
	sl ${dick:?} $WorkingDir/dick
instead of using the case statements above.  The 'trap' construct
should be used to make sure that the cleanup happens.  And so on.
If the program uses relative file names as well as hard-wired file
names, you're still in trouble, but you're likely to be in worse
trouble if you _do_ succeed in running the program!
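
(For instance, a rough sketch: right after the mkdir one could say

	trap 'cd $here; rm -rf $WorkingDir; exit 1' 1 2 15
	trap 'cd $here; rm -rf $WorkingDir' 0

and drop the explicit cleanup lines at the end; the scratch directory
then disappears even if really-shoddy is interrupted.)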

What was that about "it is better to light a single candle than to
curse the darkness?"

ok@quintus.UUCP (Richard A. O'Keefe) (04/14/89)

In article <757@mks.UUCP> tj@mks.UUCP (T. J. Thompson) writes:
>Programs that read and write ``known'' filenames like this are simply
>mis-designed. This practice is a relic of JCL. It is assumed that there
>will only be one instance of the ``job'' at a time,

This is a little bit unfair to MVS.  (Which is hard.)  MVS interposes
"DD names" in between the program and the data sets.  The effect of this
is to make it IMPOSSIBLE to wire a file name (in MVS jargon, "DS name")
into a program.  [Well, not absolutely impossible, but you have to know
a friendly necromancer.]  The file names you use in a VS/FORTRAN program,
for example, are *all* analogous to environment variables, every last
one of them.  That's a pain too, but it's a different pain.

The problem is that VMS fails to draw a distinction between DD names
(environment variables) and DS names (file path-names), thus encouraging
people to get muddled between the two.  One of the pains this can lead
to in VMS is that if someone is *supposed* to set up a logical name to
specify the input for a program, but forgets, the program may happen to
find a file of that name by mistake.  Another pain is having several
programs which use the same logical names, but inconsistently...

For real heroic fun, see the equivalent of logical names in
VM/CMS Version 6 (look in the Shared File System manual).

--- 
MVS, the operating system that likes to say "DIE, sucker!"

ok@quintus.UUCP (Richard A. O'Keefe) (04/14/89)

In article <1337@naucse.UUCP> jdc@naucse.UUCP (John Campbell) writes:
>I remember being quite surprised to discover (as I was learning unix)
>that common practice was to open full pathname entities on unix.  Instead
>of the nice abstraction that I was used to on VMS, I found references to
	^^^^^^^^^^^^^^^^
>/usr/dict/words, /usr/lib/tex/fonts, /usr/spool/uucp, etc.

If you can give me a self-contained description of the rules for
VMS version 5.0 logical name translation, taking one normal-sized piece
of paper and using a 10-point or larger font, then I will concede that
VMS has a "nice abstraction", and I shall further send you US$100.
Use any well-known functional notation: ML, Miranda, SASL, KRC, FP...
If you can do it in two pages, I'll send you $50 (I could use the clear
description that DEC didn't see fit to provide), but I won't call something
that complex a "nice abstraction".

As for the UNIX way of getting to standard files, the names have to be
stored *somewhere*.  System utilities are entitled to put information
anywhere the system designers chose to; those programs are not intended
to work with any other file, and this approach keeps those names out of
the user's "logical name" space.  Any add-on product which is sold to
people ought to come with an installation script which people can edit
to place the files in the directory of their choice.  (This makes it a
lot easier for the customers to install an upgrade and keep on running
the old version for a while.)

For another method of locating files, look at the various .*rc files.
Consider 'mail': it picks up either $MAILRC if that is defined or
$HOME/.mailrc if it is not.  In that file you can say
	DEAD=filename		# save botched messages in this file
	folder=directory	# where to save mail files
	record=filename		# save copies of outgoing messages here
amongst other things.  So you can build up a table of what files you want
the mailer to use, and say
	% MAILRC=that/file mail
You can easily set up a script to generate that file dynamically, if you
want.
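
A rough sketch of such a script (the file names are invented):

	rc=/tmp/mailrc$$
	echo "DEAD=$HOME/dead.letter"    >  $rc
	echo "folder=$HOME/Mail/project" >> $rc
	MAILRC=$rc mail
	rm -f $rc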

gph@hpsemc.HP.COM (Paul Houtz) (04/15/89)

ok@quintus.UUCP (Richard A. O'Keefe) writes:

>Houtz's basic point is that there are lots of programs with hardwired
>file names in them, and that there is a practical need for something
>much more like logical names than Torek's recommended substitutes.

>I take leave to doubt this.  It has always been good style in UNIX
>for programs to read from standard input and write to standard output,

Okay, let me put it to you from my perspective.   I AGREE with you if
you are talking about programs that are designed on a Unix system.
However, I work in a center who's main focus is to port software from
non-unix platforms to unix.   Now, none of these software packages are
designed on unix systems.   Therefore, they aren't designed to write
to stdout, because there ain't any stdout.

There are far more applications running on IBM, VMS, MPE, etc., platforms
than unix. 

If Unix is to be THE operating system of the future, it is going to have
to make an effort to accommodate the billions (sorry Carl) of lines of
application code that exists for those systems.  All of those systems allow
you to associate a LOGICAL or VIRTUAL file name with a PHYSICAL file name.
It doesn't seem like such an illogical thing for an OS to do.   There are
perfectly good General Ledger systems out there by Price Waterhouse and
Arthur Anderson, and there are thousands of people who already know how
to use them.  It is absurd to say that you are going to redevelop all new
GL systems and retrain users because unix doesn't have LOGICAL file names.
It is also absurd to say that you are going to redevelop all applications that
weren't originally developed on unix because they don't use good unix
programming conventions. 

It would seem to me that in this case, unix is Mohammed, and that huge pile
of application software is the mountain.  Now, do you want to make the
mountain move to Mohammed?   I can tell you what will happen.   Companies
have enormous investments in this software, and they don't want to change
source or go to enormous trouble to modify JCL to get to unix.  So many 
won't.   It would be much easier for all this IBM, VMS, and MPE software
to move to unix if this BASIC functionality was there.   Which is more
economical, as the saying goes, moving Mohammed or the Mountain?   At best,
you are going to have many software developers spending man-years implementing
tedious fixes like the one Torek recommends.   At the worst, you are 
going to have 14 different versions of LOGICAL file names, a different 
one implemented by every vendor of Unix systems.

You say there are conceptual problems in using VMS logicals.  Are you 
so sure there are no conceptual problems in using unix?  How about 
aliases?   Conceivably you could alias one command to another and
then that command to the first command.   The same EXACT conceptual
problem you mention with logicals.   Unix solves it simply by not
allowing it.  No problem.  You could do the same with logical file
names.   Don't allow recursive definitions.   However, if you say there
is a conceptual problem there, then JUST THE SAME problem exists with
unix aliases.

Finally, when I hear people tell me that unix SHOULDN'T do something, I 
wonder just who is playing God.  It sounds like a circular argument.  
Are you saying that Unix shouldn't do it because it isn't a good thing,
or are you saying that it isn't a good thing because Unix doesn't do it?

steve@nuchat.UUCP (Steve Nuchia) (04/15/89)

The analogy between "unix aliases" and VMS LOGICALS is not
at all accurate.  Unix has no aliases.  Certain Unix applications,
notably csh and ksh, have alias-like features.

There is already a perfectly useful and well-understood kernel
mechanism for associating a LOGICAL name with a PHYSICAL file --
links.  In fact there are two ways -- hard links and symbolic
links.  Your system doesn't have symbolic links?  Mine doesn't
either, but that's just because I'm too poor to get real Unix,
and even AT+T will have symlinks once Sun gets through giving
them SVR4.

The correct way to deal with billions of lines of JCL and associated
applications code is to implement compilers and/or interpreters, as
appropriate, for them.  It is the responsibility of the language
environment to implement the semantics of the language, and if that
semantics demands LOGICAL names then find a way to do it.  I'll eat
my V7 manual if you can point to an important (in terms of number of
applications that depend on it) feature that can't be implemented
on top of a merged BSD/SYSVR3 kernel.

For logicals specifically, set up some twinky little aliases
or commands to manipulate symbolic links residing in ~/.logicals
(or $HOME/.LOGICALS.DIR if you prefer).  If you can't require
symbolic links then you will also need a database of logical->physical
mappings to go with a directory full of hard links.
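
A sketch of such a command, call it "deflog", assuming symbolic links
(the name and paths are invented):

	#! /bin/sh
	# deflog NAME /real/path/name -- bind NAME in $HOME/.logicals
	mkdir $HOME/.logicals 2>/dev/null
	rm -f $HOME/.logicals/$1
	ln -s $2 $HOME/.logicals/$1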

An even better way to access all that antique code is to find a
museum that will let you run an ethernet to their 370 and run it
in a window on your workstation.

Personal opinion mode, continued: the widely-held belief that
rewriting code is prohibitively expensive is a fallacy.  It is
very dear to the hearts of management types, but it is bad economics.
The economic value in a software product resides in its crystallization
of the problem it solves, not in the billions of lines of FORTRAN
that it was implemented in, iteratively, back in the sixties and
that can't be changed now because no one understands it.  Code
that can't be maintained has *negative* value.
-- 
Steve Nuchia	      South Coast Computing Services
uunet!nuchat!steve    POB 890952  Houston, Texas  77289
(713) 964 2462	      Consultation & Systems, Support for PD Software.

ok@quintus.UUCP (Richard A. O'Keefe) (04/16/89)

In article <810036@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>ok@quintus.UUCP (Richard A. O'Keefe) writes:
>>I take leave to doubt this.  It has always been good style in UNIX
>>for programs to read from standard input and write to standard output,
>
>Okay, let me put it to you from my perspective.   I AGREE with you if
>you are talking about programs that are designed on a Unix system.
>However, I work in a center whose main focus is to port software from
>non-unix platforms to unix.   Now, none of these software packages are
>designed on unix systems.   Therefore, they aren't designed to write
>to stdout, because there ain't any stdout.
>
>There are far more applications running on IBM, VMS, MPE, etc., platforms
>than unix. 

I don't know about MPE.  I don't even know what the initials mean.
But what on earth do you think SYSOUT and SYSPRINT are in MVS if not
equivalents of standard output?  And what is SYSIN but an equivalent
of standard input?  Having
	//SYSOUT  DD  (DSN='FOO.BAZ',DISP=MOD)
in a JCL deck is just like doing
	some-command >>foo.baz
in a UNIX shell.

And what on earth do you think the logical names SYS$INPUT, SYS$OUTPUT,
and so on are in VMS?  The VAX-11 C compiler thinks they are stdin, stdout,
and so on.  Both C and PL/I provide you with a way of obtaining input from
a "standard" source (which is bound in the environment of the program's
caller) and sending output to a "standard" sink (similarly bound) without
mentioning the ``name'' of the file at all.  (printf() in C, PUT LIST or
PUT EDIT in PL/I.)

If you have source, all the editing it takes is this:

	#include <stdio.h>

	FILE *envopen(envname, access)
	    char *envname, *access;
	    {
		extern char *getenv();
		char *filename = getenv(envname);

		if (filename == NULL) return NULL;
		return fopen(filename, access);
	    }

Then replace the misbegotten fopen(...) calls by envopen(...).
Any company which is porting a program from some operating system to UNIX
and cannot take the trouble to include a 6-line function to take care of
this problem is such a lousy bunch of incompetents that this is the least
of your worries.

>All of those systems allow
>you to associate a LOGICAL or VIRTUAL file name with a PHYSICAL file name.

So does UNIX.  The method is called "environment variables".  It is very
very easy to use.

>Companies
>have enormous investments in this software, and they don't want to change
>source or go to enormous trouble to modify JCL to get to unix.

Come now, next thing you'll say that "sh" statements should start with //.
Look, I've watched people try to port VMS programs to MVS.  Not a pretty
sight.  I've got more news for you:  IBM have *two* important operating
systems for the /370 range (there are more).  They are MVS and CMS.

	MVS	has no equivalent of VMS logical file names.
		It *does* have an equivalent of environment variables
		(namely, DD definitions).

	CMS	has no equivalent of VMS logical file names.
		It *does* have an equivalent of environment variables
		(several, in fact: REXX variables and GLOBALV among them).
		It also simulates MVS DD definitions with FILEDEF cards,
		but they are *not* VMS-style logical file names.

The distinction I am trying to draw is this:  in UNIX, or in CMS, or in
MVS (if you have a routine which opens a file by its DSNAME), when you
specify a file name in a program, that is *exactly* the file you get,
not a file with some other name.  The scheme is

	"environment" --> "file name" --> file
	getenv, DD,	  open, fopen,    (VTOC, directories)
	GLOBALV, ...	  dsna, ...	  
	   ^		  ^
	   +--------------+-------- programs can enter at either point

If a CMS program has a hard-wired file name in it, such as
	'LATEX PROFILE A1'
for argument's sake, there is no way of mapping that name to something else,
and CMS users do not expect there to be one.

I'd also like to cite the Burroughs B6700 MCP (the last version I used
was III.0).  In that operating system, a program declared a file like this:
	FILE MYDICT(TITLE = "(CCC002S)DICTIONARY/BASE ON MYPACK", ...);
There are two things named here, the file interface block (called MYDICT),
and the file it refers to (/mypack/ccc002s/dictionary/base, as it were).
In a shell script (WFL program), you can say
	RUN MYPROG; FILE MYDICT(TITLE = "(MAT212X)DICTIONARY/REVISED");
In effect, the "internal name" MYDICT acts like an MVS DDNAME, and you
can over-ride default information in the program in the shell script.
But the program gets another chance:  if you say
	MYDICT.TITLE := "(CCC002S)DICTIONARY/BASE ON MYPACK";
in the program, that over-rides what the shell script may have specified,
and there is absolutely nothing that a shell script or any other environment
can do about this.

In summary, UNIX, CMS, MVS, and MCP have *two* kinds of names:
	file names, and file name names.
The means whereby the mapping from file name names to file names is
maintained differ between the systems, but the essential idea is the same,
and in none of them can the environment alter the meaning of a file name,
only the
binding of a file name name.

The exception is not UNIX, it is VMS.  VMS mingles file names and file
name names (logical names) in one confused mess.  For example, it is
possible to create a file in VMS, write to it, close it, then use the
identical sequence of characters (with *no* intervening change to
the logical name tables) to request that a file be opened for input,
and get a different file!

>Are you 
>so sure there are no conceptual problems in using unix?  How about 
>aliases?

UNIX has no aliases.  There is *NO* kernel support for aliases whatsoever.
They are a feature of the C shell.  I write Bourne shell scripts.  If you
call a program using exec(), you do not see aliases.

>Finally, when I hear people tell me that unix SHOULDN'T do something, I 
>wonder just who is playing God.  It sounds like a circular argument.  
>Are you saying that Unix shouldn't do it because it isn't a good thing,
>or are you saying that it isn't a good thing because Unix doesn't do it?

Let me make my position perfectly plain.
(1) Every operating system should provide some means whereby the caller
    of a program may provide an association between tokens used by a
    program and actual files which those tokens are to refer to.
	***UNIX DOES THIS***
(2) Since any number of programs may use the same token for different
    purposes, it is important that it should be easy to make this association
    on a call-by-call basis.
	***UNIX DOES THIS***
(3) Conversely, some tokens may be regarded as "well known", so it should
    be possible to make the association on a relatively permanent basis.
	***UNIX DOES THIS***
(4) It is important that a program should be able to contain some form of
    reference to a file which is immune to sabotage by malicious users.
	***UNIX DOES THIS*** (only the super-user can chroot())
(5) To encourage the porting of programs, it should be easy to open a
    file using a token to refer to its name.
	***UNIX DOES THIS*** (see envopen() above)

I *do* most vehemently state that UNIX should not copy VMS's "logical name"
mess.  I think I am finally beginning to understand ADA.  VMS's logical
names are still too hard for me.  (It may be the documentation at fault.)
It is superfluous to do so, because UNIX ***ALREADY*** provides all the
tools you need to handle the file name name approach in PRECISELY the way
that you do it under CMS.

If there are programs offered for sale under UNIX which have hardwired
file names in them which the user has legitimate reason to rebind, the
reason for this is that the developers or porters of the program either
were incompetent or just didn't give a damn.  It is *EASY* for them to
let you bind file name names to file names in the environment.

kent@ssbell.UUCP (Kent Landfield) (04/16/89)

In article <6457@nuchat.UUCP> steve@nuchat.UUCP (Steve Nuchia) writes:
>The correct way to deal with billions of lines of JCL and associated
>applications code is to implement compilers and/or interpreters, as
>appropriate, for them.  

Ahhhhh... I can see it now!

      F UID   PID  PPID CP PRI NI  SZ  RSS WCHAN    STAT TT  TIME COMMAND
   8001   0  1376     1  0   1  0 2080  304 select   S    ?  10:00 jcl_emul
   8001   0  1377     1  0   1  0 1040  304 select   S    ?   6:00 hasp_emul
   8001   0  1378     1  0   1  0 5020  304 select   S    ?   1:00 vms_emul

			-Kent+

rang@cpsin3.cps.msu.edu (Anton Rang) (04/16/89)

Warning: I'm somewhat biased on this topic (read my .sig).

With that out of the way...I agree that VMS logical names can be
unnecessarily complex if you choose to utilize all of their features.
Maybe some of the things you can do with them are unnecessary.
However, here are two uses I really like.  (One of these really isn't
"logical names" as such, but a difference in program execution.)

1.  You can use a logical name to change which shareable image is
    loaded with a program.  I believe BSD 4.3 has shareable libraries;
    is there some way to do this?  (Why, you ask?  Suppose you have a
    graphics library which can support N different output devices.
    You write a new output driver and want to use it with your
    existing programs.  What's the best method?)  This isn't critical,
    but I'm curious....  (more details @ end of message)

2.  You can define a logical name to translate to more than one name.
    While this definitely can be confusing, it does let me essentially
    "merge" two directories.  For instance, I could keep libraries in
    two directories (perhaps writeable by different users).  Using a
    name like LIB:FRED (where LIB is my logical name), both
    directories will be searched (in the order defined by the name),
    and if a FRED file is found, it will be used.

    "Aha!" you say.  "Use symbolic links!"  Yes, but...suppose that
    directory /LIB1 is owned by USER1, and directory /LIB2 by USER2.
    Let there be a global directory /LIB.  Then adding a file to LIB1
    or LIB2 necessitates adding a new symbolic link in /LIB.  What's
    the best way to deal with this?  (It's come up here in connection
    with manual pages--I'd like to make /usr/man/manl really be a
    search path through directories for the various packages we have,
    and allow new manual pages to be added by the person managing each
    package without necessitating superuser intervention.)

This is not intended to generate flames.  Take the above with a grain
of salt; I've only used UNIX for two years or so, and I've had about 4
years experience with VMS, so I know it somewhat better.

			Anton

--------------------
  A (hopefully more clear) explanation of my graphics package example:

		$ PASCAL MY_PLOT			-- user writes program
		$ LINK MY_PLOT, SYS$LIBRARY:GRAPHLIB	-- user links it up

  Later, to run it:

		$ ASSIGN VT340 DI_DEVICE		-- pick device
		$ RUN MY_PLOT				-- run program

  Now the program will run with VT340 as the device.  There is a
device-independent (D-I) library and a device-dependent (D-D) one.
I'd like a technique for doing this.  At the moment, the best I've
come up with is using an environment variable for the device name,
then using that (in the D-I part) to fork a process to run the D-D
part, then communicating over a socket.  Since there is LOTS of
bidirectional comm. going on (i.e. sending bunches of 4x4 matrices at
each step), this is slow (I've tried it with a prototype).
  Ideas?

+---------------------------+------------------------+----------------------+
| Anton Rang (grad student) | "VMS Forever!"         | "Do worry...be SAD!" |
| Michigan State University | rang@cpswh.cps.msu.edu |                      |
+---------------------------+------------------------+----------------------+

rang@cpsin3.cps.msu.edu (Anton Rang) (04/16/89)

This is my last note on this topic (I can hear the cheering now).  One
thing which *nobody* has yet pointed out is that VMS logical names can
be (1) persistent, and (2) not just local to a process.  This can be
useful.  Why?  Here's an example.

I write a server using mailboxes (VMS)/sockets(UNIX).  I'm not a
superuser, just an ordinary person.  Now, other people in my group
(remember this) want to be able to access the server via a client
which I've also written.  How can they get to it?

(1) VMS.  Here, I have my program installed with GRPNAM privilege,
    letting me write into the group table.  I create a logical name,
    SERVMBX perhaps, with the name of the mailbox which my program was
    assigned.  The client program opens SERVMBX, and is connected to
    my mailbox (talking to the server).  All is fine and dandy.

(2) UNIX.  I don't know sockets as well, so please bear with me.
    Assume first that I'm using UNIX-domain sockets.  I can then
    create a socket, named /.../some-directory/some-name.  The client
    can look in this place.  BUT...how can I handle the case where
    there is more than one server on the system?  (I.E., how will the
    client know where to look?)  Also, if I want this program to be
    easily portable between systems, I don't want to hard-code a
    directory path.  I can use an environment variable instead.
    BUT...how can the client find the socket without the user having
    to manually set up this environment variable?  (In the Internet
    domain, things get even more complicated--but that's a different
    issue since it really deals with intermachine communication.)

Thoughts?  I would also like to be able to have a program which sets
up environment variables.  This way, I could create a program (called
'setup' perhaps) which would assign a default search path to my
environment variable, and the user could type 'setup' to get the
environment right before they ran the program.
  However, as far as I can tell, there is no way to programmatically
set (for later use by a shell) environment variables.  True?  False?
(If false, how do you do it???)
  These arguments also apply to system-wide packages.  Suppose I
install SuperPackage.  It wants an environment variable SPDIR to
contain its search path.  Is writing a shell script to first set this
variable, then run the package, the only way to make it accessible to
users?  If so, what happens to the unsuspecting user who sets SPDIR to
~/my_sp and expects it to look there?
  As I said, my last posting (unless something here needs
clarification.)  I think I have philosophical differences with some of
the others discussing this, and shouldn't waste time on a public forum
arguing them.

		Anton

+---------------------------+------------------------+----------------------+
| Anton Rang (grad student) | "VMS Forever!"         | "Do worry...be SAD!" |
| Michigan State University | rang@cpswh.cps.msu.edu |                      |
+---------------------------+------------------------+----------------------+

mtsu@blake.acs.washington.edu (Montana State) (04/17/89)

How can Unix Environment vars provide all of the same features as VMS logicals?
In VMS most of the logicals can be set up as system-wide, such that every
user has the logical in his environment.  There isn't any easy way for me
to force a user to execute a startup script that sets up all of the
environment variables for him.  

gph@hpsemc.HP.COM (Paul Houtz) (04/17/89)

ok@quintus.UUCP (Richard A. O'Keefe) writes:

> And what on earth do you think the logical names SYS$INPUT, SYS$OUTPUT,
>and so on are in VMS?  The VAX-11 C compiler thinks they are stdin, stdout,

Wrong.  Wrong.  Wrong.   The names are similar.  Some of the functionality
is similar.  But they are NOT stdout and stdin.  Not unless the functionality
is identical.  You see, Unix stdout and stdin are used a lot by programmers
because there is a REASON.  You can use them in pipes and you can easily
redirect them.   There is no such reason on VMS.  So they sound the
same, but they aint.

> UNIX has no aliases.  There is *NO* kernel support for aliases whatsoever.
> They are a feature of the C shell.  I write Bourne shell scripts.  If you
> call a program using exec(), you do not see aliases.

The first time I heard this, it made me mad.  I mean, who, after all, uses
Unix and uses only the bourne shell and never uses an ALIAS?  Seemed like
a pretty stupid argument.  

But then, I realized,  you are absolutely right.  The place for aliases 
and logicals is not in UNIX, but in the shell.   Probably what will happen
is that a new shell will eventually appear that will have this functionality
which UNIX does NOT have, and that will solve the problem.


>Let me make my position perfectly plain.
>(1) Every operating system should provide some means whereby the caller
>    of a program may provide an association between tokens used by a
>    program and actual files which those tokens are to refer to.
>	***UNIX DOES THIS***

Of course Unix doesn't do this.  It does some of this, but not all.  
Example:  I recently ported a cobol system written in a latin american
country that ran 10 jobs sequentially and they all wrote to PRINTER-LISTADO.
It was a benchmark, so I was not allowed to change source.  I could not use 
ln because that would overlay the results each time.   So I had to 
write a script to mv the PRINTER-LISTADO file to some other file at the
end of each job.  It seems, and is, trivial, but it is an example of how
incomplete the unix "logical" functionality is (in my opinion).
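
The script was nothing fancy; something along these lines (the job names
here are invented):

	n=1
	for job in job01 job02 job03
	do
		$job
		mv PRINTER-LISTADO LISTADO.$n
		n=`expr $n + 1`
	done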

Anyway, your note is very enlightening.  I appreciate the information you 
have provided.   It is unfortunate that this notes string generates at 
least as much heat as light.   I admit that I contributed some of the heat
as well.

gwyn@smoke.BRL.MIL (Doug Gwyn) (04/17/89)

In article <1604@blake.acs.washington.edu> mtsu@blake.acs.washington.edu (Montana State) writes:
>There isn't any easy way for me to force a user to execute a startup
>script that sets up all of the environment variables for him.  

It's called /etc/profile.
Cshell users?  They should log in to a Bourne shell and their .profile
should contain:
	exec csh

ok@quintus.UUCP (Richard A. O'Keefe) (04/17/89)

In article <2560@cps3xx.UUCP> rang@cpswh.cps.msu.edu (Anton Rang) writes:
>One thing which *nobody* has yet pointed out is that VMS logical names
>can be (1) persistent, and (2) not just local to a process.

People who know VMS don't need to be told.
People who don't know VMS probably aren't interested in the discussion.

I don't quite follow what Rang wants the server example to do, mainly
because the implementation is described rather than the _problem_.
There is a problem with the proposed VMS hack:  what happens if Rang's
server smashes the FOOBAZ name in my group logical name table, and I
_also_ want to use Lazarus Long's server which _also_ smashes FOOBAZ?
For that matter, how do I run two copies of Rang's server at once?

The limited amount of IPC hacking I've done under UNIX has been managed
by having the client create a socket or a set of named pipes and shipping
the socket name or the names of the named pipes in the connection request.
(In the BSD world, don't overlook the possibility of passing file
descriptors in a message.)
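
A rough sketch of the named-pipe variant (the rendezvous file and the
request format are invented):

	rep=/tmp/reply.$$
	/etc/mknod $rep p			# the client's private reply pipe
	echo $rep > /usr/spool/myserv/request	# the server's well-known pipe
	read answer < $rep
	rm -f $rep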

>I would also like to be able to have a program which sets
>up environment variables.  This way, I could create a program (called
>'setup' perhaps) which would assign a default search path to my
>environment variable, and the user could type 'setup' to get the
>environment right before they ran the program.

It is usual for such programs to be shell scripts.  Here we have quite
a few such scripts; I can set up the "development" environment by doing
	. development		# Bourne shell
	source development	# C shell
You can also do things like
	eval `echo foo=1 baz=2`
where the command between the backquotes writes out a set of assignments.
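
So the 'setup' program Rang asks for can simply write assignments on its
standard output for the caller to eval (a sketch; the names are invented):

	echo 'SPDIR=/usr/local/lib/superpackage; export SPDIR'

and the user types

	eval `setup`

in the same way one says eval `tset -s` to get TERM and TERMCAP set up.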

By the way, it is straightforward to set up scripts like this which can
be sourced by either the Bourne shell or the C shell.  "development"
would look like this:
	test {a} = "{a}" && .  /usr/local/scripts/development.sh
	test {a} = a && source /usr/local/scripts/development.csh
The development.sh version contains lines like
	export foo; foo=something-or-other
and the development.csh version contains lines like
	setenv foo something-or-other
It works.

>  These arguments also apply to system-wide packages.  Suppose I
>install SuperPackage.  It wants an environment variable SPDIR to
>contain its search path.  Is writing a shell script to first set this
>variable, then run the package, the only way to make it accessible to
>users?  If so, what happens to the unsuspecting user who sets SPDIR to
>~/my_sp and expects it to look there?

If the author of the script for SuperPackage was any good, he wrote
	SPDIR=${SPDIR:-/the/default/value/if/unset/or/null}
or something like that in the script, and it _will_ look where the user
asked it to.  And no, shell scripts aren't the only possible answer:
aliases (C shell) and shell functions (Bourne shell) are two others.
Perl scripts are a third.

>As I said, my last posting (unless something here needs
>clarification.)  I think I have philosophical differences with some of
>the others discussing this.

The more I study MVS and CMS, the better I like VMS.  I think that when
the computer centre at my home University decided that they wanted to
replace their IBM painframe by VAXes running VMS, they made the right
decision, and that UNIX wouldn't suit their needs quite as well.  (Most
of the departments that would benefit more from UNIX are already using
it.)  But simply bodging one of the messier features of one operating
system into another is _not_ a good approach to design.  The main
philosophical difference is that between "but I wanna do it *MY* way"
and "let's master the tools we've got before changing them".

ok@quintus.UUCP (Richard A. O'Keefe) (04/17/89)

In article <1604@blake.acs.washington.edu> mtsu@blake.acs.washington.edu (Montana State) writes:

>How can Unix Environment vars provide all of the same features as VMS logicals?

They don't, they can't, and they shouldn't.
The _problems_ which VMS logical names solve can for the most part be
solved quite easily using existing UNIX mechanisms, that's the point
at issue.  Environment variables are part of it, so are links.

>In VMS most of the logicals can be set up as system-wide, such that every
>user has the logical in his environment.  There isn't any easy way for me
>to force a user to execute a startup script that sets up all of the
>environment variables for him.  

I'm sorry to disappoint you, but yes there is.  In fact, several ways.
In BSD-derived systems, there is a file /etc/gettytab (described in
man 5 gettytab).  Basically, to put the bindings X=a Y=b Z=c and so on
into everyone's initial environment, you add the entry
	:ev=X=a,Y=b,Z=c
to the default: entry in that file.  That isn't executing a startup
script, admitted, but it does set up environment variables.  Alternatively,
you could set the entry
	:lo=/usr/local/bin/mylogin
When getty has done its processing, it execs a program, which is normally
/bin/login.  'lo' overrides that.  Your program can add things to the
environment, and then exec /bin/login.  In _any_ UNIX system, you can get
the effect you want by editing /etc/passwd.  Where people have /bin/sh
as their shell, put /usr/local/bin/initsh.  Where people have /bin/csh
as their shell, put /usr/local/bin/initcsh.  Disable the 'chsh' command
(if you have it) so that users can't change it back.  Have the init*sh
programs set up the environment and then exec the corresponding shell.
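
A minimal sketch of such a wrapper, glossing over login-shell details
(the paths and the variable are invented):

	#! /bin/sh
	# /usr/local/bin/initsh: set up the environment, then run the real shell
	SPDIR=/usr/local/lib/superpackage
	export SPDIR
	exec /bin/sh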

I don't guarantee the gettytab method, never having been superuser to
try it.  My point is that instead of moaning about how UNIX wouldn't
let me do something, I spent 5 minutes looking through the manuals and
found how I _could_ do it (if I knew the right password...).

If you are content to _assist_ your users rather than _force_ them to
execute your script, there's an easy way.  First create a copy of your
setup script for each supported shell.  When you create a new account,
put the line
	. /usr/local/scripts/setup.sh
in ".profile" (which the Bourne shell sources at login) and put the line
	source /usr/local/scripts/setup.csh
in ".login" (which the C shell sources at login).  Do this to or for
existing users as well.  People who know what they are doing can take
the line out.

ok@quintus.UUCP (Richard A. O'Keefe) (04/17/89)

In article <10058@smoke.BRL.MIL> gwyn@brl.arpa (Doug Gwyn) writes:
>In article <1604@blake.acs.washington.edu> mtsu@blake.acs.washington.edu (Montana State) writes:
>>There isn't any easy way for me to force a user to execute a startup
>>script that sets up all of the environment variables for him.  
>
>It's called /etc/profile.

The method I suggested (hacking /etc/gettytab) is a BSD-ism.
The method Doug Gwyn suggests is a SysV-ism, not available on this
copy of SunOS 3.5, but obviously the right way to go.

steve@nuchat.UUCP (Steve Nuchia) (04/17/89)

Excuse me, but isn't the reason that VMS logicals are "global" that
they get put in a script that everybody executes at login time?  Or
was that "SYMBOLS"?

Need I point out that this isn't too tough in Unix?
-- 
Steve Nuchia	      South Coast Computing Services
uunet!nuchat!steve    POB 890952  Houston, Texas  77289
(713) 964 2462	      Consultation & Systems, Support for PD Software.

ok@quintus.UUCP (Richard A. O'Keefe) (04/18/89)

In article <810037@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>ok@quintus.UUCP (Richard A. O'Keefe) writes:
>
>> And what on earth do you think the logical names SYS$INPUT, SYS$OUTPUT,
>>and so on are in VMS?  The VAX-11 C compiler thinks they are stdin, stdout,
>
>Wrong.  Wrong.  Wrong.   The names are similar.  Some of the functionality
>is similar.  But they are NOT stdout and stdin.  Not unless the functionality
>is identical.  You see, Unix stdout and stdin are used a lot by programmers
>because there is a REASON.  You can use them in pipes and you can easily
>redirect them.   There is no such reason on VMS.  So they sound the
>same, but they aint.

Oh well, then, by _that_ criterion *nothing* in VMS has an equivalent in UNIX.
What an easy way to knock down a straw man.  May I respectfully suggest that
before someone claims that VMS hasn't got redirection, they look in the DCL
manual?  Specifically at the RUN command.

Bourne shell:	myprog <source >destination 2>error-log

VMS DCL:	RUN MYPROG /INPUT=SOURCE /OUTPUT=DESTINATION /ERROR=ERROR-LOG

And then of course there is DEC/Shell, but that's another story.  (UNIX is
not the only system with two different CLIs.)

Then again, when you read the Guide to Programming on VAX/VMS (you _did_
read it, didn't you?) you found that Fortran makes it easy to read from
SYS$INPUT (use UNIT=* or omit the unit entirely or use ACCEPT) and write
to SYS$OUTPUT (use UNIT=* or omit the unit entirely or use PRINT or TYPE).
In fact that chunk of the manual explicitly says "A person using your
program can REDIRECT input and output ...".  Then too, LIB$GET_INPUT and
LIB$PUT_OUTPUT encourage you to use SYS$INPUT and SYS$OUTPUT.

Maybe you don't want to call the DCL /INPUT ... facility redirection, but
that's what DEC call it.  And there is no reason why you can't redirect
to a mailbox if that's what takes your fancy.
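
(The nearest UNIX analogue to redirecting into a mailbox is a named pipe,
on systems that have them -- System V does.  An untested sketch, path
invented:

	/etc/mknod /tmp/mbx p			# make a FIFO, our "mailbox"
	myprog <source >/tmp/mbx 2>error-log &
	reader </tmp/mbx			# another process consumes it

but that is a side issue.)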

Stop knocking VMS!  (:-)

dhesi@bsu-cs.bsu.edu (Rahul Dhesi) (04/21/89)

In article <810040@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>The code I just ported was from an IBM System/34.  It would have been 
>easy to run the benchmark on a VMS system, because it has LOGICALS that
>are LOCAL to the user.   It was a real PAIN because Unix doesn't.  And
>in my opinion there is NO "badly" written code.   There is only code
>that is less portable or less maintainable.

Seldom do we see so many rebuttable arguments in the same paragraph.

>The code I just ported was from an IBM System/34.  It would have been 
>easy to run the benchmark on a VMS system, because it has LOGICALS...

I assume "ported" in this context means "modified so it would run on a
new system".  But the code did not run under UNIX.  This means that
your port wasn't complete.  You left the hard-coded filenames in.  As
has been suggested before, removing them would have been fairly easy.
Don't blame the target system for not accepting code that isn't
correctly ported.
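
"Removing them" need not amount to much: let the environment supply the
names and the same binary serves every run.  An untested sketch, assuming
the program is changed to fetch the names with getenv() ("benchmark" and
the variable names are stand-ins):

	INFILE=run1.in OUTFILE=run1.out benchmark &
	INFILE=run2.in OUTFILE=run2.out benchmark &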

>It would have been 
>easy to run the benchmark on a VMS system, because it has LOGICALS that
>are LOCAL to the user.   It was a real PAIN because Unix doesn't.

It turns out that VMS has a clearly-defined syntax for its logical
names.  (Well, precisely-defined, at least, if not very clearly.)  A
logical name like ":yzzy:abc$^&*" will definitely not work.  Your
program did not use filenames with such a syntax.  Was it simply
coincidence that you (or the original programmer) used names that VMS
would accept?  Now if you (or the original programmer) had indeed used
":yzzy:abc$^&*" as a filename, VMS would have choked on it but UNIX
would have handled it just fine.  In that case could we then conclude
that VMS was broken and UNIX was perfect?

>...in my opinion there is NO "badly" written code.   There is only code
>that is less portable or less maintainable.

It doesn't matter whether you classify the code that you were running
as badly-written, less portable, or less maintainable.  Whichever it
was, it wasn't what UNIX was designed to run.  Neither will UNIX
correctly run code like

	  #include stdio
	  main() {int i; i = SYS$QIO ( ... etc. ...); }

Not badly-written, necessarily (VMS C being a different kettle of fish),
just not portable.  And UNIX won't run this, thank heavens.

P.S.  Just to make you think, try this with VMS C.

	 $ CC /standard /define=DEBUG /define=VMS  /define=BIG myprog.c

(a) The switch /standard is interpreted to mean "don't check for
conformance to standard."  (b) Only the last define has any effect.
-- 
Rahul Dhesi <dhesi@bsu-cs.bsu.edu>
UUCP:    ...!{iuvax,pur-ee}!bsu-cs!dhesi

Kemp@DOCKMASTER.NCSC.MIL (04/22/89)

Like all flame wars, this one is becoming not only tedious, but
incoherent.  The latest, from Rahul Dhesi:

 > It turns out that VMS has a clearly-defined syntax for its
 > logical names.  (Well, precisely-defined, at least, if not very
 > clearly.)  A logical name like ":yzzy:abc$^&*" will definitely
 > not work.  [ . . . ]

If you could read, you would know that the argument is about the
semantics of VMS's logical name facility, not the syntax of the
identifiers used as path/file/logical names.

I don't have anything useful to add to this pile, except to note that I
too have found logical names to be convenient, and the suggested unix
surrogates (envars, .profiles, symbolic links, etc) to be not as
convenient.  I agree with the suggestion made earlier that if a third
party were to provide a logical name facility for unix, the marketplace
would decide on its merit.  They would have at least one customer.

  Dave Kemp <Kemp@dockmaster.ncsc.mil>

epmcmanus@csvax1.cs.tcd.ie (04/24/89)

In article <1009@quintus.UUCP>, ok@quintus.UUCP (Richard A. O'Keefe) writes:
> Bourne shell:	myprog <source >destination 2>error-log
> 
> VMS DCL:	RUN MYPROG /INPUT=SOURCE /OUTPUT=DESTINATION /ERROR=ERROR-LOG

In fact this is not really what you want in VMS, since /input etc will cause
the program to be run in a subprocess, whereas ordinarily programs are all
run in the same process.  So the real VMS syntax is:

$ define/user sys$input source
$ define/user sys$output destination
$ define/user sys$error error-log
$ run myprog

which I think is sufficiently gross to discourage people from using
redirection as the main technique to specify files for programs to use.
-- 
Eamonn McManus		emcmanus@cs.tcd.ie	uunet!mcvax!cs.tcd.ie!emcmanus

madd@bu-cs.BU.EDU (Jim Frost) (05/01/89)

In article <810040@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
|And
|in my opinion there is NO "badly" written code.   There is only code
|that is less portable or less maintainable.
|
|To say that the code is badly written is a value judgement,  and it's 
|unfair to the thousands of programmers that are writing valuable useful 
|code but have never seen or used a UNIX system.   

If you think there is no "badly" written code, you haven't worked in
the same environments I have.

If code written for a commercial system is broken and/or
unmaintainable, it is "bad" by my standards and by those of customers
I have dealt with.  Good code can be non-portable, but code that is
unmaintainable or just plain broken is just plain bad.

[Of course most people haven't seen forty nested '?' ':' statements
which surround a 'case' statement in a C program, nor other similar
programming practices that I ran across recently.  If that's not bad
code, it's at least ignorant.]

jim frost
madd@bu-it.bu.edu

gph@hpsemc.HP.COM (Paul Houtz) (05/03/89)

chris@mimsy.UUCP (Chris Torek) writes:

>Now we begin to see the real problem.  You want customers to be able
>to take their 10 billion line Fortran or COBOL programs and run it on
>a Unix machine, without changing it.  (The chance that your customers'
>programs are written in C is vanishingly small, so I picked two more
>likely targets.)
>So:  Where in the F77 standard does it say that
>	OPEN (UNIT=6, FILE='FOO.BAR', DISP='NEW')
>*must* map to the Unix system call
>	fd = creat("foo.bar", 0666);
>
>  I submit that the problem is, not that the Unix system does not
>provide the means, but rather that your compilers do not take care of
>the task.  It is true that (e.g.) the 4.3BSD f77 compiler does not do

  Yes.  Now you are getting to the jist of the problem.  But you are 
now offloading the functionality I want in unix to the compiler.   

  Don't get me wrong.  That may be OK.   But on VMS if I have developed
a large application system and expected all the gl data files to be in 
the directory "gld" and all the executables to be in "gle"  and all the
parameters (startup, user customization, etc.) in "glp", I can simply
set up logicals for those files:  gle: disk1:gl/mysoft/executables,
gld: disk1:gl/mysoft/data, glp: disk1:gl/mysoft/parms.   By modifying 
the logicals file, the software is portable around the directory 
structure WITHOUT re-compiling.   Now, I won't really be able to port
that "portability" to unix.  
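
The closest I can come on unix is to collect the same three definitions in
a file and have every run source it first -- an untested sketch, with names
of my own invention:

	# gl.logicals -- ". gl.logicals" before running anything in the package
	GLE=/disk1/gl/mysoft/executables
	GLD=/disk1/gl/mysoft/data
	GLP=/disk1/gl/mysoft/parms
	export GLE GLD GLP

But that only helps if every program is also taught to look at $GLD and
friends, which is exactly the re-work I am trying to avoid.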

chris@mimsy.UUCP (Chris Torek) (05/03/89)

In article <810047@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>  Yes.  Now you are getting to the jist of the problem.
[You mean `gist'.  Normally I skip over these, but this one has been
jrating :-) on me lately.]

>But you are now offloading the functionality I want in unix to the compiler.   

Well, actually, `runtime system'.  File name interpolation has to be done
at run time, not compile time (obviously).

>  Don't get me wrong.  That may be OK.   But on VMS if I have developed
>a large application system and expected all the gl data files to be in 
>the directory "gld" and all the executables to be in "gle"  and all the
>parameters (startup, user customization, etc.) in "glp", I can simply
>set up logicals for those files:  gle: disk1:gl/mysoft/executables,
>gld: disk1:gl/mysoft/data, glp: disk1:gl/mysoft/parms.   By modifying 
>the logicals file, the software is portable around the directory 
>structure WITHOUT re-compiling.   Now, I won't really be able to port
>that "portability" to unix.  

No problem:

	$ PATH=/gl/mysoft/bin:$PATH; export PATH
	$ F77LIBINPUTPATH=.:/gl/mysoft/parms; export F77LIBINPUTPATH
	$ cd /gl/mysoft/data; prog &
	1234
	$ cd ../data2; prog &
	1235
	$ cd ../data3; prog &
	1236

or similar.  The PATH and F77LIBINPUTPATH (or COBOL_RUNTIME_INPUT_FILE_PATH
[such a name seems appropriate for COBOL :-) ]) commands can be in a
script, for convenience.
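
For instance (untested; F77LIBINPUTPATH is the hypothetical name used above):

	# glsetup.sh -- source it once, then start the runs
	PATH=/gl/mysoft/bin:$PATH;		export PATH
	F77LIBINPUTPATH=.:/gl/mysoft/parms;	export F77LIBINPUTPATH

after which

	$ . glsetup.sh
	$ for d in data data2 data3; do (cd /gl/mysoft/$d; prog &); done

starts the three runs.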
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris

gwyn@smoke.BRL.MIL (Doug Gwyn) (05/03/89)

In article <810047@hpsemc.HP.COM> gph@hpsemc.HP.COM (Paul Houtz) writes:
>Now, I won't really be able to port that "portability" to unix.  

There is a lesson to be learned there -- and it's NOT that UNIX
provides deficient support for portable applications!