[comp.unix.questions] Why does this shell program run under csh???

MCGUIRE%GRIN2.BITNET@wiscvm.wisc.EDU (06/23/87)

> Date: Tue, 16 Jun 87 18:01:35 EDT
> From: Root Boy Jim <rbj@icst-cmr.arpa>
> Subject:  Why does this shell program run under csh????
>
> Obviously, *some* attempt must be made to determine which shell to use.
> And that complicates matters. One must conform or get bitten.
> The environment variable SHELL comes to mind, but is quickly discarded.
> The information must be *in the script itself*. The #! convention is
> (a start at) a good general solution, as there are bound to be more
> than two shells, just as there are bound to be more than one.
>
> How would you distinguish which shell to run?

Disclaimer:  I grok VMS, and I'm just learning UNIX.  I'm submitting this
because somebody will undoubtedly correct me, and I'll learn something.

Why couldn't the file's name be used to determine what shell to use?  Such
a convention ought to be easy to implement in the kernel.  In a manner of
speaking, the information would be `in the script itself' because the
filename is the unique key that locates the file contents.

For example: .../myshell.sh would be run under sh, but .../myshell.csh
would be run under csh.
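
For concreteness, a session under such a convention might look like this
(the names and files are invented; the point is that the kernel, not the
user, picks the interpreter from the suffix):

    $ cat hello.sh
    echo hello from the Bourne shell
    $ cat hello.csh
    echo hello from the C shell
    $ hello.sh                      (kernel runs:  /bin/sh hello.sh)
    $ hello.csh                     (kernel runs:  /bin/csh hello.csh)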

Ed <MCGUIRE@GRIN2.BITNET>

gwyn@brl-smoke.ARPA (Doug Gwyn ) (06/23/87)

In article <7953@brl-adm.ARPA> MCGUIRE%GRIN2.BITNET@wiscvm.wisc.EDU writes:
>For example: .../myshell.sh would be run under sh, but .../myshell.csh
>would be run under csh.

Consider:  During the early stages of porting my software to a new
hostile (i.e. 4BSD) system, I use a shell script named "cc" to get
things compiled right.  No way am I going to edit a zillion Makefiles
to redefine "cc" to "cc.sh", then later change them back.  (Admittedly
augmented "make" provides a way to accomplish this through use of an
environment variable, but the point is that the name of a command
should encode only the command's function, not information about its
type, creator, size, or other irrelevancies.)
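
Such a wrapper might look something like this (the flags are invented;
the point is only that the script itself is named "cc" and sits in a
directory early in PATH):

    #! /bin/sh
    # stand-in "cc" used while porting; it calls the real compiler
    # by its full path name and passes along whatever it was given
    exec /bin/cc -I/usr/local/include -DBSD42 "$@"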

jerry@oliveb.UUCP (Jerry F Aguirre) (06/25/87)

In article <6014@brl-smoke.ARPA> gwyn@brl.arpa (Doug Gwyn (VLD/VMB) <gwyn>) writes:
>In article <7953@brl-adm.ARPA> MCGUIRE%GRIN2.BITNET@wiscvm.wisc.EDU writes:
>>For example: .../myshell.sh would be run under sh, but .../myshell.csh
>>would be run under csh.
>
>hostile (i.e. 4BSD) system, I use a shell script named "cc" to get
>things compiled right.  No way am I going to edit a zillion Makefiles
>to redefine "cc" to "cc.sh", then later change them back.

I think the suggestion implied that the various programs would
automatically look for the file with the recognized extensions.  Thus if
you type "cc", the shell (including the one invoked by the make command)
would first look for "cc.sh", then "cc.csh", and finally a binary
executable named "cc".

This is the way it works on other systems, and it is really no more
bizarre than the PATH variable that says first check for "/bin/cc", then
"/usr/bin/cc", etc.  One is adding prefixes while the other is adding
suffixes.
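
In Bourne shell terms the suffix search would amount to roughly this (a
real shell would do it in C inside its path search; only the two suffixes
from the example are shown):

    # hypothetical lookup for a command "cc" under a suffix convention
    for dir in `echo $PATH | tr ':' ' '`
    do
        test -f $dir/cc.sh  && exec /bin/sh  $dir/cc.sh  "$@"
        test -f $dir/cc.csh && exec /bin/csh $dir/cc.csh "$@"
        test -f $dir/cc     && exec $dir/cc "$@"
    done
    echo "cc: not found" 1>&2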

Having it check for shell scripts first is convenient as you can write a
"cc.sh" that invokes the "cc" program with special arguments.  On Unix
you are forced to use two different names or hope that the PATH finds
the script first.

Other systems also have file types that can be used to accomplish the
same thing.  Types are assigned for binaries, command files, Basic
programs, etc.  What is really interesting is that other systems usually
have both extensions and file types, and different utilities will use one
or the other to decide if a file is of a particular sort.

For example, under DG AOS, if "foo.cli" doesn't have the file type "CLI
command file", it will still run.  But if "foo.pr" doesn't have the
type "executable program", it won't run.

				Jerry Aguirre

ken@rochester.arpa (Ken Yap) (06/26/87)

|Having it check for shell scripts first is convenient as you can write a
|"cc.sh" that invokes the "cc" program with special arguments.  On Unix
|you are forced to use two different names or [hope that the PATH finds
|the script first.]

No, that still won't work. If you don't reset the PATH within the
script, cc will keep calling itself until you run out of processes.
Normally one specifies the full path within the private cc.
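
In other words, inside the private script the difference is one line
(the -O is just a sample flag):

    cc -O "$@"          # PATH finds this same script again: endless loop
    /bin/cc -O "$@"     # the real compiler, named by its full path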

	Ken

PAAAAAR%CALSTATE.BITNET@wiscvm.wisc.EDU (06/26/87)

Here is my $0.02 worth.
The presence of two languages in a system is always a problem, n'est-ce
pas? Unless, that is, they can be distinguished in some simple manner.
I have a problem on my machine because 'at' tends to generate csh scripts
for users who use 'sh': in other words, some scripts exist which are a
mixture of csh and sh! The result is somewhat unusable. I expect to patch
the code 'real soon now'.
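
For instance (the variable and value are just examples), one line shows
the clash: the csh form fails under sh, and the sh form fails under csh.

    setenv PRINTER lw               # csh; sh says "setenv: not found"
    PRINTER=lw; export PRINTER      # sh; csh says "Command not found"
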
If there was a point at which the trouble could have been avoided, it was
during the design of the C shell's syntax. That would have been the time
to include some upward compatibility. We can learn from the hardware
and language designers here!
Am I getting at Bill Joy? - No! I am willing to bet that I would have chosen
to use the more logical csh syntax and d*** the consequences...
The nesting of ed inside ex inside vi is an example of the kind
of approach I mean.

>Why couldn't the file's name be used to determine what shell to use?

Because there must be 20-100 scripts out there that will have to be
renamed, along with all the scripts that call them, including all the
scripts that users have just thrown together....
Again the General Principle for system designers is that of MNC:
      MINIMUM  NECESSARY  CHANGE
(Credit: Asimov, "The End of Eternity").
I will now go back to wrestling with the code of 'csh', 'at', 'atrun', etc...

Dick Botting, Comp Sci, Cal State, San Ber'do
paaaaar@calstate.bitnet
PAAAAAR@CCS.CSUSCC.EDU
PAAAAAR%CALSTATE.BITNET@WISCVM.WISC.EDU
5500, State University Pkwy, San Bernardino, CA 92407
(714) 887-7368 (voice) (714)887-7365(modem: login as guest)
Disclaimer: "I am only an egg"

chris@mimsy.UUCP (Chris Torek) (06/27/87)

In article <7953@brl-adm.ARPA> MCGUIRE%GRIN2.BITNET@wiscvm.wisc.EDU writes:
>>How would you distinguish which shell to run?

>Why couldn't the file's name be used to determine what shell to use?

You could.  But consider what happens if you do.

>Such a convention ought to be easy to implement in the kernel.

Not that easy, since the kernel forgets file names as soon as it can;
but not hard.  So say we taught the kernel that `.sh' meant /bin/sh,
and `.csh' meant /bin/csh.  That would mean that no programs could
ever be named `foo.sh' (no great loss), and that all shell scripts
would have to be named foo.sh or foo.csh (somewhat annoying---who
*cares* whether lpr is a shell script or a binary or whatever).
It would also mean that after you wrote your FOOGOL interpreter,
you would have to rebuild the kernel to know that `.foogol' meant
/usr/mcguire/bin/foogol.interp, and then after your friend wrote
his BARGOL based on your FOOGOL, that you would have to rebuild it
again.  Of course, you could put a configurable table inside the
kernel that could be loaded at boot time.  But then you would have
to be privileged to alter the table, just as you have to be privileged
to rebuild the kernel; and your favourite extension might have
already been taken; and lots of other things.
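
A boot-time table for such a scheme might look something like this (the
format is purely hypothetical, since no such mechanism exists):

    # suffix        interpreter to exec
    .sh             /bin/sh
    .csh            /bin/csh
    .foogol         /usr/mcguire/bin/foogol.interp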

Using `#! /usr/mcguire/bin/foogol.interp' seems so much simpler,
and more flexible, although it does require that FOOGOL allow
`#! /usr/mcguire/bin/foogol.interp' as the first line of a FOOGOL
program.
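
For a shell everyone already has, the same mechanism reads like this; sh
accepts the first line because # begins a comment:

    #! /bin/sh
    # the kernel execs /bin/sh with this file as its argument;
    # sh then ignores the #! line and runs the rest of the script
    echo hello from a Bourne shell script
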
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	seismo!mimsy!chris

chris@mimsy.UUCP (Chris Torek) (07/01/87)

In article <8030@brl-adm.ARPA> PAAAAAR%CALSTATE.BITNET@wiscvm.wisc.EDU writes:
>If there was a point at which the trouble could have been avoided, it was
>during the design of the C shell's syntax. That would have been the time
>to include some upward compatibility.

As others have mentioned, the C shell was generally available before
the Bourne shell.  The shells were experimental, and not something for
which compatibility was desired.

[later:]
>Again the General Principle for system designers is that of MNC:
>      MINIMUM  NECESSARY  CHANGE
>(Credit: Asimov, "The End of Eternity").

Actually, MNC is a good general principle for those fixing bugs.
For designers of new systems, MNC is quite wrong.  If something is
botched, making it just barely useful is useful, but better is
fixing it entirely, whether the result is compatible or not.  If
it can be fixed entirely, *and* still compatible---via a compatibility
library, say---that is even better, but that is not really required.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
Domain:	chris@mimsy.umd.edu	Path:	seismo!mimsy!chris