[net.unix] 'c' shell scripts

jlh%amsaa@sri-unix.UUCP (12/09/83)

From:      John Halliburton (SSAO) <jlh@amsaa>

   At the risk of seeming stupid...can anyone tell me how to read a file
in a csh script?  I have tried six variations on redirection (<) and nothing
seems to work.  What I want to do is read a line from a file into a variable,
use it in various ways, and then read another line into that or another
variable.....any suggestions?
                John Halliburton  Amsaa-oper

mark@umcp-cs.UUCP (12/14/83)

I don't know if this is best, but I use the following to set a 
csh variable to the contents of a file:
	set x = `cat file`
Note that those are backquotes, and of course you can run anything in
there, even pipes.
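
For instance, a pipeline inside the backquotes turns its output into a
word list the same way (the commands and variable name here are just an
illustration):

	# the unique login names currently on the system, as a csh word list
	set users = `who | awk '{print $1}' | sort -u`
	echo $#users users: $users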
-- 
spoken:	mark weiser
UUCP:	{seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet:	mark@umcp-cs
ARPA:	mark.umcp-cs@CSNet-Relay

guidi@pegasus.UUCP (12/14/83)

How do you read lines from a file and then go back to standard input
inside a shell script?  I use the following MODEL (Bourne shell), which
may give some inkling of how to do it with other shells:
    tty=`tty`          # remember where std in is now (the terminal)
    exec < a-file      # the file I want to read
    read line          # read a line from the file
    while [ "$?" = 0 ] # loop until "read" returns an error (EOF)
    do
        echo $line     # process what I read from the file
        read line      # get the next line (if any)
    done
    exec < $tty        # put std in back on the terminal
    echo "terminal input?\c"   # etc.....
    read line
    echo $line

woods@hao.UUCP (Greg Woods) (12/15/83)

The construct

set x=`cat file`

works quite well. In fact, the shell will parse newlines just like blanks.
So if your file contains a list of other files, that looks like

file1
file2
....

then $x expands to "file1 file2 ....". More often, I use something like

foreach f (`cat file`)
....
end
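
For example (the wc bit is just an illustration), filling in the body to
print the length of each file listed in "file":

foreach f (`cat file`)
	echo $f has `wc -l < $f` lines
end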

It should be pointed out that this does have limitations. It works fine
as long as the file in question isn't too long. Otherwise you get an
"Out of memory" diagnostic.

	     Greg "C-shell hacker" Woods
-- 
{ucbvax!hplabs | allegra!nbires | decvax!kpno | harpo!seismo | ihnp4!kpno}
       		        !hao!woods

stan@teltone.UUCP () (12/17/83)

# Here's 3 ways of indexing through a file's contents within a
# C shell script.  Assume the file name is "file".
# (you can even write all the below to some file and run csh on it)
#=================================================

echo "Method 1"
set f = ("`cat file`")	# makes f a list of elements, each being a line of file

# foreach line ($f)	# note that you can't do it with this foreach loop:
# 	echo "$line"	# the list elements get re-split at blanks
# end

@ linenum = 1
while ($linenum <= $#f)
 	echo "$f[$linenum]" # gets each line, including embedded white space.
 	# echo $f[$linenum]   # gets each line, embedded white space collapses
			    # to single spaces.
	@ linenum++
end

	# echo "$f[1]"	# gets the first line of the file
	# echo "$f[$#f]"	# gets the last line of the file
#------------------------------
# limitation on above is the length of the file. (approx. 10,290 bytes)
#=====================================================

echo "Method 2"
set linecount = `wc -l < file`		# have to use '<' here so wc doesn't print the file name.
@ linenum = 1
while ($linenum <= $linecount)
	set line = "`awk 'NR == $linenum  {print;exit}' file`"
		    # above makes $line have a single string value.
			# note: $linenum will be expanded properly
	echo "$line"
	@ linenum++
end

#======================================================
# similar to above, but use 'sed', which might be (and probably is) faster
# than awk.

echo "Method 3"
set linecount = `wc -l < file`
@ linenum = 1
while ($linenum <= $linecount)

    # 2 backslashes at end of each of next 2 lines
set line = "`sed -n '$linenum{p\\
q\\
}' file`"
		# can't put comments after 2 lines with backslashes above
echo "$line"
@ linenum++

end

##### Whew!  Took a while to make all this work, especially since I've
##### never needed the csh to do this before.

jjb@pyuxnn.UUCP (12/20/83)

When I first saw this question, the way I would do it seemed obvious,
but it was interesting to note that none of the posted solutions did it
that way.  With that in mind, I felt I had to get my two cents' worth in:

	while read line
	do
		.
		.
		process the line here
		.
		.
	done <file

I much prefer this solution to using exec; it's really debatable which
one is more efficient.  I don't think there is any basis for comparing
this solution to the use of awk or sed.

			Jeff Bernardis, AT&T Western Electric @ Piscataway NJ
			{eagle, allegra, cbosgd, ihnp4}!pyuxnn!jjb

olson@fortune.UUCP (12/23/83)


While Jeff Bernardis' solution (to the problem of reading sequential
lines from a file in a csh script) is quite nice,  it is a solution
for the BOURNE shell, NOT the csh.  The original question asked
how to do it for the csh.  The equivalent of read is $<, but
that reads from the standard input of the entire while loop,
so the redirect is no good (see note in next para).

The old equivalent was gets (which wasn't a builtin), but even that
is no good, since the csh, unlike the Bourne shell, does not allow you
to redirect the input for a 'here' document.  Hence the
convolutions posted here to accomplish the required task.
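
For plain terminal input, though, $< works fine; a minimal sketch (the
prompt is purely illustrative):

	# $< reads one line from the shell's standard input (normally the terminal)
	echo -n "your name? "
	set name = "$<"
	echo "hello, $name"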

	Dave Olson, Fortune Systems

tim@unc.UUCP (12/31/83)

If you need interpreted sequential line processing of files in UNIX (tm),
why would you ever use anything except awk(1)?
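
For instance, one awk process does the whole job that the shell loops
above spawn a process or two per line for; this (illustrative) one-liner
prints every line of "file" with its line number:

	awk '{ printf "%d: %s\n", NR, $0 }' file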

jeff@heurikon.UUCP (01/01/84)

> From: fair@dual.UUCP (Erik E. Fair)
> I once used `a-truly-ugly-and-evil' way to read successive lines
> of a file for csh. Combinations of head and tail can do the trick:
> 
> # To keep the evil sh away...
> set x=1
> while ($x < ????)
> 	head -$x FILE | tail -1		# or tail -N for however many lines you want
> 	@ x++
> end

Sorry Erik, I think you'll have problems with big files.  At least
we would because our 'tail' has a bug: if the line number is > 99,
strange things happen.  To test, try "tail -nnn bigfile | wc -l".
And you've got the same Un*x port that we do...Un*Plus+.
(Let's both report it!)

Ah, but along the lines of your suggestion there *is* a solution!
I've had good luck using 'sed' instead of the 'head'/'tail' combination.
It's fast and flexible.  Try a variant of this:

	x=3
	oneline=`sed -n $x,"$x"p FILE`
	echo $oneline

-- 
	Jeffrey Mattox, Heurikon Corp, Madison, WI
	{harpo, hao, philabs}!seismo!uwvax!heurikon!jeff
	(That path is correct, despite what the headers might show.)

fair@dual.UUCP (Erik E. Fair) (01/04/84)

I once used `a-truly-ugly-and-evil' way to read successive lines of a file
for csh. Combinations of head and tail can do the trick:

# To keep the evil sh away...
set x=1
while ($x < ????)
	head -$x FILE | tail -1		# or tail -N for however many lines you want
	@ x++
end

It is interminably slow (but then, what can you expect when a two-process
pipeline is invoked to read each successive line of a file?).

	Erik E. Fair	{ucbvax,amd70,zehntel,unisoft,onyx,its}!dual!fair
			Dual Systems Corporation, Berkeley, California

jacob%nrl-css@sri-unix.UUCP (01/04/84)

From:  Rob Jacob <jacob@nrl-css>

What's wrong with using sed rather than head + tail to read successive
(single) lines of a file from a csh script?  It should be faster than
head + tail.  Like this--

	#! /bin/csh
	@ x = 1
	while ($x < ??)
		sed -n ${x}p FILE | ..wherever...
		@ x = $x + 1
	end

Or did the mailer mangle a previous message on this topic?

Rob Jacob
jacob@nrl-css

fair@dual.UUCP (01/04/84)

If you look more carefully at my hack, you will notice that head does the
real dirty work. Granted you can't get tail to pass back more than 99 lines,
but, hey, that's life!

	Erik E. Fair	{ucbvax,amd70,zehntel,unisoft,onyx,its}!dual!fair
			Dual Systems Corporation, Berkeley, California