[comp.unix.questions] Patching multiple files with same text

root@rdb1.UUCP (Robert Barrell) (03/28/90)

     In a case where several files may have a given group of text lines, and
that group of lines must be replaced in all files by another group of lines,
is there any way to use patch, sed, or awk (sorry, I don't have perl) to perform
such a patch?  The problems I am encountering are:

1) The original text contains multiple lines, and ALL lines of the original must
   be replaced by the patch.
2) The patch is not the same length as the original text.
3) The original text is not always in the same relative position within each
   file, so "diff -e" scripts will only work for one file, due to the specific
   line numbers, and context diffs have the same problem because of the actual
   context (which may be different in all files).
4) Normal "diff" output, run through "patch", DOES work, but isn't specific
   enough about matching the ENTIRE original text, so if a similar group of
   lines appears in a file which doesn't contain the exact original, or if
   such appears BEFORE the original, a false patch can occur.

     I see, in the manuals, that sed has an "N" command for, supposedly, dealing
with multiple lines, but I haven't quite figured out how to use it yet (all the
examples in the manuals are for single lines, and the description of "N" is
rather terse).
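
     As far as I can tell, "N" appends the next input line to the pattern
space with an embedded newline, so a two-line replacement would be something
like this (the line contents here are just placeholders):

sed '/^first old line$/{
N
s/^first old line\nsecond old line$/the new\
text/
}' somefile > somefile.new

but chaining one "N" per extra line gets unwieldy for a large block of text.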

-- 
Robert Barrell      | ...!cbmvax!gvlv2!lock60!rdb1!root | Cody Computer Services
Milo's Meadow BBS   |        root@rdb1.canal.org        | 55 East High Street
login: nuucp or bbs |-----------------------------------| Pottstown, PA   19464
(215) 323-0497      | Business and Police Dept Software | (215) 326-7476

lwall@jpl-devvax.JPL.NASA.GOV (Larry Wall) (03/29/90)

In article <193@rdb1.UUCP> root@rdb1.UUCP (Robert Barrell) writes:
:      In a case where several files may have a given group of text lines, and
: that group of lines must be replaced in all files by another group of lines,
: is there any way to use patch, sed, or awk (sorry, I don't have perl) to perform
: such a patch?  The problems I am encountering are:

Just construct yourself a fake diff listing that has one hunk containing
all the lines, with no context.

1,3c1,8
< This are
< the old
< lines
---
> And these
> are the new
> lines--
> as many
> as you like,
> but make sure
> the line numbers
> are right.

Then use patch.  Depending on the lines, you may be able to generate them
with diff -c0.  Or maybe not, if there are common lines.

The shell script to iterate over all the files is left as an exercise.
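
For instance, if the fake hunk above is saved in a file (say fix.diff; the
name and the *.c glob are just placeholders), the loop might look like:

for f in *.c
do
	patch $f < fix.diff
done

Hunks that fail to apply are saved to a reject file, so you can see which
files didn't match.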

Or GET perl, and say something like

#!/usr/bin/perl -i.bak

eval 'exec /usr/bin/perl -Si.bak $0 ${1+"$@"}'
    if $running_under_some_shell;

$old = <<'EOF';
This are
the old
lines
EOF

$new = <<'EOF';
And these
are the new
lines--
as many
as you like,
and it
doesn't matter
how many lines you
put here
EOF

$old =~ s/(\W)/\\$1/g;		# protect any metacharacters.
undef $/;			# treat each file as one line
while (<>) {			# for each file
    s/$old/$new/;		# (add g to do multiple times in each file)
    print;
}

This will iterate over all the files you mention on the command line
and edit them in place.
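
For example, if the script is saved as fixup (the name is arbitrary) and made
executable:

chmod +x fixup
fixup file1 file2 file3

The -i.bak flag leaves the original of each file behind as file1.bak and so on.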

Larry Wall
lwall@jpl-devvax.jpl.nasa.gov

karish@mindcrf.UUCP (Chuck Karish) (03/29/90)

In article <193@rdb1.UUCP> root@rdb1.UUCP (Robert Barrell) writes:
>     In a case where several files may have a given group of text lines, and
>that group of lines must be replaced in all files by another group of lines,
>is there any way to use patch, sed, or awk (sorry, I don't have perl) to
>perform such a patch?  The problems I am encountering are:

When I have to do this, I write a shell script that runs ex on each
of the files.  'pattern1' and 'pattern2' delimit the text to be removed;
'newtext' is the name of a file containing the replacement text.
'newtext' could easily be entered inline in the script.  ed is faster
than ex if its file size limitation (~128 K) doesn't cause problems.

 *-*-*-*

for file in $*
do

	echo "
$file"
	cp $file $file.new
	ex $file.new << EOF
	1
	/pattern1/,/pattern2/d
	-
	r newtext
	w
	q
EOF

	diff $file $file.new

done

 *-*-*-*

If the diffs look OK, move each $file.new to $file.
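
The final rename is just another loop (same $* convention as above):

for file in $*
do
	mv $file.new $file
done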

skwu@boulder.Colorado.EDU (WU SHI-KUEI) (03/30/90)

'Ed' and a 'here' document will do what you need.  E.g.:

for i in file1 file2 ......
do
ed $i << THE_END
g/RE_1/.,/RE_2/c\\
new text, with each new line escaped\\
.
w
q
THE_END
done

Here RE_1 matches the first line and RE_2 the last line of the text that must
be replaced.  Each line of the new text needs the doubled backslash: the shell
strips one backslash inside the here document, and ed must see a single
trailing backslash on each continued line of a multi-line global command.  The
lone "." ends the inserted text; "w" and "q" then write the file and quit.
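
As a concrete (made-up) example, to replace everything from a line matching
BEGIN OLD through a line matching END OLD in a set of C files:

for i in *.c
do
ed $i << THE_END
g/^BEGIN OLD$/.,/^END OLD$/c\\
the first line of new text\\
and the last\\
.
w
q
THE_END
done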