jwright@cfht.hawaii.edu (Jim Wright) (10/06/90)
Is the limitation on the number of fields per record in (n)awk
insurmountable?  My data files consist of a julian date in "%13.5f "
format followed by 100 data readings in " %8.2f" format, followed by
a newline.  Pretty tame stuff, but awk, nawk and vi all claim the
lines are too long to deal with.  Other tools have varying success.
Am I screwed or is there an easy way to fix awk?  (Without a source
license.)
--
Jim Wright
jwright@cfht.hawaii.edu
Canada-France-Hawaii Telescope Corp.
jwright@cfht.hawaii.edu (Jim Wright) (10/07/90)
I write:
>Is the limitation of number of fields per record in (n)awk insurmountable?
And the results are in.
use gawk -- no mention if the limitation is still present, but you do
get source to fix it
use perl -- no such limitation
use a2p -- converts awk scripts to perl scripts
Choice three is for me. At least until I learn perl.
--
Jim Wright
jwright@cfht.hawaii.edu
Canada-France-Hawaii Telescope Corp.
oscarh@hpdmd48.boi.hp.com (Oscar Herrera) (10/10/90)
|Is the limitation of number of fields per record in (n)awk insurmountable?
|My data files consist of a julian date in "%13.5f " format followed
|by 100 data readings in " %8.2f" format, followed by a newline.  Pretty
|tame stuff, but awk, nawk and vi all claim the lines are too long to
|deal with.  Other tools have varying success.  Am I screwed or is
|there an easy way to fix awk?  (Without a source license.)

I've run into similar problems in the past but have been able to
work around the limitations with combinations of the fold command
and by redefining the record separator awk uses.  Of course it is
always easier if the data files are preformatted, by whatever
process produces them, with the post-processing by awk in mind.

Here is an example of a reformatted record.  In the BEGIN part of
your awk script you would redefine the record separator to "#":

#
1234567890123.12345
12345678.12
.
.
.
12345678.12
12345678.12
#
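A minimal sketch of that workaround, assuming the data has been
reformatted into the "#"-delimited layout above (filenames and values
here are hypothetical): with RS set to "#", each record spans many
short lines, and awk's default FS treats those newlines as field
separators, so the date lands in $1 and the readings in $2..$NF.

```shell
# Build one record in the "#"-delimited layout: separator, date,
# one reading per line, closing separator.  Three readings stand in
# for the real 100.
{ echo '#'; echo '2448170.50000'; for i in 1 2 3; do echo "10$i.25"; done; echo '#'; } > rec.dat

# Records are now "#"-separated; lines within a record are short,
# so no single input line exceeds awk's buffer.
awk 'BEGIN { RS = "#" }
     NF > 0 { printf "date=%s readings=%d\n", $1, NF - 1 }' rec.dat
```

The NF > 0 guard skips the empty records that fall before the first
"#" and between consecutive separators.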