[comp.databases] CLIPPER array dups problem

maurit@nrtc.nrtc.northrop.com (Mark Aurit <maurit>) (04/25/91)

In a program I take the contents of an ASCII file, read it into a
.dbf and manipulate it, then present it to the user with
achoice(). Unfortunately, the ASCII file contains dups, and I
don't want my picklist to have any. What is the best way to remove
them? Where is the best place - in the .dbf or in
the array? Ascan() and Adel() don't seem to do the trick. I guess I
could try a locate/continue; I'm wondering if anyone has a slick (but
maintainable!) idea.

On input, the ascii file might look like:
   4
   buffed.dbf
   trick.dbf
   stormin.dbf
   norman.dbf
I APPEND it into a database, delete the first line, pack the file, read
it into an array, then achoice() it.
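
Roughly, those steps look like this (a sketch in Clipper 5.0 syntax;
the file names names.txt and picklist.dbf and the field dbname are
placeholders, not anything from the actual program):

    * assumes picklist.dbf has one character field, dbname
    USE picklist NEW
    ZAP                            && start with an empty .dbf
    APPEND FROM names.txt SDF      && pull the ASCII lines in
    GO TOP
    DELETE                         && mark the count line ("4")
    PACK                           && physically remove it
    aFiles := {}
    DO WHILE !EOF()
       AADD(aFiles, TRIM(picklist->dbname))
       SKIP
    ENDDO
    nPick := ACHOICE(5, 10, 20, 60, aFiles)

(Under Summer '87 you would DECLARE a fixed-size array instead of
using AADD().)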

Thanks in advance
Mark Aurit
maurit@nrtc.northrop.com

msholind@csr (Murray Sholinder) (04/25/91)

In article <22963@gremlin.nrtc.northrop.com> maurit@nrtc.nrtc.northrop.com (Mark Aurit <maurit>) writes:
>In a program I take the contents of an ASCII file, read it into a
>.dbf and manipulate it, then present it to the user with
>achoice(). Unfortunately, the ASCII file contains dups, and I
>don't want my picklist to have any. What is the best way to remove
>them? Where is the best place - in the .dbf or in
>the array? Ascan() and Adel() don't seem to do the trick. I guess I
>could try a locate/continue; I'm wondering if anyone has a slick (but
>maintainable!) idea.
>
>On input, the ascii file might look like:
>   4
>   buffed.dbf
>   trick.dbf
>   stormin.dbf
>   norman.dbf
>I APPEND it into a database, delete the first line, pack the file, read
>it into an array, then achoice() it.

One solution to your problem would be to append the ASCII file into a 
database, delete the first line, pack the db, THEN create a UNIQUE index 
of the db on the field where the duplicates occur. 

    e.g. INDEX ON dbname TO namendx UNIQUE

By indexing the db with UNIQUE on and then accessing your db with the 
index active, the duplicates will no longer be visible. This way you won't 
have to spend the time skimming through the db to find dups, mark them for 
deletion, and then pack the db. If it is absolutely necessary to remove the 
dups, you could use the above procedure and then COPY TO a similarly 
structured db while the unique index is active (APPEND FROM won't do here, 
since it reads the source db physically and ignores its index); the new db 
would then contain only unique names without the need for an index.
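
For instance (the field dbname and the file names are assumptions;
COPY TO walks the records through the active index, so only the
unique ones get written):

    USE picklist
    INDEX ON dbname TO namendx UNIQUE
    * with the unique index active, only one record per key is visible
    COPY TO clean          && writes just the unique records
    USE clean              && unique names, no index required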

I hope this helps.

Murray Sholinder
Sholinder Computing
Victoria, BC 

mpd@anomaly.sbs.com (Michael P. Deignan) (04/27/91)

maurit@nrtc.nrtc.northrop.com (Mark Aurit <maurit>) writes:

>I APPEND it into a database, delete the first line, pack the file, read
>it into an array, then achoice() it.

Why not add two commands just before reading the data into the array:

SET UNIQUE ON
INDEX ON <key> TO <tempfile>


This will index the file and leave the index active, so you can then
read the data into an array without dups.
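
In context that might look like this (a sketch in Clipper 5.0 syntax;
dbname and tmpndx are placeholders):

    USE picklist
    SET UNIQUE ON
    INDEX ON dbname TO tmpndx     && dups are filtered out of the index
    GO TOP
    aFiles := {}
    DO WHILE !EOF()
       AADD(aFiles, TRIM(dbname))
       SKIP                       && SKIP follows the index, so dups never appear
    ENDDO
    nPick := ACHOICE(5, 10, 20, 60, aFiles)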

MD
-- 
--  Michael P. Deignan                      / Since I *OWN* SBS.COM,
--  Domain: mpd@anomaly.sbs.com            /  These Opinions Generally
--    UUCP: ...!uunet!rayssd!anomaly!mpd  /   Represent The Opinions Of
-- Telebit: +1 401 455 0347              /    My Company...