[comp.lang.modula2] lost mail

JIM@UCF1VM.BITNET (Jim Ennis) (10/05/87)

+=========================================================================
+Received: from Princeton.EDU by PUNFS.Princeton.EDU on 10/03/87 at 17:36:37 EDT
+Received: by Princeton.EDU (5.51/1.38)
+    id AA04807; Sat, 3 Oct 87 17:34:28 EDT
+To: info-modula-2@ucf1vm.BITNET
+Path: phoenix!princeton!udel!gatech!hao!oddjob!gargoyle!ihnp4!alberta!myrias!sj
+From: sjl@myrias (Stuart Lomas)
+Newsgroups: comp.lang.modula2
+Subject: Re: Sets
+Message-Id: <532@myrias.UUCP>
+Date: 2 Oct 87 22:45:25 GMT
+References: <INFO-M2%87093023532046@UCF1VM>
+Organization: Myrias Research, Edmonton
+Lines: 52
+
+
+> How big should a set be?
+>
+> Wirth created the idea of a bitset because it would be easy to implement
+> a very efficient way to do sets the size of the computer's word.  This
+> means sets are likely to be 8, 16 or 32 bits, possibly other sizes,
+> depending on the implementation.
+>
+> Bruce MacLennan, in his book Principles of Programming Languages, presents
+> an idea which I quite agree with.  "The only reasonable numbers are zero,
+> one, and infinity."
+>
+> Since zero and one are not very useful numbers when you're dealing with
+> sets (they're rather special-case sets), I would propose that the limit
+> be infinity.  The actual limit would be dependent on the available memory
+> for storing the set, but it could be a very large number.  Since the type
+> of each set is known, the compiler could determine how much memory is
+> required to store the set.  Now, we have the ability to say something
+> like SET OF CARDINAL, since most implementations have cardinal values of
+> 0 .. 65535.  If you were to store one member per bit, the entire set
+> could be represented in 8K of memory, assuming 8-bit bytes.  This is a
+> reasonable size block of memory to consider even on older 8 bit machines.
+>
+>                                                      Tom Ruby
+>                                                   MSRS002@ECNCDC
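+Tom's arithmetic holds up: 65536 possible CARDINAL values at one bit each
+is exactly 8192 bytes. A minimal sketch of such a set, written in C purely
+for illustration (the type and function names are my own invention, not
+from any Modula-2 compiler), might look like:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical bit-vector implementation of SET OF CARDINAL (0..65535):
   65536 bits = 2048 32-bit words = 8192 bytes. */
#define SET_BITS  65536u
#define SET_WORDS (SET_BITS / 32u)

typedef struct { uint32_t word[SET_WORDS]; } CardinalSet;

/* Empty set: clear every bit. */
static void set_init(CardinalSet *s)             { memset(s->word, 0, sizeof s->word); }
/* INCL(s, e): set bit e. */
static void set_incl(CardinalSet *s, uint16_t e) { s->word[e >> 5] |=  (uint32_t)1 << (e & 31); }
/* EXCL(s, e): clear bit e. */
static void set_excl(CardinalSet *s, uint16_t e) { s->word[e >> 5] &= ~((uint32_t)1 << (e & 31)); }
/* e IN s: test bit e. */
static int  set_in(const CardinalSet *s, uint16_t e) { return (s->word[e >> 5] >> (e & 31)) & 1; }
```

+Membership, inclusion, and exclusion are each a single shift, mask, and
+word access, so the full 0..65535 set costs no more per element than a
+one-word BITSET.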
+
+Hear, hear! The only reason for limiting set sizes is to reduce (by a tiny
+amount) the work required to implement the compiler. Long bit vectors are
+not difficult to implement, and can be very useful. The presence of long
+bit vectors in no way reduces the efficiency of short bit vectors, so it
+is fallacious to claim that set sizes must be limited for efficiency
+reasons. The element type of a set should be allowed to be ANY SCALAR TYPE,
+including LONGINT and LONGCARD (and, of course, CHAR).
+
+That being said, I must give some credit to the opposition. I think the
+proposal here is that these long sets be implemented as bit vectors. It
+certainly is a major problem for the compiler implementer if they are to
+be anything else. Long bit vectors are NOT an efficient way to represent
+large sparse sets, but they are a nice way to handle large dense sets like
+bitmaps. Any user who has an application for large sparse sets should
+choose some other representation (such as binary trees) and write a
+module to handle them.
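+For instance, such a user-written module might store only the members
+actually present. A rough C sketch of the binary-tree idea (an unbalanced
+tree, and entirely my own illustration rather than anything proposed above):

```c
#include <stdlib.h>

/* A sparse set as a binary search tree: space proportional to the number
   of members present, rather than one bit per possible element. */
typedef struct Node { long key; struct Node *lo, *hi; } Node;

/* Insert key; inserting an existing member is a no-op (set semantics). */
static Node *sparse_incl(Node *t, long key) {
    if (t == NULL) {
        Node *n = malloc(sizeof *n);
        n->key = key; n->lo = n->hi = NULL;
        return n;
    }
    if (key < t->key)      t->lo = sparse_incl(t->lo, key);
    else if (key > t->key) t->hi = sparse_incl(t->hi, key);
    return t;
}

/* Membership test: walk down the tree. */
static int sparse_in(const Node *t, long key) {
    while (t != NULL) {
        if (key == t->key) return 1;
        t = (key < t->key) ? t->lo : t->hi;
    }
    return 0;
}
```

+A set holding three widely scattered LONGINTs costs three nodes here,
+where a bit vector would have to span the whole range between them.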
+
+Now for the crux of the matter. I believe that it should be explicit in
+any eventual Modula-2 standard that "sets" are indeed "bit vectors". It
+is simply not possible to choose an implementation for large sets that
+will handle every application well, so the language-supported implementation
+(if there is to be one at all) should be the simplest available.
+
+Stuart Lomas
+Myrias Research Corporation
+Edmonton, Alberta, Canada
+{ihnp4,ubc-vision,rutgers}!alberta!myrias!sjl