[comp.sys.ibm.pc] AutoCad -- How much memory overhead for a block?

ag0@k.cc.purdue.edu (Colin Jenkins) (01/21/87)

	I am working with an art professor at Purdue University who wants to
integrate some of his artwork with AutoCad.  He has a Targa high-res graphics
card and an optical digitizer.  Between the two, he can produce color images
of near photographic quality (RGB, 400x512, 32000 colors).  His goal is to
take a picture file generated by the Targa imaging system and, after a little
manipulation, load it into AutoCad, representing each pixel as a block.
Because of the large size of the Targa image file, we use the Targa software
to first grey the image (eliminating color) and then "mosaic" the image so
that the screen resolution is effectively quartered.  After that step we read
the resulting picture file in quarters to further reduce the output file
size.  The result is an output file representing a selected quadrant of the
original digitized image at a quarter of the resolution.  The number of
blocks in the resulting file is 50 x 256 = 12800.  Is this a large number of
blocks for AutoCad?
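
	For what it's worth, the pixel-to-INSERT conversion can be sketched
roughly as the C program below.  This is only a sketch, not our actual
converter: the raw one-byte-per-pixel input file, the file names, the PIXEL
block (a predefined unit square), and the brightness threshold are all
assumptions for illustration, and the exact INSERT prompt sequence can vary a
little between AutoCad releases.

/* pixels_to_scr.c -- sketch: one AutoCad INSERT per dark pixel.          */
/* Assumed input: the greyed, quartered image dumped to a raw file of one */
/* unsigned byte per pixel, row by row, top row first.  Assumed drawing:  */
/* a block named "PIXEL" (a unit square) is already defined.              */

#include <stdio.h>

#define WIDTH     256   /* quadrant size from above (50 x 256)            */
#define HEIGHT    50
#define THRESHOLD 128   /* pixels darker than this get a block            */

int main(void)
{
    FILE *raw = fopen("quadrant.raw", "rb");
    FILE *scr = fopen("quadrant.scr", "w");
    int row, col, pixel;

    if (raw == NULL || scr == NULL)
        return 1;

    for (row = 0; row < HEIGHT; row++) {
        for (col = 0; col < WIDTH; col++) {
            pixel = getc(raw);
            if (pixel != EOF && pixel < THRESHOLD)
                /* INSERT <block> <point> <X scale> <Y scale> <rotation>  */
                /* y is flipped so the picture is not upside down         */
                fprintf(scr, "INSERT PIXEL %d,%d 1 1 0\n",
                        col, HEIGHT - row);
        }
    }
    fclose(raw);
    fclose(scr);
    return 0;
}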

	We tried two different AutoCad input file formats: .DXF and script
(.SCR) files.  Both of them cause the PC to hang or to produce an "Out of
ram" error message.  This particular PC is a 286 machine with about 2 MBytes
of memory; unfortunately, this version of AutoCad only recognizes the bottom
640K.  AutoCad hangs after reading approximately 5000 block inserts (only
about 40% of the input file).  Does anyone know what kind of memory overhead
is associated with inserted blocks?
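
	For concreteness, a single block insertion looks roughly like this in
each of the two formats (the DXF group codes and the INSERT prompt order are
from memory and may differ a little between AutoCad releases).  A script file
holds one command per line, exactly as it would be typed at the keyboard:

    INSERT PIXEL 10,20 1 1 0

A DXF file carries the same insertion as one INSERT entity in the ENTITIES
section, with each group code on one line and its value on the next; here 8
is the layer, 2 is the block name, and 10/20/30 are X/Y/Z of the insertion
point:

      0
    INSERT
      8
    0
      2
    PIXEL
     10
    10.0
     20
    20.0
     30
    0.0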

	If the problem is not AutoCad block sizes, it may be in the hardware.
The PC tends to be a bit brain damaged and does not operate properly at 10MHz
speed.  In fact, AutoCad blows up with a "parity error" if it is run at 
anything higher than 6MHz.

	If anyone has any detailed information about AutoCad (internal block
representations) or any other ideas, please let me know.




				Thanks

				Colin Jenkins

				ihnp4!pur-ee!k.cc.purdue.edu!ag0

jbn@glacier.UUCP (01/28/87)

In article <1710@k.cc.purdue.edu> ag0@k.cc.purdue.edu (Colin Jenkins) writes:
>
>
>...load it into AutoCad representing each pixel as a block.
       A block is a large entity, so this is a very expensive representation.
Try using LINE entities for runs of dark pixels instead.  That produces a
sort of raster-scan effect at high magnification, but at the right level of
zoom it will reproduce the raster quite nicely.
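
      A sketch of that approach is below, reusing the same assumed raw
one-byte-per-pixel dump and brightness threshold as in the earlier sketch
(not the real Targa file layout).  Each horizontal run of dark pixels becomes
one LINE command in a script file; the blank line after the two points ends
the LINE command, since a blank line in a script acts like hitting the space
bar.

/* runs_to_scr.c -- sketch: one LINE entity per horizontal run of dark    */
/* pixels, instead of one block per pixel.                                */

#include <stdio.h>

#define WIDTH     256
#define HEIGHT    50
#define THRESHOLD 128   /* pixels darker than this count as "dark"        */

int main(void)
{
    FILE *raw = fopen("quadrant.raw", "rb");
    FILE *scr = fopen("quadrant.scr", "w");
    int row, col, pixel, start;

    if (raw == NULL || scr == NULL)
        return 1;

    for (row = 0; row < HEIGHT; row++) {
        start = -1;                      /* -1 means "not in a run"        */
        for (col = 0; col <= WIDTH; col++) {
            pixel = (col < WIDTH) ? getc(raw) : EOF;
            if (pixel != EOF && pixel < THRESHOLD) {
                if (start < 0)
                    start = col;         /* a run of dark pixels begins    */
            } else if (start >= 0) {
                /* the run covered columns start..col-1; draw it from      */
                /* start to col so one-pixel runs are not zero length,     */
                /* then a blank line to terminate the LINE command         */
                fprintf(scr, "LINE %d,%d %d,%d\n\n",
                        start, HEIGHT - row, col, HEIGHT - row);
                start = -1;
            }
        }
    }
    fclose(raw);
    fclose(scr);
    return 0;
}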

>
>	If the problem is not AutoCad block sizes, it may be in the hardware.
>The PC tends to be a bit brain damaged and does not operate properly at 10MHz
>speed.  In fact, AutoCad blows up with a "parity error" if it is run at 
>anything higher than 6MHz.
>
      Let's put it this way.  If you undertake to "speed up" a computer,
you are going into the electronic engineering business.  You should know
what you are doing.  A good understanding of such concepts as "operating
margins", "MTBF", "derating", "semiconductor part selection", "temperature
ranges" and "burn-in testing" is essential.  The absolutely essential tools
for someone trying to speed up a computer are a really good set of diagnostic
programs and the facilities to run the machine at a controlled, elevated
temperature for an extended period of time.  Just changing the crystal and
observing that the machine still boots is NOT enough.  Sometimes you have
a faster machine.  More often you now have a machine which makes errors.

      Incidentally, you can get in touch with Autodesk directly on
CompuServe; use "GO ADESK".  

					John Nagle