bill@smoke.BRL.MIL (William E. Hatch) (12/04/89)
Subject: Mac C Question

My current job assignment involves porting a moderate-size (250K - 400K) C program from Xenix to a Macintosh II. The program uses only those library calls supported by ANSI C; it is a simple filter in that it reads several input files, performs many floating point computations, and writes several output files. The program does not perform any graphics or interactive operations - there are no calls to the Macintosh toolbox or to any other Macintosh-specific libraries. We plan to use Think C 4.0 as the Mac C compiler and code management system.

In reviewing the Think C manuals, I discovered that the programmer is expected to hard-code memory management logic and system calls because the "code" segment of the program is limited to 32K. Furthermore, this is a Macintosh O/S limitation, not a limit imposed by Think C. Being very new to the Macintosh world, I never expected that a 68030 machine with demand paging hardware and 4 to 8 Mb of RAM would be so limited in the size of programs supported. We can break the code up into 16 or more 32K segments; however, the labor costs incurred will overrun the fixed-price contract by about 30%, and the source code packaging (into files) will become highly unstructured.

We are using calloc() to allocate all arrays and data structures so that these will not reside in the code segment. There are no arrays or data structures greater than 32K. The code for the inner loop of the program is highly likely to require a code segment greater than 32K.

My questions are as follows:

1.) Will this 32K code segment limit (and other related 32K limitations) be removed in the next version of the Macintosh O/S (7.0, I think)?

2.) Are we using the right C development system (Think C 4.0)? Do other development systems such as MPW currently provide a workaround that is transparent to the programmer?

3.) If the 32K limitation is removed in 7.0, how soon will a compatible C development system be available (MPW or Think C)?
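[The allocation strategy described above, keeping every large array on the heap via calloc() rather than in global data, can be sketched in portable ANSI C. The function and array names below are hypothetical, invented for illustration; the original program's arrays are only described as "no larger than 32K" each.]

```c
#include <stdlib.h>

/* Portable ANSI C sketch: allocate large arrays on the heap so they
   never count against the Mac's 32K A5-relative global data area.
   calloc() also zero-fills the block, unlike malloc(). */
double *alloc_samples(size_t n)
{
    return (double *)calloc(n, sizeof(double));
}
```

[The caller must check for a NULL return and free() the block when done, exactly as on Xenix; nothing Macintosh-specific is needed for the data side of the port.]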
Any answers or recommendations will be greatly appreciated. The mail address on my home machine is: uunet!bts!bill. Thanks in advance.

Bill Hatch
Computational Engineering
14504 Greenview Drive, Suite 500
Laurel, Maryland 20708
Phone (301)470-3839   FAX (301)776-5461   HOME (301)441-1675
t-jlee@microsoft.UUCP (Johnny Lee) (12/08/89)
In article <11736@smoke.BRL.MIL> bill@smoke.brl.mil.UUCP (William E. Hatch) writes:
>Subject: Mac C Question
> [Stuff Deleted]
>We plan to use Think C 4.0 as the Mac C compiler and code management system.
                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^

What do you mean by this (I'm kinda confused by the wording)? If you mean source code control like SCCS or RCS, then forget it; Think C doesn't do it. If you mean something like make, OK.

>In reviewing the Think C manuals, I discovered that the programmer is expected
>to hard code memory management logic and system calls because the "code"
>segment of the program is limited to 32 K. Furthermore, this is a Macintosh
>O/S limitation, not a limit imposed by Think C. Being very new to the
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I'm sorry, but your information (or what you inferred from it) is incorrect. Think C is the only commercial Mac C compiler which limits you in this way. Aztec C and MPW C (when it generates correct code :-)) will allow you data segments larger than 32K and code segments larger than 32K (not sure about Aztec C, but MPW - yes).

From what I remember, the 32K code segment limit has to do with using 16-bit offsets and indirect addressing in the JSR instruction to the routine being called. The 32K data limit is plagued by the same problem: 16-bit offsets off of the A5 register (which points to your global data). Half of the 64K area accessible off of A5 is used by QuickDraw and the Jump Table (for your segments), so you've only got 32K left. I believe the other compilers use absolute addressing for global data, though I'm not sure.

>Macintosh world, I never expected that a 68030 machine with demand paging
>hardware and 4 to 8 Mb of RAM would be so limited in the size of programs
>supported. We can break the code up into 16 or more 32 K segments;
>however, the labor costs incurred will overrun the fixed price contract
>by about 30%, and the source code packaging (into files) will become highly
>unstructured.
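[The 32K figures above fall directly out of the instruction encoding being described: a 16-bit signed displacement, as carried by the 68000's d16(An) and d16(PC) addressing modes, reaches at most 32767 bytes forward and 32768 bytes back from the base register. A small sketch of the arithmetic:]

```c
/* The 68000's d16(An) and d16(PC) addressing modes carry a 16-bit
   signed displacement, so anything reached through them must lie
   within roughly 32K of the base register (PC or A5). */
long disp16_min(void) { return -(1L << 15); }     /* -32768 bytes */
long disp16_max(void) { return (1L << 15) - 1; }  /* +32767 bytes */

/* Half of the 64K span reachable off A5 goes to QuickDraw globals
   and the jump table, leaving about 32K for application globals. */
long a5_globals_bytes(void) { return (1L << 16) / 2; }  /* 32768 */
```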
There must be some pretty big source files you're using! If every source file compiles to larger than 32K, it'll be a pain in Think C. If you use Think C, all you should need to do is drag the source files from one segment to another segment in the Project window. I ported Nethack over to the Mac, and that beast is 650K. I had to use Think C because it was the one that would compile with the least complaining. MPW C doesn't like Nethack. Aztec C is a schizophrenic compiler: a pseudo-ANSI C compiler that reserves keywords (like const) but doesn't use them, and complains if you try to.

>We are using calloc() to allocate all arrays and data structures
>so that these will not reside in the code segment. There are no arrays
>or data structures greater than 32K. The code for the inner loop of the
>program is highly likely to require a code segment greater than 32K.
>
>My questions are as follows:
>
>1.) Will this 32K code segment (and other related 32K limitations) be
>    removed in the next version of the Macintosh O/S (7.0 i think) ?

As I stated before, this is not a limitation of the Mac OS. It used to be: when the original Macs (64K ROMs) came out they had the 32K limit, but since the Mac Plus (128K ROMs) this limit has disappeared. MPW requires that a routine and the routine it calls be within 32K of each other, but I've never had to work around this before.

>2.) Are we using the right C development system (Think C 4.0)? Do
>    other development systems such as MPW currently provide a
>    workaround that is transparent to the programmer?

MPW does. If you consider a couple of command-line switches to be transparent, then yes.

>3.) If the 32K limitation is removed in 7.0, how soon will a compatible
>    C development system be available (MPW or Think C) ?

This has been answered above. Certainly this question comes up every couple of months.

Think C is a great environment for small to medium-sized projects.
But for large projects, or for porting medium to large-sized projects, Think C suffers greatly from the 32K global data limit and, to a lesser extent, from the 32K code segment limit. It's fast, and they include source for the libraries. I could keep going on, but I have to do some work.

Johnny Lee
t-jlee@microsoft.UUCP
...!{uw-beaver, uunet, sun}!microsoft!t-jlee
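[On the source-packaging concern raised in the original post: if I recall correctly, MPW C lets you assign functions to named CODE segments in place with a #pragma, so files need not be reorganized around segment boundaries. A sketch, with made-up segment and function names; since C compilers ignore pragmas they don't recognize, the same source still compiles elsewhere:]

```c
/* MPW C (as I understand it) places each function into the CODE
   segment named by the most recent #pragma segment directive, so
   one source file can feed several segments without restructuring.
   Other compilers simply ignore the unrecognized pragma. */
#pragma segment Main
int add(int a, int b) { return a + b; }

#pragma segment InnerLoop
long dot(const int *x, const int *y, int n)
{
    long s = 0;
    int i;
    for (i = 0; i < n; i++)
        s += (long)x[i] * y[i];  /* widen before summing */
    return s;
}
```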