chen@MITRE-GATEWAY.ARPA (MS W420) (09/04/85)
Hah, hah, har, har, snicker, tee hee, guffaw... (Sorry, but I couldn't help myself. I've been working with the 80188's younger brother, the 8088, myself, although "bashing my head against it" might be more accurate. Anyway...)

Personally, I know of one compiler that generates better code for the 80188/6 than for the 8088/6 and supports the large memory model. It's the CI-C86 C compiler by Computer Innovations. They also sell something called ROMPAK for ROMing your code. However, your friend left out one crucial detail -- namely, what operating system the code will run under (if any). CI-C86 will run under MS-DOS and CP/M. I don't know if they support anything else.

If the CI compiler doesn't work out, I hear that Lattice is also a good compiler. Myself, I prefer the CI compiler as it's a 4-pass compiler (including an optimizer), includes all sorts of hardware fp support, supports the large, small, medium (just out), and compact memory models, AND because they give you the source code to their libraries. The only gripes I have with the CI compiler are that it doesn't know about structure assignments yet and the way they handle "extern" -- they handle it like the System V Release 1 compiler did. (I'm going to bitch about both of these to CI.) If neither compiler works out, you might want to look into Whitesmiths' C compiler.

Now, a general note about memory models. You'll be able to find a compiler that lets you have > 64K code and > 64K data. However, there are going to be certain restrictions that are possible to get around when programming in assembler, but are very difficult for a compiler. Namely, you're going to have 64K restrictions on a lot of objects. Basically, if the compiler would have to know when to change the value of a segment register when accessing two pieces within the same object, forget it.

For example, chances are you won't be able to have static arrays > 64K. If you did, the compiler would have to break the array up across two or more segments, make sure the linker loaded the segments contiguously (to make array indexing and pointer arithmetic a reasonable proposition), and either check for overflow on indexing (for wrap-around, if the index variable is a 16-bit int) or do boundary checking on every random access into the array (if the index variable is a 32-bit long) in order to keep track of when to change the segment register. If you don't make the linker load segments contiguously, then you'd have to keep some sort of segment map indicating which segments held which parts of what array, and index into that when you had to change the segment register. Plus, values in the table would probably have to be addresses relative to the address where the program was loaded. And on top of all this, remember, this isn't a 68K or a VAX where you've got a lot of registers to play with. Ugh.

You run into the same problem with pieces of dynamically allocated memory > 64K, code modules > 64K, stack sizes > 64K, etc. It's possible to write a compiler that could handle this sort of stuff, but it'd be a real pain, and the resulting code would have all sorts of special-case checks that would slow it down a *lot*.

Large model code is going to be slower anyway. First of all, all your procedure calls are going to be far calls instead of near calls. Second, all your pointers are going to be 32 bits instead of 16, as the compiler now needs both an offset and a segment value. Also, watch out for pointer arithmetic: if your pointers aren't pointing to things in the same segment, the resulting value is liable to be implementation-dependent.
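To make the cost concrete, here's a rough sketch in C of the renormalization a compiler (or you) would have to do on every pointer bump into a >64K object. The names are made up and no particular compiler's runtime works exactly this way -- it's just the standard 8086 segment*16+offset arithmetic spelled out:

    /* Sketch only -- hypothetical names, not any vendor's actual runtime.
     * An 8086 physical address is segment*16 + offset, so to step a
     * pointer through an object bigger than 64K you have to recompute
     * the whole pair, not just add to the 16-bit offset half.           */
    struct hugeptr {
        unsigned int seg;             /* segment (paragraph) half */
        unsigned int off;             /* offset half              */
    };

    /* Advance p by nbytes and renormalize so the offset stays in 0..15.
     * Normalizing also makes equality tests meaningful, since the same
     * physical byte can otherwise be named by many different seg:off
     * pairs -- which is where the "implementation-dependent" results
     * come from.                                                        */
    void huge_advance(struct hugeptr *p, unsigned long nbytes)
    {
        unsigned long linear = (unsigned long)p->seg * 16 + p->off + nbytes;

        p->seg = (unsigned int)(linear >> 4);     /* new paragraph       */
        p->off = (unsigned int)(linear & 0xFUL);  /* offset now in 0..15 */
    }

Hang that pair of shifts and adds on every array reference and you can see where the slowdown comes from.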
If you decide to go with CI's compiler, their phone number is 201-542-5920. Note that I have no connection with Computer Innovations or Intel except as a customer. Good luck...

	Ray Chen
	chen@mitre-gw

"This message brought to you courtesy of Intel -- maker of the world's finest 16-bit elevator controllers..."
warren@tikal.UUCP (Warren Seltzer) (09/05/85)
[replace this multiply with a shift...]

There are a number of other compiler vendors that offer cross compilation of C from various Unix systems to 8086/186-family CPUs. These vendors include:

	Intermetrics Inc., of Cambridge, Mass.
	Oasys, also of Cambridge, Mass.
	Systems and Software, of Costa Mesa, Ca.
	Lattice Inc., of Glen Ellyn, Ill.

All of these companies charge much more for multi-user Unix system cross-tools than similar compilers cost for desktop machines. We ourselves are looking at all of the above as an upgrade over what we do now. All of them (except Intermetrics, I think) also offer the same compilers for IBM PCs that they offer for Unix systems. Support for creating ROMs is also available, as is a variety of memory models, special code generation for the extended 186 instruction set, and linkers, locators and loaders.

Most of these companies will gladly port their stuff to the Unix system of your choosing, if you choose something reasonably popular. We use Pyramids and Vaxen, and have no problem with availability for either. Some of these vendors also offer compatible compilers and linkers, etc., for Motorola and/or National chips as well, so you can plan portability for your embedded system, if you wish. Not all vendors offer all features, of course.

	teltone!warren
peter@graffiti.UUCP (Peter da Silva) (09/13/85)
> Namely, you're going to have 64K restrictions on a lot
> of objects.  Basically, if the compiler would have to know
> when to change the value of a segment register when
> accessing two pieces within the same object, forget it.

Not true. The Lattice 'C' compiler has two large memory models: one with this restriction and one which uses library routines for pointer arithmetic. Thus you can get efficient code or huge arrays, but not both. (There's a small illustration of why at the end of this message.)

Of course Lattice isn't the world's greatest compiler, but at least it does a decent job of handling the utterly brain-damaged piece of hardware known as the 8086. Suggestion: forget it. Use a 68000. We probably will if it ever comes out in a low-power form.
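For anyone who hasn't run into the restriction first-hand, here's a tiny, self-contained illustration (plain C, made-up numbers, nothing Lattice-specific) of why the cheap flavour of large model can't cope with a single object bigger than 64K: pointer arithmetic only touches the 16-bit offset half of a far pointer, so stepping past 64K silently wraps back toward the start of the segment.

    #include <stdio.h>

    int main(void)
    {
        unsigned int   seg = 0x2000;  /* segment half: the cheap scheme never touches it */
        unsigned short off = 0xC000;  /* offset half of a far pointer                    */

        off += 0x8000;                /* "p += 0x8000" done with 16-bit offset math only */

        /* physical address = seg*16 + off; because the offset wrapped,
         * we end up *lower* in memory instead of 0x8000 bytes further on */
        printf("offset wrapped to 0x%04X, physical address 0x%05lX\n",
               (unsigned int)off, (unsigned long)seg * 16 + off);
        return 0;
    }

The other model avoids this by routing every pointer bump through a library call that fixes up the segment as well -- hence the speed penalty.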