COTTRELL@BRL.ARPA, JAMES (08/08/85)
> The bottom line is which machine offers the best SOLUTION. The processor
> inside should not matter to the end-user. After all, usability is usually
> more dependent on software than on hardware.

I absolutely agree. But it makes me wonder about the application programs
that AREN'T developed because of a messy or impossible architecture. No,
you DON'T need a benchmark to tell you what a piece of trash it is! Intel
has been carrying the weight of its 4004/8080 evolution for too long now.
The chip is now usable, but I find the concept of small model / large
model code repulsive. I mean, nobody seriously suggests a dual model for
the pdp-11/70 (16 bits for small model / 22 bits for large) )-:

THERE IS NO ROOM IN THE C STANDARD FOR MEMORY MANAGEMENT CONSIDERATIONS!
If you want stuff in another segment, call a magic subroutine to get it
for you by passing it two separate pointers explicitly like everyone else.

	Went to seek a micro
	Strangest one I could find
	Laid my proposition down
	Laid it on the line
	I won't slave for beggar's pay
	Likewise gold & jewels
	But I would slave to learn the way
	To sink your CHIP OF FOOLS

	rm intel

The National Bureau of Standards neither endorses nor condemns any
specific product or vendor, so don't blame them.
	jim cottrell@nbs
*/
------
tanner@ki4pv.UUCP (Tanner Andrews) (08/13/85)
] in above-referenced note, commentary is made about applications
] not developed on the intel advanced-elevator-controller-286 due
] to architectural ugliness.

Having here one of the aforementioned chips, and being the developer of
some huge code on these little monsters, I feel not only qualified to
comment but compelled by the frustrations.

Due to the problems of the segmentation, large (>64K in theory) arrays
are impossible. Due to other bugs, the actual limit from malloc() is 32K.

Large programs must be built using special compiler options with which no
other machine in my experience is plagued; an option is required for
large (>64K) code, and options to group code into segments are supplied,
though due to bugs they don't work (tnx microsoft).

Building progs with the larger code option makes bigger code, as code
pointers grow to 32 bits. Data pointers don't grow, which is a relief if
it is necessary to manipulate data -- but the old scheme of stashing a
general pointer into a (char *) no longer works! Sorry, K&R.

There are lots of bugs associated with the unix/xenix and the C compiler
and libraries on this machine, and many of them are the result of casual
trust in the segmented architecture not to come along and give you a kick
in the sensitive locations.

Advice from a '286 user: buy a 68K.

However, please note: applications _are_ being developed on the miserable
thing. In C; the C used is not changed to reflect any peculiarities of
the '286. The compiler options are being used, of course; the Makefiles
are thus different. The code is kept particularly "clean" so that it will
compile and run under the '286 as well as other, more sane, architectures.

There is also the problem of specifying segments to the debugger, and of
making sense of arguments passed to functions. I guess that has caused as
much frustration as anything with the segmented architecture.
--
<std dsclm, copies upon request>	Tanner Andrews, KI4PV
uucp: ...!decvax!ucf-cs!ki4pv!tanner