fbaube@note.nsf.GOV (Fred Baube) (07/31/87)
Does anyone have a scheme for documenting C with Ada?  I have in mind using cpp commands to bring in both the Ada-style package and object declarations and the corresponding C .h's that have the global definitions and declarations of both data and functions.

I'll post a digest of any ruminations sent directly to me.

-- 
CPU1: How many humans does it take to multiply 123456789L by 3.1415926535e00 ?
CPU2: Can they use their toes ?
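
For concreteness, one rough sketch of the sort of arrangement I have in mind.  The Stack package, the file names, and the function names here are all invented purely for illustration, and the Ada text rides along as comments the C compiler never parses:

    /* stack_spec.h: the Ada-style package spec, kept as a comment
     * block so it serves as the design-level documentation. */
    /*
     * package Stack is
     *    procedure Push (Item : in Integer);
     *    function  Pop  return Integer;
     * end Stack;
     */

    /* stack.h: the corresponding C declarations and global data */
    extern int  stack_depth;
    extern void stack_push(int item);
    extern int  stack_pop(void);

    /* stack_doc.h: a single cpp command brings in both views */
    #include "stack_spec.h"
    #include "stack.h"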
scs@adam.mit.edu (Steve Summit) (03/15/91)
In article <!+%=PZ$@rpi.edu> fechuny@aix01.aix.rpi.edu (felix chung yau) writes:
> I am a new programer in C and was wondering if someone could help me with
>a problem I have in using a two dimensional array. I am currently using
>Microsoft Quick C and my problem is in dimensioning an integer array like
>test[400][400]. When I compile the program I get the message "allocation
>exceeds 64k".

This qualifies as a frequently-asked question (not to fault Felix for having to ask it, though).  There is nothing particularly unusual about asking for a 400x400 array; people do it all the time.

The "allocation exceeds 64k" message is concise enough, but it leaves you without a clue as to how to proceed.  Even if you check the handy-but-nearly-useless error message summary in the QuickC Compiler Programmer's Guide (I confess with some embarrassment that I have one right here), you find the helpful elucidation:

     C2125   'identifier': allocation exceeds 64K
             The given item exceeds the size limit of 64K.

Should you happen to have the full-blown Microsoft C Compiler, its error message reference adds the sentence:

             The only items that are allowed to exceed 64K are
             huge arrays.

This at least suggests that you might want to go investigate the "huge" keyword, which is one of the three recommended workarounds.

Vendors could do an awful lot better at providing truly useful (not merely comprehensive, copious, or compendious) documentation.  Two popular but essentially mechanical techniques for producing aspects of the documentation yield nearly useless results if done unthinkingly:

1. Alphabetical lists of commands (keywords, command-line options, etc.).  These help only if you know the name of the command but have forgotten what it does.  If (as is more likely) you remember that the system can do something but have forgotten how to invoke it, you're out of luck unless you read through *all* the commands, looking for a description of the functionality you remember (or unless the reference manual has a useful index, which is never the case unless the author is Brian Kernighan).  The nice thing about these command lists (from the developer's point of view) is that you can generate them definitively and mechanically from the parse tables or equivalent data structures in the source code.  But which is more important: whether the documentation is easy to write, or whether it is useful to the user?

2. Lists of error messages and "explanations."  I've never understood what these were for, because all they ever contain are essentially wordier versions of the error messages.  Again, these are attractive because they're easy to generate, and you can brag about them (as if you had spent a *lot* of *hard* work generating them!): "This appendix contains a list of *every* error message the system will ever generate."  (Great.  So what's it good for?  If I just wanted all the error messages, I could run strings on the executable.)

Would it be too hard, when writing up these error message lists, to spend a minute or two and *think* about the circumstances under which a given error message might arise, and then try to offer the user a bit of proactive advice?  How much more helpful it would have been to say:

     C2125   'identifier': allocation exceeds 64K

             Under the PC's segmented architecture, single data objects
             cannot generally be larger than 65,536 bytes in size.  If a
             single data object must exceed 64K in size and must be
             allocated contiguously, it must be declared with the huge
             qualifier (see page xx).
             Very large data structures can be dynamically allocated
             with the halloc() routine (see page yy).

             Whenever possible, it is preferable to break up a large
             data object so that its individual pieces do not exceed
             64K.  For example, a very large two-dimensional array can
             be allocated using an array of pointers to subarrays,
             each smaller than 64K.  Rather than

                  double x[100][100];

             which would require 80,000 contiguous bytes, use

                  double *x[100];

                  for(i = 0; i < 100; i++)
                       x[i] = (double *)malloc(100 * sizeof(double));

             which will still let you access x[i][j].

Of course, the manual for a PC compiler would probably decline to mention the third recommended workaround ("Get a Real Computer!").

This has nothing to do with C per se; followups should probably go somewhere else.

                                        Steve Summit
                                        scs@adam.mit.edu
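
The suggested manual text above mentions both the huge qualifier and the halloc() routine.  For concreteness, here is a rough, untested sketch of what those two workarounds look like in source, written from memory of Microsoft C/QuickC conventions (the huge keyword, and halloc()/hfree() declared in <malloc.h>), so treat the details as assumptions to check against your own manual:

    #include <stdio.h>
    #include <malloc.h>        /* Microsoft C: declares halloc() and hfree() */

    /* Workaround 1: the huge qualifier lets a single declared object
     * exceed 64K; this array is 20,000 * 4 = 80,000 bytes. */
    static long huge squares[20000];

    int main(void)
    {
        long i;
        long huge *big;

        /* Workaround 2: halloc() returns one dynamically allocated block
         * larger than 64K (400,000 bytes here; if memory serves, the
         * element size must be a power of 2 when the block exceeds 128K). */
        big = (long huge *)halloc(100000L, sizeof(long));
        if (big == NULL) {
            fprintf(stderr, "halloc failed\n");
            return 1;
        }

        for (i = 0; i < 20000; i++)
            squares[i] = i * i;
        for (i = 0; i < 100000L; i++)     /* indexing crosses 64K boundaries */
            big[i] = i;

        printf("%ld %ld\n", squares[19999], big[99999L]);

        hfree(big);
        return 0;
    }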