Will@cup.portal.com (Will E Estes) (09/13/89)
Can someone explain at a high level what an interface description language (IDL) is? My understanding is that IDL is a pre-processor to an existing language such as C, implementing a high-level language well-suited to dealing with complicated data structures like those found in compilers.

Now, with that said, can someone answer the following:

1) What is it about the IDL language that makes it better suited to dealing with C data structures than C itself?

2) Can someone quantify what kind of performance benefits can be expected using IDL over using just C?

I have had several sources suggest some good books on this topic, and I may get around to buying and reading these someday ( :). So far, though, I haven't found anyone who understands enough about IDL to answer the two specific questions above without resorting to saying that I "should read the book." Can someone here who has experience with IDL give me concise answers?

Thanks,
Will
dalamb@qucis.queensu.CA (David Lamb) (09/14/89)
In article <22096@cup.portal.com> Will@cup.portal.com (Will E Estes) writes:
>1) what is it about the IDL language that makes it better suited to
>dealing with C data structures than C itself?

- IDL supports multiple inheritance, as in object-oriented languages. It can
  coordinate the definitions of structs corresponding to subclasses so that
  inherited fields are in "the same place" in each.
- An IDL processor can generate runtime descriptors that let you apply
  generic tools to manipulate your data structures, e.g. automatically
  write out the contents of a graph.

>2) can someone quantify what kind of performance benefits can be
>expected using IDL over using just C?

I don't understand what performance you mean. To my knowledge no one has yet
studied productivity improvements; it's all anecdotal.

>I have had several sources suggest some good books on this topic, and

Try "Sharing Intermediate Representations: The Interface Description
Language", ACM Trans. Prog. Lang. and Systems, July 1987, for a somewhat
longer introduction.

David Alex Lamb
Department of Computing and Information Science
Queen's University
Kingston, Ontario, Canada K7L 3N6
(613) 545-6067
ARPA Internet: David.Lamb@cs.cmu.edu  dalamb@qucis.queensu.ca
uucp: ...!utzoo!utcsri!qucis!dalamb
Will@cup.portal.com (Will E Estes) (09/14/89)
< 2) can someone quantify what kind of performance benefits can be
< expected using IDL over using just C?

Just a clarification: what I mean by the above is how much time will be saved
writing the code (not the performance of the resulting program at run-time).

Thanks,
Will
marc@apollo.HP.COM (Marc Gibian) (09/15/89)
One well-defined IDL that I have experience with is the Network Interface
Definition Language, or NIDL, which is part of the Apollo NCS product. NIDL
is used to isolate the application implementor from the nitty-gritty details
of using the NCS RPC mechanism. It makes for very fast implementation of
distributed, heterogeneous applications, as well as fast modification of
conventional software to distribute an application across multiple
nodes/platforms.

Marc
--
Project Engineer, email project: Apollo Systems Division of HP
Internet: marc@apollo.hp.COM
NETel: Apollo: 508-256-6600 x2077
(Copyright 1989 by author. All rights reserved. Free redistribution allowed.)
joshua@athertn.Atherton.COM (Flame Bait) (09/16/89)
marc@apollo.HP.COM (Marc Gibian) writes:
>One well defined IDL that I have experience with is the Network
>Interface Definition Language, or NIDL, that is part of the
>Apollo NCS product. NIDL is used to isolate the application
>implementor from the nitty gritty details of using the NCS
>RPC mechanism.

If you want a better IDL, I'd look at Sun's XDR system. Unlike NIDL, it was
designed from the beginning to be a general-purpose data language (i.e. to be
used for more than just remote procedure calls), and this difference in
design philosophy shows. XDR can also support any data type used by C; NIDL
cannot. For example, NIDL cannot process structures which contain pointers
to structures, except via arcane, complicated functions which you must write
yourself. XDR, on the other hand, can handle an entire B-tree as a single
data unit.

Also, XDR is freely distributable and available from several archive servers
on the net. This distribution includes the source for an XDR compiler, which
can form the basis for many useful utilities. You need to pay a little money
for NIDL, and the source code for it is very expensive.

Finally, I'd like to say that I have used NIDL, XDR, and a similar product
from Netwise, and that I do not work (and have not worked) for Apollo,
Netwise, or Sun.

Joshua Levy
--------
Quote: "If you haven't ported your program, it's not a portable program.
No exceptions."
Addresses: joshua@atherton.com
{decwrl|sun|hpda}!athertn!joshua
work:(408)734-9822 home:(415)968-3718
djones@megatest.UUCP (Dave Jones) (09/17/89)
From article <12697@joshua.athertn.Atherton.COM>, by joshua@athertn.Atherton.COM (Flame Bait):
> If you want a better IDL, I'd look at Sun's XDR system.

Then, not finding it there, look elsewhere.

> [It] was designed from the begining to be a general purpose data language
> (ie. to be used for more than just remote procedure calls). This difference
> in design philosophy shows. Also, XDR can support any data type used by
> C ...

Huh? It can't even handle directed acyclic graphs, even with all the
pointers nicely pointing to the "tops" of the structures: it turns them into
trees. On a cyclic graph, it will get into a loop and spew data until
something bursts.

There are other problems with XDR. It uses binary formats for integers,
floating point, etc. The formats are exactly the ones used internally in Sun
workstations. (Amazing coincidence, that.) So if you use only Sun
workstations, no data conversion is necessary. If you don't, get ready to
start programming. You may have to write conversion routines for going
between IEEE floating point (the XDR standard) and whatever else you have:
IBM or MIT or some other floating-point format. I don't think those routines
are provided.

So, why not just do what XDR should have done: use text to represent
integers and reals? Everybody has printf and scanf. Good idea, but there is
no provision for automatic conversion of text from ASCII (the unspoken XDR
standard) to EBCDIC and back to ASCII.

Perhaps the worst thing about XDR is that if you use it, you may be tempted
to use RPC.
kan@dg-rtp.dg.com (Victor Kan) (10/07/89)
In article <8092@goofy.megatest.UUCP> djones@megatest.UUCP (Dave Jones) writes:
>From article <12697@joshua.athertn.Atherton.COM>, by joshua@athertn.Atherton.COM (Flame Bait):
>
>> If you want a better IDL, I'd look at Sun's XDR system.
>
>Then, not finding it there, look elsewhere.
>
>> [It] was designed from the begining to be a general purpose data language
>> (ie. to be used for more than just remote procedure calls). This difference
>> in design philosophy shows. Also, XDR can support any data type used by
>> C ...
>
>Huh?
>
>It can't even handle directed acyclic graphs, even with all the pointers
>nicely pointing to the "tops" of the structures. It turns them into
>trees. On a cyclic graph, it will get into a loop and spew data until
>something bursts.

I've never done this, but I assume you have tried and know it behaves this
way.

>There are other problems with XDR. It uses binary format for integers,
>floating point, etc.. The formats are exactly the ones used internally in
>Sun workstations. (Amazing coincidence that.) So if you use only
>Sun workstation, there is no data-conversion necessary.

This is true, but there's nothing inherently wrong with this scheme. It
would be stupid to use a scheme that no existing computer uses, wouldn't it
(unless some really smart person comes up with the perfect, never-before-seen
format)? So it might as well be their own.

>If you don't, get
>ready to start programming. You may have to write conversion routines for
>going between IEEE floating-point (XDR standard) and whatever else you have:
>IBM or MIT or some other floating-point format. I don't think those routines
>are provided.

This statement makes it hard to believe that you've even read the xdr man
page. Of course those routines are provided. xdr_float(), xdr_int(),
xdr_double(), etc. are provided to do what's necessary.
Here's a quote from the man page (this may be a copyright infringement, but I
don't think they'd mind in this case):

     xdr_float(xdrs, fp)
     XDR *xdrs;
     float *fp;

          A filter primitive that translates between C floats and
          their external representations. This routine returns one
          if it succeeds, zero otherwise.

This means that any implementation of the XDR protocol on ANY machine, IBM,
MIT or otherwise, will include routines to translate the native C format to
the XDR format for ints, floats, etc. It's just like the TCP/IP
routines/macros (e.g. htons()) that convert words and long words to network
byte ordering. On some lucky machines, those routines/macros are null; on
others they require a little bit shuffling.

True, XDR does not provide services to convert from every format to every
other format. But that would be silly; by using a single, universal format
that every XDR-supporting architecture understands, you end up with easier
implementation and greater portability. While not every architecture
currently supports XDR, implementing it would be fairly simple (at least for
the systems programmers who would do the job). Each vendor only has to write
code to convert from their proprietary format to XDR; they don't have to
provide translation for the N other formats in use. Portability naturally
follows from this (barring proprietary handling of the XDR format, which is
certainly not the case).

I can think of few cases where a complete translation system is the best
solution. There are some, of course (especially in the PC arena), but they
are generally at the application level (Lotus 123 vs. Supercalc vs.
Framework format, etc.), not the low level of XDR.

For a real-life example, look at human language. There are dozens of
dialects of Chinese spoken in mainland China. But because they all use a
single, universal language format (the written form), everyone can
understand what everyone else is saying (i.e. writing).

>So, why not just do what XDR should have done...
>use text to represent integers and reals? Everybody has printf and scanf.
>Good idea, but there is no provision for automatic conversion of text from
>ascii (the unspoken XDR standard) to ebcdic and back to ascii.
>
>Perhaps the worst thing about XDR is that if you use it, you may be
>tempted to use RPC.

Text is a good idea, but it's much more expensive than a universal binary
format (time- and space-wise). Everybody does have printf and scanf, but
those routines are generally poorly written considering how often they are
used and abused.

I took an OS course where the programming assignments were device drivers.
Due to the limited access to the target experimental hardware, printf() in C
was the first assignment (perhaps as a weed-out for the programming part of
the course -- it worked!). But since it was an advanced OS course, we had to
pretend we didn't have any library routines to work with -- no ecvt(),
malloc(), varargs, etc. We were only allowed to use putc(), since a tty
driver with our own putc() was the next assignment.

Printf is HARD to write correctly; in fact, most (if not all) companies
don't write it correctly. The PhD student/instructor showed us how no
commercial printf() he had seen could live up to the man page specs. Bugs
cropped up most often on conversions for floating-point numbers and
field-width specifiers. This included System V, Sun, HP, Vax, Masscomp and
Microsoft C. Hopefully, things have gotten better since then, but I doubt it.

| Victor Kan               | I speak only for myself.               |  ***
| Data General Corporation | Edito cum Emacs, ergo sum.             | ****
| 62 T.W. Alexander Drive  | Columbia Lions Win, 9 October 1988 for | **** %%%%
| RTP, NC 27709            | a record of 1-44. Way to go, Lions!    |  *** %%%