dolf@idca.tds.PHILIPS.nl (Dolf Grunbauer) (02/01/90)
In article <7727@pt.cs.cmu.edu> mwm@a.gp.cs.cmu.edu (Mark Maimone) writes:
]In article <960@pyrite4.UUCP> Larry Baltz writes:
]>The problem I'm having is trying to have the macro return a value of the
]>deleted link.
]	You can just use the comma operator to return a value (the value of
]a comma-expression is the last item in the list).  Here's one solution:
]
]#define delete_head(head)	(((head)->next == (head)) ? NULL : \
]	(dh_temp = (head)->next, dh_temp->next->prev = (head), \
]	(head)->next = dh_temp->next, dh_temp))
]
]static LINK dh_temp;

Okay, but note that Mark's solution doesn't work when the code is supposed
to be shared by several processes at the same time.  E.g. in a kernel or a
shared library there will probably be only one dh_temp for all processes,
so (due to scheduling, i.e. process switching) it is likely that these
processes will ruin each other's value of dh_temp, creating errors/bugs/
panics that are very hard to trace.
--
Dolf Grunbauer  Tel: +31 55 433233   Internet dolf@idca.tds.philips.nl
Philips Telecommunication and Data Systems  UUCP ....!mcvax!philapd!dolf
Dept. SSP, P.O. Box 245, 7300 AE Apeldoorn, The Netherlands
It's a pity my .signature is too small to show you my solution of
 n    n    n
a  + b  = c
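A reentrant alternative is to make delete_head an ordinary function, so
the temporary lives on each caller's stack instead of in a single shared
static.  This is only a sketch: the thread never shows the LINK
definition, so the struct below (next/prev pointers, circular list with a
sentinel head) is an assumption.

```c
#include <stddef.h>

/* Assumed node type -- the original posts never show LINK's definition. */
typedef struct link {
    struct link *next, *prev;
} LINK;

/* Delete and return the first real element of a circular doubly-linked
 * list with sentinel `head`, or NULL if the list is empty.  The
 * temporary `tmp` is a local, so concurrent callers (in a kernel or a
 * shared library) cannot clobber each other's value. */
static LINK *delete_head(LINK *head)
{
    LINK *tmp;

    if (head->next == head)     /* only the sentinel: list is empty */
        return NULL;
    tmp = head->next;           /* first real element */
    tmp->next->prev = head;     /* unlink it */
    head->next = tmp->next;
    return tmp;
}
```

The function call costs a little more than the macro, but a compiler that
supports inlining recovers that, and the code stays safe under process
switching.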
robert@isgtec.UUCP (Robert Osborne) (02/03/90)
In article <1990Jan28.040427.22679@virtech.uucp> cpcahil@virtech.uucp (Conor P. Cahill) writes:
>A second problem is that the macros provide a slight performance gain by not
>having the function call overhead, but they increase the program size by
>duplicating code (the exact problem that functions/subroutines were designed
>to solve).  This is especially bad with complicated macros.
>
>Don't get me wrong.  I have nothing against macros and use them quite often.
>However, one should not blindly implement functions as macros.

The important thing is to know when to use macros.  Sometimes inline macros
can cut the execution time from 12 minutes to 2-3 (to use an actual example
from here).  Often the performance gain is NOT slight.  Using a macro inside
a critical loop that gets executed 500,000 times is not only "good", it's a
necessity.  Writing a macro to replace fprintf is stupid.
--
Robert A. Osborne   {...uunet!mnetor,...utzoo}!lsuc!isgtec!robert
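The critical-loop case can be sketched as follows.  The macro and the
loop-count here are invented for illustration; the point is only that in
a loop executed hundreds of thousands of times, expanding a tiny
operation inline removes one function call per iteration.

```c
/* Illustrative macro: a couple of multiplies and an add, cheaper to
 * expand inline than to call through a function on a compiler that
 * does not inline automatically. */
#define DIST2(dx, dy) ((dx)*(dx) + (dy)*(dy))

long sum_dist2(int n)
{
    long sum = 0;
    int i;

    /* With the macro, the loop body compiles to straight-line code:
     * no call/return overhead per iteration. */
    for (i = 0; i < n; i++)
        sum += DIST2(i, i + 1);
    return sum;
}
```

The usual parenthesization of the arguments and the whole body keeps the
expansion safe when the caller passes expressions like `i + 1`.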
john@stat.tamu.edu (John S. Price) (02/03/90)
In article <282@isgtec.UUCP> robert@isgtec.UUCP (Robert Osborne) writes:
>The important thing is to know when to use macros.  Sometimes inline macros
>can cut the execution time from 12 minutes to 2-3 (to use an actual example
>from here).  Often the performance gain is NOT slight.  Using a macro
>inside a critical loop that gets performed 500,000 times is not only "good"
>it's a necessity.  Writing a macro to replace fprintf is stupid.
>--
>Robert A. Osborne   {...uunet!mnetor,...utzoo}!lsuc!isgtec!robert

I agree.  A macro that gets used in a lot of different places can cause
the size of the executable to grow.  Such macros should be relatively
small (I believe putc() is a good example).  If you have a large macro
that is used in only one or two places, as in the loop example above, by
all means use it, because, as mentioned above, the function call overhead
does add up.

Just my $.02 worth.
--------------------------------------------------------------------------
John Price         |  It infuriates me to be wrong
john@stat.tamu.edu |  when I know I'm right....
--------------------------------------------------------------------------
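The putc() remark can be illustrated with a toy version of the classic
stdio trick.  The buffer type and the names below are invented for the
example, not the real FILE internals: the idea is that the common case is
just a decrement, a store, and a pointer bump, so inlining it at every
call site costs little space and saves a call per character.

```c
/* Toy output buffer -- an assumption, not the real stdio FILE. */
typedef struct {
    char *ptr;      /* next free slot */
    int   cnt;      /* free slots remaining */
    char  buf[8];
} MYBUF;

/* Slow path, taken rarely: here we just "flush" by resetting the
 * buffer and storing the character at the front. */
static int flush_overflow(MYBUF *b, int c)
{
    b->ptr = b->buf;
    b->cnt = (int)sizeof b->buf - 1;
    *b->ptr++ = (char)c;
    return (unsigned char)c;
}

/* The macro: cheap inline fast path, function call only on overflow. */
#define MYPUTC(c, b) \
    (--(b)->cnt >= 0 ? (unsigned char)(*(b)->ptr++ = (char)(c)) \
                     : flush_overflow((b), (c)))
```

Note that `b` is evaluated more than once in the expansion, which is
exactly the kind of caveat (putc(c, *f++) and friends) that makes the
"know when to use macros" advice important.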