[comp.os.minix] Future of Minix

aeusemrs@csun.UUCP (01/25/87)

I think as Minix gets around, we will start to see device
drivers and patches for all the different types of PCs out
there.  I see no reason why an ambitious, knowledgeable
person could not write a device driver for EGA, CGA, or Hercules
video, or for other hard disk controllers.  Some might port it to native
mode 286, or 386, as time wears on.  I have one question that
relates to all of this: is anyone planning to port, or has
anyone ported, patch?  It seems that patching up Minix, and
keeping some of the most global patches straight, is going to
be hard work; therefore, I would place patch at the top of my
wish list of programs for Minix.  I don't have any sources to
patch, so I will ask: is patch a well-behaved (portable)
program?  Also, does it use any external library routines
that aren't, or might not be, found in Minix?  I would like to
see an official moderated patch and/or device driver list.
I would not like to suggest anyone for this, but rather let
people who have spare time and a strong willingness suggest
themselves.  The above is merely an idea; any constructive
comments from people with experience?
-- 
Mike Stump, Cal State Univ, Northridge Comp Sci Department
uucp: {sdcrdcf, ihnp4, hplabs, ttidca, psivax, csustan}!csun!aeusemrs

tonyb@peewee.UUCP (01/27/87)

With all this talk about "add this" and "add that", has anyone thought of the
value of this nice public domain OS *AFTER* everyone is done hacking in their
favorite modifications?  Today someone could claim that program x runs under
Minix.  Tomorrow, though, Joe X. User writes an application under his version
of Minix and posts it.  Unfortunately, he's added some mods that allow shared
text, swapping, or demand-paged VM.  It doesn't run anywhere else.  Joe claims
that he just laid in messageID 3904765@foobar.edu; if you're behind or didn't
get it, it's your problem.

The reason applications like rn, patch, and other programs are successful is
that they are centrally controlled.  That is, modifications are researched,
tested, and distributed by a known channel and approved by known person(s).
This provides consistency and a "release" type of upgrade system.

Under old V7 and the later versions of Unix, knowing that a new release was
up and coming helped prevent too many local hacks from being laid in.  If
an enhancement is in order, the modifications are sent to the author (or
release manager) of the software to be included in the next "standard" release.
If you've worked in an OS support environment you'll know what it's
like to have to sort out, test, and lay in all your local mods on top of the next
"standard" release.  This process occurs every couple of years and usually takes
a couple (or few) months to get *your* version (the one with all your hacks)
ready to run.

What this is all getting at is that for enhancements/modifications to Minix to
be useful to the greatest number of people, there must be some coordination of
effort.  This is a very time consuming affair.

I propose that (before everyone's fingers get too busy) a means be set
up to archive, test, and implement future versions of Minix.  Results should
be publicly available so that no one is excluded.  If a request for
modification is turned down, there should be valid reasons, and these should
be publicly available.  Additionally, the responsibility (or decisions) should
not rest on any one individual.  A committee (God help us!) or similar group
of competent people with OS maintenance experience could attempt to coordinate
distribution releases.  Of course, none of this is enforceable, nor should it
be.  It would be a voluntary arrangement, workable because a large percentage
of the user base felt it was desirable.

Ideas, suggestions, comments  are welcome.  

Tony Birnseth
tonyb@tektronix.tek.com

P.S. No, I'm not volunteering!

news@cit-vax.UUCP (01/27/87)

Organization : California Institute of Technology
Keywords: future of a useable piece of software
From: tim@tomcat.Caltech.Edu (Tim Kay)
Path: tomcat!tim

tonyb@peewee.uss.tek.com (Tony Birnseth) writes:
>I propose that (before everyones fingers get too busy) there be a means set 
>up to archive, test, and implement future versions of Minix.  Results should 
>be publicly available so that no one is excluded.  If a request for

MINIX is copyrighted.  The "results" cannot be made publicly available.
Does anybody know if it would be OK for somebody to distribute a complete,
modified MINIX, assuming that each person it was sent to showed a receipt for
an original MINIX code purchase?

In any case, everybody should SAVE YOUR RECEIPT.

				Timothy L. Kay (tim@csvax.caltech.edu)

liz@unirot.UUCP (01/27/87)

I know I am going to regret this but....

I volunteer space on unirot for minix archiving purposes.  Hopefully somebody
else will volunteer to maintain the archive (I am getting too busy these days
to volunteer a lot of time).

We have limited disk space on this system, but hopefully by the time serious
development starts taking place, we will have found some more disks or
machines to take up some of the slack.

unirot is partially funded by Unipress, and we have a close relationship with
them, so maybe we will be able to tell you about compiler stuff before it 
happens.  No guarantees though.

Minix stuff will be somewhere on my /src filesystem.  Check around for where,
I am not sure yet where I will put it.

Any volunteers out there?

The phone number for unirot is
201 752-2820

Log in as guest (no password) to build yourself an account.  There is
no password for uucp either.  The program uucp is in a state of flux on
this machine right now (it doesn't ALWAYS work), so it is better to start
requests from unirot than to depend on remote requests.

Now to dig the disks out of the effluvium left over from usenix...

liz

-- 
liz sommers
everywhere!rutgers!{unirot|soup|mama}!liz   sommers@rutgers.edu

olsen@ll-xn.UUCP (01/29/87)

In article <1619@cit-vax.Caltech.Edu> tim@tomcat.UUCP (Tim Kay) writes:

>Does anybody know if it would be OK for somebody to distribute a complete,
>modified MINIX, assuming that each person it was sent to showed a receipt for
>an original MINIX code purchase?

It is not OK.  A copyright holder has the exclusive right to make 'derivative
works.'  A modified MINIX would be a derivative of MINIX.

Tanenbaum may wish to waive some of his rights to control derivatives of
MINIX, but that's up to him.
-- 
Jim Olsen	...!{decvax,linus,adelie}!ll-xn!olsen

V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) (07/14/89)

Concerning the future of Minix ST, I would propose generating an update
kit for Minix ST v1.3 rather than for Minix ST 1.4a as Frans proposed,
because all current bug fixes and the PC update to 1.4b are posted as cdiffs
against PC version 1.3, and it is therefore necessary that every ST owner have
a 1.3 version.  An ST 1.3 to 1.4a update may be posted separately, but I would
vote for a direct 1.3 to 1.4b update kit.

Since it looks like Johan and Andy don't have the time to generate and
test an official update, I would propose that Frans (or someone else)
generate a 1.1 to 1.3 update kit and email it, for completeness and conformity
tests and checks against PC version 1.3, to a few volunteers who have updated
to PC version 1.3 themselves.  When this group comes to an agreement
on the update kit, it should be posted as an official ST update and
be accepted by Andy and Johan.

I myself have updated to PC version 1.3 and 1.4a and would volunteer for
testing and checking an update kit against my updates.


Bitnet:  V61@DHDURZ1                               Ronald Lamprecht
UUCP:    ...!unido!DHDURZ1.bitnet!V61              Theoretische Physik
ARPAnet: V61%DHDURZ1.BITNET@CUNYVM.CUNY.EDU       (Heidelberg, West Germany)

V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) (07/14/89)

A few weeks ago I definitely decided that Minix will be my main future OS, one
that I will port to every computer I own.  The main reason is that
if I detect a bug, I don't have to wait half a year for the next release of
the software; I can fix the bug myself (see my Flex & make post) or ask
the author via email.  I'm quite sure that Minix will have a future not only
as an educational OS but also as a serious OS, because its abilities will
increase, and with all the bug reports it will be one of the best-debugged
systems around.

I hope that we end up with a common version for all computers, as Andy
announced, but the most urgent step toward this aim is to define an official
set of macros for selecting the different parts of the sources by
conditionals.  ATARI_ST or PC, for example, should only be used for
computer-specific, hardware-dependent code.  Code that depends on the
processor family but is common to several computers (ST, Amiga, ...) should be
selected with another macro (one for the family, one for every processor);
code like the shadow code depends on neither the 68000 processor nor the ST,
and should therefore be selected with yet another macro, and not be called
stshadow but simply shadow, and so on.
Furthermore, processor-dependent and especially device-dependent stuff
shouldn't be spread over all the files, but collected in a few specific
modules (bad example: see kernel/clock.c).

The next important task would be to develop a new compiler that could be
distributed as public domain and is appropriate for all Minix versions.
I decided to write a compiler that supports the 680x0 with PIC (position
independent code) and PID (position independent data).  I checked that this
can be done without increasing code size, losing performance, or imposing any
other restrictions!  As a result it would be easy to implement shared code,
preloaded code, shared libs, and swapping on 680x0 machines.  I haven't
decided whether I should take GCC or the much smaller Sozobon source as a
starting point.  I would be glad if other Minix fans would help me, and
especially if some 80x86 owners would join this venture.  (How about it, Bruce
-- you wrote once that you are missing a 32-bit compiler for the 80386.)

Another idea of mine is to introduce debugging code into all library modules.
This code should check all arguments and everything else that can be checked,
and print appropriate error messages and warnings.  The additional code should
be enclosed in #ifdef DEBUG -- #endif so that it is possible to generate
both a fast run-time library and a debugging library version.  The reason I
want to introduce this is that most of the problems that occur in porting
sources to Minix are wrong arguments in library calls (besides compiler bugs).

Of course I have a lot of other fancy ideas, but I think that's enough for
today, and I will need all my spare time within the next half year to realize
these features.

Bitnet:  V61@DHDURZ1                               Ronald Lamprecht
UUCP:    ...!unido!DHDURZ1.bitnet!V61              Theoretische Physik
ARPAnet: V61%DHDURZ1.BITNET@CUNYVM.CUNY.EDU       (Heidelberg, West Germany)

hjg@amms4.UUCP (Harry Gross) (07/14/89)

In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>Another idea of mine is to introduce debugging stuff into all library modules.
>This stuff should check all arguments and everything that is possible and
>print appropriated error messages and warnings. The additional code should
>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>a fast run-time library and a debugging library version. THe reason I want
>to introduce this stuff is that the most problems that occur in porting
>sources to Minix are wrong arguments in library calls (besides compiler bugs).

I _LIKE_ it.  A lot.  In fact, if the consensus is that this is a good thing
to do, I would be willing to tackle some of the routines myself (not all - I
don't have enough time for that :-).  If Andy likes this idea and OKs it (or
if the MINIX population in general likes it and wants it) then let me know.  I
would be able to start on such a venture almost immediately, but would like to
converse with others who would be interested in (and able to) help, so we
don't duplicate efforts.

Just for info's sake, I run PC-MINIX 1.3 on an (original 4.77 MHz 256k
Motherboard) IBM-PC, modified to handle a hard disk.  (No, it is NOT an XT :-).
If the keepers of the info sheet are reading this, 1.3 for the PC, XT and AT
(640K version) worked on my hard drive/controller right out of the box.  The
drive is a Seagate ST225 (20Meg - 65ms access) and the controller is a
WD-1002-???  (I can't recall right now - if you want it, send me e-mail :-).  It
took me all of an evening or two to split my disk between MINIX and MS-DOS, and
if it weren't for the fact that I need to write DOS stuff for some clients, I
don't think I would keep the DOS partition around.

By the way, I seem to recall a line in The Book remarking that if someone had
a spare decade, it would be possible to hook DOS support into MINIX.  Has
anyone seriously looked at this?  It might be a real nice thing to add.  After
all, if UNIX and XENIX can have a DOSMerge capability, why can't MINIX?

Anyway, that's my $0.02 for today :-)


-- 
		Harry Gross				 |  reserved for
							 |  something really
Internet: hjg@amms4.UUCP   (we're working on registering)|  clever - any
UUCP: {jyacc, rna, bklyncis}!amms4!hjg			 |  suggestions?

jca@pnet01.cts.com (John C. Archambeau) (07/15/89)

The problem with implementing GCC is that the memory manager would have to be
rewritten to support 386 mode with full 32-bit segments.  GCC demands a
32-bit address space because it's so huge.  Sozobon might be doable; I have
the source code, but I haven't taken a look at it.  I know that GCC will not
work with a 286 easily, and it's nearly impossible on an 8086.  It can be done
in 8086 mode if you have a large chunk of LIM 4.0 memory.  The Evans' 286
kernel still only supports the 64K separate I&D memory model.

 /*--------------------------------------------------------------------------*
  * Flames: /dev/null (on my Minix partition)
  *--------------------------------------------------------------------------*
  * ARPA  : crash!pnet01!jca@nosc.mil
  * INET  : jca@pnet01.cts.com
  * UUCP  : {nosc ucsd hplabs!hd-sdd}!crash!pnet01!jca
  *--------------------------------------------------------------------------*/

ast@cs.vu.nl (Andy Tanenbaum) (07/15/89)

In article <19653@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>I would propose that Frans (or someone else) should
>generate a 1.1 to 1.3 update kit and email it for completeness and conformity
>tests and checks against PC version 1.3 to a few volunteers who have updated
>to PC version 1.3 for themselves. When this group comes to an agreement
>concerning the update kit it should be posted as an official ST update and
>be accepted by Andy and Johan.
That's fine with me.  The chance that I will be able to work on the ST
upgrade kit is 0.  The chance that Johan will is 0 plus epsilon.

Andy Tanenbaum (ast@cs.vu.nl)

ast@cs.vu.nl (Andy Tanenbaum) (07/15/89)

In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>The additional code should
>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>a fast run-time library and a debugging library version. 

Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
that wild about the idea.

Andy Tanenbaum (ast@cs.vu.nl)

wayne@csri.toronto.edu (Wayne Hayes) (07/16/89)

In article <2885@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
>In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>>The additional code should
>>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>>a fast run-time library and a debugging library version. 
>
>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.
>
>Andy Tanenbaum (ast@cs.vu.nl)

Oh, c'mon.  There are very clean ways to do this.  One way I do it (if you
want nice-looking code) is to write the code with the #ifdef stuff and test
the program; then I save away a copy, delete the #ifdef stuff, take a diff
between the two versions, and delete the copy with the #ifdefs.  That way the
final product doesn't have all the #ifdef stuff, but it's there in the diff
file if you need it.
-- 
------------------------------------------------------------------------------
"Open the pod bay doors, HAL."   "I'm sorry Dave, I'm afraid I can't do that."
Wayne Hayes	INTERNET: wayne@csri.toronto.edu	CompuServe: 72401,3525

mills@ccu.UManitoba.CA (Gary Mills) (07/16/89)

In article <2885@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.
>
>Andy Tanenbaum (ast@cs.vu.nl)


Would not _lint_ be a better solution for discovering function call errors,
rather than putting debugging code in all the library functions?

-- 
-Gary Mills-             -University of Manitoba-             -Winnipeg-

hjg@amms4.UUCP (Harry Gross) (07/17/89)

In article <2885@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
>In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>>The additional code should
>>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>>a fast run-time library and a debugging library version. 

In a followup, I expressed the fact that I liked this, and volunteered to
work on it (pending approval from Andy and/or the rest of everyone)

Andy's comment:

>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.

To which I reply:

Agreed.  However, there are other possibilities.  I have given this a bit of
thought over the past day or two.  If a very fast, very limited output routine
were created, and calls to it were made dependent on a runtime Debug_State
variable, it could be done without #ifdefs (which I don't like either, unless
ABSOLUTELY necessary), and without too great an addition to the size of the
code.

Any further comments Andy?  Or anyone else?

-- 
		Harry Gross				 |  reserved for
							 |  something really
Internet: hjg@amms4.UUCP   (we're working on registering)|  clever - any
UUCP: {jyacc, rna, bklyncis}!amms4!hjg			 |  suggestions?

bds@lzaz.ATT.COM (B.SZABLAK) (07/17/89)

In article <568@amms4.UUCP>, hjg@amms4.UUCP (Harry Gross) writes:
> In article <2885@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
> >In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
> >>The additional code should
> >>be included in #ifdef DEBUG -- #endif so that it is possible to generate
> >>a fast run-time library and a debugging library version. 
> >Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
> >that wild about the idea.
> 
> Any further comments Andy?  Or anyone else?

Yeah, prints are a pretty crude way to debug (albeit sometimes unavoidable).
Why not bite the bullet and use a debugger?  That sounds more appropriate for
an "instructional" system anyway.  This may have passed by the PC users, but I
have posted the mods to ptrace(2) for the Atari ST, which should be easily
ported to the PC (any intention of picking this up, Andy?), and a debugger
(mdb) which is somewhat 68000-specific (namely the disassembler and register
handling).

By the way, if the ACK compiler could include filename and line number
info in the a.out file, that would significantly improve the debugger's
utility.

meulenbr@cstw01.prl.philips.nl (Frans Meulenbroeks) (07/18/89)

OK, I'll make them available ASAP.  However, I am currently struggling
to get a new hard disk up and running (has anyone experience with
connecting a Micropolis 1355 to an ST?  If so, please drop me a mail),
so it may take some time to get this rolling again.

I've not decided whether I will post the 1.3 or the 1.4a version.
The problem is that I don't have a 1.3 version handy any more.

Should I also post the stuff from 1.3 which is unaltered for the ST?
This is quite a lot, and I doubt the n-th repost of animals
is really useful.

If you want to volunteer to try the upgrade, and you meet the
following qualifications:
- clean ST 1.1 sources to work from
- enough time available to try things out within a reasonable
  amount of time
then drop me a mail.  Please include a description of your system:
hard disk, real time clock, modem speed.
If I get many reactions I'll choose a few people, trying to cover as
many different configurations as possible.

By the way: the version I currently have running does include an
RS-232 driver (based on the 1.4a driver), ptrace, mdb, and Simon
Poole's kbd fixes, plus support for a number of real time clocks
(although I cannot test all of them).
There is no support for >4 partitions, as ICD has, mainly because
that format is incompatible with the one from HDX 3.0.
I've also not looked in great detail at Bruce's stuff, and I've not
incorporated Simon Poole's FS changes (I'll try to stick to what is
in 1.4a) or the screen and mouse drivers.

PS: can anyone confirm or deny if the source posting of cv works?

Frans Meulenbroeks        (meulenbr@cst.prl.philips.nl)
	Centre for Software Technology
	( or try: ...!mcvax!phigate!prle!cst!meulenbr)

meulenbr@cstw01.prl.philips.nl (Frans Meulenbroeks) (07/18/89)

Some things which might be considered for the next release of Minix:
1 splitting the lib and commands directories.
  These are currently quite huge, causing ls to fail after a make in
  these directories
2 declare all local subroutines forward in the various files
3 create an include file for kernel, mm, fs which contains all
  global subroutines. This file should be included by every module.
4 explicitly type all subroutines
5 Use function prototypes in all forward declarations.
  Since ack does not support these yet, they could be embedded in a macro
  that takes the whole parameter list as a single argument (hence the
  double parentheses), e.g.:
	#define ARG(list) ()
	extern int f ARG((int, char *));
  This eases the migration of the code to compilers which do support
  ANSI function prototypes (they can define ARG(list) to list), and also
  gives the reader a better overview of what's going on.

What does the net in general, and ast in particular, think about this?

PS: a lot of the effort for 2-5 has already been done.

Frans Meulenbroeks        (meulenbr@cst.prl.philips.nl)
	Centre for Software Technology
	( or try: ...!mcvax!phigate!prle!cst!meulenbr)

S.Usher@ucl-cs.UUCP (07/19/89)

From: S.Usher@uk.ac.ucl.cs


Do you realise that Sozobon C can only compile sources which produce .o
files of less than 32K?  You can, of course, link these together to build
programs of a far larger size.

	Stephen Usher (MSc Computer Science, University College London.)

Addresses:-
(JANET)

S.Usher@uk.ac.ucl.cs		or	UCACMSU@uk.ac.ucl.euclid


V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) (07/19/89)

Concerning my proposal to introduce debugging stuff into the library I saw
the following responses:

Andy Tanenbaum (ast@cs.vu.nl) wrote:
>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.
I admit that #ifdef #else #endif constructs are ugly and hard to read, because
they contain two different solutions to the same problem.  But plain
#ifdef DEBUG #endif blocks shouldn't be that ugly.  They contain unique code
that may be of some interest even when compiled out, because it shows common
sources of errors.
Furthermore, there are ways to hide this #ifdef'ed stuff, as

Wayne Hayes <wayne@CSRI.TORONTO.EDU> showed:
>Oh, c'mon. There are very clean ways to do this.

Gary Mills <mills@CCU.UMANITOBA.CA> asked:
>Would not _lint_ be a better solution ?
No -- lint is a very useful tool, but lint cannot do everything that could be
done with a debugging library, nor can run-time error checking detect all the
errors that lint can.  For example, lint can detect a longword (LW) argument
passed instead of a word (integer) argument, but the resulting run-time value
may be a valid and possible argument, so that the error cannot be detected at
run time.  On the other hand, all the argument types may be OK and lint happy,
but the actual value may be impossible and cause a bus error or something else
that is difficult to debug.  Furthermore, with such a special library it would
be possible to introduce additional stack checking during the debugging phase
of a program.

By the way if you have a PD lint version for Minix please post it !

Harry Gross <hjg@AMMS4.UUCP> wrote:
>I _LIKE_ it.  Alot.
I'm happy that I am not alone in this world.

Bitnet:  V61@DHDURZ1                               Ronald Lamprecht
UUCP:    ...!unido!DHDURZ1.bitnet!V61              Theoretische Physik
ARPAnet: V61%DHDURZ1.BITNET@CUNYVM.CUNY.EDU       (Heidelberg, West Germany)

ast@cs.vu.nl (Andy Tanenbaum) (07/19/89)

In article <573@prles2.UUCP> meulenbr@cstw01.prl.philips.nl (Frans Meulenbroeks) writes:
>Some things which might be considered for the next release of Minix:
>1 splitting the lib and commands directories.
In principle I have no objection, although I like the idea of being able
to get all the commands in one ls.  Another possibility is running up the
size of the exec buffer from 2K to 3K or 4K.  If I were to split lib,
what would be a good division?  The system calls, e.g. fork(2), could go
in one directory, but I can't think of any logical division for the rest.
For commands, I can't think of any logical division at all.  Just splitting
[a-m] and [n-z] doesn't seem real exciting.

>2 declare all local subroutines forward in the various files
I think this is done for those cases where the routines return non-ints.  For
things returning int, it shouldn't matter.

>3 create an include file for kernel, mm, fs which contains all
>  global subroutines. This file should be included by every module.
I don't understand what you mean.

>4 explicitly type all subroutines
You mean like void foo() vs. int foo()?  I suppose I could.  The ACK compiler
accepts void.

>5 Use function prototypes in all forward declarations.
>ARG

Andy Tanenbaum (ast@cs.vu.nl)

fnf@estinc.UUCP (Fred Fish) (07/19/89)

In article <2885@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
>In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
>>The additional code should
>>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>>a fast run-time library and a debugging library version. 
>
>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.

Amen!  Perhaps minix should standardize on my DBUG package recently posted
to comp.sources.misc.  Everything is done with macros, so there are no
messy #ifdef's to clutter up the code.  You can quickly train your eye
to simply ignore all the DBUG_<whatever> macros.  Since this package is
completely public domain, such use would have my complete blessing.

-Fred
-- 
# Fred Fish, 1835 E. Belmont Drive, Tempe, AZ 85284,  USA
# 1-602-491-0048           asuvax!{nud,mcdphx}!estinc!fnf

meulenbr@cstw01.prl.philips.nl (Frans Meulenbroeks) (07/19/89)

In article <2903@ast.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
]In article <573@prles2.UUCP> meulenbr@cstw01.prl.philips.nl (Frans Meulenbroeks) writes:
]>Some things which might be considered for the next release of Minix:
]>1 splitting the lib and commands directories.
]In principle I have no objection, although I like the idea of being able
]to get all the commands in one ls.  Another possibility is running up the
]size of the exec buffer from 2K to 3K or 4K.  If I were to split lib,
]what would be a good division.  The system calls, e.g. fork(2) could go
]in one directory, but I can't think of any logical division for the rest.
]For commands, I can't think of any logical division at all.  Just splitting
][a-m] and [n-z] doesn't seem real exciting.

I could not either.  However, I am getting a little bit irritated by not
being able to do an ls in /usr/src/lib, and by fsck always complaining
about it being a huge directory.

The only two splitups that come to mind are:
- fork off the games stuff and system stuff (so create sections 1/6/8)
- investigate whether there is already a workable POSIX draft on
  commands and split into posix/additional.
  If POSIX does not have something useful, perhaps X/Open can be used
  instead.
]
]>2 declare all local subroutines forward in the various files
]I think this is done for those cases where they routine non-ints.  For
]things returning int, it shouldn't matter.

Yes, but declaring them all would make things more visible.
Also, personally, I'd prefer to have all local routines declared
forward, not just the ones that are used before they are declared.
This also gives a fast overview of what to expect in a file.
]
]>3 create an include file for kernel, mm, fs which contains all
]>  global subroutines. This file should be included by every module.
]I don't understand what you mean.

Currently every routine which needs an external int function just
uses it, and presto.  I would prefer a declaration, just as for non-int
functions.  However, to avoid the scattering of lines like:
extern int send();
I think I'd prefer a file, say fs.h, which includes the declarations of
all exported functions.
]
]>4 explicitly type all subroutines
]You mean like void foo() vs. int foo()?  I suppose I could.  The ACK compiler
]accepts void.

Yes, that's one thing.  But what I actually meant was avoiding declarations
without any typing at all.  This happens in some places.  I know that if you
declare a function without a return type, the return type defaults to
int.  However, I do not like it.  It reminds me too much of Fortran,
where the first letter of an undeclared variable determines its type :-)
]
]>5 Use function prototypes in all forward declarations.
]>ARG
Is this a yes or a no ? :-)
]
]Andy Tanenbaum (ast@cs.vu.nl)


Frans Meulenbroeks        (meulenbr@cst.prl.philips.nl)
	Centre for Software Technology
	( or try: ...!mcvax!phigate!prle!cst!meulenbr)

nick@nswitgould.cs.uts.oz (Nick Andrew) (07/19/89)

in article <568@amms4.UUCP>, hjg@amms4.UUCP (Harry Gross) says:
> 
>>>The additional code should
>>>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>>>a fast run-time library and a debugging library version. 
> 
> Agreed.  However, there are other possibilities.  I have given this a bit of
> thought over the past day or two.  If a very fast, very limited output routine
> were created, and calls to it were dependent on a runtime Debug_State variable,
> it could be done without #ifdefs (which I don't like either, unless ABSOLUTELY
> necessary), and without too great an addition to the size of the code.

#ifdefs may be ugly when used indiscriminately; however, a simple #ifdef DEBUG
at the right places in the library could work wonders.  I think Andy was
referring to using #ifdefs to keep the PC and ST versions in sync ... which
is truly ugly, as people modifying the code for the PC cannot test the ST
version.

Compiling in built-in debugging dependent on a runtime variable is truly
loathsome; each module will be larger - don't forget, checking the arguments
to a function could involve as much work (code) as the function itself.  The
library is big enough already!  At least, with #ifdef DEBUGs in the library
source, if you suspect program X is making wonky calls to (e.g.) strcat (like
Arc does :-)), then when compiling you can include just the debugging strcat.s
without dragging in the whole debugging library.

Meta-point:  I would have said the major barriers to porting more code to
Minix were lack of some system calls; addressing limitations; and lack of
some standard commands (e.g. awk). Lack of system calls can be fudged.
Addressing limitations are more difficult to get around. And lack of
commands is annoying.

Differences in some ordinary system calls can get you into trouble, such
as the "r+", "w+" and "a+" modes in fopen(), the lack of a 3-argument
open(), and incorrect handling of O_APPEND.  These are the problems I
look for first when porting!
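
One way the missing 3-argument open() might be fudged (a sketch, not taken
from any actual port; the name open3 is made up) is with creat():

```c
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical fudge for a system with only 2-argument open(): if the
   caller asked for O_CREAT, create the file first with creat(), then
   reopen it with the remaining flags. */
int open3(const char *path, int flags, int mode)
{
    if (flags & O_CREAT) {
        int fd = creat(path, mode);   /* note: truncates an existing file */

        if (fd < 0)
            return -1;
        close(fd);
    }
    return open(path, flags & ~(O_CREAT | O_TRUNC));
}
```

The truncation by creat() is exactly the sort of subtle semantic difference
warned about above: an open with O_CREAT but without O_TRUNC should leave an
existing file's contents alone, and this fudge does not.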

	Regards, Nick.
-- 
			"Zeta Microcomputer Software"
ACSnet:    nick@nswitgould.cs.uts.oz	nick@ultima.cs.uts.oz
UUCP:      ...!uunet!munnari!ultima.cs.uts.oz!nick
Fidonet:   Nick Andrew on 3:713/602 (Zeta)

dal@syntel.UUCP (Dale Schumacher) (07/19/89)

In article <19978@louie.udel.EDU> Ronald Lamprecht <V61%DHDURZ1.BITNET@cunyvm.cuny.edu> writes:
> Concerning my proposal to introduce debugging stuff into the library I saw
> the following responses:
> 
> Andy Tanenbaum (ast@cs.vu.nl) wrote:
>> Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>> that wild about the idea.
> I admit that #ifdef #else #endif constructs are ugly and hard to read, because
> they contain two different solutions of the same problem. But common
> #ifdef DEBUG #endif shouldn't be that ugly. They contain unique code that may
> even be of some interest if you omit it, because they show common sources of
> errors.
[other comments deleted for brevity]

I've long thought that it would be useful to have a library with heavy
runtime checking, but I would NOT want to pay ANY overhead price in my
final code.  What I would suggest is to create a library with conditionally
compiled bullet-proofing code built in, then compile with debugging on
to get a debugging library and with debugging off to get a library to link
with your final production code.  Keeping the sources together would help
prevent divergence between the two libraries (bugs in one, but not the
other being particularly nasty).  Since I'm now beginning the process of
rewriting part of dLibs (my standard C library for the ST) to bring it
up to X3J11 and POSIX conformance (and port it to Minix), I may very well
include debugging code as described here.  Remember, one very nice trick
for conditionally including debugging code is the DEBUG(x) and TRACE(x)
macros used in the kernel code.  That seems to be a fine way to solve
the #ifdef..#endif problem, yet still gain the benefit of aggressive
error checking during program development.  I feel VERY strongly that
I want all that error checking OUT OF THE WAY when I go to create my
final program.

\\   /  Dale Schumacher                         399 Beacon Ave.
 \\ /   (alias: Dalnefre')                      St. Paul, MN  55104-3527
  ><    ...umn-cs!midgard.mn.org!syntel!dal     United States of America
 / \\   "What is wanted is not the will to believe, but the will to find out,
/   \\  which is the exact opposite." -Bertrand Russell

ast@cs.vu.nl (Andy Tanenbaum) (07/21/89)

In article <061989A1660@syntel.UUCP> dal@syntel.UUCP (Dale Schumacher) writes:
>I want all that error checking OUT OF THE WAY when I go to create my
>final program.

It is sort of like going to sailing school and wearing a life preserver
during the theory part on land, but refusing to wear it when you get in
the boat.

I think the right way to do it is with the ASSERT() macro.  There is a large
theoretical literature about program correctness using assertions, so this
way is clearly not a hack.  If done right, it also has documentation value.
And it generates no life preserver anywhere near the water.
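
Such an ASSERT() macro might be sketched as follows (an illustration, not the
actual kernel macro; the sample function idiv is invented):

```c
#include <stdio.h>
#include <stdlib.h>

/* Assertion checking is compiled in by default; defining NDEBUG at
   compile time removes it entirely. */
#ifdef NDEBUG
#define ASSERT(expr)  ((void) 0)
#else
#define ASSERT(expr) \
    ((expr) ? (void) 0 \
            : (fprintf(stderr, "assertion \"%s\" failed: file %s, line %d\n", \
                       #expr, __FILE__, __LINE__), abort()))
#endif

/* example use: the assertion documents and enforces the precondition */
int idiv(int num, int den)
{
    ASSERT(den != 0);
    return num / den;
}
```

The stringizing operator (#expr) assumes an X3J11-style preprocessor; older
cpps need the "expr" spelled out by hand in the message.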

Andy Tanenbaum (ast@cs.vu.nl)

bobs@crdec3.apgea.army.mil (J.R.Suckling) (07/22/89)

V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald Lamprecht) writes:
> 
> Concerning my proposal to introduce debugging stuff into the library ...

How about this:

    Once I wrote a UNIX plot filter that read UNIX plot and wrote UNIX
    plot.  My problem was that the off-the-shelf tools used the same
    function names for both input and output.  Since I wanted to use
    the standard reader program from one generic output filter, I
    stole (we have the license) the source for the reader and added a
    set of macros (i.e. #defines) to map one of these sets of 'names'
    to another set of 'na_es'.  I just added the underscore in a place
    that would not break anything.

Suppose I were to make an #include file which, for example,

#define strcpy	st_cpy
#define strtok	st_tok
	... ,
	
redefined all of, or a set of, the libc interface names with the extra
'_'.  Then I could write an interface library that would allow me to
test pre- and post-conditions on libc without touching libc at all.

char *st_cpy( s1, s2 )	/* must return char *, not the K&R default int */
  char *s1, *s2;
{		/* test pre-conditions */
	assert( s1 != NULL );
	assert( s2 != NULL );

	isastring( s2 ); /* user-defined check; may only test up to some maximum length */

		/* do the work. */
	strcpy( s1, s2 );

		/* test post-conditions */

	assert( strcmp( s1, s2 ) == 0 );

		/* return the required value. */
	return s1;
}

I guess it is clear that you cannot use the debugging #include file
when writing the debug library.  :-)

To add the debugging, all I would then need to do is add the #include
to each source file and the debugging library to each linker call.
And then run tests.

I would start to do this for you all, but I only have the book.  Some
day I will break down and get a home computer and it will run MINIX.

If I had a lint I would use it first, but lint is a big program; I
know that is why lint was not ported back to the PDP-11s running
2.10BSD.  It is true that a good set of pre- and post-condition tests
and good test data can catch bugs that lint will miss.  However, it is
hard to get good test data.

      ___                        < BobS @ ApgEA.Army.Mil. >
 __  /   ` ,    U.S. Army Information Systems Command -- Aberdeen
'/_) \_   /_    J. Robert Suckling      / The opinions expressed are
/__)_(_)_/\)    ASQNC-TAB-T  E5234     /  solely my own and do not reflect
       \        APG, Md,21010-5423    /   US Army policy or agreement.
   \___/        Work: (301)671-3927, 671-2496

I vote for floating point.
I like numbers from 0 to 255, I like the numbers between these as well.

dal@syntel.UUCP (Dale Schumacher) (07/24/89)

[ast@cs.vu.nl (Andy Tanenbaum) writes...]
> In article <061989A1660@syntel.UUCP> dal@syntel.UUCP (Dale Schumacher) writes:
>>I want all that error checking OUT OF THE WAY when I go to create my
>>final program.
> 
> It is sort of like going to sailing school and wearing a life preserver
> during the theory part on land, but refusing to wear it when you get in
> the boat.

I don't think the analogy is valid, partly because the life preserver
doesn't make the boat go half as fast (or worse).

The C standard library has a long tradition of minimal error checking.
This gives nice efficient code, but can make debugging a real hassle.
The way I look at it, there are two ways that a library function can
be given parameters which will cause an error.  The first is through
a data-dependent bug.  A certain input stream creates an error condition,
like the user entering "0" when asked "Enter number of people to divide
the profits by:".  It is difficult to catch this kind of error during
testing, since the error condition may never occur in the test input.
The second kind of problem is a bad call in the program, often due to
faulty program logic or mismatched parameters.  This kind of error is
easier to discover, since it will likely cause the program to function
improperly, but the symptoms may not be clearly indicative of the actual
problem, since the library function's behaviour is "undefined" for such
a condition.

The solution to the first problem involves good run-time checking of all
input data to assure that it is within acceptable ranges.  THIS KIND OF
RUN-TIME CHECKING SHOULD STAY IN THE PROGRAM.  Debugging the second kind
of problem can be helped by heavy run-time argument checking in all of
the library functions, but once the program is reasonably stable, THIS
KIND OF CHECKING CAN BE REMOVED, since we now assume that we will not
call a library function with bad parameters.  Of course there will likely
be areas we haven't tested well enough, but if the program starts behaving
strangely under some circumstances, we can re-link it with the error
checking library and may catch the bug that way.

> I think the right way to do it is with the ASSERT() macro.  There is a large
> theoretical literature about program correctness using assertions, so this
> way is clearly not a hack.  If done right, it also has documentation value.
> And it generates no life preserver anywhere near the water.

ASSERT() is very useful, and I fully agree that it should be used far
more than it usually is.  However, it doesn't do the whole job.  It
serves the function of guarding against error conditions where they can
be checked ahead of time, but there are other times where something
which gives a little more information (like parameter values) or just
prints a warning (rather than aborting the program) would be more
helpful.  In these cases, I think the following is more flexible.

#ifdef NDEBUG
#define	DEBUG(x)	/* x */
#else
#define	DEBUG(x)	x
#endif

There is a place for both ASSERT() and DEBUG() in an error-checking library.
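
As a sketch of how the two might live together in one routine (copy_str and
its warning format are invented for illustration, not taken from dLibs or the
Minix library):

```c
#include <stdio.h>
#include <string.h>

#ifdef NDEBUG
#define DEBUG(x)  /* x */
#else
#define DEBUG(x)  x
#endif

/* Hypothetical bounded string copy: the DEBUG() block prints a warning
   with the offending parameter values instead of aborting, then the
   function carries on and truncates. */
char *copy_str(char *dst, const char *src, int limit)
{
    DEBUG(if ((int) strlen(src) >= limit)
              fprintf(stderr, "copy_str: src is %d bytes, limit is %d\n",
                      (int) strlen(src), limit);)
    strncpy(dst, src, (size_t) (limit - 1));
    dst[limit - 1] = '\0';
    return dst;
}

/* tiny self-check (hypothetical) */
static int copy_demo(void)
{
    char buf[4];

    copy_str(buf, "toolong", 4);   /* warns (unless NDEBUG), then truncates */
    return strcmp(buf, "too") == 0;
}
```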

> Andy Tanenbaum (ast@cs.vu.nl)

PS.  The inverted logic of NDEBUG means that by default debugging or
assertion code is included, which seems to be a GOOD THING (tm).  If
you want faster (and less safe) code you must explicitly ask for it.

\\   /  Dale Schumacher                         399 Beacon Ave.
 \\ /   (alias: Dalnefre')                      St. Paul, MN  55104-3527
  ><    ...umn-cs!midgard.mn.org!syntel!dal     United States of America
 / \\   "What is wanted is not the will to believe, but the will to find out,
/   \\  which is the exact opposite." -Bertrand Russell

ONM64%DMSWWU1A.BITNET@cunyvm.cuny.edu (Kai Henningsen) (08/15/89)

>From:         Andy Tanenbaum <ast@CS.VU.NL>

>In article <19654@louie.udel.EDU> V61%DHDURZ1.BITNET@cunyvm.cuny.edu (Ronald
> Lamprecht) writes:
>>The additional code should
>>be included in #ifdef DEBUG -- #endif so that it is possible to generate
>>a fast run-time library and a debugging library version.
>
>Argh!  I think #ifdefs make code very hard to read and ugly.  I am not all
>that wild about the idea.
>
>Andy Tanenbaum (ast@cs.vu.nl)

How about assert() or some low-level variant thereof (that doesn't use
fprintf)? That would still use #ifdef DEBUG, but make the code more
readable. I think that wouldn't be too hard to do.
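
A sketch of such a low-level variant, using write() directly so that no stdio
is pulled into the program (the names LL_ASSERT, ll_fail and byte_value are
invented here):

```c
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

/* Report the failure with a raw write to file descriptor 2 (stderr);
   no stdio buffering or formatting is involved. */
static void ll_fail(const char *msg)
{
    (void) write(2, msg, strlen(msg));
    abort();
}

#ifdef NDEBUG
#define LL_ASSERT(expr)  ((void) 0)
#else
#define LL_ASSERT(expr) \
    ((expr) ? (void) 0 : ll_fail("assertion failed: " #expr "\n"))
#endif

/* example use */
int byte_value(int v)
{
    LL_ASSERT(v >= 0 && v <= 255);
    return v;
}
```

The message is built entirely at compile time from the stringized expression,
so the failing path needs nothing beyond write() and abort().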

Kai
onm64 @ dmswwu1a . bitnet

ast@cs.vu.nl (Andy Tanenbaum) (08/26/89)

In article <20046@louie.udel.EDU> ONM64%DMSWWU1A.BITNET@cunyvm.cuny.edu (Kai Henningsen) writes:
>How about assert() or some low-level variant thereof (that doesn't use
>fprintf)? That would still use #ifdef DEBUG, but make the code more readable.

I definitely agree that assert() is the way to go.  It is theoretically
elegant and efficient as well, since you can turn the checking on and off
with a compile time flag.

Andy Tanenbaum (ast@cs.vu.nl)