med@druhi.ATT.COM (Myron Drapal) (08/04/88)
From Computer Shopper August 1988, page 214: "Atari to Release Fourth Edition of TOS" ..... In an exclusive interview, chief TOS developer Roy J. Good stated that many planned improvements had to be cancelled due to lack of downward compatibility with applications written for the older TOS. Specifically mentioned was the update for the GEMDOS function malloc ($48), which is used to allocate memory for applications. Many programmers had written their own code to overcome the deficiencies of the older malloc, and this code will not run properly with the newer malloc in place. Roy Good called programs containing such code "misbehaved," products of "just flat-out bad programming." .....

***Flame On***

Seems to me that this is another instance of the pot calling the kettle black! Atari, with their bug-riddled malloc code (I should really include all of TOS here, since it's all pretty full of bugs), admonishing developers for fixing malloc any way they could. If it didn't take Atari 2+ years for each new version of TOS to hit the streets, maybe these developers could wait for a fixed version of malloc. I guess I really am taking exception to Roy J. Good being the "chief developer", since he really hasn't even been there a year yet. I really don't think that he has a clue as to what happened between the developers and Atari in those early days. Some expert...

***Flame Off***

Just so I don't leave you without some good news: the rest of the article was very informative. It describes the fixes that are being included in the new version of TOS. Maybe I'll see one by Christmas 1990...

Myron Drapal
AT&T Denver
att!druhi!med
apratt@atari.UUCP (Allan Pratt) (08/06/88)
From article <3308@druhi.ATT.COM>, by med@druhi.ATT.COM (Myron Drapal):
> From Computer Shopper August 1988, page 214:
> [deleted]

Please, please, please do not take quotes in a rag like that as Gospel. There were at least half a dozen howlers in that article, and this was one of them. The truth of that "quote" is this: I tried to fix Malloc so there would be no limits on the number of blocks you could have around, by using the memory above and below the block to keep track of the blocks themselves. I had to back out this improvement, because lots of programs violated the commandment:

        Thou Shalt Not Screw With Memory Thou Ownest Not

The Malloc you will get in the new ROMs is vastly improved, but still has limits because I was forced to go on using the OSPool. The OSPool is statically allocated with a fixed size. It is still expandable with FOLDR100.PRG, but the fact remains that it can be filled up. So the summary is, "There were some improvements we tried but had to back away from because of ill-behaved programs."

Another of those improvements was getting rid of clearing memory before launching a program. In 11/20 ROMs (pre-Mega), the code which did this was REALLY SLOW, but only had a max of <1M to clear. For a Mega, Landon Dyer sped up the code, but it now has a max of just under 4M to clear. This takes a nontrivial amount of time. Nobody promised that the space after your declared BSS and before your initial stack pointer would be clear, but it has always been that way and some programmers chose to depend on it. This is called a "settled expectation" and we have to live with it in the name of compatibility.

============================================
Opinions expressed above do not necessarily   -- Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else.  ...ames!atari!apratt
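Pratt's "settled expectation" story has a portable moral for the application side: never assume memory you receive has been zeroed. A minimal sketch of the defensive habit, using plain malloc/memset as stand-ins for the TOS calls (the function name is invented for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: instead of relying on the loader having cleared the space
 * above BSS, a program zero-fills its own buffers explicitly. */
char *get_clean_buffer(size_t n)
{
    char *p = malloc(n);         /* contents are indeterminate */
    if (p != NULL)
        memset(p, 0, n);         /* make the zero-fill explicit */
    return p;                    /* calloc(1, n) would do the same */
}
```

Code written this way runs identically whether or not the OS clears memory before launch, so it costs the OS vendor nothing to drop the clearing later.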
good@atari.UUCP (Roy Good) (08/06/88)
in article <3308@druhi.ATT.COM>, med@druhi.ATT.COM (Myron Drapal) says:
> From Computer Shopper August 1988, page 214:
> "Atari to Release Fourth Edition of TOS"
> .....
> [text deleted - you can read the original]
> ***Flame On***
> [text deleted - you can read the original]
> ***Flame Off***
> Just so I don't leave you without some good news, the rest of the
> article was very informative. It describes the fixes that are being
> included in the new version of TOS. Maybe I'll see one by Christmas
> 1990...
> Myron Drapal
> AT&T Denver
> att!druhi!med

Well, I just couldn't let that pass, now could I?

First, I was never shown a review copy of the subject article. If someone wants to call me "chief developer" without checking first, I can't really control that, now can I, Mr. Drapal? [My business card says very clearly "Manager, Product Development", and I would never use the term "chief developer" - those days are past, and with other companies.]

As to abuse of 'malloc', we did find several programs which, having allocated memory via 'malloc', then went and accessed beyond it, or assumed successive 'malloc's would always return contiguous chunks of memory, and relied on it. Now I consider that to be lazy, bad programming practice. I have written a LOT of code in the past, at the OS, utility and business application level. Some of that code is still in daily use, running real-world businesses, five or more years later. My programming experience taught me never to assume anything and to abide by common-sense rules of design. (Please don't flame - it's only my opinion, and I know there are a lot of people who don't share it.)

I am certainly in no position to defend earlier releases of TOS. The Mega TOS has been shipping with STs for many months, so the "2 years between releases" is not quite accurate. I have tried to be as honest as I can in postings to this net, while maintaining a degree of calm which is sadly lacking in many postings.
I DO appreciate frustration which may have been caused in the past, because I, too, have been on the receiving end of buggy OS and tools. I might point out that Apple hasn't had very good press recently, but two wrongs don't make a right, and I'm certainly not intimating that just because Apple appears to have made mistakes then it's OK for Atari to do the same.

But I stand by my comment of having to remove certain improvements and bug fixes because "misbehaved" programs broke. There is an incredible amount of software out there, and I would guess that the vast majority of it is used by people that wouldn't know one end of a source file from the other. That is why compatibility is so important - people have money invested in their software, and would like to take advantage of the improved performance and features of the new TOS but don't want to have to pay again to buy new versions of the software, even if they were to be made available.

Compatibility is always an issue, and at some point it becomes necessary to 'bite the bullet' and say "Sorry, this is where we make the break". We have decided, and rightly so, I believe, that the time to do it is not yet. It may make sense to do it on the 32-bit line, where there are bound to be some problems with some software. We will address that issue quite soon, but need to survey the more popular products, talk to the developers, and THEN decide how much compatibility is mandatory.

As for the parting shot about delivery of the new TOS, we have had it in Beta for a couple of months, and expect to make a release to Developers next week. Lead times to mask ROMs are quite long at present, so I don't expect the new ROMs to be in production until late this year. But certainly before Christmas 1990!

I hope this (lengthy) response clarifies the situation!

-----------------------------------------------------------------------------
Roy J. Good
Product Development, Atari Corporation

Views expressed are my own. Atari may agree or disagree; they have the right.
-----------------------------------------------------------------------------
jrd@STONY-BROOK.SCRC.SYMBOLICS.COM (John R. Dunning) (08/08/88)
Date: 5 Aug 88 17:43:31 GMT
From: imagen!atari!apratt@ucbvax.Berkeley.EDU (Allan Pratt)

    [... clearing all of memory ...] This takes a nontrivial amount of
    time. Nobody promised that the space after your declared BSS and
    before your initial stack pointer would be clear, but it has always
    been that way and some programmers chose to depend on it. This is
    called a "settled expectation" and we have to live with it in the
    name of compatibility.

I believe it's a bug that programmers rely on undocumented features like that. Perhaps you want to allow those old (buggy) programs to continue to run; I would too, in Atari's position, but please, don't penalize the rest of us for those vendors' bugs. I'd be completely pleased if you'd provide a switch to turn the behaviour off. Lacking that functionality, those of us who care about such things (maximizing performance) will continue to bludgeon the supplied code over the head.
med@druhi.ATT.COM (Myron Drapal) (08/10/88)
In article <1104@atari.UUCP>, good@atari.UUCP (Roy Good) writes:
> in article <3308@druhi.ATT.COM>, med@druhi.ATT.COM (Myron Drapal) says:
> > From Computer Shopper August 1988, page 214:
> > "Atari to Release Fourth Edition of TOS"
> > .....
> > [text deleted - you can read the original]
> > ***Flame On***
> > [text deleted - you can read the original]
> > ***Flame Off***
> > Just so I don't leave you without some good news, the rest of the
> > article was very informative. It describes the fixes that are being
> > included in the new version of TOS. Maybe I'll see one by Christmas
> > 1990...
> > Myron Drapal
> > AT&T Denver
> > att!druhi!med
>
> [text deleted - you can read the original]
>
> As for the parting shot about delivery of the new TOS, we have had it in Beta
> for a couple of months, and expect to make a release to Developers next week.
> Lead times to mask ROMs are quite long at present, so I don't expect the
> new ROMs to be in production until late this year. But certainly before
> Christmas 1990!
>
> I hope this (lengthy) response clarifies the situation!
>
> -----------------------------------------------------------------------------
> Roy J. Good
> Product Development, Atari Corporation
>
> Views expressed are my own. Atari may agree or disagree; they have the right.
> -----------------------------------------------------------------------------

1) Malloc has never worked properly, and by the sounds of the Computer Shopper article and your own confessions, it really won't work properly in the near future. Malloc, as defined in the UNIX environment (or by K&R, page 173, if you prefer), will allow you to allocate or free memory in any machine-compatible size in any order until you run out. Does the TOS version do this? Will it ever? I realize that you might take exception to this definition of Malloc, but Atari doesn't seem to believe in manuals or written documentation of any kind (plausible deniability).
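The contract Drapal is describing can be sketched in a few lines of portable C. Note the frees below are deliberately NOT in reverse allocation order, and a freed hole is immediately reused - exactly the behavior the K&R/UNIX definition guarantees and the TOS Malloc of the day reportedly did not:

```c
#include <stdlib.h>

/* Sketch of the K&R malloc contract: blocks of any size may be
 * allocated and freed in any order until memory runs out.
 * Returns 1 if the sequence completed. */
int exercise_allocator(void)
{
    void *a = malloc(100);
    void *b = malloc(5000);
    void *c = malloc(7);
    if (!a || !b || !c) return 0;
    free(b);                 /* free the middle block first */
    void *d = malloc(4000);  /* the hole must be reusable */
    if (!d) { free(a); free(c); return 0; }
    free(a);                 /* arbitrary order from here on */
    free(c);
    free(d);
    return 1;
}
```

An allocator that only supports last-allocated/first-freed order (or caps the number of live blocks at a couple of dozen) fails this sequence almost immediately.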
2) I admit the examples that you and A. Pratt provided *do* show "misbehaved" code, but can you really be sure that this is not a side-effect of a permanently broken Malloc? Don't get me wrong here, I'm not defending the coding style, since clearly this code would fail in most UNIX environments. "Thou Shalt Not Screw With Memory Thou Ownest Not" (quote from A. Pratt) is a fine motto, if a simple corollary of it is "Malloc will allocate any free memory, and free will deallocate it in any order." Having a 1 or 2 meg system on which you can only malloc about 20 times, and must free in exactly the reverse order of allocation, is a ***JOKE***.

3) Two wrongs don't make a right, but one good right will make up for a number of wrongs. Talk to Commodore or Apple users (I don't have one, but I know quite a few people who do). Timely updates provided to the OS, with disks *AND MANUALS* provided at a reasonable charge (media costs + small charge), will go a long way towards making your users and developers happy.

4) As far as I'm concerned, the "2+" year clock is still running on the next set of TOS ROMs. The ones provided in the MEGA series never really made it to the public, due to problems, some much worse than those they fixed. I'm not asking for a free upgrade: just a reasonably priced upgrade in a shorter amount of time. The fixes listed for this new version of ROMs are good, but it shouldn't have taken "2+" years for these fixes to become available.

5) I *DO* appreciate your situation regarding buggy OS and tools, but I do feel that if Atari wants to keep the ST alive with third-party software (it may be too late for this already, re: good developers going elsewhere), they had better get their act together and *SUPPORT THESE DEVELOPERS*. Without them, the ST will die a slow and painful death.

6) I've heard the "Any day now" story from Atari much too long now. I'll believe the TOS ROMs by "late this year" when I can go to my local dealer and buy them. Until then...
Waiting for MINIX so I don't have to put up with the ol' GEM dog no more...

I know I'll probably get some hate mail from a few of you True Blue Atari owners out there who will claim that I'm just Atari bashing again. Well, it might be true, but I really hate to see such a wonderful machine go down the tubes due to the stupidity of a few at Atari Corp.

Myron Drapal
AT&T Denver
att!druhi!med
caa@midgard.UUCP (Charles A Anderson) (08/10/88)
This memory clearing stuff reminds me of an interesting discovery I made a while ago. I had a small (very small) program that I wrote to test out the AssemPro assembler that I had just bought; all the program did was switch resolutions from med to low and low to med. The odd thing was that when I assembled it with the pc-relative option on, there was no pause before the program ran like there was when I ran the non-pc-relative version. I thought that this meant that the relocator was extremely slow; however, I have since been informed that the relocator is in actuality quite fast, and that a possible explanation is that pc-relative programs do not have memory cleared before they are executed. Anybody got the real story???
--
Charles Anderson              | People of the Earth can you hear me?
caa@midgard.mn.org            | Came a voice from the sky on that magical night
...!amdahl!bungia!midgard!caa | And in the colours of a thousand sunsets
                              | They traveled to the world on a silvery light
                              |      -Billy Thorpe, Children Of The Sun
leo@philmds.UUCP (Leo de Wit) (08/11/88)
In article <456@midgard.UUCP> caa@midgard.mn.org (Charles A Anderson) writes:
[some lines deleted]...
>I thought that this meant that the relocator was extremly slow, however I
>have since been informed that the relocator is in actuality quite fast, and
>that a possible explanation is that pc relative programs do not have memory
>cleared before they are executed. Anybody got the real story???

You hit the nail on the head. The real story is that when the word at 0x1a of the binary is non-zero, no relocation is done; the memory clearing is also not done (which is incorrect as far as the bss part is concerned, read your K&R), which you may consider a bug or a feature, and the program file (which was opened to be read into memory) is not closed (a bug). I posted an article some days ago about the format of ST binaries; you can find more info there. Your assembler obviously set the word at 0x1a when assembling 'pc-relative'.

I find the expression 'pc-relative' a bit strange in this context; pc-relative is an addressing mode on the 68000 which can be used only for reading. If you want to write to a location in this way you have to load it into an address register first (or something similar). Example:

        lea.l   flag(pc),a2
        st      (a2)

flag    dc.b    0

Perhaps what is meant is address register indirect with displacement. Example:

* a2 already loaded with start of a data area
        st      flag(a2)

flag    equ     datarea+20

This is also used by many of the C compilers; at the start of a program an address register is loaded with the start of a data segment (data or bss), or preferably 32K further. This addressing mode can address 64K; if the segment is larger, absolute addressing is used generally (AND relocation!). The exact story of how registers are initialized by Pexec I'll tell in a follow-up.

Leo.
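Based on Leo's description of the header, a tool that toggles the word at 0x1a (the "no relocation" flag) only has to patch two bytes. A minimal sketch, operating on a header image in memory rather than the .PRG file itself; the function name is invented, and the 68000 is big-endian, so the high byte goes first:

```c
/* Hypothetical helper: set the word at offset 0x1a of a GEM
 * executable header. Per Leo's post, a non-zero value suppresses
 * relocation (and, as a side effect, the memory clear). */
#define ABSFLAG_OFFSET 0x1a

void set_absflag(unsigned char *header, unsigned short value)
{
    header[ABSFLAG_OFFSET]     = (unsigned char)(value >> 8);   /* high byte */
    header[ABSFLAG_OFFSET + 1] = (unsigned char)(value & 0xff); /* low byte  */
}
```

A real patcher would read the 0x1c-byte header from the file, call this, and write it back.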
hyc@math.lsa.umich.edu (Howard Chu) (08/12/88)
I dunno 'bout you, but personally, I feel pretty disappointed that an improvement wasn't made, to accommodate the bad programming practices of other developers. I don't want to knock the software houses writing software for the STs, but really now - these are supposed to be professional programmers, right? They're supposed to know what they're doing, and when they later discover a problem with previously written code, don't they have a responsibility to fix it themselves? Developers have been clamoring for these bug fixes for an intolerable length of time. Wouldn't it have been "better" (sorry for such an insubstantial, abstract term here, but these are really my feelings more than deeply reasoned thoughts coming out here...) to have fixed the problem and convinced the folk with ill-behaved code to fix it, and release updates of their own? Certainly these developers don't expect to simply write a program, test it a bit, release it, and never be bothered by it ever again, do they? Bitrot is something we all have to deal with; releasing updates to existing software shouldn't be considered an especial hardship. It's a common-sense expectation, like a fact of life.

I think this is making a Very Bad statement about Atari. It's telling the world that they're willing to be less than the best. Not only that, it's telling people who buy Atari products that they also must settle for less than the best, or else they should go find some other product to worry about. This displays the same attitude I dislike so much in Sun. ("You can't afford a Sun-4/260? No problem, here's a Sun-4/110, cheaper 'cause it's made with low-spec parts, slower parts, etc. Sun - because you don't always need the best.")

Perhaps I shouldn't be so surprised at this. I dunno for sure. When I first laid my eyes on the ST, I goggled. "Wow, what a slick machine!" I know I'm not the first to have been fanatically devoted to Atari, and may not even be the last.
The ST was just another amazing work of computing machinery from a company I practically worshipped. Why else would people always be starting "my computer is better than yours" wars - we honestly believed that, yes, this computer *is* the best. But that attitude is long gone now. Yes, I feel somewhat disillusioned. Ah well. I'm not gonna trash all my work on this now, though. It's still a decent machine, fun to work on, does neat things that no other micro does. But dang, it's lost the gleam. That spark of ingenuity, that air of superiority, that excitement is all gone. It's just another box on the market. hohum.

[If you couldn't tell by now, these are Very Definitely my Very Personal Opinions. Take 'em any way you see fit, while keeping that in mind...]
--
  / /_  , ,_.   Howard Chu
 / /(_/(__      University of Michigan
/               Computing Center    College of LS&A
'               Unix Project        Information Systems
kurth@sco.COM (Kurt Hutchison) (08/13/88)
In article <399@clio.math.lsa.umich.edu>, hyc@math.lsa.umich.edu (Howard Chu) writes:
> I dunno 'bout you, but personally, I feel pretty disappointed that an
> improvement wasn't made, to accomodate the bad programming practices
> of other developers. I don't want to knock the software houses writing
> software for the ST's, but really now - these are supposed to be professional
> programmers, right? They're supposed to know what they're doing, and when
> they later discover a problem with previously written code, don't they have
> a responsibility to fix it themselves?
> ...
> somewhat disillusioned. Ah well.

Expect perfection and you will always be disappointed.

The above article reminds me of a story about IBM and Intel. IBM released MS-DOS and used a few (software) interrupts that were clearly documented as being reserved in the original 8086 doc. Along comes the 286 that uses those interrupts for internal purposes and breaks MS-DOS. Six months later a new rev of the 286 (the one that finally made it to market, by the way) comes along that does not use those interrupts, and DOS can run unchanged. IBM PC sales accounted for more than 50% of Intel's CPU chip sales at the time.

I for one agree with the argument that releasing a new OS that breaks old software is a bad idea. Theological arguments about the proper behavior of programs always take a back seat to compatibility concerns. I work in the software business for a successful company, and we have historically always supported old programs no matter how much it hurts. Compatibility is a key issue for many computer products. Remember the original PC clone wars? I worked for a hardware company then that made a PC clone which they thought was "better" than the PC. It didn't sell at all because it wasn't exactly compatible: you had to buy a special version of DOS for it, and "ill-behaved" programs didn't run.
Ill-behaved programs are the rule rather than the exception; most programs that perform really well were ill-behaved. Lotus 1-2-3 is a good example. Flight Simulator is another (no longer considered a good performer, but it was revolutionary at the time). The lessons of compatibility are hard ones for engineers to learn, because they are often more concerned with "cleanliness of code" or "proper behavior".

While compatibility is not a universal truth, compatibility between OS releases is a good thing. Would you be willing to wait six months for new releases of all of your software so that it would run again? No small business would; the hacker might. Small businesses and non-computer-literate people are where the small computer market is. I therefore think that Atari is doing the right thing here - a little late perhaps, but the philosophy is right.

- kurt
--
-------------------------------------------------------------------------
Kurt Hutchison                                  The Santa Cruz Operation
Software Engineer
Trumpet player, synth player, pianist, cyclist, philosopher at large
The above opinions (if any) are my own
cmcmanis%pepper@Sun.COM (Chuck McManis) (08/13/88)
In article <399@clio.math.lsa.umich.edu> (Howard Chu) writes:
>This displays the same attitude I dislike so much in Sun. ("You can't afford
>a Sun-4/260? No problem, here's a Sun-4/110, cheaper 'cause it's made with
>low-spec parts, slower parts, etc. Sun - because you don't always need the
>best."
> /_  , ,_.   Howard Chu

**FLAME ON**

Needless to say this probably doesn't belong here in this group, but I did take offense to the above comment. Howard states that the Sun 4/110 is less expensive because we (Sun) use lower grade parts in it. Frankly, he is full of shit. He apparently doesn't consider the differences in such things as power supplies, 12-slot versus 3-slot VME cages, completely different CPU boards with completely different cache architectures, and a slightly slower clock rate which doesn't require the fastest parts available - not that they aren't prime, top-of-the-line parts, but they don't have to wiggle their bits quite so fast and are thus easier to make. (And, some believe, more reliable.)

**FLAME OFF**

--Chuck McManis
uucp: {anywhere}!sun!cmcmanis   BIX: cmcmanis   ARPAnet: cmcmanis@sun.com
These opinions are my own and no one elses, but you knew that didn't you.
rosenkra@Alliant.COM (Bill Rosenkranz) (08/13/88)
---
i'm sorry, i just can't tolerate this discussion anymore...

by not fixing some very nasty bugs in something as basic as memory allocation, you don't get a choice. no one is forcing you to get the new ROMs. you can always keep the old ones and never upgrade. it's just plain stupid to take the stance that we should maintain backwards compatibility for the sake of programmers that tried to fix the deficiencies themselves. if that were the case, we all would still be using 8-bit systems with 4k memory and no new systems/software would ever be produced. i'd rather have a choice. now you have no choice. how many of the 500,000+ STs do you think will ever see the new ROMs anyway? if atari really wants to please everyone, then offer 2 ROMs: one with and one without :^)

the ST is a very nice system (even though they recently raised the prices to developers). i don't think even the bitterest of developers out there could deny that. atari corp itself needs a slap upside the head every once in a while, though...

(this is not directed at h.chu in particular...it was just a convenient place to jump in...)

-bill
dag@chinet.chi.il.us (Daniel A. Glasser) (08/13/88)
I think this problem could be solved quite simply by using one of the reserved fields in the GEMDOS executable program header. One of these fields could be used for telling the OS that it does not have to clear the TPA before executing the program. This should not be a magic number, as there may be other features that programs will selectively enable in the future, but at least two disjoint nybbles should be set aside to hold a magic pattern that distinguishes the value from random garbage. This would be downward compatible, as the older versions of TOS ignore this field anyway, and the programs that set it would not be depending on the clear not being performed. Commercial programs could have this field set up if they don't need the memory to be cleared.

For extended functionality, programs that required that functionality could have a different initial magic number (hence a different header size) which would flag to the system that this program used this functionality. Older versions of TOS would not load these programs. A third class of programs would use extended functionality only when it was available. A capabilities report could be added into the GEMDOS functions, and these programs would have the old magic number but possibly set some of the flags in the same field that holds the "no-clear" flag.

As to the memory management scheme for eliminating the OSPOOL problems -- there are better schemes for doing it which don't break programs that make stupid assumptions about memory ownership and contiguity. Maybe another flag? Yes, operating systems CAN be improved and extended without breaking a significant number of applications, even when the OS is GEMDOS!

Note: I left Mark Williams Company's employ in early May and moved to Madison, WI. I have not found a USEnet site that is willing to give me an account up here yet, so I'm off the news for a while. I occasionally call into Chicago to read my usenet mail, but this only happens about once a month for now.
If anyone wants to reach me for questions or to yell at me for what I've just said, please call me at (608)273-8476 (nights) or write to me at 1022 D Sunnyvale Ln., Madison, WI 53713.

Note to A.Pratt (or anybody else) at Atari: Call me if you still have questions about the MWC 3.0 symbol table. Also, since I'm a registered developer in my own right (not just through MWC), is it possible for me to get a beta set of the new ROMs?

Daniel A. Glasser
--
Daniel A. Glasser                       dag@chinet.UUCP
One of those things that goes "BUMP!!! (ouch!)" in the night.
Alternate address -- CIS: 76505,1672 (if you must.)
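Glasser's "magic pattern in two disjoint nybbles" idea can be sketched concretely. The field layout, magic value, and flag bit below are all invented for illustration; the point is only that a validator can tell a deliberately-set flags field from leftover garbage because random data rarely matches a fixed pattern in two separated nybbles:

```c
/* Hypothetical encoding of a reserved header longword: two disjoint
 * nybbles carry a fixed magic pattern, the low bits carry feature
 * flags such as "don't clear the TPA". All values are illustrative. */
#define FLAG_MAGIC      0x0A050000UL  /* pattern in two separated nybbles */
#define FLAG_MAGIC_MASK 0x0F0F0000UL  /* which nybbles hold the pattern   */
#define FLAG_NO_CLEAR   0x00000001UL  /* feature bit: skip the TPA clear  */

unsigned long make_flags(unsigned long features)
{
    return FLAG_MAGIC | (features & 0xFFFFUL);
}

int flags_valid(unsigned long field)
{
    /* An all-zero field (old binaries) or random garbage fails this. */
    return (field & FLAG_MAGIC_MASK) == FLAG_MAGIC;
}
```

An OS loader would call flags_valid() on the reserved field and honor the feature bits only when it passes, which is exactly the downward-compatible behavior the post describes.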
Thomas_E_Zerucha@cup.portal.com (08/14/88)
If I understand right, there are actually two problems with Malloc violations:

1. Some people do a Malloc and use memory outside of the Malloc'ed range.
2. Some people expect successive Mallocs to return sequential addresses.

#2 is not true even now if your memory gets at all fragmented. If the 40-folder bug is fixed, the OSPool should be large enough to hold Malloc handles, especially if it returns memory into the OSPool correctly. Alternately (since I hope no one is expecting to use beyond what a Malloc(-1) would return), you could put the headers at the very bottom of the largest Mallocable segment, and that might fix the problems and retain compatibility. I hope you can now MFree in a different order than Mallocing and have it work properly (this shouldn't be a problem to fix). Considering I use RTX often, perhaps the 64 bytes extra may be enough of a buffer to solve #1.

Perhaps you should try to do some of these fixes and see if your compatibility percentage improves. And list what breaks. If you get to 98% or more, then perhaps it would be better to go to the fixed Malloc (with the aforementioned compatibility patches). I seem to remember that the Mega ROMs broke a few things.
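Violation #2 is easy to demonstrate portably: whether two back-to-back allocations land next to each other is entirely up to the allocator. A small sketch (using standard malloc as a stand-in for the TOS call) that checks contiguity instead of assuming it - correct code must be prepared for either answer:

```c
#include <stdlib.h>

/* Sketch of violation #2: the only safe way to know whether two
 * successive allocations are contiguous is to check, and a program
 * must work either way. Returns 1 if contiguous, 0 otherwise. */
int blocks_contiguous(size_t n)
{
    char *first  = malloc(n);
    char *second = malloc(n);
    int contiguous = (first != NULL && second != NULL &&
                      second == first + n);
    free(second);
    free(first);
    return contiguous;   /* no guarantee either way */
}
```

A program that instead computes `first + n` and writes there unconditionally is exactly the "misbehaved" case Pratt and Good describe: it works only until headers, padding, or fragmentation move the second block.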
Thomas_E_Zerucha@cup.portal.com (08/14/88)
I think Dave Beckemeyer has an excellent idea about the TPA - and there are a few "undefined" words in the header portion of the executable image. All it would take is for Atari to say "This word is the number of 4K blocks for the TPA", or "This longword is the size of the TPA". Have the current value of zero for these unused locations default to a system setting like RTX does, and I have some C code that could set the value of these which I could have for anyone in 5 minutes. So could anyone else - it's just a simple get-value-and-write-word-into-header job. BUT THIS REQUIRES ATARI TO SET THE STANDARD. Alternately a symbol could be defined in that area of the executable, but I think there would be support from the developers to have Atari define a new header byte.
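The "five-minute" tool Zerucha mentions really is small. A sketch, with the caveat he himself raises: the header offset is invented here, since Atari would have to standardise which reserved word carries the TPA size. The word is written high byte first (68000 byte order):

```c
#include <stdio.h>

/* Hypothetical patcher: write the desired TPA size (in 4K blocks)
 * into a reserved word of a .PRG header. The offset is invented for
 * illustration - the real one would be whatever Atari standardises. */
#define TPA_WORD_OFFSET 0x18L

int set_tpa_blocks(const char *path, unsigned short blocks)
{
    FILE *f = fopen(path, "r+b");
    if (f == NULL) return -1;
    unsigned char w[2] = { (unsigned char)(blocks >> 8),
                           (unsigned char)(blocks & 0xff) };
    if (fseek(f, TPA_WORD_OFFSET, SEEK_SET) != 0 ||
        fwrite(w, 1, 2, f) != 2) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}
```

Per the post, a value of zero in the field would mean "use the system default", so old binaries (whose reserved words are zero) keep today's behavior.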
peter@sugar.uu.net (Peter da Silva) (08/14/88)
In article <1045@scolex>, kurth@sco.COM (Kurt Hutchison) writes:
> I for one agree with the argument that releasing a new OS that breaks
> old software is a bad idea. Theological arguments about the proper
> behavior of programs always take a back seat to compatibility concerns.

And this is why operating systems have a limited life. Eventually the bugs left in to keep the old software from breaking accumulate to the point where it's not worth keeping the old software. This is one of the reasons UNIX has so far been so successful as a third-party O/S: with no binary standard, there's been no old software to keep running. Now that Xenix and COFF have become binary standards, I expect this to change. In fact, it's beginning to... look how big the new SV/Xenix/BSD merge is going to be. Look how big SV already is.

> Remember the original PC clone wars? I worked for a hardware company then
> that made a PC clone which they thought was "better" than the PC. It didn't
> sell at all because it wasn't exactly compatible, you had to buy a special
> version of DOS for it and "Ill-behaved" programs didn't run.

But when *IBM* came out with a better (and slightly non-standard) PC, it sold just fine, despite the fact that ill-behaved programs didn't run on it. They've done it twice now, first with the AT and now with the PS/2 line.

> Ill-behaved
> programs are the rule rather than the exception, most programs that perform
> really well were Ill-Behaved.

This is because the operating system didn't support things, like fast text, that the programs needed. I thought one of the reasons for going with GEM was to keep stuff like this from happening. There's nothing about your cheap Malloc that required programs to make assumptions about how memory was allocated. It could have been used like "sbrk" in UNIX as the low-level allocator... just split memory up into 7 or 8 chunks, and Malloc them whenever the existing malloc pool got filled up.

This reminds me of the Bourne shell on UNIX... it assumed that you could always restart instructions after a segmentation violation, so it didn't bother allocating memory until it got a SIGSEGV. It broke when UNIX was ported to the 68000, where this behaviour didn't work. A very close parallel: it was making assumptions about how memory management worked. But since this wasn't just a case of binary compatibility, it was fixed (the alternative would have been to put two 68000s in every UNIX box, and use one to restart instructions...).

> While compatibility is not a universal truth, compatibility between OS
> releases is a good thing. Would you be willing to wait six months for
> new releases of all of your software so that it would run again?

People have done this time and time again. They just kept using the old version of the operating system or computer while waiting for the new one to get up to speed.

> Kurt Hutchison - The Santa Cruz Operation - Software Engineer

How did you people get the Bourne shell working right on the '86?
--
Peter da Silva  `-_-'  peter@sugar.uu.net
Have you hugged  U  your wolf today?
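Da Silva's sbrk-style suggestion - grab a few big chunks from the limited OS allocator and sub-allocate within them - can be sketched in a few lines. This is a deliberately minimal bump allocator (no free list), with standard malloc standing in for the TOS Malloc; a real library would grab a fresh chunk when one fills and keep a free list:

```c
#include <stdlib.h>
#include <stddef.h>

/* Sketch: a tiny sub-allocator layered over a limited low-level
 * allocator, so the OS-side block/handle limit is never stressed.
 * One large chunk is obtained lazily; requests are carved from it. */
#define CHUNK_SIZE (64 * 1024)

static char  *chunk;   /* one big block from the "OS" allocator */
static size_t used;    /* bytes handed out so far */

void *pool_alloc(size_t n)
{
    n = (n + 3) & ~(size_t)3;            /* keep 4-byte alignment */
    if (chunk == NULL) {
        chunk = malloc(CHUNK_SIZE);      /* stands in for Malloc() */
        used = 0;
        if (chunk == NULL) return NULL;
    }
    if (used + n > CHUNK_SIZE)
        return NULL;                     /* real code grabs a new chunk */
    void *p = chunk + used;
    used += n;
    return p;
}
```

With this layering, application code can allocate thousands of small objects while the OS allocator only ever sees a handful of large requests - sidestepping both the handle limit and the free-order restriction being complained about in this thread.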
Jinfu@cup.portal.com (08/15/88)
I have an idea: why not provide both the 'fixed' Malloc and the old Malloc to developers? You could use a #define old_Malloc switch in a header file to let programmers choose which one, provided the new TOS ROM has enough space to hold both versions of Malloc.

Jinfu Chen
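[Editor's note: the header-file switch proposed above might look something like the sketch below. Everything here is hypothetical: `0x144` is an invented opcode for a second allocator (only $48, Malloc, is a real GEMDOS call), and the `gemdos()` stub exists only to make the sketch compile; on a real ST the call goes through trap #1.]

```c
/* Stub standing in for the GEMDOS trap dispatcher, just for illustration. */
static long gemdos(int opcode, long arg) { (void)arg; return (long)opcode; }

#ifdef OLD_MALLOC
#define Malloc(n)  gemdos(0x48,  (long)(n))  /* original allocator, kept for old binaries */
#else
#define Malloc(n)  gemdos(0x144, (long)(n))  /* hypothetical fixed allocator */
#endif
```

Old binaries would be untouched (they trap straight to $48), while recompiling without `OLD_MALLOC` would bind new programs to the fixed call.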
rosenkra@Alliant.COM (Bill Rosenkranz) (08/16/88)
In article <366@bdt.UUCP> david@bdt.UUCP (David Beckemeyer) writes:
>In article <19880807215637.4.JRD@MOA.SCRC.Symbolics.COM> jrd@STONY-BROOK.SCRC.SYMBOLICS.COM (John R. Dunning) writes:
->>I believe it's a bug that programmers rely on undocumented features like
->>that.
[ good points deleted ]
->First some historical considerations:
->
->In C programs, it is common for UNIX hackers to expect uninitialized data
->to be zeroed, but this should be a function of the compiler startup code
->more than the C programmer. However, since the compiler writers found that
->TOS cleared the BSS for them, they probably removed the code in the startup.
[ more good stuff deleted ]
->Now for a possible alternate solution to the clearing BSS problem:
[ even more good stuff deleted ]

as soon as i got alan pratt's improved gemstart.s, i did 2 things:
1) included the osbind.o traps (for gemdos/bios/xbios calls) so i wouldn't have to always remember to link osbind.o
2) added code to zero the bss (controlled by a global switch, much like the stksize global)

i of course still use alcyon (slow but capable).

->David Beckemeyer (david@bdt.uucp) | "Don't call me Stupid!"

wouldn't dream of it, dave!

-bill
rwa@auvax.UUCP (Ross Alexander) (08/16/88)
[fancy line-eater line down for preventative maintenance; call later]

In recent articles, Roy J. Good @ Atari and others discuss the Malloc problem (qv). May I suggest the following solution:

1) Build a proper version of Malloc (MallocP, for "Malloc, patched," comes to mind). Assign it a TOS-call number and give it reasonable semantics; Un*x semantics are as reasonable as any [ :-) :-) ].

2) Use this reasonable allocator as the foundation for a version broken in exactly the same ways as the current TOS version. That is, it obeys the assumptions people are making about the current allocator (that it's okay to exceed your chunk, that chunks are contiguous, et al.). Make this broken allocator available through the old TOS-call number.

3) Encourage people to use the new allocator; they will, because it's no longer broken, right?

4) After a while, make the old entry point go away (if you want).

Now, this may be a little tricky. You might have to compromise some functionality in the backward-compatible entry point. For instance, it might only be able to allocate 1/3 or 1/2 of the total available free memory (here's a reason to use the upgraded entry point :-).

I don't know if this is THE answer, but it might be AN answer. Comments?
--
Ross Alexander @ Athabasca University
<the-known-world>!alberta!auvax!rwa
ugbernie@sunybcs.uucp (Bernard Bediako) (08/16/88)
In article <36500051@iuvax> pwp@iuvax.cs.indiana.edu writes:
>
>Since it is both important to have old programs work and to have a correct
>Malloc, the folks at Atari should give careful consideration to having
>two versions of Malloc, with the old version receiving the old name. It
>might also be necessary to have two versions of Free.

This is a much better idea than keeping broken procs in the new ROMs. Adding a new Malloc called Malloc2 (etc.) would say a lot to developers & ST programmers in general about the quality of the ST. It's the best of both worlds, and I would rather wait a while longer to see this done than buy (or, most likely, NOT buy) ROMs that are only partially fixed. How do we know we won't be stuck with similar bugs in a future machine? I for one wouldn't buy a 68030 or any next-generation Atari if these bugs were allowed to stand forever.

Bernie
kurth@sco.COM (Kurt Hutchison) (08/17/88)
In article <2474@sugar.uu.net>, peter@sugar.uu.net (Peter da Silva) writes:
> In article <1045@scolex>, kurth@sco.COM (Kurt Hutchison) writes:
> > I for one agree with the argument that releasing a new OS that breaks
> > old software is a bad idea. Theological arguments about the proper
> > behavior of programs always take a back seat to compatibility concerns.
>
> And this is why operating systems have a limited life. Eventually the bugs
> left in to keep the old software from breaking accumulate to the point where
> it's not worth keeping the old software. This is one of the reasons UNIX has
> so far been so successful as a third party O/S: with no binary standard there's
> been no old software to keep running.

While the lack of a binary standard is a boon in the development stage, it is a hindrance at the marketing level. And it is not true of SCO XENIX: we support every binary ever made for any SCO XENIX. Our 386 OS will run 8086, 286, and 386 binaries unchanged, and this has turned out to be a good thing for us. Our binary formats encode which CPU and OS version they were compiled for, and the same strategy tells us whether something is a XENIX III or V binary, a UNIX Sys V binary, or COFF format (all of which we support). In spite of this, XENIX has evolved and changed. There are lots of new calls, and some old ones (still supported) are officially obsolete.

> Now that Xenix and COFF have become binary standards, I expect this to change.
> In fact, it's beginning to... look how big the new SV/Xenix/BSD merge is
> going to be. Look how big SV already is.

Let's hope that memory keeps getting cheaper :-). Our users have confirmed many times that they want backwards compatibility; memory is never an issue. Keep in mind that we sell primarily to businesses, which have a little more money than the Atari end user. Atari wants the business users, though, so these examples are potentially valid.

> But when *IBM* came out with a better (and slightly non-standard) PC it sold
> just fine, despite the fact that ill-behaved programs didn't run on it. They've
> done it twice now, first with the AT and now with the PS/2 line.

This is a good point worth discussing. When the non-standard PCs came out, they instantly became a standard. DOS was modified so that it knew which hardware it was running on, and the apps followed suit. "Three times" means about once every three years. Atari's OS releases are far enough apart that I will concede it would be reasonable for them to consider fundamental changes to their OS. If they released updates every six months, they would not want to do this.

> > Ill-behaved
> > programs are the rule rather than the exception, most programs that perform
> > really well were ill-behaved.
>
> This is because the operating system didn't support things, like fast text,
> that the programs needed. I thought one of the reasons for going with GEM
> was to keep stuff like this from happening.

As long as the hardware/OS combination *allows* bad behavior, there will *be* bad behavior (in my opinion). A better memory management unit, a protected mode where I/O instructions are illegal, and hardware that in general is not accessible except through the OS might go a long way toward preventing ill-behaved programs. The Atari ST, the Mega (I think), and the Mac do not have these features. The Lisa, the Amiga (I think), and the 286 and 386 PC families do. They all cost more than the ST.

You mentioned the idea of having two Mallocs: keep the old one and add a new one that works. I really like this idea. There is lots of precedent for this kind of approach. It concedes compatibility while allowing new programs to take advantage of the new features. This path does result in (potentially) unlimited growth, though. I don't know how much space is left in the ROMs, or whether the ST motherboard can handle the newer, larger ROMs.

> How did you people get the bourne shell working right on the '86?

I don't know. It's probably proprietary anyway. I have probably said too much in this article already. Nothing that our marketing people won't tell prospective customers, though.

- kurt
--
-------------------------------------------------------------------------
Kurt Hutchison          The Santa Cruz Operation          Software Engineer
Trumpet player, synth player, pianist, cyclist, philosopher at large
The above opinions (if any) are my own
gaz@apollo.COM (Gary Zaidenweber) (08/17/88)
From article <692@auvax.UUCP>, by rwa@auvax.UUCP (Ross Alexander):
>
> In recent articles, Roy J. Good @ Atari and others discuss the Malloc
> problem (qv). May I suggest the following solution:
>
[suggestions deleted]
>
> I don't know if this is THE answer, but it might be AN answer.
> Comments?
>
> --
> Ross Alexander @ Athabasca University
> <the-known-world>!alberta!auvax!rwa

Now this newsgroup is becoming useful! I understand the frustration of dealing with broken code (though here at work I usually just go fix it when I encounter it :-). Suggestions from developers and users are just what Atari needs, and they've even begun to show some responsiveness. Let's encourage them with constructive criticism; I think the flames have just made them spend their money on asbestos underwear rather than product development. Way to go to you guys with this and other suggestions.

(I love *MY* Atari, but I'm not sure how I feel about *THE* Atari)
--
UUCP: ...{umix,mit-eddie,uw-beaver}!apollo!gaz
ARPA: gaz@apollo.COM
AT&T: (508)256-6600 x6081
It's never too late to have a happy childhood!
apratt@atari.UUCP (Allan Pratt) (08/18/88)
In article <692@auvax.UUCP} rwa@auvax.UUCP (Ross Alexander) writes:
} In recent articles, Roy J. Good @ Atari and others discuss the Malloc
} problem (qv). May I suggest the following solution:
}
} 1) build a proper version of Malloc (MallocP for "malloc
} patched" comes to mind). Assign it a TOS-call number, and
} give it reasonable semantics; Un*x semantics are as
} reasonable as any.
}
} 2) use this reasonable allocator as a foundation for a
} version broken in exactly the same ways as the current TOS
} version. That is, it obeys the assumptions that people are
} making about the current allocator (that it's okay to
} exceed your chunk, that chunks are contiguous, et al.), and
} make this broken allocator available through the old
} TOS-call number.

But this doesn't fix anything! The whole reason Malloc is broken is that you can't allocate an arbitrary number of blocks, because a static area of memory is set aside for block descriptors. Any reasonable malloc uses the blocks themselves to keep track (by using the space just before or just after a free block as linkage to the next block), but we can't do that because people keep messing with memory they don't own, and so on.

============================================
Opinions expressed above do not necessarily -- Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else. ...ames!atari!apratt
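[Editor's note: the "blocks keep track of themselves" scheme Pratt describes is the classic K&R-style allocator layout, sketched below. This is a minimal hypothetical illustration with invented names, not Atari's code; a static array stands in for free RAM, and the free list is first-fit with no coalescing.]

```c
#include <stddef.h>

/* Each block carries its own header, so no fixed descriptor table
 * (no OSPool) is needed -- but a program that scribbles past the end
 * of its block destroys the very linkage the allocator depends on,
 * which is exactly why TOS could not adopt this layout. */

typedef struct header {
    struct header *next;   /* next block on the free list */
    size_t         size;   /* payload size in bytes */
} Header;

static long   heap[1024];          /* stand-in for free RAM, suitably aligned */
static size_t brk_off;             /* bump pointer (byte offset) into heap */
static Header *freelist;           /* singly linked list of freed blocks */

void *my_alloc(size_t n)
{
    n = (n + 3u) & ~(size_t)3u;    /* keep 68000-friendly alignment */
    /* first-fit search of the free list */
    for (Header **hp = &freelist; *hp != NULL; hp = &(*hp)->next) {
        if ((*hp)->size >= n) {
            Header *h = *hp;
            *hp = h->next;         /* unlink the block */
            return h + 1;          /* payload follows the header */
        }
    }
    /* nothing suitable freed yet: carve a fresh block */
    if (brk_off + sizeof(Header) + n > sizeof heap)
        return NULL;
    Header *h = (Header *)((char *)heap + brk_off);
    brk_off += sizeof(Header) + n;
    h->size = n;
    return h + 1;
}

void my_free(void *p)
{
    Header *h = (Header *)p - 1;   /* header sits just before the payload */
    h->next = freelist;
    freelist = h;
}
```

The number of live blocks is limited only by memory, since each block's bookkeeping travels with the block itself.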
saj@chinet.UUCP (Stephen Jacobs) (08/18/88)
In article <1121@atari.UUCP>, apratt@atari.UUCP (Allan Pratt) writes:
> In article <692@auvax.UUCP} rwa@auvax.UUCP (Ross Alexander) writes:
> } In recent articles, Roy J. Good @ Atari and others discuss the Malloc
> } problem (qv). May I suggest the following solution:
[a reasonable suggestion]
> But this doesn't fix anything! The whole reason that Malloc is broken is
> that you can't allocate an arbitrary number of blocks, because a static
> area of memory is set aside for block descriptors. Any reasonable
> malloc uses the blocks themselves to keep track (by using the space just
> before or just after a free block as linkage to the next block), but we
> can't do that because people keep messing with memory they don't own,
> and so on.
>
> ============================================
> Opinions expressed above do not necessarily -- Allan Pratt, Atari Corp.
> reflect those of Atari Corp. or anyone else. ...ames!atari!apratt

The solution would seem to be to make it a guaranteed "feature" that any program trying to use both the old and new versions of the storage allocator would die horribly. This isn't so different from the present universal behavior of buffered and unbuffered I/O to the same device: you'd best pick one and stick with it. There's some stuff like that concerning sensing mouse button states too, I think (is that the same as buffered vs. unbuffered?).

I'd like to agree with whoever it was that said you really shouldn't be using the system memory allocator/deallocator often enough to get into trouble with an 'improved' one, though. Library functions that manage small and/or frequent dynamic storage needs are 'friendlier' (to hardware, user, and maintainer): stuff like the arena management routines of MWC, or the automatically garbage-collecting allocator that the Free Software Foundation uses.
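[Editor's note: a minimal sketch of the "arena" style Jacobs mentions. All names are invented for illustration; this is not the Mark Williams C library interface. The idea is that many small, short-lived objects come out of one big block obtained from the system allocator, and are all released at once, so the system allocator sees one call instead of hundreds.]

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    char  *base;    /* one big block from the system allocator */
    size_t cap;     /* its total size */
    size_t used;    /* bytes handed out so far */
} Arena;

/* Grab the arena's backing store: ONE call to the system allocator. */
int arena_init(Arena *a, size_t cap)
{
    a->base = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->base != NULL;
}

/* Hand out aligned slices of the backing store; no per-object syscall. */
void *arena_alloc(Arena *a, size_t n)
{
    n = (n + 3u) & ~(size_t)3u;
    if (a->used + n > a->cap)
        return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* Release every object in the arena at once. */
void arena_reset(Arena *a) { a->used = 0; }
```

A program using this pattern touches the (possibly buggy) system Malloc only once per arena, which is exactly the kind of defensive usage the thread recommends.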
med@druhi.ATT.COM (Myron Drapal) (08/18/88)
In article <3de9dfb4.52e@apollo.COM>, gaz@apollo.COM (Gary Zaidenweber) writes:
> From article <692@auvax.UUCP>, by rwa@auvax.UUCP (Ross Alexander):
> >
> > In recent articles, Roy J. Good @ Atari and others discuss the Malloc
> > problem (qv). May I suggest the following solution:
> >
> [suggestions deleted]
> >
> > I don't know if this is THE answer, but it might be AN answer.
> > Comments?
>
> Now this newsgroup is becoming useful! I understand the frustration
> of dealing with broken code (though here at work I usually just go
> fix it when I encounter it :-). Suggestions from developers and users
> are just what Atari needs. They've even begun to show some responsiveness.
> Let's encourage them with constructive criticism. I think the flames have
> just made them spend their money on asbestos underwear rather than product
> development. Way to go to you guys with this and other suggestions.
>
> (I love *MY* Atari, but I'm not sure how I feel about *THE* Atari)

I also think this discussion has become more useful. And to think, it all started from a couple of flames... From a little flame, hope springs eternal... ;-)

In general, I think Atari could learn a great deal by listening to a few of the people in this newsgroup: people who have been developing software for years, and who know from experience how (and how not) to upgrade existing software. At the root of the problem (my opinion) is Atari's unwillingness, management included, to listen to its customers. Atari Corp. is too busy trying to find its identity in the computer marketplace, and in so doing has ignored the input of its customers.

Myron Drapal
AT&T Denver
att!druhi!med
david@bdt.UUCP (David Beckemeyer) (08/20/88)
In article <1060@scolex> kurth@sco.COM (Kurt Hutchison) writes:
[ stuff deleted ]
>... When the non-standard
>PC's came out they instantly became a standard. Dos was modified so
>that it knew which hardware it was running on and the apps followed
>suit.

We're talking apples and oranges here. DOS had a huge installed base, and it was IBM coming out with the new system, not a third party, and certainly not Atari.

I think one of the reasons Atari is afraid to break any programs is that then they won't have any. Quite a bit of Atari software is no longer supported, either because the company went out of business or because it is unwilling to spend any more resources on Atari development. This means that if Atari breaks it, it is broken forever, and it's no longer something they can demo at Comdex.
--
David Beckemeyer (david@bdt.uucp)       | "Don't call me Stupid!"
Beckemeyer Development Tools            | "No. That would be an insult
478 Santa Clara Ave, Oakland, CA 94610  |  to all the stupid people!"
UUCP: {unisoft,sun}!hoptoad!bdt!david   |  - A fish called Wanda