ir230@sdcc6.ucsd.edu (john wavrik) (11/17/89)
Bill Bouma writes:
> I fail to see what all this has to do with implementing forth in
> assembly language? I see no reason a forth interpreter written in
> any other language could not be made to behave identically to one
> written in assembly!
> Also, I have to disagree with the rest of what was said because the
> main thing that distinguishes forth from other languages is the
> parameter stack. Other languages (yes, even C) provide means to
> build new data types using structures. One can build new operations
> by writing functions. A language which does not allow the user to add
> to it would not be very useful!

I'd like to point out to everyone how this exchange began: As a person who
has a lot of experience with Forth, I received from a "proud papa" a version
of Forth written in 'C' (I will refer to it as XFORTH). It was lacking some
critical words (in particular the double precision wordset) and had others
improperly implemented (like UM*), which prevented me from testing it on
existing applications. I went to add the missing words and fix the errors (a
trivial task on traditional Forths). When I tried, I found that the
mechanisms to add the words were lacking too. This inspired some thinking
about the problems inherent in trying to implement Forth using other
languages. My precise statement was:

# I'd be happy to see a full Standard version of Forth implemented in 'C'
# which allows a user the same control as he enjoys in an assembly language
# version (in particular the ability to add new primitives without
# understanding 'C', the 'C' source code for the implementation, and how his
# 'C' compiler works). Failing this, I can only conclude that the purpose of
# implementing Forth in 'C' is to provide 'C' programmers with a toy version
# of Forth.

It's a shame that people homed in on the second sentence rather than the
first.
----

Before we go any further, I'd like to take a vote:

    All Forth programmers who believe that "the main thing that
    distinguishes Forth from other languages is the parameter stack"
    please raise your hand.

    [see Bill, none of them do!]

----

What would a 'C' programmer do if he received from a friend a program which
used declared variables of type "long" [note for Forth programmers: this is
what they call double precision integers] and his 'C' compiler does not
support this type?

    a. Ask your friend to recode her program
    b. Buy a new 'C' compiler
    c. Modify your 'C' compiler, even though you don't have the source code
    d. Write a note to comp.lang.c telling everyone that you have a "toy"
       compiler
    e. Let off steam by writing to comp.lang.forth and harassing the Forth
       programmers

Most 'C' programmers would do b. and e. (they have too much pride to do a.
or d.). Almost any Forth programmer (using a traditional implementation of
Forth) would do the equivalent of c.

Please note that the problem is to add the data type "long" in such a way
that it has the same status as the built-in types. Treating a "long" as an
array or a "struct" and defining functions like long_add would not do!

    [to Pascal programmers: a struct is what they call a record]
    [to Forth programmers: we don't have one, but by the time someone has
     finished explaining what it is, you'll have figured out how to make
     one]

You want to read, without alteration, the source code of a person who has
"long" as a primary type. She has declarations like:

    long x,y,z;

and statements like:

    z = x + y;

or worse:

    z = x + w;      (with w an "int")

Her compiler recognizes "long" as a primary type (and handles the attendant
overloading of symbols and coercion); yours doesn't!
[to 'C' programmers: in order not to lose you for the rest of this message,
take my word for it: you can't add a new type like this without (1) changing
the compiler, (2) using your compiler to write a new compiler, or (3)
writing a pre-processor -- which is almost as hard as (2). Please read on
and convince yourself later]

-------

Let's look at the task of defining a double-precision addition in Forth.
Here is the definition in F-83 due to Henry Laxen:

\ 32 bit Arithmetic Operations                              11OCT83HHL
CODE D+     (S d1 d2 -- dsum )
    AX POP  DX POP  BX POP  CX POP  CX DX ADD  BX AX ADC
    2PUSH  END-CODE

Besides being as fast as possible, this definition is quite simple. I would
rank it as very low in difficulty -- ANY Forth programmer could write a
definition like this. (I think Henry Laxen is a very imaginative and
creative person -- but I don't think this is an example of that.) While I
don't teach assembly language in my course, any of the students who pass my
course would be capable of doing this as one problem on a weekly homework
(part of the problem being to learn enough about assembly language to do
it).

For comparison, let's look at the task of doing this in high-level Forth.
The definition of D+ itself is simple -- we want to add the low order words
together and then add the high order words WITH THE CARRY FROM THE LOW
ORDER:

: D+     ( d1 d2 -- d3 )
    ROT >R >R                  ( high order on top )
    0 ADC  R> R> ROT ADC  DROP ;

Unfortunately, high level Forth (just like high level anything else) does
not provide us access to the carry flag -- so we must define the word

    ADC   ( n1 n2 c -- n3 c' ;;; add with carry )

which adds with a carry but does not use the hardware carry flag. In the
code below, we have synthesized this by treating 16-bit addition as double-
precision 8-bit addition. To make things easier to understand, we introduce
a collection of pseudo-registers to hold the low and high bytes of
intermediate numbers.
[To Forth programmers: I know I've used too many variables, but I'm trying
to make a point]

The word SPLIT ( n i -- ) splits the 16-bit integer n into high and low
bytes and stores these in the i-th slots of the arrays. The only other thing
to explain is that << and >> are shift words provided in XForth:
<< ( n k -- n' ) n' is n (logically) shifted left by k bits.

CREATE LOW0   4 CELLS ALLOT
CREATE HIGH0  4 CELLS ALLOT

: 'LOW    ( i -- addr-low )   1- CELLS LOW0 +  ;
: 'HIGH   ( i -- addr-high )  1- CELLS HIGH0 + ;
: LOW     'LOW @  ;
: HIGH    'HIGH @ ;

: SPLIT   ( n i -- ;;; store )
    >R DUP 255 AND R@ 'LOW !  8 >> R> 'HIGH ! ;

: ADC     ( n1 n2 c -- n3 c' ;;; add with carry )
    >R 2 SPLIT 1 SPLIT
    1 LOW 2 LOW + R> + 3 SPLIT
    \ 3 LOW has sum of low bytes, 3 HIGH has carry
    1 HIGH 2 HIGH + 3 HIGH + 4 SPLIT
    \ 4 LOW has sum of high bytes, 4 HIGH has carry
    4 LOW 8 << 3 LOW +  4 HIGH ;

In contrast to the low-level solution this is slow, complicated, and ugly.
(I wrote it to see what would be involved and make no claims that it is
optimal in speed or beauty.) It runs about 60 times slower than the assembly
language version when both are run under F-83. It is 240 times slower when
running on XForth. It is not something an average Forth programmer can be
expected to do (it takes a brown belt at least).

> The thing that bothers me about this is that you seem to be implying
> that the user knows the host assembly language? If not, then what is
> the difference between having your forth core written in assembly, or
> having it written in basic?

Speed! I think that the misconception is thinking of Forth as a program
which is written in assembly language. Most programming languages produce a
program which is compiled to leave behind "THE RESULT". It is immaterial to
someone who runs "THE RESULT" which language was used to produce it. The
choice of language is more a matter of convenience to the programmer. Forth
is the assembly language for a virtual machine which is mapped to an
underlying real processor.
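For the 'C' programmers following along: the same carry synthesis can be
sketched in a few lines of C. The names below are illustrative only -- none
of this is from XFORTH or F-83:

```c
#include <stdint.h>

/* 16-bit add-with-carry synthesized as double-precision 8-bit addition,
   mirroring the SPLIT-based Forth definition above.  The low bytes are
   added first; bit 8 of that partial sum is the carry into the high
   bytes, and bit 8 of the high sum is the carry out. */
static uint16_t adc_bytes(uint16_t n1, uint16_t n2, unsigned c,
                          unsigned *cout)
{
    unsigned lo = (n1 & 0xFF) + (n2 & 0xFF) + c;      /* sum of low bytes */
    unsigned hi = (n1 >> 8) + (n2 >> 8) + (lo >> 8);  /* + carry from low */
    *cout = hi >> 8;                                  /* carry out        */
    return (uint16_t)(((hi & 0xFF) << 8) | (lo & 0xFF));
}
```

(A shorter trick, if you trust unsigned wraparound, is to note that after
`sum = a + b` the carry is simply `sum < a`.) Either way, portable code can
only simulate the flag that the CODE definition gets for free from Forth.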
It is a combination extensible compiler, extensible language, and extensible
operating system WHICH IS MOSTLY WRITTEN IN ITSELF. What I have called
"traditionally implemented Forth" is a version of Forth in which a great
many words written in Forth itself are mixed with a few words written in the
assembly language of the host processor.

I have included the entire double-precision wordset for F-83 as an appendix.
It has a higher proportion of assembly language words than normal. It should
nevertheless be clear that the definitions of all words, both colon and
code, are short and simple.

A more typical illustration is the I/O interface (including keyboard, video,
and files) which, in F-83, consists entirely of high level Forth words based
upon a single code primitive:

    BDOS    Load up the registers and do a DOS system call. Return the
            result placed in the A register on the stack.

CODE BDOS    (S n fun -- m )
    AX POP  AL AH MOV  DX POP  33 INT  AH AH SUB
    1PUSH  END-CODE

(INT for "interrupt" is a call to either the BIOS ROM or the operating
system. There are a number of books that list all the available interrupt
calls, stating what they do, what has to be loaded into which registers, and
what is in which registers upon return. Any Forth programmer who knows how
to read can integrate OS hooks into a Forth program provided that he knows
which registers are used by Forth and how the stack is implemented. In F-83,
the machine stack is the Forth parameter stack, BP is the return stack
pointer, and SI is the Forth instruction pointer -- everything else is free
within a word.)

Traditional Forth is not "implemented in assembly language". It is really
implemented in Forth -- but Forth is so closely mapped to the underlying
processor that key words can be easily defined in terms of the host's
assembly language. In addition to the use of assembly language, a major
advantage of the traditional implementation is that it provides the user
with access to the details of the implementation.
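The "virtual machine" view can be made concrete even in C. Here is a toy
sketch of the kind of inner interpreter a Forth-in-C typically uses: each
primitive is a C function, a colon definition is a thread of pointers to
words, and the fetch-advance-execute loop is Forth's NEXT. All the names are
mine, illustrative only:

```c
typedef void (*word_t)(void);

static int stack[64];
static int sp = 0;               /* parameter stack pointer */
static int running;

static void push(int n) { stack[sp++] = n; }
static int  pop(void)   { return stack[--sp]; }

/* Primitives -- the "code words" of this toy system. */
static void two(void)   { push(2); }
static void three(void) { push(3); }
static void plus(void)  { int b = pop(); push(pop() + b); }
static void bye(void)   { running = 0; }

/* A colon definition is just a thread of pointers to words:
   : FIVE   2 3 + ;                                            */
static const word_t five_thread[] = { two, three, plus, bye };

/* The inner interpreter (the NEXT loop): fetch a word, advance the
   instruction pointer, execute. */
static void run(const word_t *ip)
{
    running = 1;
    while (running)
        (*ip++)();
}
```

A real system adds a dictionary, a return stack, and nesting for colon
words, but this loop is the whole "machine" that the rest of Forth is
written in.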
The fact that a user can extend the compiler as part of programming has a
*great* deal to do with ease of programming. It is important to understand
that the Forth approach to problem solving is radically different from the
conventional approach. In Forth we produce a problem-oriented application
language and then attack the problem. Thus Forth is more a toolkit for
building languages than a language.

It would be unfair to say that a great deal of Forth programming requires
the use of assembly language or access to the implementation. But it would
be fair to say that the 5% or 10% which does use them is responsible for
many of the claims made by Forth programmers that they can easily achieve
things that would be difficult (or impossible) to do in conventional
languages.

Please do not assume that it takes a great deal of assembly language
knowledge to make major alterations in a traditionally implemented Forth.
Several years ago, when I used Kitt Peak VAX-Forth for my class, I wanted to
make it look like the dialect of Forth found in the first edition of
Brodie's "Starting Forth". Kitt Peak was a traditionally implemented
overgrown FIG-Forth and I had to make it look like Brodie's anticipation of
Forth-83. The good people at Kitt Peak had found (in 1982) ways to make
their 32-bit VAX version compatible with their 16-bit PDP-11 version. Their
Forth included "everything but the kitchen sink" -- in particular floating
point and double-precision floating point numbers and a lot of "better
ideas" (like prefacing all byte operations by "b" rather than "c"). I had to
add a few missing words ( <CMOVE, I' and SP! as I recall ) using assembly
language -- and, believe me, I do not know VAX assembly language! I also had
to undo most of their "better ideas" -- I had to change their "endif" to
"then", their "b@" to "c@", etc. Forth programmers take doing things like
this for granted (I don't think I've ever mentioned it to anyone before).
But in the world of conventional languages, it would be effectively the same
as a programmer fixing his Turbo-C so that it works just like Microsoft-C.
(The fact that altering a 'C' compiler is in fact several orders of
magnitude more difficult is part of the point I'm making.)

> Why not? What do registers have to do with it? If you are writing from
> "within the forth environment" you are writing forth, right? Then why
> does it matter what that forth was written in?

I hope I've answered this.

                                          John J Wavrik
        jjwavrik@ucsd.edu                 Dept of Math  C-012
                                          Univ of Calif - San Diego
                                          La Jolla, CA  92093

P.S. I don't understand how this newsgroup has become split into two
     subgroups. On the one hand we have the people who want to know how many
     pins will be on the RTX-3000 (and Phil Koopman who replies by supplying
     the phone number of Harris, Inc). On the other, we have 'C' programmers
     who are intent on modifying a language they haven't bothered to
     understand. Apparently the people who are doing interesting and
     portable things in Forth are too busy to take the time to tell us about
     them and post some source code.

============================================================================

             APPENDIX -- F-83 DOUBLE PRECISION WORDSET

             shadow screen comments are indented

\ 16 bit Arithmetic Operations     Unsigned Multiply          26Sep83map

    You could write a whole book about multiplication and division, and
    in fact Knuth did. Suffice it to say that UM* is the basic
    multiplication primitive in Forth. It takes two unsigned 16 bit
    integers and returns an unsigned 32 bit result. All other
    multiplication functions are derived from this primitive one. It
    probably isn't particularly fast or elegant, but that is because I
    never liked arithmetic and I stole this implementation from FIG
    Forth anyway.
    U*D is a synonym for UM*

\ 16 bit Arithmetic Operations     Unsigned Multiply          22Aug83map
CODE UM*    (S n1 n2 -- d )
    AX POP  BX POP  BX MUL  DX AX XCHG  2PUSH  END-CODE
: U*D    (S n1 n2 -- d )   UM* ;

\ 16 bit Arithmetic Operations     Division subroutines       05MAR83HHL

    These are various subroutines used by the division primitive in
    Forth, namely U/. Again I must give credit for them to FIG Forth,
    since if I can't even understand multiply, divide would be
    completely hopeless.

\ 16 bit Arithmetic Operations     Unsigned Divide            05MAR83HHL

    UM/MOD  This is the division primitive in Forth. All other division
            operations are derived from it. It takes a double number,
            d1, and divides it by a single number n1. It leaves a
            remainder and a quotient on the stack. For a clearer
            understanding of arithmetic consult Knuth Volume 2 on
            Seminumerical Algorithms.

\ 16 bit Arithmetic Operations     Unsigned Divide            22Aug83map
CODE UM/MOD    (S d1 n1 -- Remainder Quotient )
    BX POP  DX POP  AX POP  BX DX CMP  >= ( divide by zero? )
    IF  -1 # AX MOV  AX DX MOV  2PUSH  THEN
    BX DIV  2PUSH  END-CODE

\ 16 bit Comparison Operations                                05MAR83HHL

    0=       Returns True if top is zero, False otherwise.
    0<       Returns true if top is negative, ie sign bit is on.
    0>       Returns true if top is positive.
    0<>      Returns true if the top is non-zero, False otherwise.
    =        Returns true if the two elements on the stack are equal,
             False otherwise.
    <>       Returns true if the two elements are not equal, else false.
    ?NEGATE  Negate the second element if the top is negative.
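For comparison, in a Forth written in C these two primitives collapse to
almost nothing, since C can request the wide product and divide directly. A
sketch (my own names; 16-bit cells assumed, to match the listing):

```c
#include <stdint.h>

/* UM* : multiply two unsigned 16-bit cells into an unsigned 32-bit
   double cell -- the primitive from which all other multiplies derive. */
static uint32_t um_star(uint16_t n1, uint16_t n2)
{
    return (uint32_t)n1 * (uint32_t)n2;   /* widen before multiplying */
}

/* UM/MOD : divide a 32-bit double by a 16-bit single, yielding a 16-bit
   remainder and quotient.  (The F-83 code word traps the overflow case
   with a -1 result; this sketch assumes the quotient fits.) */
static void um_slash_mod(uint32_t d, uint16_t n,
                         uint16_t *rem, uint16_t *quot)
{
    *rem  = (uint16_t)(d % n);
    *quot = (uint16_t)(d / n);
}
```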
\ 16 bit Comparison Operations                                04OCT83HHL
ASSEMBLER
LABEL YES   TRUE # AX MOV   1PUSH
LABEL NO    FALSE # AX MOV  1PUSH
CODE 0=    (S n -- f )
    AX POP  AX AX OR  YES JE  NO #) JMP  END-CODE
CODE 0<    (S n -- f )
    AX POP  AX AX OR  YES JS  NO #) JMP  END-CODE
CODE 0>    (S n -- f )
    AX POP  AX AX OR  YES JG  NO #) JMP  END-CODE
CODE 0<>   (S n -- f )
    AX POP  AX AX OR  YES JNE  NO #) JMP  END-CODE
CODE =     (S n1 n2 -- f )
    AX POP  BX POP  AX BX CMP  YES JE  NO #) JMP  END-CODE
: <>       (S n1 n2 -- f )   = NOT ;
: ?NEGATE  (S n1 n2 -- n3 )  0< IF NEGATE THEN ;

\ 16 bit Comparison Operations                                27Sep83map

    U<       Compare the top two elements on the stack as unsigned
             integers and return true if the second is less than the
             first. Be sure to use U< whenever comparing addresses, or
             else strange things will happen beyond 32K.
    U>       Compare the top two elements on the stack as unsigned
             integers. True if n1 > n2 unsigned.
    <        Compare the top two elements on the stack as signed
             integers and return true if n1 < n2.
    >        Compare the top two elements on the stack as signed
             integers and return true if n1 > n2.
    MIN      Return the minimum of n1 and n2.
    MAX      Return the maximum of n1 and n2.
    BETWEEN  Return true if min <= n1 <= max, otherwise false.
    WITHIN   Return true if min <= n1 < max, otherwise false.

\ 16 bit Comparison Operations                                11OCT83HHL
ASSEMBLER
LABEL YES   TRUE # AX MOV  1PUSH
CODE U<    (S n1 n2 -- f )
    AX POP  BX POP  AX BX CMP  YES JB  NO #) JMP  END-CODE
CODE U>    (S n1 n2 -- f )
    AX POP  BX POP  BX AX CMP  YES JB  NO #) JMP  END-CODE
CODE <     (S n1 n2 -- f )
    AX POP  BX POP  AX BX CMP  YES JL  NO #) JMP  END-CODE
CODE >     (S n1 n2 -- f )
    AX POP  BX POP  AX BX CMP  YES JG  NO #) JMP  END-CODE
: MIN      (S n1 n2 -- n3 )  2DUP > IF SWAP THEN DROP ;
: MAX      (S n1 n2 -- n3 )  2DUP < IF SWAP THEN DROP ;
: BETWEEN  (S n1 min max -- f )  >R OVER > SWAP R> > OR NOT ;
: WITHIN   (S n1 min max -- f )  1- BETWEEN ;

\ 32 bit Memory Operations                                    09MAR83HHL

    2@  Fetch a 32 bit value from addr.
    2!  Store a 32 bit value at addr.
\ 32 bit Memory Operations                                    13Apr84map
CODE 2@    (S addr -- d )
    BX POP  0 [BX] AX MOV  BX INC  BX INC  0 [BX] DX MOV
    2PUSH  END-CODE
CODE 2!    (S d addr -- )
    BX POP  0 [BX] POP  BX INC  BX INC  0 [BX] POP
    NEXT  END-CODE

\ 32 bit Memory and Stack Operations                          26Sep83map

    2DROP  Drop the top two elements of the stack.
    2DUP   Duplicate the top two elements of the stack.
    2SWAP  Swap the top two pairs of numbers on the stack. You can use
           this operator to swap two 32 bit integers and preserve their
           meaning as double numbers.
    2OVER  Copy the second pair of numbers over the top pair. Behaves
           like 2SWAP for 32 bit integers.
    3DUP   Duplicate the top three elements of the stack.
    4DUP   Duplicate the top four elements of the stack.
    2ROT   Rotates the top three double numbers.

\ 32 bit Memory and Stack Operations                          11OCT83HHL
CODE 2DROP    (S d -- )
    AX POP  AX POP  NEXT  END-CODE
CODE 2DUP     (S d -- d d )
    AX POP  DX POP  DX PUSH  AX PUSH  2PUSH  END-CODE
CODE 2SWAP    (S d1 d2 -- d2 d1 )
    CX POP  BX POP  AX POP  DX POP  BX PUSH  CX PUSH  2PUSH  END-CODE
CODE 2OVER    (S d1 d2 -- d1 d2 d1 )
    CX POP  BX POP  AX POP  DX POP  DX PUSH  AX PUSH
    BX PUSH  CX PUSH  2PUSH  END-CODE
: 3DUP    (S a b c -- a b c a b c )        DUP 2OVER ROT ;
: 4DUP    (S a b c d -- a b c d a b c d )  2OVER 2OVER ;
: 2ROT    (S a b c d e f -- c d e f a b )  5 ROLL 5 ROLL ;

\ 32 bit Arithmetic Operations                                05MAR83HHL

    D+       Add the two double precision numbers on the stack and
             return the result as a double precision number.
    DNEGATE  Same as NEGATE except for double precision numbers.
    S>D      Take a single precision number and make it double
             precision by extending the sign bit to the upper half.
    DABS  Return the absolute value of the 32 bit integer on the stack.

\ 32 bit Arithmetic Operations                                11OCT83HHL
CODE D+    (S d1 d2 -- dsum )
    AX POP  DX POP  BX POP  CX POP  CX DX ADD  BX AX ADC
    2PUSH  END-CODE
CODE DNEGATE    (S d# -- d#' )
    BX POP  CX POP  AX AX SUB  AX DX MOV  CX DX SUB  BX AX SBB
    2PUSH  END-CODE
CODE S>D    (S n -- d )
    AX POP  CWD  AX DX XCHG  2PUSH  END-CODE
CODE DABS   (S d# -- d# )
    DX POP  DX PUSH  DX DX OR  ' DNEGATE @-T JS  NEXT  END-CODE

\ 32 bit Arithmetic Operations                                06Apr84map

    D2*       32 bit left shift.
    D2/       32 bit arithmetic right shift. Equivalent to divide by 2.
    D-        Subtract the two double precision numbers.
    ?DNEGATE  Negate the double number if the top is negative.

\ 32 bit Arithmetic Operations                                06Apr84map
CODE D2*    (S d -- d*2 )
    AX POP  DX POP  DX SHL  AX RCL  2PUSH  END-CODE
CODE D2/    (S d -- d/2 )
    AX POP  DX POP  AX SAR  DX RCR  2PUSH  END-CODE
: D-        (S d1 d2 -- d3 )  DNEGATE D+ ;
: ?DNEGATE  (S d1 n -- d2 )   0< IF DNEGATE THEN ;

\ 32 bit Comparison Operations                                01Oct83map

    D0=   Compare the top double number to zero. True if d = 0.
    D=    Compare the top two double numbers. True if d1 = d2.
    DU<   Performs unsigned comparison of two double numbers.
    D<    Compare the top two double numbers. True if d1 < d2.
    D>    Compare the top two double numbers. True if d1 > d2.
    DMIN  Return the lesser of the top two double numbers.
    DMAX  Return the greater of the top two double numbers.

\ 32 bit Comparison Operations                                01OCT83MAP
: D0=   (S d -- f )     OR 0= ;
: D=    (S d1 d2 -- f ) D- D0= ;
: DU<   (S ud1 ud2 -- f )
    ROT SWAP 2DUP U<
    IF    2DROP 2DROP TRUE
    ELSE  <> IF 2DROP FALSE ELSE U< THEN
    THEN ;
: D<    (S d1 d2 -- f )
    2 PICK OVER =  IF  DU<  ELSE  NIP ROT DROP <  THEN ;
: D>    (S d1 d2 -- f )   2SWAP D< ;
: DMIN  (S d1 d2 -- d3 )  4DUP D> IF 2SWAP THEN 2DROP ;
: DMAX  (S d1 d2 -- d3 )  4DUP D< IF 2SWAP THEN 2DROP ;

\ Mixed Mode Arithmetic                                       27Sep83map

    This does all the arithmetic you could possibly want and even more.
    I can never remember exactly what the order of the arguments is for
    any of these, except maybe * / and MOD, so I suggest you just try
    it when you are in doubt. That is one of the nice things about
    having an interpreter around: you can ask it questions anytime and
    it will tell you the answer.

    *D      Multiplies two singles and leaves a double.
    M/MOD   Divides a double by a single, leaving a single quotient and
            a single remainder. Division is floored.
    MU/MOD  Divides a double by a single, leaving a double quotient and
            a single remainder. Division is floored.

\ Mixed Mode Arithmetic                                       04OCT83HHL
: *D      (S n1 n2 -- d# )
    2DUP XOR >R  ABS SWAP ABS UM*  R> ?DNEGATE ;
: M/MOD   (S d# n1 -- rem quot )
    ?DUP
    IF  DUP >R  2DUP XOR >R  >R DABS R@ ABS UM/MOD
        SWAP R> ?NEGATE SWAP
        R> 0< IF NEGATE OVER IF 1- R@ ROT - SWAP THEN THEN
        R> DROP
    THEN ;
: MU/MOD  (S d# n1 -- rem d#quot )
    >R 0 R@ UM/MOD  R> SWAP >R UM/MOD  R> ;

\ 16 bit multiply and divide                                  27Sep83map

    */ is a particularly useful operator, as it allows you to do
    accurate arithmetic on fractional quantities. Think of it as
    multiplying n1 by the fraction n2/n3. The intermediate result is
    kept to full accuracy. Notice that this is not the same as *
    followed by /. See Starting Forth for more examples.

\ 16 bit multiply and divide                                  04OCT83HHL
: *      (S n1 n2 -- n3 )           UM* DROP ;
: /MOD   (S n1 n2 -- rem quot )     >R S>D R> M/MOD ;
: /      (S n1 n2 -- quot )         /MOD NIP ;
: MOD    (S n1 n2 -- rem )          /MOD DROP ;
: */MOD  (S n1 n2 n3 -- rem quot )  >R *D R> M/MOD ;
: */     (S n1 n2 n3 -- n1*n2/n3 )  */MOD NIP ;
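The full-accuracy intermediate that star-slash keeps can be demonstrated in
C (illustrative names; 16-bit cells assumed; note that F-83 division here is
floored while C's truncates toward zero, a difference this sketch ignores):

```c
#include <stdint.h>

/* Forth's star-slash: n1 * n2 / n3 with a full-width 32-bit
   intermediate product, so nothing is lost before the divide. */
static int16_t star_slash(int16_t n1, int16_t n2, int16_t n3)
{
    return (int16_t)(((int32_t)n1 * n2) / n3);
}

/* The naive multiply-then-divide at 16 bits: the product is truncated
   to a cell before dividing, so it overflows for modest inputs. */
static int16_t naive(int16_t n1, int16_t n2, int16_t n3)
{
    return (int16_t)((int16_t)(n1 * n2) / n3);
}
```

Scaling 30000 by the fraction 2/3 gives 20000 with the wide intermediate;
the naive version wraps at 16 bits and produces garbage.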
peter@ficc.uu.net (Peter da Silva) (11/18/89)
You're right. If you can't add code words and you're too busy to fix the
readily available C source to your C Forth you're going to have a loss of
performance. But, really, that's all that you're going to lose. And that
loss of performance may be acceptable:

a. It may mean that you can run code on a 33 MHz 68030 or SPARC instead of
   a 2 MHz 8051 (or whatever) or on an emulator.

b. It means you can timeshare your Forth development instead of having to
   do all your testing on the testbed system.

c. It means you can test your code in a safe environment. When developing
   Forth under UNIX I would often do a !fork()! after loading a bunch of
   words before trying something. That way if I blew it I'd be back in the
   parent environment.

d. It means you can get a working Forth environment up on a new development
   system in a matter of minutes rather than days.

I don't see the point in beating this particular dead horse any more. We
accept that there are disadvantages to Forth under C. But that doesn't make
it a "toy".
-- 
 `-_-'  Peter da Silva <peter@ficc.uu.net> <peter@sugar.hackercorp.com>.
 'U`    -------------- +1 713 274 5180.
"vi is bad because it didn't work after I put jelly in my keyboard."
   -- Jeffrey W Percival (jwp@larry.sal.wisc.edu)
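Point c. deserves a sketch. The fork() trick in outline (POSIX calls; the
wrapper names are mine, not Peter's actual code): the child inherits a copy
of the whole dictionary and stacks, so a crash there costs the parent
nothing.

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Try something risky in a forked child: if it crashes, the parent's
   "Forth image" (all of process memory) is untouched. */
static int try_in_child(void (*risky)(void))
{
    pid_t pid = fork();
    if (pid == 0) {          /* child: a copy of everything loaded so far */
        risky();
        _exit(0);            /* survived */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

static void blows_up(void) { abort(); }   /* stand-in for a bad word     */
static void works(void)    { }            /* stand-in for a word that's ok */
```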
bouma@cs.purdue.EDU (William J. Bouma) (11/18/89)
In article <5172@sdcc6.ucsd.edu> ir230@sdcc6.ucsd.edu (john wavrik) writes:
>
>It's a shame that people homed in on the second sentence rather than the
>first.

Why? What is wrong with discussing this? Does it bother you that people
disagree with you? Perhaps your statement was wrong? Perhaps my statements
were wrong? I would like to learn. The way to do that is to talk about it.

>Before we go any further, I'd like to take a vote:
>    All Forth programmers who believe that "the main thing that
>    distinguishes Forth from other languages is the parameter stack"
>    please raise your hand.
>    [see Bill, none of them do!]

Take off them dark sunglasses you are wearing. Besides, since when does the
belief of the majority constitute proof?

> ----
>
>(Problem of adding longs to C expunged.)
>
>Please note that the problem is to add the data type "long" in such a way
>that it has the same status as built-in types. Treating a "long" as an
>array or a "struct" and defining functions like long_add would not do!

Why not? That is how one does it in C. You pose me a problem and then take
away my means of solving it. Then you claim that the problem cannot be
solved.

>You want to read, without alteration, the source code of a person who has
>"long" as a primary type. She has declarations like:
>
>    long x,y,z;
>and statements like:
>    z = x + y;

Oh, there is no way to do that. As I have said before, in C the new types
you add do not "look like" the ones provided for you in the language. And
new operators, being functions, become prefix rather than infix. But give me
a few minutes in the editor, and I can convert the code syntax to work with
my new definitions. In that sense, perhaps one could call forth better than
c. But it has nothing over lisp in being able to do this!

>It would be unfair to say that a great deal of Forth programming requires
>the use of assembly language or access to the implementation.
>But it would be fair
>to say that the 5% or 10% which does use them is responsible for many of
>the claims made by Forth programmers that they can easily achieve things
>that would be difficult (or impossible) to do in conventional languages.
>
>>I said:
>> Why not? What do registers have to do with it? If you are writing from
>> "within the forth environment" you are writing forth, right? Then why
>> does it matter what that forth was written in?
>
>I hope I've answered this.

Hardly! You gave a few examples of intermixing forth and assembly and doing
the equivalent without assembly. You gave a bunch of "forth is better than
c" propaganda. But you failed to give any evidence to support your claims
that I am questioning. The forth-83 standard does not require the words CODE
or END-CODE! In fact, it says in section 14.2 "Because of the system
dependent nature of machine language programming, a Standard Program cannot
use CODE or ;CODE."!!

I stand by my claim that one can write forth in any language and have it
behave identically. Further, it doesn't matter how much of it is written in
the base language (asm, c, or whatever) and how much is in forth. The so
called "traditional implementation" has nothing to do with the behavior of
forth at the top level! This seems obvious, but I welcome PROOF to the
contrary.

>P.S. I don't understand how this newsgroup has become split into two
>     subgroups. On the one hand we have the people who want to know how
>     many pins will be on the RTX-3000 (and Phil Koopman who replies by
>     supplying the phone number of Harris, Inc). On the other, we have 'C'
>     programmers who are intent on modifying a language they haven't
>     bothered to understand.

Seems to me like both of these subject lines have everything to do with
forth and thus belong completely in this news group! If you wish to talk
about something else, please do. Yes I program in C, but mostly I program in
Common-Lisp.
I have written several compilers and interpreters in languages ranging from assembly to ML. Two of those were Forth interpreters. Yet, I "fail to understand" Forth! -- Bill <bouma@cs.purdue.edu> || ...!purdue!bouma
jbm@eos.UUCP (Jeffrey Mulligan) (11/18/89)
ir230@sdcc6.ucsd.edu (john wavrik) writes:

Interesting posting... but I found the following bits a little
inconsistent:

>Please note that the problem is to add the data type "long" in such a way
>that it has the same status as built-in types. Treating a "long" as an
>array or a "struct" and defining functions like long_add would not do!

later:

>Let's look at the task of defining a double-precision addition in Forth.
>Here is the definition in F-83 due to Henry Laxen:
>
>\ 32 bit Arithmetic Operations                              11OCT83HHL
>CODE D+     (S d1 d2 -- dsum )
>    AX POP  DX POP  BX POP  CX POP  CX DX ADD  BX AX ADC
>    2PUSH  END-CODE

Isn't having a special word "D+" for adding doubles (instead of using "+")
equivalent in a sense to having a special function long_add? You made a
point in the C example of pointing out that the compiler has to be able to
add double precision to single precision integers; does this require yet
another forth word, or do you have to do it as a series of two operations, a
type conversion followed by D+ ??
-- 
    Jeff Mulligan (jbm@aurora.arc.nasa.gov)
    NASA/Ames Research Ctr., Mail Stop 239-3, Moffet Field CA, 94035
    (415) 694-3745
ir230@sdcc6.ucsd.edu (john wavrik) (11/18/89)
jbm@eos.UUCP (Jeffrey Mulligan) writes

> Isn't having a special word "D+" for adding doubles (instead of using "+")
> equivalent in a sense to having a special function long_add?

> You made a point in the C example of pointing out that the compiler
> has to be able to add double precision to single precision integers;
> Does this require yet another forth word, or do you have to do it
> as a series of two operations, a type conversion followed by D+ ??

Part of understanding Forth involves understanding that, as an interpreted
language, it is forced to make certain decisions at run time which a
compiled language can make at compile time. [Forth is actually in the same
class as APL, interpreted BASIC, and interpreted LISP. The fact that Forth
runs a few orders of magnitude faster than these is a tribute to Charles
Moore's ingenuity at selecting what should be done by the language and what
by the user.]

In any language implementation, the process of adding two floating point
numbers (to take an extreme) is different from the process of adding two
integers. These are really two different operations. If, however, the
language has you declare the types of the variables, a compiler can decide
which operation to perform when the code is compiled (rather than each time
it is executed). In a compiled language, therefore, "+" is in a position to
know in advance which type of "+" you have in mind.

In Forth a "+" embedded in a word has no way of knowing what type of things
will be on the stack when the word is executed -- so using the same symbol
for a variety of different addition operations would require a determination
of type each time it executes -- obviously inefficient. The Forth solution
is to have the programmer decide on the type and use a different symbol for
what really are different operations ( + for integers, D+ for double
precision, F+ for floating point, etc.)
If a b c are on the stack, a + b * c is computed by

    * +      if a b c are integers
    D* D+    if a b c are double precision
    F* F+    if a b c are real

so all the types are treated using the same syntax. (Notice that my example
doesn't involve any stack manipulation -- John, you devil!)

In Pascal we would do x := a + b * c using integers. But if we use functions
to add a new type, we would have to do, at best:

    x := type_add(a, type_mult(b, c))

Syntactically this is quite different. It gets worse if the new type is a
component of another type. Pascal does not provide for "addition of
records". Thus if we make complex numbers (or whatever) a record type we
will never be able to say z := x + y for complex numbers.

Pascal does automatic coercion on its built-in types -- so if you ask for a
floating point number to be added to an integer, it will automatically
convert the integer to floating point (and then apply floating point
addition). [Forth requires the programmer to specify any type conversion
necessary, just as it requires the programmer to decide on which addition is
needed.]

Forth requires the programmer to make decisions that would be made by the
compiler in other languages -- this is the price that Charles Moore decided
we should pay for having a fast interactive language. (Editor's note:
actually the decisions are obvious ones -- like what kinds of things we
intend to add.)

The claim can be made that new types can be added to Forth in a way, both
syntactically and semantically, that gives them the same status as "built
in" objects -- in this way we claim that we have done the equivalent of
"extending the compiler". In Pascal, on the other hand, the compiler has
been "taught" to treat its built-in objects in a certain way -- new objects
can be added, but they are treated differently -- there is no realistic way
to retrain the compiler to make first-class citizens out of these
second-class citizens.
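The run-time cost that the separate symbols avoid can be sketched in C: a
"generic" + over an untyped stack must tag and test every value on every
call, while separate typed operators need no tags at all. (Illustrative
code, not taken from any of the systems discussed here.)

```c
/* An untyped stack cell, as a dynamically-dispatched "+" would need it:
   every value drags a type tag along. */
enum tag { INT, DBL };
struct cell { enum tag t; union { int i; double d; } v; };

/* Generic plus: must test the tags at run time, on every single call. */
static struct cell plus_generic(struct cell a, struct cell b)
{
    struct cell r;
    if (a.t == INT && b.t == INT) {
        r.t = INT;  r.v.i = a.v.i + b.v.i;
    } else {                               /* coerce mixed operands */
        double x = (a.t == INT) ? a.v.i : a.v.d;
        double y = (b.t == INT) ? b.v.i : b.v.d;
        r.t = DBL;  r.v.d = x + y;
    }
    return r;
}

/* The Forth solution: the programmer picks the operation ( + vs F+ ),
   so no tags and no run-time test are ever needed. */
static int    plus_int(int a, int b)     { return a + b; }
static double plus_f(double a, double b) { return a + b; }
```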
The example in my last note suggests, and the responses confirm, the following: If someone sends you source code that your Pascal compiler has not been "taught" to recognize, you will have to alter the source code. A Forth programmer in a similar situation would alter the "compiler" and leave the source code untouched.

(Philosophical comment: If things which are not taught to your compiler are treated in a different (and uglier) way -- you want a compiler which is as fat as possible. It should allow you to select from things that you may or may not use. If you can add anything you want without penalty, you want a system which is slim and understandable. Fat systems are hard to understand. You'd rather have the power to create anything you need. "Thin and powerful" vs. "Fat and ugly" -- it's the Forth difference)

John J Wavrik
jjwavrik@ucsd.edu
Dept of Math  C-012
Univ of Calif - San Diego
La Jolla, CA  92093
wmb@SUN.COM (11/18/89)
One purpose of a Forth written in C is to have Forth on machines
upon which you otherwise couldn't get Forth. Another purpose is
to be able to link a Forth interpreter into an application which
is written in C. Another purpose is because some people just *want*
a Forth written in C, for whatever reason. Another purpose is to
get a Forth system "up and running" quickly on a new machine which
already has a C compiler. It took me about 5 minutes to bring up
C Forth 83 on a Sun 386i, without looking at any kind of manual
whatsoever.
I sell both assembly-language Forth and C Forth products. I recommend
the assembly language versions where they are applicable. Given the
choice, I personally prefer to use the assembly language version. I don't
always have the choice. Given the choice between C Forth and no Forth,
I will choose C Forth.
C is the "assembly language" of a Forth written in C .
Given that, here is how you could write "D+" in C.
Yes, you have to know some C to understand it or to have written it.
You also have to understand binary arithmetic.
I don't buy the argument that you can write assembly language without
knowing assembly language. Learning just enough to get by is still
learning, and even that must build upon a base of knowledge about
other assembly languages and computer architecture in general (registers,
condition codes, addressing, etc).
Anyway, this is the straightforward version of D+ ; a slightly-optimized
version appears later. (By the way, has anybody else noticed that the
word "D+" sounds sort of like "deep lust"? (No, I don't hang out in adult
video stores)).
/*
* Carry calculation (assumes 2's complement arithmetic):
* (a) If both operands are positive, carry = 0.
* (b) Else, if both operands are negative, carry = 1.
* (c) Else it must be true that exactly one operand is negative,
* so carry = 1 iff the sum of the operands is non-negative.
*
* This calculation takes advantage of the "short-circuit evaluation"
* semantics of && and || , both for correctness and for efficiency.
* It is likely that both operands are positive, so the expression will
* be resolved by the &&, thus skipping the rest of the calculation.
*/
#define CARRY(a,b,r) ( ( (a|b) < 0 ) && (( (a&b) < 0 ) || ( r >= 0 )) )
int ah, al, bh, bl, rh, rl; /* arguments a,b result r */
bh = *sp++; bl = *sp++; /* Pop arguments */
ah = *sp++; al = *sp++;
rl = al + bl; rh = ah + bh + CARRY(al,bl,rl); /* Calculate result */
*--sp = rl; *--sp = rh; /* Push result (stack grows downward, so push with *--sp) */
Discussion:
This is not as fast as the equivalent machine code, but it's
probably within a factor of roughly 2. It is portable to any
2's complement machine. Presumably, this implementation is
readily-understandable by anybody who knows C .
If "int" is replaced by "long", this definition should even
provide 64-bit arithmetic on a 32-bit Forth running on a 16-bit
machine, such as a PC. (Doing that in assembly language on a PC
is likely to be quite an exercise, considering the shortage of
registers).
Implementation of D- and/or DNEGATE is left to the reader as an exercise.
Mysterious "XFORTH" unmasked:
Dr. Wavrik has been kind enough not to mention C Forth 83 by name while
he points out its deficiencies. In the interest of calling a spade a
spade, the mysterious "XFORTH" is my C Forth 83 product. I admit the
deficiencies (UM* is indeed implemented incorrectly. I thank Dr. Wavrik
for pointing that out. The lack of D+ is corrected by this message).
(To add to the possible confusion, Mikael Patel has a Forth in C which
is actually named XForth. I believe that the XFORTH to which Dr. Wavrik
is referring is C Forth 83).
Optimized version:
With that in mind, here is the optimized version of D+ for C Forth 83.
These optimizations are based on knowledge of how the stack is implemented
in C Forth 83 (with the top of stack "cached" in a register variable)
and knowledge of some predeclared register variables. It's basically
the same code with the arguments declared slightly differently and
some unnecessary "pushes" and "pops" eliminated.
case DPLUS:
{
normal ah, bh;
#define al scr
#define bl tos
#define rh tos
#define rl *sp
/*
* Carry calculation (assumes 2's complement arithmetic):
* (a) If both operands are positive, carry = 0.
* (b) Else, if both operands are negative, carry = 1.
* (c) Else it must be true that exactly one operand is negative,
* so carry = 1 iff the sum of the operands is non-negative.
*
* This calculation takes advantage of the "short-circuit evaluation"
* semantics of && and || , both for correctness and for efficiency.
* It is likely that both operands are positive, so the expression will
* be resolved by the &&, thus skipping the rest of the calculation.
*/
#define CARRY(a,b,r) ( ((a | b) < 0) && (((a & b) < 0) || (r >= 0)) )
bh = tos; bl = *sp++;
ah = *sp++; al = *sp;
rl = al + bl; rh = ah + bh + CARRY(al, bl, rl);
#undef al
#undef bl
#undef rh
#undef rl
#undef CARRY
}
next;
marc@noe.UUCP (Marc de Groot) (11/22/89)
In article <5599@eos.UUCP> jbm@eos.UUCP (Jeffrey Mulligan) writes:
>ir230@sdcc6.ucsd.edu (john wavrik) writes:
>>Please note that the problem is to add the data type "long" in such a way that
>>it has the same status as built-in types. Treating a "long" as an array or a
>>"struct" and defining functions like long_add would not do!
>
>Isn't having a special word "D+" for adding doubles (instead of using "+")
>equivalent in a sense to having a special function long_add?

Forth does not dictate any particular approach. The Forth Approach says "Do what's good for YOU." Do you want overloading of operators so that "+" will add ints and floats, and concatenate strings? Well, you can implement that relatively trivially.

John Wavrik's example is unfortunately weak; the point he is trying to make is not. The reserved word "long" is a member of a class of objects which DOES NOT CHANGE. You, the programmer, are not allowed to add to this set of words or change them. The reserved word "+" will add char's, short's, long's, int's, float's and double's. If you want more functionality, you have to program in C++. In straight C, there is no way to change its behavior.

Forth is a language which, I am told, is particularly comfortable for object-oriented programming (OOP). I have no experience with OOP but I have a colleague who enthusiastically churns out fast, tight, OO code on Amigas using JForth.

The point is that in Forth, as in LISP, programs are members of the same class of object as the reserved word set. There is a huge advantage to programming in such a system. The increased flexibility provides for a richness and power not available in other systems.

--
Marc de Groot (KG6KF)          These ARE my employer's opinions!
Noe Systems, San Francisco
UUCP: uunet!hoptoad!noe!marc
Internet: marc@kg6kf.AMPR.ORG
ForthNet@willett.UUCP (ForthNet articles from GEnie) (01/15/90)
Date: 01-13-90 (09:09) Number: 2777 (Echo) To: R.BARR Refer#: NONE From: STEVE WHEELER Read: NO Subj: FORTH IMPLEMENTATION Status: PUBLIC MESSAGE I don't know much, but one of the software types at Topologix wrote a Forth for the transputer boards they made. I got the impression that it was more for his own edification than anything else. Don't remember his name, and I don't even know if he's still at Topologix since their business started south. But imagine the possibilities ... Forth on a four-transputer board with multiple megs plugged into a Sun with color monitor! Gee .. (no, gee-whillikers!) NET/Mail : RCFB Golden, CO (303) 278-0364 VESTA & Denver FIG for Forth! ----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
ForthNet@willett.UUCP (ForthNet articles from GEnie) (02/19/90)
Category 3, Topic 24 Message 51 Sat Feb 17, 1990 R.BERKEY [Robert] at 18:57 PST

To: David Albert
Re: : (colon) Data Structures Threading Tradeoffs

> ...I have seen that several implementations of Forth use a small
> "inner interpreter loop" using DS:SI for example as the
> instruction pointer. I chose just to use CALL and RET as the
> entry and exit to my words. Therefore, CS:IP is my instruction
> pointer and word pointer. Here's the question: Why do people use
> the separate inner interpreter loop? It seems that the call and
> return are much more flexible and that I can more easily manipulate
> return addresses since they are just on the stack. I use BP for my
> parameter stack pointer.

This gets into the whole issue of varieties of implementations of colon. To review, the basic varieties of Forth threading techniques have been called, in increasing order of abstractness:

    native code compilation
    jsr threaded
    direct threaded
    indirect threaded
    token threaded

Native code compilation is just the usual mix of code that an assembler and an ordinary compiler produce. This may get called other things like direct machine compilation. A Forth native code compilation may have lots of calls intermixed with short runs of low level code. Depending on viewpoint, this may or may not be considered a threading technique.

What you've implemented sounds like it might be related to the class of JSR (jump subroutine) threading, where the body of a colon definition contains a sequence of calls. JSR threading is related to native code compilation in that the processor looks at them in the same way. The structural difference is such that a JSR threaded system can be compliant to the Forth-83 Standard, while a native code compilation is not. A Forth-83 implementation could also have a native code compiler, but this would be there in addition to the : (colon) compiler.
The names "direct threaded" (DTC) and "indirect threaded" (ITC) were criticized on technical grounds in an early Forth Dimensions but the names have stuck. Direct threading gets a code field added to the body of the colon definition. The code field is directly executable, although often one register must be set before executing the code field. One key answer to your query is that compiling a compilation token on an 80188 jsr threaded system takes three bytes, whereas compiling a compilation token with DTC, ITC, TTC, etc., takes two bytes--a potential for substantial reduction in code size.

Indirect threading means that the code field, instead of being executable, contains the address of executable code. The Forth-79 Standard restricted implementations of : to indirect threading.

Token threading (TTC) has several variants. It may add one more level of indirection through a table of pointers, to a table of pointers to code. With token threading, addresses can be completely isolated from the main body of code, making relocatability easy.

Specific machine architectures lead to more variations on the above, including segment threading (SgTC) on the 8086, and a 68000 "token" threading in which the table is accessed by the architecture and the thread is directly executable.

It might seem at first glance that these systems would get slower the more abstract they get. But then consider that in a JSR system NEXT is RET CALL . That's two bytes of opcode, which reads from memory four bytes of addresses, and writes two bytes, for a total of eight bytes of memory access. Meanwhile an 80188 direct threaded system with an inline NEXT of LODSW AX JMP has three bytes of opcode, which reads two bytes of address, for a total of five bytes of memory access. Processors including the PDP-11 and HP2100(?) have single-opcode instructions that can perform an indirect-threaded NEXT . It's easy to see that the speed tradeoffs can get interesting.
Like you suggest, there are many other tradeoffs. I've sometimes wondered about the efficiency tradeoffs of having the return stack the default 80x8x SP stack. Related to your comment about ease of manipulating return addresses, one technique that's used is SP, BP XCHG to get at the return stack. One thing I find interesting about JSR is that it clarifies that a Forth IP register, (DS:SI or whatever), is really a part of the return stack. Now as for how all this compares with what one discovers when reading up on TIL's, I wouldn't know, but would be interested. Robert ----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
ForthNet@willett.UUCP (ForthNet articles from GEnie) (03/06/90)
Category 3, Topic 24 Message 52 Mon Mar 05, 1990 R.BERKEY [Robert] at 05:48 PST

Re: ?FOR , a version of FOR that executes zero times.

Frank Sergeant writes:
> Rob Chapman (via Usenet) suggests that u FOR ... NEXT should
> execute u times.
>
> Rob, thanks for your suggestion... ...
> I have about decided that I agree with Rob. ...
> cmFORTH has the word -ZERO that accomplishes this change to u
> instead of u+1. It is used like this
> : STARS ( u -) FOR -ZERO STAR THEN NEXT ; ...
> Rather than use -ZERO, I think I will build it in as the default. ...
> Anyone have any thoughts on why we shouldn't make this change?

I like Rob Chapman's structure, also. However, I think that using the name FOR impedes readability, portability of code, and potential for standardization. I don't see any basic technical problem with FOR (the name NEXT and using I equivalent to R@ with FOR are different issues). We went through this after Forth-83--there were those who chose to code or recode DO as ?DO .

As to a zero executing version of FOR , here's an implementation I've added after seeing Rob Chapman's posting. The loop code is in the target system. Here are some of the target compiler implementations.

thread, BRANCH,     \ target compile BRANCH
thread, ?FOR,       \ target compile (?FOR)
thread, ?LOOP,      \ the (loop) that gets compiled can vary
0 value -1LOOP      \ returns the compilation token of (-1LOOP)

T: ?FOR ( -- >mark )        \ T: adds the word to a compiler vocabulary
   ?FOR,                    \ compile (?FOR)
   BRANCH, >MARK            \ (inefficient: BRANCH, should be eliminated)
   IP-R> SAVE> ?LOOP,       \ save what ?LOOP compiles on return stack
   IP->R                    \ (should be left on parameter stack?)
   -1LOOP =: ?LOOP,         \ set ?LOOP,
   ;

T: ?LOOP ( >mark -- )
   DUP >RESOLVE             \ resolve the branch after ?FOR
   ?LOOP, <RESOLVE          \ compile loop and resolve backward branch
   IP-R> RESTORE> ?LOOP, IP->R  \ put back previous copy in ?LOOP,
   ;

This design is so that the syntax ?LOOP can be used to terminate a variety of loops.
Also, ?FOR puts branch addresses on the stack as would BEGIN so that using WHILE inside a ?FOR loop is like using it inside a BEGIN ... UNTIL loop. I'm not at all sure about this--the ANS ?DO requires two addresses. Any comments?

The following code is for the target system. On the 80x8x, using the overflow flag for (LOOP) and (NEXT) is a bit faster than using the carry flag (here it's 15 bytes memory access vs. 16). This is the same basic loop body I've always implemented for (LOOP) , except that this version has no provision for LEAVE . Maybe I'll add that later for compatibility, but for now I plan on using WHILE and UNLOOP for new code.

CODE (-1LOOP) ( -- )
   0 [+BP] WORD DEC
   OV<> IF  ES: LODSB  CBW  AX, SI ADD  NEXT JMP  THEN
   SI INC  4 #, BP ADD  NEXT JMP
END-CODE
' (-1LOOP) =: -1LOOP

CODE (?FOR) ( u -- )
   AX POP  $80 #, AH XOR
   SWITCH,  AX PUSH  AX PUSH  SWITCH,
   NEXT JMP
END-CODE
' (?FOR) =: ?FOR,

I'm not using I with these loops right now, but it could be coded as follows. This coding for I makes ?FOR ... I ... the same as 0 ?DO ... I ...

CODE I ( -- w )   \ get the current index of the innermost loop
   2 [+BP], AX MOV  0 [+BP], AX SUB  AX PUSH  NEXT JMP
END-CODE

Robert
----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
ForthNet@willett.UUCP (ForthNet articles from GEnie) (03/12/90)
Category 3, Topic 24 Message 53 Sun Mar 11, 1990 F.SERGEANT [Frank] at 10:40 CST

Re: ?FOR , a version of FOR that executes zero times and other tinkerings

To Robert Berkey

Thanks for the tip on using the overflow flag in NEXT. I guess you lay down a 1-byte branch except when ?FOR & NEXT are far enough apart to require a 2-byte branch. I've got to consider that for all my branches. I hate the added complexity for the compiler (& for SEE).

This week I changed from FOR to ?FOR. Here's my new code. I've built the branch in as part of 'for' so 'for' costs 4 bytes for each ?FOR NEXT loop.

( runtime FOR - keeps only count on Rstk)
CODE for
   SWITCH,           ( point to return stack)
   BX PUSH,          ( save loop count on R stk)
   SWITCH,           ( point back to data stack)
   BX POP,           ( refill TOS )
   0 [SI] SI MOV,    ( branch to next to skip loop 1st time)
   NXT,              ( in-line threading next - as opposed to NEXT next)
END-CODE

This allowed me to eliminate '1-' when it appeared before the FOR. It also allowed me to eliminate the ?DUP IF 1- ..... THEN that often surrounded FOR ... NEXT. I went thru my applications making those changes, hoping for a net savings in bytes. I was surprised that there was a net increase in program size, but not much. EVERY FOR NEXT loop was 2 bytes longer because of the branch built into 'for' while the 2 byte savings for dropping '1-' and the 8 byte savings for dropping '?DUP IF 1-' only happened some of the time. In addition, a few times an extra '1+' was needed. Thus I consider this only a border-line success. I think I'm ahead due to the improved clarity of dropping the '?DUP IF 1-'. So, I've paid a small object code price for an improvement in source code clarity. It is probably worth it to me. There is something nice about 7 FOR ... NEXT looping 7 times rather than 8 and I've always liked the idea of 0 FOR ... NEXT doing it zero times! But, the above trade off is not my favorite kind.
I much prefer the kind where the source code gets simpler & clearer and the object code gets smaller and faster.

Continuing to tinker, I added 'N!' ( u a - u) which stores u into a and keeps u on the stack. A quick search of my applications showed about 28 places it could be used in place of '!' and eliminate 'DUP'. I added '+UNDER' ( a b c - a+c b) which replaced 'SWAP n + SWAP' with 'n +UNDER'. I factored the 'IF' into 'ABORT"'. I added 'NIP'. These changes did reduce the object code size for my applications. But, not by enough for me to be sure it was worth having more words in the nucleus.

I am about to go thru and look for places to use COUNT to walk forward thru a sequence of byte values (instead of DUP 1+ SWAP C@), a practice recently decried by Mitch Bradley (I think it was).

I've thought of defining run-time begin if while else then repeat until again to improve the appearance of the SEE decompilation. For now I don't want to pay the price. They might go well in a special version designed for teaching Forth.

-- Frank
----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
wmb@MITCH.ENG.SUN.COM (03/13/90)
> I am about to go thru
> and look for places to use COUNT to walk forward thru a sequence of byte
> values (instead of DUP 1+ SWAP C@), a practice recently decried by Mitch
> Bradley (I think it was).

I don't object to having a word to perform this function, but it should be named something other than COUNT . How about C@+ , or NEXTCHAR , or something like that. It could still be implemented as : C@+ COUNT ; , or if you are worried about speed, either implement it directly in code, duplicating the implementation of COUNT , or use something like Yngve's SYNONYM facility to create "smart" aliases that either compile or execute their referent as is appropriate.

COUNT has a specific purpose - converting a packed string to an "addr len" string. Using it to step through an array just makes Forth harder to read, because the reader looks at it and thinks there must be a packed string somewhere. Indeed, the name "count" has negative mnemonic value in this case. "Cleverness" (aha! this just happens to work so I'll use it) at the expense of clarity is one of the things that gives Forth a bad name in the computer community at large.

Cheers,
Mitch
ForthNet@willett.UUCP (ForthNet articles from GEnie) (03/19/90)
Category 3, Topic 24 Message 54 Sun Mar 18, 1990 R.BERKEY [Robert] at 07:07 PST

Re: Undetected out-of-range branches

Frank Sergeant writes, 900311:
> I guess you lay down a 1-byte branch except when ?FOR & NEXT
> are far enough apart to require a 2-byte branch. I've got to
> consider that for all my branches. I hate the added complexity
> for the compiler (& for SEE).

The one-byte branch is something inherited in code over ten years old now. And no, there are no two-byte branches. Also inherited were three out-of-range branches in shipped code.

If anything, the monster problem as I see it with one-byte branches is compilers and assemblers without range checking. Consider that an out-of-range branch is a problem that moves around if you put in more code to detect what's happening.

Charles Moore notes that learning need happen but once. What a former co-worker learned after a difficult debugging experience was to stay away from Forth. After that job he became an engineering rep for Intel products (technically also Harris) at a distributor. He likes to talk about Forth...

Robert
----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
ForthNet@willett.UUCP (ForthNet articles from GEnie) (03/25/90)
Category 3, Topic 24 Message 55 Sat Mar 24, 1990 F.SERGEANT [Frank] at 11:20 CST RB>Charles Moore notes that learning need happen but once. What a RB>former co-worker learned after a difficult debugging experience RB>was to stay away from Forth. That is a great juxtaposition. Another I like is "Money is the root of all evil." "A man needs roots." (I know the 1st is a misquote.) Perhaps my very favorite is "Time flies like an arrow." "Fruit flies like a banana." Yours, of course, is more serious. Thanks for the details on the 1 vs 2 byte branches. Having a fixed size branch sure simplifies things. Even if I decided to use a variable size branch, I believe I'd leave the forward branch a fixed (2-byte) size, since at that time, the destination is unknown. This leaves only the backward branches that can be optimized. Since this eliminates IF-ELSE-THEN, I think the variable size branches wouldn't be worth doing. So, that leaves me to consider using a fixed size 1-byte branch, with an in-range test. Sometime I hope to look over my code and see what that would save me in time and space (and how much code would need to be re-written). -- Frank ----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
ForthNet@willett.UUCP (ForthNet articles from GEnie) (03/26/90)
Category 3, Topic 24 Message 56 Sat Mar 24, 1990 D.RUFFER [Dennis] at 18:41 EST

One more detail on Intel branches, Frank. If you plan on using variable length calls, the returns must know which was being used at compile time. In other words, if you use a 5 byte call you must use a FAR RETurn to get back. I think the 80386 automatically uses 32 bit addresses when it is in protected mode, so that is when FAR CALLs use 7 bytes. I haven't figured it all out yet, so I may still be mistaken about it.

Having range checks in your conditional branches is almost required. I get real upset when a Forth complains that my loop is too long. It would be much better to fix the problem for me. It might be tough for a forward branch, because you didn't save enough bytes, but a backwards branch should be able to figure out how to do big and small branches. IMHO

DaR
----- This message came from GEnie via willett through a semi-automated process. Report problems to: 'uunet!willett!dwp' or 'willett!dwp@gateway.sei.cmu.edu'
stepwolf@milton.acs.washington.edu (Shiva) (03/26/90)
Re: pygmy12 (or 13) Can anyone tell me how to modify pygmy so that it runs on Ms-Dos machines and does not write to the screen directly, but through Ms-Dos [so that people with screwed-up/non-ibm-compatible graphics can use it]? If anyone's ever heard of it, I have a Sanyo 555. Thanks. --lupus@max.u.washington.edu (the best one to e-mail to me at)
dka@mtunq.ATT.COM (Doug Addison) (05/23/90)
I'd be interested in the results as I'm new to Forth and this would be instructive as to developing such a system for other-than-host type hardware. D. K. Addison
ForthNet@willett.UUCP (ForthNet articles from GEnie) (05/26/90)
Date: 05-23-90 (20:40) Number: 3276 (Echo) To: W.FEDERICI Refer#: 3267 From: RAY DUNCAN Read: NO Subj: FORTH IMPLEMENTATION Status: PUBLIC MESSAGE

There are a number of people working with this idea already -- of putting a small headerless kernel on the target, and keeping the interpreter/compiler layer and dictionary/symbol table for the target on the host -- still allowing "transparent" interactive use of the target. Klaus Flesch in Germany, Martin Tracy in Los Angeles, and Chris Stephens of COMSOL in England have worked on this. I believe Forth Inc. is selling some derivative of Chris Stephens' stuff as a product here in the US.

NET/Mail : LMI Forth Board, Los Angeles, CA (213) 306-3530
----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or willett!dwp@hobbes.cert.sei.cmu.edu
ForthNet@willett.UUCP (ForthNet articles from GEnie) (05/28/90)
Category 3, Topic 24 Message 68 Sun May 27, 1990 B.RODRIGUEZ2 at 12:17 EDT

You too? This seems to be the topic of the year. I'm sorry I missed Ray Duncan's ESP column on this; but I did hear the talk on chipFORTH at last year's Rochester conference.

I've recently modified my own metacompiler to compile directly into an embedded Zilog Super8, using a small (~300 bytes) "talker" program in the target system. Meta-definitions are compiled directly into the target's memory, but the headers reside in the host. Meta-interpretation lets you execute target words by typing their names at the host. There's some fudging to make the target's stack look like the active parameter stack. Naturally, you compile into the target only those words that your application requires. (It's fun to be able to test "DUP" when it's the only word in the target dictionary!)

It's still an ungainly mess, but I'm going to be giving a "how-to" talk at Rochester on this project. (Or at least as much of the "how-to" as I can communicate in 15 minutes.) If you're not going to be at Rochester, I'd be happy to send you the 5-page paper. I'm not going to be showing off the actual code until I clean it up a bit.

- Brad
----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or willett!dwp@hobbes.cert.sei.cmu.edu
ForthNet@willett.UUCP (ForthNet articles from GEnie) (06/02/90)
Category 3, Topic 24 Message 69 Thu May 31, 1990 W.FEDERICI [W.FEDERICI] at 20:34 PDT

Yes, the connection with meta-compiling was obvious -- I even considered working up the target system "from scratch" using the memory examine/change functions of a conventional monitor ROM on a single-board computer. In the interest of getting something working quickly, tho, I started with a rather complete "precompiled" kernel (~2.5Kbytes on the 68000); I was most interested in seeing how transparent the interaction between the host interpreter and the target execution would be.

Brad -- I would appreciate a copy of your paper, since I won't get to Rochester this year.

Wilson
----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or willett!dwp@hobbes.cert.sei.cmu.edu
ForthNet@willett.UUCP (ForthNet articles from GEnie) (06/02/90)
Date: 05-31-90 (15:38) Number: 3296 (Echo) To: ALL Refer#: NONE From: DAVID ALBERT Read: (N/A) Subj: ASM Source for Forth Status: PUBLIC MESSAGE

Hi folks, I have been involved in building embedded systems for a little while now and have been trying to port a forth system over. I wrote my own TIL, and it works fine, but I am having a little trouble implementing a full Forth. The systems are 80188 based, but have no disk drives (as most embedded systems don't) and therefore, such a system must use memory (NV SRAM or EEPROM) for block storage. So, if anyone knows anything about this, or has the assembler source for a PC Forth so I can modify it, it would be much appreciated. Thanks!

----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or willett!dwp@hobbes.cert.sei.cmu.edu
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (09/14/90)
Date: 09-11-90 (10:42) Number: 724 (Echo) To: DONALD BENSCH Refer#: 719 From: KARL BROWN Read: NO Subj: FORTH & SBC Status: PUBLIC MESSAGE

Hello. Welcome to the fold. To gain an understanding of FORTH which will help you write a SBC FORTH, I would recommend going 'way back to the beginning. The book:

    Threaded Interpretive Languages
    by Loeliger (R.G.)
    ISBN 0-07-038360-X
    McGraw-Hill BYTE BOOKS
    70 Main St
    Peterborough, NH 03458

is the one you want. It tells you in all the gory detail exactly what is required to set up an interpreted, extensible language (FORTH). Good luck. Give Siliconnections (222-2221) a call. Ask for Andy. -Karl

[ One might guess the area code is 604??? -dwp]

NET/Mail : British Columbia Forth Board - Burnaby BC - (604)434-5886
----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or dwp@willett.pgh.pa.us
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (09/14/90)
Date: 09-11-90 (13:10) Number: 725 (Echo) To: DONALD BENSCH Refer#: 719 From: JACK BROWN Read: NO Subj: FORTH & SBC Status: PUBLIC MESSAGE

DB>Does anyone know of a book that explaines how to set-up Forth on a
DB>single board computer (or any CPU based system) from scratch?

You might get CH Ting's EFORTH . Download the file EFORTH.ZIP from BCFB (may also be on ECFB and GEnie). Ting also has an excellent manual to go with EFORTH.

[ You can also get EFORTH.ZIP from wsmr-simtel20.army.mil and wuarchive.wustl.edu (they mirror simtel20). If you don't have FTP access, drop me an email message at the address below and I can send you a UUENCODED copy by return email. --dwp]

EFORTH is a model Forth system that resides on your single board computer and communicates with your HOST PC via serial port and your favorite COMM software. EFORTH has about 20 - 30 code primitives that you must code yourself for your SBC's CPU. The 8086 model is included in MASM format in the file EFORTH.ZIP. Because the model uses MASM format for the kernel it may be more readily digestible to those familiar with MASM and just starting out with Forth.

--- * QDeLuxe 1.01 #260s Are you a member of FIG? Why not join today!
----- This message came from GEnie via willett through a semi-automated process. Report problems to: uunet!willett!dwp or dwp@willett.pgh.pa.us
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/10/90)
Category 3, Topic 24 Message 86 Mon Oct 08, 1990 F.SERGEANT [Frank] at 22:20 CDT

To Vance Heron, regarding cross-(meta)-compilation in Pygmy Forth.

VH> Given a Forth system like PYGMY (I really love it - thank you
VH> Frank) how could I port it to a 68000 box ??? How did Frank get
VH> his 1st interpreter up so he could interpret the rest ??

Believe it or not, I actually plan to answer both of the above questions. My latest answer to "should Forth be written in assembler or in Forth?" is it definitely should be written in Forth, at least most of the time, but there can be cases where it might need to be distributed as an assembly listing.

Are we talking about (1) porting to a different processor or (2) distributing to a new Forther a system that has already been written for his system? There are gray areas, e.g. sending a Forth system for the PC to someone who has a "clone" with video RAM at a "wrong" location. He can't quite run it without making some changes. He should, though, be able to look at the source code (even if it is in blocks. I have several generally available programs that will display blocks, e.g. the HEX mode of XTREE). Then, with DOS's DEBUG program or various other file or memory editing programs he could patch the video RAM address(es) to suit his own system.

Not as smooth as sending him a copy that will run without patching, you say? An alternative would be to send him the source code in assembly language and let him use that listing as a guide for his patching, or, IF he had a compatible assembler and knew how to use it, he could edit the assembly source and re-assemble. That also might not be smooth, depending on which tools and what knowledge the recipient had.

When I previously expressed my preference for distributing Forth in Forth, I was referring to the #2 case. There I see no point in sending him a paper assembly listing when you can send him a disk that will run.
Then, since he can run Forth, I think it is superior for him to have his Forth's source code also written in Forth. If we send him a Forth that must be compiled from that Forth before it will run, so that he can use it to compile the Forth that he wants to compile, we get into the bootstrap problem. He's in a similar position if we send him assembly source and he doesn't have an assembler. If we include a paper assembly listing, including the hex object code, he can type in the object directly (and painfully). There again, he might not have DEBUG or an equivalent monitor program to allow him to do that. So we say everyone can be expected to have an assembler, at least an equivalent to DOS's DEBUG, but not be expected to have a Forth.

However, the "assembler" listings of Forth that I've looked at, including FIG's, were not pure assembler. They typically have the code words written in assembler and various (fairly opaque) macro tricks to get the assembler to lay down headers and colon-definition lists. That is one reason I recommend Forth in Forth: readability. As proof, I say again: download EFORTH and compare its assembly listing with its Forth listing; see which you think is more readable. And which, in the case of EFORTH, was it really written in, and which was the derivative listing? I had the opportunity to ask Bill Muench that question last week in San Francisco. He tells me he wrote it in Forth and Ting translated it to assembler. So, at best, I feel the assembly-listing approach is only to be used if nothing else will work to give you a running Forth system. Thereafter, abandon the assembly listing and work in Forth.

ANSWER #2: I started with the FIG listing for the 6809 and entered the (almost unreadable) hex codes into a Radio Shack Color Computer thru a monitor program that I'd written in BASIC, saving intermediate stages to cassette tape. Once that was done, I re-wrote FIG Forth for the 6809, wrote my meta-compiler & 6809 assembler, and re-gen'd my system.
Using that system and its meta-compiler, I cross- (meta-) compiled Forths for the 8088 and the 68000. The first step was to write a Forth assembler for the target processor that would run on my Forth on the Color Computer. (Boy, was I glad when I moved from cassette tape to a disk drive!) The next step was to re-write the code words in that new assembly language, adjust for RAM and I/O locations in the target system, and then run that new source code through the meta-compiler. Simple. After I finally got a PC, I primarily used L&P's F83. That was my platform for generating Pygmy Forth for the first time.

ANSWER #1: If you have access to a PC, so that you can run Pygmy, then just use Pygmy on the PC to generate a version for the 68000. If you don't want to do that, you might send me your 68000 system and I might do it for you, as I want to have Pygmy running on various processors. (I already have an earlier version of Pygmy running, from EPROM, on a breadboarded Zilog S8 system.) Anyway, here's how you do it: first write a 68000 assembler that runs in Pygmy on the PC but generates 68000 opcodes. Watch, naturally, for byte order and for even addresses. You might shorten this step by starting with one of the 68000 Forth assemblers that have been published. Then, using Pygmy & its editor on the PC, re-write all the code words that are currently in 80x8x assembly language into their equivalents in 68000 assembly language. Look over operating-system calls, etc. You could use a central NEXT instead of in-line, at least to start with, and put a trace or single-stepping routine in it to display the stacks and IP, etc. Load the code, generating an image in the PC's memory of the 68000 version of Pygmy. Save it to disk. Transfer it (via serial ports, I suppose) to your 68000 system and try it out. The 68000's single-stepping debugging monitor will come in very handy, combined with the Forth-level single stepping I mentioned above.
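Frank's first step above -- a cross-assembler running on the host that lays down opcodes for a different target CPU into a memory image -- can be sketched in miniature. This is a hypothetical toy, not Pygmy's actual meta-compiler; the `CrossAssembler` class, its `origin` argument, and the image layout are all invented for illustration. The two opcode values used are the real 68000 encodings of NOP and RTS, and the byte order and alignment checks are exactly the "watch for byte order and even addresses" concerns Frank mentions:

```python
# Toy cross-assembler: runs on the host, builds a byte image
# destined for a different target CPU (here, a big-endian 68000).

class CrossAssembler:
    def __init__(self, origin):
        self.origin = origin      # load address on the target
        self.image = bytearray()  # target memory image, built on the host

    def here(self):
        """Current target address (like Forth's HERE)."""
        return self.origin + len(self.image)

    def word(self, opcode):
        """Lay down one 16-bit opcode, big-endian and word-aligned,
        as the 68000 requires."""
        assert self.here() % 2 == 0, "68000 opcodes must be word-aligned"
        self.image += opcode.to_bytes(2, "big")

    # Two real 68000 opcodes, for flavor:
    def nop(self):
        self.word(0x4E71)

    def rts(self):
        self.word(0x4E75)

asm = CrossAssembler(origin=0x1000)
asm.nop()
asm.rts()
# The finished image would be saved to disk and transferred
# (e.g. over a serial port) to the target board.
print(asm.image.hex())   # 4e714e75
```

The point of the sketch is only the division of labor: the assembler executes on the host, but every byte it emits is in the target's encoding and byte order.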
-- Frank
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/21/90)
Date: 10-13-90 (16:34)  Number: 4054 (Echo)
To: ALL                 Refer#: NONE
From: JONAH THOMAS      Read: (N/A)
Subj: 8086 CODE         Status: PUBLIC MESSAGE

I've just recently started programming on an 8086 instead of a 68000, and I wasn't sure how much slower POPs and PUSHes were than direct register moves. So I tested it with the following code in Pygmy 1.3.

( looking at speed of different assembly commands)
CODE TEST1  BX AX MOV,  AX BX MOV,  NXT, END-CODE
CODE TEST2  AX POP,  AX PUSH,  NXT, END-CODE
: TESTA  -1 FOR TEST1 NEXT ;
: TESTB  -1 FOR TEST2 NEXT ;
: TESTC  50 FOR TESTA NEXT ;
: TESTD  50 FOR TESTB NEXT ;
( TESTC averaged 20.40 seconds, TESTD averaged 19.34 seconds --
apparently PUSH and POP are faster than register moves on this
machine. What a surprise.)

I figured that since the only difference between them was the two assembly instructions, that would be the only thing to make a difference in the speed. But the PUSH and POP combination actually came out faster. This is a memory operation where the other is just register to register! How could it possibly be faster? Is it 4 cycles where BX AX MOV, is 5? I'm running on an obsolescent 12 MHz AT; it's just a standard 80286 processor. I don't think it does any fancy cache memory or anything like that. Can somebody recommend a good book? I don't remember mysteries like this on the 68000, but maybe I've just forgotten....

--- ~ NET/Mail : The MATRIX (5 Nodes/1.2 Gig) Birmingham, AL (205) 323-2016
~ RNet 1.05M:* Metrolink - Front Porch BBS 404-695-1889 HST - Regional HUB
NET/Mail : DC Information Exchange, MetroLink Int'l Hub. (202)433-6639
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/22/90)
To: JONAH THOMAS     Refer#: 4054
From: DAVE SIEGEL    Read: NO
Subj: 8086 CODE      Status: PUBLIC MESSAGE

JT> I figured that since the only difference between them was the two
JT> assembly instructions, that would be the only thing to make a
JT> difference in the speed. But the PUSH and POP combination actually
JT> came out faster. This is a memory operation where the other is just
JT> register to register! How could it possibly be faster? Is it 4
JT> cycles where BX AX MOV, is 5?

The MOV, MOV sequence uses more memory than the PUSH, POP sequence. (Each MOV is a two-byte instruction, while PUSH and POP are single-byte instructions.) For more details on "interesting" performance characteristics of Intel CPUs, see Abrash's Zen of Assembly Language.

-dms

--- ~ EZ-Reader 1.21 ~ PCRelay:MOONDOG -> #35 RelayNet (tm) 4.10 MoonDog BBS Brooklyn,NY 718 692-2498 9600-V
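Dave's point can be made concrete with a little arithmetic. The instruction sizes below are the standard 8086 encodings; the 4-clocks-per-fetched-byte figure is a rough worst-case fetch cost on an 8-bit-bus part (Frank's follow-up message works through this in detail), used here only as a model assumption:

```python
# Standard 8086 encodings of the instructions under test:
#   MOV AX,BX  ->  89 D8   (opcode + ModRM: 2 bytes)
#   PUSH AX    ->  50      (1 byte)
#   POP  AX    ->  58      (1 byte)
MOV_BYTES, PUSH_BYTES, POP_BYTES = 2, 1, 1

# Bytes the bus must deliver per loop body in each of Jonah's tests:
mov_pair_bytes = MOV_BYTES * 2           # TEST1: MOV + MOV
push_pop_bytes = POP_BYTES + PUSH_BYTES  # TEST2: POP + PUSH

# On a fetch-starved bus at roughly 4 clocks per byte, the fetch
# cost alone favors the shorter PUSH/POP pair two to one:
FETCH = 4
print(mov_pair_bytes * FETCH, push_pop_bytes * FETCH)   # 16 8
```

So even before counting execution clocks, the MOV pair demands twice the instruction-fetch bandwidth of the PUSH/POP pair, which is the effect Jonah measured.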
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/22/90)
To: ALL                  Refer#: NONE
From: RON TOLER          Read: HAS REPLIES
Subj: CODE & PROC 3.2+   Status: PUBLIC MESSAGE

I'm attempting to figure out some of the details involved with 3.2+ assembler coding using CODE and PROC statements. All I seem to be able to do so far is to lock up my PC. I've used the 3.1 assembler previously, but only for short-task CODE statements with all data on the stack and available to the CODE procedure. I could use some pointers on two things in the 3.2+ assembler.

First: calling PROC.

PROC +TEST
   AX, # 0001 ADD
   BX, # 0000 ADC
   RETF
END-PROC

VARIABLE TEST+   +TEST TEST+ !

CODE ++TEST
   AX POP  BX POP
   TEST+ CALLFI
   BX PUSH  AX PUSH
   NEXT,
END-CODE

This is my basic trial code, which I have been using as described in a comment from 9-12-89 on using CALLFI and RETF via a variable. Is there something majorly wrong, or is some syntax incorrect? Debugging is impossible when all variations on a theme lock up the PC.

The other thing is VARIABLE access via CODE statements. Examples that I have also seem to access strange locations. Any help would be greatly appreciated.

NET/Mail : LMI Forth Board, Los Angeles, CA (213) 306-3530
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/22/90)
To: RON TOLER            Refer#: 1782
From: RAY DUNCAN         Read: NO
Subj: CODE & PROC 3.2+   Status: PUBLIC MESSAGE

Major things wrong here. The address of the variable you are using is a Forth logical address, but you really need a segment and offset for JMPFI. Use ADDR>S&O to convert the logical address to a segment and offset, put the latter in the variable, then use JMPFI. There are many other subtle features of CODE and PROC definitions related to the contents of the segment registers, the mapping of logical to physical addresses, and so on.

NET/Mail : LMI Forth Board, Los Angeles, CA (213) 306-3530
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (10/22/90)
Category 3, Topic 24
Message 91  Sun Oct 21, 1990
F.SERGEANT [Frank] at 14:20 CDT

Re: register moves vs pushes & pops on 8088 etc.

I couldn't believe Jonah Thomas's results that found the push instructions to be faster than the register move instructions. There is such a huge difference in the listed number of "clocks" that I bend over backwards to use the moves instead of the push/pops. I decided I'd better test it out myself!

I tested it on an 8 MHz XT clone with a NEC V20 in place of the 8088 microprocessor. The NEC book says 2 clocks for the move instruction and 12 clocks for the push or pop instructions. So the push & pop test ought to take approximately 6 times as long, right? Wrong. I've known for a long time that pipelining of instructions makes the timings from the book, well, maybe not completely meaningless, but far from certain.

In the following tests, I laid down a whole lot of the instructions to be tested in-line in a single code word (one word for the moves and another word for the push/pops). This makes the times for NEXT & FOR NEXT pale into insignificance. I used the macros MOVES, and PUSHES, as they are a hell of a lot easier than coding AX BX MOV, a thousand times by hand! It is unnecessary, I think, but I also disabled interrupts during the test, to eliminate the overhead of the timer ticks. I needn't have bothered, unless some other code re-enabled the interrupts without me realizing it. There is also the dynamic RAM refresh overhead, which I've ignored. The "scaffolding" overhead is under half a percent. Here's the code I used (for Pygmy Forth). I'd be interested in the timing results on other processors.
CODE INTS-OFF  ( -)  CLI,  NXT, END-CODE   ( disable interrupts)
CODE INTS-ON   ( -)  STI,  NXT, END-CODE   ( re-enable interrupts)
: MOVES,   ( # -)  2/ FOR  BX AX MOV,  AX BX MOV,  NEXT ;
  ( a macro to lay down lots of reg-to-reg move instructions)
: PUSHES,  ( # -)  2/ FOR  AX POP,  AX PUSH,  NEXT ;
  ( a macro to lay down lots of push & pop instructions)
: TESTX2   25000 FOR TESTX1 NEXT ;   ( about 1 second?)
( empty loop as a "control")

The results: 250,000,000 register-to-register move instructions take about 278 seconds, and 250,000,000 register push and pop instructions take about 449 seconds, giving an actual ratio of 1.615:1. So, thank god, the move instructions are faster! But not by the expected 6 to 1.

The NEC book also says to allow 4 clocks per byte when the needed byte is not already in the prefetch queue. I feel like a chemist figuring out the empirical atomic ratios of molecules. Each move instruction takes 2 bytes. Each push or pop takes 1 byte. If we assume the needed bytes are NEVER present in the prefetch queue, we would add 2*4=8 clocks to the move, giving it a total of 10 clocks, and we'd add 1*4 clocks to the push/pop, giving it a total of 16 clocks. There's our 16-to-10, or 1.6-to-1, ratio. I just didn't realize it was this bad. Different mixes of instructions would affect this quite a bit, apparently. At 12 clocks, the 1-byte push/pop is "too fast": the prefetch queue can't keep up. What instructions do you have to run to keep the queue filled up? Maybe divide instructions. If we'd do a dummy divide instruction every other instruction, perhaps we wouldn't have prefetch faults! Of course, I've done this test and analysis hurriedly and I may have screwed up, in which case I'll be embarrassed over my sarcasm. You'll let me know, I presume, if I'm in error? So, a "2 cycle" instruction really takes 10 cycles.
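Frank's back-of-the-envelope model -- nominal clocks plus 4 clocks per fetched byte, assuming the prefetch queue is always empty -- can be checked directly. All the numbers below are his (from the NEC book and his measured times); only the packaging into a small function is ours:

```python
# Frank's model: effective clocks = nominal clocks + 4 clocks
# per instruction byte fetched (worst case: empty prefetch queue).
FETCH = 4  # clocks per fetched byte on an 8-bit-bus part (8088/V20)

def effective(nominal, size_bytes):
    return nominal + size_bytes * FETCH

mov      = effective(2, 2)    # MOV reg,reg: 2 clocks, 2 bytes -> 10
push_pop = effective(12, 1)   # PUSH or POP: 12 clocks, 1 byte -> 16

predicted = push_pop / mov    # 16/10 = 1.6
measured  = 449 / 278         # Frank's measured seconds, ~1.615

print(mov, push_pop, round(predicted, 3), round(measured, 3))
```

The predicted 1.6:1 ratio lands within one percent of the measured 1.615:1, which is why the "empirical atomic ratio" analogy fits so well.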
It reminds me of two things: (1) the "$4" pizza commercials, where the cheapest "$4" pizza really costs $16.95, and (2) rating automobiles at so many EPA miles per gallon (the explanation here is that an EPA "mile" is only 3,280 feet long).

-- Frank
toma@tekgvs.LABS.TEK.COM (Tom Almy) (10/22/90)
In article <1899.UUL1.3#5129@willett.pgh.pa.us> ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) writes:

> I couldn't believe Jonah Thomas's results that found the push
> instructions to be faster than the register move instructions. There is
> such a huge difference in the listed number of "clocks" that I bend over
> backwards to use the moves instead of the push/pops. I decided I'd
> better test it out myself!
> I tested it on an 8 MHz XT clone with a NEC V20 in place of the 8088
> microprocessor.

[testing details deleted]

> The NEC book also says to allow 4 clocks per byte when the needed
> byte is not already in the prefetch queue. [...]
> If we assume the needed bytes are NEVER present in the prefetch queue,
> we would add 2*4=8 clocks to the move, giving it a total of 10 clocks,
> and we'd add 1*4 clocks to the push/pop, giving it a total of 16 clocks.
> There's our 16 to 10 or 1.6 to 1 ratio. [...]
> At 12 clocks the 1 byte push/pop is "too fast." The prefetch queue
> can't keep up. What instructions do you have to run to keep the queue
> filled up? Maybe divide instructions.

The other way to keep the queue filled is to widen the data path. For instance, an 8086, V30, 80186, or 80286 can fetch two bytes at a time, so you only need to add four clocks for every two bytes not already in the prefetch queue. The move instruction would be 4 clocks added to the basic 2 clocks = 6, and the push/pop would add 2 clocks to the basic (now faster) 10 clocks = 12, for a 2-to-1 ratio rather than the 5-to-1 quoted. This is off by a factor of 2.5 rather than almost 4. The 32-bit data path of the 80386 (and the 80486, when instructions are not in the cache) does even better.

Tom Almy toma@tekgvs.labs.tek.com Standard Disclaimers Apply
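Tom's wider-bus refinement drops into the same model: a 16-bit bus delivers two bytes per 4-clock access, i.e. roughly 2 clocks of fetch per instruction byte. Again the clock figures are the ones quoted in the thread (note the push/pop's nominal cost is 10 clocks on these parts, not the V20's 12); only the arithmetic scaffolding is ours:

```python
# 16-bit data path: 4 clocks per bus access, 2 bytes per access,
# so about 2 clocks of fetch cost per instruction byte.
CLOCKS_PER_BYTE = 4 / 2

def effective(nominal, size_bytes):
    return nominal + size_bytes * CLOCKS_PER_BYTE

mov      = effective(2, 2)    # MOV reg,reg: 2 + 4 = 6 clocks
push_pop = effective(10, 1)   # PUSH/POP (10 clocks on these parts): 10 + 2 = 12

print(mov, push_pop, push_pop / mov)   # 6.0 12.0 2.0
```

So the predicted gap between the two sequences widens from 1.6:1 on the 8-bit bus to 2:1 on the 16-bit bus, still far short of the 5:1 or 6:1 the data books imply when fetch bandwidth is ignored.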
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (11/04/90)
Category 3, Topic 24
Message 93  Sat Nov 03, 1990
J.BLACK13 [John Black] at 12:29 EST

I'm having a little problem with PYGMY Forth. What I want to do is simple: I want to read a binary file, alter the file pointer to skip some header bytes, and copy the rest of the file to a disk file or printer. FILE-READ ( buf cnt f# - ) seems to be the right command. However, when cnt >= decimal 25, the command goes into an endless loop, filling up my screen with garbage. Also, even when I read successfully using a low cnt, I can't find the resulting information in the buffer. (Mainly, I've tried looking thru the buffer using DUMP.) I'm not a total novice with Forth, but this one has me puzzled. Any suggestions?? Thanks, JEB
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (06/27/91)
Category 3, Topic 24
Message 103  Tue Jun 25, 1991
D.RUFFER [Dennis] at 23:05 EDT

Re: dwelly@saddle-lk.cs.UAlberta.CA (Andrew Dwelly)

> Q. What was the book? Is it still available? Alternatively, could
> you recommend a book that covers this subject?

The book you referred to was (I think) "Threaded Interpretive Languages", but I can't remember the name. Another useful one for you might be "Stack Machines" (I think) by Phil Koopman (I think).

> Q. Has anyone attempted to write an expert system in Forth? What
> was it like?

I have written an automotive engine analyzer, but I've never written a paper about it (confidentiality problems). There is also one, written for diesel locomotive engines, that was written up in one of the Rochester Forth Conference proceedings. I can't remember the year, but it was quite some time ago. I'm not sure how to answer the question about what it was like. It was a monster program, with too many decision points to test. On average, mine found the correct answer about 70% of the time, which was better than the average mechanic could do consistently. So the system is still very successful in the marketplace, and the company that uses it is still very successful.

> Q. Is there anywhere I could get hold of the FORML conference
> proceedings for the last few years?

FIG sells them, but I have no idea if any library carries them. Sorry I can't be more help, but I am on vacation and don't have all my reference sources with me. {B-{)> DaR

----- This message came from GEnie via willett. You *cannot* reply to the author using e-mail. Please post a follow-up article, or use any instructions the author may have included (USMail addresses, telephone #, etc.). Report problems to: dwp@willett.pgh.pa.us _or_ uunet!willett!dwp
ForthNet@willett.pgh.pa.us (ForthNet articles from GEnie) (07/01/91)
Category 3, Topic 24
Message 108  Sun Jun 30, 1991
B.RODRIGUEZ2 [Brad] at 12:02 EDT

[ If you would like a copy of any of these files, drop me a note at one of the addresses at the end of this message. Binary files are UUencoded. In order for me to answer your request, you MUST: include the line containing the file name (so I know what you want), and include your email address in the _body_ of the message. You _must_ include an address *relative to* the InterNet. -Doug Philips ]

From Mitch Bradley:

> Why reinvent the wheel? Why not just buy an existing Forth
> implementation?

One possible reason: most commercial Forths have restrictive licensing practices, especially regarding the compiler layer. Consider the expert system I wrote last year (BTW, available as EXPERT90.ZIP on GEnie; maybe someone can port it to an FTP site for Andy Dwelly). It uses the Forth compiler to produce a compiled rule base... which means that the end user MUST have access to the compiler. Thus, were I to use polyForth (or LMI, MMS, etc.) for this system, every end user would have to pay a royalty. My clients (who sell the software I write) usually won't stand for this.

Incidentally, I agree with your sentiments, Mitch, and I _am_ now using a commercial Forth (with more reasonable licensing terms). Sorry if I'm repeating my pet-peeve-of-the-month, but this is a very real problem which IMHO is obstructing acceptance of Forth.

- Brad