JAJZ801@CALSTATE.BITNET ("Jeff Sicherman,CSU Long Beach") (09/22/90)
Having tired of the object-oriented Forth discussion (many OO's ago), is
anyone interested in starting a new thread, namely: Concurrency in FORTH.

By this I mean parallelism in computation, mostly of an implicit nature,
rather than multitasking or multiuser, though multitasking may be a proper
prerequisite for implementation.

I'm mostly interested in Forth as a virtual machine for data flow
architecture (does anybody talk about this anymore?) and multiprocessor
architectures.

Jeff Sicherman
jajz801@calstate.bitnet
a684@mindlink.UUCP (Nick Janow) (09/24/90)
koopman@a.gp.cs.cmu.edu (Philip Koopman) writes:

> The question is: how do you get parallelism with Forth, given that the
> stack would seem to create all sorts of artificial dependencies?

The question should not be "how do we get parallelism _despite_ FORTH's
stack?"; it should be "how best can we use the power of the stack to
implement multiprocessing?".

As far as I can see, the main problem in parallelism is the efficient
communication of data and instructions between processors.  Stacks are an
efficient way of passing data between words (tasks, subroutines, whatever);
perhaps there's some way of passing data and/or instructions between
processors using stacks.

I haven't contemplated parallel processing for a while, and I don't
remember doing so with stacks in mind.  Maybe something will turn up if we
keep stacks in mind while considering parallelism?  Breakthroughs often
come about by keeping several previously unrelated ideas in your thoughts
while pondering a problem.

It's something to sleep on.  :)
koopman@a.gp.cs.cmu.edu (Philip Koopman) (09/24/90)
In article <9009241330.AA06784@ucbvax.Berkeley.EDU>, JAJZ801@CALSTATE.BITNET
("Jeff Sicherman,CSU Long Beach") writes:

> ... I'm mostly interested in Forth as a virtual machine for data flow
> architecture (does anybody talk about this anymore?) and multiprocessor
> architectures.

Data flow is changing these days.  The folks at MIT seem to have pretty
much given up on some fundamental points and are going to a more pipelined
(e.g. RISC-like) architecture with a large grain of interprocessor
interaction parallelism.  See the paper on Monsoon in the 1990 Computer
Architecture Conference Proceedings (pg. 82).  IMHO, the new research
direction is giving up a lot of the potential of dataflow (or, perhaps, it
is an admission that the potential never panned out).  (This assessment is
my personal opinion; others (from MIT) claim that it is a natural
outgrowth of previous work.)

BUT, fine-grain parallelism at a reasonably local level is gaining in
popularity.  This is localized data flow of the type that has been around
ever since the IBM 360/91 FPU days.  These days, the buzz-word is
"superscalar execution" (Intel 80960, IBM RS/6000, more announced almost
weekly).

The question is: how do you get parallelism with Forth, given that the
stack would seem to create all sorts of artificial dependencies?  Making
each operator into an array operator (such as John Dorband has done with
MPP Forth) is possible, but not really "data flow".  I haven't thought
about this much, so maybe I missed something obvious.

  Phil Koopman                koopman@greyhound.ece.cmu.edu   Arpanet
  2525A Wexford Run Rd.,  Wexford, PA  15090
  Senior scientist at Harris Semiconductor, and adjunct professor at CMU.
  I don't speak for them, and they don't speak for me.
koopman@a.gp.cs.cmu.edu (Philip Koopman) (09/25/90)
In article <3287@mindlink.UUCP>, a684@mindlink.UUCP (Nick Janow) writes:

> As far as I can see, the main problem in parallelism is the efficient
> communication of data and instructions between processors.  Stacks are an
> efficient way of passing data between words (tasks, subroutines,
> whatever); perhaps there's some way of passing data and/or instructions
> between processors using stacks.

I guess I'm thinking more about fine-grain parallelism, where a single
stack processor can have multiple active instructions (superscalar).
Stacks as information transfer mechanisms start getting more into
interconnection than processor design (but that's an important design
consideration too).

> Breakthroughs often come about by keeping several previously unrelated
> ideas in your thoughts while pondering a problem.

I agree completely.

  -- Phil