dave@wrs.wrs.com (David N. Wilner) (10/02/89)
There has been considerable discussion recently in this newsgroup about multi-processor support with VxWorks. As the technical director at Wind River, I wanted to respond to clear up some potential misunderstandings and to let people know about work we have in progress.

VxWorks multi-processor support is currently based on a low-level packet transmission protocol that uses shared memory on a backplane, test-and-set for interlocking access by multiple CPUs, and interprocessor interrupts for notification of the destination CPU. This low-level protocol is a very efficient means of getting data from one CPU to another.

The low-level protocol is implemented in the form of a network interface driver (called "bp" for "backplane") that fits into the VxWorks networking suite. This makes all of the higher-level protocols supported by VxWorks available over the backplane to each of the processors in a multi-processing configuration, in exactly the same way they are available over an ethernet connection. These include TCP/IP for interprocess communication, rlogin and telnet for remote login, ftp and nfs for remote file access, remote procedure calls, and remote source debugging.

One of the things that these protocols buy you is generality and network transparency. For example, TCP/IP sockets can be used identically for communication between processes on the same CPU, on different CPUs on the same backplane, on different CPUs over ethernet, and on different CPUs running different operating systems (VxWorks, Unix, VMS). Applications can be reconfigured, moving processes to different CPUs, by changing no more than a host name or address.

The higher-level protocols do, of course, add some overhead, although not nearly as much as has been suggested. One poster said it took 200,000us to send a task-to-task message via TCP/IP sockets over the backplane! In fact, the actual time on a typical 68020 board is about 2000us.
It is true that in real-time systems it is sometimes necessary to sacrifice generality for performance, and that there are applications that can't afford the overhead of the higher-level protocols over the backplane. For this reason, hooks to the low-level "bp" protocol are supplied, so that the raw packet passing mechanism can be used without the overhead of the other protocols. See the description of etherLib(1). Despite the name, this mechanism works with the backplane interface as well as all the standard ethernet interfaces.

Furthermore, the VxWorks board support packages contain all the hooks necessary for multi-processing. These include:

    address translation:  sysLocalToBusAdrs, sysBusToLocalAdrs
    test-and-set:         sysBusTas
    bus interrupts:       sysBusIntGen, sysBusIntAck, sysIntDisable, sysIntEnable
    mailbox interrupts:   sysMailboxConnect, sysMailboxEnable
    processor numbers:    sysProcNumGet, sysProcNumSet

These functions are used by the backplane interface, but also allow users to define their own application-specific mechanisms.

We are currently working on improvements to the protocols, including optimizations to both the low-level backplane protocol and to TCP/IP. We are also working on higher-level models that will give global access to system objects including tasks, semaphores, message queues, etc., with very efficient, low-overhead mechanisms. We recognize the importance of these features to application builders.

In the meantime, I would encourage any of our customers who have done work in this area to submit their work to the VxWorks User's Group archive. We will be sure to peruse any submissions and will try to incorporate good ideas into our work. More information on the archive can be obtained by sending the message "send index" to netlib@thor.ucar.edu, or by emailing Richard Neitzel at the National Center for Atmospheric Research at thor@thor.ucar.edu.

Further inquiries or discussion can be addressed directly to me. Thanks for all the interest and comments.
David Wilner
Director of Engineering
Wind River Systems
sun!wrs!dave
415-428-2623