[comp.lang.c++] Virtual Functions Across Processors

mra@srchtec.uucp (Michael Almond) (01/14/91)

We are using AT&T's C++ 2.0 on Sun 3's and 4's to compile software for loading
onto several 68030 processor boards.  The boards are running VxWorks version 4,
which has been adapted to allow shared memory across processors.

If you create an object on one board and try to call a virtual method for
that instance from another board, then the virtual table on the original
processor is used to resolve the address.  Since one processor cannot call
a function on another processor, this results in an op code error.
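To make the failure concrete, here is a stripped-down sketch of what we
think is happening.  The class and values below are made up, and reading
the vtable pointer out of the object like this is of course compiler- and
ABI-specific; the point is only where the hidden pointer ends up.

    #include <cstdio>

    // Hypothetical class standing in for whatever object we place in
    // the shared-memory region; only the virtual function matters.
    struct Sensor {
        virtual int read() { return 42; }
    };

    int main()
    {
        // "Board A" constructs the object.  The compiler silently
        // stores a vtable pointer in the object, and that pointer
        // refers to A's own copy of Sensor's vtable -- an address
        // inside A's loaded image.
        Sensor *s = new Sensor;

        // A virtual call compiles to roughly: fetch the vptr from *s,
        // index the table, jump through the slot.  If *s lives in
        // shared memory and "Board B" performs this call, the fetched
        // address still belongs to A's image; on B it is not a valid
        // function, hence the op code error we see.
        std::printf("vptr stored in the object: %p\n", *(void **)s);
        std::printf("virtual call on the creating board: %d\n", s->read());

        delete s;
        return 0;
    }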

Is there any way to have a given instance's vtable pointer point to the
local vtable rather than the one on the other processor, or is there another
solution to the problem?

BTW, we are in the process of converting to g++ and, of course, we love it.

Help!!


---
Michael R. Almond (Georgia Tech Alumnus)           mra@srchtec.uucp (registered)
search technology, inc.				      mra%srchtec@salestech.com
4725 peachtree corners cir., suite 200		       emory!stiatl!srchtec!mra
norcross, georgia 30092					 (404) 441-1457 (office)
[search]: Systems Engineering Approaches to Research and Development

rfg@NCD.COM (Ron Guilmette) (01/20/91)

In article <414@srchtec.UUCP> mra@srchtec.uucp (Michael Almond) writes:
>We are using AT&T's C++ 2.0 on Sun 3's and 4's to compile software for loading
>onto several 68030 processor boards.  The boards are running VxWorks version 4,
>which has been adapted to allow shared memory across processors.
>
>If you create an object on one board and try to call a virtual method for
>that instance from another board, then the virtual table on the original
>processor is used to resolve the address.  Since one processor cannot call
>a function on another processor, this results in an op code error.
>
>Is there any way to have a given instance's vtable pointer point to the
>local vtable rather than the one on the other processor, or is there another
>solution to the problem?

Hummm.. this sounds rather familiar.

If I understand you correctly, you have a system in which there is more
than one logical address space.  Is that correct?  In other words, do the
processes executing on one processor get access to all of the same physical
locations (at all of the same logical addresses) as processes running on
other processors, or not?

If not, then you have a (multiprocessor) system which has multiple
logical address spaces.

The bad news is that if you do have multiple logical address spaces,
and if you are trying to coordinate (and communicate between) various
processes which may each have their own ideas about their own address
spaces, and if you are trying to do this in C++... well then your
life will be complicated (but not impossible).

Long long ago (in a galaxy not far away) I succeeded Michael Tiemann
(of g++ infamy) as the compiler hacker for a project group at MCC
(called the ES-kit project) where the project goals included getting
numerous cooperating processes (all written in C++) to do marvelous
things on a loosely-coupled multi-processor.  The way our system worked,
processes running on one processor had address spaces which were totally
unrelated to the address spaces of processes running on other processors.
Thus, you couldn't pass a char* (or any other type of pointer) from one
processor to another and have it still be meaningful.

(We did have memory-mapping units with each CPU, so we could have overcome
some of our problems by trying to maintain some sort of `unified' address
space over all processes and all processors, but we never did that, although
it was debated at length down at the local watering-hole.)

Anyway, the approach used on the ES-kit project (which was invented by
Michael and others before I arrived) was to take the view that each C++
object resided on a particular processor (and thus in a particular address
space).  Whenever some process (on the same processor "node" or on a
different one) needed to invoke a member function for that object, some
magic (which was implemented via a cooperation between the g++ compiler
and our operating system kernel) would effectively forward this "request"
on to the "node" where the object lived, and the kernel on that node
would turn this incoming request into a new process constructed so that it
would simply invoke the given member function for the given object and then
exit back to the local kernel.  The local kernel would
(of course) be responsible for collecting any return value and sending
it back to the "calling" process (often on a different "node").
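For flavor, here is roughly what such a forwarding proxy looks like if you
write one by hand as a library.  This is NOT the actual ES-kit code (the
real thing was generated by the compiler and serviced by our kernel), and
send_request, wait_reply, and the method tag below are made-up stand-ins
for the kernel's message-passing interface.

    // Hypothetical kernel interface -- stand-ins only, not ES-kit's.
    struct Message { int node; int object_id; int method_id; int arg; };
    extern void send_request(const Message &m);  // forward to owning node
    extern int  wait_reply(int object_id);       // block for the result

    // A hand-written proxy for an object that lives on another node.
    // The compiler emitted something morally equivalent to this for
    // every member-function call on a remote object.
    class RemoteCounter {
        int node_;       // which processor board owns the real object
        int object_id_;  // identifies the object within that node
    public:
        RemoteCounter(int node, int id) : node_(node), object_id_(id) {}

        // "Calling" the member function really means shipping a request
        // to the owning node; its kernel spins up a short-lived process
        // that performs the call locally and mails the return value back.
        int add(int n) {
            Message m;
            m.node = node_;
            m.object_id = object_id_;
            m.method_id = 1;       // hypothetical tag for Counter::add
            m.arg = n;
            send_request(m);
            return wait_reply(object_id_);   // synchronous version
        }
    };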

All this was working (thanks to Michael and others) by the time I joined
the project (so I can take virtually no credit for any of it :-( ).  I did
however get this scheme to also work for remotely-invoked constructors
(which were a bit more complicated than the remotely-invoked regular
member functions), and I thunk up a (clever?) way to use type-conversion
functions to help us asynchronously retrieve returned results.
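To give the flavor of the conversion-function trick (again with made-up
names, not the real ES-kit interface): the remote call hands back a small
`future' object right away, and it is the type-conversion operator back to
the result type that does the waiting.

    // Hypothetical kernel calls, same caveat as above.
    extern bool reply_ready(int ticket);
    extern int  collect_reply(int ticket);

    class FutureInt {
        int ticket_;   // identifies the outstanding remote request
    public:
        FutureInt(int ticket) : ticket_(ticket) {}
        bool ready() const { return reply_ready(ticket_); }

        // The type-conversion function: using the future where an int
        // is expected is what forces synchronization with the remote call.
        operator int() const { return collect_reply(ticket_); }
    };

    // Usage: the caller keeps computing until it actually touches the value.
    //   FutureInt f = remote.add_async(5);   // returns immediately
    //   do_other_work();
    //   int total = f;                       // conversion blocks here if needed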

Anyway, this whole scheme worked well, and it helped us to exploit the
parallelism inherent in our hardware in a way that felt quite natural
(to a C++ programmer).  Unfortunately, implementing this system did
require some extensions to "standard" C++ as well as the construction
of our own specialized message-passing kernel, but that was a small price
to pay for what we got.

Numerous papers are available (from MCC) which describe the ES-kit
environment in more detail.  Send me E-mail if you want the address
of somebody there who could send their tech-reports to you.  Also,
the entire ES-kit system (both hardware and software) is available
(I believe) for commercial licensing.

-- 

// Ron Guilmette  -  C++ Entomologist
// Internet: rfg@ncd.com      uucp: ...uunet!lupine!rfg
// Motto:  If it sticks, force it.  If it breaks, it needed replacing anyway.