[comp.protocols.tcp-ip] Off loading the protocol

farber@CIS.UPENN.EDU (David Farber) (03/18/88)

The following is abstracted from a note on our experiences with a line
of research aimed at examining a radically different approach to "off
loading" designed for the very high speed networking era -- "Gigabit
networking". I would be happy to supply the full text and/or MEMNet
documents to any interested people.


             Some Thoughts on the Impact of Very High Speed Networking on
                                Processor Interfaces

......

          Approach

            This note proposes a completely different view of computer
          networking, a view which derives from experiments started in my
          group at the University of Delaware (and continuing at the
          University of Pennsylvania) resulting in the creation of a novel
          local network architecture called "MEMNet." MEMNet is a research
          system aimed at exploring ways of removing the severe processing
          overhead found in distributed operating systems. The approach
          MEMNet takes is to treat the network as a mechanism which allows
          a processor to access the collective memory space of the
          distributed system. Thus, when a processor in a MEMNet
          environment needs to send data via the high-speed local network,
          it simply writes to memory addresses which are in the memory
          space of the recipient processor. Similarly, the recipient
          processor, when it chooses to examine data which has been "sent"
          by another processor, reads its local memory (or physically
          remote memory, in a hierarchical memory system) simply by the
          normal memory access mechanisms of that processor. In the MEMNet
          environment, there is a set of special memory controllers with
          adequate caching, connected together via a high-speed (200
          megabit) ring. The caching provides a mechanism equivalent to the
          snooping caches of modern multiprocessors.
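          The send-is-a-write idea above can be illustrated with a toy
          model. This is only a sketch of the addressing scheme, not of
          the MEMNet hardware: the class, page size, and processor layout
          below are all invented for illustration, whereas the real
          system used dedicated memory controllers on a 200 megabit ring.

```python
# Toy model of the MEMNet idea: the "network" is one collective
# address space, and sending data is just a memory write into the
# region owned by the recipient processor.

PAGE = 256  # bytes of collective memory owned by each processor (illustrative)

class CollectiveMemory:
    def __init__(self, n_processors):
        # One flat byte array stands in for the distributed memory space.
        self.mem = bytearray(n_processors * PAGE)

    def base(self, proc_id):
        # Each processor owns one page of the shared space.
        return proc_id * PAGE

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data

    def read(self, addr, length):
        return bytes(self.mem[addr:addr + length])

net = CollectiveMemory(n_processors=4)

# Processor 0 "sends" to processor 2 by writing into 2's page...
msg = b"hello"
net.write(net.base(2), msg)

# ...and processor 2 "receives" by an ordinary memory read.
received = net.read(net.base(2), len(msg))
```

          Note that no protocol processing appears anywhere in the
          transfer; in MEMNet the caching controllers, not software,
          keep the copies coherent.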

            During this research, we examined the architecture of software
          systems which would run in a MEMNet environment. Much to our
          surprise (although we should not have been surprised), the
          software implications of such a distributed environment are
          essentially non-existent. That is, a software system written to
          run on the fully distributed MEMNet environment is, in
          essentially all respects, identical to the same software system
          designed to run on a simple multiprocessor, shared-memory
          environment.

            The issues one must face in designing the architecture of
          future wide-area, high-speed networks ......
          
            ......

            In summary, this note suggests that we reexamine what we mean
          by "networking" in the future. It essentially suggests that
          networking is simply a special case of interprocess communication
          over a widely-distributed computer system, and thus can take
          advantage of technology already developed.

          
_________________________________________________________________________


---------------
David J. Farber; Prof. of CS and EE, Director - Distributed Systems Labs.
University of Pennsylvania/200 South 33rd Street/Philadelphia, PA  19104-6389
Tele: 215-898-9508; FAX: 215-274-8192 

Alessandro.Forin@SPEECH2.CS.CMU.EDU (03/18/88)

Work in the area of networked shared memory is progressing quite rapidly:
It is not just a crazy thought.
The following is a short list of references to recent work that is relevant
to the subject.  I'll be glad to receive notice of any other activity
in the area by people in this list.

1.    Bisiani, R. and Forin, A., ``Architectural Support for
Multilanguage Parallel Programming on Heterogeneous Systems'', 2nd
International Conference on Architectural Support for Programming
Languages and Operating Systems, IEEE, Palo Alto, October 1987, pp. 21-30.

2.    Cheriton, D.R., ``Problem-oriented Shared Memory: A Decentralized
Approach to Distributed System Design'', Proc. of the 6th Intl.
Conf. on Distributed Computing Systems, May 1986, pp. 190-197.

3.    Kai Li and Paul Hudak, ``Memory Coherence in Shared Virtual
Memory Systems'', Proceedings of the Fifth Annual Symposium on
Principles of Distributed Computing, ACM, 1986, pp. 229-239.

4.    Krishnaswamy, Ahuja, Gelernter, Carriero, ``Progress Towards a
Linda Machine'', Proc. ICCD, IEEE-CS and IEEE Circuits, October 1986,
pp. 97-101.

5.    Ramachandran, U., Ahamad, M., Khalidi, M., ``Hardware Support
for Distributed Shared Memory'', Report GIT-ICS-87/41, Georgia Tech,
November 1987.

6.    Rashid, R. et al., ``Machine-Independent Virtual Memory Management
for Paged Uniprocessor and Multiprocessor Architectures'', 2nd
International Conference on Architectural Support for Programming
Languages and Operating Systems, IEEE, Palo Alto, October 1987, pp.
31-39.

7.    Wendorf, J. and Tokuda, H., ``An Interprocess Communication
Processor: Exploiting OS/Application Concurrency'', Tech. report
CMU-CS-87-152, Carnegie-Mellon University, March 1987.

8.    Zayas, E.R., The Use of Copy-on-reference in a Process Migration
System, PhD dissertation, Carnegie-Mellon, January 1987.
-----------------------------------------------------------------------------
Alessandro Forin, Computer Science Dept., Carnegie-Mellon University,
5000 Forbes, Pittsburgh PA, 15213.
ARPA: af@cs.cmu.edu
Phone: (412) 268-2569

auerbach@hercules.csl.sri.com (Karl Auerbach) (03/20/88)

Just curious: How does the shared memory paradigm handle the case where
the machines are of different memory architectures with different
data representations?

PS -- I would like a copy of the full article.

				Thanks, --karl--

Alessandro.Forin@SPEECH2.CS.CMU.EDU (03/28/88)

> Date: 19 Mar 88 18:18:09 GMT
> From: csl!hercules!auerbach@spam.istc.sri.com  (Karl Auerbach)
> Subject: Re: Off loading the protocol
> Sender: tcp-ip-request@sri-nic.arpa
> To: tcp-ip@sri-nic.arpa
>
> Just curious: How does the shared memory paradigm handle the case where
> the machines are of different memory architectures with different
> data representations?

As far as I know, my system (Agora) is the only one that specifically
addresses your question.  A description of our work will appear in the
August issue of IEEE Trans. Computers, and you can also find a paper
in the proceedings of the ASPLOS-II conference (it is in the list I
posted earlier to the tcp-ip list).

The answer is basically to keep data type descriptors around at
runtime, so that you can translate a datum coming from an
incompatible machine into the appropriate local representation.

This is much the same problem you have with the description of data
inside messages in any remote procedure call package (Sun, Apollo,
Xerox, Mach-MiG, etc.).  It is in no way specific to the shared memory
paradigm.
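
The runtime-descriptor idea can be sketched in a few lines. The
descriptor format here (a byte-layout string in the style of Python's
`struct` module) is purely illustrative and is not Agora's actual
descriptor scheme, which is described in the papers cited above; the
point is only that a descriptor carried with the datum lets the
receiver re-encode it from the sender's representation.

```python
# Sketch: translate a datum between machines with different byte orders,
# using a runtime type descriptor carried along with the raw bytes.
import struct

def translate(raw, descriptor, sender_big_endian, local_big_endian):
    """Re-encode a datum from the sender's byte order to the local one.

    descriptor is a struct format string without a byte-order prefix,
    e.g. "i f" for an int32 followed by a float32.
    """
    src = (">" if sender_big_endian else "<") + descriptor
    dst = (">" if local_big_endian else "<") + descriptor
    # Decode using the sender's representation, re-encode in ours.
    return struct.pack(dst, *struct.unpack(src, raw))

# A big-endian machine shares the record (int32=1, float32=2.0):
raw = struct.pack(">i f", 1, 2.0)

# A little-endian receiver consults the descriptor and converts:
local = translate(raw, "i f", sender_big_endian=True, local_big_endian=False)
value = struct.unpack("<i f", local)
```

A full system must of course also handle sizes, alignment, and
structured types, which is exactly the marshalling problem the RPC
packages above solve for messages.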

Sorry for the delay in answering...
sandro-