dave@enmasse.UUCP (Dave Brownell) (09/23/85)
A while ago, I saw some trade press announcements about AT&T providing
a "streams" interface for networking sometime early '86. More recently
I've seen press about a working network file system (distinct from NFS)
using "streams".

My question is -- what are they? Can anyone direct me to accurate
descriptions of interfaces, functionality, etc.? (Are they out yet?)
I've seen the October 1984 BSTJ, with an article by Dennis Ritchie about
them (focused on terminal operations). Do they provide the same
functionality that Berkeley sockets do? Is there any hot gossip?
-- 
David Brownell
EnMasse Computer Corp
...!{harvard,talcott,genrad}!enmasse!dave
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (09/25/85)
> A while ago, I saw some trade press announcements about AT&T providing
> a "streams" interface for networking sometime early '86. More recently
> I've seen press about a working network file system (distinct from NFS)
> using "streams".
>
> My question is -- what are they? Can anyone direct me to accurate
> descriptions of interfaces, functionality, etc.? (Are they out yet?)
> I've seen the October 1984 BSTJ, with an article by Dennis Ritchie about
> them (focused on terminal operations). Do they provide the same
> functionality that Berkeley sockets do? Is there any hot gossip?

So read the article!

Streams are different from sockets and more generally useful.

Rumor has it that UNIX System V Release 3 (perhaps available Jan. 1986)
will include stream i/o, but only in support of networking and not in
place of other character i/o. That's too bad; terminal handling in
particular can benefit greatly from stream i/o. If anyone wants to
spread a more accurate rumor, please do so.
fred@mot.UUCP (Fred Christiansen) (09/26/85)
Lawrence Bump (attunix!bump) was scheduled to give a paper on streams
and AT&T's network file system at Usenix. Did it happen?
-- 
<< Generic disclaimer >>
Fred Christiansen ("Canajun, eh?") @ Motorola Microsystems, Tempe, AZ
UUCP:  {seismo!terak, trwrb!flkvax, utzoo!mnetor, ihnp4!btlunix}!mot!fred
ARPA:  oakhill!mot!fred@ut-sally.ARPA
AT&T:  602-438-3472
santosh@cheviot.uucp (Santosh Shrivastava) (10/03/85)
In article <1699@brl-tgr.ARPA> gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) writes:
>> A while ago, I saw some trade press announcements about AT&T providing
>> a "streams" interface for networking sometime early '86. More recently
>> I've seen press about a working network file system (distinct from NFS)
>> using "streams".
>>
>> My question is -- what are they? Can anyone direct me to accurate
>> descriptions of interfaces, functionality, etc.? (Are they out yet?)
>> I've seen the October 1984 BSTJ, with an article by Dennis Ritchie about
>> them (focused on terminal operations). Do they provide the same
>> functionality that Berkeley sockets do? Is there any hot gossip?
>
>So read the article!
>
>Streams are different from sockets and more generally useful.
>

Streams imply connections! There are many applications that can be
adequately handled by connectionless datagrams. I reckon there will always
be a need for interfaces supporting both streams and datagrams, and
in this respect Berkeley sockets are superior. Streams are good mainly
for terminal handling (as in V8), but to base your entire networking
on them is surely a bad idea.
sjl@amdahl.UUCP (Steve Langdon) (10/06/85)
In article <499@cheviot.uucp> Santosh Shrivastava writes:
>
> Streams imply connections! There are many applications that can be
> adequately handled by connectionless datagrams. I reckon there will always
> be a need for interfaces supporting both streams and datagrams, and
> in this respect Berkeley sockets are superior. Streams are good mainly
> for terminal handling (as in V8) but to base your entire networking
> on them is surely a bad idea.

I agree that there is a need for both connection-mode and connectionless
communication. However, it is a mistake to assume that this is an issue
which directly affects the choice between streams and sockets.

The functionality missing in streams (as described in Ritchie's paper)
is multiplexing. Without multiplexing you cannot implement kernel-resident
versions of any of the major protocol suites (TCP/IP, OSI, etc.). If a
way can be found to add multiplexing to streams, then either connection-mode
or connectionless service should be possible using streams.
-- 
Stephen J. Langdon                  ...!{ihnp4,cbosgd,hplabs,sun}!amdahl!sjl

[ The article above is not an official statement from any organization
  in the known universe. ]
steveg@hammer.UUCP (Steve Glaser) (10/07/85)
In article <449@cheviot.uucp> santosh@cheviot.UUCP (Santosh Shrivastava) writes:
>
>Streams imply connections! There are many applications that can be
>adequately handled by connectionless datagrams. I reckon there will always
>be a need for interfaces supporting both streams and datagrams, and
>in this respect Berkeley sockets are superior. Streams are good mainly
>for terminal handling (as in V8) but to base your entire networking
>on them is surely a bad idea.

Wrong...

Streams do imply a connection, but the key is "What is the connection
to?" In the case of datagram services, the connection is to a "datagram
transport layer". As long as you send the address along with the data,
things work just like sockets do.

Sockets are very much like streams. The main advantages of streams over
sockets are:

	1. the same interface *everywhere* between stream processing
	   modules (4.2 sockets have 3 different internal interfaces:
	   (a) between a driver and an "ip" layer, (b) between a protocol
	   and the socket layer, and (c) between protocol layers).

	2. better internal buffering primitives.

	3. defined rules on how processing modules interact; in
	   particular the rules describe what you can do in a processing
	   module and seem to allow easier migration of functionality
	   into front-end processors.

	4. integration with the terminal subsystem (at least in V8).

	5. less wired in knowledge of TCP/IP - potential to support
	   other protocols easier (yeah I know that 4.2 tried to solve
	   this problem, but they got a few things wrong here).

Advantages the other way include:

	1. sockets that are embedded in the file system name space
	   (but you can't just open(2) them though).

	2. the ability to pass file descriptors through unix domain
	   sockets (this *may* make it into AT&T).

	3. they exist, they work, people are using them.

Summary:

	streams are better architecturally, but they don't exist
	(at least in a form that mortals can get hold of)

	sockets are available, and do much of what is needed, but
	lack elegance.

	Steve Glaser
	Tektronix Inc (at least for another week)
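To make the "send the address along with the data" point concrete, here
is what that looks like through the 4.2 socket interface. This is only
an illustrative sketch; the port, address, and (missing) error handling
are arbitrary choices for the example:

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	int
	send_one_datagram()
	{
		int s;
		struct sockaddr_in to;
		static char msg[] = "hello";

		if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0)	/* UDP endpoint */
			return (-1);

		bzero((char *)&to, sizeof to);
		to.sin_family = AF_INET;
		to.sin_port = htons(7);				/* echo port, for example */
		to.sin_addr.s_addr = htonl(0x7f000001);		/* 127.0.0.1 */

		/* the destination address travels with each datagram */
		return (sendto(s, msg, sizeof msg - 1, 0,
		    (struct sockaddr *)&to, sizeof to));
	}

A stream to a "datagram transport layer" would carry the same
(address, data) pair down the stream instead of through a sendto() call.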
dhp@ihnp3.UUCP (Douglas H. Price) (10/07/85)
The modality of STREAMs does not preclude datagram services. STREAMs are
useful as a way for a UNIX process to view a communications device and a
(possibly) associated set of protocols in a device-independent manner.
Let me give you a for-instance:

Let's say you implement a STREAM module that simply guarantees that any
block of data you give it will be transmitted immediately with no
guarantee of safe arrival. This would be a send-and-pray datagram
service, even though it was implemented in STREAMs. Let's add a
guaranteed end-to-end delivery service STREAM module, stacked on top of
the send-and-pray service. Now we have a fast-select datagram service.
Further, let's stack a sequencing and non-duplication service STREAM
module on top of the fast-select service. Voila, we have a basic
virtual circuit.

There is no requirement that STREAMs must implement a virtual circuit.
That is merely the service that has been discussed most frequently.
Note also that STREAMs permit a bottom-up definition of just the grade
of service that the application requires. You don't have to buy the
whole nine yards if you don't want or need it.
-- 
						Douglas H. Price
						Analysts International Corp.
						@ AT&T Bell Laboratories
						..!ihnp4!ihnp3!dhp
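From a user process, the stacking described above might look like the
following sketch. It assumes a System V-style STREAMS interface
(<stropts.h> and the I_PUSH ioctl); the module names "sap"
(send-and-pray), "e2e" (end-to-end delivery), and "seq"
(sequencing/non-duplication), and the device name, are all made up for
illustration:

	#include <fcntl.h>
	#include <stropts.h>

	int
	open_virtual_circuit(dev)
	char *dev;			/* e.g. "/dev/net0" (hypothetical) */
	{
		int fd;

		if ((fd = open(dev, O_RDWR)) < 0)
			return (-1);

		/* each push adds one grade of service on top of the last */
		if (ioctl(fd, I_PUSH, "sap") < 0 ||	/* send-and-pray datagrams */
		    ioctl(fd, I_PUSH, "e2e") < 0 ||	/* guaranteed delivery */
		    ioctl(fd, I_PUSH, "seq") < 0) {	/* sequencing, no duplicates */
			close(fd);
			return (-1);
		}
		return (fd);	/* reads/writes now see a basic virtual circuit */
	}

An application that only wanted the fast-select datagram service would
simply stop pushing after "e2e" - the bottom-up choice of grade of
service the article describes.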
buz@umich.UUCP (Greg Buzzard) (10/08/85)
Can anybody provide a summary of all the *different* contexts in which
the term "stream", or "stream service", etc. is used? I suspect that
"stream", "stream service", etc. probably fit into that class of terms
that are sufficiently overused so as to have several (well, at least
two) different context-dependent meanings.

For instance, I generally consider "stream service" to imply a protocol
that (unless explicitly stated otherwise) is end-to-end reliable (i.e.,
guarantees delivery, eliminates duplicates, and ...) and appears as a
uni-directional device interface (such as a printer, or sensor) which
cannot be backspaced or reread and for which no explicit synchronization
exists.

Is this inconsistent with the standard Unix(?) concept of STREAM?
buz@umich.UUCP (Greg Buzzard) (10/08/85)
Let me "edit" my last response here (our news interface wouldn't let me cancel and change my own article before it got out on the net). I meant only to state that my "common" interpretation of STREAMs is close but not totally consistent to some of the interpretations which I have recently seen posted. I wasn't intending to solicit any specific response. Greg Buzzard ihnp4!umich!buz
guy@sun.uucp (Guy Harris) (10/09/85)
> Sockets are very much like streams.

Actually, sockets are a descriptor-level interface to a networking
implementation, and streams are a mechanism for connecting various
device drivers and protocol layers. One could, presumably, replace the
current 4.2BSD protocol-protocol and protocol-"interface" mechanism with
a stream mechanism and leave the current networking system calls in
place.

The papers on streams are rather silent on how you do anything other
than reading from or writing to a stream. One might infer that you open
"/dev/ec0", push an IP stream processing module on top of it, and a TCP
or UDP stream processing module on top of that, but:

	1) this doesn't win if you have multiple network interfaces, are
	   trying to connect to a host not on a network that you're on,
	   and want the route to that host to be determined dynamically
	   on an IP-datagram-by-IP-datagram basis (I suppose you could
	   have it tear down all stream connections and rebuild them if
	   the route changes, but that seems messy).

	2) this also is a mess if somebody else is using the same
	   Ethernet interface - if the Ethernet driver is talking to two
	   higher-level protocol modules, how does it know which one to
	   route which packets to? (We assume here that there is one
	   instance of the IP stream processing module per TCP or UDP
	   stream.)

It seems fairly clear that if TCP/UDP/IP or a similar protocol suite is
implemented using the streams mechanism, it isn't done this way.
Unfortunately, I've seen nothing to indicate how it *is* done.
Presumably, instantiations of stream processing modules can have more
than one upstream and one downstream module. In a system with
TCP/UDP/IP, for instance, there would be one instantiation of the IP
module. It would have multiple network interface drivers downstream of
it, and packets would get put on the queues of the downstream modules
based on the route to the host the packet is intended for. Upstream of
the IP module, there might be one instantiation of the TCP module and
one instantiation of the UDP module, and the IP module would put packets
on the queue of one or the other based on the protocol type;
alternatively, there might be one instantiation of the TCP or UDP module
per active TCP or UDP file table entry.

> The main advantages of streams over sockets are:
>	...
>	5. less wired in knowledge of TCP/IP - potential to support
>	   other protocols easier (yeah I know that 4.2 tried to solve
>	   this problem, but they got a few things wrong here)

Presumably, 4.3 fixed this to support XNS. In a streams-based mechanism
as described above, the routing code used by the IP module could be as
full of IP dependencies as the 4.2 routing code.

> Advantages the other way include:
>
>	1. sockets that are embedded in the file system name space
>	   (but you can't just open(2) them though).
>
>	2. the ability to pass file descriptors through unix domain
>	   sockets (this *may* make it into AT&T).

The paper on "Interprocess Communication in the Eighth Edition Unix
System", by D. L. Presotto and DMR, given at the Portland USENIX,
describes mechanisms to provide both these capabilities - "mounted
streams" to attach file system name space names to streams (and you
*can* open them), and "ioctl" operations to pass file descriptors over
streams (although the paper is silent on what it means to pass a file
descriptor to a TOPS-20 system on the other end of a TCP connection).

	Guy Harris
bc@cyb-eng.UUCP (Bill Crews) (10/09/85)
> There is no requirement that STREAMs must implement a virtual circuit.
> --
> 						Douglas H. Price

This seems to be a disagreement upon terminology.

If one must set up a "session" or "virtual circuit" or whatever before
being able to send a packet to the other end, then it is not a datagram
service, even though one may parcel his byte stream into logical units
that one might be tempted to call "datagrams". If a circuit must be
established, it must also be torn down. There is state information that
must be considered when either party vanishes. Depending upon the
implementation, the reestablishment of a virtual circuit may or may not
have a problem due to the continued existence of an existing one that
has yet to be properly shut down.

Could you please state YOUR definition of a stream and of a virtual
circuit?
-- 
  / \	Bill Crews
 ( bc )	Cyb Systems, Inc
  \__/	Austin, Texas

[ gatech | ihnp4 | nbires | seismo | ucbvax ] ! ut-sally ! cyb-eng ! bc
bc@cyb-eng.UUCP (Bill Crews) (10/09/85)
> streams are better architecturally, but they don't exist
> (at least in a form that mortals can get hold of)
>
> sockets are available, and do much of what is needed, but
> lack elegance.
>
> 	Steve Glaser

At the risk of being overly simplistic . . .

Telephones are great! I can call someone specific, talk to him/her
without keying a mike each time or any other such bother, no one else
can hear what we say (:-) . . . They are great!

BUT . . . imagine having a CB radio in your car and driving down the
highway. You want to monitor the Smoky reports, and you may want to
report some Smokies of your own. Now, imagine what it would be like if
the ONLY thing allowed on channel 19 is a request for a specific other
CB station to switch with you to a private channel, where you can
converse and then switch back to channel 19. It would totally destroy
the intended use of the CB service.

The point is that, in the real world, there are applications where
circuits are clearly best AND there are those where datagrams are
clearly best. If one tries to shoehorn one into the other, our ability
to model the real world is significantly diminished.

I tend to be a datagram service advocate -- but ONLY because there seem
to be so many people out there who have no personal need for anything
other than circuit networking and who therefore want to abolish datagram
service or otherwise reduce it to klugedom. WHY??? The need is real;
circuits don't suffice; let datagrams co-exist with circuits!
-- 
  / \	Bill Crews
 ( bc )	Cyb Systems, Inc
  \__/	Austin, Texas

[ gatech | ihnp4 | nbires | seismo | ucbvax ] ! ut-sally ! cyb-eng ! bc
guy@sun.uucp (Guy Harris) (10/10/85)
> alternatively, there might be one instantiation of the TCP or UDP module
> per active TCP or UDP file table entry.

No. IP can't tell which instantiation of the TCP/UDP module to hand the
packet to, since it doesn't know what a TCP/UDP header looks like and
can't dig the port number out of the header. "Never mind."

	Guy Harris
rld@uel (Bob Duncanson) (10/10/85)
In article <2084@amdahl.UUCP> Stephen Langdon writes:
> The functionality missing in streams (as described in Ritchie's paper)
> is multiplexing. Without multiplexing you cannot implement kernel-resident
> versions of any of the major protocol suites (TCP/IP, OSI, etc.). If a
> way can be found to add multiplexing to streams, then either connection-mode
> or connectionless service should be possible using streams.

I am sure Stephen already knows that streams (as implemented for System V)
do include the capability of multiplexing drivers, and of cascading the
connections of such drivers in arbitrarily useful (or complex) ways.

I disagree with Santosh Shrivastava (article <499@cheviot.uucp>) that
streams (as the term is meant by Ritchie and AT&T) necessarily imply
only connection-oriented operation, in a way that is inferior to
sockets. Connection/non-connection orientation lies in how the mechanism
is used, in the same way that there are "stream sockets" and "datagram
sockets" (and "raw sockets").
-- 
Bob Duncanson			{mcvax!ukc!}uel!rld
Customary Disclaimer of Responsibility applies.
boyd@basser.oz (Boyd Roberts) (10/10/85)
At least there is some hope for the world. Steve Glaser has done well.

This discussion about streams and sockets is getting very dull. Streams
work, and they are streams. Berzerkeley sockets are a completely
different solution. Sockets are all wrong. You don't need that mess in
your kernel.

	"sockets are available, and do much of what is needed, but
	 lack elegance."

Dead right! We've got V8. We've used streams. Sockets are totally misguided.
But, they do fit in well with the Berzerkeley total confusion implementation
strategy. Networking in the kernel is all wrong. A sufficient level of support,
but no more. This "wired in" gore is just *NOT* the way to go.

Save us from 4.5BSD.
kre@munnari.OZ (Robert Elz) (10/11/85)
In article <455@basser.oz>, boyd@basser.oz (Boyd Roberts) writes: > Dead right! We've got V8. We've used streams. Sockets are totally misguided. > But, they do fit in well with the berzekeley total confusion implementation > strategy. Networking in the kernel is all wrong. A sufficent level of support, > but no more. This "wired in" gore is just *NOT* the way to go. Boyd is a little confused - that is, he is confusing sockets & the network code. There is no logical reason that sockets require networking code in the kernel, it just happens to be implemented that way on 4.[23]. Streams would allow network code to be implemented in the kernel as well. For example, consider Dennis Ritchie's recent article detailing the research implementation of tcp/ip using streams - from that I have no idea at all whether the net protocols are implemented in the kernel or in user processes. If you didn't already know how 4.[23] are implemented, you would not be able to tell from a description of the socket interface either. Berkeley decided (from fairly reliable reports) to put the protocol code in the kernel because they didn't feel that vax context switch time was fast enough to put it in user mode. That's an implementation decision, whether it was correct or not is something that could only really be answered by someone doing an implementation in user mode. From what I've heard of the performance of UNET I'd say they were right to put it in the kernel, but perhaps a user mode stream implementation would demonstrate that it can be adequately done that way. Now the difference (I believe, given limited knowledge of streams, as most of us just aren't fortunate enough to have ever seen one. Dennis, if you are reading this, and want to send me the code, I'm not going to object!) between sockets & streams is that streams imply names in the filesystem namespace - there has to be something that you can "open". Sockets don't have that requirement (I wanted to say "limitation" but I don't want to appear unnecessarily biased :-). So, probably misinterpreting from Dennis' article, my guess is that Dennis' rlogin would be implemented by having rlogin open /dev/tcpNNN for some NNN, and then doing the regular rlogin protocol (given that tcp is being done for you somewhere). The problem is, how do I find the NNN. Its possible that this is some kind of magic device, where two opens of the same name get two completely different connections, in which case all that would be required would be that there be one /dev/tcp000 and everyone would use that. Is this how its done? I suspect that its more likely that rlogin has to hunt for a free /dev/tcpNNN somehow though. The question is, why should rlogin care? It doesn't need to know any kind of name for its own end of the connection - this is exactly the abstraction that sockets provide. With a socket, you just "make one" (unnamed) then connect to wherever you want to go (which would also be necessary using the stream approach of course). There are other problems than rlogin just having to hunt. What if there aren't enough /dev/tcpNNN's for everyone who wants a connection now? Just mknod infinity of them? Seems a little wasteful. Conclusion - streams seem to be a great idea, but what I would like is the socket interface, and then the ability to push a line discipline (stream?) onto a socket, so I don't need a name for something I am never going to refer to. Incidentally, side issue, can someone comment on whether pipes are implemented using streams in 8th edition unix? 
If not, why not? The semantics required of a pipe seem to be all available with streams - deleting the old pipe special cases would certainly improve the elegance. (I don't suppose that I need to say that pipes are implemented using sockets in 4.[23]). Robert Elz seismo!munnari!kre kre%munnari.oz@seismo.css.gov
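For comparison, the socket sequence being described - "make one"
(unnamed), then connect to wherever you want to go - looks roughly like
this in 4.2BSD. A sketch only: the remote address is assumed to be
filled in elsewhere, and error handling is minimal:

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	int
	make_connection(addr)
	struct sockaddr_in *addr;	/* the remote end, filled in by the caller */
	{
		int s;

		/* "make one" - no name in the filesystem, no /dev/tcpNNN hunt */
		if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
			return (-1);

		/* then connect to wherever you want to go */
		if (connect(s, (struct sockaddr *)addr, sizeof *addr) < 0) {
			close(s);
			return (-1);
		}
		return (s);
	}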
steveg@hammer.UUCP (Steve Glaser) (10/11/85)
In article <2864@sun.uucp> guy@sun.uucp (Guy Harris) writes:
>> = me
> = Guy
>> The main advantages of streams over sockets are:
>>	...
>>	5. less wired in knowledge of TCP/IP - potential to support
>>	   other protocols easier (yeah I know that 4.2 tried to solve
>>	   this problem, but they got a few things wrong here)
>>	...
>Presumably, 4.3 fixed this to support XNS. In a streams-based mechanism as
>described above, the routing code used by the IP module could be as full of
>IP dependencies as the 4.2 routing code.

I think what Guy is referring to is the crocks in 4.2 where it wasn't
general enough about network addresses (like using an int for them...).

Actually, an example of what I had in mind here is the accept sequence
on a passive open. In 4.2 (TCP/IP) the syscall sequence in the server
looks something like:

	1. socket()   2. bind()   3. listen()   4. accept()

The accept blocks till a connection request is received. It returns a
new file descriptor representing the *open* connection. If the server
didn't really want to talk to somebody (say it only accepts connections
from specific users), it would have to close the connection. The client
side has now seen a "connection succeed" followed by a "close
connection".

In the ISO transport layer (ISO TP4), the server has the option of
rejecting the connection *before* the other end has seen its "connect()"
succeed. In addition, there is some optional "user data" that gets sent
in the connection request and connection accept/reject packets.

4.2 BSD does not support this cleanly. Yeah, I know you could kludge it
up and have your TP4 protocol module return a file descriptor that is
open as far as the socket layer is concerned, but refuses to work until
you do some magic ioctls to finish the accept (and similarly for connect
requests on the client).

As I understand what AT&T is doing (Summit, not research), they are just
recognizing the fact that putting the protocol state transition model
*exactly* at the system call level is wrong. They have a simple message
scheme to allow a user program to send a message directly to a protocol
module and use those messages to push the protocol module(s) through
their state machines. You can easily build a library that gives you the
functionality provided in the 4.2 kernel interface, but you haven't
constrained the system call interface to reflect only those protocols
that were around at the time you designed the syscall interface.

	Steve Glaser
	tektronix!steveg	(till 10/14/85)
	harvard!prime!steveg	(after 10/14/85)
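A bare-bones sketch of that server-side sequence in 4.2BSD terms; the
port number and backlog are arbitrary and most error checking is
omitted:

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	int
	serve_one()
	{
		int s, ns, fromlen;
		struct sockaddr_in sin, from;

		s = socket(AF_INET, SOCK_STREAM, 0);		/* 1. socket() */

		bzero((char *)&sin, sizeof sin);
		sin.sin_family = AF_INET;
		sin.sin_port = htons(1234);			/* arbitrary port */
		sin.sin_addr.s_addr = INADDR_ANY;
		bind(s, (struct sockaddr *)&sin, sizeof sin);	/* 2. bind() */

		listen(s, 5);					/* 3. listen() */

		fromlen = sizeof from;
		ns = accept(s, (struct sockaddr *)&from, &fromlen); /* 4. accept() */

		/*
		 * By the time accept() returns, the remote connect() has
		 * already succeeded; if we don't like "from", all we can
		 * do is close(ns) - which is exactly the complaint above
		 * about TP4-style early rejection.
		 */
		return (ns);
	}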
matt@oddjob.UUCP (Matt Crawford) (10/12/85)
In article <1554@hammer.UUCP> steveg@hammer.UUCP (Steve Glaser) writes:
>
>Actually, an example of what I had in mind here is the accept sequence
>on a passive open. In 4.2 (TCP/IP) the syscall sequence in the server
>looks something like:
>	1. socket()   2. bind()   3. listen()   4. accept()
>
>The accept blocks till a connection request is received. It returns a
>new file descriptor representing the *open* connection. If the server
>didn't really want to talk to somebody (say it only accepts connections
>from specific users), it would have to close the connection. The
>client side has now seen a "connection succeed" followed by a "close
>connection".
>
>In the ISO transport layer (ISO TP4), the server has the option of
>rejecting the connection *before* the other end has seen its
>"connect()" succeed. ...
>
>4.2 BSD does not support this cleanly. ...

I betcha two new ioctl's could give 4.2 the above functions. Do a
select() on a socket upon which you are listening, and when a connection
is available apply new ioctl #1 to peek at the address of the pending
connection at the head of the listening socket's so_q. (Or getpeername()
could be extended to provide this information.) If the process does not
want to accept, it can apply the second new ioctl to drop the pending
connection.

Note that this will be useless in the INET domain because the SYNs are
ACK'd in tcp_input() whether or not accept() has been called, so the
other side sees its connect() succeed as long as the passive side has
done a listen() and the queue of pending connections is not full.
_____________________________________________________
Matt		     University		crawford@anl-mcs.arpa
Crawford	     of Chicago		ihnp4!oddjob!matt
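As a sketch, the peek-then-drop scheme might be used like this.
select() and accept() are real 4.2 calls, but the two ioctls (given the
invented names SIOCPEEKCONN and SIOCDROPCONN here) and the acceptable()
policy routine are purely hypothetical:

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <sys/time.h>
	#include <sys/ioctl.h>
	#include <netinet/in.h>

	extern int acceptable();	/* caller-supplied policy (hypothetical) */

	int
	selective_accept(s)
	int s;				/* a listening socket */
	{
		fd_set rfds;
		struct sockaddr_in pending;

		FD_ZERO(&rfds);
		FD_SET(s, &rfds);
		/* wait until a connection is pending on the listening socket */
		if (select(s + 1, &rfds, (fd_set *)0, (fd_set *)0,
		    (struct timeval *)0) <= 0)
			return (-1);

		ioctl(s, SIOCPEEKCONN, &pending);	/* hypothetical: peek at so_q head */
		if (!acceptable(&pending)) {
			ioctl(s, SIOCDROPCONN, 0);	/* hypothetical: drop it unseen */
			return (-1);
		}
		return (accept(s, (struct sockaddr *)0, (int *)0));
	}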
guy@sun.uucp (Guy Harris) (10/12/85)
> > There is no requirement that STREAMs must implement a virtual circuit.
>
> This seems to be a disagreement upon terminology.

That's exactly what it is, and all that it is. The only thing that
"stream" as in "Dennis Ritchie's streams" and "stream" as in "4.2BSD
SOCK_STREAM socket" (i.e., virtual circuit) have in common is six
letters of the alphabet and the fact that they're both used for
networking.

A "stream" as in "Dennis Ritchie's streams" is a linear connection of
"stream processing modules" (yes, I know this sounds like a circular
definition, but treat "stream processing module" as an uninterpreted
token). The module at the "tail end" of the stream can be a UDP module.
The user writes data to a descriptor; the module at the "head end" of
the stream receives this data, manipulates it, passes it down to the
next module, etc., until it reaches the UDP module. The UDP module
decorates it with a UDP header and hands it to an IP module (this
connection isn't strictly a stream connection; see Dennis'
recently-posted article on multiplexing in streams), which decorates it
with an IP header, figures out the first hop in the route to the
destination, and hands it to the appropriate interface driver.

	Guy Harris
guy@sun.uucp (Guy Harris) (10/13/85)
> Dead right! We've got V8. We've used streams. Sockets are totally
> misguided. But, they do fit in well with the Berzerkeley total confusion
> implementation strategy. Networking in the kernel is all wrong. A
> sufficient level of support, but no more. This "wired in" gore is just
> *NOT* the way to go.

Could you please explain how 4.2BSD networking (for which "sockets" is a
poor term - the "socket" code is only the top layer of the 4.2BSD
networking code) is different from V8 networking (for which "streams"
may be a poor term, given that DMR's article described multiplexing as
not fitting strictly within the stream paradigm) in the way you
describe? According to DMR's description, TCP/UDP/IP is implemented in
V8 by a big module - within the kernel. All the stream buffering is done
in the kernel. What's left that's outside the kernel in V8 but inside
the kernel in 4.2BSD?

The main thing that the "socket" layer of the 4.2BSD code provides is
common code to handle what was considered to be the functionality needed
by all communications channels. I'll agree that it looks like there are
a number of things you can do better with streams than with Berkeley's
networking architecture, but it's not clear that this is a case of two
completely different approaches. Conceivably, one could implement all
the Berkeley networking system calls on top of a streams base. Would you
praise this as enlightened V8 networking or bash it as unenlightened
Berkeley networking?

A lot less Berkeley/USDL/Bell Labs Research/whoever bashing, and a lot
more careful analysis, would be useful.

	Guy Harris
boyd@basser.oz (Boyd Roberts) (10/13/85)
In article <972@munnari.OZ> kre@munnari.OZ (Robert Elz) writes:
>Boyd is a little confused - that is, he is confusing sockets & the
>network code.

Be real. I may have been drunk, but I wasn't confused.

>So, probably misinterpreting from Dennis' article, my guess is that
>Dennis' rlogin would be implemented by having rlogin open /dev/tcpNNN
>for some NNN, and then doing the regular rlogin protocol (given
>that tcp is being done for you somewhere). The problem is, how
>do I find the NNN? It's possible that this is some kind of magic
>device, where two opens of the same name get two completely
>different connections, in which case all that would be required
>would be that there be one /dev/tcp000 and everyone would use
>that. Is this how it's done? I suspect that it's more likely that
>rlogin has to hunt for a free /dev/tcpNNN somehow though. The
>question is, why should rlogin care? It doesn't need to know
>any kind of name for its own end of the connection - this is
>exactly the abstraction that sockets provide. With a socket, you
>just "make one" (unnamed) then connect to wherever you want to
>go (which would also be necessary using the stream approach of
>course).
>

Guess again. You can have a server (a process) that's selecting on a
file /dev/tcp. Your connector opens /dev/tcp and the connection is made
by the server.

How this works is as follows. The server is mounted on /dev/tcp and it
has a message and a connection line discipline between it and /dev/tcp.
When the other process (call it the ``connector'') opens /dev/tcp, the
server gets an open message. The connector blocks until the server
acknowledges the open. If the server can make the connection, the
connector's open returns with a file descriptor that is its tcp
connection. On failure, the server nak's the open and you get -1.

The search of /dev/xxNNN was always odious.

>There are other problems than rlogin just having to hunt. What
>if there aren't enough /dev/tcpNNN's for everyone who wants
>a connection now? Just mknod infinity of them? Seems a little
>wasteful.

How you make the connection is up to the server (coder).

>Conclusion - streams seem to be a great idea, but what I would
>like is the socket interface, and then the ability to push
>a line discipline (stream?) onto a socket, so I don't need a name
>for something I am never going to refer to.
>
>Incidentally, side issue, can someone comment on whether pipes
>are implemented using streams in 8th edition unix?

You don't need a name-space device to get a stream. A pipe is a full
duplex stream. So, call pipe() and you've got a stream.


Boyd Roberts		...!seismo!munnari!basser.oz!boyd
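Seen from the connector's side, the whole exchange reduces to a single
open(). This is only a sketch of the caller's view of the mounted-server
arrangement described above; how the destination gets named to the
server is not specified in the article:

	#include <fcntl.h>

	int
	connect_via_mounted_server()
	{
		int fd;

		/*
		 * The open message goes to the server mounted on /dev/tcp;
		 * we block here until it acknowledges the open (fd >= 0)
		 * or naks it (fd == -1).  How the server actually makes
		 * the connection, and how the destination is passed to
		 * it, is the server's business, not ours.
		 */
		fd = open("/dev/tcp", O_RDWR);
		return (fd);
	}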
dhp@ihnp3.UUCP (Douglas H. Price) (10/15/85)
>> There is no requirement that STREAMs must implement a virtual circuit.
>> --
>> 						Douglas H. Price
>
>Could you please state YOUR definition of a stream and of a virtual circuit?
>-- 
>  / \	Bill Crews
> ( bc )	Cyb Systems, Inc
>  \__/	Austin, Texas

The term "STREAMs" refers to Ritchie streams. A Ritchie stream is
normally implemented as a local-kernel-only virtual circuit with a
device on your host. The virtual circuit is not required to have any
expression in the network, or on the remote end of the communication.
Given the general confusion concerning the term "stream", it might be
argued that another name should have been chosen for this particular
mechanism. But that's the name that was chosen, so I had thought it
understood that in its capitalized form, "STREAMs", it could be
distinguished from the more general bytestream paradigm. Sorry about
the confusion.

On to the definitions:

A "virtual circuit" is a full-duplex, errorless communications path
possessing the properties of non-duplication, non-loss, and strictly
serial (ordered) data.

A "datagram" is an atomic, self-contained message, normally completely
contained in a single outbound packet, for which there is no guarantee
of delivery (i.e., send and pray).

A "stream" (bytestream) is a series of bytes which exhibits no natural
boundaries on messages. Message boundaries (if any) are implicit in the
data of the bytestream and are understood at the peer-to-peer level
rather than at the transport level.

"STREAMs" (Ritchie streams) are a kernel mechanism for implementing the
insertion of protocols in between a user process and the device
associated with the communication.
-- 
						Douglas H. Price
						Analysts International Corp.
						@ AT&T Bell Laboratories
						..!ihnp4!ihnp3!dhp
root%bostonu.csnet@CSNET-RELAY.ARPA (BostonU SysMgr) (10/15/85)
>steveg@hammer.UUCP (Steve Glaser) writes
>The accept blocks till a connection request is received. It returns a
>new file descriptor representing the *open* connection. If the server
>didn't really want to talk to somebody (say it only accepts connections
>from specific users), it would have to close the connection. The
>client side has now seen a "connection succeed" followed by a "close
>connection".
>....4.2 BSD does not support this cleanly.

The description seems fine, but could someone clarify exactly what the
objection is here, with a little more than 'does not support this
cleanly'? Is this a religious issue?

If a client connects to a server and the server wishes to reject on the
basis of some info about the remote side, what is the harm of just
tossing the remote side with a shutdown() or close()? (The other side
will get offended at its rude treatment?)

Or is the real issue here that there is no way to securely determine the
USER associated with the client process? If so, I think that is
orthogonal to whether you have to accept() to find out. It's a desirable
feature that can be layered into applications (consider FTP and TELNET,
for example).

I think it's semantics; if TP4 is passing that much info it may as well
be open, although standardizing rejections analogous to <errno.h> could
be an advantage in a few situations (might prevent re-tries; see SMTP
for a model of this: fatal, temporary, etc. rejections). In fact,
errno.h probably covers just about everything one might want to say,
and a few could be added with little harm (we are not *yet* threatening
errno's 2^32 address space! :-)

I presume that even in heterogeneous nets most systems on the net
support the notion of an integer, so the idea is portable; adapting to
SMTP's format of "errno string<CR><LF>" should be a fine way to explain
to the other side why they are getting 86'd.

	-Barry Shein, Boston University
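A trivial sketch of sending such a rejection line before closing the
connection; the choice of error number is the caller's, and strerror()
stands in for whatever errno-to-text mapping the two systems agree on:

	#include <stdio.h>
	#include <string.h>
	#include <errno.h>

	void
	reject(fd, err)
	int fd, err;			/* e.g. err = ECONNREFUSED */
	{
		char buf[128];

		/* "errno string<CR><LF>", SMTP-style, then 86 them */
		(void) sprintf(buf, "%d %s\r\n", err, strerror(err));
		(void) write(fd, buf, strlen(buf));
		(void) close(fd);
	}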