[comp.sys.next] Talk though Ethernet

wpcf@wucs1.wustl.edu (Pei Weng) (06/03/91)

I am getting started doing some DSP programming on the NeXT. What I am
trying to do is: when I talk to my NeXT machine, another user on
the network can hear me and respond using his NeXT machine.
Has anybody done something similar before? Any information will be
greatly appreciated. Thanks in advance!

eps@toaster.SFSU.EDU (Eric P. Scott) (06/07/91)

In article <1991Jun2.224148.20862@cec1.wustl.edu>
	wpcf@wucs1.wustl.edu (Pei Weng) writes:
>I am getting started doing some DSP programming on the NeXT. What I am
>trying to do is: when I talk to my NeXT machine, another user on
>the network can hear me and respond using his NeXT machine.
>Has anybody done something similar before? Any information will be
>greatly appreciated. Thanks in advance!

The most promising work in this area (what will eventually become
the Internet standard for this) is a package called Voice
Terminal developed jointly by USC/ISI and BBN.  The current
version runs on SPARCstations (not NeXTs).  While limited
functionality is possible over UDP, a protocol specifically
designed for real-time applications (ST) is preferable.  This
means building a new SunOS kernel, or in the case of the NeXT,
implementing the ST protocol as a loadable kernel server.  VT is
capable of four-way conferencing; it mixes mu-law audio from
multiple sources in real time.  It also places exceptional
demands on the machine; since ST frames are only 256 bytes
long, audio is nominally processed in 180-byte chunks (NeXT's
snd driver apparently only allows power-of-2-sized reads :-( ).
This means the CPU has to be able to handle microphone data
at least 45 times a second, as well as whatever comes in from
the network.  A SPARCstation 1 is said to be barely up to the
task (and it's useless for anything else while VT is running).
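To make the arithmetic concrete, here is a rough sketch (mine, not
VT's code, and not the real NeXT snd driver API) of what the
packetization loop looks like: read the microphone in power-of-2
chunks, carve the stream into 180-byte frames, and push each frame
out a datagram socket.  At 8000 mu-law bytes/sec that works out to
8000/180 = ~44.4, i.e. roughly 45 frames a second.  The device name,
port, and peer address below are placeholders.

    /* Illustrative only -- not the ISI/BBN VT implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define READ_CHUNK 256     /* power-of-2 read, per the snd driver */
    #define FRAME_BYTES 180    /* payload per frame, per the article  */

    int main(void)
    {
        unsigned char buf[READ_CHUNK + FRAME_BYTES];
        int have = 0, mic, sock;
        struct sockaddr_in peer;

        mic = open("/dev/mic", O_RDONLY);    /* placeholder device  */
        sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (mic < 0 || sock < 0) {
            perror("setup");
            return 1;
        }

        memset(&peer, 0, sizeof peer);
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7654);                  /* example port */
        peer.sin_addr.s_addr = inet_addr("192.0.2.1"); /* example peer */

        /* 8000 mu-law bytes/sec / 180 bytes/frame = ~45 frames/sec */
        for (;;) {
            int n = read(mic, buf + have, READ_CHUNK);
            if (n <= 0)
                break;
            have += n;
            while (have >= FRAME_BYTES) {     /* ship complete frames */
                sendto(sock, buf, FRAME_BYTES, 0,
                       (struct sockaddr *)&peer, sizeof peer);
                have -= FRAME_BYTES;
                memmove(buf, buf + FRAME_BYTES, have);
            }
        }
        close(mic);
        close(sock);
        return 0;
    }

Mixing several such streams, as VT does, additionally means converting
each mu-law byte to linear, summing, and re-encoding before playback,
which is where the CPU load really comes from.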

If you want a simple, not-quite-real-time alternative, Adamation has
a program called Live Wire with voice capability.

					-=EPS=-