[comp.mail.multi-media] A high-level language for animation & sound ??

eho@clarity.Princeton.EDU (Eric Ho) (10/01/89)

Just wondering if anyone out there is developing a high-level language that
has all the flexibility & simplicity (& device independence) of PostScript but
capable of addressing sound & animation as well?
--

Eric Ho  
Princeton University
eho@confidence.princeton.edu

ianf@nada.kth.se (Ian Feldman) (10/02/89)

In article <EHO.89Oct1023005@cognito.Princeton.EDU> eho@clarity.Princeton.EDU (Eric Ho) writes:
>Just wondering if anyone out there is developing a high-level language that
>has all the flexibility & simplicity (& device independence) of PostScript but
>capable of addressing sound & animation as well?

  Yes, there is such an animal, and its name is HyperTalk[tm].  Animation
  via VideoWorks HC-Driver or by XCMD. How much more flexible can you get?

-- 
----
------ ianf@nada.kth.se/ @sekth.bitnet/ uunet!nada.kth.se!ianf
----
--

amanda@intercon.com (Amanda Walker) (10/03/89)

In article <1833@draken.nada.kth.se>, ianf@nada.kth.se (Ian Feldman) writes:
> >Just wondering if anyone out there is developing a high-level language that
> >has all the flexibility & simplicity (& device independence) of PostScript but
> >capable of addressing sound & animation as well?
> 
>   Yes, there is such an animal, and its name is HyperTalk[tm].  Animation
>   via VideoWorks HC-Driver or by XCMD. How much more flexible can you get?

Well, I'd say it misses on the "device-independent" bit...  HyperCard is
about as Macintosh-dependent as software gets...

It has some good ideas, though.  In my opinion, it's aimed a little too
much at interaction to be useful for describing multi-media documents.

Anybody from CMU or BBN want to pipe up here :-)?

--
Amanda Walker
amanda@intercon.com

nsb@THUMPER.BELLCORE.COM (Nathaniel Borenstein) (10/03/89)

> Anybody from CMU or BBN want to pipe up here :-)?

Well, as an ex-CMU person, maybe I qualify.  The reason I hadn't answered
is that, frankly, I think the basic answer to this question is "No."  As far
as I know, there is no good high-level language for the portable expression
of time-sequenced multimedia events such as animation and sound.  

Now, what probably made Amanda mention CMU and BBN is, of course, the fact
that both Andrew and Diamond can represent such events in their datastreams.
Indeed, both datastreams are even moderately portable.  I know much more
about Andrew, and can in fact attest that the Andrew data stream readily
represents animations and sound, and could be easily extended to video
(the extreme flexibility and extensibility of the Andrew model is one of
Andrew's real strong suits, actually).  The one word I sort of choked on,
however, in my first paragraph, was the word "good."

Even granting, for example, that PostScript is a "good" language for 
static imaging -- and I think there is some room for debate there --
we aren't even close to it for active processes.  If you like PostScript,
you could probably do a lot worse than to start with Display PostScript 
for such a language.  In my opinion, however, the proper design of such
languages is still very much a research question.

-- Nathaniel Borenstein, Bell Communications Research

tcrowley@DIAMOND.BBN.COM ("Terry Crowley") (10/03/89)

Have to agree with Nathaniel: I wouldn't claim that what we have is a
good representation of time-sequenced multimedia events.  In fact, as
it stands now, a Slate document (née Diamond) doesn't have any notion
of time-sequenced events.  Speech and video can be embedded in a document
but are only played when explicitly invoked.  You could probably get part
of the way along using that capability and the document extension
language, but I haven't tried doing it so I can't make any claims right
now.

Other activity that I know about that seems pertinent is the Athena
Muse work at MIT, designed primarily as a tool for developing interactive
video instructional materials, and Polle Zellweger's work at Xerox PARC
on Scripted Documents.

Terry Crowley

wjh+@ANDREW.CMU.EDU (Fred Hansen) (10/12/89)

Eric Ho has asked: 

>> Just wondering if anyone out there is developing a high-level language that
>> has all the flexibility & simplicity (& device independence) of PostScript
>> but capable of addressing sound & animation as well?

Other responders to this request have mentioned MIDI as a sound standard
(standard for sound ?-).  We could even consider making MIDI a data type
accessible to PostScript and then do everything in PostScript.  With the
extensions developed for NeWS, PostScript should be able to handle the
real-time sequencing required for animations.
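To make the idea concrete, here is a rough sketch in Python (with invented helper names; this is not the full MIDI file format) of what treating MIDI as a data type might look like: raw note messages paired with delta times, roughly as a MIDI track chunk stores them.

```python
# Sketch only: MIDI messages as plain byte strings plus delta times.
# The 0x90/0x80 status bytes and 7-bit data ranges are per the MIDI
# standard; the "score" structure and function names are invented.

def note_on(channel, key, velocity):
    """Build the three bytes of a MIDI note-on message."""
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

def note_off(channel, key):
    """Note-off: status 0x80, release velocity 0."""
    return bytes([0x80 | (channel & 0x0F), key & 0x7F, 0])

# A toy "score": (delta time in ticks, message) pairs -- middle C, then E.
score = [
    (0,  note_on(0, 60, 100)),
    (48, note_off(0, 60)),
    (0,  note_on(0, 64, 100)),
    (48, note_off(0, 64)),
]

# A sequencer would wait out each delta, then emit the bytes to the port;
# that waiting is exactly the real-time sequencing discussed above.
total_ticks = sum(delta for delta, _ in score)
```

The point is that once the events are ordinary data, a document system can store, copy, and schedule them like any other embedded object.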

It seems to me, however, that we need two more "languages".  

The way I see it, a message is a document and contains various pieces
like rasters, spreadsheets, sounds, and animations.  One language needed
is a high level, user oriented language for expressing the behavior of
the message in response to reader actions.  The author may wish a
particular button to start a voice-gram or may want the voice to
activate in conjunction with an animation.  If we try to build special
purpose tools for users to express all possible sequencing we will
inevitably omit some useful options.  Users will be far better served if
we instead build a general-purpose language with functionality for
invoking animations, sound, and other dynamic objects.  

The other "language" needed is some data stream definition that can
amalgamate in one stream the various elements we want:  MIDI,
PostScript, text, rasters, animations, spreadsheets, ... .  PostScript
could serve this purpose, but it needs to be extended with a stronger
notion of quoting embedded objects.  {There is a way to do this in
PostScript: the red PostScript book shows a way to incorporate large
rasters, but it essentially means dropping out of PostScript and reading
directly from the data stream.}
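One hypothetical way to get that stronger notion of quoting is to frame each embedded object with its type name and a byte-count prefix, so a reader can skip over objects it doesn't understand without parsing (or being confused by) their raw contents.  A sketch in Python, with an invented framing format (not PostScript's, ATK's, or anyone's actual datastream):

```python
import struct

def embed(type_name, payload):
    """Frame one object: name length, name, payload length, payload.
    The explicit byte counts are the "quoting" -- a reader never has to
    scan raw data for an end marker that might occur inside it."""
    name = type_name.encode("ascii")
    return (struct.pack(">B", len(name)) + name
            + struct.pack(">I", len(payload)) + payload)

def parse(stream):
    """Split a concatenation of framed objects into (type, payload) pairs."""
    objects, i = [], 0
    while i < len(stream):
        nlen = stream[i]; i += 1
        name = stream[i:i + nlen].decode("ascii"); i += nlen
        (plen,) = struct.unpack(">I", stream[i:i + 4]); i += 4
        objects.append((name, stream[i:i + plen])); i += plen
    return objects

# One stream amalgamating several element types, as the text proposes.
doc = (embed("text", b"Hello")
       + embed("raster", b"\x00\xff" * 4)
       + embed("midi", b"\x90\x3c\x64"))
```

Length-prefixing trades streamability (the writer must know each object's size up front) for the ability to skip unknown objects cheaply; delimiter-based quoting makes the opposite trade.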

What work is going on at CMU?  We have candidates for both languages.  

The Andrew ToolKit (ATK) datastream definition provides for a mixture of
text and other objects of any sort.  The run-time environment defines
protocols by which the driver for an outer object communicates with
embedded objects, including how the outer object calls upon the inner to
read its portion of the data stream.  
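That delegation can be sketched as follows, in Python, with marker and function names loosely modeled on ATK's \begindata/\enddata convention but not the real ATK protocol: the outer object's reader hands the stream to an inner object's reader, which consumes exactly its own portion and then returns control.

```python
# Sketch only: the outer reader owns the stream; each inner object's
# reader consumes up to and including its own closing marker.

def read_document(lines):
    """Outer object's reader: plain lines are text; a \\begindata{...}
    line delegates the stream to the named inner object's reader."""
    doc, i = [], 0
    while i < len(lines):
        line = lines[i]
        if line.startswith("\\begindata{") and line.endswith("}"):
            name = line[len("\\begindata{"):-1]
            inner, i = READERS[name](lines, i + 1)
            doc.append((name, inner))
        else:
            doc.append(("text", line))
            i += 1
    return doc

def read_raster(lines, i):
    """Inner reader for a hypothetical 'raster' object: collect rows
    until the closing \\enddata, then return control to the outer."""
    rows = []
    while lines[i] != "\\enddata":
        rows.append(lines[i])
        i += 1
    return rows, i + 1

READERS = {"raster": read_raster}

stream = ["Hello,", "\\begindata{raster}", "0101", "1010",
          "\\enddata", "world."]
doc = read_document(stream)
```

Because each inner reader reports where it stopped, the outer object never needs to understand the embedded format -- which is what lets new object types be added without changing the outer datastream definition.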

For user-level expression of behaviors, we have implemented Ness.   This
is a fairly standard language in many respects:  variables, functions,
assignment, if-then-else, while-do.  In addition it offers a novel
algebra for dealing with string values; with this algebra substring
references can be passed as arguments to functions and returned as their
value.  This means that a parsing function not only returns the string
it matches, but also the location of that string within its underlying
base string.
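The substring algebra might be sketched like this, in Python with hypothetical names (not Ness's actual syntax): a match returns a reference into the base string rather than a copy, so the caller keeps both the matched text and its location.

```python
# Sketch only: a substring reference is (base string, start, end),
# so a parsing function's result carries its own position.

class Subseq:
    """A substring reference: base string plus [start, end) positions."""
    def __init__(self, base, start, end):
        self.base, self.start, self.end = base, start, end

    def text(self):
        return self.base[self.start:self.end]

    def extent_to(self, other):
        """Algebra: the span from this reference through another
        reference into the same base string."""
        return Subseq(self.base, self.start, other.end)

def match(base, pattern, from_pos=0):
    """A parsing function that returns a reference, not a bare string."""
    i = base.find(pattern, from_pos)
    return Subseq(base, i, i + len(pattern)) if i >= 0 else None

line = "play sound then start animation"
verb = match(line, "play")         # knows it starts at position 0
noun = match(line, "sound")        # knows it starts at position 5
phrase = verb.extent_to(noun)      # the combined span "play sound"
```

An ordinary string-returning matcher would lose the positions; here `phrase` can still be highlighted, replaced, or re-scanned in place within the original text.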

Another innovation of the language is a notation that permits scripts in
the language to intercept mouse, menu, and key events intended for
objects.  In this way a mouse click on any object can be interpreted to
start an animation or initiate sound.
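A sketch of the interception idea, in Python with invented names (not Ness's real notation): a script registers a handler for an (object, event) pair, and the dispatcher consults that table before falling back to the object's own behavior.

```python
# Sketch only: scripts intercept events by registering handlers keyed
# on (object, event); unhandled events fall through to the default.

handlers = {}  # (object_name, event_name) -> callable

def on(obj, event, fn):
    """Register a script to intercept an event aimed at an object."""
    handlers[(obj, event)] = fn

def dispatch(obj, event, default):
    """Deliver an event: run the intercepting script if one exists,
    otherwise the object's own default behavior."""
    fn = handlers.get((obj, event), default)
    return fn()

log = []
# A script: clicking this raster starts an animation instead of the
# raster's normal click behavior.
on("raster1", "mouse-click", lambda: log.append("animation started"))

dispatch("raster1", "mouse-click", lambda: log.append("default click"))
dispatch("raster2", "mouse-click", lambda: log.append("default click"))
```

The table lookup is the whole mechanism: authors attach behavior without the objects themselves knowing anything about scripts.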

(A few months ago I promised to send out documentation on Ness to a few
people.  I expect to actually do that this week.  Hang in there, folks
:-)

Fred Hansen

karl@ficc.uu.net (Karl Lehenbauer) (10/30/89)

In article <1833@draken.nada.kth.se> ianf@nada.kth.se (Ian Feldman) writes:
>In article <EHO.89Oct1023005@cognito.Princeton.EDU> eho@clarity.Princeton.EDU (Eric Ho) writes:
>>Just wondering if anyone out there is developing a high-level language that
>>has all the flexibility & simplicity (& device independence) of PostScript but
>>capable of addressing sound & animation as well?

>  Yes, there is such an animal, and its name is HyperTalk[tm].  Animation
>  via VideoWorks HC-Driver or by XCMD. How much more flexible can you get?

Surely you must be joking.  The music capabilities of HyperTalk are completely
inadequate, to wit:  1) music input is entirely textual, 2) only one note
can play at a time, 3) the computer busy-loops while playing sound,
4) there is no velocity sensitivity.  This is based on reading a book on
HyperTalk.  I have heard that Apple is addressing at least some of these
issues.
-- 
-- uunet!ficc!karl	"The last thing one knows in constructing a work 
			 is what to put first."  -- Pascal

pa1158@sdcc13.ucsd.edu (Viet Ho) (11/13/89)

>Surely you must be joking.  The music capabilities of HyperTalk are completely
>inadequate, to wit:  1) music input is entirely textual, 2) only one note
>can play at a time, 3) the computer busy-loops while playing sound,
>4) there is no velocity sensitivity.  This is based on reading a book on
>HyperTalk.  I have heard that Apple is addressing at least some of these
>issues.

   Have you tried looking at an Amiga platform?  Everything you
mentioned above is already addressed on that machine, which has
dedicated processors for sound and video, leaving the main CPU free
to work on other jobs.