mark@giza.cis.ohio-state.edu (Mark Jansen) (11/13/90)
I recently got information in the mail about Steve Jobs's new color NeXT machine. You buy his NeXT cube with a 32-bit graphics board that evidently can grab video, compress it in real time, and then play back video in real time from these compressed files. The lady on the phone priced a bare system at $14K, without hard disk. Seems to me that this is the first workstation you could use as a stand-alone animation computer. Couldn't you just create rendered graphics, get them into compressed format, and then dump 20 seconds or so? You wouldn't need an Abekas or a special single-step video recorder. Am I right? Am I wrong?

--
Mark Jansen, Department of Computer and Information Science
The Ohio State University; 2036 Neil Ave., Columbus, OH USA 43210-1277
mark@cis.ohio-state.edu
topix@gpu.utcs.utoronto.ca (R. Munroe) (11/13/90)
In article <85866@tut.cis.ohio-state.edu> mark@giza.cis.ohio-state.edu (Mark Jansen) writes:
>I recently got information in the mail about Steven Job's new
>color Next Machine. You buy his Next Cube with a 32bit graphics board
>that evidently can grab video, compress it in real time and then play
>back video in real time from these compressed files. [...]

Although I'm not sure how the images are compressed, I would be quite amazed if they could play back in broadcast quality. An Abekas is specialized to do just that. I would expect that a NeXT is too general (so to speak) a platform to handle high-quality real-time playback. Besides, you still need good video equipment to encode the RGB signal so the animation can be put to tape. We use Silicon Graphics workstations, which are incredibly fast beasties, but we still do frame-by-frame for broadcast quality.

Bob Munroe
topix@utcs.utoronto.ca
jim@baroque.Stanford.EDU (James Helman) (11/14/90)
I had a similar thought about using a NeXTdimension as an animation slave to our SGI. My inquiries on this subject, both to the net and to our NeXT rep, have gone unanswered. I'd be happy even with 10-15 frames/sec and minor decompression degradation (most things go out on VHS anyway). I suspect the lack of response means that a) NeXT is keeping the specs secret, b) the bandwidth just isn't there, or c) both. I hope I'm wrong. From my perspective, it's one of the more intriguing aspects of the product.

Jim Helman
Department of Applied Physics    Durand 012
Stanford University              FAX: (415) 725-3377
(jim@KAOS.stanford.edu)          Work: (415) 723-9127
jack@dali.gatech.edu (Tom Rodriguez) (11/14/90)
I've seen demos of the video capabilities of the NeXTdimension machines and they are very impressive. The captured video is 640 by 480, 32-bit RGBA. It can record and play back in real time from the hard disk, doing JPEG video compression along the way. You can do video editing in software without any loss of video quality from copying and recopying the video signal. As far as the quality of the video out goes, I haven't seen anything produced by it. It does have S-VHS video out in addition to a normal composite signal. I doubt that the quality is poor, mainly because my general impression of the machine is that the hardware supporting the video is excellent. Anyway, if anyone gets the chance to see a demo of the NeXTdimension machine, check it out. It really rocks.

tom
-----------------------------------------------------------------------------
Tom Rodriguez                           jack@cc.gatech.edu
SERC Multimedia Computing Group
Georgia Institute of Technology         Atlanta Georgia 30332-0280
talent@spanky.sps.mot.com (Steve Talent) (11/14/90)
In article <JIM.90Nov13163758@baroque.Stanford.EDU> jim@baroque.Stanford.EDU (James Helman) writes:
>I had a similar thought about using a NeXTDimension as an animation
>slave to our SGI. My inquiries on this subject both to the net and to
>our NeXT rep have gone unanswered. I'd be happy even with 10-15
>frames/sec and minor decompression degradation (most things go out on
>VHS anyway). [...]

I have a copy of the NeXT brochure that lists some specs for the new machines, including the NeXTdimension board. Of course you need the NeXTcube for this board. Image compression and decompression are done using a dedicated JPEG image compression processor from C-Cube. The brochure states that you can do real-time video compression and decompression and store up to 60 minutes of live video on a high-capacity hard disk (probably the 1.4GB drive). The compression factor is user-selectable. The brochure doesn't state any details about how these numbers were calculated. I think the sales rep stated a compression range of 50X to 200X. It seems that image degradation for animation would be less noticeable than for still images.

Steve Talent, Motorola Semiconductor Products Sector CAD
Tempe, AZ  602-897-5440, talent@dover.sps.mot.com
dave@tygra.ddmi.com (David Conrad) (11/14/90)
In article <1990Nov13.043825.9515@gpu.utcs.utoronto.ca> topix@gpu.utcs.utoronto.ca (R. Munroe) writes:
>In article <85866@tut.cis.ohio-state.edu> mark@giza.cis.ohio-state.edu (Mark Jansen) writes:
))  I recently got information in the mail about Steven Job's new
))color Next Machine. You buy his Next Cube with a 32bit graphics board
))that evidently can grab video, compress it in real time and then play
))back video in real time from these compressed files...
)
)Although I'm not sure how the images are compressed, I would be quite amazed
)if they could play back in broadcast quality. An Abekas is specialized to
)do just that. I would expect that a Next is too general (so to speak) a
)platform to handle high quality real time playback...

Guess again. NeXT claims thirty frames per second with the dedicated compressor/decompressor co-processor. Read the article in the latest issue of Byte.
--
David R. Conrad
dave@tygra.ddmi.com
--
= CAT-TALK Conferencing Network, Prototype Computer Conferencing System =
- 1-800-825-3069, 300/1200/2400/9600 baud, 8/N/1. New users use 'new'   -
= as a login id.  <<Redistribution to GEnie PROHIBITED!!!>>>            =
E-MAIL Address: dave@ThunderCat.COM
olson@sax.cs.uiuc.edu (Bob Olson) (11/14/90)
I asked around and got the following replies to the query...

>Couldn't you just created rendered graphics, get
>it into compressed format and then dump, 20 seconds or so. You wouldn't
>need an Abekas or special single step video recorder. Am I right? am
>I wrong?

Thanks to those who replied.

**********************************************************************
From: Kim_Orumchian@NeXT.COM

Here are some of my thoughts related to animation on NeXTdimension:

I think what the discussion below hinges on is what is meant by "broadcast quality". Images that are compressed and played back by a NeXTdimension are not going to look as good as those produced frame by frame on a dedicated video production system. This is not to say that images produced this way will look bad, just that they won't be exactly what you would expect from a dedicated machine costing hundreds of thousands of dollars.

This brings up several issues. The first is that the NeXTdimension is fully capable of generating very high quality still-frame images in much the same way that the Sun workstations are being used (as described below). The speed of the NeXTdimension for manipulating 32-bit graphic images makes it quite desirable for this kind of operation (the i860 is one fast chip).

Another issue is that even though the NeXTdimension's JPEG-compressed images may not be as high resolution as what production houses have today, an important question is who would use low-cost, easy-to-use animation, video editing, and production products. I think there is an analogy to desktop publishing: in the early days, only large, very specialized production houses used computers to aid in layout and production. When PageMaker came out on the Mac, many people scoffed at the solution because it did not offer them the same kind of quality as the high-end systems. The point is that a new audience was found, because the lowered price point and simplified ease of use appealed to a different, much broader set of people. I think the same is going to be true of software products that sit on the NeXTdimension.

In summary, yes, the NeXTdimension is a great stand-alone animation computer (among other things), and even though it does not offer "broadcast quality" output in the strict sense, the video it generates by playing back compressed images is pretty high resolution (better than most VCRs). Also, there are high-end consumer-grade single-frame VCRs coming that would let you image frames at full resolution (640x480, 32 bits per pixel) and then output them one frame at a time without doing any compression whatsoever.

**********************************************************************
From: Eddy Wong <ewong@gpu.utcs.utoronto.ca>

In order to output production-quality video suitable for broadcasting, the video signal has to conform strictly to the RS-170A standard. In other words, the timing of the video signal has to be very accurate. Although there are many video boards for different computer platforms capable of producing video signals good enough to be recorded on commercial video equipment, they are not good enough for professional-quality video recording. I don't have enough information to tell whether the NeXTdimension is good enough for professional-quality video. Hope this helps.

**********************************************************************
From: jasmerb@ohsu.EDU (Bryce Jasmer)

>Does anyone know more precisely what he is talking about? My gut
>feeling is that he is underestimating the NeXTdimension, but I'm not
>sure...

No, I think he is hitting the nail right on the head. Remember what the NeXTdimension is actually doing when it does compression. It is taking an image and doing 37-times compression on it. This isn't easy to do; it must remove some things from the picture and do its best to get a close approximation. The data going into the compression is not the same as the data coming out of the compression.

This is fine if you plan on doing a quick recording of CNN, but if you plan on doing animation with "photorealistic" quality, the NeXTdimension and C-Cube (the chip that does the JPEG compression) just don't cut the mustard. The animation frames take a long time to generate and need to be exact. If you compress them to the hard disk, you will lose some of the sharpness of the frames. Now don't get me wrong, the NeXTdimension and compression chips are really sweet, but they just don't cut it for what this guy is thinking about doing.
velasco@beowulf.ucsd.edu (Gabriel Velasco) (11/14/90)
From: Kim_Orumchian@NeXT.COM
>I think what the discussion below hinges on what is meant by
>"broadcast quality". Images that are compressed and played back by a
>NeXTdimension are not going to look as good as those produced frame by
>frame on a dedicated video production system.

Why? This is not a flame; I am really wondering where the deficiency is. Is it in the hardware? The software? The compression algorithm? The original poster was talking about producing the sequences frame by frame.

>Another issue is that even though the NeXTdimension's JPEG compressed
>images may not be as high resolution as what production houses have
>today,

I am not at all familiar with NeXTdimension's JPEG-compressed images. What type of algorithm do they use? Is it delta modulation? Run-length encoding? Sending only changes from one frame to the next? Selective color mapping? If the NeXTdimension is using 32-bit color (mentioned later in the article), then isn't that *better* than what production houses are using? HDTV is only 24-bit color. How are the 32 bits divided among the three colors?

>When Pagemaker came out on the Mac, many people scoffed at
>the solution because it did not offer them the same kind of quality as
>the high end systems. The point is a new audience was found because
>the lowered price point/simplified ease of use appealed to a
>different, much broader set of people. I think the same is going to be
>true in the case of software products that sit on NeXTdimension.

Let's not forget its use in multimedia. I don't think that the "production" quality video systems give much thought to compression, because they don't have a bandwidth problem. To transmit these frames over something like Ethernet (we're doing it right now over 10Mbps Ethernet in "real time" with UVC compression boards), you need compression. A real-time 24 (or 32) bit color video channel would take up a good chunk of an FDDI network, too. It seems like the NeXT system may make video e-mail possible.

>Also, there are coming high-end consumer-grade
>single frame VCR's that would let you image frames with full
>resolution (640X480, 32 bits per pixel), and then output them one
>frame at a time without doing any compression whatsoever.

The author is treating "compression" like a dirty word. Compression need not decrease the quality of the image at all; it depends on the type of compression. What is the resolution of the NeXTdimension video?

From: jasmerb@ohsu.EDU (Bryce Jasmer)
>Remember what
>the NeXTDimension is actually doing when it does compression. It is
>taking an image and doing 37 times compression on it. This isn't easy
>to do, it must remove some things for the picture and do its best to
>get a close approximation. The data going into the compression is not
>the same as the data coming out of the compression.

This is not necessarily true for all forms of compression. Straight run-length encoding preserves every bit of detail in the original image. Delta modulation can come pretty close at a high enough sampling rate. Is it the type of compression they are using that causes them to lose resolution? Are they mapping the colors into a color map, similar to what is done for GIF images?

I may be wrong here, but it seems to me that people are producing "broadcast quality" images with Amigas outfitted with genlock cards. There are some relatively expensive, supposedly professional systems available for the Amiga right now. I've seen pictures of stuff created by the Amiga that had supposedly been done for the KTLA TV station and others.

--
________________________________________________ <>___,
/ / | ... and he called out and said, "Gabriel, give | /___/ __ / _ __ ' _ / | this man an understanding of the vision." | /\__/\(_/\/__)\/ (_/_(/_/|_ |_______________________________________Dan_8:16_|
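Velasco's distinction between lossless and lossy schemes is easy to make concrete. The sketch below (modern Python, purely illustrative; no claim that any board in this thread works this way) shows straight run-length encoding round-tripping a scan line bit-for-bit, which is exactly why it cannot reach the ratios a lossy coder can:

```python
# Toy run-length codec: lossless by construction, but only effective on
# data with long runs (cartoons, flat graphics), not noisy natural images.

def rle_encode(data):
    """Encode bytes as (run_length, value) pairs, runs capped at 255."""
    pairs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        pairs.append((j - i, data[i]))
        i = j
    return pairs

def rle_decode(pairs):
    """Reverse the encoding exactly -- no information is ever discarded."""
    return bytes(v for run, v in pairs for _ in range(run))

# A flat "cartoon" scan line compresses 640 bytes into 4 pairs and
# still recovers every bit:
line = bytes([0] * 300 + [255] * 340)
assert rle_decode(rle_encode(line)) == line
assert len(rle_encode(line)) == 4
```

On a noisy camera image nearly every run has length 1, so this encoding can even expand the data; that gap is the difference between the modest lossless ratios quoted elsewhere in the thread and the 37X lossy figure.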
bakke@plains.NoDak.edu (Jeffrey P. Bakke) (11/15/90)
In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes:
> If NeXTdimension is using 32 bit color (mentioned later in the article)
> then isn't that *better* than what production houses are using? HDTV
> is only 24 bit color. How are the 32 bits divided into the three
> colors?

I have some literature from NeXT that states that the NeXTdimension board has 32-bit color, but from what it looks like, it uses 8 bits each for RGB (24-bit), and the extra 8 bits are used for something called a transparency plane. I'm not sure exactly how that works, but it has something to do with the ability to produce 'sprite'-like images, in which you can view portions of covered images through areas of the top image.

> What is the resolution of the NeXTdimension video?

I believe it's around 1180-something by 890, somewhere in there; it's not the standard 1280x1024 or 1024x768 that you usually hear. Something about the pixel size representing a better view of paper at relative size, or something.

--
Jeffrey P. Bakke                   | There are a finite number of
INTERNET: bakke@plains.NoDak.edu   |   jokes in the world...
UUCP    : ...!uunet!plains!bakke   |     The overflow began
BITNET  : bakke@plains.bitnet      |       decades ago.
"I am not a number, I am a free man!" - The Prisoner
kenb@amc-gw.amc.com (Ken Birdwell) (11/15/90)
>talent@spanky.sps.mot.com (Steve Talent)
>The brochure states that you can do real-time video compression and
>decompression and store up to 60 minutes of live video on a high
>capacity hard disk (probably 1.4GB drive). The compression factor
>is user selectable. The brochure doesn't state any details about
>how these numbers were calculated.

I too was confused by these numbers. But if you figure that a 24-bit image (640*480*3) takes a little under a meg (921600 bytes), then you're talking about (1400000000/921600) 1500 frames, or about 50 seconds, at uncompressed rates. Since the compression rate is configurable (but I heard that you have to go to 30:1 if you want to do real time; something about bandwidth), you should be able to get about 25 minutes of fairly good quality images (synthetic images will show greater degradation than captured video), and about 60 minutes at 72:1, which I assume to be fairly low quality.
--
b645zai@utarlg.utarl.edu (Jay Finger) (11/15/90)
In article <velasco.658597478@beowulf>, velasco@beowulf.ucsd.edu (Gabriel Velasco) writes...
>From: Kim_Orumchian@NeXT.COM
>
>>I think what the discussion below hinges on what is meant by
>>"broadcast quality". Images that are compressed and played back by a
>>NeXTdimension are not going to look as good as those produced frame by
>>frame on a dedicated video production system.
>
>Why? This is not a flame. I am really wondering where the deficiency is. Is
>it in the hardware? The software? The compression algorithm? The original
>poster was talking about producing the sequences frame by frame.
>
>I am not at all familiar with NeXTdimension's JPEG compressed images. What
>type of algorithm do they use? Is it delta modulation? Run length encoding?
>Send only changes from one frame to the next? Selective color mapping?

First of all, I am not very familiar with video compression, so you may want to take this with a few grains of salt. JPEG is a large consortium (sorry, but I don't know what it stands for). "JPEG compression" itself denotes an industry-wide algorithm that many people are using or planning to use. What algorithms it is based upon I have no idea. It's relatively new, and requires a *LOT* of processing. C-Cube is the first with a single-chip solution that can do the compression in real time.

The compression used is not fully recoverable. That is, some information gets lost: you may lose some detail, colors may change a little, etc. If you've ever watched something compressed and then decompressed with this chip, though, you won't be disappointed. For normal TV you can't tell the difference; at 640x480, only if you're trying to notice.

The reason stuff produced on the NeXTdimension (or any other similar system; it's *NOT* a matter of NeXT doing something stupid) will not be as good as dedicated equipment is as follows. When producing on a NeXTdimension, you produce a frame, compress it (through the C-Cube chip), and save it. Then you do the next frame, compress it, and append it to the first. And so on. When you record to tape, you're just playing the recorded images back in real time, letting them be expanded by the C-Cube chip. The dedicated hardware Kim was talking about (I think) is capable of taking each frame as you produce it and recording that single frame to tape. Then you produce the next frame and record it straight to tape. What's different? You're not compressing and re-expanding each frame. The idea, though, is that the compression algorithms are (hopefully) good enough that a viewer who isn't looking for differences will never see them.

>If NeXTdimension is using 32 bit color (mentioned later in the article)
>then isn't that *better* than what production houses are using? HDTV
>is only 24 bit color. How are the 32 bits divided into the three
>colors?

32 bits = 8 red + 8 green + 8 blue + 8 alpha. The alpha channel is used for transparency information (I think).

>The author is treating the "compression" like a dirty word.
>Compression need not decrease the quality of the image any. It depends
>on the type of compression.

He's not treating it like a dirty word. He simply knows that it is not perfect, and is trying to keep people from thinking that.

----
Jay Finger
Computer Science and Engineering, University of Texas at Arlington
b645zai@utarlg.utarl.edu
finger@csun5.utarl.edu
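The "32 bits = 8R + 8G + 8B + 8A" layout, with alpha carrying transparency, is the standard "over" compositing rule. A minimal sketch (modern Python; the channel ordering and the float-in-[0,1] convention are my assumptions for illustration, not anything NeXT documented):

```python
def over(fg, bg):
    """Composite a foreground RGBA pixel over a background RGBA pixel
    (the Porter-Duff 'over' operator).  Channels are floats in [0, 1];
    alpha 1.0 means fully opaque."""
    fr, fgr, fb, fa = fg
    br, bgr, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:                    # both pixels fully transparent
        return (0.0, 0.0, 0.0, 0.0)

    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a

    return (blend(fr, br), blend(fgr, bgr), blend(fb, bb), out_a)

# A half-transparent red pane over opaque blue: you "see through" the
# red to the background, which is the see-through-the-windows effect
# mentioned in the thread.
tinted = over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
assert tinted == (0.5, 0.0, 0.5, 1.0)
```

An extra byte per pixel buys exactly this: layers can be combined after rendering, without re-rendering the scene.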
jim@baroque.Stanford.EDU (James Helman) (11/15/90)
I have the same brochure. But the wording, "lets you take live video,
compress it, and store it on a hard disk," isn't totally explicit
about the frame rate or resolution of the images. If it means 30 full
NTSC resolution frames per second, that's good.
talent@spanky.sps.mot.com writes:
I think the sales rep. stated a compression range of 50X to 200X.
It seems that image degradation for animation would be less noticeable
than for still images.
Someone from C-Cube gave a talk here last year. I think 50:1 to 200:1
might be pushing it a little bit. I remember approximately 20:1
compression of an NTSC frame without perceptible degradation. That
could probably be pushed up to 50:1 without it looking too bad. 200:1
is the absolute max I've heard. The higher compression ratios were
claimed to be most useful on print images, e.g. mostly white with some
black.
Of course, JPEG is a *still image* compression technique (it doesn't
use any of the correlations between frames). CCITT/ISO is supposed to
(soon?) announce the MPEG (motion video compression) standard. Once
this gets into silicon, compression ratios should jump substantially.
I'd like to hear more numbers: the storage per frame required,
sustainable data rates to and from disk, as well as compression
ratios.
One concern I have is trying to do a quasi-realtime (i.e. 30Hz)
operation on a Unix (ok, really Mach underneath) machine. Is there
enough headroom so that as long as system is pretty quiescent you
don't need to kill all the daemons or reboot in single user mode?
The same issue applies to disk access. Do you need to use a raw
partition in order to get (barring bad block/cylinder forwarding)
contiguous storage? Or is there enough headroom so that the normal
filesystem is usually adequate?
Jim Helman
madler@piglet.caltech.edu (Mark Adler) (11/15/90)
>> The author is treating the "compression" like a dirty word.
>> Compression need not decrease the quality of the image any. It depends
>> on the type of compression.

Yes, it depends, and to get the compression ratios we're talking about, you certainly do not get back the original image bit-for-bit. The C-Cube chip has a user-selectable (approximate) compression ratio, and the higher you set that ratio, the worse the image gets on decompression. I have played with the "A" version of the chip, and I find that ratios of 20 or so do not degrade the image very noticeably. (Which is quite impressive if you think about it.) This assumes that the image is a "typical natural scene" without too much high-frequency stuff that a viewer might focus on (like text). The B version should be about the same; it simply conforms to the pretty much finished JPEG compression standard.

JPEG compression uses every trick in the book (well, not wavelets or fractals) to get those ratios. It breaks the image into 16x16 blocks and does a discrete cosine transform on them. This moves the high-frequency stuff to the lower right corner of the box. Then the information in the box is quantized (this is where information is lost) and run-length encoded. This process is done separately on the Y, U, and V channels of the color image, where Y is the luminance and U and V are the color-difference signals. U and V have half the resolution of Y, so some more information is lost there in the conversion from RGB.

As a final note, JPEG is a standard for still images. There is MPEG in the works for moving pictures, which compresses in the temporal direction and gets even higher ratios. It is similar to JPEG for the first frame, but subsequent frames (about 10 or so) are changes from the first frame, with 16x16 blocks moving around. This scheme still has problems, and a tentative standard may emerge in about a year. It will be longer before you see this in chips (there are some chips out now from LSI Logic, but they're just to speed up the experimentation). And there are other schemes, like wavelets and fractals, and who knows what else is around the corner...

Mark Adler
madler@piglet.caltech.edu
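The DCT-quantize-encode pipeline described above can be sketched in miniature. This is a toy (modern Python, stdlib only): the published JPEG standard transforms 8x8 blocks, and the uniform quantization step used here is made up for illustration, but it shows where the information loss happens and why smooth "natural" blocks survive it so well:

```python
import math

def dct2(block):
    """Naive 8x8 2-D DCT-II (orthonormal).  Real codecs use fast factored
    forms; this direct version just shows the transform itself."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# A smooth luminance ramp: the energy lands in a few low-frequency
# coefficients, and coarse quantization (the lossy step) zeroes the rest.
block = [[float(x + y) for y in range(8)] for x in range(8)]
coeffs = dct2(block)
step = 16.0                       # made-up uniform quantizer step size
quantized = [[round(c / step) for c in row] for row in coeffs]
zeros = sum(row.count(0) for row in quantized)
assert quantized[0][0] != 0       # the DC (average brightness) term survives
assert zeros >= 49                # almost everything else quantizes to zero
```

The long runs of zeros are what the run-length stage then squeezes; a block full of sharp edges or text spreads energy into the high frequencies instead, which is why such images degrade first.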
news@helens.Stanford.EDU (news) (11/15/90)
>JPEG is a large consortium (sorry, but I don't know what it stands for).

JPEG is a joint committee of the ISO and CCITT. It includes representatives from IBM, DEC, NEC, and C-Cube, among others. MPEG is a similar committee for motion video. I don't think an MPEG standard has been announced yet.

Acronym translation:
JPEG:  Joint Photographic Experts Group
MPEG:  Motion Picture Experts Group
ISO:   International Standards Organization
CCITT: Consultative Committee International Telegraph and Telephone

Jim Helman
Department of Applied Physics    Durand 012
Stanford University              FAX: (415) 725-3377
(jim@KAOS.stanford.edu)          Work: (415) 723-9127
olson@sax.cs.uiuc.edu (Bob Olson) (11/15/90)
In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes:
I am not at all familiar with NeXTdimension's JPEG compressed images. What
type of algorithm do they use? Is it delta modulation? Run length encoding?
Send only changes from one frame to the next? Selective color mapping?
I believe that the problem he may be talking about (ie lower
resolution with JPEG) is that JPEG is a "lossy" compression algorithm
-- you lose data when doing the compression. It seems to me also that
you could get as good of quality if you don't use compression.
If NeXTdimension is using 32 bit color (mentioned later in the article)
then isn't that *better* than what production houses are using? HDTV
is only 24 bit color. How are the 32 bits divided into the three
colors?
There are actually only 24 bits of color. There is another 8 bits of
alpha channel - transparency information. It makes for cool images: an
image of a Porsche where you can actually see *through* the windows.
What is the resolution of the NeXTdimension video?
1120x832
--bob
Bob Olson University of Illinois at Urbana/Champaign
Internet: rolson@uiuc.edu UUCP: {uunet|convex|pur-ee}!uiucdcs!olson
UIUC NeXT Campus Consultant NeXT mail: olson@fazer.champaign.il.us
"You can't win a game of chess with an action figure!" AMA #522687 DoD #28
turk@media-lab.media.mit.edu (Matthew Turk) (11/15/90)
> This is not necessarily true for all forms of compression. Straight
> run length encoding preserves every bit of detail in the original
> image. Delta modulation can come pretty close at a high enough
> sampling rate. ...

However, lossless compression techniques for static images can't perform anywhere near 37X compression; more like 6X or 8X, at best. I *think* the JPEG standard has a lossless option that performs somewhere around that. And by the way, remember that these numbers are for color images; the numbers for gray-scale images are about 3X worse.

Matthew
topix@gpu.utcs.utoronto.ca (R. Munroe) (11/15/90)
In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes:
>
> *** Some Stuff Deleted ***
>
>I may be wrong here, but it seems to me that people are producing
>"broadcast quality" images with Amiga's outfitted with genlock cards.
>There are some relatively expensive supposedly professional systems
>available for the Amiga right now. I've seen pictures of stuff created
>by the the Amiga that had supposedly been done for KTLA tv station and
>others.

I've been doing computer animation for a number of years (on Silicon Graphics workstations running NeoVisuals and Wavefront). I've *never* seen broadcast quality animation come off an Amiga.

Bob Munroe
topix@utcs.utoronto.ca
cleland@sdbio2.ucsd.edu (Thomas Cleland) (11/15/90)
In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes:
>
>I may be wrong here, but it seems to me that people are producing
>"broadcast quality" images with Amiga's outfitted with genlock cards.
>There are some relatively expensive supposedly professional systems
>available for the Amiga right now. [...]

You're right. Properly equipped Amigas produce video of broadcast quality; in the case of the Video Toaster, it is superior to standard broadcast quality. The gist of the method is that the video is never digitized: all the processing, overlays, digital and video effects, etc., are done in NTSC composite video. Neither the Mac nor the PC has the bandwidth necessary for a Toaster-like board to operate. I suspect that the NeXT does. I was told at a recent Toaster demo that network-affiliated stations were replacing some of their dedicated equipment with Toasters.

Thom Cleland
tcleland@ucsd.edu
madler@piglet.caltech.edu (Mark Adler) (11/15/90)
kenb@d9.amc.com (Ken Birdwell) figures: >> But if you figure the a 24 bit image (640*480*3) takes a little under >> a meg (921600 bytes) ... >> ... about 60 minutes at 72:1, which I assume to be fairly low quality. The compression ratios apply to the YUV image, which averages 16 bits per pixel instead of 24 (since the U and V are actually 320x480). This gives 614400 bytes. Even so, this would require a compression ratio of 44:1 to get 60 minutes on a 1.4G drive. The C-Cube chip shows some degradation at 50:1, but one could argue that it's still better than you'd get on a normal analog TV. On another note, the standard for digital TV (used by digital VTR's) is 720x243 (before interlacing, giving about 720x484 since there are two empty lines). I wonder why NeXT is using 640x480? I hope they weren't influenced by (shudder) Mac's or PC's. Mark Adler madler@piglet.caltech.edu
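Adler's disk arithmetic checks out if "1.4GB" is read as binary gigabytes. A quick sketch (modern Python; the 2**30 interpretation of the drive size is my assumption, chosen because it reproduces his 44:1 figure):

```python
# 640x480 video averaging 16 bits/pixel in YUV (U and V stored at
# 320x480), played back at 30 frames per second.
bytes_per_frame = 640 * 480 * 2
assert bytes_per_frame == 614400            # the figure in Adler's post

disk_bytes = 1.4 * 2 ** 30                  # "1.4GB", binary interpretation
data_rate = bytes_per_frame * 30            # uncompressed bytes per second

seconds_uncompressed = disk_bytes / data_rate          # roughly 80 s raw
ratio_for_60_minutes = 3600 * data_rate / disk_bytes   # roughly 44:1
assert 43 < ratio_for_60_minutes < 45
```

At the 50:1 setting where the C-Cube chip starts to show degradation, the same disk holds a little over an hour, which squares with the brochure's 60-minute claim.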
mark@calvin..westford.ccur.com (Mark Thompson) (11/15/90)
In article <OLSON.90Nov14142745@sax.cs.uiuc.edu> olson@sax.cs.uiuc.edu (Bob Olson) writes: >In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes: >> What is the resolution of the NeXTdimension video? > >1120x832 Sorry but I find this VERY hard to believe. The entire NeXT screen may be this resolution but I doubt very much that the real-time video is this resolution. Something in the range of 640x480 is far more likely. Considering the bandwidth limitations of NTSC video and the loss incurred by JPEG compression, the effective resolution will probably even be below 640x480. Now it is possible that pixel replication or image scaling are performed to fill 1120x832 pixels, but that does not reflect the actual resolution. +--------------------------------------------------------------------------+ | Mark Thompson | | mark@westford.ccur.com | | ...!{decvax,uunet}!masscomp!mark Designing high performance graphics | | (508)392-2480 engines today for a better tomorrow. | +------------------------------------------------------------------------- +
mark@calvin..westford.ccur.com (Mark Thompson) (11/15/90)
In article <1990Nov15.013438.4075@gpu.utcs.utoronto.ca> topix@gpu.utcs.utoronto.ca (R. Munroe) writes:
>In article <velasco.658597478@beowulf> velasco@beowulf.ucsd.edu (Gabriel Velasco) writes:
>>I may be wrong here, but it seems to me that people are producing
>>"broadcast quality" images with Amiga's outfitted with genlock cards. [...]
>
>I've been doing computer animation for a number of years (on Silicon Graphics
>workstations running NeoVisuals and Wavefront). I've *never* seen broadcast
>quality animation come off an Amiga.

Then you have obviously not seen the Video Toaster for the Amiga. This brand-new peripheral is a dual 24-bit frame buffer, frame grabber, digital video effects generator, seven-input switcher, 35ns character generator, paintbox, and 3D animation system. It DOES produce broadcast quality output, and when you consider that the board, including all the software, is under $1600, it is utterly amazing! It has been reviewed in a number of the professional video mags, and the reviewers were astonished to find that, unlike most computer-based video products these days, it does produce broadcast quality output. It is certainly not as fast as an SGI machine, but at about 1/30 the hardware/software price, I will wait the extra couple of minutes per rendered frame. If you would like more info, email me.

+--------------------------------------------------------------------------+
|  Mark Thompson                    mark@westford.ccur.com                 |
|  ...!{decvax,uunet}!masscomp!mark    Designing high performance graphics |
|  (508)392-2480                       engines today for a better tomorrow.|
+--------------------------------------------------------------------------+
evgabb@sdrc.UUCP (Rob Gabbard) (11/16/90)
From article <velasco.658597478@beowulf>, by velasco@beowulf.ucsd.edu (Gabriel Velasco):
>
> is only 24 bit color. How are the 32 bits divided into the three
> colors?
>

They are probably using 24-bit color with 8-bit alpha blending.
-- 
Rob Gabbard (uunet!sdrc!evgabb)
Technical Development Engineer
Structural Dynamics Research Corp
dtynan@unix386.Convergent.COM (Dermot Tynan) (11/17/90)
In article <1990Nov15.115234.4438@nntp-server.caltech.edu>, madler@piglet.caltech.edu (Mark Adler) writes:
>
> The compression ratios apply to the YUV image, which averages 16 bits
> per pixel instead of 24 (since the U and V are actually 320x480).
>
> On another note, the standard for digital TV (used by digital VTR's) is
> 720x243 (before interlacing, giving about 720x484 since there are two
> empty lines). I wonder why NeXT is using 640x480? I hope they weren't
> influenced by (shudder) Mac's or PC's.

Actually, referring to the American Cinematographers Manual (which I
*don't* have in front of me, so forgive any roundoff errors), not only is
"broadcast quality" defined as 720x484, it also requires "from 8 to 12
bits" per color! Seeing as they only used approximately 5 bits per color,
they probably figured they had compromised anyway, and might as well use
640x480, which seems to be deep-rooted in ancient mysticism, along with 80
characters per line and 24 lines per screen (monitor bandwidth and P31
resolution black-magic incantations heard in the background).

	- Der
-- 
Dermot Tynan	dtynan@zorba.Tynan.COM
{altos,apple,mips,pyramid}!zorba!dtynan
"Five to one, baby, one in five. No-one here gets out alive."
dave@imax.com (Dave Martindale) (11/17/90)
In article <1990Nov15.115234.4438@nntp-server.caltech.edu> madler@piglet.caltech.edu (Mark Adler) writes:
>
>On another note, the standard for digital TV (used by digital VTR's) is
>720x243 (before interlacing, giving about 720x484 since there are two
>empty lines). I wonder why NeXT is using 640x480? I hope they weren't
>influenced by (shudder) Mac's or PC's.

There are two types of digital VTRs, D-1 and D-2. D-1 uses a 13.5 MHz
sampling frequency, while D-2 uses four times the subcarrier frequency.
Both of these frequencies were chosen for convenience in digitizing and
reconstructing an analog signal, without regard to the actual horizontal
pixel count: 720 in one case, something else in the other. The vertical
pixel count is fixed by the number of active scan lines in the picture.
Both of these standards give non-square pixels.

On the other hand, most frame buffers that work with video signals want to
provide square pixels, so they use whatever pixel clock rate is necessary
to give square pixels at the fixed vertical resolution.

The difference between 480 and 484 is a few lines at the very top and
bottom of the screen that are never visible to an ordinary viewer anyway.
The digital VTRs, of course, are expected to record all of the relevant
parts of the video signal (more than 484 lines, perhaps all 525), while
the frame buffer need provide only the visible portion.
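[Dave's D-1 numbers can be checked with a little arithmetic. A sketch,
assuming standard NTSC line timing (525 lines at 29.97 Hz); the figures
are the usual published ones, not taken from the thread. --ed.]

```python
# D-1 samples luminance at a fixed 13.5 MHz, regardless of pixel geometry.
LUMA_SAMPLE_RATE = 13.5e6            # Hz, D-1 luminance sampling clock
LINE_RATE = 525 * 30 / 1.001         # NTSC scan lines per second (~15734 Hz)

# Total samples that fit in one full scan line (active + blanking):
samples_per_line = LUMA_SAMPLE_RATE / LINE_RATE   # ~858; 720 are active picture

# A square-pixel frame buffer showing 480 active lines on a 4:3 display
# instead wants 480 * 4/3 samples per line -- hence the familiar 640x480.
square_pixels_per_line = 480 * 4 / 3              # 640.0
```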
madler@piglet.caltech.edu (Mark Adler) (11/18/90)
Dermot Tynan (dtynan@zorba.Tynan.COM) notices:
>> Actually, referring to the American Cinematographers Manual (which I
>> *don't* have in front of me, so forgive any roundoff errors), not only
>> is "broadcast quality" defined as 720x484, it also requires "from 8 to
>> 12 bits" per color! Seeing as they only used approximately 5 bits per
>> color, they probably figured they had compromised anyway, and might as
>> well use 640x480, which seems to be deep-rooted in ancient mysticism,
>> along with 80 characters per line and 24 lines per screen (monitor
>> bandwidth and P31 resolution black-magic incantations heard in the
>> background).

Well, sort of. Computing 5.3 bits per color is misleading. There are
still 8 bits each for Y, U, and V; it's just that there are half as many
U's and V's. I said there was an average of 16 bits per pixel, but really
there are 32 bits per two pixels. The YUV gets converted to RGB in a way
that gives a full eight bits of resolution to R, G, and B for each pixel,
except that there is a subtle correlation between adjacent even and odd
pixels. As someone else pointed out, U and V combined get the same
bandwidth that Y does in an NTSC broadcast, which is how this elementary
3:2 compression of RGB retains broadcast quality. However, one might
consider "broadcast quality" to be an oxymoron.

The numbers 480 and 640 do seem to have Kabbalistic origins, along with
24 and 80.

Mark Adler
madler@piglet.caltech.edu
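[Mark's "32 bits per two pixels" is the familiar 4:2:2 packing: two Y
samples share one U and one V. A sketch of the expansion he describes,
using a common CCIR-601-style conversion matrix; the exact coefficients
NeXT uses are an assumption, not documented here. --ed.]

```python
def yuv422_to_rgb_pair(y0, y1, u, v):
    """Expand one 4:2:2 sample group (32 bits: two Y's sharing one U and
    one V) into two full 8-bit-per-channel RGB pixels.  Adjacent even and
    odd pixels share chroma -- the 'subtle correlation' in the post."""
    def to_rgb(y, u, v):
        c, d, e = y - 16, u - 128, v - 128
        r = 1.164 * c + 1.596 * e
        g = 1.164 * c - 0.392 * d - 0.813 * e
        b = 1.164 * c + 2.017 * d
        return tuple(max(0, min(255, round(x))) for x in (r, g, b))
    return to_rgb(y0, u, v), to_rgb(y1, u, v)

# Reference black (Y=16) and white (Y=235) with neutral chroma:
black_pair = yuv422_to_rgb_pair(16, 16, 128, 128)    # two (0, 0, 0) pixels
white_pair = yuv422_to_rgb_pair(235, 235, 128, 128)  # two (255, 255, 255) pixels
```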
bakke@plains.NoDak.edu (Jeffrey P. Bakke) (11/19/90)
In article <254@sdrc.UUCP> evgabb@sdrc.UUCP (Rob Gabbard) writes:
> From article <velasco.658597478@beowulf>, by velasco@beowulf.ucsd.edu (Gabriel Velasco):
> > is only 24 bit color. How are the 32 bits divided into the three
> > colors?
> They are probably using 24 bit color with 8 bit alpha blending.

I believe the distributed announcements mention 24-bit color, and the
other 8 bits are called 'transparency planes' or something similar:
something to let you display images with a 'sprite'-type ability, e.g. you
can view overlaid images through parts of the top image. I don't really
know a whole lot about it or how it works, but the actual colors available
do seem to be 24 bits only.
-- 
Jeffrey P. Bakke                   | There are a finite number of
  INTERNET: bakke@plains.NoDak.edu | jokes in the world...
  UUCP : ...!uunet!plains!bakke    | The overflow began
  BITNET : bakke@plains.bitnet     | decades ago.
    "I am not a number, I am a free man!" - The Prisoner
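[One plausible use of those 8 extra bits is an "over" composite, blending
a foreground pixel onto whatever lies beneath it by its alpha value. A
minimal sketch assuming straight (non-premultiplied) 8-bit alpha; this is
illustrative, not anything documented for the NeXTdimension. --ed.]

```python
def over(top, bottom):
    """Composite a 32-bit RGBA pixel (24-bit color plus an 8-bit
    alpha/'transparency' value) over an opaque RGB background pixel."""
    r, g, b, a = top
    alpha = a / 255                      # 0 = fully transparent, 255 = opaque
    return tuple(round(alpha * t + (1 - alpha) * s)
                 for t, s in zip((r, g, b), bottom))

# A half-transparent red pixel over a blue background blends the two:
blended = over((255, 0, 0, 128), (0, 0, 255))
```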
clp@wjh12.harvard.edu (Charles L. Perkins) (11/19/90)
It was stated that the "real" standard for broadcast quality requires 8-12
bits per color. Unless I misunderstand what was meant, the NeXTdimension
could in theory meet this (with 8 bits per color) as long as it was
allowed to output only a few minutes at a time (some compression without
loss is still possible, even with full color). Or, with even better
frame-at-a-time VCRs on their way, it could do full resolution on all
frames.

Charles
peb@Autodesk.COM (Paul Baclaski) (11/27/90)
I missed the beginning of this thread, so please bear with me if this was
already mentioned... Does the NTSC output of a NeXT machine do overscan?
This is more of a must for "broadcast quality" than N bits per pixel is.

Paul