[comp.graphics] Digitized NTSC to RGB?

sleat@ez.ardent.com (Michael Sleator) (02/02/90)

In article <3089@becker.UUCP> bdb@becker.UUCP (Bruce Becker) writes:
>In article <11339@ucsd.Edu> brian@ucsd.Edu (Brian Kantor) writes:
>|I have a bunch of data which consists of 8-bit pixels that represent
>|digitized composite NTSC.  To display it in monochrome is trivial, but
>|I want to turn it into RGB, which requires extracting the Y, I and Q
>|signals out of the image and then doing the matrix thing.  It was
>|sampled at 3x the burst freq (about 10.7 Mhz).
>
>	It seems to me that you will have needed to
>	precisely synchronize the sampling frequency
>	(and phase) to the color burst or you won't
>	get the color back out (or at least not without
>	a lot of truly tedious guesswork).

I don't believe this is true.  Look at it this way:  The subcarrier frequency
is approx. 3.58MHz.  The color bandwidths are limited to roughly 500kHz for
the Q channel and 1.3MHz for the I channel.  Therefore the upper sideband
for the color information should be limited to about 4.9MHz, no?  Thus the
10.7Msample/sec rate meets the Nyquist criterion, and he should be able to
reconstruct both the burst and the chroma, regardless of the sampling phase.
Am I missing something?  (There are certainly a lot of less tangible factors
limiting the resolution to which he will be able to extract the color
information, but this is just a first-order analysis. [see below])

>	You'll need a filter function since the bandwidth
>	is asymmetrical about the burst frequency, so that
>	below 500 Khz the amplitude is ~ double what it
>	"should" be.

I don't quite understand what you mean by the last bit of this.  The NTSC spec,
as I have it, is:
	Q Bandwidth:
		< 2 dB down @ 400 kHz
		< 6 dB down @ 500 kHz
		>= 6 dB down @ 600 kHz

	I Bandwidth:
		< 2 dB down @ 1.3 MHz
		>= 20 dB down @ 3.6 MHz

It's commonly accepted that many people "cheat", and simply limit both channels
to about 500kHz, but I don't see how that could engender your statement about
the amplitude being double what it "should" be. ???  Could you explain?


>	... Assuming you've done some
>	magic normalization to the monochrome values
>	you already know how to get, apply the standard
>	YIQ to RGB transform to the YIQ values you have
>	obtained.

I don't know exactly what you mean by "magic normalization", but it seems
to me that it should go something like this: (set you=brian@ucsd.Edu)

Depending on the quality of the source from which the original signal came,
you might have to perform an "Automatic Gain Control" function.  The usual
way to do this is to look at the difference in level between the tip of the
sync pulse and the blanking level (say, immediately before the color burst).
Pick a reference level for this, then normalize the values for every scan
line on the basis of the difference between your reference level and the
sync-to-blank level at the beginning of that line.  If the video source
and the digitizer are DC coupled and very stable, then you might not need
to do this.  However, most video equipment is AC coupled, which means that
the absolute level will fluctuate with the image content.
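A minimal sketch of that per-line AGC, assuming you can index into each scan
line by sample position.  The slice positions and the reference swing here
are made-up placeholders; the real offsets depend entirely on your
digitizer's line timing.

```python
import numpy as np

# Hypothetical sample positions within one digitized scan line.
SYNC_SAMPLES = slice(20, 60)    # assumed: somewhere on the sync tip
BLANK_SAMPLES = slice(80, 110)  # assumed: back porch, before the burst
REF_SWING = 40.0                # chosen reference sync-to-blank level

def normalize_line(line):
    """Scale one scan line so its sync-to-blank swing equals REF_SWING,
    with the blanking level shifted to zero."""
    sync_tip = np.median(line[SYNC_SAMPLES])   # median rejects noise spikes
    blank = np.median(line[BLANK_SAMPLES])
    gain = REF_SWING / (blank - sync_tip)
    return (line - blank) * gain
```

Normalizing every line this way takes out the level drift you get from
AC-coupled equipment, at the cost of trusting the sync tip and back porch
to be clean on each line.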

You might be able to get some mileage from knowing certain other things about
the original signal, such as the fact that the sync and color burst waveforms
are very stable, so you can treat them as a repetitive waveform and get a
synthetic higher sampling rate.  This might reduce one of the error terms in
demodulating the chroma by allowing you to reconstruct the burst with higher
accuracy.  (This assumes that they *are* indeed stable, and that your sample
clock was not synchronized to the video.)
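Even without that synthetic oversampling trick, you can get a burst phase
estimate from the raw samples by projecting them onto quadrature references
at the known subcarrier frequency -- equivalent to a least-squares fit of
A*cos(wt + phi).  This is a sketch under the assumption that the sample
clock frequency (though not its phase) is accurately known:

```python
import numpy as np

F_SC = 315e6 / 88    # NTSC color subcarrier
F_SAMPLE = 3 * F_SC  # sample rate from the original post

def burst_phase(burst, f_sc=F_SC, f_s=F_SAMPLE):
    """Estimate phase (radians) and amplitude of the color burst by
    synchronous detection: project the samples onto cos and sin at the
    subcarrier frequency and read off the resulting phasor."""
    t = np.arange(len(burst)) / f_s
    w = 2 * np.pi * f_sc
    i = 2 * np.mean(burst * np.cos(w * t))   # in-phase component
    q = -2 * np.mean(burst * np.sin(w * t))  # quadrature component
    return np.arctan2(q, i), np.hypot(i, q)
```

With an integral number of subcarrier cycles in the burst window the
double-frequency terms average out exactly; otherwise they contribute a
small residual error.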

Then you can filter out the chrominance component (the 3.58MHz subcarrier)
to get the luminance on one hand, and demodulate the phase-encoded subcarrier
to get the chrominance on the other.  Don't forget the silly 33-degree phase
shift, or your colors will be all wacko.  (See, it's just a Simple Matter of
Programming!) (note waving of hands...)
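The "matrix thing" and the 33-degree shift look roughly like this.  A
hedged sketch: the demodulation references here assume chroma of the form
I*cos(ref) + Q*sin(ref) with phase measured relative to burst, and the
matrix coefficients are the commonly quoted FCC YIQ-to-RGB values (some
references differ in the third decimal place).  Low-pass filtering of the
demodulated products is still needed and is not shown.

```python
import numpy as np

F_SC = 315e6 / 88  # NTSC color subcarrier

def demodulate(chroma, t, burst_phi=0.0):
    """Multiply the (band-pass filtered) chroma by quadrature carriers
    offset 33 degrees from the burst phase.  The products must then be
    low-pass filtered to recover I and Q."""
    ref = 2 * np.pi * F_SC * t + burst_phi + np.radians(33)
    return 2 * chroma * np.cos(ref), 2 * chroma * np.sin(ref)

# Commonly quoted FCC YIQ -> RGB matrix.
YIQ_TO_RGB = np.array([
    [1.0,  0.956,  0.621],
    [1.0, -0.272, -0.647],
    [1.0, -1.106,  1.703],
])

def yiq_to_rgb(y, i, q):
    """Apply the standard YIQ-to-RGB transform to one pixel."""
    return YIQ_TO_RGB @ np.array([y, i, q])
```

Note that yiq_to_rgb(y, 0, 0) gives equal R, G, and B, as it must for a
colorless pixel.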


[This is the below to which you were directed above...]
An interesting question is just how much color information you'll be able
to reconstruct with eight bits per pixel, and what the quantization error
will look like.  Different encoding schemes will display different sensitivities
to errors, and NTSC was not designed with digitization in mind.  Errors in
the amplitude of the subcarrier shouldn't cause much problem because they
translate into errors in saturation, which the viewer should be fairly
tolerant of.  However, the limited sample resolution will also show up as
errors in the subcarrier phase, which translate into errors in hue, which
the viewer will be quite sensitive to.  Hmmm...  I'm sure people in the TV
industry have worked through all of this in excruciating detail.  If anyone
here has pointers to good articles on the subject, I'd like to hear about them.
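A crude back-of-envelope for the hue-error worry, under an assumption I'm
pulling out of the air: say the chroma subcarrier for a moderately
saturated color peaks at about 40 of the 256 quantization levels.

```python
import math

chroma_amplitude_lsb = 40.0  # assumed peak chroma amplitude, in LSBs
quant_error_lsb = 0.5        # worst-case rounding error per sample

# A +/-0.5 LSB error in the quadrature component of a phasor of
# amplitude A tilts its angle by about atan(0.5 / A).
phase_error_deg = math.degrees(math.atan(quant_error_lsb / chroma_amplitude_lsb))
print(f"worst-case hue error: ~{phase_error_deg:.2f} degrees")
```

That comes out well under a degree for a single sample, and averaging over
the burst should do better still, so the 8-bit hue error may be less scary
than it first appears -- for strongly saturated colors, anyway.  Weakly
saturated colors span fewer levels and fare proportionally worse.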


>	An interesting problem, actually...

Yeah.  It sounds like fun.  Let us know how it goes.


Michael Sleator			Voice:		408-732-0400
Stardent Computer		FAX:		408-735-0635
880 W. Maude			internet:	sleat@ardent.com
Sunnyvale, CA  94086		uucp:		...!uunet!ardent!sleat