bender@oobleck.Eng.Sun.COM (Michael Bender) (01/09/91)
I'm modifying my Panasonic WJ-MX12 video switcher to do a different type
of video insert keying, and while designing the modification I came across
this circuit stub in the chroma signal path (the MX12 separates Y and C
and processes them separately):

[ASCII schematic garbled in transit.  Components recoverable from the
text: four 0.1 coupling caps, two 10K bias resistors (to +5V and +VRef),
two NPN emitter followers each with an 820-ohm emitter resistor, and a
100-ohm series resistor with 10 pF to ground between the two stages --
chroma in at the left, chroma out at the right.]

Note: VRef is about 2.8V

I think this is a simple lowpass filter, but would it also introduce a
phase shift?  Any help on what this really does would be appreciated!

Also, is chroma information phase encoded, i.e. the phase of the chroma
signal determines the particular hue that you will see?

thanks,
mike
-- 
Won't look like rain, Won't look like snow,        | DOD #000007
Won't look like fog,  That's all we know!          | AMA #511250
We just can't tell you anymore,
We've never made oobleck before!                   | MSC #298726
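On the phase-shift question: any single-pole RC lowpass shifts phase (up
to 90 degrees, with 45 degrees at the corner).  A minimal sketch, assuming
the 100-ohm resistor and the 10 pF cap are the R and C doing the filtering
(component roles inferred from the garbled schematic, so treat the numbers
as illustrative):

```python
import math

# Assumed values, read off the (garbled) schematic: 100 ohms in series,
# 10 pF to ground between the two emitter followers.
R = 100.0        # ohms
C = 10e-12       # farads

f_c = 1 / (2 * math.pi * R * C)      # -3 dB corner of a first-order RC

def phase_lag_deg(f):
    """Phase lag a single-pole RC lowpass introduces at frequency f."""
    return math.degrees(math.atan(f / f_c))

print(f"corner: {f_c / 1e6:.0f} MHz")
print(f"lag at the 3.58 MHz subcarrier: {phase_lag_deg(3.58e6):.1f} deg")
```

If those really are the R and C, the corner lands near 159 MHz, far above
the chroma band, so the phase shift at 3.58 MHz is only about a degree.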
grege@gold.gvg.tek.com (Greg Ebert) (01/11/91)
In article <5497@exodus.Eng.Sun.COM> bender@oobleck.Eng.Sun.COM (Michael Bender) writes:
>I think this is a simple lowpass filter, but would it also introduce a
>phase shift?  Any help on what this really does would be appreciated!

Yes, it will low-pass filter, but I'm not about to analyze the beast.

>also, is chroma information phase encoded, i.e. the phase of the
>chroma signal determines the particular hue that you will see?

Yes.  Chroma information consists of amplitude and phase.  There is a
reference burst between Hsync and active video which locks a PLL once
each video line.  The phase difference between the PLL and the
instantaneous chroma signal corresponds to hue (i.e., the color), and
the amplitude of the chroma signal corresponds to the saturation (how
vivid that particular hue is).  The camera and display work in RGB, but
the signal is encoded as luminance, chroma phase, and chroma amplitude.
It's more fun when done digitally :) !

Tidbit: The NTSC standard was developed in the 1940's to be compatible
with monochrome and still fit inside 6 MHz.  Considering the state of
technology back then, those guys were damn clever!
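Greg's description maps directly onto quadrature demodulation: correlate
the chroma against burst-locked sine/cosine references, then amplitude is
saturation and angle is hue.  A minimal Python sketch (the 0.7 saturation
and the 103-degree hue -- roughly the vectorscope angle usually quoted
for red -- are illustrative values, not from the post):

```python
import math

F_SC = 3_579_545.0   # NTSC colour subcarrier, Hz (rounded)

def chroma_sample(t, saturation, hue_deg):
    """Chroma at time t: amplitude carries saturation, phase carries hue."""
    return saturation * math.sin(2 * math.pi * F_SC * t + math.radians(hue_deg))

def demodulate(samples, times):
    """Recover (saturation, hue) by correlating against sine/cosine
    references locked to the burst -- the job the receiver's PLL enables."""
    i = q = 0.0
    for s, t in zip(samples, times):
        phi = 2 * math.pi * F_SC * t
        i += s * math.sin(phi)
        q += s * math.cos(phi)
    n = len(samples) / 2
    return math.hypot(i / n, q / n), math.degrees(math.atan2(q, i)) % 360

# one full subcarrier cycle, oversampled 100x
times = [k / (F_SC * 100) for k in range(100)]
samples = [chroma_sample(t, 0.7, 103.0) for t in times]
sat, hue = demodulate(samples, times)   # sat ~ 0.7, hue ~ 103.0
```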
ISW@cup.portal.com (Isaac S Wingfield) (01/17/91)
In article <767@ssc.UUCP> markz@ssc.UUCP (Mark Zenier) writes:
>Does anyone know the reason they picked 3.579545... Mhz for the

Most of the energy in a (B&W) television image lies at frequencies which
are multiples of the horizontal rate (15,734.26 Hz for NTSC color).  The
chroma signal carrier (actually a suppressed carrier) is at an odd
multiple of half the horizontal rate; this causes the chroma sidebands to
interdigitate with the luma information and produce minimal interference
with it.

As the color subcarrier frequency goes up to higher multiples, the
available bandwidth increases, but the sound subcarrier is 4.5 MHz above
the video carrier, so higher subcarrier frequencies cause progressively
more cross-interference between chroma and sound.  3.579545 MHz is a
compromise.

Another part of the compromise, and for which I can't remember the
reason, involved changing the horizontal and vertical rates from the B&W
values (15,750 Hz & 60 Hz) to slightly different ones (15,734.26 Hz &
59.94 Hz).  This may also have to do with minimizing crosstalk with the
sound carrier.

Isaac
isw@cup.portal.com
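Isaac's relationships can be checked exactly.  A sketch in Python, using
the exact 315/88 MHz subcarrier value (the precise figure later written
into the standard -- my addition, not from the post), plus the fact (also
an addition) that the 4.5 MHz sound spacing is exactly 286 times the
color line rate:

```python
from fractions import Fraction

F_SC = Fraction(315_000_000, 88)   # 3 579 545.45... Hz, exact subcarrier

# The subcarrier sits at the 455th multiple of half the line rate
# (455 is odd), so the line rate follows exactly:
F_H = F_SC * 2 / 455               # 15 734.265... Hz
F_V = F_H / Fraction(525, 2)       # 262.5 lines per field -> 59.940... Hz

assert 455 % 2 == 1                # odd multiple of f_h/2, as Isaac says
assert 286 * F_H == 4_500_000      # sound intercarrier is line-locked too
```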
gaarder@batcomputer.tn.cornell.edu (Steve Gaarder) (01/18/91)
In article <38104@cup.portal.com> ISW@cup.portal.com (Isaac S Wingfield) writes:
>In article <767@ssc.UUCP> markz@ssc.UUCP (Mark Zenier) writes:
>>Does anyone know the reason they picked 3.579545... Mhz for the
>Most of the energy in a (B&W) television image lies at frequencies which are
>multiples of the horizontal rate (15,734.26 for NTSC color). The chroma signal
>carrier (actually a suppressed carrier) is at an odd multiple of half the
>horizontal rate; this causes the chroma sidebands to interdigitate with the
>luma information and produce minimal interference with it.

A good way to think of this is that the subcarrier produces a very fine
pattern of alternating black and white dots on the screen, which toggles
between black and white with every frame.

>Another part of the compromise, and for which I can't remember the
>reason, involved changing the horizontal and vertical rates from the B&W
>values (15,750Hz & 60Hz) to slightly different ones (15,734.26Hz &
>59.94Hz). This may also have to do with minimizing crosstalk with the
>sound carrier.

Actually, it was done to make deriving those frequencies from the
3.579545 MHz master easier.  At the time, frequency division was done by
synchronizing an oscillator, which was fine for small divisors but
impractical for large ones.  Thus a division ratio was chosen that could
be factored completely into small numbers.  (Divide the burst by 13, 5,
and 7, then double it, to get the horizontal frequency.)

Steve Gaarder
gaarder@theory.tn.cornell.edu    ...!cornell!batcomputer!gaarder
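Steve's divider chain can be sketched directly (the 315/88 MHz master
value is the exact standardized burst frequency, my addition):

```python
# Divide the burst by 13, 5, and 7 (i.e., by 455 in small, easy-to-lock
# steps), then double, to get the horizontal rate; divide down further
# for the vertical rate.
burst = 315e6 / 88                  # 3 579 545.45... Hz

f_h = burst / 13 / 5 / 7 * 2        # horizontal rate
f_v = f_h / 525 * 2                 # 525 lines per frame, two fields per frame

print(round(f_h, 2), round(f_v, 2))  # -> 15734.27 59.94
```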
markz@ssc.UUCP (Mark Zenier) (01/19/91)
In article <1991Jan17.192816.23881@batcomputer.tn.cornell.edu>, gaarder@batcomputer.tn.cornell.edu (Steve Gaarder) writes:
> Actually, it was done to make deriving those frequencies from the 3.579545
> master easier.  At the time, frequency division was done by synchronizing
> an oscillator, which was fine for small divisors but impractical for large
> ones.  Thus a division ratio was chosen that could be factored completely
> into small numbers.  (Divide the burst by 13, 5, and 7, then double it, to
> get the horizontal frequency)

Ok, and 5 MHz divided by 8 and 11, then multiplied by 7 and 9, is
3.579545... MHz.  Has anyone seen any design rationale published by the
NTSC?  Ancient history, electronic style.

markz@ssc.uucp
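Mark's factoring checks out: 5 MHz * 63/88 and 315 MHz / 88 are the same
number.  A one-line sketch (the 315/88 MHz comparison value is my
addition):

```python
# Two factorings of the same subcarrier frequency, both built from
# small integers a synchronized-oscillator divider could handle.
f_sc_a = 5_000_000 / 8 / 11 * 7 * 9   # Mark's chain: 5 MHz * 63/88
f_sc_b = 315e6 / 88                   # exact subcarrier definition

assert abs(f_sc_a - f_sc_b) < 1e-3    # 3 579 545.45... Hz either way
```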