jj@alice.UUCP (05/08/86)
Todd at NBI says
> There are a few misapprehensions involved in his article.
> In a CD player utilizing digital filtering, the digital filter can be
> considered the oversampling device (oversampler). The stream of 16 bit
> ...
> times by the digital filter such that the input stream of the digital
> filter is formatted as follows (read right to left):
>
>   W4 W4 W4 W4 W3 W3 W3 W3 W2 W2 W2 W2 W1 W1 W1 W1
>
>                        **OR**
>
> Each word of the 44.1K stream is sampled one time and zeros are
> entered as the remaining three samples for that given period. The
> digital filter input stream would appear as follows:
>
>   0 0 0 W4 0 0 0 W3 0 0 0 W2 0 0 0 W1
>
> I won't discuss much of the differences in results between these two
> methods. I don't know which of these is commonly used by
> manufacturers of CD players. I do prefer the first approach because
> it does give some interpolation between inputted samples (in addition
> to the filtering

Sorry to disappoint you, but the second approach is used. It provides
a much simpler sin x / x correction, and also provides just as much
interpolation after things are put through the filter (which is in the
neighborhood of 96*4 samples long after interpolation). It's not just
adjacent samples that are interpolated, it's a considerable history of
the signal that's interpolated, and zeros are put between the known
samples for several other reasons as well...

> function) due to the implementation of the digital filtering. The
> digital filter's output stream is at a 176.4 K rate.

Some are at 2x instead of 4x.

> This process does not correct errors, but will help some in filtering
> high frequency components of 'ticks' resulting from poor error conceal-

Not really. Since the interpolation FILTER is flat across the entire
original band (0-20K), no "audible" <that's another question that I
won't argue in THIS article> energy is filtered out of the click, so
nothing is disguised!

> ment.
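[A small sketch of the zero-stuffing approach, as an illustration only
(not from any actual player): each 44.1 K word is followed by three
zeros, and the resulting 176.4 K stream is run through a lowpass FIR
filter. The triangular taps here are a toy linear-interpolation
kernel, nothing like a real 96 tap CD filter.]

```python
def zero_stuff(words, factor=4):
    """Insert factor-1 zeros after each input word (44.1K -> 176.4K)."""
    out = []
    for w in words:
        out.append(w)
        out.extend([0] * (factor - 1))
    return out

def fir(signal, taps):
    """Plain FIR convolution with a zero-padded history."""
    out = []
    for n in range(len(signal)):
        acc = 0
        for k, t in enumerate(taps):
            if n - k >= 0:
                acc += t * signal[n - k]
        out.append(acc)
    return out

# Toy triangular kernel: turns zero-stuffing into linear interpolation.
taps = [0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]

stuffed = zero_stuff([4.0, 8.0, 4.0])
smooth = fir(stuffed, taps)
```

The peak tap reproduces each original word (delayed), and the stuffed
zero positions come out filled with interpolated values in between.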
> Unrecoverable errors (mis-read words off of the disc that cannot be
> corrected) must undergo error concealment, attempting to substitute
> acceptable values for missing words. Some methods used in error
> concealment are 1) muting (convert missing value to 0), 2) repeating
> [?] (convert missing value to most recent correct value), 3)
> interpolation (substitute missing value with the average of the
> previous and following values), and 4) use of a polynomial algorithm.
> The methods of concealment are listed in order from most to least
> likely to cause audible 'ticks'. Error concealment is performed while
> the data stream is still at the 44.1 K rate and before oversampling.

True.

> Although I do not know of an application of this in any product, a
> data interpolator could be considered an effective oversampling
> device. Consider a case in which the interpolator takes four samples
> of each word in the 44.1 K stream. The interpolator could take the
> mean average of each group of four inputs from the effective inputted
> 176.4 K rate. Note the following diagram illustrating this:
>
> W4 W4 W4 W4 W3 W3 W3 W3 W2 W2 W2 W2 W1 W1 W1 W1
> {M--------M}{J--------J}{G--------G}{D--------D}{A--------A}
>    ------N}{K--------K}{H--------H}{E--------E}{B--------B}
>        --O}{L--------L}{I--------I}{F--------F}{C--------C}
>
> A=(W1+W1+W1+W1)/4
> B=(W1+W1+W1+W2)/4
> C=(W1+W1+W2+W2)/4
> :
> :
> :

This is a very simple interpolator, with a very simple filter, and
very little filtering of frequency components above the original
Nyquist rate. One of the major reasons for digital interpolation is to
make the analog filters simpler. This example wouldn't do it.

> The algebraic expressions above represent the values of each word
> outputted from the interpolator. Notice that there will be four
> ...
> wide, square impulse response. ___+---+___ As its impulse response
> is symmetrical, no phase distortion will be introduced.)
>
> Oversampling in itself, does nothing for us.
> The oversampling device is what gives us advantages. Oversampling
> digital filters as well as interpolators can be considered to
> minimize quantization distortion (giving smoother transitions between
> any two originally sampled values).

Not minimize, but reduce. The fact that transitions are smoother
(which isn't true anyhow) is unrelated. The reason that noise is
reduced is that the quantization noise that does exist is spread over
a wider bandwidth, and some of it is filtered out by the analog
filters.

Please, people, if you're going to make an attempt at explanation, be
ACCURATE. Rabiner and Schafer, or Rabiner and Gold, or Oppenheim and
Schafer, have all written good texts that will explain this to you.
Please go to the library and read one of them.
--
TEDDY BEARS UNITE!  HUG A SHY PERSON TODAY!
"I wish I was home again, back home in my heart again, ..."
(ihnp4;allegra;research)!alice!jj
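[The noise-spreading argument above, put in numbers as a
back-of-envelope sketch: it assumes the quantization noise is spread
roughly uniformly across the wider band and an idealized filter that
keeps only the original audio band.]

```python
import math

# Total quantization noise power is fixed by the 16 bit converter, but
# at a 4x sample rate it is spread (roughly uniformly) over four times
# the bandwidth. An idealized filter keeping only the original
# 0-22.05 kHz band then passes only 1/4 of that noise power.
oversampling = 4
inband_fraction = 1 / oversampling              # noise power left in band
improvement_db = 10 * math.log10(oversampling)  # ~6 dB for 4x
```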
todd@nbisos.UUCP (Todd Wilson) (05/29/86)
In response to jj@alice.UUCP:

>> In a CD player utilizing digital filtering, the digital filter can
>> be considered the oversampling device (oversampler). ...............

> Sorry to disappoint you, but the second approach is used. ..........

This approach:

>> Each word of the 44.1K stream is sampled one time and zeros are
>> entered as the remaining three samples for that given period. The
>> digital filter input stream would appear as follows:
>>
>>   0 0 0 W4 0 0 0 W3 0 0 0 W2 0 0 0 W1
>>

> ......................................................... It provides
> a much simpler sin x / x correction, and also provides just as much
> interpolation after things are put through the filter (which is in the
> neighborhood of 96*4 samples long after interpolation). ............

I thought it was in the neighborhood of 96 samples in the filter at a
given time, with 24 of the original samples (samples from the 44.1 K
stream) used. There are typically 96 taps and 96 constants used as
multiplication coefficients in a 4X oversampling digital filter, not
384 (96*4).

> ...................................................... It's not just
> adjacent samples that are interpolated, it's a considerable history of
> the signal that's interpolated, ....................................

This is true. That history of the signal would include ~24 successive
original samples, plus an additional ~72 zeros, for any given point in
time. These ~96 points of data would be processed by multiplying each
by a constant, and then adding each of the 96 products. (It's not
necessary to perform the multiplies by the 72 zeros, nor to add the 72
products of zero.) Notice that the same 24 original samples are in the
filter for four of the 176.4 K sample periods. One of those samples
will then be dropped off the end of the delay line, and a new one
taken in. This new set of 24 samples will be used in the filter for
the next four 176.4 K sample periods.

If the following approach were used,

>> ....................
>> Each word of the 44.1K stream is sampled four
>> times by the digital filter such that the input stream of the digital
>> filter is formatted as follows (read right to left):
>>
>>   W4 W4 W4 W4 W3 W3 W3 W3 W2 W2 W2 W2 W1 W1 W1 W1
>>

every fourth output would consider each of the 24 samples four times.
The three remaining outputs would consider 23 samples four times, and
2 samples (at the beginning and end of the delay line) between one and
three times, as follows:

  W0 - (1 time)    W1-W23 - (4 times)   W24 - (3 times)
                   W1-W24 - (4 times)
  W1 - (3 times)   W2-W24 - (4 times)   W25 - (1 time)
  W1 - (2 times)   W2-W24 - (4 times)   W25 - (2 times)
  W1 - (1 time)    W2-W24 - (4 times)   W25 - (3 times)
                   W2-W25 - (4 times)
  W2 - (3 times)   W3-W25 - (4 times)   W26 - (1 time)
  W2 - (2 times)   W3-W25 - (4 times)   W26 - (2 times)
  W2 - (1 time)    W3-W25 - (4 times)   W26 - (3 times)
                   W3-W26 - (4 times)
  W3 - (3 times)   W4-W26 - (4 times)   W27 - (1 time)
  :
  :
  :

I think that the results here would give a lower average difference in
transitions between each of the filter's outputted samples. It is true
that a filter operating on this principle will require more intensive
calculation. (We no longer have 72 products of 0.) <Since the impulse
response is symmetrical, each of 48 constants has a duplicate. This
allows a simplification: add each of 48 samples to its corresponding,
symmetrically located sample, and then perform the multiplications on
the sums of 2 samples. This requires only 48 multiplies for a given
final output, compared to 96.>

It appears to me that, when using 24 samples, each inputted once to a
96 tap filter, four different impulse responses will be present for
four sets of 4:1 interlaced data streams. If every fourth sample of
the 176.4K stream were demultiplexed out of the stream, that new
stream would have a unique filter characteristic (defined by a unique
impulse response) having been applied to derive it from the original
44.1 K stream.
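[The symmetry trick in the <...> aside, shown in miniature. This
assumes a linear-phase FIR, where tap k equals tap N-1-k; an 8 tap
filter with arbitrary numbers stands in for the 96 tap one.]

```python
# Symmetric (linear-phase) taps: h[k] == h[7 - k].
h = [1, 3, 5, 7, 7, 5, 3, 1]
# Current contents of the delay line (arbitrary sample values).
x = [2, -1, 4, 0, 3, 5, -2, 6]

# Direct form: 8 multiplies.
direct = sum(h[k] * x[k] for k in range(8))

# Folded form: add each sample to its symmetric partner first, then
# multiply once per pair -- 4 multiplies instead of 8.
folded = sum(h[k] * (x[k] + x[7 - k]) for k in range(4))
```

Both forms give the same output; scaled up to the filter discussed
here, that is 48 multiplies instead of 96.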
If we offset by one 176.4 K sample and took every fourth sample, we
would have a new stream appearing as the original 44.1K stream
filtered by another unique filter response. If this is true (every
four consecutive samples outputted correspond to a different impulse
response characteristic), differences between outputted samples might
become large in magnitude. Does Philips' "noise shaping" following
their digital filtration help this situation (if this case really
exists)? Or is it left to the analog low pass filters following D/A
conversion to filter out what may be high energy ultrasonic
frequencies resulting from these differences between outputted
samples?

The following illustration shows an impulse response picturing
approximated levels of 64 of the 96 constants used in a 96 tap FIR
filter. The four illustrations following that show 4 unique impulse
responses that I believe would correspond to four groups of every
fourth outputted sample. I notice that these 4 impulse responses are
all undersampled versions of the first illustration (so all four might
be considered the same impulse response sampled in slightly different
time frames), but they each still differ.

In the case of an inputted impulse of a single 176.4K sample width,
this filter would give an outputted signal as the first illustration
pictures. This is what I would consider correct, and what a *true* 96
tap filter (one that performs all 96 multiplies on all inputted data)
would produce given the same narrow pulse as an input. Would this hold
true in other cases (with complex waveforms) as well?
[Illustrations: the first pictured approximate levels of 64 of the 96
constants of the 96 tap FIR filter's impulse response, with the
176.4 K output phases labeled "3210" repeating along its time axis.
The four that followed pictured the four narrower impulse responses
corresponding to output phases 0, 1, 2 and 3, each an undersampled
version of the first.]

Although it would be impossible (due to the anti-aliasing filters on
the input of the A/D in a digital sampling recorder) to record a
single pulse of significant amplitude, is it accurate to have a filter
that will take in a 176.4K sample width impulse instead of the
original 44.1K sample width impulse? (CD test disks are available
with such impulses digitally encoded on them.)

> .......................... and zeros are put between the known samples
> for several other reasons as well...

One of these reasons is cost reduction and simplification of digital
filter design (only ~24 samples need be processed at a time).
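[The "four unique impulse responses" observation can be checked
numerically. The sketch below uses arbitrary taps and data, with a 16
tap filter standing in for the 96 tap one: taking every fourth output
of the zero-stuffed-and-filtered stream, at offset p, is exactly the
original 44.1 K words filtered by the subfilter made of taps
p, p+4, p+8, ...]

```python
FACTOR = 4
taps = [3, 1, -2, 5, 4, -1, 2, 0, 1, 6, -3, 2, 1, 1, -4, 2]  # arbitrary
words = [7, -2, 5, 3, 1, 4, -6, 2]                # 44.1 K input words

def fir(x, h):
    """FIR convolution with a zero-padded history."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# Zero-stuff to the 4x rate and filter once with the full filter.
stuffed = []
for w in words:
    stuffed.extend([w] + [0] * (FACTOR - 1))
full = fir(stuffed, taps)

# Each output phase equals the words filtered by one polyphase subfilter.
for p in range(FACTOR):
    assert full[p::FACTOR] == fir(words, taps[p::FACTOR])
```

So the four interleaved output streams really are four different
(shorter) filters applied to the same original samples.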
I would love to know what the other reasons for inserting zeros
between samples are. (So anxious to know that it exceeds my ability
to find time soon to research texts at the library. :-))

>> This process does not correct errors, but will help some in filtering
>> high frequency components of 'ticks' resulting from poor error conceal-

> band (0-20K), no "audible" <that's another question that I won't
> argue in THIS article> energy is filtered out of the click, so nothing
> is disguised!

Good point.

>> Although I do not know of an application of this in any product, a
>> data interpolator could be considered an effective oversampling
>> device. Consider a case in which the interpolator takes four samples
>> of each word in the 44.1 K stream. The interpolator could take the
>> mean average of each group of four inputs from the effective inputted
>> 176.4 K rate. Note the following diagram illustrating this:
>>
>> W4 W4 W4 W4 W3 W3 W3 W3 W2 W2 W2 W2 W1 W1 W1 W1
>> {M--------M}{J--------J}{G--------G}{D--------D}{A--------A}
>>    ------N}{K--------K}{H--------H}{E--------E}{B--------B}
>>        --O}{L--------L}{I--------I}{F--------F}{C--------C}
>>
>> A=(W1+W1+W1+W1)/4
>> B=(W1+W1+W1+W2)/4
>> C=(W1+W1+W2+W2)/4
>> :
>> :
>> :

> This is a very simple interpolator, with a very simple filter, and
> very little filtering of frequency components above the original
> Nyquist rate. One of the major reasons for digital interpolation is
> to make the analog filters simpler. This example wouldn't do it.

I agree. This example was provided as a model of a simple hypothetical
oversampling device.

>> Oversampling in itself, does nothing for us. The oversampling
>> device is what gives us advantages. Oversampling digital filters as
>> well as interpolators can be considered to minimize quantization
>> distortion (giving smoother transitions between any two originally
>> sampled values).

> Not minimize, but reduce.
> The fact that transitions are smoother (which
> isn't true anyhow) is unrelated. The reason that noise is reduced is
> that the quantization noise that does exist is spread over a wider
> bandwidth, and some of it is filtered out by the analog filters.

Thanks for setting me straight here. I am lacking abilities in
frequency domain analysis; I am more comfortable analyzing things in
the time domain.

> Please, people, if you're going to make an attempt at explanation,
> be ACCURATE. .......................

I am being accurate. I prefaced my response as "my understanding". I
felt I did OK in accurately stating my understanding of this subject,
although I may not have an accurate understanding.

> ............. Rabiner and Schafer, or Rabiner and Gold, or
> Oppenheim and Schafer, have all written good texts that will explain
> this to you. Please go to the library and read one of them.

Thanks for your response, jj. I appreciate the references to these
texts. Although it will be a while before time permits finding them, I
surely will research them when I can.

Bye now
--
...{allegra|hao|ucbvax}nbires!nbisos!todd   (USENET)
Welcoming your flames :-)ingly!
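[For completeness, the error concealment methods listed in the first
article can be sketched as toy functions, each operating on one
unrecoverable sample at index i of a 44.1 K stream; the function names
are made up for illustration, and the polynomial method is omitted.]

```python
def conceal_mute(samples, i):
    """Method 1: convert the missing value to 0."""
    return samples[:i] + [0] + samples[i + 1:]

def conceal_repeat(samples, i):
    """Method 2: repeat the most recent correct value."""
    return samples[:i] + [samples[i - 1]] + samples[i + 1:]

def conceal_interpolate(samples, i):
    """Method 3: average of the previous and following values."""
    return samples[:i] + [(samples[i - 1] + samples[i + 1]) / 2] + samples[i + 1:]

stream = [10, 20, 999, 40, 50]   # the word at index 2 was unrecoverable
```

The later methods leave a smaller step at the patched sample, which is
why the list runs from most to least likely to cause audible ticks.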