[comp.dsp] A question about the Nyquist theorem

ingoldsb@ctycal.UUCP (Terry Ingoldsby) (02/26/91)

Pursuing the discussion of the Nyquist theorem, I have a question
about practical sampling applications.  If you have a sine wave at
frequency f, which you sample at just over 2f samples per second then
the Nyquist theorem is satisfied.  I know that by performing a Fourier
transform it is possible to recover all of the signal, i.e. deduce that
the original wave was at frequency f.

Note that this is different than just playing connect the dots with the
samples.  Most of the algorithms I've heard of used with CD players
perform a variety of interpolation, oversampling, etc., but these all
seem to be elaborate versions of connect the dots.  I'm not aware that
the digital signal processing customarily done will restore the wave to
anything resembling its original.

I suspect that there is something I am missing here.  Can anyone clarify
the situation?

E.g.

Original:


      x x         x x         x x
     x   x       x   x       x   x
    x     x     x     x     x     x
           x   x       x   x       x
            x x         x x

    ^    ^    ^    ^    ^    ^    ^
Sample points 


Connect the dots reproduction (you can draw in the lines, I hate ascii drawings)

                   x
         x    
    x
              x              x    x
                        x

The only thing I can think is that the resulting waveform must contain
frequencies greater than the Nyquist limit allows, thus permitting them
to be filtered out with a brick wall filter (approachable with digital
filtering) letting the original come through unaltered.  Can someone confirm
my belief?

-- 
  Terry Ingoldsby                ingoldsb%ctycal@cpsc.ucalgary.ca
  Land Information Services                 or
  The City of Calgary       ...{alberta,ubc-cs,utai}!calgary!ctycal!ingoldsb

jbuck@galileo.berkeley.edu (Joe Buck) (02/28/91)

In article <625@ctycal.UUCP>, ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
|> Pursuing the discussion of the Nyquist theorem, I have a question
|> about practical sampling applications.  If you have a sine wave at
|> frequency f, which you sample at just over 2f samples per second then
|> the Nyquist theorem is satisfied.  I know that by performing a Fourier
|> transform it is possible to recover all of the signal, i.e. deduce that
|> the original wave was at frequency f.
|> 
|> Note that this is different than just playing connect the dots with the
|> samples.  Most of the algorithms I've heard of used with CD players
|> perform a variety of interpolation, oversampling, etc., but these all
|> seem to be elaborate versions of connect the dots.  I'm not aware that
|> the digital signal processing customarily done will restore the wave to
|> anything resembling its original.

The Nyquist sampling theorem says more than just that you need to sample
at a rate higher than twice the highest frequency.  It also gives the
formula for the reconstructed time series.

If you have a signal with no frequency components higher than f = 1/2T,
where T is the spacing between samples, then the original waveform x(t)
may be found exactly at any point by computing the sum

x(t) = sum from m=-infinity to infinity x[m] * sinc (pi (t - m*T) / T)

where sinc(x) is just sin(x)/x (note: sinc(0) is 1).

This is exactly what you get when you pass a series of Dirac delta functions
with weights x[m] through an ideal low pass filter with cutoff frequency
1/2T; the impulse response of such a filter is sinc(pi*t/T).
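
For concreteness, here is a small Python sketch of that sum (my own
illustration, with the infinite sum truncated to a finite window; the
8 kHz rate and 1 kHz tone are arbitrary choices):

    import numpy as np

    # Whittaker-Shannon reconstruction of a bandlimited signal from its samples
    T = 1.0 / 8000.0                           # sample spacing (8 kHz rate)
    m = np.arange(-64, 65)                     # finite window; the true sum is infinite
    x_m = np.sin(2 * np.pi * 1000.0 * m * T)   # samples of a 1 kHz sine

    def x_hat(t):
        # numpy's sinc(u) = sin(pi u)/(pi u), so sinc((t - m*T)/T) is exactly
        # the sinc(pi (t - m*T)/T) of the formula above (where sinc(x) = sin(x)/x)
        return np.sum(x_m * np.sinc((t - m * T) / T))

    t = 0.3 * T                                # a point between two samples
    print(x_hat(t), np.sin(2 * np.pi * 1000.0 * t))   # nearly equal; the small
                                                      # difference is truncation error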

You can't make an ideal low pass filter; for one thing, it's noncausal.
All you can do is approximate this.  To know more about how this works,
you need to study some digital signal processing; then you can go laugh
at your CD or DAT sales critter when he attempts to tell you about why
one system is better than another.

Example of CD salespeak: pushing oversampling as an advanced technical
feature.  Oversampling is simply inserting zeros between the digital
samples and thus increasing the sampling rate.  It's used because then you
can use cheaper, less complex analog filters; it reduces the system cost.
Still, some sales critters think it's an advanced technical extra.
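
A quick numerical illustration of that point (my sketch; the rate and tone
are arbitrary): inserting zeros raises the sample rate but leaves the
spectrum full of images, which is precisely what the digital filter that
follows must remove.

    import numpy as np

    fs = 8000.0                                # original rate
    n = np.arange(256)
    x = np.sin(2 * np.pi * 1000.0 * n / fs)    # a 1 kHz tone

    L = 4                                      # "4x oversampling"
    y = np.zeros(L * len(x))
    y[::L] = x                                 # insert L-1 zeros between samples

    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / (L * fs))
    print(np.sort(freqs[np.argsort(spec)[-4:]]))   # 1000, 7000, 9000, 15000 Hz:
                                                   # the tone plus its images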

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

todd@appmag.com (Todd Day) (02/28/91)

jbuck@galileo.berkeley.edu (Joe Buck) writes:

%Example of CD salespeak: pushing oversampling as an advanced technical
%feature.  Oversampling is simply inserting zeros between the digital
%samples and thus increasing the sampling rate.  It's used because then you
%can use cheaper, less complex analog filters; it reduces the system cost.
%Still, some sales critters think it's an advanced technical extra.

Not all CD players just insert zeroes.  I used the same double oversampling
chip and the same DAC as my Denon 1500 CD player (well, I used the serial
versions) in my 56000 project board.  The oversampling chip did do
interpolation (and it was slightly more complex than bilinear (cubic spline?
I don't remember)).

I know the math works for inserting zeroes if you use a sinc function to
reconstruct the signal.  However, how does it work out for reconstruction
with a near step function?  I've never run through the math on that one...
Quickly off the top of my head, it doesn't look like it will work...

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

wilf@sce.carleton.ca (Wilf Leblanc) (03/01/91)

jbuck@galileo.berkeley.edu (Joe Buck) writes:

>[deleted]

>Example of CD salespeak: pushing oversampling as an advanced technical
>feature.  Oversampling is simply inserting zeros between the digital
>samples and thus increasing the sampling rate.  It's used because then you
>can use cheaper, less complex analog filters; it reduces the system cost.
>Still, some sales critters think it's an advanced technical extra.

This kills me too.  Especially 8x oversampling !
(I always thought oversampling was used because analog filters usually
have a horrible phase response near the cutoff.  However, if you want
to spend enough money, you can get very near linear phase response
with an analog filter.  So, you are right).

When I bought my CD player, it said on the front panel 'Dual D/A
converters'.  For fun, I asked the salesperson what that meant.
The reply was rather funny, and of course completely inaccurate.

What does this really mean ?  (I figured maybe two distinct D/A's rather
than 1 D/A and two sample and holds ??).

>--
>Joe Buck
>jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	
--
Wilf LeBlanc                                 Carleton University
Internet: wilf@sce.carleton.ca               Systems & Computer Eng.
    UUCP: ...!uunet!mitel!cunews!sce!wilf    Ottawa, Ont, Canada, K1S 5B6

jbuck@galileo.berkeley.edu (Joe Buck) (03/01/91)

In article <1991Feb28.084837.7506@appmag.com>, todd@appmag.com (Todd Day) writes:
|> Not all CD players just insert zeroes.  I used the same double oversampling
|> chip and the same DAC as my Denon 1500 CD player (well, I used the serial
|> versions) in my 56000 project board.  The oversampling chip did do
|> interpolation (and it was slightly more complex than bilinear (cubic spline?
|> I don't remember)).
|> 
|> I know the math works for inserting zeroes if you use a sinc function to
|> reconstruct the signal.  However, how does it work out for reconstruction
|> with a near step function?  I've never run through the math on that one...
|> Quickly off the top of my head, it doesn't look like it will work...

If a step-function is used, then you get a sinc function in the frequency
domain.  What happens is that you have a rolloff at high frequencies (that
is, this introduces a distortion).  Some manufacturers use this anyway, and
then add another filter to boost the high frequencies by a corresponding
amount to compensate for this distortion.
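
To put a rough number on that rolloff (my arithmetic, assuming a 44.1 kHz
DAC): the held output multiplies the spectrum by sin(pi f/fs)/(pi f/fs), so
the droop at the top of the audio band is a few dB, and oversampling
shrinks it.

    import numpy as np

    def zoh_droop_db(f, fs):
        # magnitude of the zero-order hold's sin(pi f/fs)/(pi f/fs) response, in dB
        u = np.pi * f / fs
        return 20.0 * np.log10(np.sin(u) / u)

    print(zoh_droop_db(20e3, 44.1e3))        # about -3.2 dB at 20 kHz
    print(zoh_droop_db(20e3, 4 * 44.1e3))    # about -0.2 dB when the DAC runs at 4x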

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

cjwein@watcgl.waterloo.edu (Chris J. Wein) (03/01/91)

In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>>[deleted]
>
>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>>Still, some sales critters think it's an advanced technical extra.
>
>This kills me too.  Especially 8x oversampling !
>(I always thought oversampling was used because analog filters usually
>have a horrible phase response near the cutoff.  However, if you want
>to spend enough money, you can get very near linear phase response
>with an analog filter.  So, you are right).
>

I also understand that oversampling increases the 'transition region'
of the filter thus allowing for lower order filters.  However, the sharper
the cutoff, the more ringing will be present in the step response.  This
ringing might be below audible levels though.  Comments?

As for the CD's that do not oversample, what type of filter is generally used?
Theoretically, the transition region for the filter is about 4.1 kHz
(the cutoff frequency is 20 kHz and the stopband starts at 44.1-20 = 24.1 kHz),
but in practice I think you could get away with much more, since there
shouldn't be much energy above 12 kHz, which extends the transition region
to about 12 kHz.  Nevertheless, getting the necessary attenuation (which is
what, 40 dB+?) in 12 kHz is a demanding spec.

So what type of filter?  Chebyshev type 2? 
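
(To gauge how demanding that spec is, here is a quick scipy estimate of the
analog order required; the 0.5 dB ripple and 50 dB stopband figures are my
assumptions, not anything from the thread:)

    import numpy as np
    from scipy.signal import cheb2ord

    wp = 2 * np.pi * 20e3        # passband edge, rad/s
    ws = 2 * np.pi * 24.1e3      # stopband edge (44.1 kHz - 20 kHz)
    order, wn = cheb2ord(wp, ws, gpass=0.5, gstop=50, analog=True)
    print(order)                 # about 12 poles -- a brutal analog filter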

-- 
==============================================================================
 Chris Wein                           | cjwein@watcgl.waterloo.edu 
 Computer Graphics Lab, CS Dept.      | cjwein@watcgl.uwaterloo.ca
 University of Waterloo               | (519) 888-4548 

rea@egr.duke.edu (Rana E. Ahmed) (03/01/91)

In article <11515@pasteur.Berkeley.EDU> jbuck@galileo.berkeley.edu (Joe Buck) writes:
>In article <625@ctycal.UUCP>, ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>|> Pursuing the discussion of the Nyquist theorem, I have a question
>|> about practical sampling applications.  If you have a sine wave at
>|> frequency f, which you sample at just over 2f samples per second then
>|> the Nyquist theorem is satisfied.  I know that by performing a Fourier
>|> transform it is possible to recover all of the signal, i.e. deduce that
>|> the original wave was at frequency f.
>
>
>If you have a signal with no frequency components higher than f = 1/2T,
>where T is the spacing between samples, then the original waveform x(t)
>may be found exactly at any point by computing the sum
>
>x(t) = sum from m=-infinity to infinity x[m] * sinc (pi (t - m*T) / T)
>
>where sinc(x) is just sin(x)/x (note: sinc(0) is 1).
>
>Joe Buck

Suppose we sample a pure sine wave of frequency 'f' at the Nyquist rate,
i.e., at 2f samples/sec (exact), such that we start sampling the sine wave
at the time of its zero crossing. Thus, if we assume uniform sampling,
then all subsequent samples will have values equal to zero, i.e., x[m]=0 for 
all m (Assuming instantaneous sampling, i.e., no Hold Time for samples).
Intuitively, if we pass these samples (each of zero voltage (say)) through an 
ideal low-pass filter, then we should expect to get zero voltage at the output
of filter. In other words, reconstructed signal voltage =0 for all t. 
(see also the formula for x(t) above ).
How can we recover the pure sine in this sampling strategy? 
Am I missing something ??  
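
(For what it's worth, a three-line numpy check with an assumed f of 1 kHz
confirms that the samples really are all zero.  The usual resolution is that
the sampling theorem requires *strictly more* than 2f samples/sec; a
component at exactly fs/2 is a limiting case the theorem does not cover.)

    import numpy as np

    f = 1000.0                                 # any tone frequency
    m = np.arange(16)
    x = np.sin(2 * np.pi * f * m / (2 * f))    # sample at exactly 2f, from a zero crossing
    print(np.allclose(x, 0.0))                 # True: every sample is sin(pi m) = 0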

Comments are appreciated.

Rana Ahmed
========================================================================

mpurtell@iastate.edu (Purtell Michael J) (03/01/91)

In article <1991Feb28.084837.7506@appmag.com> appmag!todd@hub.ucsb.edu writes:
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>%Example of CD salespeak: pushing oversampling as an advanced technical
>%feature.  Oversampling is simply inserting zeros between the digital
>%samples and thus increasing the sampling rate.  It's used because then you
>%can use cheaper, less complex analog filters; it reduces the system cost.
>%Still, some sales critters think it's an advanced technical extra.
>
>Not all CD players just insert zeroes.  I used the same double oversampling
>chip and the same DAC as my Denon 1500 CD player (well, I used the serial
>versions) in my 56000 project board.  The oversampling chip did do
>interpolation (and it was slightly more complex than bilinear (cubic spline?
>I don't remember)).

A Signetics chip I've seen that does oversampling performs sinusoidal
interpolation, in addition to interpolating over uncorrectable
errors in the data stream (from a CD).  

If you're going to have interpolation for errors, which I think IS important,
you might as well do it for oversampling too, even if you can't hear the
difference.  If nothing else it eases the constraints on the output filter.
-- 
-- Michael Purtell --  | "In a hundred years, | There's an Old Irish Recipe for
mpurtell@iastate.edu   |  we'll all be dead." |   Longevity: Leave the Table
Iowa State University  |  -- The January Man  |  Hungry.  Leave the Bed Sleepy.
                "slow is real"                |    Leave the Tavern Thirsty.

whit@milton.u.washington.edu (John Whitmore) (03/02/91)

In article <1991Mar1.004711.15100@watcgl.waterloo.edu> cjwein@watcgl.waterloo.edu (Chris J. Wein) writes:
>In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:

>>>Example of CD salespeak: pushing oversampling as an advanced technical
>>>feature.  Oversampling is simply inserting zeros ...
>>>Still, some sales critters think it's an advanced technical extra.

	It isn't, mainly because it's NOT an extra.  All CD players
oversample, and use a FIR filter (i.e. digital filtering).  The
cheapest, oldest ones use 2x oversampling, and don't advertise the
'feature'.

>>This kills me too.  Especially 8x oversampling !
>>If you want to spend enough money, you can get very near linear 
>>phase response with an analog filter.  So, you are right).

	Not exactly.  You'd have to spend money on things like thermostats
to keep a really high-Q filter (needed for a brick-wall filter) stable,
because REGARDLESS of cost, ALL the components in an analog filter are
poorly controlled.  Typical digital frequency accuracy (if you
aren't trying hard) is 0.001%; that kind of performance is
equivalent to an analog pole with a Q of 100,000 (which is not
feasible).

>I also understand that oversampling increases the 'transition region'
>of the filter thus allowing for lower order filters. 
>
>As for the CD's that do not oversample, what type of filter is generally used?
>  [more about why this would be a difficult filter to make]

	As I mentioned above, there ARE no non-oversampling CD
players, for exactly the reasons you gave.  A steep-slope
cutoff filter with good passband linearity is going to require
a bevy of trimmed capacitors/inductors, and would (1) cost
mightily, (2) require difficult factory adjustments, (3) not
survive shipping without needing readjustment, (4) fail when
the room temperature changed.
	The last 2x-oversampled CD player I took apart had four
trimmer components (inductors) in the filter.  4x-oversampling
filters typically have merely selected components (not trimmed).
I'd trust the long-term reliability of the 4x models before 
the 2x ones.
	IMHO, 8x and 16x oversampling are simple hype; the available
capacitors and other filter components cannot be characterized
accurately for the range of frequencies involved (20 Hz to 200 kHz)
so if I were designing a filter for 'em, I'd just use the same
design as for the 4x units.  The part of the filtering that
ISN'T hype is the digital part (the number of taps in the FIR
{Finite Impulse Response} filter is a major factor in both cost
and performance).  None of the sales info ever mentions this,
though...

	John Whitmore

ajf@maximo.enet.dec.com (Adam J Felson) (03/02/91)

In article <17510@milton.u.washington.edu>, whit@milton.u.washington.edu (John
Whitmore) writes:
::In article <1991Mar1.004711.15100@watcgl.waterloo.edu>
cjwein@watcgl.waterloo.edu (Chris J. Wein) writes:
::>In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf
Leblanc) writes:
::>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
::
::>>>Example of CD salespeak: pushing oversampling as an advanced technical
::>>>feature.  Oversampling is simply inserting zeros ...
::>>>Still, some sales critters think it's an advanced technical extra.
::
::	It isn't, mainly because it's NOT an extra.  All CD players
::oversample, and use a FIR filter (i.e. digital filtering).  The
::cheapest, oldest ones use 2x oversampling, and don't advertise the
::'feature'.
::

The first CD players did NOT have oversampling.  They ran @ 44.1 kHz with a
pretty steep 20 kHz analog filter.  


 
__a__d__a__m__

jroth@allvax.enet.dec.com (Jim Roth) (03/02/91)

In article <1991Mar1.191955@maximo.enet.dec.com>, ajf@maximo.enet.dec.com (Adam J Felson) writes...
>In article <17510@milton.u.washington.edu>, whit@milton.u.washington.edu (John
>> ...other stuff deleted...

>The first CD players did NOT have oversampling.  They ran @ 44.1 KHZ with a
>pretty
>steep 20KHZ analog filter.  

This is false.

The first players were from Sony and Philips.  The Sony DACs ran at the Nyquist
rate with analog lowpass filters.  The Philips players used 4x oversampling and
noise shaping with a 14-bit DAC, followed by a rather gentle 3rd-order Bessel
filter.  The oversampling FIR filter included compensation for the slight
rolloff of the Bessel filter.

See the Philips technical journal that appeared about that time.

- Jim

mcmahan@netcom.COM (Dave Mc Mahan) (03/03/91)

 In a previous article, appmag!todd@hub.ucsb.edu writes:
>Not all CD players just insert zeroes.  I used the same double oversampling
>chip and the same DAC as my Denon 1500 CD player (well, I used the serial
>versions) in my 56000 project board.  The oversampling chip did do
>interpolation (and it was slightly more complex than bilinear (cubic spline?
>I don't remember)).
>
>I know the math works for inserting zeroes if you use a sinc function to
>reconstruct the signal.  However, how does it work out for reconstruction
>with a near step function?  I've never run through the math on that one...
>Quickly off the top of my head, it doesn't look like it will work...

Well, if you look at the frequency content of a step function, you will find
that it contains harmonics that stretch to infinity.  To get a perfect
representation of that function when you reconstruct, you would need
samples that are spaced infinitely close together.  Since this can't be done
(without actually using the original function, since it is the only
representation of this signal that meets the Nyquist criterion),
you can never reconstruct the original step.  If you wish to sample this
function and then reconstruct it, you will first need to lowpass filter it
to get rid of all harmonics greater than 1/2 your sample frequency.  This
will instantly transmute your nice step function into something resembling it
but containing overshoot and/or ringing right at the step edge.  You can
reconstruct THAT waveform exactly, but it's not going to be a perfect step.

Vertical edges get lost during sampling.  A close representation will be
generated during reconstruction, but you will never get the perfect step
(or squarewave) that you originally had.


>Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu


   -dave


-- 
Dave McMahan                            mcmahan@netcom.com
					{apple,amdahl,claris}!netcom!mcmahan

gaby@Stars.Reston.Unisys.COM ( UNISYS) (03/05/91)

>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>
>Example of CD salespeak: pushing oversampling as an advanced technical
>feature.  Oversampling is simply inserting zeros between the digital
>samples and thus increasing the sampling rate.  It's used because then you
>can use cheaper, less complex analog filters; it reduces the system cost.
>Still, some sales critters think it's an advanced technical extra.

I think the oversampling is not time interpolation (which by Nyquist
does not add any more information to the original signal), but more 
error correction oversampling.  I.e. the same bit is sampled multiple
times to determine its value.  I assume that this is done by sampling
over the duration (space on the CD) of the bit.  Since the same bit
value is sampled multiple times (eight in the case of 8 times over-
sampling) I assume some voting procedure is used to determine the
"true" (or best estimate) of the bit value.  I assume this results in
fewer tracking and sample errors.  For an ideal system, it also implies
that if the CD had a higher density (say 8 times) the laser can read 
it at this resolution (i.e. you could put 8 times the music on one
CD).

I think it is a little generous to think that the industry truly 
does "oversampling" as implied by its signal processing connotations.
This (as you say) requires more computation to ensure proper
interpolation of the sampled data.  If the interpolation (filtering)
is done wrong, then the quality of the output would go down...

- Jim Gaby

  gaby@rtc.reston.unisys.com

tohall@mars.lerc.nasa.gov (Dave Hall (Sverdrup)) (03/05/91)

In article <wilf.667759065@rigel.sce.carleton.ca>, wilf@sce.carleton.ca (Wilf Leblanc) writes...
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
> 
>>[deleted]
> 
>When I bought my CD player, it said on the front panel 'Dual D/A
>converters'.  For fun, I asked the salesperson what that meant.
>The reply was rather funny, and of course completely inaccurate.
> 
>What does this really mean ?  (I figured maybe two distinct D/A's rather
>than one D/A and two sample and holds).

        OK, I may be advertising my ignorance of CD player design here,
but it seems to me it would be very difficult to produce right and left 
channel (stereo) outputs without 2 separate D/A's. My lack of expert 
knowledge leads me to believe that the 'Dual D/A converters' logo
is like building a car with a V-8 engine and advertising 'Dual Quad
Cylinder Heads' as a unique technical advancement!  What is the real
story? Let's hear from some CD technology wizards.

hedstrom@sirius.UVic.CA (Brad Hedstrom) (03/06/91)

In article <1991Mar5.155748.29328@eagle.lerc.nasa.gov> tohall@mars.lerc.nasa.gov (Dave Hall (Sverdrup)) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>When I bought my CD player, it said on the front panel 'Dual D/A
>>converters'.  For fun, I asked the salesperson what that meant.
>>The reply was rather funny, and of course completely inaccurate.
>> 
>>What does this really mean ?  (I figured maybe two distinct D/A's rather
>>than one D/A and two sample and holds).
 
> OK, I may be advertising my ignorance of CD player design here,
> but it seems to me it would be very difficult to produce right and left 
> channel (stereo) outputs without 2 separate D/A's. My lack of expert 
> knowledge leads me to believe that the 'Dual D/A converters' logo
> is like building a car with a V-8 engine and advertising 'Dual Quad
> Cylinder Heads' as a unique technical advancement!  What is the real
> story? Let's hear from some CD technology wizards.

I don't profess to be a CD (or any other kind of) guru, but here goes.
The samples are stored on the CD serially, alternating between L and R
channels.  This means that for each sample time interval, two samples
are read from the CD:

sample	L	R	L	R	L	R	...
-----------------------------------------------------------
time	t = 0		t = t1		t = t2		...

where the sampling interval, t(n) - t(n-1) = 1/(44.1 kHz) = 22.7 usec.  The
L and R samples were taken at the same time (in parallel) but can only
be read from the CD one at a time (serially).  Assuming the L is read
first, that means that the corresponding R is delayed by 1/(44.1
kHz)/2 = 11.3 usec.  Since the L and R are available in a serial
stream, only one D/A is required.  At the output of the D/A are two
parallel analog sections composed of sample and holds, filters,
amplifiers, etc.  This is where the "stereoness" is created.

Now since CD player manufacturers are always looking for that little
thing that distinguishes their product from the rest, they started
offering CD players with 2 D/A's.  The reason: as shown above, there is
a time delay between L and R which translates to a phase difference
between the two channels.  By using 2 D/A's and delaying the L sample
by 11.3 usec, the two channels can be put back in phase.
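
For scale, a quick sketch (my arithmetic, using the 11.3 usec figure above)
of how much interchannel phase that delay amounts to across the audio band:

    import numpy as np

    tau = 0.5 / 44.1e3                     # half-sample delay, about 11.3 usec
    for f in (100.0, 1e3, 10e3, 20e3):
        print(f, 360.0 * f * tau)          # 0.4, 4.1, 40.8, 81.6 degrees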

Sounds very impressive.  Of course there are numerous "audiophiles" who
claim to be able to audibly distinguish single and dual D/A players.
To put this phase difference in perspective, assume that your speakers
are *optimally* placed in an acoustically perfect room.  Further assume
that you, the only listener (or object for that matter) in the room,
have placed yourself equidistant from the two optimally placed
speakers.  Now move the R speaker (the one delayed by 11.3 usec) about
4 mm (roughly a sixth of an inch) closer than its optimal position.  Now
you have compensated for the time delay.  But don't you dare change your
location in the room; you'll throw everything totally out of whack!



--
_____________________________________________________________________________
Brad Hedstrom                  Electrical and Computer Engineering Department
University of Victoria                     Victoria, British Columbia, Canada
UUCP: ...!{uw-beaver,ubc-vision}!uvicctr!hedstrom       ``I don't think so.''
Internet: hedstrom@sirius.UVic.CA                  ``Homey don't play that.''

mzenier@polari.UUCP (Mark Zenier) (03/06/91)

In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>When I bought my CD player, it said on the front panel 'Dual D/A
>converters'.  For fun, I asked the salesperson what that meant.
>The reply was rather funny, and of course completely inaccurate.
>
>What does this really mean ?  (I figured maybe two distinct D/A's rather
>than 1 D/A and two sample and holds ??).

Some of the first ones used 1 D/A and sample and holds. The 
BBC wanted to broadcast monophonic off of some CD's.  With
the half sample time delay, it made the signal sound terrible.

They had to switch to one of the CD players with two 14 bit dacs.

Ancient history.

Mark Zenier  mzenier@polari.uucp  markz@ssc.uucp

todd@appmag.com (Todd Day) (03/06/91)

gaby@Stars.Reston.Unisys.COM ( UNISYS) writes:

%I think the oversampling is not time interpolation

Take it from someone who's played with the innards of more than a couple
CD players that it is.

%(which by Nyquist
%does not add any more information to the originial signal)

By "real-world" electronics, it doesn't add any more info, but allows
you to build an analog output filter that doesn't take away more info.

%, but more 
%error correction oversampling.  I.e. the same bit is sampled multiple
%times to determine its value.

No need to do this... errors are usually real obvious, or the Reed-Solomon
codes will allow recovery of the original info if a couple of bits are
incorrect.

%I assume that this is done by sampling
%over the duration (space on the CD) of the bit.

The bits are not sampled off the disc like analog data.  They are
read off the disc much like serial data comes out of your modem
into your computer.

%I think it is a little generous to think that the industry truely 
%does "oversampling" as it is implied by signal processing connotations.
%This (as you say) requires more compute requirements to ensure proper
%interpolation of the sampled data.  If the interpolation (filtering)
%is done wrong, then the quality of the output would go down...

But it isn't done wrong and it is done on the fly.  There are a lot of
specialized chips on the market for just this purpose.  I used one of
them in my DSP board.  Remember, these chips are just mini-computers
with a built-in program, and they generally run at about 2 MHz.  They
only need to produce an update every 1/(88.2 kHz) = 11.3 usec.  Even for a microprocessor
running at 2 MHz, that's all day long.

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

edf@sm.luth.se (Ove Edfors) (03/06/91)

wilf@sce.carleton.ca (Wilf Leblanc) writes:

>jbuck@galileo.berkeley.edu (Joe Buck) writes:

>>[deleted]

>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>>Still, some sales critters think it's an advanced technical extra.

>This kills me too.  Especially 8x oversampling !
>(I always thought oversampling was used because analog filters usually
>have a horrible phase response near the cutoff.  However, if you want
>to spend enough money, you can get very near linear phase response
>with an analog filter.  So, you are right).

> ... [ stuff deleted ] ...

>--
>Wilf LeBlanc                                 Carleton University
>Internet: wilf@sce.carleton.ca               Systems & Computer Eng.
>    UUCP: ...!uunet!mitel!cunews!sce!wilf    Ottawa, Ont, Canada, K1S 5B6

---
  Let me first point out that I'm not very familiar with CD players, so
please forgive me if this posting is not compatible with contemporary CD
technology.
---

  The reason for oversampling is, as mentioned above, that analog filters
with very sharp cutoff are expensive and/or have a horrible phase response.
With oversampling it's possible to use (generalized) linear phase discrete
time filters prior to the D/A conversion.  As a result of this operation one
can use much cheaper analog filters on the output. 

This medium is not ideal for graphical illustrations, but I'll try anyway.

Let:       fs  - sampling frequency ( 44.1 kHz )
           Fs  - new sampling frequency ( L*44.1 kHz )

Consider the following amplitude spectrum on a CD:

                             ^  
        --           --------|--------           --
           \       /         |         \       /
             \   /           |           \   /
        -------+-------------+-------------+--------->  
             -fs/2                       fs/2

Reconstruction of this signal requires an analog filter with a
sharp cutoff frequency at fs/2.

After insertion of (L-1) 0's between the samples we get:

                             ^  
--     -----     -----     --|--     -----     -----     -----
  \   /     \   /     \   /  |  \   /     \   /     \   /     \ 
   \ /       \ /       \ /   |   \ /       \ /       \ /       \
---------+--------------+----+----+--------------+--------------->
       -Fs/2          -fs/2     fs/2            Fs/2

Now ... use a discrete time filter (generalized linear phase) with a
sharp cutoff frequency at fs/2 (i.e. at fs/(2*Fs) in normalized frequency).

This operation gives us the following spectrum
(which is a copy of the first one, except that the
lobes are farther apart):

                             ^ 
                           --|--  
                          /  |  \ 
                         /   |   \
---------+--------------+----+----+--------------+--------------->
       -Fs/2          -fs/2     fs/2            Fs/2


Reconstruction of this signal is much "cheaper" since the analog
filter on the output could have a much wider transition region. 
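
A compact numerical version of the whole chain described above (all the
parameters here are my own assumptions), zero-stuffing a tone and then
applying a linear-phase FIR with its cutoff at fs/2:

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs, L = 44100.0, 4
    n = np.arange(1024)
    x = np.sin(2 * np.pi * 5000.0 * n / fs)    # a 5 kHz tone

    up = np.zeros(L * len(x))
    up[::L] = x                                # insert L-1 = 3 zeros per sample

    # linear-phase FIR at the new rate Fs = L*fs, cutoff fs/2; gain L restores level
    h = L * firwin(127, cutoff=fs / 2.0, fs=L * fs)
    y = lfilter(h, 1.0, up)

    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / (L * fs))
    print(freqs[np.argmax(spec)])    # ~5 kHz: the images near 39.1, 49.1, 83.2
                                     # and 93.2 kHz have been filtered out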


--------------------------------------------------------------------
Ove Edfors                          PHONE:    Int. +46 920 910 65
Div. of Signal Processing                     Dom.  0920 - 910 65
University of Lulea                 FAX:      Int. +46 920 720 43
S-951 87  LULEA                               Dom.  0920 - 720 43
SWEDEN                              E-MAIL:   edf@sm.luth.se
--------------------------------------------------------------------

touch@grad2.cis.upenn.edu (Joseph D. Touch) (03/07/91)

In article <3463@polari.UUCP> mzenier@polari.UUCP (Mark Zenier) writes:
>Some of the first ones used 1 D/A and sample and holds. The 
>BBC wanted to broadcast monophonic off of some CD's.  With
>the half sample time delay, it made the signal sound terrible.


WHAT???  I saw the few posts about the time delay allegedly IMPOSED
by using a single D/A.  A little thought reveals that using 4 sample
and holds and 1 D/A removes the time delay completely, and S/H's are
cheaper than D/A's.  

S/H1 locks the D/A output for the left channel as it comes off the
CD, S/H2 does the same for the right channel.  S/H's 3 and 4 grab
the values of S/H 1 and 2 just before the next set of values gets
locked in, resulting in an output completely in phase.

Joe Touch
PhD Candidate
Dept of Computer and Information Science
University of Pennsylvania

Time	1	2	3	4

CD	L1  R1  L2  R2  L3  R3  L4  R4 

D/A	L1  R1  L2  R2  L3  R3  L4  R4 

S/H 1	L1------L2------L3------L4-----

S/H 2       R1------R2------R3------R4------

S/H 3	       L1------L2------L3------L4------

S/H 4	       R1------R2------R3------R4------

jbuck@galileo.berkeley.edu (Joe Buck) (03/07/91)

In article <1180@aviary.Stars.Reston.Unisys.COM>, gaby@Stars.Reston.Unisys.COM ( UNISYS) writes:
|> I think the oversampling is not time interpolation (which by Nyquist
|> does not add any more information to the original signal), but more 
|> error correction oversampling.  I.e. the same bit is sampled multiple
|> times to determine its value.  I assume that this is done by sampling
|> over the duration (space on the CD) of the bit. 

When you don't know what you're talking about, please save the
network bandwidth by refraining from posting.  You clearly haven't
a clue about the way error correction is done on CDs (it's done
by error correcting codes) and your phrasing ("I think", "I assume")
indicates that you're making it up.


--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

mcmahan@netcom.COM (Dave Mc Mahan) (03/07/91)

 In a previous article, gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>
>>
>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>
>I think the oversampling is not time interpolation (which by Nyquist
>does not add any more information to the original signal), but more 
>error correction oversampling.  I.e. the same bit is sampled multiple
>times to determine its value.

I think (once again, I too am no CD expert) that this is incorrect.  A little
thought on the subject should show this.  Data on a CD follows an industry
standard format.  It has to, or nobody could use the same CD player for all
the variety of CD's that have been released.  This alone indicates that you
can't "sample the same bit multiple times to determine its value".  I guess
you could try to spin the CD twice as fast to read the same track twice
during the same amount of time and then do some kind of voting to determine
which bit is correct, but I doubt this is the case.  CD drive motors are
all standard to keep costs down.  It is much more effective to use the built-in
error correction coding on a CD to correct the random bit flips that occur.
The scheme used is pretty powerful for all but the worst scratches.

It is my opinion that 'oversampling' means exactly that: creating more samples
than were originally read off the disk.  How can they do that, you ask?  It's
quite simple.  They just stuff in 3 zeros for every value read off the disk
in addition to the value from the disk.  Why do they do that, you ask?  Again,
the answer is simple.  Doing this allows them to increase the effective
sample rate into the FIR digital filters within the CD player.  They then use
a sin(x)/x correction (the sinc function) to 'smooth' the data at the higher
sample rate.  This effectively increases your sample rate to the DAC and
allows you to push your analog low-pass filtering farther out so it distorts
the music less.  You STILL need to do the final analog lowpass filtering, but
now you don't need to make such a critical filter to get the same performance.

I have used exactly this technique with ECG waveform restoration, and it
works amazingly well.  You can take very blocky, crummy data that has been
properly sampled (follows the Nyquist criterion) and turn it into a much
smoother, better looking waveform.  This technique makes the steps between
each sample smaller and performs peak restoration of the original signal.
This is needed if the original samples didn't happen to fall on the exact
peak of the waveform, which almost always happens.  A side benefit is that
you get automatic scaling of the data to take full advantage of the range
of your D-to-A converter.  This is probably not a big deal for a CD player,
since the original sample was intended to be played back exactly as it was
recorded, but for my ECG reconstruction it works great.  Samples come in to
me as 7-bit digital samples, and with no extra overhead (other than scaling
the FIR filter weights properly when I first design the filter) I get samples
out that take advantage of the full 10-bit range of the playback DAC I have
selected.  The oversampling interpolates between the original samples to make
the full range useful.  The original samples are scaled as well and come out
of the FIR filter properly scaled along with all the original data.  The
'chunkiness' of the data steps is much reduced, and the whole thing looks
better than it did.

What is the cost of this technique?  This type of oversampling requires you
to be able to do multiplications and additions at a fairly high rate.  That is
the limiting factor.  With some special selection of FIR tap weights and
creative hardware design, you can turn the multiplications required into
several ROM table lookups that can be implemented quite cheaply.  Adders
are also needed, but these are relatively simple to do (as compared to a 'true'
multiplier).  You shift data in at one clock rate, and shift it out at 4 times
that rate for a 4x oversampling rate.  The next step is to do the final D-to-A
conversion and analog lowpass filtering with a less complicated filter.
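
As an aside (a textbook polyphase trick, sketched by me; I'm not claiming any
particular CD chip does it this way): since three of every four inputs to the
FIR are stuffed zeros, you can split the filter into L low-rate branches so
that no multiplier ever touches a zero.

    import numpy as np
    from scipy.signal import firwin, lfilter

    def interpolate_polyphase(x, h, L):
        # identical output to zero-stuff + FIR, at 1/L of the multiply rate
        y = np.empty(L * len(x))
        for p in range(L):
            # branch p holds taps h[p], h[p+L], ... and runs at the low rate
            y[p::L] = np.convolve(x, h[p::L])[:len(x)]
        return y

    # sanity check against the brute-force zero-stuff-then-filter version
    L = 4
    h = L * firwin(64, 0.2)                    # any lowpass prototype
    x = np.random.randn(256)
    u = np.zeros(L * len(x)); u[::L] = x
    print(np.allclose(interpolate_polyphase(x, h, L), lfilter(h, 1.0, u)))  # True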

So what do you think?  Is that how it is done?  Does anybody out there REALLY
know and shed some light on this question?


>- Jim Gaby
>
>  gaby@rtc.reston.unisys.com

   -dave

-- 
Dave McMahan                            mcmahan@netcom.com
					{apple,amdahl,claris}!netcom!mcmahan

todd@appmag.com (Todd Day) (03/08/91)

mcmahan@netcom.COM (Dave Mc Mahan) writes:

%I guess
%you could try to spin the CD twice as fast to read the same track twice
%during the same amount of time and then do some kind of voting to determine
%which bit is correct, but I doubt this is the case.

Even this is impossible.  The CD has only one track or "groove",
just like an LP.  It would be difficult for a servo system to track
bits on the high-density CD the way a stepper system does on a relatively
low-density magnetic platter.

But of course, I'm off the subject.  This has nothing to do with DSP...

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

mberg@dk.oracle.com (Martin Berg) (03/08/91)

In article <38839@netnews.upenn.edu> touch@grad1.cis.upenn.edu (Joseph D. Touch) writes:
>WHAT???  I saw the few posts about the time delay alledgedly IMPOSED
>by using a single D/A.  A little thought reveals that using 4 sample
>and holds and 1 D/A removes the time delay completely, and S/H's are
>cheaper than D/A's.  
>
>S/H1 locks the D/A output for the left channel as it comes off the
>CD, S/H2 does the same for the right channel.  S/H's 3 and 4 grab
>the values of S/H 1 and 2 just before the next set of values gets
>locked in, resulting in an output completely in phase.

It looks right, but have you considered that S/H circuits actually are a 
specialized kind of analog hardware?  This means that you will get some added 
noise and distortion for every S/H you add in series with the signal. 
This may not amount to much, but in a time when more and more hi-fi companies
exclude unnecessary circuits (e.g. bass/treble controls) and use
more and more sophisticated analog circuits in their CD players, I am not
sure your idea would be usable - anyway, not in 'real' hi-fi CD players.

BTW: does anyone know of any CD manufacturer actually using this
solution - maybe to produce cheap CD players?

Martin Berg

Oracle Denmark

bryanh@hplsla.HP.COM (Bryan Hoog) (03/08/91)

>
>Some of the first ones used 1 D/A and sample and holds. The 
>BBC wanted to broadcast monophonic off of some CD's.  With
>the half sample time delay, it made the signal sound terrible.
>

   Let's see.  A half sample delay is 11.34 usec.  Assuming they
 just added up the left and right channels to get mono, they
 essentially created a simple time-delay comb filter.  This lowpass 
 filter would have a zero at 44.1 kHz.  The 3 dB frequency would be  
 22.05 kHz.  At 10 kHz, there would be about 0.6 dB of attenuation.
 This filter is linear phase.
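
 (Those numbers check out; a short sketch of the L+R comb response, my code:)

    import numpy as np

    tau = 0.5 / 44.1e3                            # the half-sample delay
    mono = lambda f: np.abs(1 + np.exp(-2j * np.pi * f * tau)) / 2.0
    for f in (10e3, 22.05e3, 44.1e3):
        print(f, 20 * np.log10(mono(f) + 1e-12))  # -0.56 dB, -3.01 dB, and the
                                                  # (numerically very deep) null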

   It sounded terrible?  I wonder what the mechanism was, since the
 L+R signal shouldn't be affected much.  But wait, the L-R signal
 is no longer cancelled completely.  It probably has a high pass shape
 that's the inverse of the L+R lowpass shape.

   But wait another second.  If the microphone that recorded the 
 material in the first place had been shifted a fraction of 
 an inch . . .

   Bryan Hoog

duerr@motcid.UUCP (Michael L. Duerr) (03/09/91)

From article <wilf.667759065@rigel.sce.carleton.ca>, by wilf@sce.carleton.ca (Wilf Leblanc):
> When I bought my CD player, it said on the front panel 'Dual D/A
> converters'.  For fun, I asked the salesperson what that meant.
> The reply was rather funny, and of course completely inaccurate.
> 
> What does this really mean ?  (I figured maybe two distinct D/A's rather
> than 1 D/A and two sample and holds ??).

Yes, it means that.  Or, more likely, one dual-channel D/A.

There are a couple of reasons why.  Sample and holds have errors known as
pedestal - a voltage step that occurs when they transition between states -
and droop, where the signal decays as it is held.  These would be irrelevant
to sound quality, except that they are nonlinear and thus introduce
distortion.  Yes, D/A's have distortion too, but adding more stages only
degrades things.  

Also, a S/H will have some feedthrough.  Thus, when it is holding and the
D/A is producing a value for the second channel, some of the second channel
will feed through.  While the amount is slight, remember that the output
of the FIR into the DAC may be 22 bits.  That represents 132 dB of dynamic 
range.  At 1 volt levels, -120 dB would be 1 uV.  Depending on the noise
level that may be buried, but it is amazing what the human ear can
integrate up out of white noise.  Thus, channel isolation problems are 
potentially lessened by using dual D/A's.

Of course, it's easier to use a dual D/A than a single one plus two more
S/H chips, even if the isolation between D/A sides is not an issue or
the DAC itself has bad isolation.  This is probably the biggest reason
for dual D/A's - fewer chips, less board space, less $.  Sound quality
improvements will be indiscernible to most listeners, who probably buy
more based on (real and perceived) features.

fcr@saturn.wustl.edu (Frank C. Robey X5569) (03/13/91)

In article <4692@apricot30.UUCP> duerr@motcid.UUCP (Michael L. Duerr) writes:
>From article <wilf.667759065@rigel.sce.carleton.ca>, by wilf@sce.carleton.ca (Wilf Leblanc):
>> When I bought my CD player, it said on the front panel 'Dual D/A
>> converters'.  For fun, I asked the salesperson what that meant.
>> The reply was rather funny, and of course completely inaccurate.
>> 
>> What does this really mean ?  (I figured maybe two distinct D/A's rather
>> than 1 D/A and two sample and holds ??).
>
>Yes, it means that.  Or, more likely, one dual-channel D/A.
>
>There are a couple of reasons why.  Sample and holds have errors known as
>pedestal - a voltage step that occurs when they transition between state -
>and droop, where the signal decays as it is held.
.. other reasons for using dual dacs based on current technology deleted.

When CD players were introduced, the D/A's used produced a lot of
"glitch" energy during transitions in level.  This glitch energy
showed up as harmonics and other undesirable spurious responses and thus
needed to be removed.  Sample and holds (or more correctly track and
holds) were used to remove the glitches.  Even if they had used
dual DAC's, dual track and holds would have been needed.  I seem
to recall that the level of distortion terms without a track and
hold was about 70 dB below the fundamental.  This level and the
noise floor varied with the fundamental frequency and the sampling
frequency - the higher frequencies had higher distortion levels
and noise.

Current audio-quality DAC's balance delays (the major cause of the
glitches) between bits to minimize the glitch energy.  This was not
true of the DAC's available several years ago.

As for pedestal, it does not need to be a particularly non-linear
effect.  If you look at feedthrough, a linear effect, then this will
only slightly reduce channel separation- to maybe 60 dB or so- not a
particularly drastic problem in my opinion.

I was working in a group at HP Lake Stevens in the early 80's
that was trying to get a dynamic range from a digital source of around
130 dB.  At this level, many components are no longer linear.  
Depending upon the type, voltage, and impedance levels, many capacitors
and inductors created harmonics at levels far in excess of that.
At that level even some resistors were not useable.  I suspect that
some of the high-end audio equipment is finding this problem now with
the "18" and "20" bit DAC's. 

Frank Robey 
now at: fcr@ll.mit.edu    MIT- Lincoln Laboratory

jefft@phred.UUCP (Jeff Taylor) (03/13/91)

In article <625@ctycal.UUCP> ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>Pursuing the discussion of the Nyquist theorem, I have a question
>about practical sampling applications.  If you have a sine wave at
>frequency f, which you sample at just over 2f samples per second then
>the Nyquist theorem is satisfied.  I know that by performing a Fourier
>transform it is possible to recover all of the signal, i.e. deduce that
>the original wave was at frequency f.
>
>Note that this is different than just playing connect the dots with the
>samples.  Most of the algorithms I've heard of used with CD players
>perform a variety of interpolation, oversampling, etc., but these all
>seem to be elaborate versions of connect the dots.  I'm not aware that
>the digital signal processing customarily done will restore the wave to
>anything resembling its original.
>
>I suspect that there is something I am missing here.  Can anyone clarify
>the situation?


I wrote this about 5 years ago, and have posted it a couple of times in the
past.  Nyquist and oversampling/zero filling seem to be one of those things
that most people know about, but don't really understand.  I've gotten enough
mail back saying this clears up some of the same sorts of questions that are
appearing again - that I'll post it again. 

jt


---- OVERSAMPLING AND ZERO FILLING ----------------------------------
What follows is a hand-waving (no math) justification of why this is logical
(although it defies common sense).

Back up to the basics about sampling (talk about the signals, and leave
out A/D's for the moment).  Everyone *knows* that the
sampling must be done at twice the bandwidth of the signal.  This is because

	1) The fourier transform of a periodic impulse (time domain) is
	a periodic impulse train (freq domain).

	2) The multiplication of two signals in the time domain is
	equivalent to convolution in the frequency domain.

                time                            freq

	|   **         **                 |   *
	|  *   *      *   *               |**
Signal	|  *    *     *    *              |    *
	|-*-------*--*-------*            |     *
	*          *          *           |     *
	|*          *                     |--------------------


Sample  |                                 |
impulse ^   ^   ^   ^   ^   ^             ^               ^             ^
train   |   |   |   |   |   |             |               |             |
        +---------------------            +-------------------------------
        |<T>|                             |<---- 1/T ---->|

If we multiply the two time domain signals together (sample the signal) we
get:

	|                                 |   *       *       *      *
	|   ^           ^                 |**           *****          *****
	|   |           |                 |    *     *         *    *
	|   |   ^       |   ^             |     *   *           *  * 
	+----------------------           |     *   *           *  *
	v           |                     +---------------------------------
	|           v                             ^
                                                  |
						 1/2T

Looking at the freq plot, if we filter everything to the right of 1/2T,
we get the original signal back.  Therefore this impulse train (time domain)
contains all the information in the original signal.

A couple of important points about this time domain signal: 1) It is a
different signal than the original 'analog' signal, but contains all
the information that the original signal had.  2) It is a periodic sequence
of impulses, and *zero* everywhere else (the definitive digital signal,
only two values, 0 and infinity :-)).  3) It can be completely described
with a finite number of terms (the area under the impulses), so it is 
well suited for digital systems.

The disadvantage of this signal is that it is hard to realize (infinite
bandwidth, infinite amplitude).  However it is easy to get the weighting of
(area under) the impulses.  The area under each impulse is the value of the
original waveform at the instant it is sampled.  (Sample/Hold -> A/D).

[Key point coming up]

If you think of the 'digital' signal as completely describing the impulse
train signal, instead of as an approximation of the original analog signal, it
is easy to accept zero filling as not introducing any errors.


	|                                 |   *       *       *      *
	|   ^           ^                 |**           *****          *****
	|   |           |                 |    *     *         *    *
	|   |   ^       |   ^             |     *   *           *  * 
	+-o---o---o---o---o---o           |     *   *           *  *
	v           |                     +---------------------------------
	|           v                                     ^
                                                          |
						         1/2T

By adding redundant information (the "o"'s above) of impulses with zero
area, we have not changed the spectrum of the signal, or its ability
to represent the original analog signal.  Granted, this signal will not
look much like the original analog signal if plotted.  So what.  [Try
sampling a 99 Hz sine wave (which we know is bandlimited < 100 Hz) at
200 samples/sec.  It won't look like a sine wave either.]  Two other
approaches, linear interpolation and sample duplication, do change the
impulse train, and the spectrum.  [Their kernels weight the spectrum by
(sin(f)/f)**2 and sin(f)/f respectively.]


[Draw out a couple of cycles of 99 Hz and sample it at 200 S/sec, then
upsample to 400 by 1) zero filling, 2) linear interpolation, 3) sample
duplication.  None of them will be a very accurate representation of the
original signal (if they are, change the phase 90 deg).]
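
A numerical version of that exercise (my code; I use a 30 Hz tone rather
than 99 Hz so that its image, at 170 Hz, sits well clear of the tone)
makes the kernel weightings visible:

    import numpy as np

    fs, L, N = 200.0, 2, 400
    x = np.sin(2 * np.pi * 30.0 * np.arange(N) / fs)   # 30 Hz tone; image at 170 Hz

    def upsample(x, mode):
        y = np.zeros(L * len(x))
        y[::L] = x                                     # zero filling alone
        if mode == "duplicate":                        # zero-order hold: sin(f)/f
            y = np.convolve(y, [1.0, 1.0])[:len(y)]
        elif mode == "linear":                         # linear interp: (sin(f)/f)**2
            y = np.convolve(y, [0.5, 1.0, 0.5])[:len(y)]
        return y

    freqs = np.fft.rfftfreq(L * N, d=1.0 / (L * fs))
    for mode in ("zerofill", "duplicate", "linear"):
        spec = np.abs(np.fft.rfft(upsample(x, mode)))
        tone = spec[np.argmin(abs(freqs - 30.0))]
        image = spec[np.argmin(abs(freqs - 170.0))]
        print(mode, 20 * np.log10(image / tone))   # 0.0, about -12, about -25 dB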

Why bother oversampling?  Twice the sample rate, twice the processing
required (or more (or less)).  In the case of CD's, which have a signal
BW of 20 kHz and a sample rate of ~44.1 kHz, that means any signal at
20 kHz gets mirrored at 24.1 kHz.  To get rid of it you either need a
*very* sharp analog filter (with phase distortion/ringing), or you lower
the BW of the filter (and lose some of the high freq).  If you
oversample by zero filling, it is possible to remove the
aliased signal with a digital filter.

A digital FIR filter has some good properties for removing the aliased
signal.  It is easy to make multi-stage (90+ tap) filters.  They are
noncausal (for proper reconstruction in the time domain, each of the
'zero' samples should be influenced by the next impulse (not easily achieved
in an analog design :-) )).

		IMPULSE RESPONSE FIR INTERPOLATION FILTER

                                 |
                                _-_
                               - | -
                      _-_     _  |  _     _-_
                -*---*---*---*-------*---*---*---*
                   -      -_-         -_-      -


An important thing to notice about this filter is that it is zero at every
other sample (original sample rate), so running the oversampled signal
through this filter does not change any of the original samples (also
hard to do with an analog filter :-) ).  Adding more stages to the
filter moves the added zeros closer to the values of the original
waveform (by removing the aliased frequencies).  If the filter were
perfect, and the analog signal was bandlimited, they would become
identical to what would have been sampled at 88.2 kHz.
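
(Aside: this zero-at-every-other-tap property is what the DSP literature
calls a halfband filter, and it falls out of any standard design with the
cutoff at half the new Nyquist.  A two-line check, assuming scipy:)

    import numpy as np
    from scipy.signal import firwin

    h = 2 * firwin(31, 0.5)       # gain-2 halfband lowpass: cutoff at the original fs/2
    print(np.round(h[1::2], 4))   # all zeros except the ~1.0 centre tap, so the
                                  # original samples pass through unchanged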

The signal, and its spectrum after running through this filter, is:


	|                                 |   *                      *
	|   ^           ^                 |**                          *****
	|   | ^         | ^               |    *                    *
	|   | | ^       | | ^             |     *                  * 
	+----------------------           |     *                  *
	v           |                     +---------------------------------
	|           v                                     ^
                                                          |
						         1/2T

This is then fed to a D/A converter (at the 88.2 kHz rate), and the analog
output filter has a much simpler job.  The signal at 20 kHz is now imaged at
68.2 kHz.

[Side note on this FIR filter - half of the coefficients are zero, half
of the signal samples are zero, and the coefficients that are left are
duplicated.  But IIR filters have the reputation of being more efficient?
(But then I often use IIR filters when I want less ripple distortion, and the
traditional rationale for FIR filters is low distortion due to linear
phase delay.)  Such is dsp; it often doesn't make sense, until you remember
the reason for your prejudice.]

mcphail@dataco.UUCP (Alex McPhail) (03/14/91)

In article <1180@aviary.Stars.Reston.Unisys.COM> gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>
>>
>
>I think the oversampling is not time interpolation (which by Nyquist
>does not add any more information to the original signal), but more 
>error correction oversampling.  I.e. the same bit is sampled multiple
>times to determine its value.  I assume that this is done by sampling
>over the duration (space on the CD) of the bit.  Since the same bit
>value is sampled multiple times (eight in the case of 8 times over-
>sampling) I assume some voting procedure is used to determine the
>"true" (or best estimate) of the bit value.  I assume this results in
>fewer tracking and sample errors.  For an ideal system, it also implies
>that if the CD had a higher density (say 8 times) the laser can read 
>it at this resolution (i.e. you could put 8 times the music on one
>CD).

Actually, this is not true.  You cannot increase the density of information
on a compact disk without changing the technology.  Right now, each bit
of information occupies an area 1.6 microns square (i.e. adjacent bits must
be separated by at least 1.6 microns).  If you attempt to compress the
data using closer separation, the optical interference patterns will produce
intolerable noise on adjacent bits, even with oversampling.  You must use
a much higher frequency laser (producing a higher energy output, thus 
requiring more robust material in the compact disk, thus requiring even
higher energy writing lasers, etc., etc.) to achieve a closer separation
of information on compact disks.

The bottom line is that the optical disk medium has already reached physical
bandwidth saturation, and will not support an increase in binary density
without degradation of the desired signal.
============================================================================
________
|\     /  Alex McPhail
| \   /    
|  \ /    mail to mcphail@dataco              
|---X     (uunet!mitel!melair!dataco!mcphail)
|  / \     
| /   \   The opinions are mine alone.
|/_____\  The rest is yours.
  Alex     

***--------------------------------------------------------------***
* DISCLAIMER:                                                      *
* ==========:                                                      *
*    The opinions expressed are solely of the author and do not    *
*    necessarily reflect the opinions of Canadian Marconi Company. *
***--------------------------------------------------------------***

stephen@corp.telecom.co.nz (Richard Stephen) (03/14/91)

In article <3354@phred.UUCP> jefft@phred.UUCP (Jeff Taylor) writes in
response to 
>In article <625@ctycal.UUCP> ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>>Pursuing the discussion of the Nyquist theorem, I have a question
>>about practical sampling applications.  If you have a sine wave at
     [...etc...]
>
>I wrote this about 5 years ago, and have posted it a couple of times in the
>past.  Nyquist and oversampling/zero filling seem to be one of those things
>that most people know about, but don't really understand.  I've gotten enough
>mail back saying this clears up some of the same sorts of questions that are
>appearing again - that I'll post it again. 

 [...etc...long explanation deleted...]

For those interested, check out the following paper:

MAX W. HAUSER: Principles of Oversampling A/D Conversion; J. Audio Eng.
Soc., Vol 39, No 1/2, 1991 (January/February).
The first sentence of the abstract says:
"Growing practical importance of oversampling analog-to-digital
converters (OSDACs) reflects a synergism between microelectronic
technology trends and signal theory, neither of which alone is sufficient
to explain OSDACs fully......"

Besides being an excellent comprehensive discourse, it has one of the best
collected bibliographies on A/D, sampling, noise, digital filters, dither,
etc. that I have ever seen.

richard
============================ Richard Stephen ===============================
|   Technology Strategy             |      email: stephen@corp.telecom.co.nz
|   Telecom Corporation of NZ Ltd   |      voice: +64-4-823 180
|   P O Box 570, Wellington         |        FAX: +64-4-801 5417
|   New Zealand                     |

gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) (03/15/91)

In article <504@dcsun21.dataco.UUCP> mcphail@dcsun18.UUCP (Alex McPhail,DC ) writes:
>
>Actually, this is not true.  You cannot increase the density of information
>on a compact disk without changing the technology.  Right now, each bit
>of information occupies an area 1.6 microns square (i.e. adjacent bits must
>be separated by at least 1.6 microns).  If you attempt to compress the
>data using closer separation, the optical interference patterns will produce
>intolerable noise on adjacent bits, even with oversampling.

Oversampling has nothing to do with data recovery off the disk.
Oversampling is a technique to make the engineering job of reconstructing
the output waveform easier or cheaper.  It is done after the data stream
has been recovered off of the medium.

>You must use
>a much higher frequency laser (producing a higher energy output, thus 
>requiring more robust material in the compact disk, thus requiring even
>higher energy writing lasers, etc., etc.) to achieve a closer separation
>of information on compact disks.

A couple of misconceptions here:
1) a higher frequency reading laser does not need any change in the read-only CD
materials.  A read-write CD might require some change in composition.
Just because the photons have higher energy doesn't mean that 1 milliwatt of
green light affects an aluminized reflector any more than 1 milliwatt of infrared.

Currently 5 milliwatt orange-red semiconductor lasers are available on the surplus
market, which are a good deal brighter than you need to read CDs.  A green laser
(semiconductor) has been announced by several companies.  I expect to see them
in products as soon as a standard for denser disks is hashed out.

2) read-only CDs are molded from a master, not written with a laser, and would
require only good quality control to be produced with 50% smaller pits.

	geoff steckel (gwes@wjh12.harvard.EDU)
			(...!husc6!wjh12!omnivore!gws)
Disclaimer: I am not affiliated with Sun Microsystems, despite the From: line.
This posting is entirely the author's responsibility.

whit@milton.u.washington.edu (John Whitmore) (03/15/91)

In article <504@dcsun21.dataco.UUCP> mcphail@dcsun18.UUCP (Alex McPhail,DC ) writes:
>In article <1180@aviary.Stars.Reston.Unisys.COM> gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>... if the CD had a higher density (say 8 times) the laser can read 
>>it at this resolution (i.e. you could put 8 time the music on one
>>CD).
>
>Actually, this is not true.  You cannot increase the density of information
>on a compact disk without changing the technology. 
> ...  If you attempt to compress the
>data using closer separation, the optical interference patterns will produce
>intolerable noise

	There are no semiconductor lasers available that can do the
readout task at higher resolution, BUT a frequency-doubling or -tripling
scheme can conceivably be employed.  IBM has shown an 80%-efficient
doubler on a semiconductor laser.  If made commercial in a CD or
similar optical disk, such a frequency-doubling would (in theory)
allow a quadrupling of disk capacity.


I am known for my brilliance,                  John Whitmore
 by those who do not know me well.

jamesv@hplsla.HP.COM (James Vasil) (03/17/91)

Can anyone supply a more precise reference to the following article
that was mentioned in an early response?

> See the Philips technical journal that appeared about that time.

Alternatively, any other references to zero-filling and D/A conversion
would be appreciated.  (I found the Hauser article interesting, but it
concentrated on oversampling in A/D converters.)

Regards,
James

--
James A. Vasil                      Hewlett-Packard Co.
Applications Development Eng.       Lake Stevens Instrument Div., MS 320
jamesv@lsid.hp.com                  8600 Soper Hill Road
(206) 335-2605                      Everett, WA  98205-1298

askst@unix.cis.pitt.edu (Ahmedi S Kayhan) (03/21/91)

L.R. Lawrance is a good one on the issues about sampling, downsampling
(decimation), upsampling(zero-filling-interpolation), etc. 

askst@unix.cis.pitt.edu (Ahmedi S Kayhan) (03/21/91)

In article <105376@unix.cis.pitt.edu> askst@unix.cis.pitt.edu (Ahmedi S Kayhan) writes:
>L.R. Lawrance is a good one on the issues about sampling, downsampling
>(decimation), upsampling(zero-filling-interpolation), etc. 
>
>
Correction:
	The book is: Multirate Digital Signal Processing
	Authors: R. E. Crochiere and L. R. Rabiner
	ISBN: 0-13-605162-6
	Publisher: Prentice-Hall

	This book covers the above-mentioned issues.