[sci.electronics] A question about the Nyquist theorem

F0O@psuvm.psu.edu (02/15/91)

     I was reading an article that states the Nyquist theorem as:
     "The sample frequency must be at least twice the highest frequency
component within the analog signal for an accurate representation of the
analog signal".
     I'd guess here he is talking about complex signals.  But what do you
do with a pure sine wave?  There is only one frequency component in a sine
wave (the fundamental), and if you sample at twice that, you're not going
to get a good representation of the signal.
     i.e. If you have a 60 Hz sine wave, and you sample at 120 Hz, you're
only going to get two points per cycle.
     I'm sure I must be misunderstanding something here. Or does the
Nyquist theorem only apply to complex signals?

                                                             [Tim]

north@manta.NOSC.MIL (Mark H. North) (02/16/91)

In article <91046.095459F0O@psuvm.psu.edu> F0O@psuvm.psu.edu writes:
>
>     I was reading an article that states the Nyquist theorm as:
>     "The sample frequency must be at least twice the highest frequency
>component within the analog signal for an accurate representation of the
>analog signal".

This is an incorrect statement of the Nyquist theorem. The sample freq
must be *greater* than twice the highest freq component...

>     I'd guess here he is talking about complex signals.  But what do you
>do with a pure sine wave?  There is only one frequency component in a sine
>wave(the fundamental), and if you sample at twice that, you're not going
>to get a good representation of the signal.

A pure sine wave is fine, as long as you sample at greater than twice its
frequency. Even though it may appear that you are not getting a good
representation of the signal, it can be shown with Fourier analysis that
the sample set is unique to this component, and hence the exact signal can
be recovered from the sample set.

>     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
>only going to get two points per cycle.

And imagine that those two points are phased such that they land at the
zero crossing of the 60Hz signal. All your samples are zero! This is
why you must sample at greater than 2nu.
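Mark's worst-case scenario is easy to demonstrate numerically. A small sketch (the 60/120 Hz numbers come from the discussion; the phase values and helper function are mine):

```python
import math

def sample(f_sig, f_samp, phase, n):
    """n samples of sin(2*pi*f_sig*t + phase) taken at rate f_samp."""
    return [math.sin(2 * math.pi * f_sig * k / f_samp + phase) for k in range(n)]

# Worst case: every 120 Hz sample of the 60 Hz sine lands on a zero crossing.
zeros = sample(60, 120, 0.0, 8)
print(all(abs(v) < 1e-9 for v in zeros))   # True - the signal "disappears"

# Best case: samples alternate between the positive and negative peaks, but
# the observed amplitude still depends on the unknown sampling phase.
peaks = sample(60, 120, math.pi / 2, 8)
```

Any sampling phase between these two extremes yields some intermediate apparent amplitude, which is exactly why exactly-2x sampling is not enough.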

 
 A good reference is "Digital Signal Analysis" by Samuel D. Stearns. It is
 no longer in print but is available in most engineering libraries. Also,
 there is a new edition of this book published by Prentice-Hall.

 Mark

ertas@athena.mit.edu (Mehmet D Ertas) (02/16/91)

In article <1751@manta.NOSC.MIL>, north@manta.NOSC.MIL (Mark H. North) writes:
|> In article <91046.095459F0O@psuvm.psu.edu> F0O@psuvm.psu.edu writes:

|> >     I'd guess here he is talking about complex signals.  But what do you
|> >do with a pure sine wave?  There is only one frequency component in a sine
|> >wave(the fundamental), and if you sample at twice that, you're not going
|> >to get a good representation of the signal.
|> 
|> A pure sine wave is fine. As long as you sample at greater than twice its
|> freq. Even though it may appear that you are not getting a good represen-
|> tation of the signal it can be shown with Fourier analysis that the
|> sample set is unique to this component and hence the exact signal can
|> be recovered from the sample set.


And just for the sake of completeness, here's how you recover your
original signal:

Take your samples, pass them through a D/A converter, and low-pass filter
the resulting signal with a cutoff frequency of 0.5 times the sampling
frequency. There you go!
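The ideal low-pass reconstruction described here can be sketched in software as Whittaker-Shannon (sinc) interpolation. This is a hand-rolled illustration, not anyone's library API; the 300 Hz rate and 60 Hz tone are arbitrary choices satisfying fs > 2f:

```python
import math

def sinc_reconstruct(samples, fs, t):
    """Ideal low-pass reconstruction (Whittaker-Shannon interpolation):
    sum of samples weighted by sinc pulses centered on the sample instants."""
    total = 0.0
    for k, x in enumerate(samples):
        u = fs * t - k
        total += x if u == 0 else x * math.sin(math.pi * u) / (math.pi * u)
    return total

fs, f = 300.0, 60.0                                     # sampling well above 2*f
xs = [math.sin(2 * math.pi * f * k / fs) for k in range(600)]

# Evaluate between sample instants, near the middle of the record, to keep
# the truncation error of the finite sum small.
t = 300.5 / fs
err = abs(sinc_reconstruct(xs, fs, t) - math.sin(2 * math.pi * f * t))
print(err < 1e-2)  # True: the value between samples is recovered
```

With infinitely many samples the recovery is exact; the finite record leaves only a small truncation error.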

|> 
|> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
|> >only going to get two points per cycle.
|> 
|> 
|> 
|> And imagine that those two points are phased such that they land at the
|> zero crossing of the 60Hz signal. All your samples are zero! This is
|> why you must sample at greater than 2nu.
|> 

That's correct; you cannot recover the amplitude of a frequency component
at exactly half the sampling frequency.

|>   
|>  A good reference is "Digital Signal Analysis" by Samuel D Stearns. It is
|>  no longer in print but is available in most engr. libraries. Also there
|>  is a new edition of this book published by Printice Hall.
|> 
|>  Mark

Another useful reference may be "Digital Signal Processing" by Oppenheim &
Schafer.

M. Deniz Ertas

R_Tim_Coslet@cup.portal.com (02/17/91)

In Article: <1751@manta.NOSC.MIL>
	north@manta.NOSC.MIL (Mark H. North) writes:
>In article <91046.095459F0O@psuvm.psu.edu> F0O@psuvm.psu.edu writes:
>>
>>     I was reading an article that states the Nyquist theorm as:
>>     "The sample frequency must be at least twice the highest frequency
>>component within the analog signal for an accurate representation of the
>>analog signal".
>
>This is an incorrect statement of the Nyquist theorem. The sample freq
>must be *greater* than twice the highest freq component...
>
Looks accurate to me, it says "at least twice" and you say "*greater* than 
twice". Both wordings mean the same to me (although the second is probably
clearer due to the emphasis).

While the Nyquist criterion sets a minimum theoretical sampling rate, practical
sampling rates are generally at least 5 times the maximum frequency component
(and more samples may be required if better reproduction is required; 5 times
is just a "rule of thumb").

                                        R. Tim Coslet

Usenet: R_Tim_Coslet@cup.portal.com             BIX:    r.tim_coslet
Free Kuwait.    Disarm Iraq.    Stop Soviet repression in the Baltic.

terryb.bbs@shark.cs.fau.edu (terry bohning) (02/17/91)

north@manta.NOSC.MIL (Mark H. North) writes:

> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
> >only going to get two points per cycle.
> 
> And imagine that those two points are phased such that they land at the
> zero crossing of the 60Hz signal. All your samples are zero! This is
> why you must sample at greater than 2nu.
> 
The catch is that you *know* you're sampling the highest input freq at
2 points per cycle.  That is, the input signal is bandlimited.  So if
someone gives you a set of all zero samples and you know the sample
rate is 120 Hz, the only frequency it can be is 60 Hz.
The Nyquist theorem says "at least," not "greater than": Oppenheim & Schafer,
"Digital Signal Processing", Prentice-Hall, 1975, pg. 28 bottom.
In reality, of course, since ideal filters are unavailable for
band-limiting, the rate must be higher.

ruck@sphere.UUCP (John R Ruckstuhl Jr) (02/17/91)

In article <Abj8CR200WBN04RPUr@andrew.cmu.edu>, kr0u+@andrew.cmu.edu (Kevin William Ryan) writes:
> F0O@psuvm.psu.edu
> >     I was reading an article that states the Nyquist theorm as:
> >     "The sample frequency must be at least twice the highest frequency
> >component within the analog signal for an accurate representation of the
> >analog signal".

I think this should be "GREATER than twice the highest frequency
component".

> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
> >only going to get two points per cycle.

If you sample ABOVE 120Hz, you will have enough information to
reconstruct your original signal as described.
If you sample at precisely 120 Hz, you will be unable to reconstruct
accurately.

Without loss of generality, consider samples at t = kT, where k is an
integer, and T is (1/120)s:
    x(t) = cos(120*pi*t)
is indistinguishable from
    y(t) = 2 * cos(120*pi*t + (pi/3))
when comparing the sampled data.  Therefore, there is a reconstruction
ambiguity.
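The ambiguity is easy to verify numerically; a quick sketch checking that the two signals above agree at every sample instant t = k/120:

```python
import math

T = 1.0 / 120.0                                       # sampling at exactly twice 60 Hz

for k in range(16):
    t = k * T
    x = math.cos(120 * math.pi * t)                   # x(t) = cos(120*pi*t)
    y = 2 * math.cos(120 * math.pi * t + math.pi / 3) # y(t) = 2*cos(120*pi*t + pi/3)
    assert abs(x - y) < 1e-9                          # identical sampled data

print("indistinguishable at every sample instant")
```

Both signals sample to (-1)^k, so no amount of processing can tell them apart from the samples alone.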

>     No, it applies to all signals, as the _minimum_ possible sampling
> rate that will without error give you the signal back. Sampling a sine
> wave at the Nyquist rate will give you, in the best case:
> 
>     | | | | | | | | |
>     -----------------
>      | | | | | | | |
> 
> where alternate samples hit the plus and minus peaks of the sine wave.
> Practically, to ensure that you aren't just hitting the nodes rather
> than the peaks of the waves, it's best to sample much faster. However,
> it is possible to reconstruct it accurately with Nyquist rate sampling,
> and impossible to reconstruct it accurately if sampled any slower. The
> Nyquist criterion gives the minimum rate at which you can sample and
> retain information. 

I believe this is misleading... One must sample ABOVE the Nyquist rate
(not AT the Nyquist rate) for reconstruction to be possible.
I think this is a fairly common misconception, though.

Sorry kwr -- I don't mean to be impolite.

Best regards,
ruck
-- 
John R Ruckstuhl, Jr	ruck%sphere@cis.ufl.edu, sphere!ruck
University of Florida 	ruck@cis.ufl.edu, uflorida!ruck

zimmer@calvin.stanford.edu (Andrew Zimmerman) (02/17/91)

In article <317@sphere.UUCP> ruck@sphere.UUCP (John R Ruckstuhl Jr) writes:
>In article <Abj8CR200WBN04RPUr@andrew.cmu.edu>, kr0u+@andrew.cmu.edu (Kevin William Ryan) writes:
>> F0O@psuvm.psu.edu
>> >     I was reading an article that states the Nyquist theorm as:
>> >     "The sample frequency must be at least twice the highest frequency
>> >component within the analog signal for an accurate representation of the
>> >analog signal".
>
>I think this should be "GREATER than twice the highest frequency
>component".

Just to nit-pick, it should be "GREATER than twice the bandwidth of the
signal", not twice the highest frequency.

Andrew
zimmer@calvin.stanford.edu

kahhan@bnr.ca (02/17/91)

In article <D04gX2w163w@shark.cs.fau.edu> terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
>north@manta.NOSC.MIL (Mark H. North) writes:
>
>> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
>> >only going to get two points per cycle.
>> 
>> And imagine that those two points are phased such that they land at the
>> zero crossing of the 60Hz signal. All your samples are zero! This is
>> why you must sample at greater than 2nu.
>> 
>The catch is that you *know* you're sampling the highest input freq at
>2 points per cycle.  That is, the input signal is bandlimited.  So if
>someone gives you a set of all zero samples and you know the sample
>rate is 120 Hz, the only frequency it can be is 60 Hz.

Not quite. Another signal that will yield an all-zero sample set at
the 120 Hz sampling rate is DC. In general, you must sample at greater
than twice the maximum frequency, not exactly twice. However, there are
techniques that can be used to sample at exactly twice the frequency,
under certain conditions (like looking for a single frequency, where
you kick the phase of your sampler periodically to avoid sampling
the input waveform only at zero crossings).

A
-- 
----------------------------------------------------------------------------------
Larry Kahhan - NRA, NRA-ILA, CSG, GOA, GSSA |   The opinions expressed here do
                                            |   not necessarily represent the
                                            |   views of the management.
----------------------------------------------------------------------------------

north@manta.NOSC.MIL (Mark H. North) (02/18/91)

In article <39342@cup.portal.com> R_Tim_Coslet@cup.portal.com writes:
>In Article: <1751@manta.NOSC.MIL>
>	north@manta.NOSC.MIL (Mark H. North) writes:
>>
>>This is an incorrect statement of the Nyquist theorem. The sample freq
>>must be *greater* than twice the highest freq component...
>>
>Looks accurate to me, it says "at least twice" and you say "*greater* than 
>twice". Both wordings mean the same to me (although the second is probably
>clearer due to the emphasis).
>

I'm sorry, I must be losing it. "At least twice" implies that twice is good
enough, which it isn't. No?

>While the Nyquist criteria sets a minimum theoretical sampling rate, practical
>sampling rates are generally at least 5 times max frequency component (and
>more samples may be required if better reproduction is required, 5 times is
>just a "rule of thumb").
>

The reason for requiring a much higher sampling rate than the theoretical
minimum is that no filter is perfect, so your nominal 'highest' frequency
is not really the highest. If you knew for sure in advance that your
signal had *no* components greater than nu, say, then a sampling rate
faster than 2nu + epsilon would buy you no better a reproduction.

Mark

north@manta.NOSC.MIL (Mark H. North) (02/18/91)

In article <D04gX2w163w@shark.cs.fau.edu> terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
>north@manta.NOSC.MIL (Mark H. North) writes:
>
>> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
>> >only going to get two points per cycle.
>> 
>> And imagine that those two points are phased such that they land at the
>> zero crossing of the 60Hz signal. All your samples are zero! This is
>> why you must sample at greater than 2nu.
>> 
>The catch is that you *know* you're sampling the highest input freq at
>2 points per cycle.  That is, the input signal is bandlimited.  So if
>someone gives you a set of all zero samples and you know the sample
>rate is 120 Hz, the only frequency it can be is 60 Hz.
>The Nyquist theorem is at least, not greater than. Oppenheim & Schafer,
>"Digital Signal Processing", Prentice-Hall, 1975, pg. 28 bottom.
>In reality, of course, since ideal filters are unavailable for
>band-limiting, the rate must be higher.

I knew that 8^). Actually, I got to thinking about it since this discussion
came up, and I believe I emailed someone mentioning the above possibility,
but this would be, as they say, a trivial (and useless) case. Wouldn't you
agree?

Mark

north@manta.NOSC.MIL (Mark H. North) (02/18/91)

In article <1758@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
>In article <D04gX2w163w@shark.cs.fau.edu> terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
>>north@manta.NOSC.MIL (Mark H. North) writes:
>>
>>> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
>>> >only going to get two points per cycle.
>>> 
>>> And imagine that those two points are phased such that they land at the
>>> zero crossing of the 60Hz signal. All your samples are zero! This is
>>> why you must sample at greater than 2nu.
>>> 
>>The catch is that you *know* you're sampling the highest input freq at
>>2 points per cycle.  That is, the input signal is bandlimited.  So if
>>someone gives you a set of all zero samples and you know the sample
>>rate is 120 Hz, the only frequency it can be is 60 Hz.
>>The Nyquist theorem is at least, not greater than. Oppenheim & Schafer,
>>"Digital Signal Processing", Prentice-Hall, 1975, pg. 28 bottom.
>>In reality, of course, since ideal filters are unavailable for
>>band-limiting, the rate must be higher.
>
>I knew that 8^). Actually, I got to thinking about it since this discussion
>came up and I believe I emailed someone mentioning the above possibility but
>this would be as they say a trivial (and useless) case wouldn't you agree?
>
>Mark
>

Sorry to answer my own post but I take that last paragraph back. I think
you are wrong after all. Look at it this way -- suppose I tell you I'm
going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
between -1 and 1 volt. You may sample at 120 Hz. You get all identical
samples at 0.5 volts. Which signal did I send?

Mark

ruck@sphere.UUCP (John R Ruckstuhl Jr) (02/18/91)

In article <D04gX2w163w@shark.cs.fau.edu>, terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
> north@manta.NOSC.MIL (Mark H. North) writes:
> > >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
> > >only going to get two points per cycle.

> > And imagine that those two points are phased such that they land at the
> > zero crossing of the 60Hz signal. All your samples are zero! This is
> > why you must sample at greater than 2nu.

> The catch is that you *know* you're sampling the highest input freq at
> 2 points per cycle.  That is, the input signal is bandlimited.  So if
> someone gives you a set of all zero samples and you know the sample
> rate is 120 Hz, the only frequency it can be is 60 Hz.

Or 0 Hz.  And supposing the signal you sampled *was* 60 Hz.  You have no
magnitude information.  You cannot reconstruct.

> The Nyquist theorem is at least, not greater than. Oppenheim & Schafer,
> "Digital Signal Processing", Prentice-Hall, 1975, pg. 28 bottom.

Yes.  They say "at least twice the highest frequency".  But the equation
they give is not ambiguous:  Wmax < pi/Tsample (or, 2*Wmax < Wsample)
(same page).

Terry, please be very careful of misinformation.

Best regards,
ruck.
-- 
John R Ruckstuhl, Jr	ruck%sphere@cis.ufl.edu, sphere!ruck
University of Florida 	ruck@cis.ufl.edu, uflorida!ruck

tomb@hplsla.HP.COM (Tom Bruhns) (02/18/91)

terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
>zimmer@calvin.stanford.edu (Andrew Zimmerman) writes:
>> Just to nit-pick, it should be "GREATER then twice the bandwidth of the 
>> signal", not twice the highest frequency.
>> 
>
>Wow, that's great! So I only need to sample my 10 Hz bandwidth signal
>which is centered at 1 MHz at 20 Hz!

Just so--except that with exactly 20Hz sampling, you cannot get information
on the amplitude of the component at exactly 1MHz, since that is a harmonic
of 1/2 the sampling frequency.  You would want to adjust your sampling
frequency slightly, by a factor of 5/1000000...
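The folding arithmetic behind that adjustment can be sketched with a tiny helper (my own illustration, not from the thread): the apparent baseband frequency of a tone after sampling.

```python
def alias(f, fs):
    """Apparent (folded) baseband frequency of a tone at f sampled at rate fs."""
    r = f % fs
    return min(r, fs - r)

# At exactly 20 Hz, the 1 MHz carrier folds onto 0 Hz (a harmonic of fs/2),
# so its amplitude is unrecoverable:
print(alias(1_000_000, 20.0))      # 0.0

# Stretching fs by the factor (1 + 5/1_000_000) moves the folded carrier
# to mid-band, about fs/4:
print(alias(1_000_000, 20.0001))   # ~5 Hz
```

That is, the slight rate change parks the band center a quarter of the way into the folded spectrum instead of on a fold point.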

paul@frcs.UUCP (Paul Nash) (02/19/91)

Thus spake terryb.bbs@shark.cs.fau.edu (terry bohning):

> zimmer@calvin.stanford.edu (Andrew Zimmerman) writes:
> > Just to nit-pick, it should be "GREATER then twice the bandwidth of the 
> > signal", not twice the highest frequency.
> 
> Wow, that's great! So I only need to sample my 10 Hz bandwidth signal
> which is centered at 1 MHz at 20 Hz!

Correct!  Now all you need to do is to pass the reconstructed
analogue output through a 1 MHz +/- 10 Hz bandpass filter
(with infinitely fast roll-off), and you will have a 10 Hz
signal centred on 1 MHz.  The amplitude might be a bit low, 
but it will be there.  Try reading about it -- it _does_ work.

 ---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---
Paul Nash				   Free Range Computer Systems cc
paul@frcs.UUCP				      ...!uunet!m2xenix!frcs!paul

kr0u+@andrew.cmu.edu (Kevin William Ryan) (02/19/91)

amichiel@rodan.acs.syr.edu (Allen J Michielsen)
>In article <D04gX2w163w@> terryb.bbs@shark.cs.fau.edu (terry bohning) writes:
>>north@manta.NOSC.MIL (Mark H. North) writes:
>>> >     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
>>> >only going to get two points per cycle.
>>> And imagine that those two points are phased such that they land at the
>>> zero crossing of the 60Hz signal. All your samples are zero! This is 
>>The catch is that you *know* you're sampling the highest input freq at
>>2 points per cycle.  That is, the input signal is bandlimited.  So if
>>someone gives you a set of all zero samples and you know the sample
>>rate is 120 Hz, the only frequency it can be is 60 Hz.
> 
>And what theory are you using to eliminate all other even multiples of 60
>like 120....

    Then you've violated the Nyquist criterion (shame, shame), which
states that the _maximum_ frequency must be less than one half the
sampling rate. If the signal contains frequencies above this, good luck
reconstructing it, because you can't. You don't have the information,
and in fact your reconstructed signal will contain frequencies not in
the original - frequencies aliased in from the signal frequencies that
were greater than 1/2 the sampling rate.

    This is usually enforced with some sort of prefiltering of the input
frequency. The bandlimiting requirement is _very_ important. 
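A small sketch of that aliasing (the 70/50 Hz pair is my own example): a tone above half the 120 Hz rate produces literally the same samples as its folded image, so the reconstruction cannot distinguish them.

```python
import math

fs = 120.0                 # sampling rate; fs/2 = 60 Hz
n = 32

# A 70 Hz tone (above fs/2) and the 50 Hz tone it folds onto (120 - 70 = 50)
# produce identical sample values:
hi = [math.cos(2 * math.pi * 70 * k / fs) for k in range(n)]
lo = [math.cos(2 * math.pi * 50 * k / fs) for k in range(n)]

print(max(abs(a - b) for a, b in zip(hi, lo)) < 1e-9)  # True
```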

                                                    kwr

Internet: kr0u+@andrew.cmu.edu

kr0u+@andrew.cmu.edu (Kevin William Ryan) (02/19/91)

    Mea culpa. The requirement that you must sample _above_ 2w is
correct: sampling _at_ twice the highest frequency produces ambiguous
results, as phase and therefore magnitude information is lost. Thus the
proper definition is:

        max freq signal < 1/2 sampling rate

in order to preserve information. 

    It's been too damn long since that signals course... :-)

                                                    kwr

Internet: kr0u+@andrew.cmu.edu

robf@mcs213f.cs.umr.edu (Rob Fugina) (02/19/91)

In article <1759@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
>Sorry to answer my own post but I take that last paragraph back. I think
>you are wrong after all. Look at it this way -- suppose I tell you I'm
>going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
>between -1 and 1 volt. You may sample at 120 Hz. You get all identical
>samples at 0.5 volts. Which signal did I send?
>Mark

You sent a DC signal of 0.5 volts.  If it were AC, the samples would
be alternating positive and negative with the same magnitude.

Rob  robf@cs.umr.edu

kdq@demott.com (Kevin D. Quitt) (02/19/91)

In article <1759@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
>
>Sorry to answer my own post but I take that last paragraph back. I think
>you are wrong after all. Look at it this way -- suppose I tell you I'm
>going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
>between -1 and 1 volt. You may sample at 120 Hz. You get all identical
>samples at 0.5 volts. Which signal did I send?

    You've reduced this past absurdity.  If I know it *must* be one or the
other, a single measure will almost always be sufficient.  The discussion
revolves around reconstructing *any* waveform (requires >2x sampling).


-- 
 _
Kevin D. Quitt         demott!kdq   kdq@demott.com
DeMott Electronics Co. 14707 Keswick St.   Van Nuys, CA 91405-1266
VOICE (818) 988-4975   FAX (818) 997-1190  MODEM (818) 997-4496 PEP last

terryb.bbs@shark.cs.fau.edu (terry bohning) (02/19/91)

ruck@sphere.UUCP (John R Ruckstuhl Jr) writes:
> > The Nyquist theorem is at least, not greater than. Oppenheim & Schafer,
> > "Digital Signal Processing", Prentice-Hall, 1975, pg. 28 bottom.
> 
> Yes.  They say "at least twice the highest frequency".  But the equation
> they give is not ambiguous:  Wmax < pi/Tsample (or, 2*Wmax < Wsample)
> (same page).
> 
> Terry, please be very careful of misinformation.
> 
OK, OK.  I'll say it "I  M A D E   A  M I S T A K E".  I'll
say 5 Hail Mary's and build 10 linear phase anti-aliasing filters!

It's too bad you can't make one on this board without getting hate 
mail in your box (not you John, I'm referring to the type of 
people who probably wonder why they're never invited to the meetings).

north@manta.NOSC.MIL (Mark H. North) (02/19/91)

In article <2189@umriscc.isc.umr.edu> robf@mcs213f.cs.umr.edu (Rob Fugina) writes:
>In article <1759@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
>>Sorry to answer my own post but I take that last paragraph back. I think
>>you are wrong after all. Look at it this way -- suppose I tell you I'm
>>going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
>>between -1 and 1 volt. You may sample at 120 Hz. You get all identical
>>samples at 0.5 volts. Which signal did I send?
>>Mark
>
>You sent a DC signal of 0.5 volts.  If it were AC, you the samples would
>be alternating positive and negative of the same magnitude.
>
Yes, thanks for pointing that out. How about all zero samples? Yes, I know,
pretty damn likely it was the DC signal. I think I made my point, you must
sample at >2nu to reconstruct the signal.

Mark

grayt@Software.Mitel.COM (Tom Gray) (02/19/91)

In article <1751@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
}In article <91046.095459F0O@psuvm.psu.edu> F0O@psuvm.psu.edu writes:
}>
}>     I was reading an article that states the Nyquist theorm as:
}>     "The sample frequency must be at least twice the highest frequency
}>component within the analog signal for an accurate representation of the
}>analog signal".
}
}This is an incorrect statement of the Nyquist theorem. The sample freq
}must be *greater* than twice the highest freq component...
}

This is an incorrect correction. The original statement is accurate.
Sampling at twice or greater than the highest frequency in a
band-limited signal is all that is required for Nyquist sampling.

}>     I'd guess here he is talking about complex signals.  But what do you
}>do with a pure sine wave?  There is only one frequency component in a sine
}>wave(the fundamental), and if you sample at twice that, you're not going
}>to get a good representation of the signal.
}
}A pure sine wave is fine. As long as you sample at greater than twice its
}freq. Even though it may appear that you are not getting a good represen-
}tation of the signal it can be shown with Fourier analysis that the
}sample set is unique to this component and hence the exact signal can
}be recovered from the sample set.
}
}>     i.e. If you have a 60HZ sine wave, and you sample at 120HZ, you're
}>only going to get two points per cycle.
}
}And imagine that those two points are phased such that they land at the
}zero crossing of the 60Hz signal. All your samples are zero! This is
}why you must sample at greater than 2nu.
}
This is a common misconception. The sampling pulses are of finite width.
The shape of the wave is preserved within the sampling pulse. This
information allows representation of a signal at exactly 1/2
the Nyquist frequency.

The origin of this misconception is confusion about the sampling
methods assumed for the Nyquist theorem. Nyquist assumed
natural sampling, in which the shape of the signal is preserved
by multiplication with the sampling pulse. This is a simple
multiplication of the two signals in the time domain.
Digital sample storage cannot do this: only one value
of the signal can be obtained per sample (not the
continuous representation throughout the sampling
period which is obtained for natural sampling). The digital
method of sampling is commonly called flat-top sampling.
Flat-top sampling cannot represent signals at the
half-sampling frequency. It is a limitation of flat-top
sampling, and not of sampling in general (including
Nyquist sampling).

If you have textbooks proving Nyquist by multiplying
with instantaneous pulses and referring to Dirac
delta functions, I have textbooks which properly prove
Nyquist with pulses of any width. The instantaneous-pulse
case is only a special case and is not true
in general, since it implies limitations which do not
occur for pulses of finite width (i.e., all REAL sampling
pulses).
 
} 
} A good reference is "Digital Signal Analysis" by Samuel D Stearns. It is
} no longer in print but is available in most engr. libraries. Also there
} is a new edition of this book published by Printice Hall.
}

Most textbooks play fast and loose with Nyquist.

jfa0522@hertz.njit.edu (john f andrews ece) (02/19/91)

an added practical note: after you get through all of the theory,
sample at about 2.5 or more times the Nyquist frequency to get everything
in your signal (you will have noise and such up to there or so).

Of course, better still is to seriously prefilter your analog signal to well
below the Nyquist. This will ensure that all the nasties above your
*theoretical* Nyquist will be more than 78 dB or so down, and thus less than
1/2 LSB of your A/D converter (assuming you are using one). Then the noise will
be below the input threshold of the ADC.
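The 78 dB figure is consistent with a 12-bit converter: half an LSB is full scale divided by 2^13, which sits about 20*log10(2^13) dB down. A quick check (the 12-bit choice is my inference from the number, not stated in the post):

```python
import math

bits = 12
# Half an LSB relative to full scale is 1 / 2**(bits + 1),
# i.e. 20*log10(2**(bits + 1)) dB below full scale.
half_lsb_db = 20 * math.log10(2 ** (bits + 1))
print(round(half_lsb_db, 1))   # 78.3
```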


-----------------------------------------------------------------------------
john f andrews                SYSOP           The Biomedical Engineering BBS
    24 hrs                300/1200/2400               (201) 596-5679
-----------------------------------------------------------------------------
INTERNET jfa0522@hertz.njit.edu    LabRat@faraday.njit.edu    CIS 73710,2600
-----------------------------------------------------------------------------

stebbins@musial.ucr.edu (john stebbins) (02/20/91)

In article <1772@manta.NOSC.MIL>, north@manta.NOSC.MIL (Mark H. North) writes:
|> In article <2189@umriscc.isc.umr.edu> robf@mcs213f.cs.umr.edu (Rob
Fugina) writes:
|> >In article <1759@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H.
North) writes:
|> >>Sorry to answer my own post but I take that last paragraph back. I think
|> >>you are wrong after all. Look at it this way -- suppose I tell you I'm
|> >>going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
|> >>between -1 and 1 volt. You may sample at 120 Hz. You get all identical
|> >>samples at 0.5 volts. Which signal did I send?
|> >>Mark
|> >
|> >You sent a DC signal of 0.5 volts.  If it were AC, you the samples would
|> >be alternating positive and negative of the same magnitude.
|> >
|> Yes, thanks for pointing that out. How about all zero samples? Yes, I know,
|> pretty damn likely it was the DC signal. I think I made my point, you must
|> sample at >2nu to reconstruct the signal.
|> 
|> Mark


Here's a different example coupled with a question.

Suppose I was sampling a 20 kHz sine wave at 44 kHz, and my first sample
just happened to occur at the positive peak of the sine wave.  My next
sample would occur a little before (and thus above) the negative peak.
And the next would occur a little more before (and a little more below)
the next positive peak.  This continues until a zero crossing, at which point
my samples start growing instead of decreasing.  It's pretty easy to see why
filtering the samples back down to 20 kHz will reproduce a 20 kHz signal, but
how does the filtering recover the original amplitude of my sine wave?
It appears that what I'll get is an AM-modulated signal that is some
combination of the 20 kHz signal and the 44 kHz sample rate.

By the way, I chose 20 kHz and 44 kHz because they are a combination that
is supposed to work (i.e. CD rates).

John Stebbins
stebbins@ucrmath.ucr.edu 

jewett@hpl-opus.hpl.hp.com (Bob Jewett) (02/20/91)

> > Just to nit-pick, it should be "GREATER then twice the bandwidth of the 
> > signal", not twice the highest frequency.
> 
> Wow, that's great! So I only need to sample my 10 Hz bandwidth signal
> which is centered at 1 MHz at 20 Hz!

Yes, that's almost true.  You also have to worry about how quickly the
sidebands and unwanted signals roll off, and about folding the
components of the signal of interest on top of each other.  The same
considerations apply to standard IF mixers as well.  A practical sampled
system with margins might be:  center of IF = 1MHz+50Hz, sampling rate =
200Hz.  This could be done with crystal filters.

Bob

mac@idacrd.UUCP (Robert McGwier) (02/21/91)

From article <6607@healey>, by grayt@Software.Mitel.COM (Tom Gray):
> This is a common misconception. The sampling pulses are of finite widht.
> The shape of the wave is preserved within the sampling pulse. This
> information allows representation of a signal at exactly 1/2
> the Nyquist freqency.
> 
> Most text books play fast and lose with Nyquist.



I pose the following question.  Suppose you are sampling at rate N samples
per second, and you see a constant value V for every A/D sample.  Is the
frequency of the signal which produced those samples 0 or N/2?  Since
I obviously posed this question because I know you CANNOT discriminate
between these two cases, what exactly is it, then, that Nyquist IS telling
us?  Or am I asking about apples and oranges?

Bob

-- 
____________________________________________________________________________
    My opinions are my own no matter	|	Robert W. McGwier, N4HY
    who I work for! ;-)			|	CCR, AMSAT, etc.
----------------------------------------------------------------------------

whit@milton.u.washington.edu (John Whitmore) (02/21/91)

In article <12122@ucrmath.ucr.edu> stebbins@musial.ucr.edu (john stebbins) writes:

>Suppose I was sampling a 20 kHz sine wave at 44 kHz and my first sample
>just happened to occur at the positive peak of the sine wave.  My next
>sample would occur a little before (and thus above) the negative peak.
>And the next would occur a little more before (and a little more below)
>the next positive peak.  This continues until a zero crossing, at which point
>my samples start growing instead of decreasing.  It's pretty easy to see why
>filtering the sample back down to 20 kHz will reproduce a 20 kHz signal, but
>how does the filtering recover the original amplitude of my sine wave?
>It appears that what I'll get is an AM modulated signal that is some
>combination of the 20 kHz signal and the 44 kHz sample rate.

	Your last comment is the key to the solution.  The combination
is the difference frequency, 24 kHz.  A sum of equal-amplitude 20 kHz
and 24 kHz pure sine waves gives exactly the AM-modulated signal that
you describe.
	So, it is the responsibility of the CD playback unit (because
these numbers are appropriate for high-frequency audio in a CD player)
to correctly erase the spurious 24 kHz tone.  To the best of my knowledge,
no CD players actually work at 1x sampling, but ALL (even the oldest
and cheapest) first calculate a bandwidth-limited intermediate sample
from the recorded samples (with a FIR filter, comprised of simple
arithmetic operations).  The simplest mechanism, 2x oversampling,
gets rid of that 24 kHz signal and introduces instead (because
the new Nyquist limit is 44 kHz) some junk at 46 kHz or so.
The analog filter then can be trusted with the task of getting
rid of the 44 kHz-and-up junk, while passing (with low distortion)
all of the 22 kHz-and-under signal.

	John Whitmore
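Whitmore's claim is easy to verify numerically: at 44 kHz sample instants, a pure 20 kHz cosine and the "AM-looking" beat built from equal halves of 20 kHz and 24 kHz cosines are indistinguishable. A sketch (44 kHz rather than 44.1 kHz, keeping the post's round numbers):

```python
import math

fs = 44_000.0
ts = [n / fs for n in range(22)]

f_sig, f_img = 20_000.0, 24_000.0   # the signal and its image at fs - f_sig

pure = [math.cos(2 * math.pi * f_sig * t) for t in ts]

# The beat signal: equal halves of 20 kHz and 24 kHz cosines
beat = [0.5 * (math.cos(2 * math.pi * f_sig * t)
             + math.cos(2 * math.pi * f_img * t)) for t in ts]

# At the sample instants the two are indistinguishable; the playback
# filter decides which one you hear.
for a, b in zip(pure, beat):
    assert abs(a - b) < 1e-9
```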

gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) (02/21/91)

In article <883@idacrd.UUCP> mac@idacrd.UUCP (Robert McGwier) writes:
 From article <6607@healey>, by grayt@Software.Mitel.COM (Tom Gray):
 > The shape of the wave is preserved within the sampling pulse. This
 > information allows representation of a signal at exactly 1/2
 > the Nyquist frequency.
 
 I pose the following question.  Suppose you are sampling at rate N samples
 per second, and you see a constant value V for your A/D sample.  Is the
 frequency of the signal which produced those samples 0 or N/2?  Since

AARRGGGHHH!  The Nyquist criterion requires that the sampling rate be GREATER
THAN TWICE the highest frequency of interest.  Note also that the amplitude
response near Fs/2 rolls off towards 0  (sin X / X response).
	geoff steckel (gwes@wjh12.harvard.EDU)
			(...!husc6!wjh12!omnivore!gws)
Disclaimer: I am not affiliated with Sun Microsystems, despite the From: line.
This posting is entirely the author's responsibility.

grayt@Software.Mitel.COM (Tom Gray) (02/21/91)

In article <2189@umriscc.isc.umr.edu> robf@mcs213f.cs.umr.edu (Rob Fugina) writes:
>In article <1759@manta.NOSC.MIL> north@manta.NOSC.MIL (Mark H. North) writes:
>>Sorry to answer my own post but I take that last paragraph back. I think
>>you are wrong after all. Look at it this way -- suppose I tell you I'm
>>going to send you one of two signals, either 1 volt 60 Hz or a DC voltage
>>between -1 and 1 volt. You may sample at 120 Hz. You get all identical
>>samples at 0.5 volts. Which signal did I send?
>>Mark
>
>You sent a DC signal of 0.5 volts.  If it were AC, you the samples would
>be alternating positive and negative of the same magnitude.
>
>Rob  robf@cs.umr.edu

This is an accurate statement. The ability to reconstruct signals
at the half-Nyquist depends on the sampling method used.
Digital systems use instantaneous (or flat-top) sampling and
cannot reconstruct half-Nyquist signals. Other systems can
use perfect sampling, in which the shape of the sampled wave is
preserved within the finite-width sampling period. This type
of sampling can reconstruct half-Nyquist signals.

So in general f<= 2B (assuming that you can use perfect
sampling) but for the special case of digital systems
F<2B because of the limitations of the sampling 
method used.
 
I have not seen Nyquist's paper on his theorem, so I cannot say for
certain what result he derived. However, Mischa Schwartz's text
Information Transmission, Modulation, and Noise gives a derivation
of the Sampling Theorem using finite-width pulses. The
instantaneous-pulse case is used as a limit.

I was taught the Sampling Theorem as the multiplication of
a finite-width sampling pulse of unit height with the
sampled signal. The instantaneous-pulse case was then
derived as a limiting case. In those days (early 70's)
PAM (Pulse Amplitude Modulation) systems were used
in telephony. These systems definitely used one
of two methods - perfect sampling for smaller
systems and flat-top sampling (by a resonant transfer
method) for larger systems. The distinctions in
sampling method were important since the flat-top
method produces another sinc effect in the transfer
function.

Nowadays digital systems prevail and the old
perfect sampling systems are no longer important.
The derivation with instantaneous pulses is suitable
for digital systems since the flat top sampling
is assumed by it. However the math has not changed
and as Mischa Schwartz says f >= 2B is the
Nyquist criterion.

Hope this is coherent.


greenba@gambia.crd.ge.com (ben a green) (02/21/91)

In article <4402@eastapps.East.Sun.COM> gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) writes:

   In article <883@idacrd.UUCP> mac@idacrd.UUCP (Robert McGwier) writes:
    From article <6607@healey>, by grayt@Software.Mitel.COM (Tom Gray):
    > The shape of the wave is preserved within the sampling pulse. This
    > information allows representation of a signal at exactly 1/2
    > the Nyquist frequency.

    I pose the following question.  Suppose you are sampling at rate N samples
    per second, and you see a constant value V for your A/D sample.  Is the
    frequency of the signal which produced those samples 0 or N/2?  Since

   AARRGGGHHH!  The Nyquist criterion requires that the sampling rate be GREATER
   THAN TWICE the highest frequency of interest.  Note also that the amplitude
   response near Fs/2 rolls off towards 0  (sin X / X response).

Furthermore, the claim that "the shape of the wave is preserved within the
sampling pulse" would imply that an analytic signal could be reproduced
entirely from ONE sample, since all the derivatives of the signal are
available in the sample. Can't believe Nyquist said that. 
--
Ben A. Green, Jr.              
greenba@crd.ge.com
  Speaking only for myself, of course.

marshall@elric.dec.com (Hunting the Snark) (02/21/91)

In article <4402@eastapps.East.Sun.COM>, gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) writes...
>In article <883@idacrd.UUCP> mac@idacrd.UUCP (Robert McGwier) writes:
> From article <6607@healey>, by grayt@Software.Mitel.COM (Tom Gray):
> > The shape of the wave is preserved within the sampling pulse. This
> > information allows representation of a signal at exactly 1/2
> > the Nyquist frequency.
> 
> I pose the following question.  Suppose you are sampling at rate N samples
> per second, and you see a constant value V for your A/D sample.  Is the
> frequency of the signal which produced those samples 0 or N/2?  Since
> 
>AARRGGGHHH!  The Nyquist criterion requires that the sampling rate be GREATER
>THAN TWICE the highest frequency of interest.  

Wrong answer. The answer to the question posed is that you are sampling a DC
signal. If it were N/2, then the sign of the samples would alternate. The only
time you can't distinguish DC from N/2 is when you just happen to sample at the
zero crossings of the N/2 frequency signal. 

The way to get around this degenerate case is to sample randomly with an
_average_ sample rate of N samples per second. Actually, you do not even need
random sampling: instead of sampling with a 50% duty cycle you could, for
instance, use a 25% duty cycle and still meet the Nyquist criterion.



               />                                             
  (           //------------------------------------------------------------(
 (*)OXOXOXOXO(*>=S=T=O=R=M=B=R=I=N=G=E=R--------                             \
  (           \\--------------------------------------------------------------)
               \>                                        Steven Marshall

"Hard to say Ma'am. I think my cerebellum just fused" -- Calvin

siegman@sierra.STANFORD.EDU (siegman) (02/22/91)

In article <D9FJX4w163w@shark.cs.fau.edu> terryb.bbs@shark.cs.fau.edu
(terry bohning) writes:

> Wow, that's great! So I only need to sample my 10 Hz bandwidth signal
> which is centered at 1 MHz at 20 Hz!

Yup.  That's exactly right.
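Siegman isn't joking: this is bandpass (under)sampling. A numerical sketch with illustrative numbers chosen so the arithmetic comes out exact (fs = 25 Hz divides 1 MHz evenly, so the carrier aliases to DC and a tone 3 Hz above it aliases to 3 Hz):

```python
import math

fc = 1_000_000.0   # carrier (1 MHz); all values illustrative
fs = 25.0          # undersampling rate; 1e6 / 25 = 40000 exactly,
                   # so the carrier itself aliases to DC
f_off = 3.0        # a tone 3 Hz above the carrier, inside a 10 Hz band

ts = [n / fs for n in range(40)]

rf = [math.cos(2 * math.pi * (fc + f_off) * t) for t in ts]
baseband = [math.cos(2 * math.pi * f_off * t) for t in ts]

# The undersampled RF tone gives the same samples as a 3 Hz tone,
# so the 10 Hz-wide band's information survives the 25 Hz sampling.
for a, b in zip(rf, baseband):
    assert abs(a - b) < 1e-6
```

The catch, as the earlier posts note, is that everything outside the band of interest must be filtered off before sampling, or it folds on top of the signal.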

myers@hpfcdj.HP.COM (Bob Myers) (02/23/91)

>It appears that what I'll get is an am modulated signal that is some 
>combination of the 20khz signal and the 44khz sample rate.

Yup, that's what you'll get, all right.  But now run through the math that
describes how to make this "AM signal," and you'll find that one of the 
components you'll get out of all this is a 20 kHz signal at (or at least in 
proportion to) the original amplitude, plus some other stuff that will be 
above the cutoff of the required LP filter (assume a "brick wall" at 22 kHz if 
you like).  You CAN recover the original.

Sampling theory is not always (some would say "never") intuitive.  But the
math does describe what we should expect out of it, and agrees with what
you'll find in the "real world" pretty well when you go to try it. 


Bob Myers  KC0EW   HP Graphics Tech. Div.|  Opinions expressed here are not
                   Ft. Collins, Colorado |  those of my employer or any other
myers@fc.hp.com                          |  sentient life-form on this planet.

grayt@Software.Mitel.COM (Tom Gray) (02/25/91)

In article <GREENBA.91Feb21095446@gambia.crd.ge.com> greenba@gambia.crd.ge.com (ben a green) writes:
:In article <4402@eastapps.East.Sun.COM> gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) writes:
:
:   In article <883@idacrd.UUCP> mac@idacrd.UUCP (Robert McGwier) writes:
:    From article <6607@healey>, by grayt@Software.Mitel.COM (Tom Gray):
:    > The shape of the wave is preserved within the sampling pulse. This
:    > information allows representation of a signal at exactly 1/2
:    > the Nyquist frequency.
:
:Furthermore, the claim that "the shape of the wave is preserved within the
:sampling pulse" would imply that an analytic signal could be reproduced
:entirely from ONE sample, since all the derivatives of the signal are
:available in the sample. Can't believe Nyquist said that. 
:--

 The claim is that in natural sampling, the wave shape DURING the
sample period is preserved. For example, sampling could take place
with an analog gate.

The sampling technique is

   sample = signal during sampling period
          = 0 otherwise

No claim of any other properties was made.


 

ingoldsb@ctycal.UUCP (Terry Ingoldsby) (02/26/91)

Pursuing the discussion of the Nyquist theorem, I have a question
about practical sampling applications.  If you have a sine wave at
frequency f, which you sample at just over 2f samples per second then
the Nyquist theorem is satisfied.  I know that by performing a Fourier
transform it is possible to recover all of the signal, i.e. deduce that
the original wave was at frequency f.

Note that this is different from just playing connect-the-dots with the
samples.  Most of the algorithms I've heard of used with CD players
perform various kinds of interpolation, oversampling, etc., but these all
seem to be elaborate versions of connect-the-dots.  I'm not aware that
the digital signal processing customarily done will restore the wave to
anything resembling its original shape.

I suspect that there is something I am missing here.  Can anyone clarify
the situation?

E.g.

Original:


      x x         x x         x x
     x   x       x   x       x   x
    x     x     x     x     x     x
           x   x       x   x       x
            x x         x x

    ^    ^    ^    ^    ^    ^    ^
Sample points 


Connect the dots reproduction (you can draw in the lines, I hate ascii drawings)

                   x
         x    
    x
              x              x    x
                        x

The only thing I can think is that the resulting waveform must contain
frequencies greater than the Nyquist limit allows, thus permitting them
to be filtered out with a brick wall filter (approachable with digital
filtering) letting the original come through unaltered.  Can someone confirm
my belief?

-- 
  Terry Ingoldsby                ingoldsb%ctycal@cpsc.ucalgary.ca
  Land Information Services                 or
  The City of Calgary       ...{alberta,ubc-cs,utai}!calgary!ctycal!ingoldsb

jamesv@hplsla.HP.COM (James Vasil) (02/28/91)

> Wow, that's great! So I only need to sample my 10 Hz bandwidth signal
> which is centered at 1 MHz at 20 Hz!

You might wish to read "Undersampling reduces data-acquisition costs
for select applications" to see how some people are doing just this.
The article, written by Jeff Kirsten & Tarlton Fleming of Maxim
Integrated Products, is in the June 21, 1990 issue of EDN magazine, 
pp 217-228.

Regards,
James Vasil
Applications Development Engineer

jbuck@galileo.berkeley.edu (Joe Buck) (02/28/91)

In article <625@ctycal.UUCP>, ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
|> Pursuing the discussion of the Nyquist theorem, I have a question
|> about practical sampling applications.  If you have a sine wave at
|> frequency f, which you sample at just over 2f samples per second then
|> the Nyquist theorem is satisfied.  I know that by performing a Fourier
|> transform it is possible to recover all of the signal, i.e. deduce that
|> the original wave was at frequency f.
|> 
|> Note that this is different than just playing connect the dots with the
|> samples.  Most of the algorithms I've heard of used with CD players
|> perform a variety of interpolation, oversampling, etc., but these all
|> seem to be elaborate versions of connect the dots.  I'm not aware that
|> the digital signal processing customarily done will restore the wave to
|> anything resembling its original.

The Nyquist sampling theorem says more than just that you need to sample
at a rate higher than twice the highest frequency.  It also gives the
formula for the reconstructed time series.

If you have a signal with no frequency components higher than f = 1/(2T),
where T is the spacing between samples, then the original waveform x(t)
may be found exactly at any point by computing the sum

x(t) = sum from m=-infinity to infinity x[m] * sinc (pi (t - m*T) / T)

where sinc(x) is just sin(x)/x (note: sinc(0) is 1).

This is exactly what you get when you pass a series of Dirac delta functions
with weights x[m] through an ideal low pass filter with cutoff frequency
1/2T; the impulse response of such a filter is sinc(pi*t/T).
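The sum above can be tried directly. A small sketch with a truncated version of the reconstruction formula (an illustrative 1 Hz sine sampled at 8 Hz; the truncation of the infinite sum is why the tolerance is loose):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

f, fs = 1.0, 8.0   # a 1 Hz sine sampled at 8 Hz (well above 2 Hz)
T = 1.0 / fs

def x_sample(m):
    return math.sin(2 * math.pi * f * m * T)

def reconstruct(t, half_width=500):
    # Truncated form of  x(t) = sum_m x[m] * sinc(pi (t - m*T) / T)
    m0 = int(round(t / T))
    return sum(x_sample(m) * sinc(math.pi * (t - m * T) / T)
               for m in range(m0 - half_width, m0 + half_width + 1))

# The sum recovers the waveform *between* the sample points:
for t in [0.0625, 0.21, 0.33, 0.47]:
    assert abs(reconstruct(t) - math.sin(2 * math.pi * f * t)) < 1e-2
```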

You can't make an ideal low pass filter; for one thing, it's noncausal.
All you can do is approximate this.  To know more about how this works,
you need to study some digital signal processing; then you can go laugh
at your CD or DAT sales critter when he attempts to tell you about why
one system is better than another.

Example of CD salespeak: pushing oversampling as an advanced technical
feature.  Oversampling is simply inserting zeros between the digital
samples and thus increasing the sampling rate.  It's used because then you
can use cheaper, less complex analog filters; it reduces the system cost.
Still, some sales critters think it's an advanced technical extra.
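The zero-stuffing Joe describes, followed by a digital lowpass, can be sketched as follows. The FIR here is a hypothetical Hann-windowed ideal interpolator, not any particular player's filter:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

# A slow test tone: 0.05 cycles per input sample
x = [math.sin(2 * math.pi * 0.05 * n) for n in range(200)]

# Step 1 of 2x oversampling: insert a zero after every sample
up = []
for v in x:
    up.extend([v, 0.0])

# Step 2: digital lowpass at the old Nyquist frequency.  Here, a
# hypothetical Hann-windowed ideal interpolator h[k] = sinc(pi k / 2).
H = 40
taps = [sinc(math.pi * k / 2.0) * (0.5 + 0.5 * math.cos(math.pi * k / H))
        for k in range(-H, H + 1)]

def fir(sig, n):
    return sum(taps[H + k] * sig[n - k]
               for k in range(-H, H + 1) if 0 <= n - k < len(sig))

# Even outputs reproduce the original samples exactly...
assert abs(fir(up, 140) - x[70]) < 1e-12
# ...and odd outputs land on the sine halfway between them.
assert abs(fir(up, 141) - math.sin(2 * math.pi * 0.05 * 70.5)) < 1e-2
```

So zero-stuffing plus the FIR really does interpolate; the point of doing it is that the remaining images sit far above the audio band, where a cheap analog filter can remove them.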

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

todd@appmag.com (Todd Day) (02/28/91)

jbuck@galileo.berkeley.edu (Joe Buck) writes:

%Example of CD salespeak: pushing oversampling as an advanced technical
%feature.  Oversampling is simply inserting zeros between the digital
%samples and thus increasing the sampling rate.  It's used because then you
%can use cheaper, less complex analog filters; it reduces the system cost.
%Still, some sales critters think it's an advanced technical extra.

Not all CD players just insert zeroes.  I used the same double oversampling
chip and the same DAC as my Denon 1500 CD player (well, I used the serial
versions) in my 56000 project board.  The oversampling chip did do
interpolation (and it was slightly more complex than bilinear (cubic spline?
I don't remember)).

I know the math works for inserting zeroes if you use a sinc function to
reconstruct the signal.  However, how does it work out for reconstruction
with a near step function?  I've never run through the math on that one...
Quickly off the top of my head, it doesn't look like it will work...

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

wilf@sce.carleton.ca (Wilf Leblanc) (03/01/91)

jbuck@galileo.berkeley.edu (Joe Buck) writes:

>[deleted]

>Example of CD salespeak: pushing oversampling as an advanced technical
>feature.  Oversampling is simply inserting zeros between the digital
>samples and thus increasing the sampling rate.  It's used because then you
>can use cheaper, less complex analog filters; it reduces the system cost.
>Still, some sales critters think it's an advanced technical extra.

This kills me too.  Especially 8x oversampling !
(I always thought oversampling was used because analog filters usually
have a horrible phase response near the cutoff.  However, if you want
to spend enough money, you can get very near linear phase response
with an analog filter.  So, you are right).

When I bought my CD player, it said on the front panel 'Dual D/A
converters'.  For fun, I asked the salesperson what that meant.
The reply was rather funny, and of course completely inaccurate.

What does this really mean ?  (I figured maybe two distinct D/A's rather
than 1 D/A and two sample and holds ??).

>--
>Joe Buck
>jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	
--
Wilf LeBlanc                                 Carleton University
Internet: wilf@sce.carleton.ca               Systems & Computer Eng.
    UUCP: ...!uunet!mitel!cunews!sce!wilf    Ottawa, Ont, Canada, K1S 5B6

jbuck@galileo.berkeley.edu (Joe Buck) (03/01/91)

In article <1991Feb28.084837.7506@appmag.com>, todd@appmag.com (Todd Day) writes:
|> Not all CD players just insert zeroes.  I used the same double oversampling
|> chip and the same DAC as my Denon 1500 CD player (well, I used the serial
|> versions) in my 56000 project board.  The oversampling chip did do
|> interpolation (and it was slightly more complex than bilinear (cubic spline?
|> I don't remember)).
|> 
|> I know the math works for inserting zeroes if you use a sinc function to
|> reconstruct the signal.  However, how does it work out for reconstruction
|> with a near step function?  I've never run through the math on that one...
|> Quickly off the top of my head, it doesn't look like it will work...

If a step-function is used, then you get a sinc function in the frequency
domain.  What happens is that you have a rolloff at high frequencies (that
is, this introduces a distortion).  Some manufacturers use this anyway, and
then add another filter to boost the high frequencies by a corresponding
amount to compensate for this distortion.
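The size of that rolloff is easy to sketch: a step (zero-order hold) reconstruction has the sin(x)/x magnitude response mentioned earlier in the thread, and at the top of the audio band the droop is a few dB:

```python
import math

def zoh_gain(f, fs):
    # Magnitude response of a zero-order hold (staircase DAC output):
    # |sin(pi f / fs) / (pi f / fs)|
    x = math.pi * f / fs
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

fs = 44_100.0
droop_db = 20 * math.log10(zoh_gain(20_000.0, fs))

# About -3.2 dB at 20 kHz -- the droop a compensating filter must undo.
assert -3.3 < droop_db < -3.0
```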

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

cjwein@watcgl.waterloo.edu (Chris J. Wein) (03/01/91)

In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>>[deleted]
>
>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>>Still, some sales critters think it's an advanced technical extra.
>
>This kills me too.  Especially 8x oversampling !
>(I always thought oversampling was used because analog filters usually
>have a horrible phase response near the cutoff.  However, if you want
>to spend enough money, you can get very near linear phase response
>with an analog filter.  So, you are right).
>

I also understand that oversampling increases the 'transition region'
of the filter thus allowing for lower order filters.  However, the sharper
the cutoff, the more ringing will be present in the step response.  This
ringing might be below audible levels though.  Comments?

As for the CDs that do not oversample, what type of filter is generally used?
Theoretically, the transition region for the filter is about 4.1 kHz
(cutoff frequency is 20 kHz and stopband at 44.1-20=24.1 kHz), but in
practice I think you could get away with much more, since there shouldn't
be much energy above 12 kHz, which extends the transition region to about
12 kHz.  Nevertheless, to get the necessary attenuation (which is what,
40 dB+?) over 12 kHz is a demanding spec.

So what type of filter?  Chebyshev type 2? 

-- 
==============================================================================
 Chris Wein                           | cjwein@watcgl.waterloo.edu 
 Computer Graphics Lab, CS Dept.      | cjwein@watcgl.uwaterloo.ca
 University of Waterloo               | (519) 888-4548 

rea@egr.duke.edu (Rana E. Ahmed) (03/01/91)

In article <11515@pasteur.Berkeley.EDU> jbuck@galileo.berkeley.edu (Joe Buck) writes:
>In article <625@ctycal.UUCP>, ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>|> Pursuing the discussion of the Nyquist theorem, I have a question
>|> about practical sampling applications.  If you have a sine wave at
>|> frequency f, which you sample at just over 2f samples per second then
>|> the Nyquist theorem is satisfied.  I know that by performing a Fourier
>|> transform it is possible to recover all of the signal, i.e. deduce that
>|> the original wave was at frequency f.
>
>
>If you have a signal with no frequency components higher than f = 1/(2T),
>where T is the spacing between samples, then the original waveform x(t)
>may be found exactly at any point by computing the sum
>
>x(t) = sum from m=-infinity to infinity x[m] * sinc (pi (t - m*T) / T)
>
>where sinc(x) is just sin(x)/x (note: sinc(0) is 1).
>
>Joe Buck

Suppose we sample a pure sine wave of frequency 'f' at the Nyquist rate,
i.e., at 2f samples/sec (exact), such that we start sampling the sine wave
at the time of its zero crossing. Thus, if we assume uniform sampling,
then all subsequent samples will have values equal to zero, i.e., x[m]=0 for 
all m (Assuming instantaneous sampling, i.e., no Hold Time for samples).
Intuitively, if we pass these samples (each of zero voltage, say) through an 
ideal low-pass filter, then we should expect to get zero voltage at the output
of filter. In other words, reconstructed signal voltage =0 for all t. 
(see also the formula for x(t) above ).
How can we recover the pure sine in this sampling strategy? 
Am I missing something ??  

Comments are appreciated.

Rana Ahmed
========================================================================

mpurtell@iastate.edu (Purtell Michael J) (03/01/91)

In article <1991Feb28.084837.7506@appmag.com> appmag!todd@hub.ucsb.edu writes:
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>%Example of CD salespeak: pushing oversampling as an advanced technical
>%feature.  Oversampling is simply inserting zeros between the digital
>%samples and thus increasing the sampling rate.  It's used because then you
>%can use cheaper, less complex analog filters; it reduces the system cost.
>%Still, some sales critters think it's an advanced technical extra.
>
>Not all CD players just insert zeroes.  I used the same double oversampling
>chip and the same DAC as my Denon 1500 CD player (well, I used the serial
>versions) in my 56000 project board.  The oversampling chip did do
>interpolation (and it was slightly more complex than bilinear (cubic spline?
>I don't remember)).

A Signetics chip I've seen that does oversampling performs sinusoidal
interpolation, in addition to interpolating over uncorrectable
errors in the data stream (from a CD).  

If you're going to have interpolation for errors, which I think IS important,
you might as well do it for oversampling too, even if you can't hear the
difference.  If nothing else it eases the constraints on the output filter.
-- 
-- Michael Purtell --  | "In a hundred years, | There's an Old Irish Recipe for
mpurtell@iastate.edu   |  we'll all be dead." |   Longevity: Leave the Table
Iowa State University  |  -- The January Man  |  Hungry.  Leave the Bed Sleepy.
                "slow is real"                |    Leave the Tavern Thirsty.

jbuck@galileo.berkeley.edu (Joe Buck) (03/02/91)

In article <1347@cameron.egr.duke.edu>, rea@egr.duke.edu (Rana E. Ahmed) writes:
|> Suppose we sample a pure sine wave of frequency 'f' at the Nyquist rate,
|> i.e., at 2f samples/sec (exact), such that we start sampling the sine wave
|> at the time of its zero crossing. Thus, if we assume uniform sampling,
|> then all subsequent samples will have values equal to zero, i.e., x[m]=0 for 
|> all m (Assuming instantaneous sampling, i.e., no Hold Time for samples).
|> Intuitively, if we pass these samples (each of zero voltage, say) through an 
|> ideal low-pass filter, then we should expect to get zero voltage at the output
|> of filter. In other words, reconstructed signal voltage =0 for all t. 
|> (see also the formula for x(t) above ).
|> How can we recover the pure sine in this sampling strategy? 
|> Am I missing something ??  

Yes.  The theorem has a "<" and you're assuming "<=".  Sampling produces aliasing.
If the sampling rate is fs, then a sine wave at frequency q looks exactly
like a sine wave at frequency q + m*fs, where m is any integer.  So if you want
to recover your sine wave at frequency f, sampling at 2f isn't enough: you
need 2f + delta (for an arbitrarily small delta).
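A quick numerical illustration of why the inequality is strict, using an illustrative 60 Hz sine:

```python
import math

f = 60.0   # an illustrative 60 Hz sine

def samples(fs, count=16):
    # Sample sin(2 pi f t) starting at t = 0 (a zero crossing)
    return [math.sin(2 * math.pi * f * n / fs) for n in range(count)]

# At exactly 2f, every sample lands on a zero crossing: all (numerically)
# zero, indistinguishable from silence.
assert all(abs(v) < 1e-9 for v in samples(2 * f))

# At any rate above 2f the sample set is no longer degenerate.
assert max(abs(v) for v in samples(2.5 * f)) > 0.9
```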

--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

whit@milton.u.washington.edu (John Whitmore) (03/02/91)

In article <1991Mar1.004711.15100@watcgl.waterloo.edu> cjwein@watcgl.waterloo.edu (Chris J. Wein) writes:
>In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:

>>>Example of CD salespeak: pushing oversampling as an advanced technical
>>>feature.  Oversampling is simply inserting zeros ...
>>>Still, some sales critters think it's an advanced technical extra.

	It isn't, mainly because it's NOT an extra.  All CD players
oversample, and use a FIR filter (i.e. digital filtering).  The
cheapest, oldest ones use 2x oversampling, and don't advertise the
'feature'.

>>This kills me too.  Especially 8x oversampling !
>>If you want to spend enough money, you can get very near linear 
>>phase response with an analog filter.  So, you are right).

	Not exactly.  You'd have to spend money on things like thermostats
to keep a really high-Q filter (needed for a brickwall response) stable,
because REGARDLESS of cost, ALL the components in an analog filter are
poorly controlled.  Typical digital frequency accuracy (if you
aren't trying hard) is 0.001%; that kind of performance is
equivalent to an analog pole with a Q of 100,000 (which is not
feasible).

>I also understand that oversampling increases the 'transition region'
>of the filter thus allowing for lower order filters. 
>
>As for the CD's that do not oversample, what type of filter is generally used?
>  [more about why this would be a difficult filter to make]

	As I mentioned above, there ARE no non-oversampling CD
players, for exactly the reasons you gave.  A steep-slope
cutoff filter with good passband linearity is going to require
a bevy of trimmed capacitors/inductors, and would (1) cost
mightily, (2) require difficult factory adjustments, (3) not
survive shipping without needing readjustment, (4) fail when
the room temperature changed.
	The last 2x-oversampled CD player I took apart had four
trimmer components (inductors) in the filter.  4x-oversampling
filters typically have merely selected components (not trimmed).
I'd trust the long-term reliability of the 4x models before 
the 2x ones.
	IMHO, 8x and 16x oversampling are simple hype; the available
capacitors and other filter components cannot be characterized
accurately for the range of frequencies involved (20 Hz to 200000 Hz)
so if I were designing a filter for 'em, I'd just use the same
design as for the 4x units.  The part of the filtering that
ISN'T hype is the digital part (the number of taps in the FIR
{Finite Impulse Response} filter is a major factor in both cost
and performance).  None of the sales info ever mentions this,
though...

	John Whitmore

ajf@maximo.enet.dec.com (Adam J Felson) (03/02/91)

In article <17510@milton.u.washington.edu>, whit@milton.u.washington.edu (John
Whitmore) writes:
::In article <1991Mar1.004711.15100@watcgl.waterloo.edu>
cjwein@watcgl.waterloo.edu (Chris J. Wein) writes:
::>In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf
Leblanc) writes:
::>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
::
::>>>Example of CD salespeak: pushing oversampling as an advanced technical
::>>>feature.  Oversampling is simply inserting zeros ...
::>>>Still, some sales critters think it's an advanced technical extra.
::
::	It isn't, mainly because it's NOT an extra.  All CD players
::oversample, and use a FIR filter (i.e. digital filtering).  The
::cheapest, oldest ones use 2x oversampling, and don't advertise the
::'feature'.
::

The first CD players did NOT have oversampling.  They ran @ 44.1 kHz with a
pretty steep 20 kHz analog filter.  


 
__a__d__a__m__

jroth@allvax.enet.dec.com (Jim Roth) (03/02/91)

In article <1991Mar1.191955@maximo.enet.dec.com>, ajf@maximo.enet.dec.com (Adam J Felson) writes...
>In article <17510@milton.u.washington.edu>, whit@milton.u.washington.edu (John
>> ...other stuff deleted...

>The first CD players did NOT have oversampling.  They ran @ 44.1 KHZ with a
>pretty
>steep 20KHZ analog filter.  

This is false.

The first players were from Sony and Philips.  The Sony DACs ran at the Nyquist
rate with analog lowpass filters.  The Philips players used 4x oversampling and
noise shaping with a 14-bit DAC, followed by a rather gentle 3rd-order Bessel
filter.  The oversampling FIR filter included compensation for the slight
rolloff of the Bessel filter.

See the Philips technical journal that appeared about that time.

- Jim

mcmahan@netcom.COM (Dave Mc Mahan) (03/03/91)

 In a previous article, appmag!todd@hub.ucsb.edu writes:
>Not all CD players just insert zeroes.  I used the same double oversampling
>chip and the same DAC as my Denon 1500 CD player (well, I used the serial
>versions) in my 56000 project board.  The oversampling chip did do
>interpolation (and it was slightly more complex than bilinear (cubic spline?
>I don't remember)).
>
>I know the math works for inserting zeroes if you use a sinc function to
>reconstruct the signal.  However, how does it work out for reconstruction
>with a near step function?  I've never run through the math on that one...
>Quickly off the top of my head, it doesn't look like it will work...

Well, if you look at the frequency content of a step function, you will find
that it contains harmonics that stretch to infinity.  To get a perfect
representation of that function when you reconstruct it, you would need
samples that are spaced infinitely close together.  Since this can't be done
(without actually using the original function, since it is the only
representation of this type of sampling that meets the Nyquist criterion),
you can never reconstruct the original step.  If you wish to sample this
function and then reconstruct it, you will first need to lowpass filter it
to get rid of all harmonics greater than 1/2 your sample frequency.  This
will instantly transmute your nice step function into something resembling it
but containing overshoot and/or ringing right at the step edge.  You can
reconstruct THAT waveform exactly, but it's not going to be a perfect step.

Vertical edges get lost during sampling.  A close representation will be
generated during reconstruction, but you will never get the perfect step
(or squarewave) that you originally had.
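That overshoot is the Gibbs phenomenon, and its size can be sketched numerically: keep only the harmonics of an ideal square wave below some cutoff (a stand-in for any ideal bandlimiting filter) and look at the peak near the edge.

```python
import math

def bandlimited_square(t, n_harmonics=99):
    # Partial Fourier series of a +/-1 square wave: harmonics 1, 3, ..., 197
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * t) / (2 * k + 1)
                               for k in range(n_harmonics))

# Scan just after the edge at t = 0 for the overshoot peak.
peak = max(bandlimited_square(i * 1e-4) for i in range(1, 2000))

# Gibbs overshoot: about 9% of the full jump, regardless of the cutoff --
# raising the cutoff squeezes the ringing toward the edge but never
# shrinks the peak.
assert 1.15 < peak < 1.20
```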


>Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu


   -dave


-- 
Dave McMahan                            mcmahan@netcom.com
					{apple,amdahl,claris}!netcom!mcmahan

mcmahan@netcom.COM (Dave Mc Mahan) (03/03/91)

 In a previous article, rea@egr.duke.edu (Rana E. Ahmed) writes:
>Suppose we sample a pure sine wave of frequency 'f' at the Nyquist rate,
>i.e., at 2f samples/sec (exact), such that we start sampling the sine wave
>at the time of its zero crossing. Thus, if we assume uniform sampling,
>then all subsequent samples will have values equal to zero, i.e., x[m]=0 for 
>all m (Assuming instantaneous sampling, i.e., no Hold Time for samples).
>Intutively, if we pass these samples (each of zero voltage (say)) through an 
>ideal low-pass filter, then we should expect to get zero voltage at the output
>of filter. In other words, reconstructed signal voltage =0 for all t. 
>(see also the formula for x(t) above ).
>How can we recover the pure sine in this sampling strategy? 
>Am I missing something ??  

You have to go back and carefully read the criterion that Nyquist stated.  He
did NOT say that you can sample AT twice the highest frequency for perfect
representation.  He said that you have to sample at GREATER THAN twice the
highest frequency for perfect representation.  You are quite correct in your
analysis above.  However, sampling at exactly 2f doesn't meet the Nyquist
criterion for the original waveform, so you will distort the information
content of the original waveform.
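Rana's scenario is easy to check numerically (a sketch of my own; the 60 Hz figure is borrowed from earlier in the thread, and the sample counts are arbitrary):

```python
import numpy as np

f = 60.0                                  # tone frequency, Hz
k = np.arange(200)

# Sample at exactly 2f, starting on a zero crossing:
# every sample lands on a zero crossing, so the set is all zeros
x_at = np.sin(2 * np.pi * f * k / (2 * f))

# Sample just above 2f and the samples are no longer degenerate
x_above = np.sin(2 * np.pi * f * k / (2.2 * f))

worst_at = np.abs(x_at).max()        # ~0 (rounding error only)
worst_above = np.abs(x_above).max()  # a healthy fraction of full scale
```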

   -dave



-- 
Dave McMahan                            mcmahan@netcom.com
					{apple,amdahl,claris}!netcom!mcmahan

gaby@Stars.Reston.Unisys.COM ( UNISYS) (03/05/91)

>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>
>
>Example of CD salespeak: pushing oversampling as an advanced technical
>feature.  Oversampling is simply inserting zeros between the digital
>samples and thus increasing the sampling rate.  It's used because then you
>can use cheaper, less complex analog filters; it reduces the system cost.
>Still, some sales critters think it's an advanced technical extra.

I think the oversampling is not time interpolation (which by Nyquist
does not add any more information to the original signal), but more
error-correction oversampling.  I.e., the same bit is sampled multiple
times to determine its value.  I assume that this is done by sampling
over the duration (space on the CD) of the bit.  Since the same bit
value is sampled multiple times (eight in the case of 8-times
oversampling) I assume some voting procedure is used to determine the
"true" (or best-estimate) bit value.  I assume this results in
fewer tracking and sampling errors.  For an ideal system, it also implies
that if the CD had a higher density (say 8 times) the laser could read
it at this resolution (i.e. you could put 8 times the music on one
CD).

I think it is a little generous to think that the industry truly
does "oversampling" as is implied by signal processing connotations.
This (as you say) requires more computation to ensure proper
interpolation of the sampled data.  If the interpolation (filtering)
is done wrong, then the quality of the output would go down...

- Jim Gaby

  gaby@rtc.reston.unisys.com

tohall@mars.lerc.nasa.gov (Dave Hall (Sverdrup)) (03/05/91)

In article <wilf.667759065@rigel.sce.carleton.ca>, wilf@sce.carleton.ca (Wilf Leblanc) writes...
>jbuck@galileo.berkeley.edu (Joe Buck) writes:
> 
>>[deleted]
> 
>When I bought my CD player, it said on the front panel 'Dual D/A
>converters'.  For fun, I asked the salesperson what that meant.
>The reply was rather funny, and of course completely inaccurate.
> 
>What does this really mean ?  (I figured maybe two distinct D/A's rather
>than one D/A and two sample and holds).

        OK, I may be advertising my ignorance of CD player design here,
but it seems to me it would be very difficult to produce right and left 
channel (stereo) outputs without 2 separate D/A's. My lack of expert 
knowledge leads me to believe that the 'Dual D/A converters' logo
is like building a car with a V-8 engine and advertising 'Dual Quad
Cylinder Heads' as a unique technical advancement!  What is the real
story? Let's hear from some CD technology wizards.

hedstrom@sirius.UVic.CA (Brad Hedstrom) (03/06/91)

In article <1991Mar5.155748.29328@eagle.lerc.nasa.gov> tohall@mars.lerc.nasa.gov (Dave Hall (Sverdrup)) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>When I bought my CD player, it said on the front panel 'Dual D/A
>>converters'.  For fun, I asked the salesperson what that meant.
>>The reply was rather funny, and of course completely inaccurate.
>> 
>>What does this really mean ?  (I figured maybe two distinct D/A's rather
>>than one D/A and two sample and holds).
 
> OK, I may be advertising my ignorance of CD player design here,
> but it seems to me it would be very difficult to produce right and left 
> channel (stereo) outputs without 2 separate D/A's. My lack of expert 
> knowledge leads me to believe that the 'Dual D/A converters' logo
> is like building a car with a V-8 engine and advertising 'Dual Quad
> Cylinder Heads' as a unique technical advancement!  What is the real
> story? Let's hear from some CD technology wizards.

I don't profess to be a CD (or any other kind of) guru, but here goes.
The samples are stored on the CD serially, alternating between L and R
channels.  This means that for each sample interval, two samples
are read from the CD:

sample	L	R	L	R	L	R	...
-----------------------------------------------------------
time	t = 0		t = t1		t = t2		...

where the sampling period, t(n) - t(n-1) = 1/(44.1 kHz) = 22.7 usec.  The
L and R samples were taken at the same time (in parallel) but can only
be read from the CD one at a time (serially).  Assuming the L is read
first, that means that the corresponding R is delayed by 1/(44.1
kHz)/2 = 11.3 usec.  Since the L and R are available in a serial
stream, only one D/A is required.  At the output of the D/A are two
parallel analog sections composed of sample and holds, filters,
amplifiers, etc.  This is where the "stereoness" is created.

Now since CD player manufacturers are always looking for that little
thing that distinguishes their product from the rest, they started
offering CD players with 2 D/A's.  The reason: as shown above there is
a time delay between L and R which translates to a phase difference
between the two channels.  By using 2 D/A's and delaying the L sample
by 11.3 usec, the two channels could be put back in phase.

Sounds very impressive.  Of course there are numerous "audiophiles" who
claim to be able to audibly distinguish single and dual D/A players.
To put this phase difference in perspective, assume that your speakers
are *optimally* placed in an acoustically perfect room.  Further assume
that you, the only listener (or object for that matter) in the room,
have placed yourself equidistant from the two optimally placed
speakers.  Now move the R speaker (the one delayed by 11.3 usec) about
1/8" closer than its optimal position.  Now you have compensated for
the time delay.  But don't you dare change your location in the room;
you'll throw everything totally out of whack!
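For anyone who wants to check the arithmetic (a sketch; the speed of sound is taken as roughly 343 m/s at room temperature, an assumption not stated in the post):

```python
fs = 44100.0                           # CD sampling rate, Hz
delay = 1.0 / fs / 2.0                 # half a sample period, ~11.3 us
speed_of_sound = 343.0                 # m/s, room temperature (approximate)

# Distance sound travels during the inter-channel delay
offset_m = speed_of_sound * delay      # ~3.9 mm
offset_inches = offset_m / 0.0254      # ~0.15", i.e. about an eighth of an inch
```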



--
_____________________________________________________________________________
Brad Hedstrom                  Electrical and Computer Engineering Department
University of Victoria                     Victoria, British Columbia, Canada
UUCP: ...!{uw-beaver,ubc-vision}!uvicctr!hedstrom       ``I don't think so.''
Internet: hedstrom@sirius.UVic.CA                  ``Homey don't play that.''

mzenier@polari.UUCP (Mark Zenier) (03/06/91)

In article <wilf.667759065@rigel.sce.carleton.ca> wilf@sce.carleton.ca (Wilf Leblanc) writes:
>When I bought my CD player, it said on the front panel 'Dual D/A
>converters'.  For fun, I asked the salesperson what that meant.
>The reply was rather funny, and of course completely inaccurate.
>
>What does this really mean ?  (I figured maybe two distinct D/A's rather
>than 1 D/A and two sample and holds ??).

Some of the first ones used 1 D/A and sample and holds. The 
BBC wanted to broadcast monophonic off of some CD's.  With
the half sample time delay, it made the signal sound terrible.

They had to switch to one of the CD players with two 14 bit dacs.

Ancient history.

Mark Zenier  mzenier@polari.uucp  markz@ssc.uucp

todd@appmag.com (Todd Day) (03/06/91)

gaby@Stars.Reston.Unisys.COM ( UNISYS) writes:

%I think the oversampling is not time interpolation

Take it from someone who's played with the innards of more than a couple
CD players that it is.

%(which by Nyquist
%does not add any more information to the originial signal)

In "real-world" electronics terms, it doesn't add any more info, but it
allows you to build an analog output filter that doesn't take away more info.

%, but more 
%error correction oversampling.  I.e. the same bit is sampled multiple
%times to determine its value.

No need to do this... it's usually real obvious or the Reed-Solomon
codes will allow recovery of the original info if a couple bits are
incorrect.

%I assume that this is done by sampling
%over the duration (space on the CD) of the bit.

The bits are not sampled off the disc like analog data.  They are
read off the disc much like serial data comes out of your modem
into your computer.

%I think it is a little generous to think that the industry truely 
%does "oversampling" as it is implied by signal processing connotations.
%This (as you say) requires more compute requirements to ensure proper
%interpolation of the sampled data.  If the interpolation (filtering)
%is done wrong, then the quality of the output would go down...

But it isn't done wrong and it is done on the fly.  There are a lot of
specialized chips on the market for just this purpose.  I used one of
them in my DSP board.  Remember, these chips are just mini-computers
with a built in program and they generally run at about 2 MHz.  They
only need to produce an update every 1/88kHz.  Even for a microprocessor
running at 2 MHz, that's all day long.

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

edf@sm.luth.se (Ove Edfors) (03/06/91)

wilf@sce.carleton.ca (Wilf Leblanc) writes:

>jbuck@galileo.berkeley.edu (Joe Buck) writes:

>>[deleted]

>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>>Still, some sales critters think it's an advanced technical extra.

>This kills me too.  Especially 8x oversampling !
>(I always thought oversampling was used because analog filters usually
>have a horrible phase response near the cutoff.  However, if you want
>to spend enough money, you can get very near linear phase response
>with an analog filter.  So, you are right).

> ... [ stuff deleted ] ...

>--
>Wilf LeBlanc                                 Carleton University
>Internet: wilf@sce.carleton.ca               Systems & Computer Eng.
>    UUCP: ...!uunet!mitel!cunews!sce!wilf    Ottawa, Ont, Canada, K1S 5B6

---
  Let me first point out that I'm not very familiar with CD players, so
please forgive me if this posting is not compatible with contemporary CD
technology.
---

  The reason for oversampling is, as mentioned above, that analog filters
with a very sharp cutoff are expensive and/or have a horrible phase response.
With oversampling it's possible to use (generalized) linear-phase
discrete-time filters prior to the D/A conversion.  As a result, one can
use much cheaper analog filters on the output.

This medium is not ideal for graphical illustrations, but I'll try anyway.

Let:       fs  - sampling frequency ( 44 kHz )
           Fs  - new sampling frequency ( L*44 kHz )

Consider the following amplitude spectrum on a CD:

                             ^  
        --           --------|--------           --
           \       /         |         \       /
             \   /           |           \   /
        -------+-------------+-------------+--------->  
             -fs/2                       fs/2

Reconstruction of this signal requires an analog filter with a
sharp cutoff at fs/2.

After insertion of (L-1) 0's between the samples we get:

                             ^  
--     -----     -----     --|--     -----     -----     -----
  \   /     \   /     \   /  |  \   /     \   /     \   /     \ 
   \ /       \ /       \ /   |   \ /       \ /       \ /       \
---------+--------------+----+----+--------------+--------------->
       -Fs/2          -fs/2     fs/2            Fs/2

Now ... use a discrete-time filter (generalized linear phase) with a
sharp cutoff at fs/2 (i.e. at fs/(2*Fs) in normalized frequency).

This operation will give us the following spectrum
(which is a copy of the first one, except that the lobes are
farther apart):

                             ^ 
                           --|--  
                          /  |  \ 
                         /   |   \
---------+--------------+----+----+--------------+--------------->
       -Fs/2          -fs/2     fs/2            Fs/2


Reconstruction of this signal is much "cheaper" since the analog
filter on the output could have a much wider transition region. 
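The picture above can be reproduced numerically.  A sketch of my own (block length, oversampling factor L, and tone bin are arbitrary choices): a tone placed exactly on DFT bin k, zero-stuffed by L, shows the original line plus images around multiples of the old sampling rate; the discrete-time lowpass at the old fs/2 would keep only the first.

```python
import numpy as np

N, L, k = 1024, 4, 24                      # block length, oversampling factor, tone bin
x = np.sin(2 * np.pi * k * np.arange(N) / N)

up = np.zeros(N * L)
up[::L] = x                                # insert (L-1) zeros between the samples

spec = np.abs(np.fft.rfft(up))
lines = np.flatnonzero(spec > spec.max() * 0.9)
# lines: bin 24 (the tone) plus images at bins 1000, 1048 and 2024,
# i.e. mirrored around the old sampling rate and its multiples
```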


--------------------------------------------------------------------
Ove Edfors                          PHONE:    Int. +46 920 910 65
Div. of Signal Processing                     Dom.  0920 - 910 65
University of Lulea                 FAX:      Int. +46 920 720 43
S-951 87  LULEA                               Dom.  0920 - 720 43
SWEDEN                              E-MAIL:   edf@sm.luth.se
--------------------------------------------------------------------

edf@sm.luth.se (Ove Edfors) (03/06/91)

OOPS ... I forgot to throw in the punch line in my posting on oversampling.

  The essence of my last posting is that OVERSAMPLING is not just
"insertion of zeros between the samples", it also includes discrete
time filtering of the signal prior to D/A conversion.

--------------------------------------------------------------------
Ove Edfors                          PHONE:    Int. +46 920 910 65
Div. of Signal Processing                     Dom.  0920 - 910 65
University of Lulea                 FAX:      Int. +46 920 720 43
S-951 87  LULEA                               Dom.  0920 - 720 43
SWEDEN                              E-MAIL:   edf@sm.luth.se
--------------------------------------------------------------------

touch@grad2.cis.upenn.edu (Joseph D. Touch) (03/07/91)

In article <3463@polari.UUCP> mzenier@polari.UUCP (Mark Zenier) writes:
>Some of the first ones used 1 D/A and sample and holds. The 
>BBC wanted to broadcast monophonic off of some CD's.  With
>the half sample time delay, it made the signal sound terrible.


WHAT???  I saw the few posts about the time delay allegedly IMPOSED
by using a single D/A.  A little thought reveals that using 4 sample
and holds and 1 D/A removes the time delay completely, and S/H's are
cheaper than D/A's.  

S/H1 locks the D/A output for the left channel as it comes off the
CD, S/H2 does the same for the right channel.  S/H's 3 and 4 grab
the values of S/H 1 and 2 just before the next set of values gets
locked in, resulting in an output completely in phase.

Joe Touch
PhD Candidate
Dept of Computer and Information Science
University of Pennsylvania

Time	1	2	3	4

CD	L1  R1  L2  R2  L3  R3  L4  R4 

D/A	L1  R1  L2  R2  L3  R3  L4  R4 

S/H 1	L1------L2------L3------L4-----

S/H 2       R1------R2------R3------R4------

S/H 3	       L1------L2------L3------L4------

S/H 4	       R1------R2------R3------R4------
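A toy walk-through of that timing (my own sketch, strings standing in for held voltages; not any real chip's behavior): because S/H 3 and 4 latch at the same instant, the pairs emerge aligned.

```python
# One conversion "frame" per loop pass; the single D/A converts L then R.
pairs = [("L1", "R1"), ("L2", "R2"), ("L3", "R3"), ("L4", "R4")]

sh1 = sh2 = None
aligned = []
for left, right in pairs:
    # Just before the next pair is converted, S/H 3 and 4 grab
    # S/H 1 and 2 at the same instant -- this is the realignment step
    if sh1 is not None:
        aligned.append((sh1, sh2))
    sh1 = left    # D/A converts L, S/H 1 holds it
    sh2 = right   # D/A converts R, S/H 2 holds it (11.3 us later)
aligned.append((sh1, sh2))
# aligned now holds L/R pairs released simultaneously, in phase
```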

jbuck@galileo.berkeley.edu (Joe Buck) (03/07/91)

In article <1180@aviary.Stars.Reston.Unisys.COM>, gaby@Stars.Reston.Unisys.COM ( UNISYS) writes:
|> I think the oversampling is not time interpolation (which by Nyquist
|> does not add any more information to the originial signal), but more 
|> error correction oversampling.  I.e. the same bit is sampled multiple
|> times to determine its value.  I assume that this is done by sampling
|> over the duration (space on the CD) of the bit. 

When you don't know what you're talking about, please save the
network bandwidth by refraining from posting.  You clearly haven't
a clue about the way error correction is done on CDs (it's done
by error correcting codes) and your phrasing ("I think", "I assume")
indicates that you're making it up.


--
Joe Buck
jbuck@galileo.berkeley.edu	 {uunet,ucbvax}!galileo.berkeley.edu!jbuck	

mcmahan@netcom.COM (Dave Mc Mahan) (03/07/91)

 In a previous article, gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>
>>
>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>
>I think the oversampling is not time interpolation (which by Nyquist
>does not add any more information to the originial signal), but more 
>error correction oversampling.  I.e. the same bit is sampled multiple
>times to determine its value.

I think (once again, I too am no CD expert) that this is incorrect.  A little
thought on the subject should show this.  Data on a CD follows an industry
standard format.  It has to, or nobody could use the same CD player for all
the variety of CD's that have been released.  This alone indicates that you
can't "sample the same bit multiple times to determine its value".  I guess
you could try to spin the CD twice as fast to read the same track twice
during the same amount of time and then do some kind of voting to determine
which bit is correct, but I doubt this is the case.  CD drive motors are
all standard to keep costs down.  It is much more effective to use the built-in
error correction coding on a CD to correct the random bit flips that occur.
The scheme used is pretty powerful for all but the worst scratches.

It is my opinion that 'over-sampling' means exactly that: creating more
samples than were originally read off the disk.  How can they do that, you
ask?  It's quite simple.  They just stuff in 3 zeros for every value read off
the disk in addition to the value from the disk.  Why do they do that, you
ask?  Again, the answer is simple.  Doing this increases the effective sample
rate into the FIR digital filters within the CD player.  They then use a
sin(x)/x correction (the sinc function) to 'smooth' the data at the higher
sample rate.  This effectively increases the sample rate into the DAC and
allows you to push your analog low-pass filtering farther out so it distorts
the music less.  You STILL need to do the final analog lowpass filtering, but
now you don't need to make such a critical filter to get the same
performance.

I have used exactly this technique with ECG waveform restoration, and it
works amazingly well.  You can take very blocky, crummy data that has been
properly sampled (follows the Nyquist criterion) and turn it into a much
smoother, better looking waveform.  This technique makes the steps between
each sample smaller and performs peak restoration of the original signal.
This is needed if the original samples didn't happen to fall on the exact
peak of the waveform, which almost always happens.  A side benefit is that
you get automatic scaling of the data to take full advantage of the range
of your D-to-A converter.  This is probably not a big deal for a CD player,
since the original sample was intended to be played back exactly as it was
recorded, but for my ECG re-construction it works great.  Samples come in to
me as 7-bit digital samples, and with no extra overhead (other than scaling
the FIR filter weights properly when I first design the filter) I get samples
out that take advantage of the full 10-bit range of the playback DAC I have
selected.  The oversampling interpolates between the original samples to make
the full range useful.  The original samples are scaled as well and come out
of the FIR filter properly scaled along with all the original data.  The
'chunky-ness' of the data steps is much reduced, and the whole thing looks
better than it did.

What is the cost of this technique?  This type of over-sampling requires you
to be able to do multiplications and additions at a fairly high rate.  That is
the limiting factor.  With some special selection of FIR tap weights and
creative hardware design, you can turn the multiplications required into
several ROM table lookups that can be implemented quite cheaply.  Adders
are also needed, but these are relatively simple to build (as compared to a
'true' multiplier).  You shift data in at one clock rate, and shift it out at
4 times that rate for a 4x oversampling rate.  The next step is to do the
final D-to-A conversion and analog lowpass filtering with a less complicated
filter.

So what do you think?  Is that how it is done?  Does anybody out there REALLY
know and shed some light on this question?


>- Jim Gaby
>
>  gaby@rtc.reston.unisys.com

   -dave

-- 
Dave McMahan                            mcmahan@netcom.com
					{apple,amdahl,claris}!netcom!mcmahan

todd@appmag.com (Todd Day) (03/08/91)

mcmahan@netcom.COM (Dave Mc Mahan) writes:

%I guess
%you could try to spin the CD twice as fast to read the same track twice in
%during the same amount of time and then do some kind of voting to determine
%which bit is correct, but I doubt this is also the case.

Even this is impossible.  The CD has only one track or "groove",
just like an LP.  It would be difficult for a servo system to track
bits on the hi-density CD like the stepper system on the relatively
low density magnetic platter.

But of course, I'm off the subject.  This has nothing to do with DSP...

-- 
Todd Day  |  todd@appmag.com  |  appmag!todd@hub.ucsb.edu
		  ^^^^^^^^^^ coming soon!

mberg@dk.oracle.com (Martin Berg) (03/08/91)

In article <38839@netnews.upenn.edu> touch@grad1.cis.upenn.edu (Joseph D. Touch) writes:
>WHAT???  I saw the few posts about the time delay alledgedly IMPOSED
>by using a single D/A.  A little thought reveals that using 4 sample
>and holds and 1 D/A removes the time delay completely, and S/H's are
>cheaper than D/A's.  
>
>S/H1 locks the D/A output for the left channel as it comes off the
>CD, S/H2 does the same for the right channel.  S/H's 3 and 4 grab
>the values of S/H 1 and 2 just before the next set of values gets
>locked in, resulting in an output completely in phase.

It looks right, but have you considered that S/H circuits actually are a
specialized kind of analog hardware?  This means that you will get some added
noise and distortion for every S/H you add in series with the signal.
This may not amount to much, but in a time when more and more HiFi companies
exclude unnecessary circuits (e.g. bass/treble controls) and use
more and more sophisticated analog circuits in their CD players, I am not
sure your idea will be usable - anyway, not in 'real' HiFi CD players.

BTW: does anyone know of any CD manufacturer actually using this
solution - maybe to produce cheap CD players?

Martin Berg

Oracle Denmark

touch@grad1.cis.upenn.edu (Joseph D. Touch) (03/09/91)

In article <1284@dkunix9.dk.oracle.com> mberg@dk.oracle.com (Martin Berg) writes:
>In article <38839@netnews.upenn.edu> touch@grad1.cis.upenn.edu (Joseph D. Touch) writes:
>>(solution using 1 DAC and 4 S/H's
>It looks right, but have you considered that S/H-circuits actually are a 
>specialized kind of analog hardware ? This means that you will get some added 
>noise and distortion for every S/H you add in series with the signal. 

Yes - but has anyone considered that DAC's have S/H's inside them, or
that signals are amplified and S/H'd on the way to recording?  There
is no way to remove ALL distortion, of course, but let's not waste
money designing playback equipment that is better than the recorded
signal.

	Joe Touch

duerr@motcid.UUCP (Michael L. Duerr) (03/09/91)

From article <wilf.667759065@rigel.sce.carleton.ca>, by wilf@sce.carleton.ca (Wilf Leblanc):
> When I bought my CD player, it said on the front panel 'Dual D/A
> converters'.  For fun, I asked the salesperson what that meant.
> The reply was rather funny, and of course completely inaccurate.
> 
> What does this really mean ?  (I figured maybe two distinct D/A's rather
> than 1 D/A and two sample and holds ??).

Yes, it means that.  Or, more likely, one dual-channel D/A.

There are a couple of reasons why.  Sample and holds have errors known as
pedestal - a voltage step that occurs when they transition between states -
and droop, where the held signal decays over time.  These would be irrelevant
to sound quality, except that they are nonlinear and thus introduce
distortion.  Yes, D/A's have distortion too, but adding more stages only
degrades things.  

Also, a S/H will have some feedthrough.  Thus, when it is holding and the
D/A is producing a value for the second channel, some of the second channel
will feed through.  While the amount is slight, remember that the output
of the FIR into the DAC may be 22 bits.  That represents 132 dB of dynamic 
range.  At 1 volt levels, -120 dB would be 1 uV.  Depending on the noise
level that may be buried, but it is amazing what the human ear can
integrate up out of white noise.  Thus, channel isolation problems are 
potentially lessened by using dual D/A's.
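The dB arithmetic here is easy to check (a sketch; the 1 V reference and the ideal-converter rule of ~6 dB per bit are the usual assumptions):

```python
import math

def dynamic_range_db(bits):
    # ~6.02 dB per bit for an ideal converter
    return 20 * math.log10(2 ** bits)

def db_to_volts(db, ref_volts=1.0):
    # Convert a level in dB relative to ref_volts back to volts
    return ref_volts * 10 ** (db / 20)

dr = dynamic_range_db(22)          # ~132 dB for a 22-bit FIR output word
leak = db_to_volts(-120)           # -120 dB below 1 V is 1 microvolt
```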

Of course, it's easier to use a dual D/A than a single one plus two more
S/H chips, even if the isolation between D/A sides is not an issue or
the DAC itself has bad isolation.  This is probably the biggest reason
for dual D/A's - fewer chips, less board space, less $.  The sound quality
improvements will be indiscernible to most listeners, who probably buy
more based on (real and perceived) features.

fcr@saturn.wustl.edu (Frank C. Robey X5569) (03/13/91)

In article <4692@apricot30.UUCP> duerr@motcid.UUCP (Michael L. Duerr) writes:
>From article <wilf.667759065@rigel.sce.carleton.ca>, by wilf@sce.carleton.ca (Wilf Leblanc):
>> When I bought my CD player, it said on the front panel 'Dual D/A
>> converters'.  For fun, I asked the salesperson what that meant.
>> The reply was rather funny, and of course completely inaccurate.
>> 
>> What does this really mean ?  (I figured maybe two distinct D/A's rather
>> than 1 D/A and two sample and holds ??).
>
>Yes, it means that.  Or, more likely, one dual-channel D/A.
>
>There are a couple of reasons why.  Sample and holds have errors known as
>pedestal - a voltage step that occurs when they transition between state -
>and droop, where the signal decays as it is held.
.. other reasons for using dual dacs based on current technology deleted.

When CD players were introduced, the D/A's used produced a lot of
"glitch" energy during transitions in level.  This glitch energy
showed up as harmonics and other undesirable spurious components and thus
needed to be removed.  Sample and holds (or, more correctly, track and
holds) were used to remove the glitches.  Even if they had used
dual DAC's, dual track and holds would have been needed.  I seem
to recall that the level of the distortion terms without a track and
hold was about 70 dB below the fundamental.  This level and the
noise floor varied with the fundamental frequency and the sampling
frequency - the higher frequencies had higher distortion levels
and noise.

Current audio-quality DAC's balance delays (the major cause of the
glitches) between bits to minimize the glitch energy.  This was not
true of the DAC's available several years ago.

As for pedestal, it does not need to be a particularly non-linear
effect.  If you look at feedthrough, a linear effect, then this will
only slightly reduce channel separation- to maybe 60 dB or so- not a
particularly drastic problem in my opinion.

I was working in a group at HP Lake Stevens in the early 80's
that was trying to get a dynamic range from a digital source of around
130 dB.  Around this level, many components are no longer linear.  
Depending upon the type, voltage, and impedance levels, many capacitors
and inductors created harmonics at levels far in excess of that.
At that level even some resistors were not usable.  I suspect that
some of the high-end audio equipment is finding this problem now with
the "18" and "20" bit DAC's. 

Frank Robey 
now at: fcr@ll.mit.edu    MIT- Lincoln Laboratory

jefft@phred.UUCP (Jeff Taylor) (03/13/91)

In article <625@ctycal.UUCP> ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>Pursuing the discussion of the Nyquist theorem, I have a question
>about practical sampling applications.  If you have a sine wave at
>frequency f, which you sample at just over 2f samples per second then
>the Nyquist theorem is satisfied.  I know that by performing a Fourier
>transform it is possible to recover all of the signal, i.e. deduce that
>the original wave was at frequency f.
>
>Note that this is different than just playing connect the dots with the
>samples.  Most of the algorithms I've heard of used with CD players
>perform a variety of interpolation, oversampling, etc., but these all
>seem to be elaborate versions of connect the dots.  I'm not aware that
>the digital signal processing customarily done will restore the wave to
>anything resembling its original.
>
>I suspect that there is something I am missing here.  Can anyone clarify
>the situation?


I wrote this about 5 years ago, and have posted it a couple of times in the
past.  Nyquist and oversampling/zero filling seem to be one of those things
that most people know about but don't really understand.  I've gotten enough
mail back saying this clears up some of the same sorts of questions that are
appearing again that I'll post it again. 

jt


---- OVERSAMPLING AND ZERO FILLING ----------------------------------
What follows is a hand-waving (no math) justification of why this is logical
(although it defies common sense).

Back up to the basics about sampling (talk about the signals, and leave
out A/D's for the moment).  Everyone *knows* that sampling must be done
at greater than twice the bandwidth of the signal.  This is because

	1) The fourier transform of a periodic impulse (time domain) is
	a periodic impulse train (freq domain).

	2) The multiplication of two signals in the time domain is
	equivalent to convolution in the frequency domain.

                time                            freq

	|   **         **                 |   *
	|  *   *      *   *               |**
Signal	|  *    *     *    *              |    *
	|-*-------*--*-------*            |     *
	*          *          *           |     *
	|*          *                     |--------------------


Sample  |                                 |
impulse ^   ^   ^   ^   ^   ^             ^               ^             ^
train   |   |   |   |   |   |             |               |             |
        +---------------------            +-------------------------------
        |<T>|                             |<---- 1/T ---->|

If we multiply the two time domain signals together (sample the signal) we
get:

	|                                 |   *       *       *      *
	|   ^           ^                 |**           *****          *****
	|   |           |                 |    *     *         *    *
	|   |   ^       |   ^             |     *   *           *  * 
	+----------------------           |     *   *           *  *
	v           |                     +---------------------------------
	|           v                             ^
                                                  |
						 1/2T

Looking at the freq plot, if we filter out everything above 1/2T,
we get the original signal back.  Therefore this impulse train (time domain)
contains all the information in the original signal.

A couple of important points about this time-domain signal.  1) It is a
different signal than the original 'analog' signal, but contains all
the information that the original signal had.  2) It is a periodic sequence
of impulses, and *zero* everywhere else (the definitive digital signal,
only two values, 0 and infinity :-)).  3) It can be completely described
with a finite number of terms (the areas under the impulses), so it is
well suited to digital systems.

The disadvantage of this signal is that it is hard to realize (infinite
bandwidth, infinite amplitude).  However, it is easy to get the weighting of
(area under) the impulses: the area under each impulse is the value of the
original waveform at the instant it is sampled.  (Sample/hold -> A/D.)

[Key point coming up]

If you think of the 'digital' signal as completely describing the impulse
train signal, instead of as an approximation of the original analog signal,
it is easy to accept that zero filling does not introduce any errors.


	|                                 |   *       *       *      *
	|   ^           ^                 |**           *****          *****
	|   |           |                 |    *     *         *    *
	|   |   ^       |   ^             |     *   *           *  * 
	+-o---o---o---o---o---o           |     *   *           *  *
	v           |                     +---------------------------------
	|           v                                     ^
                                                          |
						         1/2T

By adding redundant information (the "o"s above), impulses with zero
area, we have not changed the spectrum of the signal, or its ability
to represent the original analog signal.  Granted, this signal will not
look much like the original analog signal if plotted.  So what?  [Try
sampling a 99 Hz sine wave (which we know is bandlimited below 100 Hz) at
200 samples/sec.  It won't look like a sine wave either.]  Two other
approaches, linear interpolation and sample duplication, do change the
impulse train, and the spectrum.  [They weight the spectrum by
(sin(f)/f)**2 and sin(f)/f respectively.]
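[That zero filling leaves the spectrum untouched is easy to verify with a
DFT.  A small sketch (mine, not from the post); the length 64 and the
random test signal are arbitrary:]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # any sampled signal

up = np.zeros(2 * len(x))          # zero-area impulses between the samples
up[::2] = x

X = np.fft.fft(x)
UP = np.fft.fft(up)

# The baseband spectrum is unchanged; it is simply repeated (imaged) once
# more in the new, wider band.
assert np.allclose(UP[:len(x)], X)
assert np.allclose(UP[len(x):], X)
```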


[Draw out a couple of cycles of 99 Hz and sample it at 200 S/sec, then
upsample to 400 by 1) zero filling, 2) linear interpolation, 3) sample
duplication.  None of them will be a very accurate representation of the
original signal (if they are, change the phase 90 deg).]

Why bother oversampling?  Twice the sample rate, twice the processing
required (or more (or less)).  In the case of CDs, which have a signal
BW of 20 kHz and a sample rate of ~44 kHz, any signal at
20 kHz gets mirrored at ~24 kHz.  To get rid of it you either need a
*very* sharp analog filter (with phase distortion/ringing), or you lower
the BW of the filter (and lose some of the high frequencies).  If you
oversample by zero filling, it is possible to remove the
mirrored (imaged) signal with a digital filter.
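[The mirroring shows up directly in a DFT.  A sketch (mine, not from the
post): a 20 kHz tone sampled at 44.1 kS/sec appears both at 20 kHz and at
its mirror, 44.1 - 20 = 24.1 kHz:]

```python
import numpy as np

fs, f, N = 44100, 20000, 4410      # 0.1 s of signal, so 20 kHz lands on a bin
x = np.sin(2 * np.pi * f * np.arange(N) / fs)

mags = np.abs(np.fft.fft(x))
peaks = sorted(np.argsort(mags)[-2:] * fs / N)   # two strongest frequencies

assert peaks == [20000.0, 24100.0]  # the tone and its mirror at fs - f
```

[The analog reconstruction filter has to kill that 24.1 kHz image while
passing 20 kHz, which is exactly the "*very* sharp filter" problem above.]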

A digital FIR filter has some good properties for removing the imaged
signal.  It is easy to make multi-stage (90+) filters.  They are
noncausal (for proper reconstruction in the time domain, each of the
'zero' samples should be influenced by the next impulse, which is not
easily achieved in an analog design :-) ).

		IMPULSE RESPONSE FIR INTERPOLATION FILTER

                                 |
                                _-_
                               - | -
                      _-_     _  |  _     _-_
                -*---*---*---*-------*---*---*---*
                   -      -_-         -_-      -


An important thing to notice about this filter: it is zero at every
other sample (the original sample rate), so running the oversampled signal
through this filter does not change any of the original samples (also
hard to do with an analog filter :-) ).  Adding more stages to the
filter moves the added zeros closer to the values of the original
waveform (by removing the imaged frequencies).  If the filter were
perfect, and the analog signal bandlimited, the result would be
identical to what would have been sampled at 88 kHz.
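[The "original samples pass through untouched" property can be
demonstrated with a windowed-sinc half-band filter, which has exactly the
tap structure drawn above.  A sketch (mine, not from the post); the
0.09-cycles/sample tone and 65-tap length are arbitrary:]

```python
import numpy as np

x = np.sin(2 * np.pi * 0.09 * np.arange(128))   # original-rate samples
up = np.zeros(2 * len(x))
up[::2] = x                                     # zero-filled to twice the rate

# Windowed-sinc half-band lowpass: zero at every even tap except the
# center, and unity at the center.
M = 32
k = np.arange(-M, M + 1)
h = np.sinc(k / 2) * np.hamming(len(k))

y = np.convolve(up, h, mode='same')

# The original (even-index) samples come through unchanged; only the
# inserted zeros are moved toward the bandlimited waveform.
assert np.allclose(y[::2], x, atol=1e-10)
```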

The signal, and its spectrum, after running through this filter:


	|                                 |   *                      *
	|   ^           ^                 |**                          *****
	|   | ^         | ^               |    *                    *
	|   | | ^       | | ^             |     *                  * 
	+----------------------           |     *                  *
	v           |                     +---------------------------------
	|           v                                     ^
                                                          |
						         1/2T

This is then fed to a D/A converter (at the 88 kHz rate), and the analog
output filter has a much simpler job: the image of a 20 kHz signal now
appears at 68 kHz.

[Side note on this FIR filter: half of the coefficients are zero, half
of the signal samples are zero, and the coefficients that are left are
duplicated.  But IIR filters have the reputation of being more efficient?
(Then again, I often use IIR filters when I want less ripple distortion,
and the traditional rationale for FIR filters is low distortion due to
linear phase delay.)  Such is DSP; it often doesn't make sense, until you
remember the reason for your prejudice.]

mcphail@dataco.UUCP (Alex McPhail) (03/14/91)

In article <1180@aviary.Stars.Reston.Unisys.COM> gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>I think the oversampling is not time interpolation (which by Nyquist
>does not add any more information to the original signal), but more 
>error correction oversampling.  I.e. the same bit is sampled multiple
>times to determine its value.  I assume that this is done by sampling
>over the duration (space on the CD) of the bit.  Since the same bit
>value is sampled multiple times (eight in the case of 8 times over
>sampling) I assume some voting procedure is used to determine the
>"true" (or best estimate) of the bit value.  I assume this results in
>less tracking and sample errors.  For an ideal system, it also implies
>that if the CD had a higher density (say 8 times) the laser can read 
>it at this resolution (i.e. you could put 8 times the music on one
>CD).

Actually, this is not true.  You can not increase the density of information
on a compact disk without changing the technology.  Right now, each bit
of information occupies an area 1.6 microns square (i.e. adjacent bits must
be separated by at least 1.6 microns).  If you attempt to compress the
data using closer separation, the optical interference patterns will produce
intolerable noise on adjacent bits, even with oversampling.  You must use
a much higher frequency laser (producing a higher energy output, thus
requiring more robust material in the compact disk, thus requiring even
higher energy writing lasers, etc., etc.) to achieve a closer separation
of information in the compact disks.

The bottom line is that the optical disk medium has already reached physical
bandwidth saturation, and will not support an increase in binary density
without degradation to the desired signal.
============================================================================
________
|\     /  Alex McPhail
| \   /    
|  \ /    mail to mcphail@dataco              
|---X     (uunet!mitel!melair!dataco!mcphail)
|  / \     
| /   \   The opinions are mine alone.
|/_____\  The rest is yours.
  Alex     

***--------------------------------------------------------------***
* DISCLAIMER:                                                      *
* ==========:                                                      *
*    The opinions expressed are solely of the author and do not    *
*    necessarily reflect the opinions of Canadian Marconi Company. *
***--------------------------------------------------------------***

stephen@corp.telecom.co.nz (Richard Stephen) (03/14/91)

In article <3354@phred.UUCP> jefft@phred.UUCP (Jeff Taylor) writes in
response to 
>In article <625@ctycal.UUCP> ingoldsb@ctycal.UUCP (Terry Ingoldsby) writes:
>>Pursuing the discussion of the Nyquist theorem, I have a question
>>about practical sampling applications.  If you have a sine wave at
     [...etc...]
>
>I wrote this about 5 years ago, and have posted it a couple of times in the
>past.  Nyquist and oversampling/zero filling seem to be one of those things
>that most people know about, but don't really understand.  I've gotten enough
>mail back saying this clears up some of the same sorts of questions that are
>appearing again - that I'll post it again.

 [...etc...long explanation deleted...]

For those interested, check out the following paper:

MAX W HAUSER: Principles of Oversampling A/D conversion; J. Audio Eng.
Soc., Vol 39, No 1/2 1991 (January/February)
The first sentence of the abstract says:
"Growing practical importance of oversampling analog-to-digital
converters (OSDACs) reflects a synergism between microelectronic
technology trends and signal theory, neither of which alone is sufficient
to explain OSDACs fully......"

Besides being an excellent comprehensive discourse, it has one of the best
collected bibliographies on A/D, sampling, noise, digital filters, dither,
etc., that I have ever seen.

richard
============================ Richard Stephen ===============================
|   Technology Strategy             |      email: stephen@corp.telecom.co.nz
|   Telecom Corporation of NZ Ltd   |      voice: +64-4-823 180
|   P O Box 570, Wellington         |        FAX: +64-4-801 5417
|   New Zealand                     |

gsteckel@vergil.East.Sun.COM (Geoff Steckel - Sun BOS Hardware CONTRACTOR) (03/15/91)

In article <504@dcsun21.dataco.UUCP> mcphail@dcsun18.UUCP (Alex McPhail,DC ) writes:
>
>Actually, this is not true.  You can not increase the density of information
>on a compact disk without changing the technology.  Right now, each bit
>of information occupies an area 1.6 microns square (i.e. adjacent bits must
>be separated by at least 1.6 microns).  If you attempt to compress the
>data using closer separation, the optical interference patterns will produce
>intolerable noise on adjacent bits, even with oversampling.

Oversampling has nothing to do with data recovery off the disk.
Oversampling is a technique to make the engineering job of reconstructing
the output waveform easier or cheaper.  It is done after the data stream
has been recovered from the medium.

>You must use
>a much higher frequency laser (producing a higher energy output, thus 
>requiring more robust material in the compact disk, thus requiring even
>higher energy writing lasers, etc., etc.) to achieve a closer separation
>of information in the compact disks.

A couple of misconceptions here:
1) a higher frequency reading laser does not need any change in the read-only CD
materials.  A read-write CD might require some change in composition.
Just because the photons have higher energy doesn't mean that 1 milliwatt of
green light affects an aluminized reflector any more than 1 milliwatt of infrared.

Currently 5 milliwatt orange-red semiconductor lasers are available on the surplus
market, which are a good deal brighter than you need to read CDs.  A green laser
(semiconductor) has been announced by several companies.  I expect to see them
in products as soon as a standard for denser disks is hashed out.

2) read-only CDs are molded from a master, not written with a laser, and would
require only good quality control to be produced with 50% smaller pits.

	geoff steckel (gwes@wjh12.harvard.EDU)
			(...!husc6!wjh12!omnivore!gws)
Disclaimer: I am not affiliated with Sun Microsystems, despite the From: line.
This posting is entirely the author's responsibility.

whit@milton.u.washington.edu (John Whitmore) (03/15/91)

In article <504@dcsun21.dataco.UUCP> mcphail@dcsun18.UUCP (Alex McPhail,DC ) writes:
>In article <1180@aviary.Stars.Reston.Unisys.COM> gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>... if the CD had a higher density (say 8 times) the laser can read 
>>it at this resolution (i.e. you could put 8 times the music on one
>>CD).
>
>Actually, this is not true.  You can not increase the density of information
>on a compact disk without changing the technology. 
> ...  If you attempt to compress the
>data using closer separation, the optical interference patterns will produce
>intolerable noise

	There are no semiconductor lasers available that can do the
readout task at higher resolution, BUT a frequency-doubling or -tripling
scheme can conceivably be employed.  IBM has shown an 80%-efficient
doubler on a semiconductor laser.  If made commercial in a CD or
similar optical disk, such a frequency-doubling would (in theory)
allow a quadrupling of disk capacity.


I am known for my brilliance,                  John Whitmore
 by those who do not know me well.