kallaus@leadsv.UUCP (Jerry Kallaus) (10/04/89)
THIS IS A REQUEST FOR INFORMATION
Posting this for a co-worker, as well as myself.
In working with sonic data, we are trying to create an algorithm for
aural enhancement that has the effect of stretching or increasing the
bandwidth of a signal without reducing its time base.  (This
undesirable side effect would occur, for example, if the data were
simply played back at a higher sample rate.)
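For concreteness, here is a toy sketch (our own, in Python/numpy
notation; the numbers are arbitrary) of why a simple rate change won't
do: a factor-r speedup raises every frequency by r but shortens the
signal by the same factor.

import numpy as np

fs = 8000                        # original sample rate (arbitrary here)
t = np.arange(0, 1.0, 1.0/fs)    # 1 second of data
x = np.sin(2*np.pi*440*t)        # a 440 Hz tone

r = 1.5                          # "play it back faster" factor
print("pitch heard   :", 440*r, "Hz")         # 660 Hz (what we want)
print("duration heard:", len(x)/(r*fs), "s")  # about 0.67 s, not 1 s (what we don't)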
Alternatively, the desired algorithm can be viewed as a stretch of the
time base without a corresponding decrease in signal bandwidth.
Thus, for example, a tone and its harmonics could effectively be raised
in frequency by the same relative (fractional) amount, so that the
result sounds like a similar tone at a higher pitch while preserving
the original duration.
Does anyone have an idea how to accomplish this, perhaps using Fourier
techniques (or any other way)?  If necessary, it would be okay to
consider the signal as consisting basically of tones, but with slowly
changing frequencies and/or amplitudes.
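To show the kind of Fourier approach we have been fumbling with, here
is a rough, untested sketch (the function and all the names are ours,
not from any reference): stretch the time base with a short-time
Fourier transform by writing the frames out at a larger hop, with a
per-bin phase correction, then resample the result back to the
original length.  The net effect should be every component multiplied
by the stretch factor with the duration unchanged.

import numpy as np

def stft_pitch_shift(x, shift=1.5, n_fft=1024, hop=256):
    # Analyze at hop `hop`, synthesize at `syn_hop` = shift*hop: the
    # intermediate output is `shift` times longer; resampling it back
    # to len(x) then raises every frequency by `shift` and restores
    # the original duration.
    win = np.hanning(n_fft)
    syn_hop = int(round(hop * shift))

    n_frames = 1 + (len(x) - n_fft) // hop          # assumes len(x) >= n_fft
    frames = np.array([np.fft.rfft(win * x[m*hop : m*hop + n_fft])
                       for m in range(n_frames)])

    # Phase propagation: advance each bin's phase by its estimated
    # true frequency times the (larger) synthesis hop.
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    phase = np.angle(frames[0])
    out = [np.abs(frames[0]) * np.exp(1j * phase)]
    for m in range(1, n_frames):
        dphi = np.angle(frames[m]) - np.angle(frames[m-1]) - omega
        dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
        true_freq = (omega + dphi) / hop              # rad/sample, per bin
        phase = phase + syn_hop * true_freq
        out.append(np.abs(frames[m]) * np.exp(1j * phase))

    # Overlap-add the time-stretched signal (gain normalization omitted).
    y = np.zeros(syn_hop * (n_frames - 1) + n_fft)
    for m, F in enumerate(out):
        y[m*syn_hop : m*syn_hop + n_fft] += win * np.fft.irfft(F, n_fft)

    # Crude linear-interpolation resample back to the original length.
    return np.interp(np.linspace(0, len(y) - 1, len(x)),
                     np.arange(len(y)), y)

Is this (or something like it) the right general idea?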
Perhaps a more general case (e.g., music or voice) could also be
handled by techniques that come under the heading of 'maximum
entropy', etc.?
There seems to be some basic philosophical problem with the idea of
increasing the time-bandwidth product without adding extraneous
information.
For the slowly varying tonal case, as mentioned above, we have tried
processing by segments or blocks of time data, but have had trouble with
clicks or bumps sounding at the block boundaries.
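In case it helps to see it, the variant we are now considering (an
untested sketch in the same Python/numpy notation; process_block
stands for whatever per-block frequency scaling is applied) tapers
each block with a window and overlap-adds, so the block seams
cross-fade instead of jumping:

import numpy as np

def block_process(x, process_block, n_block=2048):
    # Root-Hann taper on the way in and again on the way out; with 50%
    # overlap the combined (Hann) weights add up to a near-constant
    # gain, so adjacent blocks cross-fade at the seams.
    hop = n_block // 2
    win = np.sqrt(np.hanning(n_block))
    y = np.zeros(len(x))
    for start in range(0, len(x) - n_block + 1, hop):
        blk = win * x[start:start + n_block]     # taper before processing
        y[start:start + n_block] += win * process_block(blk)
    return y

With process_block as the identity this reconstructs the input to
within that near-constant gain; would the same trick be expected to
kill the clicks once the per-block processing changes the frequencies?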
Thanks!
--
Jerry Kallaus {pyramid.arpa,ucbvax!sun!suncal}leadsv!kallaus
"Funny, how just when you think life can't possibly get
any worse, it suddenly does." - Marvin