RE: and yet more buzz ...

From: Robin Davies (robind@MessageWise.com)
Date: Tue Jun 06 2000 - 14:56:44 EDT


> The major source of artifacts using table lookups gets
> triggered when the denominator is too close to 0 (at
> least using my simplistic table lookups; maybe your
> two-table method is sufficiently accurate so that it's
> not an issue).

I think so, although I'm not sure. Analytically, I'm treating the roundoff
errors in the two-table method as phase distortion rather than as additive
error noise on the results.

e.g. the error introduced by the table lookup is more of the form

        SUM(n=0..N) a^n sin(phi + beta + Ein)              [1]

than of the form

        [ SUM(n=0..N) a^n sin(phi + beta) ] + Eout         [2]

where Ein and Eout are error-noise terms introduced by the calculations. I
think you can make a case that, for a given output error, the Ein in
equation [1] will be greater than or equal to the Eout in equation [2] when
the calculations are otherwise performed in R (exactly, over the reals),
since the derivative of each of the sin terms in the sum is less than 1 in
magnitude... ...well, ok, less than a^N*(dScale), which is roughly one when
a^N is large.
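
For concreteness, here is a rough numerical check of that bound in C++ (the
constants are invented for illustration, and I'm assuming the n-th term's
phase is actually phi + n*beta, which my shorthand above leaves implicit):
it injects a fixed phase error Ein into every term and compares the
resulting deviation against the SUM a^n * Ein bound that the derivative
argument gives.

    #include <cmath>
    #include <cstdio>

    // SUM(n=0..N) a^n sin(phi + n*beta + Ein), summed directly.
    double buzzDirect(double a, double phi, double beta, int N, double Ein)
    {
        double sum = 0.0;
        double an  = 1.0;                       // a^n
        for (int n = 0; n <= N; ++n) {
            sum += an * std::sin(phi + n * beta + Ein);
            an  *= a;
        }
        return sum;
    }

    int main()
    {
        const double a = 0.9, phi = 0.3, beta = 0.11;   // made-up values
        const int    N = 64;
        const double Ein = 1e-4;        // pretend table-lookup phase error

        double exact     = buzzDirect(a, phi, beta, N, 0.0);
        double perturbed = buzzDirect(a, phi, beta, N, Ein);

        // Each term moves by at most a^n * |Ein|, so the whole sum moves
        // by at most SUM a^n * |Ein|.
        double bound = 0.0, an = 1.0;
        for (int n = 0; n <= N; ++n) { bound += an * Ein; an *= a; }

        std::printf("error = %g, bound = %g\n",
                    std::fabs(perturbed - exact), bound);
        return 0;
    }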

<grumble>Numerical analysis was never one of my favorite courses.</grumble>

This is reasonable, I think, given that the table contents are actually of
higher precision than the index inputs. Truncation of the phase angle can be
performed early in the calculation, so the input error is applied in the
correct proportion to each term of the closed-form equation.
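
To make that concrete, here is a minimal sketch of the kind of table I mean
(the table size and names are hypothetical, not my actual code): the phase
is snapped to the table's index grid once, up front, so every term of the
closed-form calculation sees the same already-quantized phase, while the
table entries themselves are stored at full double precision.

    #include <cmath>
    #include <vector>

    // Hypothetical sine table: the index carries only TABLE_BITS of
    // precision, but the stored values are full doubles, so the lookup
    // error shows up as a phase (input) error rather than an output error.
    static const int    TABLE_BITS = 12;
    static const int    TABLE_SIZE = 1 << TABLE_BITS;
    static const double kTwoPi     = 6.283185307179586;
    static std::vector<double> g_sinTable;

    static void initSinTable()
    {
        g_sinTable.resize(TABLE_SIZE);
        for (int i = 0; i < TABLE_SIZE; ++i)
            g_sinTable[i] = std::sin(kTwoPi * i / TABLE_SIZE);
    }

    // Truncate the phase (in cycles, [0,1)) to the table grid *once*, so
    // the same quantized phase feeds every term of the closed form.
    static double quantizePhase(double phase)
    {
        int idx = (int)(phase * TABLE_SIZE) & (TABLE_SIZE - 1);
        return (double)idx / TABLE_SIZE;
    }

    static double tableSin(double phase)        // phase in cycles, [0,1)
    {
        int idx = (int)(phase * TABLE_SIZE) & (TABLE_SIZE - 1);
        return g_sinTable[idx];
    }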

Anyway. I need to rip it apart a little more rigorously. I'll let you know
if that's not the case.

Intermediate results are calculated at double precision, so I have less of a
problem (although admittedly I do have a problem) with precision when abs(a)
and nHarm are large.
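
To tie this back to the denominator problem quoted at the top: assuming the
"buzz" here is the usual discrete summation formula, i.e. the closed form of
SUM(n=0..N) a^n sin(phi + n*beta), the denominator is 1 - 2a*cos(beta) + a^2,
which gets small when a is near 1 and beta is near a multiple of 2*pi. A
rough sketch (the threshold and the fallback are made up, not anyone's
actual code):

    #include <cmath>

    // Closed-form evaluation of  SUM(n=0..N) a^n sin(phi + n*beta),
    // falling back to direct summation when the denominator gets tiny
    // (a near 1, beta near a multiple of 2*pi).  The 1e-6 threshold is
    // purely illustrative, not a tuned value.
    double buzzClosedForm(double a, double phi, double beta, int N)
    {
        double denom = 1.0 - 2.0 * a * std::cos(beta) + a * a;
        if (std::fabs(denom) < 1e-6) {
            double sum = 0.0, an = 1.0;
            for (int n = 0; n <= N; ++n) {
                sum += an * std::sin(phi + n * beta);
                an  *= a;
            }
            return sum;
        }
        double aN1 = std::pow(a, N + 1);        // a^(N+1)
        double num = std::sin(phi)
                   - a       * std::sin(phi - beta)
                   - aN1     * std::sin(phi + (N + 1) * beta)
                   + aN1 * a * std::sin(phi + N * beta);
        return num / denom;
    }

Even with the guard, when abs(a) > 1 and N is large the a^(N+1) terms
dominate the numerator, and its subtractions are the first place precision
goes, which I take to be the same problem as above.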

At some point, also, I think I'm going to need to run some sample outputs
through an FFT to see what they look like spectrally. (Just gotta get
IFT/FFT running first. :-/ ).
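
In the meantime, a brute-force DFT is enough for an eyeball check of the
spectrum. Something like the following (O(N^2), so only for short test
buffers) would do until the real FFT exists; plotting 20*log10 of the
result against bin number makes any spurious components obvious.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Brute-force DFT magnitude spectrum: O(N^2), so only suitable for
    // short test buffers, but it needs no FFT to be written first.
    std::vector<double> magnitudeSpectrum(const std::vector<double>& x)
    {
        const std::size_t N = x.size();
        const double kTwoPi = 6.283185307179586;
        std::vector<double> mag(N / 2 + 1);
        for (std::size_t k = 0; k <= N / 2; ++k) {
            double re = 0.0, im = 0.0;
            for (std::size_t n = 0; n < N; ++n) {
                double w = kTwoPi * (double)k * (double)n / (double)N;
                re += x[n] * std::cos(w);
                im -= x[n] * std::sin(w);
            }
            mag[k] = std::sqrt(re * re + im * im) / (double)N;
        }
        return mag;
    }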


