Re: Steven Curtin: Waveshaping?

From: Eric Scheirer (eds@media.mit.edu)
Date: Wed Jan 07 1998 - 13:42:24 EST


Steven Curtin wrote:

> >(Right now, all interpolation in 'saolc' is linear, so it sounds
> >pretty bad for non-oversampled wavetables. The spec on this
> >hasn't been written yet, but I'm hoping to get to it before
> >the next MPEG meeting and then bring the code up to spec.)
>
> Maybe the user should specify the interpolation method globally like they
> do sample rate. Linear is fine for most close-sampling synths on the
> market and probably for a quick audition; you could then switch in the
> more CPU-intensive and better-sounding interpolation for the final
> mixdown to a file or with fewer instruments.

The idea right now is that you could just write your own
interpolator (using the raw 'tableread' with integer
indices) if you want something special. Interpolation is
a big issue throughout the whole audio standard, and it's
going to be most elegant if there's one solution
throughout. For example, it comes in at the mixing level
in the audio scene description: if you want to mix the
output of the CELP speech decoder sampled at 8 kHz with
the output of the SA decoder sampled at 32 kHz, you
probably want to interpolate the speech rather than
decimate the music. The method here will be the same as
the method in SAOL itself, to simplify construction of
hardware solutions.
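
To make the tradeoff concrete, here is a rough C sketch (not taken
from saolc; the table, length, and fractional phase are placeholders)
of linear versus 4-point cubic Lagrange table lookup. The cubic
version is the kind of more CPU-intensive, better-sounding option
Steven mentions:

/* Rough sketch (not from saolc): linear vs. 4-point cubic Lagrange
   lookup into a periodic wavetable.  'tab', 'len', and 'phase' (a
   fractional index in [0, len)) are placeholders. */

#include <math.h>

double table_linear(const double *tab, int len, double phase)
{
    int    i0 = (int)floor(phase);
    int    i1 = (i0 + 1) % len;          /* wrap: table is periodic */
    double f  = phase - i0;
    return tab[i0] + f * (tab[i1] - tab[i0]);
}

double table_cubic(const double *tab, int len, double phase)
{
    int    i1  = (int)floor(phase);
    double f   = phase - i1;
    int    i0  = (i1 - 1 + len) % len;
    int    i2  = (i1 + 1) % len;
    int    i3  = (i1 + 2) % len;
    double ym1 = tab[i0], y0 = tab[i1], y1 = tab[i2], y2 = tab[i3];

    /* third-order Lagrange polynomial through the four neighbors */
    double c0 = y0;
    double c1 = y1 - ym1 / 3.0 - y0 / 2.0 - y2 / 6.0;
    double c2 = (ym1 + y1) / 2.0 - y0;
    double c3 = (y2 - ym1) / 6.0 + (y0 - y1) / 2.0;
    return ((c3 * f + c2) * f + c1) * f + c0;
}

The same kernel would also do for the 8 kHz to 32 kHz case above:
upsampling by 4 just means evaluating it at fractional positions k/4.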

> Yes, I would vote to make an exception for cheby since it's so often used.
> For now you could just generate the transfer function in Matlab or C. (If
> anybody out there has Matlab code to generate the cheby, I'd like to see it.)

Okay, maybe I'll see about putting it back in.
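
In the meantime, here is a rough C sketch of how a cheby-style
transfer function can be built by hand: each table point x in [-1,1]
gets the weighted sum over k of w[k]*T_k(x), where T_k is the k-th
Chebyshev polynomial, built with the recurrence
T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x). The weight array and its
indexing (w[0] as a DC term) are placeholders, not the actual
generator's interface:

/* Sketch of a 'cheby'-style waveshaping table: each point x in
   [-1,1] gets sum_k w[k] * T_k(x), T_k being the k-th Chebyshev
   polynomial of the first kind.  'w', its indexing (w[0] as DC),
   and 'size' are placeholders. */

void make_cheby_table(double *tab, int size, const double *w, int nharm)
{
    int i, k;
    double x, tkm1, tk, tkp1, sum;

    for (i = 0; i < size; i++) {
        x    = -1.0 + 2.0 * i / (size - 1);   /* map index to [-1,1] */
        tkm1 = 1.0;                           /* T_0(x) */
        tk   = x;                             /* T_1(x) */
        sum  = w[0] * tkm1;

        for (k = 1; k < nharm; k++) {
            sum += w[k] * tk;
            /* recurrence: T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x) */
            tkp1 = 2.0 * x * tk - tkm1;
            tkm1 = tk;
            tk   = tkp1;
        }
        tab[i] = sum;
    }
}

Driving a waveshaper that uses this table with a full-scale sinusoid
then produces harmonic k at weight w[k], since T_k(cos t) = cos(kt).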
 
> >Eh? No, that's not right; can you give me a reference so I can
> >fix it? All calculation is done internally in double-float,
> >then converted to 16-bit integer for output.
>
> Oh good, I'm glad to hear that. The sample bank reference keeps referring
> to 16-bit samples and that was where the confusion came from. I suppose
> the internal data representation might also be implementation-independent.

Yes, that's right. The internal representation is standardized
and implementation-independent. But how you map those internal
samples to audio is implementation-dependent and nonstandard.
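
So an implementation's output stage might look something like the
following; the +/-1.0 full-scale convention and the rounding are just
one possible choice, not something the standard fixes:

/* One possible (implementation-dependent) mapping of internal
   double samples to 16-bit output: treat +/-1.0 as full scale,
   clamp, and round to nearest.  The standard does not fix this. */

#include <math.h>

short sample_to_int16(double s)
{
    double scaled = s * 32767.0;
    if (scaled >  32767.0) scaled =  32767.0;
    if (scaled < -32768.0) scaled = -32768.0;
    return (short)floor(scaled + 0.5);
}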

The SASBF stuff is indeed 16-bit, because its major use is in
lower-complexity terminals.

Best,

 -- Eric

-- 
+-----------------+
|  Eric Scheirer  |A-7b5 D7b9|G-7 C7|Cb   C-7b5 F7#9|Bb  |B-7 E7|
|eds@media.mit.edu|      < http://sound.media.mit.edu/~eds >
|  617 253 0112   |A A/G# F#-7 F#-/E|Eb-7b5 D7b5|Db|C7b5 B7b5|Bb|
+-----------------+


