> I wouldn't talk of determinism or not, I would talk of quality or not.
Yes. I think one useful approach to measuring this is to compute the
same operation in different ways in a single SAOL file, and then
measure the error signal between the two methods. For example, here's
one of my test programs for filter core opcodes:
instr filt () {
  ksig freq;
  asig sigin, old1, old2, old3, stage1, stage2, stage3;

  sigin = 0.5 * buzz(freq, 1, 0, 0);
  freq = min(freq + 25, 10000);   // k-rate sweep of the test tone

  // three parallel second-order stages via the iir core opcode
  stage1 = iir(sigin, 0.2871, -1.2971, -0.4466, 0.6949, 0, 0, 0);
  stage2 = iir(sigin, -2.1428, -1.0691, 1.1454, 0.3699, 0, 0, 0);
  stage3 = iir(sigin, 1.8558, -0.9972, -0.6304, 0.2570, 0, 0, 0);

  // the same three stages via the biquad core opcode
  old1 = biquad(sigin, 0.2871, -0.4466, 0, -1.2971, 0.6949);
  old2 = biquad(sigin, -2.1428, 1.1454, 0, -1.0691, 0.3699);
  old3 = biquad(sigin, 1.8558, -0.6304, 0, -0.9972, 0.2570);

  // the two paths should cancel, leaving (near) silence
  output(stage1 + stage2 + stage3 - old1 - old2 - old3);
}
In this case I was just using my ears as the measurement device,
but an "effects instrument" that computed a psychoacoustic measure
on the difference signal would be a way to generate an automatic
go/no-go decision.
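Outside of SAOL, the same two-path idea can be sketched in a few lines
of Python. This is only an illustration, not part of the test suite
above: it realizes one second-order section (using the first stage's
coefficients from the SAOL program) in direct form I and direct form
II, runs a noise signal through both, and reports the level of the
difference signal. A plain RMS level stands in for a real
psychoacoustic measure, and the function names are my own.

```python
import math
import random

# Direct form I realization of one second-order section:
# y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
def direct_form_1(b0, b1, b2, a1, a2, xs):
    x1 = x2 = y1 = y2 = 0.0
    ys = []
    for xn in xs:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        ys.append(yn)
    return ys

# Direct form II realization of the same transfer function;
# identical in exact arithmetic, but it rounds differently.
def direct_form_2(b0, b1, b2, a1, a2, xs):
    w1 = w2 = 0.0
    ys = []
    for xn in xs:
        w = xn - a1 * w1 - a2 * w2
        ys.append(b0 * w + b1 * w1 + b2 * w2)
        w2, w1 = w1, w
    return ys

# Coefficients of the first filter stage from the SAOL program above.
b0, b1, b2, a1, a2 = 0.2871, -0.4466, 0.0, -1.2971, 0.6949

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(8000)]  # noise test signal

y_df1 = direct_form_1(b0, b1, b2, a1, a2, xs)
y_df2 = direct_form_2(b0, b1, b2, a1, a2, xs)

rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
err_db = 20.0 * math.log10(rms([p - q for p, q in zip(y_df1, y_df2)])
                           / rms(y_df1))
print("error level: %.1f dB" % err_db)
```

A go/no-go test would just compare err_db against a fixed floor; for
two implementations that agree, the difference sits far below any
audible level.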
I could imagine a set of 50-100 SAOL programs of this nature that
cover the core opcodes and basic language functionality, yielding
a "quality rating" for a decoder that wouldn't require comparison
to "reference" audio output.
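Folding such a suite into one number could be as simple as counting
how many test programs keep their difference signal below a fixed
floor. A minimal sketch; the function name and the -60 dB default are
my own illustration, not anything from the MPEG-4 spec:

```python
def quality_rating(error_levels_db, floor_db=-60.0):
    """Percentage of test programs whose difference-signal level
    stays at or below the go/no-go floor."""
    passed = sum(1 for e in error_levels_db if e <= floor_db)
    return 100.0 * passed / len(error_levels_db)

# e.g. three tests: two well below the floor, one failing
print(quality_rating([-120.0, -85.0, -40.0]))
```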
--jl
This archive was generated by hypermail 2b29 : Wed May 10 2000 - 12:15:29 EDT