Currently, the sfront implementation contains opcodes as a library of source
code, which is then drawn upon to generate an application.
I think if it were possible to change the sfront implementation so that each
opcode was a function and the functions were precompiled into a DLL, that
would go a long way towards satisfying Joe's needs, without affecting the
functionality of sfront or MP4 at all.
Any ideas? Is there some reason I'm not aware of why the sfront opcodes are
generated from source code instead of being linked in from a library? I can
think of two reasons:
o The finished applications contain ONLY the code required, and are smaller.
o The sfront compiler treats EVERYTHING the same way:
source code generated into sa.c. Maybe this makes the compiler simpler?
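To make the DLL idea concrete, here is a rough sketch of what a precompiled
opcode interface might look like. Everything below is invented for
illustration -- the names, signatures, and state structure are NOT what
sfront actually uses or generates today:

  /* opcodes.h -- hypothetical interface for a precompiled opcode DLL */

  typedef struct saol_state {
      float srate;               /* audio sampling rate                */
      float krate;               /* control rate                       */
      /* ... per-instrument state would live here ...                  */
  } saol_state;

  /* Each core opcode becomes an exported function instead of source
     text pasted into sa.c.  Signatures here are only guesses:         */
  __declspec(dllexport) float saol_oscil(saol_state *s,
                                         const float *table, int len,
                                         float freq);
  __declspec(dllexport) float saol_aline(saol_state *s,
                                         float x1, float dur1, float x2);

The generated sa.c would then shrink to calls like
saol_oscil(&st, wavetable, 1024, cps), and the application would link
against opcodes.dll instead of carrying every opcode body as generated
source.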
-----Original Message-----
From: owner-music-dsp@shoko.calarts.edu
[mailto:owner-music-dsp@shoko.calarts.edu] On Behalf Of Joe Wright
Sent: Friday, December 10, 1999 8:39 AM
To: music-dsp@shoko.calarts.edu
Subject: Re: Common standard for DSP acceleration
I think the scheme you outline below is good for games and multimedia titles
but not for audio apps. I don't want an event based architecture. However,
there is still a great overlap with what I want and what you have been
describing. For example, SAOL is fine as the language definition.
I would like to see an API which exposes the opcodes as routines so that
they can be used at this low level for pro audio, and built upon for MPEG4.
Joe
> Ah, this is a good point. We should distinguish
> strictly by-the-book implementations of the standard,
> which are targeted very narrowly, from the kinds of
> functions that a platform implementing part of the
> standard could actually perform.
>
> I don't mean that the whole MPEG-4 standard is directly
> applicable to your problem; rather, I suggest that a
> large part of what you need is a robust audio DSP
> language that runs on a hardware accelerator, and a
> way to control audio DSP algorithms in real-time. This
> is a major overlap with the things provided by
> the SA standard.
>
> What I suggest is the following: a useful platform that does
> meet your goals is a sound card that could interpret
> SAOL code and then accept interactive control from
> host-side (or local, of course) programs. Seen in
> this way, the "server" is the host and the "client"
> is the sound card. The host sends SAOL algorithms
> to the card, and then later sends interactive controls
> in SASL or MIDI to the running DSP engine that was
> earlier described in SAOL.
>
> This kind of application is, as you point out,
> strictly outside the scope of the standard proper.
> However, it is the application that many, perhaps
> most, of the people currently working on MPEG-4
> Structured Audio are most interested in. I think
> this is what you have in mind.
>
> Thus, there is some architecture work that yet needs
> to be done, because for example there is no standard
> for the programmer's API itself. But I think if
> the API is mostly about sending SASL/MIDI commands
> to the running DSP engine, it is pretty clear how
> it should work. I've already had some informal
> discussions with the DirectSound team about this.
>
> To make it more concrete, the steps I envision
> in a host program that uses a SAOL accelerator, for
> example for a video game, are the following:
>
> 1. At the beginning of a "level" or "stage" in the
> game, the host program sends all of the DSP algorithms
> that might be used in that stage to the sound card.
> This includes SFX, music synthesizers, effects
> algorithms, EQs, and so forth. Also included is
> the routing structure, that defines the audio-signal
> patching between sounds, effects, sends and returns.
> Also, some kinds of preassembled program material,
> like samples and sequences, are sent and stored in
> memory.
>
> In response to this, the sound card compiles the SAOL
> code, or otherwise gets itself ready to do the
> necessary real-time synthesis.
>
> 2. During the run-time interaction, the host program
> sends off very small parametric commands, triggers,
> and cues. Each of the cues has a certain meaning in
> the DSP for the current level as defined by the
> SAOL algorithms that were sent. Some cues might trigger
> SFX or sequences, while some change parametric
> EQ or reverb sweetness. The particular granularity
> of cues, their meanings, and how often they need to
> be sent, depends on the particular design of the
> audio content. But fundamentally these are "event"
> based cues that happen on the order of no more than
> dozens per second.
>
> Note that most of the time, the host is *not* sending
> new SAOL algorithms to the sound card during the
> run-time interaction. There is a certain "session-level"
> granularity, within which the ensemble of algorithms
> that might be used should all be resident on the card
> the whole time.
>
> Redbook audio could also be streaming into the card
> at the same time; there are instructions for how that
> works in the standard.
>
> 3. In response to the commands and cues, the sound card
> does the synthesis and effects-processing described
> in the SAOL program, and either spits the audio out
> the DAC or returns it to the host.
>
> 4. At the end of the "level" or "stage" or "session",
> the SAOL algorithms are cleared from the card, in
> preparation for a return to step 1.
>
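> To sketch the call sequence in code (purely for illustration --
> as noted above there is no standard host API yet, and every name
> below is made up):
>
>   /* Hypothetical host-side calls; nothing here is standardized. */
>   card = sa_open_device(0);                   /* attach accelerator    */
>
>   /* Step 1: at level load, ship the orchestra and stored material. */
>   sa_send_saol(card, level_orchestra, olen);  /* SAOL source or tokens */
>   sa_send_samples(card, sfx_bank, slen);      /* preassembled material */
>   sa_compile(card);                           /* card readies itself   */
>
>   /* Step 2: during play, only small event-level cues cross the bus. */
>   sa_send_midi(card, NOTE_ON, chan, pitch, vel);  /* trigger a sound   */
>   sa_send_sasl_control(card, "rev_wet", 0.3f);    /* tweak a parameter */
>
>   /* Step 3 happens on the card: it runs the SAOL program and sends
>      the audio to the DAC, or returns it to the host. */
>
>   /* Step 4: at level end, clear everything and return to step 1. */
>   sa_reset(card);
>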
> To me, this scenario is not a by-the-book MPEG-4
> implementation, but is a system that is easily realizable
> around MPEG-4 SA tools. In fact, much of the time
> this is the model that we had in mind while creating
> the standard.
>
> If you have continuing questions on how this structure
> maps onto what you want to do, I'm happy to try to answer
> them!
>
> Best,
>
> -- Eric
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/