Hi Richard,
Glad to see you've joined us!
I'm pleased to hear that CDP folks are at least thinking
about synergies between MPEG-4 tools and their longstanding
interests in accessible, useful musicians' interfaces.
I have some comments...
>Given that, unlike Csound, there are (so far as I can see)
>no opcodes specifically for displaying data, graphic code will have to
>be linked to saolc at a fairly deep level.
There's always a question of whether 'saolc' is the proper implementation
of SAOL for real-time experiments in the long run, or whether we
would be better served to steal the parser and other front-end
parts and start again. I think the latter is the approach of
EPFL -- can someone from there comment on how much of 'saolc'
you're using in your work? Are you trying to "improve" saolc
or only use the best bits to build a better SAOL implementation?
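Just to make "steal the front end" concrete, the sort of boundary I
have in mind is something like the following. All of the names are
invented -- saolc has no such interface -- it's only a sketch of where
the split could go: the parser hands back a checked orchestra, and the
real-time engine is free to do whatever it likes behind that.

    /* Hypothetical split between a reused SAOL front end and a new
     * real-time back end.  Invented names, just to show the boundary. */

    typedef struct saol_orchestra saol_orchestra;  /* parsed, type-checked orc */
    typedef struct saol_engine    saol_engine;     /* real-time scheduler/runner */

    /* Front end, lifted from an existing parser: text in, checked orc out. */
    saol_orchestra *saol_parse(const char *orc_source, char *err, int errlen);

    /* Back end, written fresh for real-time use. */
    saol_engine *engine_create(const saol_orchestra *orc, double srate, double krate);
    int          engine_run_block(saol_engine *e, float *out, int nframes);
    void         engine_destroy(saol_engine *e);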
I'm also interested in discussing the philosophy of local
extensions to SAOL. There are a number of things that are
outside the standard, such as graphics (which Richard highlights),
real-time control (faders and things), and MIDI output; live
MIDI and microphone input are covered in an informative note to the
standard, but are not a required part of a SAOL implementation.
Is it better if the community agrees on some principles for how
to do these things, so that different implementations have a
similar philosophy? Or are the goals of different implementations
going to be so different that there's no point in this?
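To make "agreeing on principles" concrete: one such principle might be
that host controls reach the orchestra the same way in every
implementation -- say, as controller changes that land in the MIDIctrl
standard names, or in an agreed global ksig -- so that instruments stay
portable even though the fader code itself is native. A rough C-side
sketch, with entirely invented engine names (no implementation works
this way today):

    #include <stdio.h>

    typedef struct saol_engine saol_engine;   /* opaque engine handle */

    /* Imaginary entry point: deliver a controller value at k-rate.
     * A real engine would route this into MIDIctrl[] or an agreed global. */
    static void engine_control(saol_engine *e, int chan, int ctrl, float value)
    {
        (void)e;
        printf("chan %d  ctrl %d  value %.1f\n", chan, ctrl, value);
    }

    /* GUI callback: map a 0..1 fader onto controller 7 on channel 1. */
    static void on_fader_moved(saol_engine *e, double fader01)
    {
        engine_control(e, 1, 7, (float)(fader01 * 127.0));
    }

    int main(void)
    {
        on_fader_moved(NULL, 0.5);   /* stand-in for a real GUI event */
        return 0;
    }

The code that draws the fader stays native and implementation-specific;
what we'd be agreeing on is only how its value shows up in the orchestra.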
To take one specific example, real-time graphic output is
an interesting case. In MPEG-4, there are other tools that
do this and can be synchronized with SAOL (for example, VRML
is integrated as part of MPEG-4, so SAOL and the VRML part can
communicate with each other). So there was no need within
the standard to have direct graphics commands in SAOL.
In a standalone SAOL system that's not really an MPEG-4
decoder, the VRML part might not be there. Is it better if
we come up with a set of "recommended core opcodes" for
controlling graphics and UIs? (We should look at
SuperCollider if this is the case). Or is each implementation
going to want to do it in native code at the C/C++ level?
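To sketch the "native code" end of that tradeoff: even with no graphics
opcodes in the language, an implementation can expose one small C hook
that the UI layer registers, and route any signal the orchestra marks
for display through it. Everything below is invented -- nothing like
this is in the standard or in saolc -- it only shows how thin the hook
can be:

    #include <stdio.h>

    /* The host registers one of these; the engine would call it once per
     * k-cycle for each signal the orchestra marks for display. */
    typedef void (*display_hook)(const char *label, float value, void *user);

    static display_hook g_hook;
    static void *g_user;

    void engine_set_display_hook(display_hook h, void *user)   /* imaginary API */
    {
        g_hook = h;
        g_user = user;
    }

    void engine_emit_display(const char *label, float value)   /* called at k-rate */
    {
        if (g_hook)
            g_hook(label, value, g_user);
    }

    /* Host side: print instead of drawing, just to show the shape. */
    static void print_hook(const char *label, float value, void *user)
    {
        (void)user;
        printf("%s = %f\n", label, value);
    }

    int main(void)
    {
        engine_set_display_hook(print_hook, NULL);
        engine_emit_display("rmsenv", 0.25f);   /* made-up signal name */
        return 0;
    }

If several implementations agreed on even that much, a "recommended
core opcode" for display could later be little more than syntax on top
of it.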
Best,
-- Eric
+-----------------+
| Eric Scheirer |A-7b5 D7b9|G-7 C7|Cb C-7b5 F7#9|Bb |B-7 E7|
|eds@media.mit.edu| < http://sound.media.mit.edu/~eds >
| 617 253 0112 |A A/G# F#-7 F#-/E|Eb-7b5 D7b5|Db|C7b5 B7b5|Bb|
+-----------------+