> If you want to know about the MP4 file format before the standard is
> available you should look at the QuickTime format specification on
> Apple's web site. There is not much difference between QuickTime
> files and MP4, and I would be happy to describe the differences to
> people who want to know.
Thanks, I appreciate it!
From Mike's posting, and from Eric's earlier posting, there seem to
be two ways for sfront (and other implementations not specifically
being done as part of a full MPEG system, but from a bottom-up
"Structured Audio is cool" perspective) to go:
[1] Implement MPEG 4 Systems in a compliant way.
[2] The approach which Eric seemed to suggest in the earlier snippet:
> Put another way, you don't need to use the official
> MP4 format in a streaming MPEG-4 application, since
> you're just sending and receiving chunks of data and don't
> need 'files'.
Basically, I read this as "an implementer has my blessing
to take StructuredAudioSpecificConfig and SA_access_unit
chunks and incorporate them into other file formats
besides MPEG 4 -- don't feel like you need to use MPEG
4 Systems if you're (for example) writing a Structured
Audio decoder for an open-source streaming system." Use
some other existing streamer if you wish, or, in the worst
case, just concatenate StructuredAudioSpecificConfig and
SA_access_unit chunks.
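For illustration, here is a sketch of that "worst case" concatenation,
using a 4-byte length prefix to delimit chunks. The length prefix is a
made-up framing convention for this example only -- the real
StructuredAudioSpecificConfig and SA_access_unit bitstream classes are
self-delimiting, so an actual implementation would parse them directly:

```python
import struct

def write_chunks(chunks):
    """Concatenate chunks, each prefixed with a 4-byte big-endian length.

    The length prefix is a hypothetical framing convention for this
    sketch; real SA bitstream classes carry their own length info.
    """
    out = bytearray()
    for payload in chunks:
        out += struct.pack(">I", len(payload)) + payload
    return bytes(out)

def read_chunks(data):
    """Inverse of write_chunks: yield each payload in order."""
    offset = 0
    while offset < len(data):
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        yield data[offset:offset + length]
        offset += length

# The first chunk plays the role of StructuredAudioSpecificConfig,
# the rest stand in for SA_access_unit payloads.
stream = write_chunks([b"config", b"au-0", b"au-1"])
assert list(read_chunks(stream)) == [b"config", b"au-0", b"au-1"]
```

A streaming receiver would hand the first chunk to decoder setup and
feed each later chunk to the decoder as an access unit.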
The only technical issue I can see in the "worst case"
of concatenating StructuredAudioSpecificConfig and
SA_access_unit chunks is the midi_event issue we hashed
out earlier -- every midi_event chunk has an implicit
has_time bit set, and the time needs to come from somewhere.
The simplest convention I can imagine is sending
dummy score_line chunks at regular intervals, each with its
has_time bit set and an explicit timestamp, along with
a convention that a midi_event uses the timestamp of the
last score_line sent. This has the advantage of being
bit-level compatible, whereas defining a non-standard
event_type for SA_access_unit would not be.
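The convention above could be sketched like this (the tuple event
encoding is a stand-in of my own invention -- real SA_access_unit
contents are bit-packed, not Python tuples):

```python
def assign_times(events):
    """Apply the proposed convention: score_line events carry an
    explicit timestamp (has_time set), and each midi_event inherits
    the timestamp of the most recent score_line seen.

    Each event is a hypothetical (kind, payload, timestamp) tuple;
    midi_event arrives with timestamp None, since the bitstream
    gives it no explicit time.
    """
    last_time = 0.0
    timed = []
    for kind, payload, timestamp in events:
        if kind == "score_line":
            last_time = timestamp      # remember the explicit time
            timed.append((kind, payload, timestamp))
        else:                          # midi_event: inherit last time
            timed.append((kind, payload, last_time))
    return timed

events = [
    ("score_line", "dummy", 1.0),  # periodic dummy line, has_time set
    ("midi_event", 0x90, None),    # should inherit t = 1.0
    ("score_line", "dummy", 2.0),
    ("midi_event", 0x80, None),    # should inherit t = 2.0
]
timed = assign_times(events)
assert timed[1][2] == 1.0 and timed[3][2] == 2.0
```

The dummy score_line interval bounds how stale an inherited midi_event
timestamp can be, so a sender would pick it based on the timing
accuracy the content needs.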
I guess the message from my end is that binary formats aren't the
biggest problem at the moment for Structured Audio -- the problem is
that a culture of SAOL programming needs to flower first, so that
content worth streaming (be it SAOL-only algorithmic content or
SAOL + (MIDI || SASL) score-based content) can be created. In that
sense, sfront's focus right now is to improve code optimization, to
inspire people to write SAOL programs -- once the SAOL programming
culture takes off, hopefully it will be obvious whether [1], [2], or
some other option is the right one for sfront to support. In the
short term, I hope to get the "worst case" version of [2], or some
variant of it as people suggest, up and running in sfront, so that
people interested in streaming can use sfront to do experiments with.
--jl
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu
www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
This archive was generated by hypermail 2b29 : Wed May 10 2000 - 12:15:48 EDT