SAOLC used in streaming environment

From: Marek Claussen (m.claussen@tuhh.de)
Date: Sun Oct 17 1999 - 17:18:40 EDT


Hi!

I was just thinking about the next steps for a streaming implementation
of the (sped-up) SAOLC decoder.
After the implementation of the virtual-machine approach introduced by
Giorgio, which should lead to a "close to real-time" decoder, the
streaming aspects become more and more interesting.
Looking at the SAOLC decoder, I came across a few questions concerning
the input files we got so far and how they are handled inside the
decoding algorithm:

- Using SAOL and SASL streams separately (the "-sco" and "-orc"
options), the files are "sucked in" completely to be used for
initialization and scheduling. Because the SASL commands can appear in
any order (they do not have to be time-sorted), the whole file has to be
scheduled first before decoding can start. But what about the mp4 files
we got so far (decoded e.g. by SFRONT)? Is the SASL part encoded
time-sorted? Will it be enough to suck in only those lines which relate
to a certain timestamp? Can one be sure that all events on the following
lines relate to later times? This would be a must for a streaming
implementation.
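
Just to make the idea concrete, here is a rough sketch of an
incremental SASL event pump, assuming the events really are
time-sorted. All the names (sasl_event, read_next_event, dispatch)
are made up for illustration, they are not SAOLC API:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        double time;      /* score time of the event      */
        char line[256];   /* raw SASL line for the parser */
    } sasl_event;

    /* Read one SASL line; returns 0 at end of stream. A real reader
     * would parse the full SASL grammar; this only grabs the time. */
    static int read_next_event(FILE *in, sasl_event *ev)
    {
        if (!fgets(ev->line, sizeof ev->line, in))
            return 0;
        ev->time = strtod(ev->line, NULL); /* lines start with a time */
        return 1;
    }

    /* Pull in only those events up to the current score time. If the
     * stream is guaranteed time-sorted, the first future event tells
     * us we can stop reading and resume at the next k-cycle. */
    static void pump_events(FILE *in, double now,
                            void (*dispatch)(const sasl_event *))
    {
        static sasl_event pending;
        static int have_pending = 0;

        for (;;) {
            if (!have_pending && !read_next_event(in, &pending))
                return;            /* stream exhausted        */
            have_pending = 1;
            if (pending.time > now)
                return;            /* future event: stop here */
            dispatch(&pending);
            have_pending = 0;
        }
    }

Without the time-sorted guarantee this breaks down, of course: a line
arriving later with an earlier timestamp would already have been
missed.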

- Will it be possible to initialize the SAOL orchestra in a streaming
fashion as well? Or do we first have to read in (and buffer) the whole
part of the mp4 file that describes the instruments? If a streaming
implementation is possible, will there be a difference in the
initialization phase of the new virtual-machine approach?
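
If streaming initialization turns out not to work (instruments may
reference globals or user-defined opcodes declared elsewhere in the
orchestra), the fallback would be to buffer the complete SAOL block
before parsing. A minimal sketch of that fallback, assuming the block
length is known from the surrounding mp4 header (a hypothetical
helper, not existing code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the complete SAOL block into memory so the parser sees the
     * whole orchestra at once; returns NULL on a truncated stream. */
    char *read_saol_block(FILE *mp4, size_t len)
    {
        char *buf = malloc(len + 1);
        if (!buf || fread(buf, 1, len, mp4) != len) {
            free(buf);
            return NULL;   /* cannot initialize yet */
        }
        buf[len] = '\0';
        return buf;        /* hand to the orchestra parser */
    }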

- When using a streamed file, what about changing the set of
instruments in use?
Taking a complete structured-audio mp4 file, we always have the SAOL
block first, followed by the SASL part. This means that we have to
initialize all instruments for the output audio file first. Thinking of
large SA content split into consecutive streams, it could make sense to
reuse instruments from a previous SA stream for the next stream and only
add those instruments which are new. This would reduce the amount of
SAOL data to be transmitted for the second stream and the time required
for initializing the new instruments. How could this be implemented in a
streaming environment?
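
One way to picture the reuse would be a simple instrument cache keyed
by name, so that a second stream only transmits and compiles what is
not cached yet. The instr_def type and the compile hook are made up
here; nothing like this exists in SAOLC today:

    #include <stdlib.h>
    #include <string.h>

    typedef struct instr_def {
        char name[64];         /* instrument name from the SAOL block */
        void *compiled;        /* compiled code / VM representation   */
        struct instr_def *next;
    } instr_def;

    static instr_def *cache = NULL;

    /* Look up an instrument kept over from a previous stream. */
    static instr_def *cache_lookup(const char *name)
    {
        instr_def *p;
        for (p = cache; p; p = p->next)
            if (strcmp(p->name, name) == 0)
                return p;
        return NULL;
    }

    /* On a new stream: only instruments not already cached need to
     * be transmitted, parsed and initialized. */
    instr_def *get_instrument(const char *name,
                              void *(*compile)(const char *))
    {
        instr_def *p = cache_lookup(name);
        if (p)
            return p;          /* reuse: no SAOL data, no init time */

        p = malloc(sizeof *p);
        if (!p)
            return NULL;
        strncpy(p->name, name, sizeof p->name - 1);
        p->name[sizeof p->name - 1] = '\0';
        p->compiled = compile(name);  /* hypothetical compile step */
        p->next = cache;
        cache = p;
        return p;
    }

The open question is how the bitstream would signal "keep the old
orchestra" versus "flush and start over".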

- The mp4 files for the structured-audio part are not yet embedded
inside the whole MPEG-4 layer structure. Is there any information
available about what this will look like? How will the structured-audio
blocks/frames be marked? What about timestamping and the BIFS
embedding?

any suggestions?

thanx

mareK


