Davide Rocchesso wrote:
> I think that the problem might be restated as finding a topological sort
> of a graph which becomes directed and acyclic as soon as the unit delays
> implicit in sample-carried variable dependencies have been cut.
> In x=sum(x,y); x=sum(x,z); there is one such delay between the second
> left-side x and the first right-side x.
Yes, I agree with this analysis.
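For concreteness, here is a rough C sketch of how I imagine the check
working (the matrix representation and all names are mine, not anything
from an actual saol implementation): cut every dependency edge that
crosses a unit delay, then topologically sort what remains; a leftover
cycle means the block is not computable as specified.

#define NV 4  /* number of statements; illustrative only */

/* dep[i][j] != 0: statement j reads a value that statement i writes.
   delay[i][j] != 0: that dependency crosses an implicit unit delay,
   so we cut it before sorting. */
int dep[NV][NV], delay[NV][NV];

/* Kahn's algorithm on the cut graph: returns 1 and fills order[]
   with a valid execution order, or 0 if a delay-free cycle remains. */
int schedule(int order[NV])
{
    int indeg[NV] = {0};
    int head = 0, tail = 0, i, j;

    for (i = 0; i < NV; i++)
        for (j = 0; j < NV; j++)
            if (dep[i][j] && !delay[i][j])
                indeg[j]++;
    for (i = 0; i < NV; i++)
        if (indeg[i] == 0)
            order[tail++] = i;
    while (head < tail) {
        i = order[head++];
        for (j = 0; j < NV; j++)
            if (dep[i][j] && !delay[i][j] && --indeg[j] == 0)
                order[tail++] = j;
    }
    return tail == NV;  /* everything scheduled => acyclic */
}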
> Once a thread of execution has been found by topological sorting, it
> can be enclosed within a for loop such as the one indicated above. This
> loop might be unrolled, but I am not quite sure it can be parallelized
> because of loop-carried dependencies.
Yes, I think this is also true. It cannot be fully parallelized if
there is any feedback (implicit delay dependency). It seems to me
that a basic block is parallelizable exactly when all variables
are set before they are used and the above topological sort criterion
is met. Such a basic block may be parallelized in the obvious way:
for (i = 0; i < N; i++) {
    block[i];
}
where block[i] denotes the block body with each asig x replaced
by x[i].
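For instance, a hypothetical delay-free block (a toy example of my own)

    a = x + y;
    b = a * g;

becomes

for (i = 0; i < N; i++) {
    a[i] = x[i] + y[i];
    b[i] = a[i] * g;
}

where every iteration is independent, so the loop can be vectorized or
split across processors; by contrast, something like a = a + x carries
a value between passes and forces sequential execution.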
It is pleasing to note that this problem is similar to the detection
of parallelizability in vector supercomputing, on which there is a
large literature. Such optimizations can easily become very tricky
to implement, I think!
> Another, even harder, issue is the possibility of solving loops which
> are non-computable as specified. They occur frequently in physical
> modeling, where subsystems are discretized independently and then
> connected. Up to now, the most common approach is to insert a fictitious
> unit delay (but this introduces inaccuracy). However, there are methods
> for solving these loops exactly by changing the underlying graph, which
> one might consider automating.
Is it true that all such physical systems can be so converted? Aren't
there some signal-flow graphs which are fundamentally unrealizable?
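(At least in the linear case I can see how the exact solution works;
a toy example of my own:

/* delay-free loop, not computable as written:
       x = u + a * y;
       y = b * x;
   substituting the second statement into the first and solving
   for x removes the loop exactly, valid whenever a*b != 1: */
x = u / (1.0 - a * b);
y = b * x;

But I do not see how to do this in general for nonlinear maps.)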
> As an example, consider the following pseudo-code, representing a
> nonlinear map coupled with a dynamic linear system:
> y = f(x);
> x = dyn(y);
> In the saol semantics, there is always a unit delay between the two x's.
> Am I right? This might be a source of instability and inaccuracy.
There is an implicit delay of one sample between the time x is used
and the time it is set in this case. Can this not be fixed by using
dyn'(y) in place of dyn(y), where dyn'(y) = dyn(y * z) and z^-1 is
the unit delay?
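To spell out the semantics I have in mind (a sketch of my own, with
f and dyn standing in for arbitrary opcodes):

double x_state = 0.0;   /* x as left over from the previous pass */
for (n = 0; n < N; n++) {
    double y = f(x_state);  /* f sees last sample's x: the implicit z^-1 */
    x_state = dyn(y);       /* the new x is not visible to f until n+1 */
}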
I do not have much experience thinking about these issues, and I
would welcome your input. Are there other language models (lazy
evaluation perhaps) which allow direct specification of such
structures?
Best,
-- Eric
--
+-----------------+
| Eric Scheirer   |A-7b5 D7b9|G-7 C7|Cb C-7b5 F7#9|Bb |B-7 E7|
|eds@media.mit.edu| < http://sound.media.mit.edu/~eds >
| 617 253 0112    |A A/G# F#-7 F#-/E|Eb-7b5 D7b5|Db|C7b5 B7b5|Bb|
+-----------------+