Andy
Of course there are multiple functionally equivalent methods (i.e. they
synthesize to the same hardware) of describing a given architecture,
but some methods (e.g. using integers, variables and clocked processes,
while minimizing std_logic_vector (slv), signals, combinatorial processes and concurrent
assignments) simulate much more efficiently (approaching cycle-based
performance) than others. The more clock cycles I can simulate, the
more bugs I can find, and the more quickly I can verify alternate
architectures. It may not make a big difference on small projects, but
on larger projects, the performance difference is huge.
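As a rough sketch of the difference (a hypothetical 8-bit counter; the entity and names are invented for illustration), the first architecture below keeps all the work in one clocked process with an integer variable, while the second uses an slv signal plus a concurrent assignment. Both have the same cycle-accurate behavior, but the first schedules fewer signal updates and process wake-ups per clock, since the variable never appears in the simulator's event queue:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical 8-bit counter with a terminal-count output,
-- written in the two styles described above.
entity counter8 is
  port (
    clk : in  std_logic;
    tc  : out std_logic
  );
end entity counter8;

-- Integer variable in a single clocked process: no intermediate
-- count signal, no combinatorial process, fewer events to simulate.
architecture var_style of counter8 is
begin
  process (clk)
    variable count_v : integer range 0 to 255 := 0;
  begin
    if rising_edge(clk) then
      if count_v = 255 then
        count_v := 0;
      else
        count_v := count_v + 1;
      end if;
      -- register tc from the updated count value
      if count_v = 255 then
        tc <= '1';
      else
        tc <= '0';
      end if;
    end if;
  end process;
end architecture var_style;

-- std_logic_vector signal plus a concurrent assignment: the same
-- cycle-accurate behavior, but every clock now updates an 8-bit
-- signal and re-evaluates the concurrent assignment.
architecture slv_style of counter8 is
  signal count_slv : std_logic_vector(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      count_slv <= std_logic_vector(unsigned(count_slv) + 1);
    end if;
  end process;

  tc <= '1' when count_slv = "11111111" else '0';
end architecture slv_style;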
Andy
KJ wrote:
Andy wrote:
Yes, but...
Assuming the signals that those concurrent assignments depend on are
driven from clocked processes, they do not update until after the
clock, which means they are the registered (delayed) values.
So what? I typically don't care about a delta-cycle delay; when you put the
signals up in a wave window to debug, they all appear to change at the same
time.
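A minimal sketch of the timing being discussed (entity and signal names are invented for illustration): q_i is driven from a clocked process, and a concurrent assignment derives q_n from it. q_n updates one delta cycle after q_i, but both changes occur at the same simulation time, so a waveform viewer shows them switching together on the clock edge.

library ieee;
use ieee.std_logic_1164.all;

entity delta_demo is
  port (
    clk : in  std_logic;
    d   : in  std_logic;
    q   : out std_logic;
    q_n : out std_logic
  );
end entity delta_demo;

architecture rtl of delta_demo is
  signal q_i : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      q_i <= d;          -- q_i updates one delta after the clock edge
    end if;
  end process;

  q   <= q_i;
  q_n <= not q_i;        -- updates one further delta later, but at the
                         -- same simulation time, so the wave window
                         -- shows q and q_n changing together
end architecture rtl;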
snip
Note that both out1 and out2 have the same cycle-accurate behavior.
Note also that if both out1 and out2 exist, Synplify will combine them
and use out1 for both.
And this can be written in a functionally equivalent manner using a
process and concurrent assignments, and it will synthesize to the exact
same thing.
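The out1/out2 code itself was snipped above, so the following is only a guess at its shape (entity, widths and names invented): out1 comes from the variable style inside a single clocked process, out2 from a concurrent next-value assignment feeding a clocked process. The two outputs match cycle for cycle, and a synthesis tool can merge the duplicated register, as described above.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical reconstruction of the snipped out1/out2 pattern:
-- two ways of producing the same registered accumulator output.
entity out_demo is
  port (
    clk  : in  std_logic;
    din  : in  std_logic_vector(7 downto 0);
    out1 : out std_logic_vector(7 downto 0);
    out2 : out std_logic_vector(7 downto 0)
  );
end entity out_demo;

architecture rtl of out_demo is
  signal sum_next : unsigned(7 downto 0);
  signal sum_reg  : unsigned(7 downto 0) := (others => '0');
begin
  -- out1: variable style, everything inside one clocked process.
  process (clk)
    variable sum_v : unsigned(7 downto 0) := (others => '0');
  begin
    if rising_edge(clk) then
      sum_v := sum_v + unsigned(din);
      out1  <= std_logic_vector(sum_v);
    end if;
  end process;

  -- out2: signal style, concurrent next-value assignment plus a
  -- clocked process; cycle-accurate behavior is identical to out1.
  sum_next <= sum_reg + unsigned(din);

  process (clk)
  begin
    if rising_edge(clk) then
      sum_reg <= sum_next;
    end if;
  end process;

  out2 <= std_logic_vector(sum_reg);
end architecture rtl;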
KJ