Andy
Guest
On Sep 4, 10:11 am, KJ <Kevin.Jenni...@Unisys.com> wrote:
The way the synthesis tool recognizes it is in fact as you say. What
gives it the permission to do so is the built-in assertion, which
specifies that conditions that would result in a value outside the
declared range are "don't care." After all, I guarantee that the
simulator implements "integer range 0 to 512" as a 16- or 32-bit value,
not a ten-bit value!
My point is that the simulator and synthesis tool have "different"
behavior. Because the simulator errors out, the synthesis tool is free
to optimize the hardware for those conditions. It would be very
difficult for a synthesis tool to come up with an error routine in
hardware, so it does not try. It assumes (rightly so) that the user is
not interested in implementing cases where the simulation does not
complete.
Andy
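
[Editor's sketch of the simulator-versus-synthesis split Andy describes.
The entity and signal names here are hypothetical, not from the thread.]

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity range_demo is
  port (clk : in std_logic);
end entity;

architecture rtl of range_demo is
  -- The simulator stores this in a host integer (16 or 32 bits) and
  -- checks the bounds on every assignment; synthesis allocates ten bits,
  -- since the range 0 to 512 holds 513 values.
  signal count : integer range 0 to 512 := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- If count is 512 here, the simulation halts with a fatal range
      -- error on the assignment. Synthesis assumes that case never
      -- occurs, so it may emit a plain ten-bit incrementer with no
      -- overflow handling at all.
      count <= count + 1;
    end if;
  end process;
end architecture;
```

In simulation the range check acts like a built-in assertion; in the
synthesized hardware the out-of-range condition is simply a don't-care.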
On Sep 4, 10:13 am, Andy <jonesa...@comcast.net> wrote:
> > If you already have the complete logic description (which is encoded
> > in all of the 'non-assert' statements), the synthesis tool will not
> > really have much use for these unverifiable asserts that purport to
> > give this supposedly 'higher level' information. Just what should
> > happen if the logic description says one thing but the assertion
> > claims another...and the supposedly impossible thing happens (i.e.
> > the 'assert' fires)? The synthesis-tool-generated logic will have no
> > way to flag this and, I'm guessing, is free to implement this case
> > however it wants, independent of the logic description...I think
> > I'll pass on such a tool. The supposedly 'impossible' happens,
> > especially as code gets reused in new applications.
> If you've ever used integer types for synthesis, then you've already
> used "built-in" assertions, and seen the synthesis tools' ability to
> infer information from them and optimize the logic to take advantage
> of input combinations that cannot exist in the simulation:
>
>   signal my_int : integer range 0 to 255;
>
> This is an integer which, if its value ever exceeds the range of 0 to
> 255, inclusive, will cause a simulation halt with an unrecoverable
> error. The synthesis tool recognizes this and decides it does not need
> all 32 signed bits to represent every possible value of my_int; it
> only needs 8 unsigned bits.
Not at all. Synthesis tools key off the range definition that is typed
in as being from 0 to 255 to calculate that an 8-bit representation is
needed. Assertions have nothing to do with it.
> So indeed there is precedent for synthesis tools using information
> that is in the form of built-in assertions to allow optimization of
> logic.
And how can you say with any certainty that this is not just precedent
for using the defined range of an integer, and has absolutely nothing
to do with assertions?
KJ
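
[Editor's sketch of the mechanics both posters describe. Only my_int
comes from Andy's example; the second signal and the commented-out
assert are hypothetical.]

```vhdl
-- Width inference from a declared integer range:
signal my_int : integer range 0 to 255;  -- 256 values -> 8 unsigned bits
signal bigger : integer range 0 to 300;  -- 301 values -> 9 unsigned bits

-- KJ's objection concerns explicit, user-written assertions such as:
--   assert my_int < 100 report "my_int out of range" severity failure;
-- Synthesis tools of this era honor the declared range above, but they
-- typically ignore an assert like this one rather than narrowing
-- my_int to the 7 bits that 0 to 99 would require.
```

Whether one calls the declared bounds a "range definition" (KJ) or a
"built-in assertion" (Andy), the width calculation is the same: the
tool must represent every value the declaration permits, and no more.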