Clock Edge notation

On Sep 4, 10:11 am, KJ <Kevin.Jenni...@Unisys.com> wrote:
On Sep 4, 10:13 am, Andy <jonesa...@comcast.net> wrote:



If you already have the complete logic description (which is encoded in all
of the 'non-assert' statements), the synthesis tool will not really have much
use for these unverifiable asserts that purport to give this supposedly
'higher level' information. Just what should happen if the logic
description says one thing but the assertion claims another...and the
supposedly impossible thing happens (i.e. the 'assert' fires)? The
synthesis tool's generated logic will have no way to flag this and, I'm
guessing, is free to implement this case however it wants, independent of
the logic description....I think I'll pass on such a tool; the supposedly
'impossible' does happen, especially as code gets reused in new applications.

If you've ever used integer types for synthesis, then you've already
used "built-in" assertions, and seen the synthesis tools' ability to
infer information from them, and optimize the logic to take advantage
of input combinations that cannot exist in the simulation:

signal my_int : integer range 0 to 255;

This is an integer which, if its value ever exceeds the range of 0 to
255 inclusive, will cause a simulation halt with an unrecoverable
error.

The synthesis tool recognizes this and decides it does not need all
32 signed bits to represent all possible values of my_int; it only
needs 8 unsigned bits.

Not at all. Synthesis tools key off the range definition that is
typed in as being from 0 to 255 to calculate that an 8 bit
representation is needed. Assertions have nothing to do with it.



So indeed there is precedent for synthesis tools using information
that is in the form of built-in assertions to allow optimization of
logic.

And how can you say with any certainty that this is not just
precedent for using the defined range of an integer and has
absolutely nothing to do with assertions?

KJ
The way the synthesis tool recognizes it is in fact, as you say. What
gives it permission to do so is the built-in assertion, which specifies
that conditions which would result in a value outside the declared
range are don't cares. After all, I guarantee that the simulator
implements "integer range 0 to 512" as a 16- or 32-bit value, not a
ten-bit value!
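
Spelled out, the check that comes bundled with the range constraint amounts to
something like this (a sketch only, with a hypothetical new_value; the language
performs this check implicitly on every assignment, it is not something the
tools literally insert into your code):

signal my_int : integer range 0 to 255;
-- implicit equivalent of the range constraint, written as an explicit assert;
-- new_value stands for whatever is about to be assigned to my_int
assert (new_value >= 0) and (new_value <= 255)
  report "my_int out of its declared range"
  severity failure;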

My point is that the simulator and synthesis tool have "different"
behavior. Because the simulator errors out, the synthesis tool is free
to optimize the hardware for those conditions. It would be very
difficult for a synthesis tool to come up with an error routine in
hardware, so it does not try. It assumes (rightly so) that the user is
not interested in implementing cases where the simulation does not
complete.

Andy
 
KJ <Kevin.Jennings@Unisys.com> writes:
Synthesis unrolls the functions and the net result of converting an
enumerated type to a std_logic_vector and then back to an enumerated
type on the other end is zero logic cells (as it should be since the
one function is the inverse of the other)....and yes it really does do
this with real tools, not just an academic 'well it should do this'.
Care to provide a working example? This sounds really interesting.

Regards,
Marcus

--
note that "property" can also be used as syntaxtic sugar to reference
a property, breaking the clean design of verilog; [...]

(seen on http://www.veripool.com/verilog-mode_news.html)
 
KJ,

In any case, the idea of an "unverifiable" assertion is surely
absurd. The whole point of assertions is that they continually
monitor the data whose properties they check; long after the
assertion's author has forgotten why he put it there, the
assertion continues to contribute to the verification effort.

In simulation, yes. If the synthesized logic is somehow 'different' because
of the inclusion or not of the assert statement in the source code, then how
does the physical part behave when the asserted condition fails? That was
my basic point. I would expect it to behave completely as defined by the
logic description.
The guiding principle of HDL-logic synthesis is this: The behavior
exhibited by the simulator should determine the behavior exhibited by
the hardware synthesized by the same code.

An assertion is a way to force the simulation to stop if certain
conditions occur. Since functional behavior due to those conditions is
not handled in the simulation, it is reasonable to expect that the
synthesized hardware should not have to handle them either. The "logic
description" you refer to is the sum total of the behavior of the
simulator when simulating that code, and therefore includes the
assertions.

Having synthesis check assertions on statically determinable things is
appropriate. Dynamically, though? I'm not convinced that it could, or even
should, attempt this.
This may only be a matter of semantics, but the synthesis tool would
not be "checking" assertions, but merely using the information
provided by them to identify "don't care" conditions. Assertions may
provide information like: "if more than one of these bits is set, I
don't care what the hardware does, because that won't happen (and did
not happen in simulation)." The assertion gives permission to optimize
out any additional logic related to those don't care conditions.
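
A minimal sketch of that idea, with made-up signal names (whether a given
synthesis tool actually exploits the assert in this way is exactly what is
being debated here):

process (clk)
begin
  if rising_edge(clk) then
    -- the claim: req is one-hot-zero, so the priority implied below is redundant
    assert (req = "000") or (req = "001") or (req = "010") or (req = "100")
      report "req is not one-hot-zero" severity failure;
    if    req(0) = '1' then grant <= "001";
    elsif req(1) = '1' then grant <= "010";
    elsif req(2) = '1' then grant <= "100";
    else                    grant <= "000";
    end if;
  end if;
end process;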

Just what should happen if the logic
description says one thing but the assertion claim another...and the
supposedly impossible thing happens (i.e. the 'assert' fires)?
Again, the assertion causes the simulation to not handle the
condition, so there is no behavior related to that condition to
emulate in hardware. Since there is no behavior to emulate, the
synthesis tool is free to optimize out any additional hardware related
to the condition that would fire the assertion. The behavior of the
simulator was a "don't care," as should be the behavior of the
synthesized hardware.

The assertion IS a part of the logic description!

Then your verification effort correctly shows that you have made
unjustified assumptions that you must review. That's the whole
point: by capturing *in a single language construct* both an
assumption that you're making about runtime behaviour, and a
permission to the synthesis tool to share that assumption,
you have a safe way to capture knowledge of a system's
expected behaviour that cannot be described any other way.

The logic function, though, is described by the logic that is written,
not by the assert. Maybe we're not understanding what the other is saying
on this particular point, though..
Repeat after me; "The 'logic function' is determined by the
simulator's behavior, which includes the assertions."

Andy
 
On Sep 5, 8:40 am, Marcus Harnisch <marcus.harni...@gmx.net> wrote:
KJ <Kevin.Jenni...@Unisys.com> writes:
Synthesis unrolls the functions and the net result of converting an
enumerated type to a std_logic_vector and then back to an enumerated
type on the other end is zero logic cells (as it should be since the
one function is the inverse of the other)....and yes it really does do
this with real tools, not just an academic 'well it should do this'.

Care to provide a working example? This sounds really interesting.

Regards,
Marcus

--
note that "property" can also be used as syntaxtic sugar to reference
a property, breaking the clean design of verilog; [...]

(seen on http://www.veripool.com/verilog-mode_news.html)
Sure. See the code below. What I've done is define an enumerated type
('My_Type') with 8 values, plus To/From conversion functions that
convert this enumerated type into a std_ulogic_vector format. For
simplicity, the std_ulogic_vector is simply an unsigned binary
representation of the position within the enumerated type list. Since
I have 8 enumeration values, I need three bits to represent a signal of
that type. Other vector representations are permissible (like perhaps
an 8-bit 'one hot' representation). The vector representation itself
is not important; the key thing is that the To_Std_Ulogic_Vector()
function is the inverse of the From_Std_ULogic_Vector() function...in
other words, each one 'undoes' the other.

To demonstrate cross-entity optimizations, I've created a 'Transmitter'
entity that takes a vector in and spits out a signal of 'My_Type'.
Similarly, I've created a 'Receiver' entity that takes a signal of
'My_Type' in and spits out a std_ulogic_vector. Lastly, I've created
the top-level entity called 'Junk', which instantiates both of these
entities, creating an intermediate signal 'My_Type_Signal' that is of
the enumerated type.

The expected result would be that the output of the top level maps
directly to the inputs, since the two entities undo what the other has
done. Running this through Quartus and looking at the 'Technology
Map' viewer, this is exactly what you'll see, so what gets implemented
is 0 logic cells (which is also what Quartus reports).

Enjoy.

KJ

-- START OF CODE --
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package pkg_My_Package is
  type My_Type is (s0, s1, s2, s3, s4, s5, s6, s7);
  function To_Std_ULogic_Vector(L: My_Type) return std_ulogic_vector;
  function From_Std_ULogic_Vector(L: std_ulogic_vector) return My_Type;
end package pkg_My_Package;

package body pkg_My_Package is
  function To_Std_ULogic_Vector(L: My_Type) return std_ulogic_vector is
    variable RetVal: std_ulogic_vector(2 downto 0);
  begin
    RetVal := std_ulogic_vector(to_unsigned(My_Type'pos(L), RetVal'length));
    return(RetVal);
  end function To_Std_ULogic_Vector;

  function From_Std_ULogic_Vector(L: std_ulogic_vector) return My_Type is
    variable RetVal: My_Type;
  begin
    RetVal := My_Type'val(to_integer(unsigned(L)));
    return(RetVal);
  end function From_Std_ULogic_Vector;
end package body pkg_My_Package;

library ieee;
use ieee.std_logic_1164.all;

entity Transmitter is port(
  Gazinta: in  std_ulogic_vector(2 downto 0);
  Gazouta: out work.pkg_My_Package.My_Type);
end Transmitter;

architecture RTL of Transmitter is
begin
  Gazouta <= work.pkg_My_Package.From_Std_ULogic_Vector(Gazinta);
end RTL;

library ieee;
use ieee.std_logic_1164.all;

entity Receiver is port(
  Gazinta: in  work.pkg_My_Package.My_Type;
  Gazouta: out std_ulogic_vector(2 downto 0));
end Receiver;

architecture RTL of Receiver is
begin
  Gazouta <= work.pkg_My_Package.To_Std_ULogic_Vector(Gazinta);
end RTL;

library ieee;
use ieee.std_logic_1164.all;

entity Junk is port(
  Gazinta: in  std_ulogic_vector(2 downto 0);
  Gazouta: out std_ulogic_vector(2 downto 0));
end Junk;

architecture RTL of Junk is
  signal My_Type_Signal: work.pkg_My_Package.My_Type;
begin
  XMIT : entity work.Transmitter port map(Gazinta, My_Type_Signal);
  RCV  : entity work.Receiver    port map(My_Type_Signal, Gazouta);
end RTL;
-- END OF CODE --
 
On Wed, 05 Sep 2007 07:21:49 -0700, KJ wrote:

On Sep 5, 8:40 am, Marcus Harnisch <marcus.harni...@gmx.net> wrote:
KJ <Kevin.Jenni...@Unisys.com> writes:
Synthesis unrolls the functions and the net result of converting an
enumerated type to a std_logic_vector and then back to an enumerated
type on the other end is zero logic cells (as it should be since the
one function is the inverse of the other)....and yes it really does do
this with real tools, not just an academic 'well it should do this'.

Care to provide a working example? This sounds really interesting.

Sure. See code below.
[snip]

Sorry Kevin, this has not even slightly convinced me. Your Receiver
has statically-guaranteed mutual exclusivity because the communication
path from transmitter to receiver was an enumeration.
Rather than converting enum->SLV->enum as you claim, you're
doing SLV->enum->SLV (with the enum re-coded as SLV by the tool)
which is quite different.

I have a counter-example that multiplexes a 9-value enumeration or
an 8-bit std_logic_vector onto an 8-bit SLV bus, the enum being
one-hot-zero coded by a function, and the mux control being conveyed
on a separate std_logic. My receiver uses the mux control to
either drive the 8-bit SLV on to an 8-bit output port, or
drive the 8-bit one-hot-zero code on to the same 8-bit output
port using this bit of code:

if mode_i = '0' then   -- receive an operation code
  if    data_i(0) = '1' then opr_o <= X"01";
  elsif data_i(1) = '1' then opr_o <= X"02";
  elsif data_i(2) = '1' then opr_o <= X"04";
  elsif data_i(3) = '1' then opr_o <= X"08";
  elsif data_i(4) = '1' then opr_o <= X"10";
  elsif data_i(5) = '1' then opr_o <= X"20";
  elsif data_i(6) = '1' then opr_o <= X"40";
  elsif data_i(7) = '1' then opr_o <= X"80";
  else                       opr_o <= X"00";
  end if;
else                   -- receive an 8-bit value
  opr_o <= data_i;
end if;

And, surprise surprise, NO synth tool I've yet tried can
automatically infer that the various branches of the long
if-elsif are mutually exclusive, even though I've put all
the relevant modules in a single file and, taking the
design as a whole, the mutual exclusivity can easily be
proven. I get truckloads of redundant logic.

Replace my "elsif" with Weng's hypothetical "orif" and...
(1) simulation behaviour would be identical if the
"one-hot-zero" property holds good;
(2) simulation would throw a runtime error if the
"one-hot-zero" property were false at any moment
when the if-orif is executed;
(3) synthesis would rely on (2) to justify an assumption
that the various branches of the if-orif are
mutually exclusive, and therefore the gates can be
simplified to wires exhibiting exactly the behaviour
specified in (1).

Now, I am fully aware that I could easily re-code this
no-op decoder in a different way that would give me
the wires-only hardware I desire. But any such coding
would inherently capture my knowledge of the mutual
exclusivity. What Weng is very reasonably trying to
do is to find a way to capture known mutual exclusivity
in situations that are much less self-evident than
my example, and in which no simple re-coding could
capture the required knowledge. The results of my
little synthesis experiment show that this is worth
trying to achieve.
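
For this particular decoder, one such re-coding is almost trivially simple,
precisely because the chosen one-hot-zero coding is its own decode (a sketch
relying on that assumption, not Jonathan's actual re-write):

-- Under the one-hot-zero assumption, the if-elsif above already returns
-- data_i unchanged (bit k set selects the constant with only bit k set,
-- which is data_i itself), and the else branch is data_i too, so the whole
-- mode mux collapses to wires:
opr_o <= data_i;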

Note, for the avoidance of any confusion, that I am
NOT hereby advocating the "orif" proposal.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Hi Andy,
Now it is clear to me, if not to you:


What the assertion onehot0() can do, orif can do better!


"You can NEVER use a parameterized loop of if-then structures"

That statement is wrong now, after our long discussions.

Whenever you write a loop and try to transfer mutually exclusive
information into it, you must remember to clearly write an
'if..elsif...' structure; each elsif is then a perfect candidate to be
replaced within your loop.

REMEMBER: if you want to use the assertion onehot0() to transfer mutually
exclusive information, all signals in the onehot0() entries must be used
after the keywords if and elsif. Any other coding shortcut would leave the
information useless and wasteful.

Weng
 
Jonathan Bromley wrote:

What Weng is very reasonably trying to
do is to find a way to capture known mutual exclusivity
in situations that are much less self-evident than
my example, and in which no simple re-coding could
capture the required knowledge. The results of my
little synthesis experiment show that this is worth
trying to achieve.
This is interesting from a language point of view
but I am not convinced that it would make my job
any easier. I use lots of enumeration types in my code
and have not suffered (so far) with a synthesis setting
of AUTO. With Quartus and ISE this usually results
in a binary encoding for small enums and one-hot
for larger ones.

My point is that, until I inadvertently
set off this thread, I have not
had to invest any intellectual effort in considering
the synthesis details of enumeration encoding and decoding.

I have played with changing the AUTO setting to binary,
because this is what I used in the 22v10 days
when I had no choice but to do this by hand.
This costs a trifling amount of resources,
but since synthesis is willing to keep track
of the vector encoding, I have no good reason not
to stick with AUTO.

Does anyone have a synthesis example where
describing such enum encoding details in my code
is a big win?

Note, for the avoidance of any confusion, that I am
NOT hereby advocating the "orif" proposal.
So say we all.

-- Mike Treseler
 
On Sep 5, 1:01 pm, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
On Wed, 05 Sep 2007 07:21:49 -0700, KJ wrote:
On Sep 5, 8:40 am, Marcus Harnisch <marcus.harni...@gmx.net> wrote:
KJ <Kevin.Jenni...@Unisys.com> writes:
Synthesis unrolls the functions and the net result of converting an
enumerated type to a std_logic_vector and then back to an enumerated
type on the other end is zero logic cells (as it should be since the
one function is the inverse of the other)....and yes it really does do
this with real tools, not just an academic 'well it should do this'.

Care to provide a working example? This sounds really interesting.
Sure. See code below.

[snip]

Sorry Kevin, this has not even slightly convinced me. Your Receiver
has statically-guaranteed mutual exclusivity because the communication
path from transmitter to receiver was an enumeration.
Rather than converting enum->SLV->enum as you claim, you're
doing SLV->enum->SLV (with the enum re-coded as SLV by the tool)
which is quite different.
I think you read more into this than was intended. What Marcus had
asked for (or at least what I thought he had asked for) was example
code showing that the conversion of an enumerated type to and from a
std_logic_vector really took 0 logic elements.

It really had nothing to do with Weng's 'orif'. The branch that this
sub-topic was on started because you challenged me to optimize across
entity boundaries where the interface between the entities is a
std_logic_vector (or something along those lines). My sample code
conversion was slv->enum->slv, the reverse of what I had stated, simply
because the synthesizer does not assume that I/O pins are mutually
exclusive, so putting an enumerated type on the pins of the FPGA can be
problematic. Since I was not trying to demonstrate optimization across
FPGA boundaries, I made that change. While that point IS relevant to
Weng's 'orif' (but also likely solvable by the sum of products), my
posting was simply to show that conversions to and from an enumerated
type can be made internal to a single FPGA design, using
std_logic_vectors as the interface between entities, and they will incur
no synthesis cost in terms of resource usage....that's all.

Just to show that things can go the other way as well, though, I've
posted code at the end that does the following conversions in separate
entities:
slv->enum->slv->enum->slv

Again, it takes 0 logic cells and reduces to wires in the technology
map viewer, and there is an enum->slv->enum conversion inside the
design. Now you can again take issue with the fact that I'm starting
with slv, but these enumerated type signals are all going to be
internal to the device; in order to produce example code demonstrating
the concept I had to bring signals out to pins, otherwise the whole
thing would get reduced to 0 logic cells and 0 pins.

By the way, my conversion functions have never used if
statements....not saying they couldn't, or that if they did they would
necessarily consume resources; just that I've never run across the need
for an if statement in a type conversion function yet, and I'm happy with
the usage, the localization of the 'bit twiddling' types of operations to
a single spot in the code, and of course the 0 logic resource cost.

I have a counter-example that multiplexes a 9-value enumeration or
an 8-bit std_logic_vector onto an 8-bit SLV bus, the enum being
one-hot-zero coded by a function, and the mux control being conveyed
on a separate std_logic. My receiver uses the mux control to
either drive the 8-bit SLV on to an 8-bit output port, or
drive the 8-bit one-hot-zero code on to the same 8-bit output
port using this bit of code:

if mode_i = '0' then   -- receive an operation code
  if    data_i(0) = '1' then opr_o <= X"01";
  elsif data_i(1) = '1' then opr_o <= X"02";
  elsif data_i(2) = '1' then opr_o <= X"04";
  elsif data_i(3) = '1' then opr_o <= X"08";
  elsif data_i(4) = '1' then opr_o <= X"10";
  elsif data_i(5) = '1' then opr_o <= X"20";
  elsif data_i(6) = '1' then opr_o <= X"40";
  elsif data_i(7) = '1' then opr_o <= X"80";
  else                       opr_o <= X"00";
  end if;
else                   -- receive an 8-bit value
  opr_o <= data_i;
end if;

And, surprise surprise, NO synth tool I've yet tried can
automatically infer that the various branches of the long
if-elsif are mutually exclusive, even though I've put all
the relevant modules in a single file and, taking the
design as a whole, the mutual exclusivity can easily be
proven. I get truckloads of redundant logic.
Maybe you should consider my approach of constructing simple type
conversion functions to/from an enumerated type; you likely won't need
the big 'if' statement in those functions, which is a likely cause of
the redundant logic you're seeing....just speculating, though; you
certainly know your particular code better than I do.

<snip>
Now, I am fully aware that I could easily re-code this
no-op decoder in a different way that would give me
the wires-only hardware I desire. But any such coding
would inherently capture my knowledge of the mutual
exclusivity.
Sounds like a good approach to me.

What Weng is very reasonably trying to
do is to find a way to capture known mutual exclusivity
in situations that are much less self-evident than
my example, and in which no simple re-coding could
capture the required knowledge. The results of my
little synthesis experiment show that this is worth
trying to achieve.
Again, though, your simple experiment may not really demonstrate a
problem that 'orif' solves better than enumerated types or
sum-of-products....which is where we were a couple of days ago in this
thread, when the question was raised about just what problem Weng is
trying to solve. Or, more to the point, how come there is no actual good
example of a problem for which 'orif' is a better solution than the
current forms? Fitting all control logic into an 'if' statement (Weng's
stated goal) is a solution to WHAT problem?

Note, for the avoidance of any confusion, that I am
NOT hereby advocating the "orif" proposal.
--
And I am neither an advocate nor a detractor of 'orif'. I rail
against the unsubstantiated claims mostly.

KJ
---- START OF UPDATED CODE ----
library ieee;
use ieee.std_logic_1164.all;

entity Junk is port(
  Gazinta1: in  std_ulogic_vector(2 downto 0);
  Gazouta1: out std_ulogic_vector(2 downto 0);
  Gazinta2: in  std_ulogic_vector(2 downto 0);
  Gazouta2: out std_ulogic_vector(2 downto 0));
end Junk;

architecture RTL of Junk is
  signal My_Type_Signal  : work.pkg_My_Package.My_Type;
  signal My_Type_Signal2 : work.pkg_My_Package.My_Type;
  signal My_Type_Signal3 : work.pkg_My_Package.My_Type;
  signal my_slv          : std_ulogic_vector(2 downto 0);
begin
  XMIT1 : entity work.Transmitter port map(Gazinta1, My_Type_Signal);
  RCV1  : entity work.Receiver    port map(My_Type_Signal, Gazouta1);

  RCV2A : entity work.Transmitter port map(Gazinta2, My_Type_Signal2);
  RCV2B : entity work.Receiver    port map(My_Type_Signal2, my_slv);
  XMIT2 : entity work.Transmitter port map(my_slv, My_Type_Signal3);
  RCV2C : entity work.Receiver    port map(My_Type_Signal3, Gazouta2);
end RTL;
---- END OF UPDATED CODE ----
 
KJ,

I understand what you are trying to demonstrate, but I don't think
that is what we're asking for.

What we want is a way to take an input slv for which we are certain
that exactly one of those bits is set at any given time (one-hot,
mutually exclusive), and convert that slv into an output integer that
is the index of the bit that is set, with no priority encoding, etc.
An integer is sufficient because integer encodings are by definition
mutually exclusive. Also enumerated types are impossible to define
with an arbitrary (parameterized) number of possible values.
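
(A constrained integer subtype does scale with a generic, though; a sketch,
where N is an assumed generic:)

subtype index_t is integer range 0 to N - 1;  -- one value per mutually exclusive bit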

Now, we know this can be done with boolean equations, but can you do
it in behavioral code without resorting to boolean operations (and,
or, etc.)? I think it can be done with a case statement (and std_logic
types), but I'm not sure whether most synthesis tools will "take the
bait":

case input is
  when "0001" => output <= "00";
  when "0010" => output <= "01";
  when "0100" => output <= "10";
  when "1000" => output <= "11";
  when others => output <= "--";
end case;

For a loop implementation:

variable temp : slv(input'range);
output <= (others => '-');
for i in input'range loop
  temp := (others => '0');
  temp(i) := '1';
  if input = temp then
    output <= slv(to_unsigned(i, output'length));
  end if;
end loop;

Now, if I convert the integer encoding back to one hot:

result := (others => '0');
result(to_integer(unsigned(output))) := '1';


However, if this string of logic is synthesized, it will not reduce to
wires (because we had to use a binary encoding, since we had an
arbitrary number of inputs).

So, I would presume that, for an arbitrary number of mutually
exclusive inputs, there is no conversion from one hot to any mutually
exclusive encoding and back that reduces to wires.

Therefore, we need a method to do that.

Andy
 
"Andy" <jonesandy@comcast.net> wrote in message
news:1189032622.361254.59810@50g2000hsm.googlegroups.com...
KJ,

I understand what you are trying to demonstrate, but I don't think
that is what we're asking for.

What we want is a way to take an input slv for which we are certain
that exactly one of those bits is set at any given time (one-hot,
mutually exclusive), and convert that slv into an output integer that
is the index of the bit that is set, with no priority encoding, etc.
An integer is sufficient because integer encodings are by definition
mutually exclusive. Also enumerated types are impossible to define
with an arbitrary (parameterized) number of possible values.

Now, we know this can be done with boolean equations, but can you do
it in behavioral code without resorting to boolean operations (and,
or, etc.). I think it can be done with a case statement (and std_logic
types), but I'm not sure whether most synthesis tools will "take the
bait":

case input is
when "0001" => output <= "00";
when "0010" => output <= "01";
when "0100" => output <= "10";
when "1000" => output <= "11";
when others => output <= "--";
end case;

For a loop implementation:

variable temp : slv(input'range);
output <= (others => '-');
for i in input'range loop
temp := (others => '0');
temp(i) := '1';
if input = temp then
output <= slv(to_unsigned(i, output'length));
end if;
end loop;

Now, if I convert the integer encoding back to one hot:

result := (others => '0');
result(to_integer(unsigned(output))) := '1';


However, if this string of logic is synthesized, It will not reduce to
wires (because we had to use binary encoding because we had an
arbitrary number of inputs).

So, I would presume that, for an arbitrary number of mutually
exclusive inputs, there is no conversion from one hot to any mutually
exclusive encoding and back that reduces to wires.

Therefore, we need a method to do that.
This is an interesting example. I tried your above example on Quartus 7.1
(see complete code below), and indeed it leads to a sub-optimal logic
design. Running a gate-level simulation shows that the circuit behaves
correctly when the input is one-hot, i.e. "0001" -> "0001", "0010" ->
"0010", etc. The optimal implementation would be straight wires, but
the generated netlist contains a number of ALUTs. (Interestingly enough,
the input signal input(0) is not used.)

Anyway, in this simple example I would have guessed the synthesis tool to be
able to recognize that straight wires is a valid and optimal design. It
surprises me that Quartus does not recognize that.

However, this really has nothing to do with mutual exclusiveness or one-hot
encoding. The problem is more general: in a case statement, some inputs may
be specified as having "don't care" behaviour. How does the synthesis tool
use those extra degrees of freedom?

Returning to the OP's suggestion of using orif: In this specific example, I
don't see a need for changing the source code. It seems that all the
information necessary for the synthesis tool is already present in the
source file. Instead, this specific example shows that the synthesis tool
itself could be made smarter.

-Michael.


library ieee;
use ieee.std_logic_1164.all;

package p_test_decode is
  subtype t_data is std_logic_vector(3 downto 0);
end p_test_decode;

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use work.p_test_decode.all;

entity test_decode is
  port (
    input  : in  t_data;
    output : out t_data
  );
end entity test_decode;

architecture rtl of test_decode is
  signal val : std_logic_vector(1 downto 0);
begin
  decode: process(input) is
  begin
    case input is
      when "0001" => val <= "00";
      when "0010" => val <= "01";
      when "0100" => val <= "10";
      when "1000" => val <= "11";
      when others => val <= "--";
    end case;
  end process decode;

  encode: process(val) is
  begin
    output <= (others => '0');
    output(to_integer(unsigned(val))) <= '1';
  end process encode;

end architecture rtl;
 
Michael,

Yep, I stumbled across the lack of input(0) myself when I manually
converted it. It seems that the code assumes that EXACTLY one bit of
input is set (no bits set is not an option). So absence of the three
other bits is taken as the presence of the first bit. It may be that
if an extra bit is used to indicate that no bits were set, and that
bit used to reconstruct the one hot (er... zero one hot) vector, maybe
it would actually optimize out?

Just for grins, did you try it with one process? Perhaps the
optimization on the total stream failed because parts of it are in
separate processes. Not that this would be particularly acceptable,
since the nature of the problem we are trying to solve involves
communication between different entities, let alone different
processes.

Andy
 
Andy wrote:

Yep, I stumbled across the lack of input(0) myself when I manually
converted it. It seems that the code assumes that EXACTLY one bit of
input is set (no bits set is not an option). So absence of the three
other bits is taken as the presence of the first bit. It may be that
if an extra bit is used to indicate that no bits were set, and that
bit used to reconstruct the one hot (er... zero one hot) vector, maybe
it would actually optimize out?
This is just a logic minimization from 4 input to 3 input LUTs.
It follows directly from the "others" case.

I tried this both ways:

case input is
when "0001" => val <= "00";
when "0010" => val <= "01";
when "0100" => val <= "10";
when "1000" => val <= "11";
-- when others => val <= "00"; -- 4in LUT-OR, input(0) used
when others => val <= "--"; -- 3in LUT-OR input(0) ignored
end case;

So synthesis found an optimization;
it just didn't find the best one.

Just for grins, did you try it with one process?
This made no difference.

-- Mike Treseler
 
On Sep 6, 12:13 pm, Mike Treseler <mike_trese...@comcast.net> wrote:
Andy wrote:
Yep, I stumbled across the lack of input(0) myself when I manually
converted it. It seems that the code assumes that EXACTLY one bit of
input is set (no bits set is not an option). So absence of the three
other bits is taken as the presence of the first bit. It may be that
if an extra bit is used to indicate that no bits were set, and that
bit used to reconstruct the one hot (er... zero one hot) vector, maybe
it would actually optimize out?

This is just a logic minimization from 4 input to 3 input LUTs.
It follows directly from the "others" case.

I tried this both ways:

case input is
when "0001" => val <= "00";
when "0010" => val <= "01";
when "0100" => val <= "10";
when "1000" => val <= "11";
-- when others => val <= "00"; -- 4in LUT-OR, input(0) used
when others => val <= "--"; -- 3in LUT-OR input(0) ignored
end case;

So synthesis found an optimization,
It just didn't find the best one.

Just for grins, did you try it with one process?

This made no difference.

-- Mike Treseler
Wow; interesting...

Thanks,

Andy
 
"Andy" <jonesandy@comcast.net> wrote in message
news:1189001186.435598.131280@50g2000hsm.googlegroups.com...
KJ,

In any case, the idea of an "unverifiable" assertion is surely
absurd. The whole point of assertions is that they continually
monitor the data whose properties they check; long after the
assertion's author has forgotten why he put it there, the
assertion continues to contribute to the verification effort.

In simulation, yes. If the synthesized logic is somehow 'different' because
of the inclusion or not of the assert statement in the source code, then how
does the physical part behave when the asserted condition fails? That was
my basic point. I would expect it to behave completely as defined by the
logic description.

The guiding principle of HDL-logic synthesis is this: The behavior
exhibited by the simulator should determine the behavior exhibited by
the hardware synthesized by the same code.
In fact, the guiding principle of logic synthesis is that the behaviour
exhibited by the hardware should conform to the language specification, not
to 'the simulator' (whatever that might be). Every tool that I've picked up
claims conformance to some flavor of VHDL (i.e. '87, '93, '02); I've yet to
pick one up that claims conformance to Modelsim 6.3a.

<snip>
This may only be a matter of semantics, but the synthesis tool would
not be "checking" assertions, but merely using the information
provided by them to identify "don't care" conditions. Assertions may
provide information like: "if more than one of these bits is set, I
don't care what the hardware does, because that won't happen (and did
not happen in simulation)." The assertion gives permission to optimize
out any additional logic related to those don't care conditions.

But the logic description (i.e. everything BUT the asserts) completely
describes the logic function, by definition. The asserts are:
- Generally not complete behavioural descriptions
- Unverifiable as to correctness. In fact, a formal logic checker would
take the asserts and the logic description and formally prove (or
disprove) that the description (the non-assert statements) accurately
produces the stated behaviour (the asserts). Since formal logic checkers can
'fail' (i.e. show that incorrect behaviour can result from the given logic
description), this would tend to drive a stake completely through your claim.

The assertion IS a part of the logic description!
No, it is a (possibly incorrect) statement of what is believed to be correct
behaviour.


Repeat after me; "The 'logic function' is determined by the
simulator's behavior, which includes the assertions."

On that point, we most definitely do not agree. The non-asserts completely
define the logic function, the asserts incompletely (at least in VHDL)
describe correct behaviour.

KJ
 
"Andy" <jonesandy@comcast.net> wrote in message
news:1189032622.361254.59810@50g2000hsm.googlegroups.com...
KJ,

I understand what you are trying to demonstrate, but I don't think
that is what we're asking for.
We? Marcus asked for an example that demonstrated enumeration conversion
functions that result in 0 logic cell usage when the two functions are in
separate entities. My code demonstrates that.

What we want is a way to take an input slv for which we are certain
that exactly one of those bits is set at any given time (one-hot,
mutually exclusive), and convert that slv into an output integer that
is the index of the bit that is set, with no priority encoding, etc.
What you want is not always logically achievable....in any case, you're
posing a different question, one that interests you, so read on for more on
why.

An integer is sufficient because integer encodings are by definition
mutually exclusive. Also enumerated types are impossible to define
with an arbitrary (parameterized) number of possible values.

Now, we know this can be done with boolean equations,
Depending on your starting point, no you can't....again read on for more on
why.

but can you do
it in behavioral code without resorting to boolean operations (and,
or, etc.). I think it can be done with a case statement (and std_logic
types), but I'm not sure whether most synthesis tools will "take the
bait":
Why are you insisting on a particular form for the solution? You're just
like Weng who is trying to fit it into an if statement. I'll use the
appropriate form for the solution.

case input is
when "0001" => output <= "00";
when "0010" => output <= "01";
when "0100" => output <= "10";
when "1000" => output <= "11";
when others => output <= "--";
end case;

Now look at your code again. You have not really told the synthesis tool
that the input is one-hot coded, but I'll bet you think you did. To do so,
you 'should' have specified the cases as "---1", "--1-", etc. You know you
can't really do this and get the function you want (it will fail sim because
"0001" is not equal to "---1"), so you chose (incorrectly) to write incorrect
case statements because you chose to use a case statement....which is the
wrong tool....just like Weng would like to augment the 'if' statement
because he wants to see everything inside an 'if' statement for whatever
reason. In any case, you haven't 'told' the synthesis tool that this is
one-hot encoded; in fact you've left it up to its discretion how to
implement 3/4 of the cases.

Run your code through a synthesis tool and check the resource usage. Now
convert the case statement to the if/elsif form, using the std_match function
instead, where you match against "---1", "--1-", "-1--" and "1---", and compile
again. You'll probably find your logic resource usage drops way down (but
still not to 0 when paired with its 'inverse' decode function).
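
For reference, that if/elsif + std_match form might look like this (a sketch
assuming Andy's 4-bit input and 2-bit output, and ieee.numeric_std for
std_match):

if    std_match(input, "---1") then output <= "00";
elsif std_match(input, "--1-") then output <= "01";
elsif std_match(input, "-1--") then output <= "10";
elsif std_match(input, "1---") then output <= "11";
else                                output <= "--";
end if;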

Now, if I convert the integer encoding back to one hot:

result := (others => '0');
result(to_integer(unsigned(output))) := '1';


However, if this string of logic is synthesized, It will not reduce to
wires (because we had to use binary encoding because we had an
arbitrary number of inputs).

So, I would presume that, for an arbitrary number of mutually
exclusive inputs, there is no conversion from one hot to any mutually
exclusive encoding and back that reduces to wires.

Therefore, we need a method to do that.

Well OK then, now we get into the reason why what you're asking for is not
achievable except under certain usage conditions. In an earlier post to
Marcus I stated:

"The encoding and decoding functions can be lots of things but they could
simply be a one hot coding of the enumeration list or a binary coding or
whatever"

While I think it was clear from that post that the encode/decode functions
are functional inverses of each other, an important point that I had
neglected to mention is that these functions need to operate over the same
domain (i.e. all the possible input values) if you want to be able to apply
these functions as a pair, in any order, and incur no logic cost. I don't
have a proof, but I'm pretty sure that the condition is both necessary and
sufficient. I haven't found a case where it is not true, so I'll simply
call it "KJ's postulate", since that's about all it is at this point.

If you now have two functions f(x) and g(x) that are candidate conversion
functions (but the input 'x' is not necessarily over the same domain...or,
in the digital logic case, the 'x' inputs into f() and g() do not
necessarily have the same number of bits), you can have the following
situations:
1. f(g(x)) = x
2. g(f(x)) /= x

I claim that if f() and g() operate over the same domain (same number of
bits), then condition #2 above will never be true; f(g(x)) = g(f(x)) = x,
and the resulting implementation will be wires, with no logic resources
required to synthesize.

I also claim that if f() and g() operate over different domains, then if the
application USAGE is only of form #1, the resulting implementation will
be wires with no logic resources; but if the usage is of form #2, there will
be some non-zero cost to implement it and the result will not be just wires.

The example I originally put up used 'val and 'pos as the conversion
function pair. Those both operate on the same domain (i.e. the enumerated
type), so they will result in 0 logic when used as a pair in either order. I
could change that function pair to remap things however I wanted (gray code,
7 - 'pos, etc.) and this will always be true (per my first claim).
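
For instance, a remapped pair might look like this (a sketch only; function
names shortened, same My_Type as in the earlier code, and both functions
still operating over the same 8-value domain):

function To_SULV(L : My_Type) return std_ulogic_vector is
begin
  -- reverse coding: s0 -> "111", s7 -> "000"
  return std_ulogic_vector(to_unsigned(7 - My_Type'pos(L), 3));
end function To_SULV;

function From_SULV(L : std_ulogic_vector) return My_Type is
begin
  return My_Type'val(7 - to_integer(unsigned(L)));
end function From_SULV;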

Now take as an example something to test claim #2: let's say that g(x) is a
3->8 decoder and f(x) is an 8->3 mux. This is an example where the domains
of the two functions are not the same; one works with three bits, the other
with 8. Now implement f(g(x)). The primary input is the 3-bit input, which
goes through the 3->8 decoder, which is fed into the 8->3 mux and out the
chip. This will result in a 0 logic, 'wires' solution.
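
As a sketch of such a g()/f() pair (hypothetical names; g is the 3->8
decoder, f the 8->3 'which bit is set' function, shown here in a priority
form since any form works once the one-hot assumption holds):

function g_decode_3to8(x : unsigned(2 downto 0)) return std_ulogic_vector is
  variable v : std_ulogic_vector(7 downto 0) := (others => '0');
begin
  v(to_integer(x)) := '1';
  return v;
end function g_decode_3to8;

function f_encode_8to3(x : std_ulogic_vector(7 downto 0)) return unsigned is
  variable v : unsigned(2 downto 0) := (others => '0');
begin
  for i in 0 to 7 loop
    if x(i) = '1' then
      v := to_unsigned(i, 3);
    end if;
  end loop;
  return v;
end function f_encode_8to3;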

Now turn it around and implement g(f(x)). The primary input is an 8-bit
vector which gets encoded into a 3-bit number, which then gets decoded to
produce an 8-bit output. Again, implement it, but now you'll find that you
get some non-zero logic. The reason is quite simple: the 3-bit code cannot
possibly reproduce all 256 possible input values, so g() and f() IN THIS
USAGE are not quite functional inverses of each other. This whole thread has
been about mutually exclusive stuff, but put that aside for a moment and
ponder: if you have an 8-bit input, happen to set all of its bits to 1, and
then feed it into the mux/decode logic, do you really expect that the
output will be "FF"? It might, but you definitely won't be able to cycle
through all the input codes from "00" to "FF" and produce those same outputs
on the 8-bit output bus.

You've been preaching about how telling the synthesis tool that these are
mutually exclusive by way of an assertion could somehow magically produce
what you'd like to see, but I don't think it can (again, depending on the
usage). In my case, if there is some known mutual-exclusiveness information,
I start from that point (whether it's an enumerated type or a coded slv) and
know that the decode/encode pair costs nothing in logic (but enhances
readability and maintainability), whereas the encode/decode pair does have a
non-zero cost....and of course either function by itself has a non-zero
cost. I'd suggest you back off on your unfounded claims that assertions
could do what you seem to think they could, until you can produce something
tangible.

KJ
 
KJ <Kevin.Jennings@Unisys.com> writes:

I think you read more into this than was intended. What Marcus had
asked for (or at least what I thought he had asked for) was example
code showing that the conversion of an enumerated type to and from a
std_logic_vector really took 0 logic elements.
Your posting suggested (but maybe I was reading too much into it) that
some synthesis tools would be able to figure out by themselves whether
the usage of conversion functions at either end of a connection is
redundant and make everything dissolve in a puff of logic
automagically. I strongly doubt that this is true unless the
conversion is one-to-one, where this could potentially work.

It really had nothing to do with Weng's 'orif'.
A functionality like that (if it existed) could be utilized to take
advantage of the inherent uniqueness of enum elements.

-- Marcus

--
note that "property" can also be used as syntaxtic sugar to reference
a property, breaking the clean design of verilog; [...]

(seen on http://www.veripool.com/verilog-mode_news.html)
 
"Marcus Harnisch" <marcus.harnisch@gmx.net> wrote in message
news:868x7hh2au.fsf@harnisch.dyndns.org...
KJ <Kevin.Jennings@Unisys.com> writes:

I think you read more into this than was intended. What Marcus had
asked for (or at least what I thought he had asked for) was example
code showing that the conversion of an enumerated type to and from a
std_logic_vector really took 0 logic elements.

Your posting suggested (but maybe I was reading too much into it) that
some synthesis tools would be able to figure out by themselves whether
the usage of conversion functions at either end of a connection is
redundant and make everything dissolve in a puff of logic
automagically. I strongly doubt that this is true unless the
conversion is one-to-one, where this could potentially work.

See my reply from yesterday to Andy on this thread for what I believe are
the necessary conditions for the conversion functions to dissolve away....as
well as the reasoning why 'one hot -> encoded -> one hot' will never
dissolve away completely, regardless of the form in which one chooses to
write the equations (i.e. and/or, if/end if, case), and why 'asserts' will
not help either.

It really had nothing to do with Weng's 'orif'.

A functionality like that (if it existed) could be utilized to take
advantage of the inherent uniqueness of enum elements.

Again, from my reply yesterday, the conversion 'encoded -> one hot ->
encoded' will always dissolve away, resulting in 0 logic usage. The basic
concept is that if some arbitrary collection of signals is truly 'one hot',
then there exists an encoding (whether as a vector or an enum) from which
those arbitrary signals can be decoded; that is the basic underpinning of
why that collection is truly a logical 'one hot'.

KJ
 
On Sep 8, 9:51 am, "KJ" <kkjenni...@sbcglobal.net> wrote:
"Marcus Harnisch" <marcus.harni...@gmx.net> wrote in message

news:868x7hh2au.fsf@harnisch.dyndns.org...
KJ <Kevin.Jenni...@Unisys.com> writes:

I think you read more into this than was intended. What Marcus had
asked for (or at least what I thought he had asked for) was example
code showing that the conversion of an enumerated type to and from a
std_logic_vector really took 0 logic elements.

Your posting suggested (but maybe I was reading too much into it) that
some synthesis tools would be able to figure out by themselves whether
the usage of conversion functions at either end of a connection is
redundant and make everything dissolve in a puff of logic
automagically. I strongly doubt that this is true unless the
conversion is one-to-one, where this could potentially work.

See my reply from yesterday to Andy on this thread for what I believe are
the necessary conditions for the conversion functions to dissolve away....as
well the reasoning why the 'one hot -> encoded -> one hot' will never
dissolve away completely regardless of the form that one chooses to write
the equations (i.e. and/or, if/end if, case) and why 'asserts' will not help
either.

It really had nothing to do with Weng's 'orif'.

A functionality like that (if it existed) could be utilized to take
advantage of the inherent uniqueness of enum elements.

Again, from my reply yesterday, the conversion 'encoded -> one hot ->
encoded' will always dissolve away, resulting in 0 logic usage. The basic
concept is that if some arbitrary collection of signals is truly 'one hot',
then there exists an encoding (whether as a vector or an enum) from which
those arbitrary signals can be decoded; that is the basic underpinning of
why that collection is truly a logical 'one hot'.

KJ
KJ,

Sorry; I did not make myself clear. Yes, you did demonstrate
specifically what you were asked to, and I thank you for it. I
apologize if my response was taken as a negative critique of your
demonstration.

By "what we wanted", I meant that the subject of the thread, taken as
a whole, has to do with methods for optimally dealing with encodings
or conditions that are not by definition mutually exclusive, but are
functionally guaranteed to be so. A binary encoding is by definition
mutually exclusive, so starting with that would not demonstrate what
the thread, as a whole, seeks.

In light of the thread subject, a more useful demonstration (which you
were not asked for) would be to demonstrate a conversion from a one-
hot vector to a binary or enumeration encoding, and back, with minimal
(preferably zero) logic. As you have pointed out, that is likely not
possible.

Several options have been proposed within the existing language and
tool capabilities, but they are all limited to either data types that
include a "don't care" value understood by synthesis, or that support
an OR operation between values. It is possible that an enum value
could be treated as a don't care if nothing is done in response to it,
but I have not tried that. Methods capable of dealing with
unconstrained dimensions of input and output would be required as
well.

Thanks,

Andy
 
On Sep 7, 6:00 am, "KJ" <kkjenni...@sbcglobal.net> wrote:
"Andy" <jonesa...@comcast.net> wrote in message

news:1189001186.435598.131280@50g2000hsm.googlegroups.com...



KJ,

In any case, the idea of an "unverifiable" assertion is surely
absurd. The whole point of assertions is that they continually
monitor the data whose properties they check; long after the
assertion's author has forgotten why he put it there, the
assertion continues to contribute to the verification effort.

In simulation, yes. If the synthesized logic is somehow 'different' because
of the inclusion or not of the assert statement in the source code, then how
does the physical part behave when the asserted condition fails? That was
my basic point. I would expect it to behave completely as defined by the
logic description.

The guiding principle of HDL-logic synthesis is this: The behavior
exhibited by the simulator should determine the behavior exhibited by
the hardware synthesized by the same code.

In fact the guiding principle of logic synthesis is that the behaviour
exhibited by the hardware should conform to the language specification not
'the simulator' (whatever that might be). Every tool that I've picked up
claims conformance to some flavor of VHDL (i.e. '87, '93, '02), I've yet to
pick one up that claims conformance to Modelsim 6.3a.

<snip>
This may only be a matter of semantics, but the synthesis tool would
not be "checking" assertions, but merely using the information
provided by them to identify "don't care" conditions. Assertions may
provide information like: "if more than one of these bits is set, I
don't care what the hardware does, because that won't happen (and did
not happen in simulation)." The assertion gives permission to optimize
out any additional logic related to those don't care conditions.

But the logic description (i.e. everything BUT the asserts) completely
describes the logic function by definition. The asserts are:
- Generally not complete behavioural descriptions
- Unverifiable as to correctness. In fact, a formal logic checker program
would take the asserts and the logic description and formally prove (or
disprove) that the description (the non-assert statements) accurately
produce the stated behaviour (the asserts). Since formal logic checkers can
'fail' (i.e. show that incorrect behaviour can result from the given logic
description) this would tend to drive a stake completely through your claim.



The assertion IS a part of the logic description!

No, it is a (possibly incorrect) statement of what is believed to be correct
behaviour.



Repeat after me; "The 'logic function' is determined by the
simulator's behavior, which includes the assertions."

On that point, we most definitely do not agree. The non-asserts completely
define the logic function, the asserts incompletely (at least in VHDL)
describe correct behaviour.

KJ
KJ,

What happens if the VHDL description does not fully define the
behavior? The VHDL specification is written in terms of an executable
model. That model may be SW, in terms of an executable simulation or a
static (formal) analysis; or it may be physical, in terms of an FPGA
or ASIC implementation.

Let's look at the following example:

signal count, output : integer range 0 to 511;
....
process (rst, clk) is
begin
  if rst = '1' then
    count  <= 0;
    output <= 0;
  elsif rising_edge(clk) then
    if restart = '1' then
      output <= count;
      count  <= 0;
    else
      count <= count + 1;
    end if;
  end if;
end process;

The restart input above is externally guaranteed to be set at least
once every 500 clocks. If it is not, then an error has occurred
somewhere else, and we don't have to handle it or recover from it
without external assistance.

The question is what should the hardware do if count is 511, and
restart is not set? The logic description does not say what happens to
count, because the executable model stops, per the language
specification. Should the hardware model stop too? Or should the
synthesis tool interpret the situation as a "don't care"? If the
former, what does "stop" mean in the context of the hardware model? If
the latter, every synthesis tool I know of already implements a
rollover, which, while optimal from a resource POV, is not described
anywhere in the "logic description".

So what is the difference in the example above, and one in which a
user-defined assertion stops the model?

Andy
 
KJ,

What happens if the VHDL description does not fully define the
behavior?
I could say that you're at the mercy of how the synthesis tool wants
to generate it...but in reality, by definition, synthesis (and
simulation) assume that the logic description is complete. If it's
incomplete, the simulator can help you find that, just like it can help
you find any other design flaw. But synthesis is supposed to
implement it; if you have a design flaw in your logic, that's not its
concern.

Let's look at the following example:

signal count, output : integer range 0 to 511;
...
process (rst, clk) is
begin
if rst = '1' then
count <= 0;
output <= 0;
elsif rising_edge(clk) then
if restart = '1' then
output <= count;
count <= 0;
else
count <= count + 1;
end if;
end if;
end process;

The restart input above is externally guaranteed to be set at least
once every 500 clocks. If it is not, then an error has occurred
somewhere else, and we don't have to handle it or recover from it
without external assistance.
OK...you're declaring this to be the case.

The question is what should the hardware do if count is 511, and
restart is not set? The logic description does not say what happens to
count
Sure it does...it says to add 1. Nowhere in your code do you say not to
count if the count gets to 511.

, because the executable model stops, per the language
specification.
No, the simulator stops...

Should the hardware model stop too?
Why should it? Just because the simulator stops does not imply that
the synthesized form should as well. You're going back again to your
claim that synthesis implements whatever is simulatable, whereas in fact
synthesis and simulation are two totally different tools doing two
totally different things that are supposed to conform to the same
language specification...nowhere does anyone claim that one tool
should conform to some other tool.

Or should the
synthesis tool interpret the situation as a "don't care"?
No, it should do what you said and attempt to add 1....like it clearly
says to do. The fact that it is unable to produce 512 is your
design issue to resolve, not the synthesis tool's problem.

If the
former, what does "stop" mean in the context of the hardware model? If
the latter, every synthesis tool I know of already implements a
rollover, which, while optimal from a resource POV, is not described
anywhere in the "logic description".
Sure it is. You said to take the current value, add 1, and
implement this in digital logic with the given bits of precision. As
soon as you (implicitly) say "implement this in digital logic with the
given bits of precision" you've made certain design trade-offs that
you don't seem to be willing to accept.
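
If the designer does care what happens at 511 without a restart, the usual
remedy is to describe the wrap (or a saturate) explicitly, so that simulation
and hardware agree by construction (a sketch reusing Andy's hypothetical
signals; only the clocked branch is shown):

if restart = '1' then
  output <= count;
  count  <= 0;
elsif count = 511 then
  count <= 0;          -- explicit rollover: no simulation error, no ambiguity
else
  count <= count + 1;
end if;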

So what is the difference in the example above, and one in which a
user-defined assertion stops the model?
Asserts don't stop synthesized hardware...nor do they change the
implemented code in any way....nor should they.

KJ
 
