Clock Edge notation

"Philipp Richter" <p.Richter@yahoo.co.uk> wrote in message
news:fvn895$45o$1@aioe.org...
Hi

I have an architecture that implements a register file in the following
way:


library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;

entity Reg_File is
  port (
    clk      : in  std_logic;
    rd1_addr : in  std_logic_vector(4 downto 0);
    rd2_addr : in  std_logic_vector(4 downto 0);
    rd1_val  : out std_logic_vector(31 downto 0);
    rd2_val  : out std_logic_vector(31 downto 0);
    wr_ena   : in  std_logic;
    wr_addr  : in  std_logic_vector(4 downto 0);
    wr_val   : in  std_logic_vector(31 downto 0)
  );
end Reg_File;

architecture syn of Reg_File is

  type ram_type is array (0 to 31) of std_logic_vector(31 downto 0);
  signal RAM : ram_type;

begin

  process (clk)
  begin
    if (clk'event and clk = '1') then
      if (wr_ena = '1') then
        RAM(conv_integer(wr_addr)) <= wr_val;
      end if;
    end if;
  end process;

  rd1_val <= RAM(conv_integer(rd1_addr));
  rd2_val <= RAM(conv_integer(rd2_addr));

end syn;

The problem I face here is that I read an undefined value when rd1_addr or
rd2_addr is the same as wr_addr. In other words, I can't write a register
value and read from it at the same time (which obviously makes sense).
However, what's the best way to overcome this problem? Maybe writing back
the register values on the negative edge of the clock? Is this still
synthesizable, and will it run stably on an FPGA, or will I face problems
there? Thanks for any other helpful suggestions.

Philipp
Do you have to support asynchronous reads of the registers? If not, then
move the assignments for 'rd1_val' and 'rd2_val' into the process. The
behavior should then be consistent. Of course, you don't have any form of
reset, so simulation will show an undefined value for any register you read
before, or while, writing it.

Also, you should avoid using "(clk'event and clk = '1')" and instead use
"(rising_edge(clk))", which will simulate correctly for all possible values
of clk. For example, the original form would fail to recognize a
'0' -> 'H' transition as a valid clock edge, but would treat the
non-transition 'H' -> '1' as a valid edge.
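
For reference, here is a minimal sketch of that change (synchronous reads
plus rising_edge), reusing the declarations from the original post; with the
reads inside the clocked process, a simultaneous read and write of the same
address returns the previously stored value:

  process (clk)
  begin
    if rising_edge(clk) then
      if wr_ena = '1' then
        RAM(conv_integer(wr_addr)) <= wr_val;
      end if;
      -- registered read ports: the old contents appear during a
      -- same-address write; the new value shows up one clock later
      rd1_val <= RAM(conv_integer(rd1_addr));
      rd2_val <= RAM(conv_integer(rd2_addr));
    end if;
  end process;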
 
"Philipp Richter" <p.Richter@yahoo.co.uk> wrote in message
news:fvn895$45o$1@aioe.org...
Hi

I have an architecture that implements a register file in the following
way:


library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;

entity Reg_File is
port (
clk : in std_logic;
rd1_addr : in std_logic_vector(4 downto 0);
rd2_addr : in std_logic_vector(4 downto 0);
rd1_val : out std_logic_vector(31 downto 0);
rd2_val : out std_logic_vector(31 downto 0);
wr_ena : in std_logic;
wr_addr : in std_logic_vector(4 downto 0);
wr_val : in std_logic_vector(31 downto 0)
);
end Reg_File;

architecture syn of Reg_File is

type ram_type is array (0 to 31) of std_logic_vector(31 downto 0);
signal RAM : ram_type;

begin
process (clk)
begin
if (clk'event and clk = '1') then
if (wr_ena = '1') then
RAM(conv_integer(wr_addr)) <= wr_val;
end if;
end if;
end process;

rd1_val <= RAM(conv_integer(rd1_addr));
rd2_val <= RAM(conv_integer(rd2_addr));

end syn;

The problem that I face here, is that I read an undefined value when
rd1_addr or rd2_addr are the same as wr_addr. In other words, I cant write
a register value and read from it at the same time (makes sense
obviously). However, whats the best way to overcome this problem, maybe
writing back the register values at the negative edge of the clock? Is
this still sythesizeable and will run stable on an FPGA or will I face
there some problems? Thanks for any other helpful suggestions.

Philipp
Do you have to support asynchronous reads of the registers? If not, then
move the assignments for 'rd1_val' and 'rd2_val' into the process. The
behavior should then be consistent. Of course, you don't have any form of
reset so simulation will show an undefined for any register you read before,
or while, writing it.

Also, you should avoid using "(clk'event and clk = '1')" and instead use
"(rising_edge(clk))" which will simulate correctly for all possible values
of clk. For example, using the original form would fail to recognize a
'0' -> 'H' transition as a valid clock, but would think the non-transition
'H' -> '1' was a valid edge.
 
"Philipp Richter" <p.Richter@yahoo.co.uk> wrote in message
news:fvpe1q$k2m$1@aioe.org...
Your code should work with FFs. Whether it works with RAM depends on the
actual implementation of the RAM, but I know of no RAM with three
independent ports, so I guess FFs would be generated anyway.

Yes, and the next thing is that I need asynchronous reads. This implies
that I need distributed RAM, as BRAM is only possible with synchronous
reads.

To use the design with FFs I would use a reset.

Are the values really undefined ('X'), or just uninitialised ('U') because
you read before the first write operation? The first would indicate a
setup/hold violation.

At the moment I have initialised them to some values as follows:

type ram_type is array (0 to 31) of std_logic_vector(31 downto 0);
signal RAM : ram_type :=
(
B"00000000000000000000000000000000",
B"00000000000000000000000000000001",
....
);

I assume this works just for simulation, as I can't initialise distributed
RAM cells, if I am not completely wrong. In other words, I probably have to
use the rst signal to initialise the Reg File contents to zero.

Cheers,
Philipp
Is this register file accessed from an external processor? If so, then you
probably want a reset anyway, don't you? If so, the register array will have
to be implemented as flip-flops. It will take 1024 flip-flops, which probably
isn't excessive in a mid-size FPGA. The alternative would be to include
extra logic to zero the RAM array after reset.

Apart from the lack of a reset, I still can't see why your original logic
didn't work though.
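
To illustrate the flip-flop alternative (a sketch only; the rst port is an
assumption, it is not in the original entity), the write process could zero
the whole array synchronously:

  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        -- clear all 32 registers; resetting every bit forces a
        -- flip-flop implementation rather than distributed RAM
        RAM <= (others => (others => '0'));
      elsif wr_ena = '1' then
        RAM(conv_integer(wr_addr)) <= wr_val;
      end if;
    end if;
  end process;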
 
"Philipp Richter" <p.Richter@yahoo.co.uk> wrote in message
news:fvpe1q$k2m$1@aioe.org...
Your code should work with FF. If it works with RAM deends on the
actual implementation of the RAM, but I know no RAM with three
independant ports, so I guess FF are be generated anyway.

Yes and the next thing is that I need asynchronous reads. This implies
that I need distributed RAM as BRAM is only possible with synchronous
reads.

To use the design with FF I would use a reset.

Are the values realy undefined ('X'), or just uninitialised ('U'),
because you read before first write operation. First would indicate a
Setup-Hold violation.

At the moment I have initialised them to some values as follows:

type ram_type is array (0 to 31) of std_logic_vector(31 downto 0);
signal RAM : ram_type :=
(
B"00000000000000000000000000000000",
B"00000000000000000000000000000001",
....
);

I assume this works just for simulation as I cant initialise distributed
RAM cells if I am not completly wrong. In other words, I have to use
probably the rst signal to initialise the Reg FIle contents to ZERO

Cheers,
Philipp
Is this register file accessed from an external processor? If so, then you
probably want a reset anyway don't you? If so, the register array will have
to be implemented as flip-flops. It will take 1024 flip-flops which probably
isn't excessive in a mid-size FPGA. The alternative would be to include
extra logic to zero the RAM array after reset.

Apart from the lack of a reset, I still can't see why your original logic
didn't work though.
 
"KJ" <kkjennings@sbcglobal.net> wrote in message
news:e8d4d304-9daa-449b-95bf-cd6c4d359fd6@e53g2000hsa.googlegroups.com...
I still can't see why your original logic
didn't work though.


Because Philipp's definition of 'working' is that the read data must
show up at the *same time* as it is being written into memory, not one
clock cycle later when it has been stored away.

KJ
Has he said that? I took his original post to mean that he must get valid
data if a register is read at the same time as being written, but in most
register bank type situations the previous contents would be considered
valid and acceptable data.

I think Philipp needs to tell us a little bit more about the application
because I think we are only seeing a tiny portion without enough context to
understand the bigger picture.
 
"Thomas" <thomas.b36@gmail.com> wrote in message
news:445774a1-9f55-40a1-9256-65bafffc92f2@26g2000hsk.googlegroups.com...
On May 5, 12:27 pm, "Brad Smallridge" <bradsmallri...@dslextreme.com
wrote:
You should be able to switch your signals, if you can
get at the IOs before they are combined into inouts.
Can you do that? Or is there some sort of proprietary
code problem?

What tools are you using and what is your target CPLD?

Brad Smallridge
AiVision


Well, to Brad,

It's an Altera MAX II CPLD and I am using the Quartus II 7.2sp3 tool.
I discussed it this morning with some seniors at work and they seem to
say the same thing as David is saying here. But, given the situation,
I am working on it, trying to find some workaround. It might not be a
really nice design, but if it works we will be happy here.
So any other suggestions are more than welcome. I am sure that I am
not the first person who has had this issue.

Thomas
The fundamental problem with bridging I2C is its inherent bidirectional
nature. Essentially, you are connecting side A to side B, and if either side
pulls the signal low you need to repeat that low on the other side. However,
you then need to make sure you don't try to repeat the destination low (that
you are driving) on the source side, otherwise you just latch up. Custom I2C
repeaters and muxes tend to use a low-value series resistor and measure the
voltage drop across it to determine which side is the driving side. Of
course, you can't do this in an FPGA.

One way that may work is to have a state machine for each signal that keeps
track of which side is driving the signal low. In other words, when you
detect one side driving low, you remember which side it is and enter a state
that drives the other side low. When the source side stops driving, you
release the other side. You will almost certainly need some form of delay to
allow the signal time to go high again, otherwise you might detect a low on
the destination side immediately after you stop driving it.
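
A rough sketch of such a per-signal state machine follows; everything here
(the names sda_a/sda_b, the sampling clock clk, the hold counter and its
length) is an illustrative assumption rather than code from the thread:

  -- Illustrative only: one open-drain signal bridged between side A and B.
  -- sda_a and sda_b are inout std_logic ports; clk is a sampling clock.
  signal state : std_logic_vector(1 downto 0) := "00";  -- 00 idle, 01 A drives, 10 B drives, 11 release wait
  signal hold  : integer range 0 to 15;

  process (clk)
  begin
    if rising_edge(clk) then
      case state is
        when "00" =>                       -- idle: watch both sides for a low
          if sda_a = '0' then
            state <= "01";
          elsif sda_b = '0' then
            state <= "10";
          end if;
        when "01" =>                       -- A is the source, B is driven low
          if sda_a /= '0' then             -- source released: stop driving, wait
            state <= "11";
            hold  <= 15;
          end if;
        when "10" =>                       -- B is the source, A is driven low
          if sda_b /= '0' then
            state <= "11";
            hold  <= 15;
          end if;
        when others =>                     -- both released: let lines rise before re-arming
          if hold = 0 then
            state <= "00";
          else
            hold <= hold - 1;
          end if;
      end case;
    end if;
  end process;

  sda_b <= '0' when state = "01" else 'Z';  -- repeat A's low onto B
  sda_a <= '0' when state = "10" else 'Z';  -- repeat B's low onto A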
 
"Thomas" <thomas.b36@gmail.com> wrote in message
news:445774a1-9f55-40a1-9256-65bafffc92f2@26g2000hsk.googlegroups.com...
On May 5, 12:27 pm, "Brad Smallridge" <bradsmallri...@dslextreme.com
wrote:
You should be able switch your signals, if you can
get at the IOs before they are combined into inouts.
Can you do that? Or is there some sort of proprietary
code problem?

What tools are you using and what is your target CPLD?

Brad Smallridge
AiVision


Well to Brad,

Its Altear MAX II CPLD and I am using Quartus II 7.2sp3 tool.
I've discussed this morning with some senors at work and they seems to
say the same thing as David is saying here. But, giving the situation,
I am working on it trying to find some work around. It could be not
realy nice design but if it works we will be happy here.
So any other suggestions are more then welcome. I am sure that I am
not the first person that had this issue.

Thoms
The fundamental problem with bridging I2C is its inherent bidirectional
nature. Essentially, you are connecting side A to side B and if either side
pulls the signal low you need to repeat that low on the other side. However,
you then need to make sure you don't try to repeat the destination low (that
you are driving) on the source side otherwise you just latch up. Custom I2C
repeaters and muxes tend to use a low value series resistor and measure the
voltage drop across that to determine which side is the driving side. Of
course, you can't do this in an FPGA.

One way that may work is to have a state machine for each signal that keeps
track of which side is driving the signal low. In other words, when you
detect one side driving low, you remember which side it is and enter a state
that drives the other side low. When the source side stops driving you
release the other side. You will almost certainly need some form of delay to
allow the signal time to go high again otherwise you might detect a low on
the destination side immediately after you stop driving it.
 
It's an Altera MAX II CPLD and I am using the Quartus II 7.2sp3 tool.
I discussed it this morning with some seniors at work and they seem to
say the same thing as David is saying here.
So, you don't have the code to the I2C module?
 
"Kevin Neilson" <kevin_neilson@removethiscomcast.net> wrote in message
news:fvqti3$omd2@cnn.xsj.xilinx.com...
This seems simple, but I've been unable to find the answer. I have an
enumerated type like this:

type opmode_type is (m, p_plus_m, p_minus_m, m_plus_c...);
signal opmode : opmode_type;

As I understand it, the synthesizer will assign sequential values to these
(i.e., m=1, p_plus_m=2). But I need to assign particular (3-bit signed)
values to these: e.g., m="101", p_plus_m="110". Is there a way to do
this?
Not really. By using the enumerated type you're in a sense divorcing
yourself from particular encodings...but read on.

Perhaps I need to do something else entirely. I suppose I could use
aliases to assign names to the constants. I could make a constant array
and somehow index it (maybe with the enumerated type?). Or maybe I could
make a record?

type opmode_type is record m, p_plus_m, ... : unsigned(2 downto 0); end record;
signal opmode : opmode_type := ("101","110",...);

and then access the opmode with a field, as in: opmode.p_plus_m

I just wondered what the most stylistically proper way to do this is.
Typically I would create a to_std_logic_vector and from_std_logic_vector
function pair to convert between enumerated types and particular bit
encodings (or records...very useful for bits in a read/write port). In your
case, maybe you want a to_unsigned/from_unsigned pair. In any case, with
the to/from functions in hand, you are basically free to use whatever form
is appropriate.

Sample conversion function:

function to_unsigned(L: opmode_type) return unsigned is
  variable RetVal: unsigned(2 downto 0);
begin
  case L is
    when m => RetVal := "101";
    when p_plus_m => ...
    ...
  end case;
  return(RetVal);
end function to_unsigned;

Sample usage:

opmode <= from_unsigned("101");

The conversion functions synthesize quite nicely; there is no 'overhead' in
the final hardware from using them. The appearance and maintainability of the
code are greatly enhanced by converting to/from required bit patterns in this
fashion.
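
A matching from_unsigned could look something like the sketch below
(illustrative only; the remaining encodings would follow the same pattern,
and the 'others' default is an assumption):

  function from_unsigned(R: unsigned(2 downto 0)) return opmode_type is
  begin
    case R is
      when "101"  => return m;
      when "110"  => return p_plus_m;
      -- remaining encodings go here
      when others => return m;   -- assumed default for unused encodings
    end case;
  end function from_unsigned;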

Kevin Jennings
 
"KJ" <kkjennings@sbcglobal.net> wrote in message
news:b09730cb-96dc-49e5-b7c3-7b1d864926fc@l42g2000hsc.googlegroups.com...
On May 6, 12:48 pm, Kausi <kauser.jo...@gmail.com> wrote:
I am struggling with a piece of code that has been giving me sleepless
nights for the past week. What I fail to understand is that the
code works absolutely fine, without even a glitch, in ModelSim.
...snip

- Get some sleep and then fix those warnings. It's only a 'warning'
because it does not prevent the synthesis tool from completing its
mission, which is simply to produce a bitstream output file.

- Synthesis tool warnings are usually design errors.

- Good luck.

Kevin Jennings
In addition to all the other good advice given, I would also do a netlist
simulation (out of your synthesis tool) and a gate-level simulation with SDF
(timing check) and without SDF (structural). The process from RTL to gates is
very complex, so testing at intermediate stages is definitely recommended.

Hans
www.ht-lab.com
 
"KJ" <kkjennings@sbcglobal.net> wrote in message
news:b09730cb-96dc-49e5-b7c3-7b1d864926fc@l42g2000hsc.googlegroups.com...
On May 6, 12:48 pm, Kausi <kauser.jo...@gmail.com> wrote:
Iam struggling with a piece of code that has been giving me sleepless
nights for the past one week. What i fail to understand is that the
code works absolutely fine, without even a glitch in modelsim.
...snip

- Get some sleep and then fix those warnings. It's only a 'warning'
because it does not prevent the synthesis tool from completing it's
mission which is simply to produce a bitstream output file.

- Synthesis tool warnings are usually design errors.

- Good luck.

Kevin Jennings
In addition to all the other good advised given I would also do a netlist
simulation (out of your synthesis tool) and a gatelevel with SDF (timing
check) and without SDF (structural). The process from RTL to gates is very
complex so testing at intermediate stages is definitely recommended.

Hans
www.ht-lab.com
 
"Shannon" <sgomes@sbcglobal.net> wrote in message news:7c7acb7d-b7c3-440c-97c8-2b7dfe65c484@d19g2000prm.googlegroups.com...
OK, here are the relevant code snippets:

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE ieee.numeric_std.ALL;

HWID : INOUT STD_LOGIC_VECTOR(7 DOWNTO 0);
RAM_addr : OUT UNSIGNED(9 DOWNTO 0);

TYPE reg_type IS ARRAY (0 TO NUM_REGS-1) OF
STD_LOGIC_VECTOR(HWID'RANGE);
SIGNAL regs : reg_type;
SIGNAL data_in : STD_LOGIC_VECTOR(HWID'RANGE);

Line 156: RAM_addr <= UNSIGNED("00" & data_in);

and the error is:

Error (10327): VHDL error at xFace.vhd(156): can't determine
definition of operator ""&"" -- found 2 possible definitions
Error (10647): VHDL type inferencing error at xFace.vhd(156): type of
expression is ambiguous - "reg_type" or "std_logic_vector" are two
possible matches
Error (10411): VHDL Type Conversion error at xFace.vhd(156): can't
determine type of object or expression near text or symbol "UNSIGNED"

I have no idea why it thinks "reg_type" is a possible match. It seems
very clear that RAM_addr is unsigned, "00" is SLV, and data_in is
SLV. "&" can only have one possible meaning. I'm sure that I'm doing
something else wrong that you guys will point out in less than a
second! ;)

Shannon
Hi Shannon,

This is not a bug. As others tried to explain to you, the expression ("00" & data_in) could mean an array of 10 elements of
std_logic, or it could mean an array of 2 elements of std_logic_vector. In the first case, the result type is 'std_logic_vector', in
the second case it can be type 'reg_type'.

The tool cannot know which one you mean because the expression is used as an operand to the type conversion (to type 'UNSIGNED'),
and LRM 7.3.5 states explicitly: "The type of the operand of a type conversion must be determinable independent of the context (in
particular, independent of the target type)."

So either type 'reg_type' or type 'std_logic_vector' could match here.

How to fix this?
Work with EITHER 'std_logic_vector' OR with 'unsigned'. Convert between the two as little as you can. That will also make it clear
what the representation of the data is in the signals. If you have to convert, do it with a plain conversion (no expression in the
argument).

So, two solutions:
(1) Change 'data_in' to be an 'unsigned'. Then use normal assignment:

RAM_addr <= "00" & data_in;

(2) If you want to keep 'data_in' the same (type std_logic_vector), then convert it by itself:

RAM_addr <= "00" & UNSIGNED(data_in);

Either way should work (no ambiguity).

Rob
 
"Ken" <kkersti@gmail.com> wrote in message
news:deeced2c-1141-403b-917a-7ed71a1575fe@y22g2000prd.googlegroups.com...
Quick question: is it possible to have a scenario where you can use an
FPGA as a true bidirectional pipe without caring about the direction?
I'm referring to the problem below:

library ieee;
use ieee.std_logic_1164.all;

entity true_bidir is
  port (
    io_a : inout std_logic;
    io_b : inout std_logic
  );
end entity;

architecture bidir_arch of true_bidir is

begin

  io_a <= io_b;
  io_b <= io_a;

end architecture;

This will not map (not even through Synplicity), because I'm getting
an error saying, hey, you need a buffer or register between these
pins. So, is it possible with some kind of code trick to allow a true
bidirectional pipe through an FPGA? I don't care about a direction. I
would think, in theory, you could map a 'Z' to a 'Z' and thus not
worry about a buffer. I'm using a Virtex 4.

Thanks

Ken
kkersti@gmail.com
The simple answer is no: you need to know which direction the signal is going
at any one time in order to control the I/O buffers. If you can't see why,
run the FPGA Editor within ISE and look at the schematic of an IOB (or look
it up in the data sheet).
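
For illustration only (a sketch; the dir signal is an assumption that the
surrounding design would have to supply), a workable bidirectional
connection needs an explicit direction control:

  io_a <= io_b when dir = '1' else 'Z';  -- dir = '1': B's value is repeated on A
  io_b <= io_a when dir = '0' else 'Z';  -- dir = '0': A's value is repeated on B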
 
<HansWernerMarschke@web.de> wrote in message
news:e1bb65ae-70ff-4402-82e1-9f7d3eef39a1@k13g2000hse.googlegroups.com...

So let's assert that there is no index violation.
Wouldn't need to if the index is defined properly.

The statement after assert must be a boolean expression.
This should be the case in the following statement.
If the following statement were VHDL...but it's not.

assert (index in the_enigma.rotor(i+1).wheel'range)
report "index violation" severity failure;
'the_enigma.rotor' must've been defined as an array of some type, and in
order for it to be valid, that range must have definite limits defined. Any
access to anything outside that range will cause the simulator (you are
using one, aren't you?) to croak at run time.

Example: if 'the_enigma.rotor' is a vector of some type in the range 0 to 2
then
the_enigma.rotor(3) will immediately be caught by the simulator at run time
as a fatal error.

Any variable or signal that you would like to use as an index into
'the_enigma.rotor' should be defined as an integer using the range of
'the_enigma.rotor'.

Example:
signal index: integer range the_enigma.rotor'range;
....
the_enigma.rotor(index) <= ...

Then if you ever change 'index' to be outside of the defined range, the
simulator will flag it as a fatal error at run time. This will happen even
if you don't attempt to access 'the_enigma.rotor' using 'index'.

KJ
 
"rickman" <gnuarm@gmail.com> wrote in message news:f0283d1c-aeab-4651-ad45-
One of the bad things of being a "jack of all trades" is that I tend
to forget a lot of details between jacking any given trade. I think
it has been at least two years since I have written any VHDL and I
have forgotten a lot of my style.

I get a bit tired of all the typing that is needed to do things in
VHDL and I thought that using integers might be a bit simpler than
using slv. So instead of typing...

DataWr <= DataWr(DataWr'high-Scfg_Din'width downto 0) &
AddrReg(AddrReg'high downto AddrReg'high-Scfg_Din'width);

I was thinking about

DataWr <= sllbar(DataWr, CTPDATAWDTH) + srlbar(AddrReg, AddrRegWidth-
CTPDATAWDTH);

where sllbar is a function that returns an unconstrained integer.
Another approach would be a function that takes in DataWr and AddrReg and
Scfg_Din and computes a new DataWr output if that particular type of
function is something you would reuse in several places. That way the mess
is in one place (the function) but the usage (where the function is called)
is easier to follow.

I guess the missing information is the width of the data field, which is
one of the things that makes the slv version so long. I assume that
if DataWr is a constrained integer, it would be an error in simulation
if the value of sllbar was outside the range of DataWr.
Yes, if the actual value returned from the function is outside of the
defined range for DataWr then the simulator would give you an error.

How might this synthesize?
Just fine; synthesis doesn't 'check' for overflow, that's a problem that the
design is supposed to catch in simulation. If you had
signal DataWr: integer range 0 to 1023;
then DataWr would get synthesized as a 10-bit number. If there is some
condition under which your design would attempt to assign the value of 1024
to DataWr due to some design error on your part, then DataWr in the actual
design would be set to 0 due to your lack of providing sufficient precision.

This works for entities as well. If you have an entity that receives
or returns an unconstrained type, you would instantiate that entity and, in
doing so, connect it to some signal which *does* have a constrained
type. One possible silent 'gotcha' here, though, is that integers don't have
to be constrained, in which case they will end up being synthesized as full
32-bit things, which might not be what you want. Vectors don't have this
problem since, if the vector is not constrained somewhere, it won't compile,
so you'll find and fix the problem right away.
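
For example (declarations for illustration only; the names are made up):

  signal a_full : integer;                   -- unconstrained: synthesizes as a full 32-bit value
  signal a_10b  : integer range 0 to 1023;   -- constrained: synthesizes as a 10-bit value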

Or would it also be a synthesis error because of the
mismatch in range of DataWr and the sllbar result?
Synthesis does not check that you've defined sufficient precision for your
calculations. That is your job as the designer, and synthesis expects you to
do it, so no errors or warnings are likely to be reported (one exception,
though, can be the assignment of an out-of-range constant such as DataWr <=
1024; when DataWr has the 0 to 1023 range).

If I create a separate sllbar for each data width, sllnib, sllbyte,...
and use the mod operator to restrict the range of the result, that
should cure things, no?
That would temporarily 'cover up' your design error and push off the
ultimate resolution of this problem until sometime later when it will most
likely be much harder to diagnose. Instead of trying to sweep the
assignment of something that equals '1024' under the rug you should be using
the simulator to catch those problems so you can fix them.

Having the simulator report the assignment of something that evaluates to
1024 to something that is 'supposed' to be in the range from 0 to 1023 is a
good thing (a very good thing actually).

I was hoping to use an overloaded operator so that each one would have
the same name and the correct one would be picked based on the subtype
of the operands. But I believe that this won't work. Is that
correct?

Not quite sure what you're getting at here.

Kevin Jennings
 
"rickman" <gnuarm@gmail.com> wrote in message news:f0283d1c-aeab-4651-ad45-
One of the bad things of being a "jack of all trades" is that I tend
to forget a lot of details between jacking any given trade. I think
it has been at least two years since I have written any VHDL and I
have forgotten a lot of my style.

I get a bit tired of all the typing that is needed to do things in
VHDL and I thought that using integers might be a bit simpler than
using slv. So instead of typing...

DataWr <= DataWr(DataWr'high-Scfg_Din'width downto 0) &
AddrReg(AddrReg'high downto AddrReg'high-Scfg_Din'width);

I was thinking about

DataWr <= sllbar(DataWr, CTPDATAWDTH) + srlbar(AddrReg, AddrRegWidth-
CTPDATAWDTH);

where sllbar is a function that returns an unconstrained integer.
Another approach would be a function that takes in DataWr and AddrReg and
Scfg_Din and computes a new DataWr output if that particular type of
function is something you would reuse in several places. That way the mess
is in one place (the function) but the usage (where the function is called)
is easier to follow.

I guess the missing information the width of the data field which is
one of the things that makes the slv version so long. I assume that
if DataWr is a constrained integer, it would be an error in simulation
if the value of sllbar was outside the range of DataWr.
Yes, if the actual value returned from the function is outside of the
defined range for DataWr then the simulator would give you an error.

How might
this synthesize?
Just fine, synthesis doesn't 'check' for overflow that's a problem that the
design is supposed to catch in simulation. If you had
signal DataWr: integer range 0 to 1023;
then DataWr would get synthesized as a 10 bit number. If there is some
condition under which your design would attempt to assign the value of 1024
to DataWr due to some design error on your part, then DataWr in the actual
design would be set to 0 due to your lack of providing sufficient precision.

This also works for entities as well. If you have an entity that receives
or returns an unconstrained type, you would instantiate that entity and in
doing so would connect it to some signal which *does* have a constrained
type. One possible silent 'gotcha' here though is that integers don't have
to be constrained in which case they will end up being synthesized as full
32 bit things, which might not be what you want. Vectors don't have this
problem since if the vector is not constrained somewhere, it won't compile
so you'll find and fix the problem right away.

Or would it also be a synthesis error because of the
mismatch in range of DataWr and the sllbar result?
Synthesis does not check that you've defined sufficient precision for your
calculations. That is your job as the designer and synthesis expects you to
do that so no errors or warnings would likely be reported (one exception
though can be the assignment of an out of range constant such as DataWr <=
1024; when DataWr has the 0 to 1023 range).

If I create a separate sllbar for each data width, sllnib, sllbyte,...
and used the mod operator to restrict the range of the result, that
should cure things, no?
That would temporarily 'cover up' your design error and push off the
ultimate resolution of this problem until sometime later when it will most
likely be much harder to diagnose. Instead of trying to sweep the
assignment of something that equals '1024' under the rug you should be using
the simulator to catch those problems so you can fix them.

Having the simulator report the assignment of something that evaluates to
1024 to something that is 'supposed' to be in the range from 0 to 1023 is a
good thing (a very good thing actually).

I was hoping to use an overloaded operator so that each one would have
the same name and the correct one would be picked based on the subtype
of the operands. But I believe that this won't work. Is that
correct?

Not quite sure what you're getting at here.

Kevin Jennings
 
"Jim Lewis" <jim@synthworks.com> wrote in message news:D_ydnRA1x6d9Jr7VnZ2dnUVZ_vninZ2d@easystreetonline...
Shannon wrote:
...
Personally I think type casting it as an SLV should have worked.

Shannon

Unfortunately I agree with James' and Rob's analysis.
I don't like it, but there must be some reason for the
restriction.

Jim
Hi Jim,

The restriction you talk about is probably the LRM 7.3.5 rule: "The type of the operand of a type conversion must be determinable
independent of the context (in particular, independent of the target type)."

Why is this in there?
I'm not entirely sure, but it does make sense from a wider point of view. Here is the type conversion in simplified syntax:

target_type( expression )

'expression' is 'cast' to type 'target_type'.
That means that 'target_type' is not the same as the expression type.
So how do we know which type 'expression' can or should be? We don't!
That's why they put the rule in there that the type of the expression should be determinable without context information.

Later in 7.3.5, it is stated that the type conversion is only allowed if the target type is at least 'closely related' to the
expression type. 'Closely related' is then defined as a certain dependency between the expression type and the target type. Other
type conversions are not allowed (except for some 'universal' implicit type conversions).
So it seems that they could have made the type-inference rule dependent on the 'closely related' semantics of the construct. Maybe
something like this:

"The type of the operand of a type conversion must be determinable independent of the context, but taking into consideration that
the type of the expression and the target type must be closely related."

This sentence is not entirely correct yet, since implicit (universal) type conversions would need to be included, but it would allow
for overloading to determine the expression type based on the target type in some form. This could be a headache for compiler
builders, since VHDL hardly ever has special-purpose type conditions like this, and remember this was written before 1987, when
compiler technology was not yet as sophisticated as it is today. So I think that by keeping the type conversion rule simple, it was
easier to bring out a full VHDL parser, and that's why the rules are a bit easier on the compiler.

Anyway, that's my 2 cents on this issue.

Rob Dekker
Verific
 
"Jim Lewis" <jim@synthworks.com> wrote in message news:D_ydnRA1x6d9Jr7VnZ2dnUVZ_vninZ2d@easystreetonline...
Shannon wrote:
...
Personally I think type casting it as an SLV should have worked.

Shannon

Unfortunately I agree with James' and Rob's analysis.
I don't like it, but there must be some reason for the
restriction.

Jim
Hi Jim,

The restriction you talk about is probably the LRM 7.3.5 rule : "The type of the operand of a type conversion must be determinable
independent of the context (in
particular, independent of the target type)."

Why is this in there ?
I'm not entirely sure, but it does make sense from a wider point of view. Here is the type conversion in simplified syntax :

target_type( expression )

'expression' is 'cast' to type 'target_type'.
That means that 'target_type' is not the same as the expression type.
So how do we know which type 'expression' can or should be ? We don't !
That's why they put the rule in there that the type of the expression should be determinible without context info.

Later in 7.3.5, it is stated that the type conversion is only allowed if the target type is at least 'closely related' to the
expression type. 'closely related' is then defined as a certain dependency between expression type and target type. Other type
conversions are not allowed (except for some 'universal' implicit type conversions).
So it seems that they could have made the type-inference rule dependent on the 'closely related' semantics of the construct. Maybe
something like this :

"The type of the operand of a type conversion must be determinable independent of the context, but taken into consideration that the
type of the expression and the target type must be closely related."

This sentence is not entirely correct yet, since implicit (universal) type conversions would need to be included, but it would allow
for overloading to determine the expression type based on the target type in some form. This could be a headache for compiler
builders, since VHDL hardly ever has special-purpose type-conditions like this, and remember this was written before 1987, when
compiler technology was not yet so sophisticated as it is today. So I think that by making the type conversion rule simple, it would
be easier to bring out a full VHDL parser, and that's why the rules are a bit easier on the compiler.

Any way, that's my 2 cts on this issue.

Rob Dekker
Verific
 
"Kevin Neilson" <kevin_neilson@removethiscomcast.net> wrote in message
news:g0fsiu$80g6@cnn.xsj.xilinx.com...
How do I embed an if-else-if within a constant definition? This doesn't
compile in Modelsim:

constant Y : integer := A when A>B else B;

If I use the math_real library, I can do this:

constant Y : integer := integer(realmax(real(A),real(B)));

but I'd still like a general solution. Must I write a constant function?
Yes, you have to write a function.
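
For example, a small maximum function declared next to the constants would
do it (a sketch only; 'imax' is just an illustrative name):

  function imax(a, b : integer) return integer is
  begin
    if a > b then
      return a;
    else
      return b;
    end if;
  end function imax;

  constant Y : integer := imax(A, B);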

Kevin Jennings
 
"Reuven" <rpaley000@gmail.com> wrote in message
news:e51600d6-a75c-467b-8165-27f5df435a6d@8g2000hse.googlegroups.com...
On May 14, 12:11 am, Peter <peter.hermans...@sts.saab.se> wrote:
I thought I was a rather experienced VHDL designer until recently....
A very simple mistake - I forgot to initialize the state vector at
reset in a state machine - caused a difference between simulation and
reality. Because the state type was an enumerated type, the state
vector was initialized to its leftmost value at simulation start and
that value was the idle state. That behaviour masked the fact that the
state vector was never reset in the hardware. What do you gentlemen do
to avoid such simple but fatal errors?

Coding rules of some kind?
Code inspection?
Better discipline?
Declaring the idle state as the rightmost value?
Gate level simulation?

Regards, Peter

If you have the budget, a formal verification tool
(not a netlist checker) will check for resets.
Indeed, a formal tool like Averant's SolidAC will check your reset and will
tell you which FFs are not being reset. However, some high-end synthesis
tools like Mentor's Precision will analyse your state machines and, in
addition to the reset state, will also tell you if you have equivalent states
and/or unreachable states.

Example output for the FSM in this circuit: http://www.ht-lab.com/misc/fsm.html

******************************************
* Extracted FSM Analysis
* Module: fsmb_fsm_0
* State Vector: current_state
******************************************
* States : 5
* Asynch. Reset States : (s0)
* Synch. Reset States : <none>
* Equivalent States : (s1, s4)
* Unreachable States : <none>
******************************************

Hans
www.ht-lab.com
 
