Clock Edge notation

Hi Rick,
This is what I like about vectors. You can generally force an
implementation with the code and reduce the quirks that a given
synthesis tool subjects you to.

Cheers,
Jim
 
On Aug 20, 11:02 pm, JimLewis <J...@SynthWorks.com> wrote:
This is what I like about vectors.  You can generally force an
implementation with the code and reduce the quirks that a given
synthesis tool subjects you to.
Only if you are willing to constrain the code to keep certain terms
from being optimized away. The same synthesis tool, on the same
target, generally applies the same sets of "back end" (technology
mapping) optimizations regardless of what data type was used in the
RTL, since by the time those optimizations are applied, everything is
a vector of bits. The operations, defined by the data types, give
certain hints & constraints based on behavior, but the back end is
still free to accept those hints or offer something it thinks is
better (so long as it implements the prescribed behavior, at the
boundaries of interest). And in my observations, when Synplify has
abandoned the explicit, traditional carry implied by the RTL, the
circuit it came up with was faster and/or smaller. I learned to quit
wasting my time second guessing the synthesis tool's chosen
implementation, as long as it met performance and resource
constraints. I still use the subtract and compare condition only
because it is easy enough to write and understand, and consistently
gives equal or better results than a simple pre-subtraction compare to
zero.

I actually prefer some of the attributes of integer arithmetic that
have flowed into the IEEE fixed point vector types (e.g. length
expansion to cover the potential range of the results). Unfortunately,
the subtraction of unsigned operands still returns an unsigned result.
Arithmetically, the true difference can of course be negative. Like integer operations,
you specify the arithmetic operation with the fixed point expression,
and control the data path width by assigning it to objects of a
specific type/subtype with whatever "resizing" operation is required.
You could probably code this whole exercise in fixed point more easily
than in unsigned.

Andy
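
A minimal sketch of the fixed-point style Andy describes, using the IEEE fixed-point package (VHDL-2008). The entity and signal names here are hypothetical illustrations, not from the thread; note the explicit conversion to sfixed so a negative difference is representable, and the resize back to the declared data-path width:

```vhdl
library ieee;
use ieee.fixed_pkg.all;

entity diff_demo is
  port (
    a, b : in  ufixed(3 downto -4);   -- unsigned fixed point, 0 to 15.9375
    diff : out sfixed(4 downto -4)    -- signed, so a - b can go negative
  );
end entity;

architecture rtl of diff_demo is
begin
  -- to_sfixed adds a sign bit, "-" expands the result range to cover
  -- all possible values, and resize trims it back to the output subtype
  diff <= resize(to_sfixed(a) - to_sfixed(b), diff'high, diff'low);
end architecture;
```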
 
hi,

Weng Tianxiang wrote:
In this situation, at most one latch is needed to resolve the
problem. Why a full state machine?
We can safely assume the state machine is one-hot encoded.
If you have trouble inferring latches,
then you can implement them directly
with a specific entity.
It doesn't look as "smart", but it
removes any ambiguity from your code.


--
http://ygdes.com / http://yasep.org
 
On Feb 24, 5:40 pm, feng <xu_feng...@yahoo.com> wrote:
Hi,
Assume I have a counter which I want to set to 0 at the first clock
edge. For this, I set its value to the maximum during the reset phase,
i.e.

if reset = '1' then
counter_int <= "1111";
elsif rising_edge(clk) then ...
Why did you post this to a Verilog group?
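
Setting that aside, a complete version of the reset-preset idea in feng's snippet might look like this (entity name hypothetical): the counter is preset to all-ones during reset, so the first enabled increment wraps it to zero:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity preset_counter is
  port (
    clk, reset : in  std_logic;
    count      : out unsigned(3 downto 0)
  );
end entity;

architecture rtl of preset_counter is
  signal counter_int : unsigned(3 downto 0);
begin
  process (reset, clk)
  begin
    if reset = '1' then
      counter_int <= (others => '1');  -- preset to 15 ...
    elsif rising_edge(clk) then
      counter_int <= counter_int + 1;  -- ... so the first edge wraps to 0
    end if;
  end process;
  count <= counter_int;
end architecture;
```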
 
On 3/9/2010 1:46 AM, Weng Tianxiang wrote:
Hi,
I have a question about when to generate a latch.

In Example_1 and Example_2, I don't think a latch will be generated,
but I don't know why.

Example_1: process(RESET, CLK)
Begin
If RESET = '1' then
StateA <= S0;
Elsif CLK'event and CLK = '1' then
If SINI = '1' then
StateA <= S0;
Elsif E2 = '1' then
null; -- missing a signal assignment statement
-- I suppose it will not generate a latch, why?
Elsif StateA = S1 then
StateA <= S3;
Else
StateA <= StateA_NS;
End if;
End if;
End process;

Example_2: process(…)
Begin
Case StateA is
...; -- no signal assignment statements are missing
End case;
End process;

Weng
It will generate latches. Specifically, ones which are paired as master
and slave.
HTH., Syms.

http://en.wikipedia.org/wiki/Flip-flop_%28electronics%29#Master.E2.80.93slave_.28pulse-triggered.29_D_flip-flop
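
For readers puzzled by the exchange: in a clocked process like Example_1, a branch with no assignment simply means the register holds its value (and a flip-flop is, as Syms's link shows, a pair of master and slave latches). A *transparent* latch is typically inferred from an incomplete assignment in a combinational process, roughly like this (names hypothetical):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity latch_demo is
  port (
    en, d : in  std_logic;
    q     : out std_logic
  );
end entity;

architecture rtl of latch_demo is
begin
  process (en, d)
  begin
    if en = '1' then
      q <= d;      -- no else branch and no clock edge:
    end if;        -- q must hold its value, so a transparent latch is inferred
  end process;
end architecture;
```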
 
On Apr 16, 2:24 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
rickman wrote:
People say that strong typing catches bugs, but I've never seen any
real proof of that.  There are all sorts of anecdotal evidence, but
nothing concrete.

My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated.  I last saw VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector, and created a subranged integer instead.  Seems to be identical, but
isn't.  If you increment the subranged integer, it will stop simulation on
overflow, if you increment the bit vector, it will wrap around.  My coworker
who did this subranged integer stuff quite a lot ended up with code like

if foo = 15 then foo <= 0; else foo <= foo + 1; end if;

And certainly, all those lines had started out as

foo <= foo + 1;

and were only "fixed" later when the simulation crashed.

The good news is that the synthesis tool really generates the bitvector
logic for both, so all those simulator crashes were only false alarms.
I can't say I understand what the point is. If you want a counter to
wrap around you only need to use the mod operator. In fact, I tried
using unsigned and signed types for counters and checked the synthesis
results for size. I found that the coding style greatly influences
the result. I ended up coding with subranged integers using a mod
operator because it always gave me a good synthesis result. I never
did understand some of the things the synthesis did, but it was not
uncommon to see one adder chain for the counter and a second adder
chain for the carry out!
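
The style rickman describes, a subranged integer with mod so the wrap is explicit in the code rather than a simulation-time surprise, is roughly (entity name hypothetical):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mod_counter is
  port (
    clk : in  std_logic;
    q   : out integer range 0 to 15
  );
end entity;

architecture rtl of mod_counter is
  signal foo : integer range 0 to 15 := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- the mod makes the wraparound explicit, so simulation never
      -- trips the subtype's bounds check at foo = 15
      foo <= (foo + 1) mod 16;
    end if;
  end process;
  q <= foo;
end architecture;
```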

After I run the Verilog gauntlet this summer, I plan to start
verifying a library of common routines to use in designs when the size
is important. My current project is very tight on size and it is such
a PITA to code every line thinking of size.

Rick
 
On Apr 16, 6:37 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
glen herrmannsfeldt wrote:
If your test bench is good enough then it will catch all static
timing failures (eventually).  With static timing analysis, there
are many things that you don't need to check with the test bench.

And then there are some corner cases where neither static timing analysis
nor digital simulation helps - like signals crossing asynchronous clock
boundaries (there *will* be a setup or hold violation, but a robust clock
boundary crossing circuit will work in practice).

Example: We had a counter running on a different clock (actually a VCO,
where the voltage was an analog input), and to sample it robustly in the
normal digital clock domain, I Gray-encoded it.  When sampling at a setup
or hold violation there is exactly one bit in flight, which can resolve
either way, so the sampled value is either the count before the increment
or the count after it.

But this has nothing to do with timing analysis. Clock domain crosses
*always* violate timing and require a logic solution, not a timing
test.

Rick
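
The crossing Bernd describes might be sketched like this (entity and signal names hypothetical): the counter is Gray-encoded in its own domain and double-registered in the sampling domain, so a sample taken during a transition is off by at most one count:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gray_cross is
  port (
    clk_vco, clk_sys : in  std_logic;
    sampled_gray     : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of gray_cross is
  signal bin          : unsigned(7 downto 0) := (others => '0');
  signal gray         : std_logic_vector(7 downto 0);
  signal sync1, sync2 : std_logic_vector(7 downto 0);
begin
  process (clk_vco)
  begin
    if rising_edge(clk_vco) then
      bin <= bin + 1;                 -- free-running counter in VCO domain
    end if;
  end process;

  -- binary to Gray: exactly one bit changes per increment
  gray <= std_logic_vector(bin xor shift_right(bin, 1));

  process (clk_sys)
  begin
    if rising_edge(clk_sys) then
      sync1 <= gray;                  -- first stage may go metastable
      sync2 <= sync1;                 -- second stage gives it a cycle to settle
    end if;
  end process;
  sampled_gray <= sync2;
end architecture;
```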
 
On Apr 16, 2:56 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
Andy wrote:
IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real life designs, and neither was the contest's
declared winner.

Well, we pretty much know that the number of errors people make in
programming languages basically depends on how much code they have to write
- a language which has less overhead and is more terse is written
faster and has fewer bugs.  And it goes non-linear, i.e. a program with 10k
lines of code will have fewer bugs per 1000 lines than a program with 100k
lines of code.  So the larger the project, the better the more terse
language is.

That must be why we are all programming in APL, no?

Rick
 
Bernd Paysan <bernd.paysan@gmx.de> writes:

rickman wrote:
People say that strong typing catches bugs, but I've never seen any
real proof of that. There are all sorts of anecdotal evidence, but
nothing concrete.

My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated. I last saw VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector, and created a subranged integer instead.
Surely the designer should've used a bit vector (for example
"unsigned" type) then? That's not the language's fault!

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html
 
Bernd Paysan <bernd.paysan@gmx.de> writes:

Andy wrote:
IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real life designs, and neither was the contest's
declared winner.

Well, we pretty much know that the number of errors people make in
programming languages basically depends on how much code they have to write
- a language which has less overhead and is more terse is written
faster and has fewer bugs.
Citation needed :) (I happen to agree, but if you can point to good
studies, I'd be interested in reading ...)

And it goes non-linear, i.e. a program with 10k lines of code will
have fewer bugs per 1000 lines than a program with 100k lines of
code. So the larger the project, the better the more terse language
is.
Is that related to the terseness of the core language, or how many
useful library functions are available, so you don't have to reinvent
the wheel (and end up with a triangular one...)?

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html
 
The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: Would you suggest that using positional notation in
port maps and argument lists is more prone to cause errors? And which
is prone to cost more to find and fix any errors? My point is not that
positional notation is an advantage of one language over another, it
is simply to debunk the "fewer lines of code = better code" myth.
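
Andy's port-map point, sketched as a fragment (a hypothetical `work.fifo` entity and signal names are assumed): the positional form is shorter, but a silent ordering mistake in it can cost far more to find than the extra typing of named association:

```vhdl
-- positional association: compiles even if two same-typed
-- ports are accidentally swapped
u1 : entity work.fifo
  port map (clk, rst, din, wr_en, dout, rd_en);

-- named association: more verbose, but each connection is
-- self-documenting and order-independent
u2 : entity work.fifo
  port map (
    clk   => clk,
    rst   => rst,
    din   => din,
    wr_en => wr_en,
    dout  => dout,
    rd_en => rd_en
  );
```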

Don't kid yourself that cryptic one-liners are more bug free than well
documented (e.g. compiler-enforced comments) code that is more
verbose.


Andy
 
Andy <jonesandy@comcast.net> writes:

The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: Would you suggest that using positional notation in
port maps and argument lists is more prone to cause errors? And which
is prone to cost more to find and fix any errors? My point is not that
positional notation is an advantage of one language over another, it
is simply to debunk the "fewer lines of code = better code" myth.
Agreed - as with all things, any extreme position is daft...

The problem (as I see it) comes with languages (or past
design-techniques enforced by synthesis tools) which are not
descriptive enough to allow you to express your intent in a
"relatively" small amount of code. Which is why assembly is not as
widely used anymore, and more behavioural descriptions win out over
instantiating LUTs/FFs by hand. It's not about the verboseness of the
language per-se, more about the ability to show (clearly) your intent
relatively concisely.

And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product. And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)

BTW - I write a lot of VHDL and a fair bit of Python, so I see both
sides of the language fence. (I also write a fair amount of Matlab, which
annoys me in many many ways due to the way features have been kluged
on over time, but it's sooo quick to do some things that way!). I
can't see myself moving over to Verilog either - the conciseness
doesn't seem to be the "right sort" of conciseness for me.

Don't kid yourself that cryptic one-liners are more bug free than well
documented (e.g. compiler-enforced comments) code that is more
verbose.
Indeed, I don't (kid myself that is)! You can write rubbish in any
language :)

BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html
 
On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?
Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)

By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.
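
For instance (entity and signal names hypothetical), an assertion that documents an interface assumption and checks it in simulation at the same time:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity strobe_check is
  port (clk, req_a, req_b : in std_logic);
end entity;

architecture rtl of strobe_check is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- the strobes are assumed mutually exclusive; this line both
      -- documents that assumption and enforces it in simulation
      assert not (req_a = '1' and req_b = '1')
        report "req_a and req_b asserted together: contract violated"
        severity error;
    end if;
  end process;
end architecture;
```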

Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

Andy
 
On Apr 20, 6:36 pm, Andy <jonesa...@comcast.net> wrote:
Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?
Hear, hear!

There is, of course, another form of enforced comment - the assertion.
Because assertions are pure observers and can't affect anything else [*]
they have a similar status to comments - they make a statement about
what you THINK is happening in your code - but they are *checked at
runtime*.  VHDL has permitted procedural assertions since forever;
nowadays, though, we have come to expect more sophisticated assertions
allowing us to describe (assert) behaviours that span a period of time.

Some of the benefits of strict data types that Andy and Martin mention
can be replicated by assertions. Others, though, are not so easy;
for example, an integer subtype represents an assertion on _any_
attempt to write to _any_ variable of that subtype, checking that
the written value is within bounds. It would be painful to write
explicit assertions to do that. So the two ideas go hand in hand:
data types (and a few other constructs) can automate certain simple
assertions that would be tiresome to write manually; hand-written
assertions can be very powerful for describing specific behaviours
at points in the code where you fear that bad things might happen.

Choosing to use neither of these sanity-checking tools seems to me
to be a rather refined form of masochism, given how readily
available both are.

[*] Before anyone asks: yes, I know that SystemVerilog assertions
can have side effects. And yes, I have made use of that in real
production code, albeit only to build observers in testbenches.
The tools are there to be used, dammit.....
--
Jonathan Bromley
 
On Apr 20, 5:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
Andy <jonesa...@comcast.net> writes:
The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: Would you suggest that using positional notation in
port maps and argument lists is more prone to cause errors? And which
is prone to cost more to find and fix any errors? My point is not that
positional notation is an advantage of one language over another, it
is simply to debunk the "fewer lines of code = better code" myth.

Agreed - as with all things, any extreme position is daft...

The problem (as I see it) comes with languages (or past
design-techniques enforced by synthesis tools) which are not
descriptive enough to allow you to express your intent in a
"relatively" small amount of code.  Which is why assembly is not as
widely used anymore, and more behavioural descriptions win out over
instantiating LUTs/FFs by hand.  It's not about the verboseness of the
language per-se, more about the ability to show (clearly) your intent
relatively concisely.
Or do you think it has to do with the fact that the tools do a better
job, so that the efficiency gains of using assembly language or
instantiating components are greatly reduced?


And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product.  And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)
I am going to give Emacs a try this summer when I have more free
time. I don't see the things you mention as being a solution because
they don't address the problem. The verboseness is inherent in VHDL.
Type casting is something that makes it verbose. That can often be
mitigated by using the right types in the right places. I never use
std_logic_vector anymore.


Rick
 
On Apr 20, 1:36 pm, Andy <jonesa...@comcast.net> wrote:
On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:

BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)
I used to use boolean for most signals that were used as controls for
ifs and such. But in the simulator these are displayed as values
rather than the oscilloscope-style trace used for std_logic, which I find
much more readable. So I don't use boolean. I seldom use enumerated
types. I find nearly everything I want to do works very well with
std_logic, unsigned, signed and integers with defined ranges. I rely
on comments to explain what is going on, because when it is not clear
from reading the code, I think there is little that using various
types will add to the picture.


By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.
I've never used assertions in my synthesized code. I hate getting
warnings from the tools so I don't like to provoke them. There are
times I use assignments in declarations of signed or unsigned types to
avoid warnings I get during simulation. But then these produce
warnings in synthesis, so you can't always win.

Can you give an example of an assertion you would use in synthesized
code?


Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?
I only worry about my comments...

Rick
 
I don't know of any assertions that are used by the synthesis tool,
but I do use assertions in my RTL to help verify the code in
conjunction with the testbench.

The most common occurrence is with counters. I can use integer
subtypes with built-in bounds checking to make sure a counter never
overflows or rolls over on its own (when the surrounding circuitry
should never allow that to happen, but if it did, I would want to know
about it first hand). Or I can use assertion statements with unsigned
when the allowable range for a counter is not 0 to 2**n-1.
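
A sketch of both variants Andy mentions (entity and signal names hypothetical): the integer subtype gets bounds checking for free on every write, while the unsigned counter needs an explicit assertion when its legal range is not the full 0 to 2**n-1:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity bounded_count is
  port (clk, inc : in std_logic);
end entity;

architecture rtl of bounded_count is
  -- variant 1: subtype bounds are checked on every write in simulation
  signal ticks : integer range 0 to 99 := 0;
  -- variant 2: vector wide enough for 0..127, but only 0..99 is legal
  signal count : unsigned(6 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if inc = '1' then
        ticks <= (ticks + 1) mod 100;  -- wrap kept explicit
        count <= count + 1;            -- surrounding logic is assumed
                                       -- to stop inc before 99
      end if;
      -- explicit check, since the vector subtype alone allows up to 127
      assert count <= 99
        report "count exceeded its allowed range"
        severity failure;
    end if;
  end process;
end architecture;
```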

If I know that a set of external interface strobe inputs should be
mutually exclusive, and I have optimized the circuit to take advantage
of that, then I use an assertion to verify it. It would be nice if the
synthesis tool recognized the assertion, and optimized the circuit for
me, but I'll take what I can get.

I'm rarely concerned with what waveforms look like, since the vast
majority of my debugging is with the source level debugger, not the
waveform viewer.

I suspect that if your usage is constrained to the data types you
listed (except bounded integer subtypes), you may do well with
Verilog. But given that you may not use a lot of what is available in
VHDL, it would be worthwhile to compare the productivity gains from
using more of the capabilities of the language you already know, to
the gains from changing to a whole new language.

Andy
 
On Apr 20, 11:11 am, Martin Thompson <martin.j.thomp...@trw.com>
wrote:
And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product.  And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)
BTW, for a long time I have been hearing about "modern" VHDL code and the
methods you enumerate. Since typical VHDL books do not deal with coding
style, do you know of any VHDL book explaining this modern coding style?

César
 
Andy <jonesandy@comcast.net> writes:

On Apr 20, 4:11 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)
Indeed so - capturing the knowledge that gets lost when you just use a
"bag-of-bits" type for everything.

By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.
Yes, I sprinkle assertions through both RTL and testbench code - they
usually trigger when I come back to the code after several weeks ;)

Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?
Well said - I had a feeling that was what you meant by
compiler-enforced comments: it's the whole of the code which should be
used as documentation.

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html
 
