Clock Edge notation

prodigy wrote:
Hi,
can I get some information on the topic of operand isolation?

Thanks in advance
you're welcome in advance:

http://www.google.com/search?hl=en&q=operand+isolation.+&btnG=Google+Search

Perhaps you can be a little more specific with your question?
 
mk wrote:

[I moved my reply to comp.lang.vhdl as it's GHDL related...]

Hi,
I updated the GTKWave for Win32 port I am maintaining. It's at 3.0.5
now. This one gets rid of the annoying console window.
http://www.dspia.com/gtkwave.html
Note that 3.0.5 (in general) mainly fixes bugs that caused coredumps when
loading certain GHW files. The GHW loader also now supports on-the-fly
decompression of GHW files compressed with gzip or bzip2.

-t
 
Try the text "RTL Hardware Design Using VHDL:
Coding for Efficiency, Portability, and Scalability."
A sample chapter on FSMs can be found at

http://academic.csuohio.edu/chu_p/

Hope this helps.

S.C.

Davy wrote:
Hi all,

Is there a hardware RTL book like "Code Complete" by Steve
McConnell?

Thanks!
Davy
 
Too bad the author is a proponent of the ancient style of separate
clocked and combinatorial processes in VHDL. He even uses a third
process for registered outputs.

I think he needs to discover what variables can do for you in a clocked
process.

Andy
 
I believe that separating the FSM into combinational logic and a
register is the first guideline for coding state machines in "Reuse
methodology manual" by Keating.

Mike G.

Andy wrote:
Too bad the author is a proponent of the ancient style of separate
clocked and combinatorial processes in VHDL. He even uses a third
process for registered outputs.

I think he needs to discover what variables can do for you in a clocked
process.

Andy
 
On 20 Jul 2006 18:51:11 -0700, mikegurche@yahoo.com wrote:

I believe that separating the FSM into combinational logic and a
register is the first guideline for coding state machines in "Reuse
methodology manual" by Keating.
It could well be that you are right.

Someone's going to have to work pretty hard to convince me
that such a split has any benefit for reusability. As I and
many other contributors to this group have noted on
numerous occasions, splitting a design of any kind into
combinational logic and a bunch of registers is a sad
waste of the expressive power of RTL.

Every synchronous design is a state machine in party dress.
Am I expected to write the whole of every design in the
two-process FSM style?

I have no intention of allowing pedantic coding guidelines to
force me into writing my designs as a bunch of isolated
flip-flops dangling on the side of a mess of combinational
logic, with its inevitably restrictive rules and coding style
limitations.

Decoding glitches on outputs, unexpected combinational
feedthrough paths from input to output, unwanted
combinational feedback, unpredictable clock-to-output
delays, unnecessary declarations of next-state signals
cluttering the top level of your modules, artificially clumsy
coding styles to assure freedom from unwanted latches -
all these can be yours, merely by following the textbooks'
advice to split your logic into combinational and registers.

Two-process FSMs for re-use? Hmmmm.

You HAVE a choice :)
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Jonathan Bromley wrote:
On 20 Jul 2006 18:51:11 -0700, mikegurche@yahoo.com wrote:

I believe that separating the FSM into combinational logic and a
register is the first guideline for coding state machines in "Reuse
methodology manual" by Keating.


[omitted]

Decoding glitches on outputs, unexpected combinational
feedthrough paths from input to output, unwanted
combinational feedback, unpredictable clock-to-output
delays, unnecessary declarations of next-state signals
cluttering the top level of your modules, artificially clumsy
coding styles to assure freedom from unwanted latches -
all these can be yours, merely by following the textbooks'
advice to split your logic into combinational and registers.
I agree that good code depends on your understanding of hardware rather
than any "coding style." Either one- or two-process style will be
fine if you know what you are doing. But the two-process style is
easier to understand and less error-prone, especially for a novice
user.

BTW, "Reuse methodology manual" is not a textbook. It is written by
two veterans in two major EDA companies. I have the second edition and
it indicates the first author is the VP of engineering for design reuse
group of Synopsys and the second author is the director of R&D for IP
in Mentor Graphics.

Mike G.

 
mikegurche@yahoo.com wrote:
... But the two-process style is
easier to understand and less error-prone, especially for a novice
user.
In what ways is a two-process style less error prone? Let's see:

The two-process style is infinitely more prone to latches.

The two-process style is more prone to simulation-synthesis mismatches
(sensitivity list problems).

The two-process style requires twice as many signals to be declared and
managed.

The two-process style simulates slower (more signals, more events, more
processes sensitive to more signals, and fewer processes that can be
combined in simulation), allowing less simulation to be performed and
more errors to go undetected.

The two-process style encourages combinatorial outputs (touted as one
of the prime benefits of 2-process descriptions), which have
less-predictable clock-to-out performance than registered outputs.
Besides, if you really need combinatorial outputs, you can code them
from your synchronous processes if you use variables for the registers.

The two-process style encourages a net-list approach to design, rather
than a functional approach. It is important to know what kind of
hardware gets generated; it is more important to understand the
functionality of that hardware.

Andy
 
Andy wrote:
Too bad the author is a proponent of the ancient style of separate
clocked and combinatorial processes in VHDL. He even uses a third
process for registered outputs.

I think he needs to discover what variables can do for you in a clocked
process.
Really? Which of his arguments do you disagree with?

I always thought of the two-process style as being redundant, but after
reading Dr. Chu's argument, I'm revising my thinking. For one thing,
this style makes it much less disruptive to change a Moore output to a
Mealy and vice versa.
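For illustration, here is a minimal two-process sketch (the entity, state,
and signal names are all invented for the example). Switching the 'done'
output from Moore to Mealy touches only one line of the output logic, and
nothing in the register process:

   library ieee;
   use ieee.std_logic_1164.all;

   entity fsm_sketch is
      port (clk, reset, start : in  std_logic;
            done              : out std_logic);
   end entity;

   architecture two_proc of fsm_sketch is
      type state_type is (idle, busy);
      signal state_reg, state_next : state_type;
   begin
      -- register process: never changes, whatever the outputs do
      process (clk, reset)
      begin
         if reset = '1' then
            state_reg <= idle;
         elsif rising_edge(clk) then
            state_reg <= state_next;
         end if;
      end process;

      -- next-state and output logic
      process (state_reg, start)
      begin
         state_next <= state_reg;  -- defaults cover all paths (no latches)
         done       <= '0';
         case state_reg is
            when idle =>
               if start = '1' then
                  state_next <= busy;
               end if;
            when busy =>
               done       <= '1';   -- Moore: a function of state only
               -- done    <= start; -- Mealy variant: also a function of the
               --                      inputs; only this line changes
               state_next <= idle;
         end case;
      end process;
   end architecture;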

My thanks to S.C. for the reference. Good one.

Tommy
 
Andy wrote:
mikegurche@yahoo.com wrote:
... But the two-process style is
easier to understand and less error-prone, especially for a novice
user.

In what ways is a two-process style less error prone? Let's see:

The two-process style is infinitely more prone to latches.
Not if you assign default values to all signals at the beginning of the
process.
On the other hand, in one-process code every signal assigned within the
clocked process infers an FF, whether you need it or not. A variable within
the clocked process may infer an FF (if used before assigned) or may
not (if assigned before used).
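To make the latch point concrete, a sketch (invented names; assume a, b,
sel and y are std_logic signals in scope, and that only one of the two
alternative processes would exist in a real design):

   -- Latch inferred: y is not assigned when sel = '0', so synthesis
   -- must make y hold its old value through a latch.
   process (a, b, sel)
   begin
      if sel = '1' then
         y <= a and b;
      end if;                      -- no else branch
   end process;

   -- No latch: the default assignment at the top covers every path,
   -- so y is purely combinational.
   process (a, b, sel)
   begin
      y <= '0';                    -- default, overridden below when needed
      if sel = '1' then
         y <= a and b;
      end if;
   end process;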

The two-process style is more prone to simulation-synthesis mismatches
(sensitivity list problems).
Missing a signal in the sensitivity list of a combinational process is bad
practice and has nothing to do with one or two processes.

The two-process style requires twice as many signals to be declared and
managed.
This is true. It is a small price for clarity.

The two-process style simulates slower (more signals, more events, more
processes sensitive to more signals, and fewer processes that can be
combined in simulation), allowing less simulation to be performed and
more errors to go undetected.
The two-process style surely simulates slower. However, since the code
is at the RTL level, not a synthesized cell netlist, it should be tolerable.


The two-process style encourages combinatorial outputs (touted as one
of the prime benefits of 2-process descriptions), which have
less-predictable clock-to-out performance than registered outputs.
Besides, if you really need combinatorial outputs, you can code them
from your synchronous processes if you use variables for the registers.
The clock-to-output delay is only important at the "boundary" of a
relatively large subsystem. Two-process style allows you to add a
buffer anywhere (the buffer can be grouped within the register
process), including on any desired output. Except at the "boundary"
of a large system, an output buffer is not needed within the system if
it is synchronous. Blindly adding output buffers wastes resources
(additional FFs) and penalizes performance (one extra clock of delay on
the output signal).

The two-process style encourages a net-list approach to design, rather
than a functional approach. It is important to know what kind of
hardware gets generated; it is more important to understand the
functionality of that hardware.

I disagree. I feel that it is more important to understand the
hardware structure at the RTL level, particularly for synthesis.

Mike G.

 
mikegurche@yahoo.com wrote:

The two-process style is infinitely more prone to latches.
Not if you assign default values to all signals at the beginning of the
process.
True, but if you forget one, you have an error.

On the other hand, in one-process code every signal assigned within the
clocked process infers an FF, whether you need it or not. A variable within
the clocked process may infer an FF (if used before assigned) or may
not (if assigned before used).
Sounds like using a variable is more flexible. I'm sold.

See rising_v.strobe in http://home.comcast.net/~mike_treseler/rise_count.vhd
for an example of an asynchronous node within a synchronous process.
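The pattern there is roughly as follows (a sketch from memory, not the
actual code at that link; strobe, count and the surrounding declarations
are assumed, with count an unsigned signal per ieee.numeric_std):

   process (clk)
      variable strobe_q : std_ulogic := '0';  -- read before written: register
      variable rising_v : std_ulogic;         -- written before read: stays
   begin                                      -- combinational
      if rising_edge(clk) then
         rising_v := strobe and not strobe_q; -- edge detect, no extra FF
         if rising_v = '1' then
            count <= count + 1;
         end if;
         strobe_q := strobe;                  -- registered copy of strobe
      end if;
   end process;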

The two-process style is more prone to simulation-synthesis mismatches
(sensitivity list problems).
Missing a signal in the sensitivity list of a combinational process is bad
practice and has nothing to do with one or two processes.
I disagree. For one process, the sensitivity list is always the same.

The two-process style requires twice as many signals to be declared and
managed.
This is true. It is a small price for clarity.
It's clear to me that with one process,
I don't need any signals at all.

-- Mike Treseler
 
Mike,

So, what you're saying is that if the (novice) user goes to the extra
trouble of defining default assignments, and correctly managing the
sensitivity list, then it is easier for him to design/code that way?...

The whole point of using variables in clocked processes is that you can
have either behavior you like (registered or combinatorial), in one
process. In fact, the variable itself does not infer combo or
register; each access to it does. The same variable can represent both
via two different references to it. A reference to the variable can
even represent a mux between the combo and registered value. The whole
point is, the synthesizer will generate hardware that behaves exactly
like it simulates, using combinatorial and/or registered logic to do
so.
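A sketch of that point (invented names; assume v is an unsigned variable
per ieee.numeric_std, MAX a constant, and pulse/early_pulse std_logic
signals). The same variable yields the registered value when read before
it is assigned, and the new combinational value when read after:

   process (clk)
      variable v : unsigned(7 downto 0) := (others => '0');
   begin
      if rising_edge(clk) then
         if v = MAX then          -- read before write: last cycle's
            pulse <= '1';         -- (registered) value of v
         else
            pulse <= '0';
         end if;
         v := v + 1;              -- compute the register's next value
         if v = MAX then          -- read after write: combinational use of
            early_pulse <= '1';   -- the new value, so this decode fires one
         else                     -- cycle earlier than 'pulse'
            early_pulse <= '0';
         end if;
      end if;
   end process;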

Even combinatorial RTL code simulates faster than gate-level code (the
ultimate extension of combinatorial code), but that is not the issue.
Simulating RTL that consists almost entirely of clocked processes
approaches cycle-based simulation performance in modern simulators
(that merge processes with same sensitivities). You can simulate many
more corner cases in RTL than gate-level, and many more than that if
you code almost exclusively with clocked processes.

Depending on the target choice and synthesis approach (top-down or
bottom-up), the argument over combinatorial outputs from modules (i.e.
state machines) may be more or less important. The bottom line is
you'll have fewer timing problems if combinatorial paths don't cross
hierarchical boundaries (though some tools and methods minimize those
problems). The argument of a performance hit by incurring a clock
delay on registered outputs is nonsense for outputs decoded from states
(not from inputs). In all such cases, it is possible to recode those
outputs to be registered with the exact same clock cycle performance as
with combinatorial outputs.
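For the record, the recoding trick looks something like this (invented
names, with state_next coming from the usual next-state logic): decode the
output from the next-state value and register it, so it asserts in the same
cycle as a combinational decode of the current state would, but with a
clean, glitch-free clock-to-out path:

   process (clk)
   begin
      if rising_edge(clk) then
         state_reg <= state_next;
         if state_next = done_state then  -- look ahead at the next state
            done <= '1';                  -- registered, yet valid in the
         else                             -- same cycle as a combinational
            done <= '0';                  -- decode of state_reg = done_state
         end if;
      end if;
   end process;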

My point about netlist vs functional is more about level of
abstraction. We aren't going to improve productivity by continuing
to focus on "this is the combo logic, that is the registers". Adopting
single process descriptions using variables for registers and
combinatorial logic allows one to move up the ladder, abstraction-wise.
Of course, you still have to have an idea of what is going to happen at
the gates and flops level, but you needn't focus on it.

Andy
 
Andy,

Thank you for your response and I see your point of functionality and
level of abstraction.

But using a lower level of abstraction (i.e., separating comb logic and
registers) has its advantages. For example, if we want to add a scan chain
(for testing) to a synchronous system after completing the design, we
can simply replace the regular D FFs with dual-input scan FFs in
two-process code. This will be messy for one-process code. Another
example is to use a metastability-hardened D FF for synchronization.

I am not an expert in verification, but I believe the two-process code
helps formal verification as well. It is the guideline presented in
"Principles of Verifiable RTL Design" by Bening and Foster (the
summary is in http://home.comcast.net/~bening/Broch10f.pdf. It's
written for Verilog, but the idea is there). They even go to the
extreme of suggesting that all memory elements should be separately
instantiated. This may be one reason that the "Reuse Methodology
Manual" I mentioned in an earlier message also recommends the two-process
style.

Mike G.

Andy wrote:

[omitted]
 
"Tommy Thorn" <tommy.thorn@gmail.com> wrote in message
news:1153510009.206861.78040@p79g2000cwp.googlegroups.com...
Andy wrote:
Too bad the author is a proponent of the ancient style of separate
clocked and combinatorial processes in VHDL. He even uses a third
process for registered outputs.

I think he needs to discover what variables can do for you in a clocked
process.

Really? Which of his arguments do you disagree with?

I always thought of the two-process style as being redundant, but after
reading Dr. Chu's argument, I'm revising my thinking. For one thing,
this style makes it much less disruptive to change a Moore output to a
Mealy and vice versa.

My thanks to S.C. for the reference. Good one.
But in practice one doesn't much care if any outputs are 'Mealy' or 'Moore'.
What one has is a function that needs to be implemented within specific area
(or logic resource) constraints and performance (i.e. clock cycle, Tpd, Tsu,
Tco) constraints.

Breaking what can be accomplished in one process into two (or more)
logically equivalent processes should be considered for code clarity which
can aid in support and maintenance of the code during its lifetime as well
as for potential design reuse (although realistically re-use of any single
process is probably pretty low). Re-use happens more often at the
entity/architecture level, but the 'copy/paste/modify' type of re-use
probably happens more at the process level when it does happen.

Breaking a process into two just to have a combinatorial process to describe
the 'next' state and a separate process to clock that 'next' state into the
'current' state has no particular value when using design support and
maintenance cost as a metric of 'value'. Since the different methods
produce the same final logic, there is no functional or performance
advantage to either approach. On the other hand, there are definite drawbacks to
implementing combinatorial logic in a VHDL process. Two of these are
- Introduction of 'unintended' latches
- Missing signals in the sensitivity list that result in different
simulation versus synthesis results

Both of these drawbacks have manual methods that can be used to try to
minimize them from happening but the bottom line is that extra effort
(a.k.a. cost or negative value) must be incurred to do this....all of which
is avoided by simply not using the VHDL process to implement combinatorial
logic (i.e. the 'next' state computation).
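For example (a sketch with invented idle/busy names), the 'next' state can
be written as a concurrent conditional assignment; there is no sensitivity
list to maintain, and every branch must produce a value, so no latch can
appear:

   state_next <= busy      when state_reg = idle and start = '1' else
                 idle      when state_reg = busy and stop  = '1' else
                 state_reg;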

So as far as the VHDL language is concerned, there are real costs that will
be incurred every time the two process method is used but no real value
add....or at least that's my 2 cents.....

KJ
 
KJ wrote:

Breaking what can be accomplished in one process into two (or more)
logically equivalent processes should be considered for code clarity
I find the idea of describing both the "Q side" state
and the "D side" next state more confusing than clarifying.

which
can aid in support and maintenance of the code during its lifetime as well
as for potential design reuse (although realistically re-use of any single
process is probably pretty low). Re-use happens more often at the
entity/architecture level,
....one reason that I favor single process entities.

but the 'copy/paste/modify' type of re-use
probably happens more at the process level when it does happen.
....for this type of reuse, I use procedures.
This provides some modularity and built-in documentation.
Even if the operation is not reused, well-named
procedures in process scope make code easier to read and maintain.
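A sketch of what that looks like (invented names; assume enable is a
std_logic signal and ieee.numeric_std is in use): a procedure declared in
the process declarative region, operating on the process's own variable:

   process (clk)
      variable count_v : unsigned(7 downto 0) := (others => '0');
      -- a well-named procedure in process scope documents the operation
      procedure increment_count is
      begin
         count_v := count_v + 1;
      end procedure;
   begin
      if rising_edge(clk) then
         if enable = '1' then
            increment_count;
         end if;
      end if;
   end process;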


[omitted]
So as far as the VHDL language is concerned, there are real costs that will
be incurred every time the two process method is used but no real value
add....or at least that's my 2 cents.....
Well said. Thanks for the posting.

-- Mike Treseler
 
Breaking what can be accomplished in one process into two (or more)
logically equivalent processes should be considered for code clarity

I find the idea of describing both the "Q side" state
and the "D side" next state more confusing than clarifying.
I find the 'D/Q' side coding less useful too. I was referring more to
having multiple clocked processes rather than the 'two process' discussion
that is going on here. Even though the multiple clocked processes that I
mean can all be logically thought of as one process, I tend to break them
up into multiple clocked processes simply to group together somewhat
related functions. Somewhat like the way a story/chapter is broken up into
multiple paragraphs that express a particular thought.

KJ
 
But using a lower level of abstraction (i.e., separating comb logic and
registers) has its advantages. For example, if we want to add a scan chain
(for testing) to a synchronous system after completing the design, we
can simply replace the regular D FFs with dual-input scan FFs in
two-process code. This will be messy for one-process code.
Maybe I'm missing something but in the one process code you would add the
following...

   if (Scan_Chain_Active = '1') then
      -- Assign the clocked signals here per how it's needed for the scan chain
   else
      -- Assign the clocked signals here per how the design needs it to function
      -- (i.e. this was what was here prior to adding scan)
   end if;

I don't see that as any particular burden or how that would be any different
in the clocked process of the two process approach. It certainly isn't any
messier.

Another example is to use a metastability-hardened D FF for
synchronization.
Kinda lost me on what you mean or how the two process approach would
have any advantage over the one process approach.

They even go to the
extreme of suggesting that all memory elements should be separately
instantiated. This may be one reason that the "Reuse Methodology
Manual" I mentioned in an earlier message also recommends the two-process
style.
And I think that is the crux of the debate between the 'one process' versus
'two process' camps, the 'two process' folks can't explicitly state any real
advantage to their approach even though they recognize real costs to doing
so....to be blunt, you're paying a price and getting nothing in return. In
this particular case, can you explain what benefit is received for
separately instantiating each memory element?

In reality, since either approach yields the exact same function and
performance the debate centers solely on productivity....which method, can
one be more productive with and why? Productivity is economics, and
measuring productivity means using currency ($$, Euros, etc.) as your basis
for comparison. You can't really measure productivity without also bringing
up which tools are being used either since use of a 'better' tool may
involve being somewhat less efficient on one end with the overall goal that
the entire process is more efficient.

Comparing the two process approach to the one process approach, the basic
difference is that you have several more lines of code to write:
- Declarations for the additional set of signals (i.e. the 'D' and 'Q'
instead of just the 'Q')
- Adding the 'default' values for each of the 'D' equations so as to not
create unintended latches.
- The 'Q <= D;' code in the second (i.e. clocked) process of the two process
approach.
- Process sensitivity list maintenance (in VHDL)

Now take design creation and debug as an activity to measure cost. For
every line of code written, one can assume that there is a certain
probability that it will be wrong. For a given designer using a given
language using a given coding technique one will likely tend to have some
particular error rate measured in errors per line of code. Since errors
need to be fixed at some point you incur some cost to doing so. If nothing
else, the two process approach encourages more lines of code and therefore a
higher probability of having an error than the one process approach. So what
does the two process approach bring to the table that would lower the error
rate that might somehow offset the increased lines of code?

One can take other tasks like scan chain insertion as you did, or test
creation, life cycle support etc. whatever tasks you can think of and try to
compare costs of those tasks using the two approaches and go through the
same process to come up with a $$ measure for each task. Total up the tasks
and see which approach is better.

Now if someone in the two process camp can walk through a single task and
show how even that one task is more efficient (i.e. productive) than the one
process approach, you might be able to start convincing readers. Until
then....well, one can always dream.

KJ
 
The two-process style is infinitely more prone to latches.
Not if you assign default values to all signals at the beginning of the
process.
On the other hand, in one-process code every signal assigned within the
clocked process infers an FF, whether you need it or not. A variable within
the clocked process may infer an FF (if used before assigned) or may
not (if assigned before used).
Translation: More work to accomplish the same thing with no benefit.

The two-process style is more prone to simulation-synthesis mismatches
(sensitivity list problems).
Missing a signal in the sensitivity list of a combinational process is bad
practice and has nothing to do with one or two processes.
Again, more work is required (sensitivity list maintenance) to
accomplish the same thing with no benefit.

The two-process style requires twice as many signals to be declared and
managed.
This is true. It is a small price for clarity.

Once again more work is required (the additional signal declarations
result in more lines of code) to accomplish the same thing with no
benefit.

As an aside, the signal declaration price can be minimized using the
approach outlined below. I don't really recommend it because it makes
debugging in a simulator (or perhaps just Modelsim) much tougher and
you still have more lines of code (translation: more chances for
error) than with the one process approach (using either variables or
concurrent signal assigns for the combinatorial outputs).

Instead of having discrete signals (i.e. A, B, etc.) and then doubling
that number to have a 'D' and a 'Q' version like this...

signal A_d, A_q: std_ulogic;
signal B_d, B_q: std_ulogic;

Instead have the following types....

type t_THE_SIGS is record --***
   A: std_ulogic;
   B: std_ulogic;
end record; --***
type t_PRE_POST_FF is record --***
   D: t_THE_SIGS; --***
   Q: t_THE_SIGS; --***
end record; --***
signal All_My_Sigs: t_PRE_POST_FF; --***

Then the 'unclocked' process in the two process model calculates
the signal All_My_Sigs.D; the 'clocked' process is simply

   process(Clock)
   begin
      if rising_edge(Clock) then
         All_My_Sigs.Q <= All_My_Sigs.D; --***
      end if;
   end process;

This makes the overhead related to handling double the number of
signals exactly 8 lines of code (the ones that I marked with
"--***") regardless of how many signals we're talking about. 8 extra
is still 8 extra, not 8 less, so it's still more work. There are some
things that look nice about code written this way, but actually using
this approach has a few drawbacks that are killers:
- Can be cumbersome to debug (try it with your fav simulator and you'll
probably see why)
- It generally can not be used at the top level of the design at all so
there will need to be additional code to connect up the top level
ports.
- Even on internal entities it can be difficult to use because you'll
generally need to split things further into 'in', 'out' and 'inout' for
each 'D' and 'Q'. Certain special cases might work but in general it
will mean additional code just to connect up a higher level entity to a
lower level.

I believe the constant of 8 extra lines of code is about the best that
one could hope for, so I've minimized one aspect of the drawbacks only
to have it pop out as yet more code and cumbersome quirks. As a
general rule, I don't think having two sets of signals where only one
is actually needed is in any way a 'small price' to pay if this is code
written by a human. If it's machine-generated code and it confers some
advantage to the tool to generate code in that manner, then I have no
trouble with that.

The two-process style simulates slower (more signals, more events, more
processes sensitive to more signals, and fewer processes that can be
combined in simulation), allowing less simulation to be performed and
more errors to go undetected.
The two-process style surely simulates slower. However, since the code
is at the RTL level, not a synthesized cell netlist, it should be tolerable.

So after putting in the extra work....the simulation runs slower....but
probably not intolerably slower....not exactly a benefit here either.

KJ
 
Although scan chains can be inserted at the RTL source level, they are
usually best handled at the gate level, and are thus irrelevant to RTL
coding styles. Besides, as KJ has pointed out, the same structure that
infers the scan chain in a separate clocked process can be applied in a
combined process, with little or no impact to the functional code (i.e.
surround it with an if/then/else). Choice of register types
(metastability-hardened, etc.) can be controlled by constraints/attributes
anyway, no matter how you infer them.

Any verification guide written for Verilog will naturally have a bias
towards the flexibility of two-process descriptions, since Verilog
lacks a safe way to perform blocking assignments (handled admirably
with variables in VHDL), which restore the "lost" flexibility in single
process descriptions.

The reuse guide has its foundations in tool capabilities (or
limitations) that are more than 10 years old! Separate combinatorial
and clocked descriptions were the only method available in the first
synthesis tools, when registers could not be inferred from RTL anyway,
and had to be instantiated. Thus, the combinatorial logic had to be
separated out, and it was a smaller leap for the tools (and their
users) to infer storage from simple Q <= D clocked processes,
still separate from the combinatorial logic. Mainstream synthesis tools
have progressed far beyond that (OTOH, the last time I looked, Synopsys
still could not infer RAM from RTL arrays!)

I prefer not to limit my VHDL descriptions to the manner in which I
would write them in Verilog.

Andy
 
KJ wrote:

[omitted]
After reading the arguments here, I have started using a mixed
approach, but I still use the two process model for state machines and
other complex logic. There are just too many times when I don't want to
wait until the next clock for an output to take effect.

At work, we have a hard requirement (as in, it won't pass a peer
review) to write in the two process model. Another hard requirement
(that I agree with) is that there should only be one clocked process
per clock. The guy that came up with the requirement predates HDLs in
general - and I'm sure there was a good reason for it at one time.

However, for my home projects, I tend to mix the one and two-process
models based on what is most convenient.

Having done so, I don't see that the two-process model is that terribly
inconvenient. I simply place a default condition at the beginning of
the process, and override the default as needed. For most processes,
this adds maybe 1-10 "extra" lines.

Perhaps it's because I was taught in the two-process model, but I find
it easier to understand what is going on when I use it, so anything
that requires me to think, I use a separate combinatorial process for.
Simple logic, like counters, pipeline registers, etc. goes into the
appropriate clocked process.

For me, this mixed approach works pretty well.
 
