inter-dependent assignments in process

kristoff
Hi all,



I have a question on a very simple piece of code:

I have been looking through the VHDL documentation I have here, but
have not found an answer.




This is part of a process:

if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;

My question is related to the fact that there are two signal assignments
here, and these two signals are inter-dependent.


Here there are two actions:
- store data from the spi_mosi input-pin to the correct position of the
buffer
- move down the buffer-position pointer

The order in which these two actions are executed does matter.

If I understand this correctly, as this is part of a process, the
general rule is that the assignment is actually done at the END of the
process!!!



So, I tested this and ... in this case, it works nicely. ... But ... why?

How does the VHDL compiler know which action comes first?


At first, I thought that this was related to the order in which the two
actions are written in the process (statements inside a process are
processed in sequence, no?), but when I move the "move down
buffer-position pointer" part to the start of this process, the code
still works. (so this part was executed LAST, not first!)


What exactly is going on here?
Is this just random behaviour in how Quartus happens to process this
particular example, or is this scenario actually described in the VHDL
specification?

(As I said, I have been looking through the documentation I have, and
have not found any information about this.)



Now, I know there are other ways to code this that are more explicit
about the order in which these two actions are executed:
- make the state machine more complex
- use a temporary variable.


What would be "best practice" to code this "the proper way"?



Cheerio!

Kristoff
 
On Thursday, September 29, 2016 at 7:45:54 PM UTC-4, kristoff wrote:
This is part of a process:

if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;




If I understand this correctly, as this is part of a process, the
general rule is that the assignment is actually done at the END of the
process!!!

Almost. What happens inside the process is that the signals are 'scheduled' to be updated. At the end of the process, it suspends. Once *all* the processes that are active in the entire simulation have run and suspended, the simulator takes the list of all signals that have been scheduled to be updated and actually updates them. Scheduling to be updated and updating are two very different things.

This behavior is different than for variables. For a variable assignment, the variable is updated immediately, there is no scheduling.
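
A made-up sketch of the difference (none of these names are from your code; count_sig, dbg_sig and dbg_var are assumed to be integer/appropriate signals declared elsewhere, count_var is a process variable):

process (clk)
    variable count_var : integer range 0 to 7 := 0;
begin
    if rising_edge(clk) then
        count_sig <= count_sig + 1;   -- only *scheduled*; count_sig keeps its
                                      -- old value for the rest of this run
        count_var := count_var + 1;   -- updated immediately

        dbg_sig <= count_sig;         -- reads the OLD count_sig
        dbg_var <= count_var;         -- reads the NEW count_var
    end if;
end process;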

So, I tested this and ... in this case, it works nicely. ... But ... why?

How does the VHDL compiler know which action comes first?

The VHDL LRM defines this signal update scheduling.

At first, I thought that this was related to the order in which the two
actions are written in the process (statements inside a process are
processed in sequence, no?),

Correct.

but when I move the "move down
buffer-position pointer" part to the start of this process, the code
still works. (so this part was executed LAST, not first!)

'Executed' might be the wrong word to use here. When the signal assignment statement is 'executed' it is not updating the signal, it is just scheduling it to be updated. If there had been multiple assignments to a particular signal, then yes the order would matter since the scheduling that occurs for the last assignment would essentially override the scheduling that already occurred for the first assignment. In your process, there is only one assignment statement that will be executed no matter how one goes through the process.
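
For example (not your code, just the common "default assignment" idiom), if a process did make two assignments to the same signal:

process (clk)
begin
    if rising_edge(clk) then
        ack <= '0';         -- scheduled first ...
        if req = '1' then
            ack <= '1';     -- ... but this overrides it, so when req = '1'
                            -- only the '1' is ever actually assigned
        end if;
    end if;
end process;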

(As I said, I have been looking through the documentation I have, and
have not found any information about this.)

The LRM can be difficult to read.

Now, I know there are other ways to code this that are more explicit
about the order in which these two actions are executed:
- make the state machine more complex
- use a temporary variable.

Not sure what you're trying to get at here. The order doesn't matter in your example. You proved it yourself by noting the behavior was the same even when you shuffled the order.

What would be "best practice" to code this "the proper way"?

There is nothing objectionable about the process you posted.

Kevin Jennings
 
On 9/29/2016 9:49 PM, KJ wrote:
On Thursday, September 29, 2016 at 7:45:54 PM UTC-4, kristoff wrote:

This is part of a process:

if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;




If I understand this correctly, as this is part of a process, the
general rule is that the assignment is actually done at the END of
the process!!!

Almost. What happens inside the process is that the signals are
'scheduled' to be updated. At the end of the process, it suspends.
Once *all* the processes that are active in the entire simulation
have run and suspended, the simulator takes the list of all signals
that have been scheduled to be updated and actually updates them.
Scheduling to be updated and updating are two very different things.

Your code segment does not indicate if these items are signals or
variables, but from your description of their behavior I expect they are
signals. I would just add to what KJ said by saying the terminology for
this delay of assignment is delta delay. Delta delays are like an
infinitely small increment of time, but no actual time elapses. When a
process executes and updates a signal, the update is scheduled for now +
1 delta delay. The same thing happens when a concurrent statement is
executed, the assignment is scheduled for 1 delta delay later.

So a process runs at time 100 ns and does a signal assignment scheduled
for 100 ns + 1 delta delay. This signal is used in a concurrent
assignment and that signal update is scheduled for 100 ns + 2 delta
delays. All of these events look like they happen at 100 ns but the
delta delays preserve the proper order of events to get the right result.
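
As a tiny made-up illustration of that chain (a, b and c all being signals, with b and c driven by concurrent assignments):

b <= a;   -- an event on a at 100 ns schedules b for 100 ns + 1 delta
c <= b;   -- which in turn schedules c for 100 ns + 2 deltas
-- The waveform shows all three changing "at" 100 ns, but the delta
-- ordering guarantees c ends up tracking the new value of a.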

In a process multiple assignments can be done to the same signal. The
last one to be executed is the one that actually is assigned to the
signal 1 delta delay later.

You can create problems by running clocks through signal assignments.
If process one is clocked by clk1, clk2 is assigned the value of clk1,
and clk2 is used to clock process two, the two processes will behave as
if they run at different times, even though the simulation shows the
changes at the same time. Example:

clk2 <= clk1;

process_one (clk1) begin
if (rising_edge(clk1) then
A <= input;
end if;
end process;

process_two (clk1) begin
if (rising_edge(clk2) then
B <= A;
end if;
end process;

Start with A = '0' and input = '1' when clk1 has a positive edge at time
100 ns. After the dust has settled, A will be '1' and B will also be '1'.
That is because A is assigned a '1' at time 100 ns + 1 delta delay.
Clk2 is also scheduled for a rising edge at 100 ns + 1 delta delay, so
process_two is executed after A has been updated. B is therefore
assigned a '1' rather than the '0' which was the state of A prior
to the rising edge of clk1.

I hope that makes sense. There are times when you need to understand
this to prevent this sort of error.


This behavior is different than for variables. For a variable
assignment, the variable is updated immediately, there is no
scheduling.

Yes, variables in VHDL work like variables in a typical sequential
programming language like C. A statement is executed entirely before
the next statement is executed. This is the opposite of how signals
work. So if variable A is assigned a value and later in the process A
is used, it will have the updated value. If A is a signal, then anywhere
it is used in the process before the process suspends, the old value
will be used. Once the process suspends for any reason (it exits at the
end or encounters a wait statement) it won't run again until at least
one delta delay later.

I'm a little unclear on how events scheduled for the same delta delay
are ordered for execution. Maybe someone else can explain that. I
don't know if they are ordered by the order they were executed (the
simulator actually can only run one process at a time, even if the
processes are supposed to run on the same delta delay) or in some other
sequence.


So, I tested this and ... in this case, it works nicely. ... But
... why?

How does the VHDL compiler know which action comes first?

The VHDL LRM defines this signal update scheduling.


At first, I thought that this was related to the order in which the
two actions are written in the process (statements inside a process
are processed in sequence, no?),

Correct.

but when I move the "move down buffer-position pointer" part to the
start of this process, the code still works. (so this part was
executed LAST, not first!)

'Executed' might be the wrong word to use here. When the signal
assignment statement is 'executed' it is not updating the signal, it
is just scheduling it to be updated. If there had been multiple
assignments to a particular signal, then yes the order would matter
since the scheduling that occurs for the last assignment would
essentially override the scheduling that already occurred for the
first assignment. In your process, there is only one assignment
statement that will be executed no matter how one goes through the
process.

(As I said, I have been looking through the documentation I have, and
have not found any information about this.)


The LRM can be difficult to read.

+1 VERY!

Part of the problem is how it is written, to be precise and unambiguous
rather than easy to read. Partly it is because many times to understand
one thing, you have to go back and learn about something else which
means you have to learn another thing and so on.


Now, I know there are other ways to code this that are more explicit
about the order in which these two actions are executed:
- make the state machine more complex
- use a temporary variable.


Not sure what you're trying to get at here. The order doesn't matter
in your example. You proved it yourself by noting the behavior was
the same even when you shuffled the order.


What would be "best practice" to code this "the proper way"?

There is nothing objectionable about the process you posted.

I prefer to order statements so it is clear and easy to read. The order
you have written them makes sense whether they are executed in sequence
as variables, or pending a delta delay, so unambiguous even if you don't
know whether they are signals or variables. There are many times I
rewrite my code a bit to make it easier to read, but mostly I use a
style that is not hard to read in the first place.

You get used to using signals pretty quickly once you get the concept of
the delta delay.

BTW, Verilog doesn't have delta delays, so they talk about blocking vs.
non-blocking assignments. When I code Verilog I just use simple
templates so I don't have to remember what "blocking" means exactly.
I'm not so good with names if the meaning isn't clear from the name.
I've never memorized what is being blocked and what that implies. I
just write the code in a way that I know works. lol

--

Rick C
 
On 9/30/2016 12:12 AM, rickman wrote:
[...]

Oops, typo...

process_two (clk1) begin
if (rising_edge(clk2) then
B <= A;
end if;
end process;

Should be...

process_two (clk2) begin
if (rising_edge(clk2) then
B <= A;
end if;
end process;

--

Rick C
 
On 9/30/2016 12:12 AM, rickman wrote:
[...]

clk2 <= clk1;

process_one (clk1) begin
if (rising_edge(clk1) then
A <= input;
end if;
end process;

process_two (clk1) begin
if (rising_edge(clk2) then
B <= A;
end if;
end process;

It is just not my night. I also left off a closing parenthesis on the
IF statements. :(

The example should be...

clk2 <= clk1;

process_one (clk1) begin
if (rising_edge(clk1)) then
A <= input;
end if;
end process;

process_two (clk2) begin
if (rising_edge(clk2)) then
B <= A;
end if;
end process;

Each of the data items is a signal.

--

Rick C
 
On Friday, September 30, 2016 at 12:12:52 AM UTC-4, rickman wrote:
On 9/29/2016 9:49 PM, KJ wrote:
I'm a little unclear on how events scheduled for the same delta delay
are ordered for execution. Maybe someone else can explain that. I
don't know if they are ordered by the order they were executed (the
simulator actually can only run one process at a time, even if the
processes are supposed to run on the same delta delay) or in some other
sequence.

Events scheduled for the same delta delay are not ordered for execution. If a signal has more than one event scheduled (for example being driven by more than one process), and it is a resolved signal, then the resolved function for that signal's data type is called. The inputs to that resolved function are the various events that have been scheduled for that signal, the output is a single value. Once all of the signal events have been resolved, every signal has at most exactly one scheduled event. At that point, all of the signals are then updated and time is advanced, typically to the next delta cycle.
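
A made-up sketch of that situation, using std_logic (a resolved type) for a signal with two concurrent drivers (names are only for illustration):

signal bus_line : std_logic;     -- resolved: multiple drivers are legal

drv_a : bus_line <= '0';         -- one driver
drv_b : bus_line <= '1';         -- a second driver: the resolution function
                                 -- is called with both values and returns 'X'
-- Declared as std_ulogic (unresolved) instead, having two drivers would
-- be reported as an error.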

Kevin Jennings
 
On 30.09.2016 06:12, rickman wrote:
On 9/29/2016 9:49 PM, KJ wrote:
On Thursday, September 29, 2016 at 7:45:54 PM UTC-4, kristoff wrote:

This is part of a process:

if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;
[...]

Your code segment does not indicate if these items are signals or
variables, but from your description of their behavior I expect they are
signals.

Well obviously they are signals because the assignment operator is "<=",
variables would have been assigned using ":=".

Nicolas
 
On 9/30/2016 8:12 AM, KJ wrote:
On Friday, September 30, 2016 at 12:12:52 AM UTC-4, rickman wrote:
On 9/29/2016 9:49 PM, KJ wrote: I'm a little unclear on how events
scheduled for the same delta delay are ordered for execution.
Maybe someone else can explain that. I don't know if they are
ordered by the order they were executed (the simulator actually can
only run one process at a time, even if the processes are supposed
to run on the same delta delay) or in some other sequence.


Events scheduled for the same delta delay are not ordered for
execution. If a signal has more than one event scheduled (for
example being driven by more than one process), and it is a resolved
signal, then the resolved function for that signal's data type is
called. The inputs to that resolved function are the various events
that have been scheduled for that signal, the output is a single
value. Once all of the signal events have been resolved, every
signal has at most exactly one scheduled event. At that point, all
of the signals are then updated and time is advanced, typically to
the next delta cycle.

Thanks for that explanation. I think I was mixing up the execution of
statements and the assignment of values. So consider the example where
clk2 is assigned a value from clk1; the rising edge of clk2 is 1 delta
delay after the rising edge of clk1. So the signals in the clk1
triggered process are assigned a value at time delta 1, but then the
clk2 process runs *after* the signals are updated, correct? If it is
not done this way where all the signals are updated prior to any code
being executed, the ordering of events can change the outcome.

So do I have this correct now?

--

Rick C
 
On Friday, September 30, 2016 at 6:05:26 PM UTC-4, rickman wrote:
So consider the example where
clk2 is assigned a value from clk1; the rising edge of clk2 is 1 delta
delay after the rising edge of clk1. So the signals in the clk1
triggered process are assigned a value at time delta 1, but then the
clk2 process runs *after* the signals are updated, correct?

Correct. Processes that have clk2 in the sensitivity list (or are waiting for a clk2'event) will run after all the signals that have been affected by the clk1'event have been updated. Updating signals is the last thing that happens prior to time advancing.

Kevin
 
Hi Kevin, (and Rick who also replied)




On 30-09-16 03:49, KJ wrote:
This is part of a process:
if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;

If I understand this correctly, as this is part of a process, the
general rule is that the assignment is actually done at the END of the
process!!!

Almost. What happens inside the process is that the signals are
'scheduled' to be updated. At the end of the process, it suspends.
Once *all* the processes that are active in the entire simulation have
run and suspended, the simulator takes the list of all signals that
have been scheduled to be updated and actually updates them.
Scheduling to be updated and updating are two very different things.

Thanks for the very clear explanation!
It really helped me to understand things.


How does the VHDL compiler know which action comes first?
The VHDL LRM defines this signal update scheduling.

(...)
The LRM can be difficult to read.
(...)

Well, a simple summary of this particular process would be nice :)


In another message, you talked about a "resolved function".

Does this mean that -in my case-
- "bitcounter <= bitcounter - 1" is a resolved function (as it depends on
itself), and
- "buffer(bitcounter) <= spi_mosi" is not a resolved function, as
"buffer(...)" depends on another signal (bitcounter in this case)?


So can I assume that resolved functions are scheduled to be executed first?


Or is there something else in play here?





Cheerio! Kr. Bonne.
 
Rickman,


On 30-09-16 06:12, rickman wrote:

What would be "best practice" to code this "the proper way"?

There is nothing objectionable about the process you posted.

I prefer to order statements so it is clear and easy to read. The order
you have written them makes sense whether they are executed in sequence
as variables, or pending a delta delay, so unambiguous even if you don't
know whether they are signals or variables. There are many times I
rewrite my code a bit to make it easier to read, but mostly I use a
style that is not hard to read in the first place.

That was indeed my point.

Some of my code ends up on GitHub, and I also look at other people's
code from public sources. I like code to be self-explanatory.



In fact, there are three layers here:
- the algorithm you want to implement
- the source-code
- the actual execution.


On a microcontroller, it is pretty simple: if the source-code matches
the algorithm, the actual execution of the code will match that too.
In VHDL, that's not the case.

Now, you can say "ok, that's what VHDL is like, learn it!".



But I just wonder whether -say- using temporary variables (e.g. in my
example, to store the value of the index) is not a better idea.
That way, you can write VHDL code that *always* executes in the same
order as the source is written.
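
For example, something like this (just a sketch of what I mean, untested; I call the buffer "rx_buffer" here since "buffer" is a reserved word, and I assume the surrounding clocked process):

process (clk)
    -- bitcounter is now a variable, so it updates immediately and the
    -- source order is exactly the execution order
    variable bitcounter : integer range 0 to 7 := 7;
begin
    if rising_edge(clk) then
        if s_sclk_edgeup = '1' then
            -- store the incoming bit at the current position ...
            rx_buffer(bitcounter) <= spi_mosi;
            -- ... and only then move the position pointer
            if bitcounter > 0 then
                bitcounter := bitcounter - 1;
            else
                bitcounter := 7;
                -- last bit -> go process received data
                state <= PROCESS_DATA;
            end if;
        end if;
    end if;
end process;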





BTW, Verilog doesn't have delta delays, so they talk about blocking vs.
non-blocking assignments. When I code Verilog I just use simple
templates so I don't have to remember what "blocking" means exactly. I'm
not so good with names if the meaning isn't clear from the name. I've
never memorized what is being blocked and what that implies. I just
write the code in a way that I know works. lol
Interesting point. Thanks!



Kr. Bonne.
 
On 10/1/2016 12:53 AM, KJ wrote:
On Friday, September 30, 2016 at 6:05:26 PM UTC-4, rickman wrote:
So consider the example where clk2 is assigned a value from clk1;
the rising edge of clk2 is 1 delta delay after the rising edge of
clk1. So the signals in the clk1 triggered process are assigned a
value at time delta 1, but then the clk2 process runs *after* the
signals are updated, correct?

Correct. Processes that have clk2 in the sensitivity list (or are
waiting for a clk2'event) will run after all the signals that have
been affected by the clk1'event have been updated. Updating signals
is the last thing that happens prior to time advancing.

I think I know what you mean, but you said that backwards. When a
statement is evaluated and an assignment is scheduled, the update for
the signal is scheduled for the *next* delta cycle. So the update
should be the *first* thing to happen on that t+1 delta cycle.

Or maybe when you say "prior to time advancing", you mean actual
clock time? I had the impression delta time was invented so it could be
treated the same as clock time: every event is scheduled for a time, and
when that time arrives the event happens. The order of assignments is
not important because the values to be assigned to the signals are
already determined. Once all the assignment events are completed, the
various processes are checked to see what events trigger a sensitivity
list and those processes run, possibly scheduling new assignments at
future times.

The assignments happen a bit like a master-slave FF. At this time the
value to be assigned is calculated (the master FF is latched) and at a
later time the assignment is made to the signal (the slave FF is latched).

--

Rick C
 
On 10/1/2016 8:15 AM, kristoff wrote:
Rickman,


On 30-09-16 06:12, rickman wrote:

What would be "best practice" to code this "the proper way"?

There is nothing objectionable about the process you posted.

I prefer to order statements so it is clear and easy to read. The order
you have written them makes sense whether they are executed in sequence
as variables, or pending a delta delay, so unambiguous even if you don't
know whether they are signals or variables. There are many times I
rewrite my code a bit to make it easier to read, but mostly I use a
style that is not hard to read in the first place.

That was indeed my point.

Some of my code ends up on GitHub, and I also look at other people's
code from public sources. I like code to be self-explanatory.



In fact, there are three layers here:
- the algorithm you want to implement
- the source-code
- the actual execution.


On a microcontroller, it is pretty simple: if the source-code matches
the algorithm, the actual execution of the code will match that too.
In VHDL, that's not the case.

Now, you can say "ok, that's what VHDL is like, learn it!".

I don't follow this. Are you saying that because signals work the way
they do, this means the source code does not match the algorithm???


But I just wonder whether -say- using temporary variables (e.g. in my
example, to store the value of the index) is not a better idea.
That way, you can write VHDL code that *always* executes in the same
order as the source is written.

I think this is a holdover from thinking in terms of sequential code,
as on MCUs. VHDL works like it does because it supports parallelism at a
fundamental level. Of course this requires thinking about coding in a
different way. Signal assignments happen in a consistent way. You just
need to understand how they happen and not think of them like sequential
code. In other words, "that's VHDL, learn it!" You are going to have a
hard time if you don't.


BTW, Verilog doesn't have delta delays, so they talk about blocking vs.
non-blocking assignments. When I code Verilog I just use simple
templates so I don't have to remember what "blocking" means exactly. I'm
not so good with names if the meaning isn't clear from the name. I've
never memorized what is being blocked and what that implies. I just
write the code in a way that I know works. lol
Interesting point. Thanks!



Kr. Bonne.

--

Rick C
 
On 10/1/2016 7:22 AM, kristoff wrote:
Hi Kevin, (and Rick who also replied)




On 30-09-16 03:49, KJ wrote:
This is part of a process:
if (s_sclk_edgeup = '1') then
    -- on rising edge of spi-clock, read data, and determine
    -- next bit position

    buffer(bitcounter) <= spi_mosi;

    -- determine new bitposition
    if (bitcounter > 0) then
        bitcounter <= bitcounter - 1;
    else
        bitcounter <= 7;
        -- last bit -> go process received data
        state <= PROCESS_DATA;
    end if;
end if;

If I understand this correctly, as this is part of a process, the
general rule is that the assignment is actually done at the END of the
process!!!

Almost. What happens inside the process is that the signals are
'scheduled' to be updated. At the end of the process, it suspends.
Once *all* the processes that are active in the entire simulation have
run and suspended, the simulator takes the list of all signals that
have been scheduled to be updated and actually updates them.
Scheduling to be updated and updating are two very different things.

Thanks for the very clear explanation!
It really helped me to understand things.


How does the VHDL compiler know which action comes first?
The VHDL LRM defines this signal update scheduling.

(...)
The LRM can be difficult to read.
(...)

Well, a simple summary of this particular process would be nice :)


In another message, you talked about a "resolved function".

Does this mean that -in my case-
- "bitcounter <= bitcounter - 1" is a resolved function (as it depends on
itself), and
- "buffer(bitcounter) <= spi_mosi" is not a resolved function, as
"buffer(...)" depends on another signal (bitcounter in this case)?


So can I assume that resolved functions are scheduled to be executed first?


Or is there something else in play here?

Not exactly. A resolved function relates to signals that have a
definition of what happens when multiple drivers are on the same signal.
An unresolved type does not, so multiple drivers create an error.
The correct term would be a resolved data type or a resolution
function which is the algorithm to define what happens to resolve the
conflict. So a resolved data type will have a resolution function.

Otherwise resolved data types are exactly the same as any other data
types and execute the same.

buffer(bitcounter) <= spi_mosi is resolved if buffer is a resolved data
type. It only matters if you assign to this signal in two processes...

process A (...) begin
...
buffer(bitcounter) <= '0';
...
end process;

process B (...) begin
...
buffer(bitcounter) <= spi_mosi;
...
end process;

They do things similar to this in Verilog with an "initial" block and a
usage block. In VHDL this creates two drivers for the signal buffer.
Since Hi-Z buffers are very seldom used these days (at least inside
FPGAs) this is usually an error. I would use unresolved data types to
immediately flag this as an error, but some of the newer data types
which are useful for math and such are defined in terms of std_logic
which is a resolved type. I don't know of any way to make the
signed or unsigned data types unresolved since they are based on
std_logic and not std_ulogic (the unresolved type).

--

Rick C
 
