creating program

> Processes run in parallel in both VHDL and in software.

That can only be done if:

a) You have multiple cores / processors
b) You have an OS that supports it

If either of those is not met, it all runs sequentially.

In software your
process may be *hugely* complex doing many, many things in each one.
Partly that's because there is a huge overhead for setting up and
managing each process in software. In VHDL a process has no overhead in
the implementation. I'm not sure how complex process management is in
simulation. I've not heard it is a problem though. The speed problems in
simulation often come from the data structures. Integers are faster than
std_logic, et al.

I don't expect simulation to run at full speed. Similar tools in software usually have overhead too.

I've used Altera's Max Plus II, that only had waveforms. Hooking a
real simulator up with Quartus failed for me. I might try the Xilinx
tools, see if I have better luck with them.

I've worked with Max +II. Debugging in VHDL is built in. ASSERT is a
great tool. Waveforms are for user exploration when a bug is found.

Hmm... I never saw it. On the other hand: that was 15 years ago.

I was trying to point out there is a difference between getting
something to work, and actually understanding it. I failed at that
:)

I've seen that many times. I've even done it when required. Sometimes
you don't have the time to "understand" something if you just need a
simple fix.

Personally, I don't like that.

BTW, I can't write PHP code. I don't even know what it is, so obviously
I'm not a monkey. ;)

Advice: Keep it that way :)


Igmar
 
Data structures come at a price. In software they're cheap, in hardware
they're less cheap. I need to think harder in VHDL about the structure
in general. I find myself far less limited in software (which is also
a potential problem, if you ask me).

I use data structures often in hardware, RAM, FIFO, stacks, arrays, etc.

Is that always synthesizable?

With software, you attach a debugger, and you can step through.
With VHDL, it's not that simple. So yes, I call this a different
mindset. If you think like a software programmer, you'll sooner
or later end up with a non-working, hard to debug prototype.

So you don't use the debugger to set breakpoints and step through
complex RTL? I guess maybe not if you code too close to the EDIF.

I did that on old Altera software. I failed with the latest version,
still need to look into that. In IntelliJ, it just attaches and it
works (c).

Been developing FPGAs in VHDL like SW (enlightened by digital HW
circuit design experience) for 20+ years now. Have fewer problems
than when I tried to code netlists, doing the synthesis tool's job
for it. Sure it's not _exactly_ like SW, but many, many principles
of SW development are highly applicable to RTL.

True. I have 20+ years in software, not in hardware. Getting up to
speed on VHDL again, which I last used at university (that was 15 years
ago).

Any questions or issues? I think the hard part of VHDL is the strong
typing, which is very similar to Ada. That is no longer hard for me, so
it's all easy other than dealing with my requirements for the problem.

I'm used to languages with strong typing. I consider that a big plus myself, especially if you have a decent compiler that warns you when things look weird.

In what sense? Cutting it up into the right modules, you mean? I
especially found VHDL variables vs. signals confusing, and the fact
that it looks sequential but isn't.

Any time you need to break a problem down to parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential parts of VHDL (processes) are *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process stops. This is because signals are intended to
model hardware with delays. So all signal assignments are made with a
delta delay as a minimum, which is zero time (think infinitesimal in
math), and so won't happen until the process ends. All statements in a
process happen without time advancing, even delta time.

Clear. I find debugging programs with a large number of threads a huge PITA. If something goes wrong, it's sometimes nearly impossible to trace, especially since debuggers change behaviour (read: timing).

I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based and highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.

I've never done that. Only C, C++, Pascal, PHP, Java, Scala, Python and bash.

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.

Well, take for example the LD A, 0xFF Z80 instruction. That roughly does:

1) Fetch instruction byte from PC and increase PC
2) Fetch operands from PC, and increase PC
3) Move operand to A register
4) Increase PC

How do you test that? You also want to test timing, since on a real Z80, this instruction takes a fixed amount of time (no pipeline, no cache, no prefetching, no nothing).

I would say confirm each step, but I'm still working that out.


Regards,


Igmar
 
On 11/12/2015 2:36 AM, Igmar Palsenberg wrote:
Op dinsdag 10 november 2015 04:04:31 UTC+1 schreef Andy:
On Monday, November 9, 2015 at 5:50:51 AM UTC-6,
igmar.pa...@boostermedia.com wrote:
It's a whole different thing, assuming the end result ends up on
an FPGA. Software is sequenced, around data structures. VHDL is not
sequenced, and doesn't have things software has: lock issues,
memory alignment issues, etc.

Really? You think a modern SW compiler doesn't tweak your sequence
to take advantage of the processor's capabilities? Whether it's SW
or HW: coder, know thy compiler!

Sure.

And who says data structures are exclusive to SW? Oh, you mean they
aren't available in Verilog? Use a better language!

Data structures come at a price. In software they're cheap, in hardware
they're less cheap. I need to think harder in VHDL about the structure
in general. I find myself far less limited in software (which is also
a potential problem, if you ask me).

I use data structures often in hardware, RAM, FIFO, stacks, arrays, etc.


With software, you attach a debugger, and you can step through.
With VHDL, it's not that simple. So yes, I call this a different
mindset. If you think like a software programmer, you'll sooner
or later end up with a non-working, hard to debug prototype.

So you don't use the debugger to set breakpoints and step through
complex RTL? I guess maybe not if you code too close to the EDIF.

I did that on old Altera software. I failed with the latest version,
still need to look into that. In IntelliJ, it just attaches and it
works (c).

Been developing FPGAs in VHDL like SW (enlightened by digital HW
circuit design experience) for 20+ years now. Have fewer problems
than when I tried to code netlists, doing the synthesis tool's job
for it. Sure it's not _exactly_ like SW, but many, many principles
of SW development are highly applicable to RTL.

True. I have 20+ years in software, not in hardware. Getting up to
speed on VHDL again, which I last used at university (that was 15 years
ago).

Any questions or issues? I think the hard part of VHDL is the strong
typing, which is very similar to Ada. That is no longer hard for me, so
it's all easy other than dealing with my requirements for the problem.


I doubt that still works for a *very* large design. In software,
the OS handles a lot for you. No cache flushes, no memory
barriers, etc.

I would beg to differ. Very large designs are where SW approaches
make the most sense and benefit. The larger the design (and body of
code), the harder it is to maintain if you don't think about it
like SW.

In what sense? Cutting it up into the right modules, you mean? I
especially found VHDL variables vs. signals confusing, and the fact
that it looks sequential but isn't.

Any time you need to break a problem down to parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential parts of VHDL (processes) are *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process stops. This is because signals are intended to
model hardware with delays. So all signal assignments are made with a
delta delay as a minimum, which is zero time (think infinitesimal in
math), and so won't happen until the process ends. All statements in a
process happen without time advancing, even delta time.
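As a minimal sketch of that difference (assuming a clock clk and an integer signal s declared elsewhere; the names are invented):

  process (clk)
    variable v : integer := 0;
  begin
    if rising_edge(clk) then
      v := v + 1;  -- the variable updates immediately
      s <= v;      -- the signal update is only scheduled, one delta later
      -- here v already holds the new value, while s still holds the old one
    end if;
  end process;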


Cache flushes are not unique to SW. The OS is just more SW. So we
need to write a little more code to do that in RTL.

I haven't reached that point yet :)


Can a programmer write VHDL code? Sure. But there is a huge
difference between good and "it works". Every monkey can write PHP
code, but writing good, maintainable code is a different story.

Yes there is a difference between SW and HW, nobody is denying
that. But I've reviewed, maintained, and debugged too many RTL
designs written too close to the netlist level not to recognize the
benefits of SW approach to RTL.

You have to know where to pay attention to the HW (async clock
boundaries are a big chunk). Then handle that close to the HW
level, but encapsulate it into a few reusable entities (like system
calls to the OS in SW) and then concentrate on the function,
throughput and latency of the rest of the design. On a multi-person
team, only one or two need to deal with the low level stuff, the
rest can design at a much higher level, where the behavior of the
code is critical.

I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based and highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.


I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.
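For instance, a self-checking stimulus process for a hypothetical AND gate (a, b and y are invented for illustration) can look like this:

  stim : process
  begin
    a <= '1'; b <= '0';
    wait for 10 ns;
    assert y = '0' report "y should be '0' when b = '0'" severity error;
    a <= '1'; b <= '1';
    wait for 10 ns;
    assert y = '1' report "y should be '1' when a = b = '1'" severity error;
    wait; -- done
  end process;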

--

Rick
 
On 11/12/2015 2:41 AM, Igmar Palsenberg wrote:
Op dinsdag 10 november 2015 09:15:13 UTC+1 schreef rickman:
On 11/9/2015 6:50 AM, igmar.palsenberg@boostermedia.com wrote:
On Sunday, November 1, 2015 at 8:27:30 AM UTC+1, rickman wrote:

I think your distinction is pointless. You said "VHDL is a
description, not a program" and I gave you an example where this
is not true. End of discussion for me.

Fine. That doesn't mean you're right.

??? You don't make sense. I gave you an example of VHDL that is
a program, as used every day, and you reject that???


As to the "mindset", there was a software designer who wanted
to code an FPGA in VHDL and came here asking for advice. We
told him about how he needed to adjust his thinking to design
hardware and not code software. I wrote to him personally to
explain why this was important and came close to getting some
consulting time with his firm. In the end his bosses had faith
that he could do a good job and so he wrote the code himself,
without any trouble.

It's a whole different thing, assuming the end result ends up on
an FPGA. Software is sequenced, around data structures. VHDL is not
sequenced, and doesn't have things software has: lock issues,
memory alignment issues, etc.

I don't think you understand VHDL. VHDL has sequential code; that
is what a process is.

But all of them run in parallel. With software, it's the other way
around.

Processes run in parallel in both VHDL and in software. In software your
process may be *hugely* complex doing many, many things in each one.
Partly that's because there is a huge overhead for setting up and
managing each process in software. In VHDL a process has no overhead in
the implementation. I'm not sure how complex process management is in
simulation. I've not heard it is a problem though. The speed problems in
simulation often come from the data structures. Integers are faster than
std_logic, et al.


With software, you attach a debugger, and you can step through.
With VHDL, it's not that simple. So yes, I call this a different
mindset. If you think like a software programmer, you'll sooner
or later end up with a non-working, hard to debug prototype.

Huh??? I use single stepping with VHDL at times. Normally it
isn't that useful because there is so much parallelism, things tend
to jump around as one process stops and another starts... same as
software on a processor with interrupts or multitasking.

That's also getting more common in software these days. Ever tried
debugging a 1M messages / second Akka application?

I've used Altera's Max Plus II, that only had waveforms. Hooking a
real simulator up with Quartus failed for me. I might try the Xilinx
tools, see if I have better luck with them.

I've worked with Max +II. Debugging in VHDL is built in. ASSERT is a
great tool. Waveforms are for user exploration when a bug is found.


Can a programmer write VHDL code? Sure. But there is a huge
difference between good and "it works". Every monkey can write PHP
code, but writing good, maintainable code is a different story.

I'm not sure what you are going on about. You started by saying
"VHDL is a description, not a program." Now you seem to be
splitting all manner of hairs and calling programmers "monkeys".

I was trying to point out there is a difference between getting
something to work, and actually understanding it. I failed at that
:)

I've seen that many times. I've even done it when required. Sometimes
you don't have the time to "understand" something if you just need a
simple fix.

BTW, I can't write PHP code. I don't even know what it is, so obviously
I'm not a monkey. ;)

--

Rick
 
Data Structures (records, arrays, etc.) can be custom defined, and as long as they boil down to synthesizable types (Boolean, integer, enumerated types, std_logic, etc.) it all synthesizes. But they can sure make passing data around a lot easier.
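A small sketch of such a custom type (the names are invented):

  type sample_t is record
    valid : std_logic;
    data  : unsigned(7 downto 0);
  end record;

  type sample_buf_t is array (0 to 15) of sample_t;

  signal buf : sample_buf_t;  -- synthesizes to registers or RAM, depending on use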

I took a couple of Ada programming classes shortly after I started using VHDL, and it helped a lot. Just like Ada, VHDL functions and procedures can be defined to perform common functionality (combinatorial) that can be called sequentially, rather than using a separate entity/architecture that must be concurrently instantiated and communicated with. Things like Hamming/ECC encoding/decoding functions are much easier to use than entities to do the same thing. These functions can be defined inside another subprogram, a process, or inside an architecture or a package, depending on how widely they need to be used.
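For example, a simple parity function (standing in here for the Hamming/ECC case) can be called right inside a process, with no entity/port plumbing:

  function even_parity(v : std_logic_vector) return std_logic is
    variable p : std_logic := '0';
  begin
    for i in v'range loop
      p := p xor v(i);
    end loop;
    return p;
  end function;

  -- inside a clocked process:
  -- tx_word <= data & even_parity(data);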

Use variables for data that does not need to leave the process. Then assign the variables to signals to send to another process/entity. That works much better in that the description is then purely sequential, just like SW. You just have to make sure you don't put too much behavior in one cycle of latency.

And if the logic won't meet the clock period, then add a register (a clock cycle of latency in the process behavior) before and/or after the subprogram call and enable retiming optimization.
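Roughly like this, with invented names (crunch being the subprogram in question):

  process (clk)
  begin
    if rising_edge(clk) then
      din_r  <= din;             -- register before the call
      dout_r <= crunch(din_r);   -- register after the call; retiming can then
                                 -- balance the logic across the two stages
    end if;
  end process;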

Subprograms work really well with state machines, since you can call the subprogram right there in the state, rather than set up an interface to some other entity or process. Just remember that VHDL subprograms have no static variables, so if your procedure needs to remember something from one call to the next, it needs to be passed (an inout data structure parameter works well here). Otherwise, if you declare a procedure in a process, anything declared beforehand in the same process is also visible inside the procedure. And signals/ports that are visible to the process are visible inside the procedure too! That can cut down a lot on how much you have to explicitly pass in/out of a procedure with each call. This way, it is easy to describe an entire state machine in one subprogram.
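A skeleton of that pattern (the signal and state names are invented):

  process (clk)
    -- tx_data and tx_strobe are signals visible to the process, so the
    -- procedure can drive them without passing them as parameters
    procedure send_byte(b : in std_logic_vector(7 downto 0)) is
    begin
      tx_data   <= b;
      tx_strobe <= '1';
    end procedure;
  begin
    if rising_edge(clk) then
      tx_strobe <= '0';
      case state is
        when IDLE =>
          if start = '1' then
            send_byte(x"AA");  -- called right there in the state
            state <= SENDING;
          end if;
        when SENDING =>
          if done = '1' then
            state <= IDLE;
          end if;
      end case;
    end if;
  end process;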

The trick is, rather than thinking registers with so many gates between them, think in terms of clock cycles of latency (iterations of the clocked process's inherent "forever" loop). Try not to put too much serial work in any one clock cycle. How much is too much depends on the device and the clock rate.

As far as testing, we use a continuous integration flow with Jenkins. Our testbenches are 100% self-checking (waveforms are for debugging only), and use constrained-random stimulus, while monitoring all DUT outputs for comparison in scoreboards with an untimed reference model. Coverage models for the stimulus ensure we cover what we need to cover in terms of functionality. This can all be done in SystemVerilog using UVM, or in VHDL using OSVVM. We do not do unit level testing. We may use test versions of DUT entities to make it easier to get to an internal entity's functionality, but we always use the DUT interface to simplify the stimulus application (drivers) and response capture (monitors). The scoreboards hook up to the monitors, not the DUT, and to the reference model. Monitors can also verify interface protocols, without having to know anything about the stimulus or expected response.
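The constrained-random part can be as small as this sketch using OSVVM's RandomPkg (din and clk are invented; assumes library osvvm; use osvvm.RandomPkg.all; plus numeric_std):

  stim : process
    variable rnd : RandomPType;
  begin
    rnd.InitSeed(rnd'instance_name);  -- reproducible randomness
    for i in 1 to 1000 loop
      din <= std_logic_vector(to_unsigned(rnd.RandInt(0, 255), 8));
      wait until rising_edge(clk);
    end loop;
    wait;
  end process;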

Hope this helps,

Andy
 
On 11/12/2015 12:10 PM, Igmar Palsenberg wrote:
Data structures come at a price. In software they're cheap, in hardware
they're less cheap. I need to think harder in VHDL about the structure
in general. I find myself far less limited in software (which is also
a potential problem, if you ask me).

I use data structures often in hardware, RAM, FIFO, stacks, arrays, etc.

Is that always synthesizable?

They are if you describe them with synthesizable code. There is nothing
special about any of them that a compiler can't understand. It's all
just logic.


With software, you attach a debugger, and you can step through.
With VHDL, it's not that simple. So yes, I call this a different
mindset. If you think like a software programmer, you'll sooner
or later end up with a non-working, hard to debug prototype.

So you don't use the debugger to set breakpoints and step through
complex RTL? I guess maybe not if you code too close to the EDIF.

I did that on old Altera software. I failed with the latest version,
still need to look into that. In IntelliJ, it just attaches and it
works (c).

Been developing FPGAs in VHDL like SW (enlightened by digital HW
circuit design experience) for 20+ years now. Have fewer problems
than when I tried to code netlists, doing the synthesis tool's job
for it. Sure it's not _exactly_ like SW, but many, many principles
of SW development are highly applicable to RTL.

True. I have 20+ years in software, not in hardware. Getting up to
speed on VHDL again, which I last used at university (that was 15 years
ago).

Any questions or issues? I think the hard part of VHDL is the strong
typing, which is very similar to Ada. That is no longer hard for me, so
it's all easy other than dealing with my requirements for the problem.

I'm used to languages with strong typing. I consider that a big plus myself, especially if you have a decent compiler that warns you when things look weird.

I like it in some cases, but until more recently it was a PITA to try to
use as it requires a lot more typing. I am willing to give Verilog a shot
if I can find a good book.


In what sense? Cutting it up into the right modules, you mean? I
especially found VHDL variables vs. signals confusing, and the fact
that it looks sequential but isn't.

Any time you need to break a problem down to parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential parts of VHDL (processes) are *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process stops. This is because signals are intended to
model hardware with delays. So all signal assignments are made with a
delta delay as a minimum, which is zero time (think infinitesimal in
math), and so won't happen until the process ends. All statements in a
process happen without time advancing, even delta time.

Clear. I find debugging programs with a large number of threads a huge PITA. If something goes wrong, it's sometimes nearly impossible to trace, especially since debuggers change behaviour (read: timing).

Delta delays get around repeatability issues while Verilog can be a
killer because of them. If you are single stepping to debug code you
are most likely doing it wrong. That is a poor technique in nearly any
language.


I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based and highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.

I've never done that. Only C, C++, Pascal, PHP, Java, Scala, Python and bash.

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.

Well, take for example the LD A, 0xFF Z80 instruction. That roughly does:

1) Fetch instruction byte from PC and increase PC
2) Fetch operands from PC, and increase PC
3) Move operand to A register
4) Increase PC

How do you test that? You also want to test timing, since on a real Z80, this instruction takes a fixed amount of time (no pipeline, no cache, no prefetching, no nothing).

I would say confirm each step, but I'm still working that out.

Again, you only care about inputs and outputs. Make the opcode
available at the last point in time it can be read by the timing spec or
model the memory as its own function (which you then need to verify). I
don't know that the timing of internal events is required other than
clock cycle alignment. Even then it is hard to test internal features
other than functionally. So execute the next instruction to read the A
register and read it out. That also verifies the PC increment. Test it
as you would a Z80 chip.
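A rough sketch of that, with the testbench acting as the memory (the bus signal names and CLK_PERIOD are invented; the second instruction writes A back out so the result is observable at the pins):

  -- ROM: LD A,0xFF ; LD (0x0006),A
  type mem_t is array (0 to 7) of std_logic_vector(7 downto 0);
  signal mem : mem_t := (x"3E", x"FF", x"32", x"06", x"00", others => x"00");

  bus_model : process (clk)  -- serve reads, capture writes
  begin
    if rising_edge(clk) then
      if mreq_n = '0' and rd_n = '0' then
        din <= mem(to_integer(unsigned(addr(2 downto 0))));
      elsif mreq_n = '0' and wr_n = '0' then
        mem(to_integer(unsigned(addr(2 downto 0)))) <= dout;
      end if;
    end if;
  end process;

  check : process
  begin
    wait for 20 * CLK_PERIOD;  -- 7 T-states for LD A,n plus 13 for LD (nn),A
    assert mem(6) = x"FF"
      report "LD A,0xFF: accumulator value never reached memory" severity error;
    wait;
  end process;

Because the check waits for the documented cycle count, it also fails if the design takes longer than a real Z80 would.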

I believe I have found ways to read internal signals of modules in VHDL.
I think this is a simulator feature rather than a language feature
though. Verilog supports this directly.
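(For what it's worth, VHDL-2008 did make this a language feature, called external names; simulator support varies. The hierarchy path below is invented:)

  -- read an internal signal from the testbench, VHDL-2008 style:
  alias pc_probe is << signal .tb_z80.dut.pc_reg : unsigned(15 downto 0) >>;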

What issues are you concerned about?

--

Rick
 
On 11/12/2015 12:00 PM, Igmar Palsenberg wrote:
Processes run in parallel in both VHDL and in software.

That can only be done if:

a) You have multiple cores / processors

Neither VHDL nor single processors actually run processes in parallel.
The processor is time multiplexed to run one process at a time. But I'm
sure you know that.


> b) You have an OS that supports it

Yeah... so?

> If either of those is not met, it all runs sequentially.

So??? Even with multiple processors multiprocessing has all the same
issues.


In software your process may be *hugely* complex doing many, many
things in each one. Partly that's because there is a huge overhead
for setting up and managing each process in software. In VHDL a
process has no overhead in the implementation. I'm not sure how
complex process management is in simulation. I've not heard it is a
problem though. The speed problems in simulation often come from
the data structures. Integers are faster than std_logic, et al.

I don't expect simulation to run at full speed. Similar tools in
software usually have overhead too.

I've used Altera's Max Plus II, that only had waveforms. Hooking
a real simulator up with Quartus failed for me. I might try the
Xilinx tools, see if I have better luck with them.

I've worked with Max +II. Debugging in VHDL is built in. ASSERT
is a great tool. Waveforms are for user exploration when a bug is
found.

Hmm... I never saw it. On the other hand: that was 15 years ago.

ASSERT has been part of VHDL from the beginning. Read up on test
benches. Simulating hardware by generating waveforms manually and
looking at the outputs is a PITA.


I was trying to point out there is a difference between getting
something to work, and actually understanding it. I failed at
that :)

I've seen that many times. I've even done it when required.
Sometimes you don't have the time to "understand" something if you
just need a simple fix.

Personally, I don't like that.

There are lots of things about work I don't like. But the job is to do
the job, not make myself happy... well, not all the time.


BTW, I can't write PHP code. I don't even know what it is, so
obviously I'm not a monkey. ;)

Advice: Keep it that way :)


Igmar

--

Rick
 
@rickman

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.

VUnit doesn't replace the assert statement, it builds on top of it. So VUnit starts where plain VHDL test support stops.
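For example, where plain VHDL gives you (dout is invented for illustration):

  assert dout = x"FF" report "dout wrong" severity error;

the check package that comes with VUnit (via vunit_context) also logs the actual and expected values on failure:

  check_equal(dout, std_logic_vector'(x"FF"), "dout after load");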

Regards,

Lars
 
@Andy

> As far as testing, we use a continuous integration flow with Jenkins. Our testbenches are 100% self-checking (waveforms are for debugging only), and use constrained-random stimulus, while monitoring all DUT outputs for comparison in scoreboards with an untimed reference model. Coverage models for the stimulus ensure we cover what we need to cover in terms of functionality. This can all be done in SystemVerilog using UVM, or in VHDL using OSVVM. We do not do unit level testing. We may use test versions of DUT entities to make it easier to get to an internal entity's functionality, but we always use the DUT interface to simplify the stimulus application (drivers) and response capture (monitors). The scoreboards hook up to the monitors, not the DUT, and to the reference model. Monitors can also verify interface protocols, without having to know anything about the stimulus or expected response.

When I promote the use of VUnit it's usually very easy when people have previous experience with unit testing tools for SW. They know what to expect and they know that they want it. It seems to me that you may have such experience but decided to do only top level testing anyway. That makes me a bit curious about the reasons. Are you working as a verification engineer, an RTL designer, or both?

Anyway, it might be interesting for you to know that VUnit doesn't know what a unit is, it doesn't care about your test strategy as long as your testbenches are self-checking, and it has support for Jenkins integration. So if you wrap your testbench in this

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : runner_cfg_t);
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);

    -- Put whatever your "main process" is doing here

    test_runner_cleanup(runner); -- Simulation ends here
  end process;

  -- Put your DUT, scoreboards, monitors, reference models here

end architecture;

and create a Python script (run.py) like this

from vunit import VUnit
vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("*.vhd")

# Create as many libraries as needed and add source files to them

vu.main()

and do

python run.py -x test_report.xml

from the command line you will have something that (assuming you're using ModelSim, Riviera-PRO, Active-HDL or GHDL) compiles your source files in dependency order based on what has been modified. The script then finds and runs your testbench(es) and generates a test report in "Jenkins format".

Regards,

Lars
 
I'm used to languages with strong typing. I consider that a big plus myself, especially if you have a decent compiler that warns you when things look weird.

I like it in some cases, but until more recently it was a PITA to try to
use as it requires a lot more typing. I am willing to give Verilog a shot
if I can find a good book.

In my experience, a good IDE can help. Sigasi helps a lot; it also has features that at least the Quartus editor doesn't have.

In what sense? Cutting it up into the right modules, you mean? I
especially found VHDL variables vs. signals confusing, and the fact
that it looks sequential but isn't.

Any time you need to break a problem down to parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential parts of VHDL (processes) are *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process stops. This is because signals are intended to
model hardware with delays. So all signal assignments are made with a
delta delay as a minimum, which is zero time (think infinitesimal in
math), and so won't happen until the process ends. All statements in a
process happen without time advancing, even delta time.

Clear. I find debugging programs with a large number of threads a huge PITA. If something goes wrong, it's sometimes nearly impossible to trace, especially since debuggers change behaviour (read: timing).

Delta delays get around repeatability issues while Verilog can be a
killer because of them. If you are single stepping to debug code you
are most likely doing it wrong. That is a poor technique in nearly any
language.

Personally, I use logging to a file a lot, and breakpoints at certain points, mainly in async code.

I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based and highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.

I've never done that. Only C, C++, Pascal, PHP, Java, Scala, Python and bash.

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.

Well, take for example the LD A, 0xFF Z80 instruction. That roughly does:

1) Fetch instruction byte from PC and increase PC
2) Fetch operands from PC, and increase PC
3) Move operand to A register
4) Increase PC

How do you test that? You also want to test timing, since on a real Z80, this instruction takes a fixed amount of time (no pipeline, no cache, no prefetching, no nothing).

I would say confirm each step, but I'm still working that out.

Again, you only care about inputs and outputs. Make the opcode
available at the last point in time it can be read by the timing spec or
model the memory as its own function (which you then need to verify). I
don't know that the timing of internal events is required other than
clock cycle alignment. Even then it is hard to test internal features
other than functionally. So execute the next instruction to read the A
register and read it out. That also verifies the PC increment. Test it
as you would a Z80 chip.

I'll just start trying. The Z80 is pretty simple, which makes it a lot easier to get started with. Good exercise.

I believe I have found ways to read internal signals of modules in VHDL.
I think this is a simulator feature rather than a language feature
though. Verilog supports this directly.

What issues are you concerned about?

I normally test internals (in software, that is), since it makes determining what broke easier than just testing the public interfaces.
I probably just need to get started, and not think about this too much.


Igmar
 
On 11/16/2015 2:22 AM, Igmar Palsenberg wrote:
I'm used to languages with strong typing. I consider that a big plus myself, especially if you have a decent compiler that warns you when things look weird.

I like it in some cases, but until more recently it was a PITA to try to
use as it requires a lot more typing. I am willing to give Verilog a shot
if I can find a good book.

In my experience, a good IDE can help. Sigasi helps a lot; it also has features that at least the Quartus editor doesn't have.

I've heard a lot of good things about Emacs in this regard.


In what sense? Cutting it up into the right modules, you mean? I
especially found VHDL variables vs. signals confusing, and the fact
that it looks sequential but isn't.

Any time you need to break a problem down to parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential parts of VHDL (processes) are *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process stops. This is because signals are intended to
model hardware with delays. So all signal assignments are made with a
delta delay as a minimum, which is zero time (think infinitesimal in
math), and so won't happen until the process ends. All statements in a
process happen without time advancing, even delta time.

Clear. I find debugging programs with a large number of threads a huge PITA. If something goes wrong, it's sometimes nearly impossible to trace, especially since debuggers change behaviour (read: timing).

Delta delays get around repeatability issues while Verilog can be a
killer because of them. If you are single stepping to debug code you
are most likely doing it wrong. That is a poor technique in nearly any
language.

Personally, I use logging to a file a lot, and breakpoints at certain points, mainly in async code.

That requires a lot of manual work. If you are trying to find a problem
it can be useful. But for verification it is better to automate the
process.


I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based and highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.

I've never done that. Only C, C++, Pascal, PHP, Java, Scala, Python and bash.

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.

Well, take for example the LD A, 0xFF Z80 instruction. That roughly does:

1) Fetch instruction byte from PC and increase PC
2) Fetch operands from PC, and increase PC
3) Move operand to A register
4) Increase PC

How do you test that? You also want to test timing, since on a real Z80, this instruction takes a fixed amount of time (no pipeline, no cache, no prefetching, no nothing).

I would say confirm each step, but I'm still working that out.

Again, you only care about inputs and outputs. Make the opcode
available at the last point in time it can be read by the timing spec or
model the memory as its own function (which you then need to verify). I
don't know that the timing of internal events is required other than
clock cycle alignment. Even then it is hard to test internal features
other than functionally. So execute the next instruction to read the A
register and read it out. That also verifies the PC increment. Test it
as you would a Z80 chip.

I'll just start trying. The Z80 is pretty simple, which makes it a lot easier to get started with. Good exercise.

I believe I have found ways to read internal signals of modules in VHDL.
I think this is a simulator feature rather than a language feature
though. Verilog supports this directly.

What issues are you concerned about?

I normally test internals (in software, that is), since it makes determining what broke easier than just testing the public interfaces.
I probably just need to get started, and not think about this too much.

Ok

--

Rick
 
On Sunday, November 15, 2015 at 6:57:33 PM UTC, Lars Asplund wrote:
@Andy

As far as testing, we use a continuous integration flow with Jenkins. Our testbenches are 100% self-checking (waveforms are for debugging only), and use constrained-random stimulus, while monitoring all DUT outputs for comparison in scoreboards with an untimed reference model. Coverage models for the stimulus ensure we cover what we need to cover in terms of functionality. This can all be done in SystemVerilog using UVM, or in VHDL using OSVVM. We do not do unit level testing. We may use test versions of DUT entities to make it easier to get to an internal entity's functionality, but we always use the DUT interface to simplify the stimulus application (drivers) and response capture (monitors). The scoreboards hook up to the monitors, not the DUT, and to the reference model. Monitors can also verify interface protocols, without having to know anything about the stimulus or expected response.

When I promote the use of VUnit it's usually very easy when people have previous experience with unit testing tools for SW. They know what to expect and they know that they want it. It seems to me that you may have such experience but decided to do only top level testing anyway. That makes me a bit curious about the reasons. Are you working as a verification engineer, an RTL designer, or both?

Anyway, it might be interesting for you to know that VUnit doesn't know what a unit is, it doesn't care about your test strategy as long as your testbenches are self-checking, and it has support for Jenkins integration. So if you wrap your testbench in this

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : runner_cfg_t);
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);

    -- Put whatever your "main process" is doing here

    test_runner_cleanup(runner); -- Simulation ends here
  end process;

  -- Put your DUT, scoreboards, monitors, reference models here

end architecture;

and create a Python script (run.py) like this

from vunit import VUnit
vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("*.vhd")

# Create as many libraries as needed and add source files to them

vu.main()

and do

python run.py -x test_report.xml

from the command line you will have something that (assuming you're using ModelSim, Riviera-PRO, Active-HDL or GHDL) compiles your source files in dependency order based on what has been modified. The script then finds and runs your testbench(es) and generates a test report in "Jenkins format".

Regards,

Lars

Hmmm, I don't really do too much with this now, but there are echoes of what you are doing in what I was doing with this:

http://www.p-code.org/ttask.html

For automatic checking I was just logging to files with this:

http://www.p-code.org/tbmsgs.html

As you can see, that's pretty basic; the VHDL code is here:

http://www.p-code.org/tbmsgs.fossil/artifact/19271c770959048b

There is also a Verilog version of that if you root around.

I see you are using VUnit as a build tool as well as the testing framework. I did that differently, as I kept the build system separate. Well, as much as I could, because the build system scans log files and prints a summary if it finds tbmsgs messages. But that can be easily changed with customized extensions (see the bottom of the first link).
 
On Sunday, November 15, 2015 at 12:57:33 PM UTC-6, Lars Asplund wrote:
@Andy
When I promote the use of VUnit it's usually very easy when people have previous experience with unit testing tools for SW. They know what to expect and they know that they want it. It seems to me that you may have such experience but decided to do only top level testing anyway. That makes me a bit curious about the reasons. Are you working as a verification engineer, an RTL designer, or both?

Mostly design, but some verification experience.

Perhaps I should say, we use unit level testing, but only at the top (design) level! Under SW testing standards, we test using the production compiler on representative production HW, or as close as practical. Simulation and modeling are forms of analysis, not testing, strictly speaking.

Given that the simulator and synthesis, place & route tool are very different "compilers", and the simulation server bears no resemblance to the target HW, the only way to perform testing is by using verification systems that capture stimulus & DUT response from simulation(s), and then play that stimulus against the real programmed FPGA while comparing the FPGA's response to the simulation response, using Aldec CTS for example.

So, if we have to show, under a SW test approach, how all of our verification coverage goals are met, on representative HW, then we have to attain that coverage by stimulating the RTL at the top level during simulation. Then we replay that stimulus to the programmed device, and show that the DUT response was the same as simulated. Note that the simulation showed the response met requirements.

Other than speeding up some simulations, there is no advantage for us in lower level unit testing (directly stimulating/monitoring a lower level module at its ports). We still have to test it via the top level (device) interface. There are still some functional coverage items we cannot cover at the hardware level, especially things that require "white box" verification, like FSM illegal state recovery, internal memory EDAC, etc.

Andy
 
On Monday, November 16, 2015 at 11:40:32 AM UTC-6, rickman wrote:
On 11/16/2015 2:22 AM, Igmar Palsenberg wrote:
In my experience, a good IDE can help. Sigasi helps a lot, it also has features that at least the Quartus editor doesn't have.

I've heard a lot of good things about Emacs in this regard.

Even the free version of Sigasi is incredibly useful, if you (and your employer) can tolerate its mandatory talk-back feature. And the paid version has LOTS more features. IDEs that are truly language aware are extremely valuable.

For example, even in the free version, you can set Sigasi to fontify subprogram names differently than other text. If it cannot find a matching subprogram within scope, WITH MATCHING ARGUMENT SIGNATURE, it won't fontify it, telling you immediately that you either misspelled it, or misused it with your arguments/types.

Andy
 
@Andy

Different standards have different opinions on unit testing and to what extent unit test results can be used to prove correctness of a design. Even if you're allowed to use them you might be required to justify why the results gained from a test run in a non-production environment are valid for the real thing as well.

Anyway, the key value of unit testing isn't in building test coverage, it's in the productivity and design quality boost. A fully automated unit test methodology enables short code/test cycles such that a developer can test frequently and start to test early. With early and continuous feedback it's much easier to keep the work on track, considering that most people produce bugs, misinterpret requirements and make bad design decisions on a daily basis. If you only test at the system level you can't test early and frequently, partly because there is a delay before there is a system level to test at all, partly because there is a delay before an already existing system is compatible with your new piece of code, and partly because system level testing is slower. There are a number of problems with this, and their significance depends on project size:

1. A bug discovered late might not affect you much if it's something as simple as a faulty value of a constant. But if the bug reveals a design flaw there is a risk that you have had the time to add a considerable amount of code based on that design and have to make significant changes.
2. A bug released to be integrated with other code before system testing can take place can cause significant harm. We all know this but here are some personal experiences. After you release your bug there is a delay before the code is used at all because not all teams are synchronized. When the code is used and the bug starts to show, people spend time debugging their own code and there is a delay before the bug report finds its way to your team. Once there, it's often incomplete; information is missing to recreate and debug the problem. Meanwhile team schedules are slipping and workarounds are added to not stop progress. Once the root cause has been fixed and the workarounds can be removed it turns out that newly written code depends on these workarounds, so more work is needed before development can proceed smoothly. Unfortunately, a workaround was part of a customer release and now they rely on it, which becomes apparent when you make the next release. They are not willing to take the consequences of removing that workaround on short notice so you end up supporting two different variants. At some point, often much later, all dependencies on the workaround are gone. Someone finds the FIXME in the code but the memory of why it was introduced is gone or in the heads of people no longer available. Potentially dead code is bad practice so it has to be investigated to decide if it can be removed or not. And so on...

In the same way that the lint support of Sigasi adds value by providing continuous feedback based on static code analysis unit testing adds value by providing continuous feedback based on dynamic code analysis.

Testing at unit level also drives design. Creating test cases for the units forces you to think about clear functional responsibilities for that unit. This promotes strong cohesion of the unit and loose coupling to other units, which are signs of a good (readable and maintainable) design.

/Lars
 
On Thursday, November 26, 2015 at 11:02:27 AM UTC-6, Lars Asplund wrote:
@Andy

Different standards have different opinions on unit testing and to what extent unit test results can be used to prove correctness of a design. Even if you're allowed to use them you might be required to justify why the results gained from a test run in a non-production environment are valid for the real thing as well.

If you only test at the system level you can't test early and frequently, partly because there is a delay before there is a system level to test at all, partly because there is a delay before an already existing system is compatible with your new piece of code, and partly because system level testing is slower. There are a number of problems with this, and their significance depends on project size

....
/Lars

Lars,

Sorry it's been a while...

First off, when I say "system level" I mean "FPGA level". With that in mind....

We don't wait for the whole system to be anywhere near complete before we test. See Jim Lewis' <excellent> paper "Accelerating Verification Through Pre-Use of System-Level Testbench Components". We use this approach to get units integrated into a partially implemented system & test early, with no wasted time developing unit level tests that won't need to be used again.

The difficulty in unit level testing is the quantity of unique interfaces for all the units, and all the monitors, drivers and often-unique transactions required for them. By developing the units in a sensible order to allow a testbench with a system-level interface to exercise the functionality provided by lower level units, we can test early and often with one TB <-> DUT interface, drivers, monitors, etc.

This approach also makes it easier to refine the system architecture for improved performance, utilization, functionality, etc. because our verification is immune to changes in unit level interfaces that change with the architecture.

Andy
 
Lars,

Sorry it's been a while...

First off, when I say "system level" I mean "FPGA level". With that in mind...

We don't wait for the whole system to be anywhere near complete before we test. See Jim Lewis' <excellent> paper "Accelerating Verification Through Pre-Use of System-Level Testbench Components". We use this approach to get units integrated into a partially implemented system & test early, with no wasted time developing unit level tests that won't need to be used again.

The difficulty in unit level testing is the quantity of unique interfaces for all the units, and all the monitors, drivers and often-unique transactions required for them. By developing the units in a sensible order to allow a testbench with a system-level interface to exercise the functionality provided by lower level units, we can test early and often with one TB <-> DUT interface, drivers, monitors, etc.

This approach also makes it easier to refine the system architecture for improved performance, utilization, functionality, etc. because our verification is immune to changes in unit level interfaces that change with the architecture.

Andy

Hi Andy,

Thanks for the link to Jim's paper. I've seen a very similar approach at a company I was helping getting started with VUnit but I didn't know the origin. I think there's much to say about this but here is a short summary

- The paper describes the "traditional" approach as a methodology testing the same things at all levels, where once you have the top-level testbench you throw away the testbenches at the unit (subblock in the paper) level, which is a waste. This is not an inherent property of multi-level testing but rather a result of bad practices. When unit testing I aim for full functional coverage at the unit level; higher levels of testing are focused on verifying integration issues that can't be tested at the lower levels. Since all testbenches add unique value you don't throw them away.

- To people sceptical about unit testing I recommend starting with a unit testing framework and applying it to their current testbenches. It provides many features valuable at all levels of testing. Once you understand how it works and how it removes various obstacles in testing you will hopefully see how unit testing becomes more available and less cumbersome. Once there you can take advantage of the values unit testing provides, values that are lost if you just do higher level testing or variants thereof like the one in the paper.

- When adopting unit testing there is a degree of personal preference and project-specific circumstances that affect how it is applied. However, I have never met any HW or SW developer who wanted to go back to where they were before once they had been properly introduced to a good unit testing framework. I've also seen a number of HW and SW teams make the transition, and they have all noticed a productivity and quality boost.


========================================================================
DETAILS:


- In the end we are all looking for improved productivity and there are a number of ways in which we can achieve this. First we should maximize the work not done (avoid waste), and if it's something that must be done we should look for ways to automate the work, speed it up, or make it simpler in some other way. Whatever methodology we use, it should scale with design size and complexity and respond well to change. The need for handling change, whether it's in requirements, design or implementation, is an inevitable part of larger and more complex designs. We are simply not capable of figuring out everything in advance but have to adapt as we move along.

- A short code/test cycle helps us find our mistakes ASAP to avoid wasting time heading in the wrong direction. But how short is short enough? Suppose I'm developing a simple UART, 200 lines of code or so, let's say a day of work. For that I have a handful of test cases, maybe a little bit more: sending and receiving a byte, several bytes, verifying reset conditions, special cases like overflow. On average I will have a new test case/feature that can be verified every hour. It's not unlikely that I will introduce a bug during an hour of coding, and I have to test this eventually anyway, so why not do it immediately?

- One thing that may prevent me taking this approach is overly long simulation times. The method proposed in the paper would on average contain half the complete system's worth of logic in addition to the unit being tested. That is significant in a larger system where the total amount of logic is much larger than that of a unit.

- As you mention, all the interfaces you need to address can become a burden when you have one testbench for each unit. However, you're allowed to take well-motivated short-cuts even if you're into unit testing, and people often do. Given that the CPU interface in Jim's example doesn't affect your ability to observe and control the units behind it and it doesn't add significant delays, it may save you time to test the CPU interface (CIF) + the timer as a "unit", CIF+UART as a unit, and CIF+memory interface as a unit. To verify integration of the full system I need a testbench with a few reads/writes to make sure that decoding works and I don't have conflicts on the internal bus. Same number of testbenches and unique interfaces to handle as in the paper, but no superfluous logic and no abandoned testbenches.

- The reason that unit grouping like the one I explained can be motivated is that the "main" units in the paper sit right behind a "transparent" CPU interface and they are largely independent. A larger and more complex design would have more units, they would be more "embedded", and there would be more dependencies. Figuring out how to control and observe the unit under test becomes harder from the system boundary. There may also be significant delays in the paths before the targeted unit becomes activated (= simulation time). If testing is hard and slow you won't do it as often (= longer code/test cycles). Also, the amount of logic per I/O of a modern FPGA is many times higher than it was in 2003 when this paper was written. The observability and controllability from the system interfaces are constantly getting worse.

- In general it's not possible to add units to a system in such an order that you can test them individually from the system boundary. This is the case with the CPU interface logic in the examples, which is partially tested with manual inspection. An alternative would of course be to make that self-checking and fully tested, that is, to develop a proper unit test. Another approach would be to postpone testing (= longer code/test cycle) and wait until you have more units in place. This is what I do when grouping the CPU interface with the timer. This problem will get worse with larger and more complex designs.

- Some other obstacles to the short code/test cycle that VUnit removes (see the sketch after this list):
- If you want frequent testing your testbenches must be self-checking. VUnit provides a check package that improves over plain VHDL asserts (but you can use assert as well) and a test runner to organize the test cases in your testbench
- With many testbenches and test cases it has to be convenient to run them frequently or you won't. VUnit will automatically compile all your files in dependency order, find and execute all your test cases (or a subset you specify), and present the result.
- VUnit can, with a command line option, split your test case simulations between many threads/CPU cores, which can run in the background while you continue to work interactively with another instance of your simulator and the next piece of code. If you can hide the simulation time altogether there's no excuse for not running the tests.
- If you run many simulations in parallel licensing may become an issue. VUnit supports the free and open source GHDL simulator to remove that obstacle. Use whatever simulator you prefer for interactive work and let GHDL handle batch simulations.
- Developing transactions may be cumbersome as you noted. VUnit has a package providing message passing combined with code generation for your transactions.
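
As promised above, here is a minimal sketch of such a self-checking VUnit testbench skeleton. It assumes the standard VUnit VHDL context and runner API; the entity name and test case names are placeholders:

  library vunit_lib;
  context vunit_lib.vunit_context;

  entity tb_uart is
    generic (runner_cfg : string);
  end entity;

  architecture tb of tb_uart is
  begin
    main : process
    begin
      test_runner_setup(runner, runner_cfg);
      while test_suite loop
        if run("send_single_byte") then
          -- drive the DUT and verify its outputs here
          check_equal(2 + 2, 4, "placeholder check");
        elsif run("reset_conditions") then
          check(true, "placeholder check");
        end if;
      end loop;
      test_runner_cleanup(runner);  -- must be called last
    end process;
  end architecture;

Because each test case sits in its own run() branch, the runner can execute them in isolation, which is also what makes it possible to distribute them over several simulator instances.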


- In a previous post I described how unit testing drives an architecture with highly cohesive and loosely coupled units, that is, a modular design. Such a design is easier to maintain since changes tend to be more localized. More localized changes mean fewer interface changes and less rework of testbenches when making the type of optimizations you mention. One thing we changed from the first generation of VUnit to where we are today is that our test cases are no longer procedures defined in a separate package, since an interface change means that you have to update the procedure declarations as well. Instead, our test cases are defined in the same scope as the unit instance so that they have access to the interfaces directly. I guess you have similar problems when the test cases are defined in architectures of a separate test control entity. Again, remove obstacles whenever you can.
 
Lars,

A fundamental aspect of SW test that is not possible in unit level "test" is that, by definition, software "test" must use the production compiler to produce production object code, which is then run and tested on production-representative hardware. In reality, simulation is analysis, not test.

In HDL, the production compiler is the synthesis, place & route (SPR) tool chain (not a simulator!), and the production-representative hardware/system is the target FPGA. None of these are available at the unit level. SPR does not compile each module once and "call" it multiple times: each instance is separately SPR'd as part of the system, and each instance is uniquely optimized for its environment within the system.

In order to perform testing of programmable logic, all stimulus/response must be provided/captured at observable interfaces of the FPGA. RTL simulation at the FPGA level can be used to verify and capture the stimulus and response at the FPGA level, which can then be applied to a real FPGA, using the real bit file (the "production" object code).

Even after that, we still have to run integration testing using real system hardware to provide the stimulus/response to the FPGA, to verify that the simulation models of those external components were accurate.

Therefore, even if we wanted to avail ourselves of the virtues of unit level testing, all coverage must still be achieved at the FPGA level, since that is the only level at which the PL can be truly tested.

Necessary exceptions to this include white-box testing at the RTL or gate level.

All that said, I agree with your statement about the virtues of loosely coupled modules, which are encouraged by unit level development and testing. There are other ways to encourage these virtues, including coding/design standards and reviews.

Just keep in mind that what is loosely coupled in the design is not always loosely coupled in the FPGA after optimization, placement and routing, particularly if physical synthesis is employed.

Andy
 
Hi Andy,

I've probably been a bit careless with the word "system" as well. What I'm trying to say is that unit testing and higher level testing with integration focus is an effective way of developing your code *before* hitting the synthesis button, i.e. the scope described in Jim's paper.

*After* you hit that button, assuming HW is available, you will need additional testing, and I agree that you won't see the true nature of your PL until it has been integrated into the complete system, tested over full operating conditions and so on, but that doesn't take away the value of getting to this point efficiently.

To me there's also a difference between the full functional testing done at higher levels and that done at the unit level. The PL units that were hard to test from the PL boundary will be even harder to test at the higher levels when additional HW and SW is involved. Due to the sheer amount of PL, SW and HW units in a larger system it would also be unmanageable to expose them all at every level of testing; it's just too much detail. For example, if you let system testing focus on verifying system functions you will handle this and also make sure that the decomposition of system functionality into unit functionality was done correctly. Unit functionality will of course be tested indirectly, but if the unit details aren't exposed to system testing there may be corner cases only visible to, and verified by, the unit tests. Both types of testing add unique value.

Your point that there are other activities than test to get valuable feedback on your design is important and often forgotten. Especially reviews, which have been shown to be an important ingredient in quality work. The goal to simplify, automate and shorten the feedback loop applies to these activities as well, and unit testing plays a role in doing so. Here are some ways to improve on code reviews:
* You can let a lint tool check some of your coding guidelines rather than having that manually reviewed.
* If you review code and unit tests at the same time the reviewers won't waste their time trying to figure out if the code works at all. The reviewer can also learn a lot by looking at what was tested and how it was tested. For example, if he/she expects some functionality but there are no test cases for it, then that functionality is either missing or not properly verified.
* If you set up a formal code review meeting you have to find a time that fits everyone, so the feedback cycle can be long. Since most of the work is done by the reviewers before the meeting, you can shorten that time with a tool that allows everyone to review and comment code inline and submit when done.
* Live reviewing through pair programming will make the feedback loop much shorter. To what extent pair programming is cost efficient is often debated, but my personal view is that it should be used when *you* feel that you're about to take on a tricky piece of code and could use another pair of eyes. In the end someone else should review your code, one way or the other.
* Self-reviews also provide fast feedback and should always be done. The act of writing unit tests helps you with that. It helps you take a step back and think critically about your code. I would say that I find/avoid at least as many flaws writing the unit tests as I do running them. So self-reviews are in a way more powerful than unit testing, but here is an interesting question: would you be as rigorous about self-reviewing if you didn't have a precise method "forcing" you into it? Even when all your test cases pass you should have another look. It works, but can I make it work more efficiently, can I make the code more readable, is code coverage sufficient, and so on? When you have a set of fast and working unit tests the fear of changing something that works is much reduced.

Reviews are powerful but a problem is that we tend to be lazy. "There's no need to review this small change or run the slow top-level simulations because it's not going to affect anything other than what I was trying to fix. I'm just going to build a new FPGA and release it." Running a selected set of fast regression unit tests in a background thread is an effortless way of stopping you when you're wrong.

The benefits of loose coupling and high cohesion I was thinking about are those associated with the source code. Better readability, maintainability, and also significantly lower bug rates. The fact that the tools may destroy that modularity when optimizing is actually good because it means, to some extent, that I can meet my constraints while still enjoying the benefits of modular code. The alternative would be to write optimized code, destroy the code modularity and lose the benefits.

/Lars
 
I don't think you understand VHDL. VHDL has sequential code; that is
what a process is.

I prefer to think of it this way: VHDL processes do not affect each
other while executing. A process is affected by others only when it
goes to sleep.
http://www.sigasi.com/content/vhdls-crown-jewel
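
A small self-contained example of that point (names invented for illustration): both processes run at time zero, and it doesn't matter which one the simulator executes first, because p1's assignment only takes effect once p1 has gone to sleep:

  entity signal_update_demo is
  end entity;

  architecture sim of signal_update_demo is
    signal s : integer := 0;
  begin
    p1 : process
    begin
      s <= 1;  -- only *schedules* the update
      report "p1 sees s = " & integer'image(s);  -- prints 0
      wait;
    end process;

    p2 : process
    begin
      report "p2 sees s = " & integer'image(s);  -- prints 0, in any order
      wait on s;  -- suspend; the scheduled update now takes effect
      report "p2 now sees s = " & integer'image(s);  -- prints 1
      wait;
    end process;
  end architecture;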


Huh??? I use single stepping with VHDL at times. Normally it isn't
that useful because there is so much parallelism; things tend to jump
around as one process stops and another starts... same as software on a
processor with interrupts or multitasking.

You do not have interrupts in VHDL. There is no preemption. VHDL is
graceful, "cooperative" multitasking.
 
