creating program

That's also getting more common in software these days. Ever tried debugging
a 1M-messages-per-second Akka application?

Thanks. Now nobody can say that a VHDL program is a program, after you
pointed this out. Indeed, if a VHDL program is indistinguishable from a
modern (Akka) application, it cannot be called a program. BTW, what do
you call a piece of Akka code? Why is Akka code not a "description"?

There was no reason to resort to this killer argument. Because VHDL is
an executable "description", it clearly cannot be a program. Being
executable and "program" are mutually exclusive, even disjoint things,
especially if you execute a description.
 
On January 5th, 2016, Valtih1978 claimed:
|---------------------------------------------|
|"[. . .]                                     |
|                                             |
|[. . .]                                      |
|[. . .] Being executable                     |
|and "program" are mutually exclusive, [. . .]|
|[. . .]"                                     |
|---------------------------------------------|

False.

Truly,
Paul Colin Gloster
 
On 1/5/2016 7:20 AM, valtih1978 wrote:
>>>> I don't think you understand VHDL. VHDL has sequential code, that is
>>>> what a process is.
>>>
>>> I prefer to think that VHDL processes do not affect each other while
>>> executing. The process is affected by others only when it goes to
>>> sleep. http://www.sigasi.com/content/vhdls-crown-jewel
>>
>> Huh??? I use single stepping with VHDL at times. Normally it isn't
>> that useful because there is so much parallelism, things tend to jump
>> around as one process stops and another starts... same as software on a
>> processor with interrupts or multitasking.
>
> You do not have interrupts in VHDL. There is no preemption. VHDL is
> graceful, "cooperative" multitasking.

Uh, interrupts are about a single processor shared between multiple
tasks. VHDL allows you to describe multiple hardware "execution units"
which all operate in parallel. If you want a single execution unit
shared between tasks, you can describe that too - complete with
interrupts. Your choice.

What I was describing is the behavior of the simulator, which
essentially *is* a single processor shared between VHDL tasks. No, it
doesn't have interrupts, but the task switching is very messy to try to
follow while single stepping.

--

Rick
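
To make the scheduling model in this exchange concrete, here is a minimal
simulation sketch (entity and signal names are illustrative, not from any
poster's code): each process runs uninterrupted between its wait statements
and only sees the other's signal updates once it suspends.

  library ieee;
  use ieee.std_logic_1164.all;

  entity handshake_demo is
  end entity;

  architecture sim of handshake_demo is
    signal req, ack : std_logic := '0';
  begin

    -- Producer: between two wait statements nothing can preempt it;
    -- ack cannot change "under its feet".
    producer : process
    begin
      req <= '1';
      wait until ack = '1';  -- the only point where others affect us
      report "producer saw ack";
      wait;                  -- suspend forever
    end process;

    -- Consumer: wakes only when req changes, i.e. only after the
    -- producer has suspended at its own wait statement.
    consumer : process
    begin
      wait until req = '1';
      ack <= '1';
      wait;
    end process;

  end architecture;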
 
On January 5th, 2016, Valtih1978 sent:
|-------------------------------------------------------------------------|
|"[. . .]                                                                 |
|                                                                         |
|> I don't think you understand VHDL. VHDL has sequential code, that is   |
|> what a process is.                                                     |
|                                                                         |
|I prefer"                                                                |
|-------------------------------------------------------------------------|

I would prefer to be a millionaire, but this does not make it so.

|-------------------------------------------------------------------------|
|"to think that VHDL processes do not affect each other while             |
|executing. The process is affected by others only when it goes to sleep."|
|-------------------------------------------------------------------------|

Therefore you admit that processes are affected by other processes.

|-------------------------------------------------------------------------|
|" http://www.sigasi.com/content/vhdls-crown-jewel "                      |
|-------------------------------------------------------------------------|

I quote from this webpage:
"[. . .]
[. . .] A signal value update may trigger a number of
processes. [. . .]
[. . .]"

|-------------------------------------------------------------------------|
|"> Huh??? I use single stepping with VHDL at times. Normally it isn't    |
|> that useful because there is so much parallelism, things tend to jump  |
|> around as one process stops and another starts... same as software on a|
|> processor with interrupts or multitasking.                             |
|                                                                         |
|You do not have interrupts in VHDL. There is no preemption."             |
|-------------------------------------------------------------------------|

It was a comparison - like saying that stars are like very big fires
in the sky. There are no very big fires in the sky - there are stars,
and stars are like very big fires.

|-------------------------------------------------------------------------|
|"VHDL is                                                                 |
|graceful,"                                                               |
|-------------------------------------------------------------------------|

VHDL is graceful.

|-------------------------------------------------------------------------|
|""cooperative" multitasking."                                            |
|-------------------------------------------------------------------------|

The SystemC(R) standard is defined in terms of cooperative so-called
multitasking, which (unlike VHDL) does not have true concurrency.

Regards,
Paul Colin Gloster
 
> In VHDL you can describe a CPU, shared between processes, with interrupts

Sorry, I was sure that you were talking about simple simulation of VHDL
processes. Now you say that my TV has apples in it. Indeed, I can
broadcast the garden and have apples on my TV. It therefore has apples
in it. Gotcha! That is a feature of the TV set. A VHDL process cannot be
interrupted. It is dedicated to its program. It cannot execute anything
else at all.


>> Processes can be affected by others only while waiting for events.
>
> Therefore you admit that processes are affected by other processes.

I admit that they can be interrupted, you say. Indeed, when you have
nothing to do and come to the task table to request a new task, and
they give you that task -- your execution is interrupted! You got me!
Indeed, I did not consider that waiting for a task can be interrupted
by the response.

This also means that Akka actors are interruptible. I do not understand
what people are debating in this case
https://groups.google.com/forum/#!msg/akka-user/TlIbfaC1eb8/EQ8S2NoI-oIJ. They
seem unable to find a way to interrupt the actors. What is the
problem? Every actor is interruptible, as you say. Just let it finish
its work and read your interrupt/task from the queue!
 
On Monday, January 4, 2016 at 5:16:56 PM UTC-6, Lars Asplund wrote:
Hi Andy,

I've probably been a bit careless with the word "system" as well. What I'm trying to say is that unit testing and higher-level testing with an integration focus are an effective way of developing your code *before* hitting the synthesis button, i.e. the scope described in Jim's paper.

*After* you hit that button, assuming HW is available, you will need additional testing, and I agree that you won't see the true nature of your PL until it has been integrated into the complete system, tested over full operating conditions, and so on. But that doesn't take away the value of getting to this point efficiently.

To me there's also a difference between the full functional testing done at higher levels and that done at the unit level. The PL units that were hard to test from the PL boundary will be even harder to test at the higher levels when additional HW and SW are involved. Due to the sheer number of PL, SW and HW units in a larger system, it would also be unmanageable to expose them all at every level of testing; it's just too much detail. For example, if you let system testing focus on verifying system functions, you will handle this and also make sure that the decomposition of system functionality into unit functionality was done correctly. Unit functionality will be tested indirectly, of course, but if the unit details aren't exposed to system testing, there may be corner cases only visible to, and verified by, the unit tests. Both types of testing add unique value.

Your point that there are activities other than test that give valuable feedback on your design is important and often forgotten. Especially reviews, which have been shown to be an important ingredient in quality work. The goal to simplify, automate and shorten the feedback loop applies to these activities as well, and unit testing plays a role in doing so. Here are some ways to improve on code reviews:
* You can let a lint tool check some of your coding guidelines rather than having that manually reviewed.
* If you review code and unit tests at the same time, the reviewers won't waste their time trying to figure out whether the code works at all. The reviewer can also learn a lot by looking at what was tested and how it was tested. For example, if he/she expects some functionality but there are no test cases for it, that means the functionality is either missing or not properly verified.
* If you set up a formal code review meeting you have to find a time that fits everyone, so the feedback cycle can be long. Since most of the work is done by the reviewers before the meeting, you can shorten that time with a tool that allows everyone to review and comment on code inline and submit when done.
* Live reviewing through pair programming makes the feedback loop much shorter. To what extent pair programming is cost-efficient is often debated, but my personal view is that it should be used when *you* feel that you're about to take on a tricky piece of code and could use another pair of eyes. In the end someone else should review your code, one way or another.
* Self-reviews also provide fast feedback and should always be done. The act of writing unit tests helps you with that: it helps you take a step back and think critically about your code. I would say that I find/avoid at least as many flaws writing the unit tests as I do running them. So self-reviews are in a way more powerful than unit testing, but here is an interesting question: would you be as rigorous about self-reviewing if you didn't have a precise method "forcing" you into it? Even when all your test cases pass you should have another look. It works, but can I make it work more efficiently, can I make the code more readable, is code coverage sufficient, and so on. When you have a set of fast and working unit tests, the fear of changing something that works is much reduced.

Reviews are powerful, but a problem is that we tend to be lazy. "There's no need to review this small change or run the slow top-level simulations, because it's not going to affect anything other than what I was trying to fix. I'm just going to build a new FPGA and release it." Running a selected set of fast regression unit tests in a background thread is an effortless way of stopping you when you're wrong.

The benefits of loose coupling and high cohesion I was thinking about are those associated with the source code. Better readability, maintainability, and also significantly lower bug rates. The fact that the tools may destroy that modularity when optimizing is actually good because it means, to some extent, that I can meet my constraints while still enjoying the benefits of modular code. The alternative would be to write optimized code, destroy the code modularity and lose the benefits.

/Lars

Lars,

You make excellent points about the importance of good code reviews. One cannot over-emphasize that.

I'll admit that reaching that level of coverage with simulations from the chip's edge is not inexpensive. It requires LOTS of simulation time. Luckily, that scales up extremely efficiently with relatively few personnel (just computers and simulator licenses). It also plays well in a constrained-random verification environment, making the creation of lots of test cases relatively easy. Coverage models in the testbench (bins, covergroups, etc.) and in the DUT (e.g. statement, FEC, etc.) help tell you when you are done.

Intelligent randomization, like that possible in OSVVM, can significantly reduce redundancy in randomization, and save runs (time and/or licenses).
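
For illustration, a minimal sketch of OSVVM's basic randomization API (the
entity name and seed string are made up for the example; OSVVM's
coverage-driven "intelligent" randomization builds on this same RandomPkg):

  library osvvm;
  use osvvm.RandomPkg.all;

  entity rand_demo is
  end entity;

  architecture sim of rand_demo is
  begin
    process
      variable rnd : RandomPType;
      variable len : integer;
    begin
      rnd.InitSeed("rand_demo");      -- reproducible named seed
      for i in 1 to 10 loop
        len := rnd.RandInt(1, 256);   -- uniform random packet length
        report "packet length: " & integer'image(len);
      end loop;
      wait;
    end process;
  end architecture;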

Even without unit-level testing per se, it is easy to create wrapper components (or just architectures) for units that can help ensure that other modules are handling the interface correctly. For instance, if I have one unit that provides data with a valid strobe asserted when the data is available, a wrapper can force unknown (or random) data on the output when the strobe is not asserted. If the recipient tries to use the data when the strobe is not asserted, easily discoverable functional errors will result.
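
A minimal sketch of such a wrapper (the wrapped unit "producer" and all
port names are hypothetical): it passes data through while the strobe is
asserted and forces 'X' otherwise.

  library ieee;
  use ieee.std_logic_1164.all;

  -- Simulation-only wrapper around a hypothetical unit "producer"
  -- whose data output is qualified by a valid strobe.
  entity producer_wrapper is
    port (
      clk   : in  std_logic;
      data  : out std_logic_vector(7 downto 0);
      valid : out std_logic
    );
  end entity;

  architecture xcheck of producer_wrapper is
    signal data_i  : std_logic_vector(7 downto 0);
    signal valid_i : std_logic;
  begin
    -- The real unit, assumed to exist elsewhere in the library.
    u_producer : entity work.producer
      port map (clk => clk, data => data_i, valid => valid_i);

    -- Pass data through only while the strobe is asserted; otherwise
    -- force unknowns so that a consumer sampling data at the wrong
    -- time produces easily discoverable errors downstream.
    data  <= data_i when valid_i = '1' else (others => 'X');
    valid <= valid_i;
  end architecture;

Since the wrapper is simulation-only, it can also live in an alternative
architecture of the unit itself, selected by the testbench configuration.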

One can also provide protocol monitors on internal unit interfaces to verify coverage of tricky unit interface sequences if needed.
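
Such a monitor might look like the following sketch, assuming a generic
valid/ready handshake for illustration; the checked rule and the port
names are not from any particular design.

  library ieee;
  use ieee.std_logic_1164.all;

  -- Passive monitor: once valid is asserted, data must hold steady and
  -- valid must stay high until ready accepts the transfer.
  entity handshake_monitor is
    port (
      clk   : in std_logic;
      valid : in std_logic;
      ready : in std_logic;
      data  : in std_logic_vector(7 downto 0)
    );
  end entity;

  architecture mon of handshake_monitor is
  begin
    process (clk)
      variable pending   : boolean := false;
      variable held_data : std_logic_vector(7 downto 0);
    begin
      if rising_edge(clk) then
        if pending then
          assert valid = '1'
            report "valid deasserted before transfer completed"
            severity error;
          assert data = held_data
            report "data changed while transfer was pending"
            severity error;
        end if;
        -- A transfer is pending when valid is high but not yet accepted.
        pending   := valid = '1' and ready = '0';
        held_data := data;
      end if;
    end process;
  end architecture;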

These are much easier to develop than full-blown unit testing, especially in environments such as ours where functional coverage at the device level, in both HW and simulation, is needed. That is the only way you can fully test your "compiled" code, and therefore the "compiler" (SP&R) too.

Andy
 
Andy,

Sorry for a late reply.

You're right that you can increase simulation throughput by adding computer cores and licenses. VUnit has the ability to distribute testing across *all* your CPU cores with a single command-line option, and it also supports GHDL, so there is no license cost associated with running many simulations in the background. You can use GHDL for batch jobs and a paid license when working more interactively, for example when debugging.

However, the latency of a test doesn't scale with more computers. A slow test will still be slow and hurt the short code/test cycle I'm looking for. A system-level testbench approach also tends to verify more things in the same test, which means those tests can't be parallelized in the same way as unit tests, where each test case can run on its own CPU core.

Unit testing is a complement to other testing methodologies, so it doesn't exclude/replace constrained random testing (or system-level testing). OSVVM is even redistributed with VUnit, but we're about to stop doing that now that OSVVM is released as an official GitHub repo. However, using randomization as a way to find internal corner cases that you know about can be problematic. How long will it take to activate that corner case? When activated, will the effects be propagated to an observable output? There is a good quote from Neil Johnson on this. He works with ASIC verification and is very active in promoting unit testing and Agile principles in general for ASIC development. He once said something like this about constrained random and UVM:

"Constrained random verification is great for finding bugs you didn't know about but terrible at finding potential bugs you do know"

It seems to me that the wrapper to insert X is also a special-case solution to the general problem of testing at the wrong level. The things you want to test are hard to reach at the system level, so you force values on an internal node and use the special properties of X to find the effects. However, making sure that such an X isn't consumed by a receiver is only one of the interface properties you want to verify on that hard-to-reach receiver unit, and most of the potential bugs won't result in an easy-to-spot X value.

Embedding checks to monitor things like the protocol of an interface is something that can be done with unit testing as well. Let the unit test provide the stimuli, but put the checks within the code if you want them to be reused in other test contexts. VUnit checks have translate_on/off pragmas internally so that they are ignored by synthesis. You can find this in the VUnit examples: https://github.com/VUnit/vunit/tree/master/examples/vhdl
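
As a sketch of the idea, using a plain VHDL assert guarded by synthesis
pragmas (the signal names are illustrative, and ieee.std_logic_1164, which
defines is_x, is assumed to be in scope; VUnit's check procedures wrap the
same pragma idiom):

  -- Placed inside the synthesizable unit itself, next to the
  -- interface it guards. Simulators run it; synthesis skips it.
  -- synthesis translate_off
  check_iface : process (clk)
  begin
    if rising_edge(clk) then
      -- Data must be known whenever the valid strobe is asserted.
      assert not (valid = '1' and is_x(data))
        report "unknown data while valid is asserted"
        severity error;
    end if;
  end process;
  -- synthesis translate_on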

We seem to have different opinions on these matters, but I think one of the core Agile values applies: individuals and interactions over processes and tools. For obvious reasons I believe that VUnit is something all teams should try out, but in the end it's up to the team to figure out what works for them. VUnit can be used to automate your type of testing, run constrained random testing, distribute tests on different cores, and so on. It's not unit testing, but it might be something that works perfectly for you. We're just about to update the VUnit web site (vunit.github.io), and in the new version we've actually changed the one-line description of VUnit from "a unit testing framework for VHDL" to "a test framework for HDL". This better reflects its broad application and the fact that we also have emerging support for SystemVerilog.

Lars
 
