
creating program


Andy
Guest

Tue Jan 12, 2016 6:08 am   



On Monday, January 4, 2016 at 5:16:56 PM UTC-6, Lars Asplund wrote:
Quote:
Hi Andy,

I've probably been a bit careless with the word "system" as well. What I'm trying to say is that unit testing and higher-level testing with an integration focus are an effective way of developing your code *before* hitting the synthesis button, i.e. the scope described in Jim's paper.

*After* you hit that button, assuming HW is available, you will need additional testing, and I agree that you won't see the true nature of your PL until it has been integrated into the complete system, tested over full operating conditions, and so on. But that doesn't take away the value of getting to this point efficiently.

To me there's also a difference between the full functional testing done at higher levels and that done at the unit level. The PL units that were hard to test from the PL boundary will be even harder to test at the higher levels, when additional HW and SW is involved. Due to the sheer number of PL, SW and HW units in a larger system, it would also be unmanageable to expose them all at every level of testing; it's just too much detail. If you instead let system testing focus on verifying system functions, you handle this and also make sure that the decomposition of system functionality into unit functionality was done correctly. Unit functionality will of course be tested indirectly, but if the unit details aren't exposed to system testing there may be corner cases only visible to, and verified by, the unit tests. Both types of testing add unique value.

Your point that there are other activities than test that give valuable feedback on your design is important and often forgotten, especially reviews, which have been shown to be an important ingredient in quality work. The goal to simplify, automate and shorten the feedback loop applies to these activities as well, and unit testing plays a role in doing so. Here are some ways to improve on code reviews:
* You can let a lint tool check some of your coding guidelines rather than having that manually reviewed.
* If you review code and unit tests at the same time the reviewers won't waste their time trying to figure out if the code works at all. The reviewer can also learn a lot by looking at what was tested and how it was tested. For example, if he/she expects some functionality but there are no test cases for that it means that the functionality is missing or it's not properly verified.
* If you set up a formal code review meeting you have to find a time that fits everyone so the feedback cycle can be long. Since most of the work is done by the reviewers before the meeting you can shorten that time with a tool that allows everyone to review and comment code inline and submit when done.
* Live reviewing through pair programming will make the feedback loop much shorter. To what extent pair programming is cost efficient is often debated but my personal view is that it should be used when *you* feel that you're about to take on a tricky piece of code and could use another pair of eyes. In the end someone else should review your code, one way or the other.
* Self-reviews also provide fast feedback and should always be done. The act of writing unit tests helps you with that: it makes you take a step back and think critically about your code. I would say that I find/avoid at least as many flaws writing the unit tests as I do running them. So self-reviews are in a way more powerful than unit testing, but here is an interesting question: would you be as rigorous about self-reviewing if you didn't have a precise method "forcing" you into it? Even when all your test cases pass you should have another look. It works, but can I make it more efficient, can I make the code more readable, is code coverage sufficient, and so on? When you have a set of fast and working unit tests, the fear of changing something that works is much reduced.

Reviews are powerful, but a problem is that we tend to be lazy. "There's no need to review this small change or run the slow top-level simulations, because it's not going to affect anything other than what I was trying to fix. I'm just going to build a new FPGA and release it." Running a selected set of fast regression unit tests in a background thread is an effortless way of stopping you when you're wrong.

The benefits of loose coupling and high cohesion I was thinking about are those associated with the source code. Better readability, maintainability, and also significantly lower bug rates. The fact that the tools may destroy that modularity when optimizing is actually good because it means, to some extent, that I can meet my constraints while still enjoying the benefits of modular code. The alternative would be to write optimized code, destroy the code modularity and lose the benefits.

/Lars


Lars,

You make excellent points about the importance of good code reviews. One cannot over-emphasize that.

I'll admit that reaching that level of coverage with simulations from the chip's edge is not inexpensive; it requires LOTS of simulation time. Luckily, that scales up efficiently with relatively few personnel (just computers and simulator licenses). It also plays well in a constrained-random verification environment, making the creation of lots of test cases relatively easy. Coverage models in the testbench (bins, covergroups, etc.) and in the DUT (e.g. statement and FEC coverage) help tell you when you are done.

Intelligent randomization, like that possible in OSVVM, can significantly reduce redundancy in randomization, and save runs (time and/or licenses).
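As a rough sketch of what that flow looks like in OSVVM (the bin range and names here are made up for illustration), the intelligent coverage loop draws its next stimulus only from bins that are not yet covered:

Code:

library osvvm;
use osvvm.CoveragePkg.all;

-- inside a testbench process:
stim : process
  variable cov  : CovPType;   -- coverage model
  variable data : integer;
begin
  cov.AddBins(GenBin(0, 15)); -- one bin per value 0..15
  while not cov.IsCovered loop
    data := cov.RandCovPoint; -- randomize among uncovered bins only
    -- drive 'data' into the DUT here...
    cov.ICover(data);         -- record the hit
  end loop;
  report "all bins covered";
  wait;
end process;

Because each draw targets an uncovered bin, you avoid re-running redundant stimulus, which is where the saved runs come from.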

Even without unit-level testing per se, it is easy to create wrapper components (or just architectures) for units that help ensure other modules are handling the interface correctly. For instance, if I have one unit that provides data with a valid strobe asserted when the data is available, a wrapper can force unknown (or random) data on the output when the strobe is not asserted. If the recipient tries to use the data when the strobe is not asserted, easily discoverable functional errors will result. A minimal sketch of such a wrapper is below.
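Here the "producer" unit and its port names are invented for the example:

Code:

library ieee;
use ieee.std_logic_1164.all;

-- Simulation wrapper: forces 'X' on data_out whenever valid is low,
-- so a consumer that samples data outside the strobe sees unknowns.
entity producer_wrap is
  port (
    clk      : in  std_logic;
    data_out : out std_logic_vector(7 downto 0);
    valid    : out std_logic
  );
end entity;

architecture sim of producer_wrap is
  signal data_int  : std_logic_vector(7 downto 0);
  signal valid_int : std_logic;
begin
  u_unit : entity work.producer    -- the real unit
    port map (clk => clk, data_out => data_int, valid => valid_int);

  valid    <= valid_int;
  data_out <= data_int when valid_int = '1' else (others => 'X');
end architecture;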

One can also provide protocol monitors on internal unit interfaces to verify coverage of tricky unit interface sequences if needed.
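For a simple valid/ready style handshake, such a monitor can be just a few lines (the clock and signal names are assumptions for the sketch):

Code:

-- Checks that, once valid is asserted, data is held stable and valid
-- stays high until ready completes the transfer.
monitor : process(clk)
  variable prev_valid : std_logic := '0';
  variable prev_ready : std_logic := '0';
  variable prev_data  : std_logic_vector(7 downto 0);
begin
  if rising_edge(clk) then
    if prev_valid = '1' and prev_ready = '0' then
      assert valid = '1'
        report "valid dropped before transfer completed" severity error;
      assert data = prev_data
        report "data changed while waiting for ready" severity error;
    end if;
    prev_valid := valid;
    prev_ready := ready;
    prev_data  := data;
  end if;
end process;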

These are much easier to develop than full-blown unit testing, especially in environments such as ours where functional coverage at the device level, in both HW and simulation, is needed. That is the only way you can fully test your "compiled" code, and therefore the "compiler" (SP&R: synthesis, place, and route) too.

Andy

Lars Asplund
Guest

Mon Feb 01, 2016 12:00 pm   



Andy,

Sorry for the late reply.

You're right that you can increase simulation throughput by adding computer cores and licenses. VUnit has the ability to distribute testing on *all* your CPU cores with a single command line option and it also supports GHDL so that there is no license cost associated with running many simulations in the background. You can use GHDL for batch jobs and use a paid license when working more interactively, for example when debugging.

However, the latency of a single test doesn't scale with more computers; a slow test will still be slow and hurt the short code/test cycle I'm looking for. A system-level testbench approach also tends to verify more things in the same test, which means those tests can't be parallelized in the same way as unit tests, where each test case can run on its own CPU core.

Unit testing is a complement to other testing methodologies, so it doesn't exclude/replace constrained random testing (or system-level testing). OSVVM is even redistributed with VUnit, but we're about to stop doing that now that OSVVM is released as an official GitHub repo. However, using randomization as a way to hit internal corner cases that you know about can be problematic. How long will it take to activate that corner case? When activated, will the effects propagate to an observable output? There is a good quote from Neil Johnson on this. He works with ASIC verification and is very active in promoting unit testing and Agile principles in general for ASIC development. He once said something like this about constrained random and UVM:

"Constrained random verification is great for finding bugs you didn't know about but terrible at finding potential bugs you do know"

It seems to me that the wrapper inserting X is also a special-case solution to the general problem of testing at the wrong level. The things you want to test are hard to reach at the system level, so you force values on an internal node and use the special properties of X to find the effects. However, making sure that such an X isn't consumed by a receiver is only one of the interface properties you want to verify on that hard-to-reach receiver unit, and most of the potential bugs won't result in an easy-to-spot X value.

Embedding checks to monitor things like the protocol of an interface is something that can be done with unit testing as well. Let the unit test provide the stimuli, but put the checks within the code if you want them to be reused in other test contexts. VUnit checks have translate_on/off pragmas internally so that they are ignored by synthesis. You can find this in the VUnit examples: https://github.com/VUnit/vunit/tree/master/examples/vhdl
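Written out by hand with a plain assert instead of the VUnit check API, the pattern is roughly this (the FIFO signals are invented for the example):

Code:

-- synthesis translate_off
check_no_overflow : process(clk)
begin
  if rising_edge(clk) then
    assert not (wr_en = '1' and full = '1')
      report "write attempted while FIFO is full" severity error;
  end if;
end process;
-- synthesis translate_on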

We seem to have different opinions on these matters, but I think one of the core Agile values applies: individuals and interactions over processes and tools. For obvious reasons I believe that VUnit is something all teams should try out, but in the end it's up to the team to figure out what works for them. VUnit can be used to automate your type of testing, run constrained random testing, distribute tests on different cores, and so on. It's not unit testing, but it might be something that works perfectly for you. We're just about to update the VUnit web site (vunit.github.io), and in the new version we've actually changed the one-line description of VUnit from "a unit testing framework for VHDL" to "a test framework for HDL". This better reflects its broad application and the fact that we also have emerging support for SystemVerilog.

Lars
