The best VHDL library around for basic testbench checking functionality

Guest
According to the Wilson Report (2014 Wilson Research Group Functional Verification Study) on average 50% of FPGA designers' time is spent on verification, and almost half of that verification time is spent on debugging. This means:

1. Good reports for unexpected design behaviour are critical.
2. Good progress reporting is also critical.
3. Good basic testbench features are required.

Thus we need a library with good functionality for mismatch reporting, progress reporting, and for the checks needed in every single testbench, such as:
- checking value against expected
- waiting for something to happen - with a timeout
- checking stability of a signal
- waiting for a signal to be stable for a given time (with timeout)
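As a sketch of what such calls look like, here are BVUL-style checks (the procedure names follow the library's methods_pkg as bundled with the download; treat the exact signatures as assumptions and check the quick reference - the signal names dout/irq2cpu are hypothetical DUT signals):

```vhdl
-- Minimal sketch of BVUL-style basic checks inside a testbench sequencer process:

check_value(dout, x"FF", ERROR, "Checking data bus after read");          -- value vs expected
await_value(irq2cpu, '1', 0 ns, 100 ns, ERROR, "Waiting for interrupt");  -- wait for event, with timeout
check_stable(dout, 9 ns, ERROR, "dout must have been stable for 9 ns");   -- stability check
await_stable(irq2cpu, 50 ns, FROM_LAST_EVENT, 1 us, FROM_NOW, ERROR,
             "irq2cpu must settle within 1 us");                          -- stable-for-given-time, with timeout
```

Each call logs a message on mismatch or timeout rather than silently hanging or asserting.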

The only free library (to my knowledge) to provide all this functionality is Bitvis Utility Library.
A bonus feature of this library is that the user threshold is extremely low, as this has been a main goal throughout the development. Advanced features are available when you need them.

The library is free and open source, and you will be up and running within 20 minutes (by browsing through the downloadable PPT presentation).
The library has been checked to work with simulators from Mentor, Aldec and Xilinx. Version 2.5 with lots of new functionality was just published.

If this sounds interesting, you should read the intro below.
You can download the library and PPT from http://bitvis.no/resources/utility-library-download/ without any registration.

====================================================================================================
TB Purpose

The purpose of a testbench (TB) is to check the behaviour of your DUT (Device Under Test). This really goes without saying, but sometimes stating the obvious is needed. For any testbench you always provide stimuli and check the response. Sometimes this is a simple operation, and sometimes it is really complex. Most testbenches do, however, have some basic checking aspects in common.

Basic Checking Aspects

- Checking a signal value against an expected value - sometimes with partial don't care or a margin
- Checking stability on a given signal (that a certain time has elapsed since the last signal event)
- Waiting for a signal change or a specific value on a signal

Improving TB development efficiency and quality

The checks above are easily implemented in VHDL - or better, in self-made subprograms. The challenge is not writing the actual procedures and functions, but adding functionality to these checks to allow far more efficient TB development and problem debugging. The following are some examples that will significantly speed up your FPGA development:

- Reporting the actual mismatch - like 'was 0xFE, but expected 0xFF' - yields important debug information
- Reporting what is actually being checked - like 'Checking correct CRC for packet 1' - yields another piece of important information
- Reporting the source of a failing check leads the problem search in the right direction (e.g. a problem in UART 1)
- A positive acknowledge when a check passes is very useful when building the TB, BFMs, analysers, etc.
- Allowing the positive acknowledge to be filtered away is really useful once this part is working
- Counting alerts (errors, warnings, etc.) and potentially stopping on N errors allows good debugging flexibility
- Ignoring certain alerts is useful when deliberately provoking a misbehaviour
- Timeout when waiting for an event to happen inside a given time window - with a proper message - rather than hanging on a 'wait until'
Adding this functionality makes everything simpler, faster and better. The TB code will be more understandable (by anyone) and far simpler to maintain and extend. Debugging of both the DUT and the TB will be far more efficient. The progress report will make more sense to anyone reading it. And the quality of both the design and the TB will increase significantly.
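A few of these mechanisms, sketched with BVUL-style calls (these procedure names exist in the library's methods_pkg, but treat the exact parameters as assumptions; crc and ready are hypothetical signals):

```vhdl
-- Positive acknowledge is logged for each passing check; it can be filtered away:
disable_log_msg(ID_POS_ACK);             -- hide the per-check PASS lines once this part works

-- Alert counting and stop limits:
set_alert_stop_limit(ERROR, 5);          -- let the simulation continue until 5 errors
increment_expected_alerts(WARNING, 1);   -- accept one deliberately provoked warning

-- A check that reports both the mismatch and what was being checked:
check_value(crc, x"3A", ERROR, "Checking correct CRC for packet 1");

-- Waiting with a timeout and a proper message instead of a bare 'wait until':
await_value(ready, '1', 0 ns, 200 ns, ERROR, "Waiting for ready within 200 ns");
```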

A major impact on TB development

Now going back to the introduction. The sad fact is that for most testbenches a lot of development time is wasted and the quality of the TB is insufficient, and a major reason for this is the lack of a structured approach to logging and checking. The good news is that all this functionality is available for free through Bitvis Utility Library. Bitvis Utility Library is a free, open source VHDL library that will yield a major efficiency and quality improvement for almost all FPGA (or ASIC) development. The library has been downloaded by developers all over the world, and the feedback has been very good - also from specialists in the VHDL community.

Bitvis Utility Library also has excellent support for logging/reporting and verbosity control (see a previous post on LinkedIn). The combination of the logging/reporting/verbosity and checking support - all provided with Bitvis Utility Library - now makes it possible to develop more structured testbenches, with better verification of DUT functionality and better simulation transcripts with progress report and debug-support - and at the same time reduce the development workload and schedule.

For more advanced testbenches you might need additional support and TB structure for coverage (e.g. via OSVVM) and simultaneous access (stimuli/check) (e.g. via UVVM) on multiple interfaces, but you still need the functionality provided by Bitvis Utility Library as your base.

A very low user threshold

An essential feature of this library is that it has an extremely low user threshold, and at the same time has advanced options available when needed for more complex testbenches. You will be up and running, making far better testbenches in less than one hour.

Invest 10 minutes to browse through our PowerPoint presentations on 'Making a simple, structured and efficient VHDL testbench - Step-by-step' and/or 'Bitvis Utility Library Concepts and usage', both available for download (with no registration) from http://bitvis.no/resources/utility-library-download/. The library may be downloaded from the same page.

The library is free, and there is no catch. Enjoy :)
 
On Fri, 12 Jun 2015 06:39:43 -0700, espen.tallaksen wrote:


It's great to see a resurgence in VHDL tool development!

How does it compare with other modern VHDL testbench libraries like OSVVM
and VUnit?

Especially great to see lively competition between open-source tools,
where it sometimes feels like the commercial vendors wish VHDL would die
quietly...

http://osvvm.org/
https://github.com/LarsAsplund/vunit

Have you tried it with GHDL and GTKwave, to keep the whole simulation
toolchain open source?

https://sourceforge.net/projects/ghdl-updates/

Both the above libraries now work with the leading edge of GHDL, and have
pushed its VHDL-2008 support forwards so another GHDL release should
happen soon. Tristan, GHDL's main developer, has been especially active
lately in making this happen.

So, any reports of incompatibilities between Bitvis and GHDL would be
welcomed via
https://sourceforge.net/p/ghdl-updates/tickets/?source=navbar

Thanks,
-- Brian
 

Hi Brian,

Bitvis Utility Library (BVUL) is complementary to OSVVM and VUnit, with some minor overlaps. The coverage and advanced random generation of OSVVM and the unit test support of VUnit are great in combination with Bitvis Utility Library's checking/await and log/alert features. The main advantage of our library is that it provides the functionality you need for every single VHDL testbench, independent of verification approach, with a very low user threshold.
We will soon make a new release of BVUL, where you can combine the coverage and random generation of OSVVM seamlessly with BVUL, resulting in a major improvement for more advanced testbenches.
With respect to VUnit, Lars Asplund of Synective Labs (maintainer of VUnit) stated as early as February 2014 that he had used BVUL with VUnit and that it works perfectly well.
BVUL has been tested OK with Riviera-PRO (Aldec), ModelSim (Mentor) and Vivado Simulator (Xilinx). We can check compatibility with GHDL as well.
(We currently support VHDL-2008, -2002 and -93 versions of BVUL, but will soon continue developing only the 2008 version.)

Any feedback on BVUL is appreciated.
-Espen
 
On Monday, 15 June 2015 at 10:10:59 UTC+2, espen.t...@bitvis.no wrote:

Hi Brian and Espen,

The logging, the checking, and the unit-test-running functionality of VUnit are layers building on top of each other (in that order), and they are loosely coupled. This means it is easy to use the unit-test-running functionality on top of other assertion solutions as well, e.g. plain VHDL asserts, BVUL, OVL, or OSVVM, which also added similar functionality earlier this year. Integration with the OSVVM functionality is described in https://github.com/LarsAsplund/vunit/blob/master/examples/osvvm_integration/osvvm_integration.md and the same principle applies to BVUL as well.

The layered approach also means that the VUnit logging and checking functionality can be used standalone, for other verification approaches, without the layer for running unit tests. When used standalone you can choose between the VHDL-93 or VHDL-200x versions, depending on the simulator you have. VUnit's official support is currently limited to ModelSim and GHDL, but that limitation is driven by the unit-test-running layer, since that layer involves scripting the simulator. The other layers are not limited in this way.

/Lars
 
Just to clarify: VUnit is also independent of the user's verification approach. I have personally used it to run constrained random verification using OSVVM, multi-hour end-to-end tests against golden reference data, as well as small directed unit tests. Typically a project will have a bit of each. In projects I have been involved in with 200+ tests, my estimate is that 60% were constrained random, 30% directed and 10% big end-to-end.

What makes VUnit different is that it is not just a VHDL library. It tries to be a complete testing tool for VHDL, in which the library features you describe are one important piece of the puzzle. The cornerstones of VUnit are:

1)
Support for dependency scanning and incremental compilation.
So that the edit/compile/run cycle is as fast as possible.

2)
A VHDL library for the checks/asserts/logging needed to write the actual testbench. Procedures for saving/loading test data to/from .csv and .raw files, etc. We also redistribute OSVVM, since it provides additional library capabilities for random number generation and coverage.

3)
A Python command line interface such that tests can be run automatically, either in batch or in the GUI, with minimal user effort; such that tests can be configured to run for all combinations of generic values; such that tests can be run in parallel; and such that VHDL testing can be integrated with Continuous Integration environments such as Jenkins.

BVUL fills the role of 2) and could probably replace the corresponding libraries that were created specifically for VUnit. As Lars Asplund mentions, the checking and logging libraries are orthogonal to the other parts of VUnit, making it possible to use BVUL instead of the VUnit built-in checks. Replacing the built-in parts of VUnit with BVUL is not something I have investigated much, since I would rather focus on adding missing functionality to VUnit than on replacing existing functionality with something of another flavor.

Although 2) is an important part of the puzzle, without 1) and 3) it is just not as productive, since the user has to perform a lot of manual work to compile, run and administer their tests. Many companies have some home-brew variant of 3) of varying quality, though. The goal of VUnit was to make people stop re-inventing the wheel with proprietary in-house solutions, and instead use those man-hours to improve something that everyone can use.

// The second main author of VUnit
 
Hi Olof,

I agree that BVUL only covers your item 2 above, but then again item 2 is where you can save by far the most hours in a complex FPGA project. Items 1 and 3 are also important, and will save quite a few hours.

BVUL is, however, not just another flavour. BVUL has verbosity control, optional positive acknowledge on checks, and some very important additional checks, resulting in faster testbench development and faster debugging.
We will also very soon integrate BVUL tighter with OSVVM and add even more advanced verification capabilities through UVVM (to be released soon).

I think it could be great if we could cooperate on combining VUnit and BVUL, so that we get the best of both worlds, but we could take that discussion offline ;-)

-Espen
 
Espen, first I just want to note that the corresponding libraries in VUnit also have the features you describe.

Secondly, maybe this comes down to personal preference, but I would not value 1) and 3) less than 2). For most projects the majority of testbenches do not need to be that advanced; check_equal with automatic reporting of "got vs expected", random number generation, and a simple watchdog or timeout cover 90% of the need.

My personal testing preference is that each VHDL entity should have a testbench achieving full functional coverage. It is important to do this to drive the design into a good partition of loosely coupled and testable entities. Most entities will be small, their corresponding testbenches run fast, and then there is not much need for advanced logging or delayed failure. In such a situation I have a strong preference for immediate failure on a failing alert/check, since when stopping immediately the VHDL call stack can be emitted at the point of failure. I would rather open the simulator GUI and look at the waveform, or single-step through the code past the failure, which can be done by just running the VUnit command line with the --gui flag. The larger end-to-end tests are also required even with the above philosophy, and in that situation more advanced logging and delayed alert/check failure can be more useful - but check_equal still takes you very far.
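For reference, the check_equal call mentioned above looks roughly like this in a VUnit testbench (a minimal sketch; the exact failure text and the type of the runner_cfg generic vary between VUnit versions):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : string);
end entity;

architecture sim of tb_example is
  signal dout : std_logic_vector(7 downto 0) := x"FE";
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);
    -- On mismatch this reports both values, in the spirit of
    -- "Got 1111_1110 (254). Expected 1111_1111 (255)."
    check_equal(dout, std_logic_vector'(x"FE"), "checking data register");
    test_runner_cleanup(runner);  -- reports pass/fail back to the Python runner
  end process;
end architecture;
```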

My experience is that without 1) and 3), many people tend to fall back to just having the large end-to-end tests, since it is such a burden to maintain the scripts that handle the 200+ small test cases which would have made the code base a lot easier to maintain, with fewer bugs and more modularity. VUnit lets you just add a test to your testbench, or add a new testbench, and it is automatically part of the test suite thanks to the test scanner feature. When writing a test it also lets the user focus completely on their task, allowing them to effortlessly edit/compile/re-run with a single command.

With all that said, I would be interested in trying to collaborate in some way, and I have sent you a personal email. I will just note that we are doing similar things, and to me it seems a shame to fragment a small community by having multiple competing/exclusive implementations of the same puzzle piece.


On Friday, June 19, 2015 at 4:20:42 PM UTC+2, espen.t...@bitvis.no wrote:
 
On Fri, 19 Jun 2015 07:20:39 -0700, espen.tallaksen wrote:


By all means take the details off-line but please summarize the outcome
here!

Thanks to yourself, Olof and Lars for discussing - and indeed, creating -
these useful tools!

-- Brian
 
I downloaded BVUL to have a look. It looks very similar to what we have in VUnit. I noticed the example testbench bitvis_irqc/tb/irqc_tb.vhd could benefit from the VUnit Python/VHDL automation. My interpretation is that you use log messages with ID_LOG_HDR to visually/textually separate different independent test cases; I count 8 of those. With VUnit you could have those 8 as actual independent test cases, run in different simulations (or optionally all in the same simulation), with individual pass/fail in the test report. An individual test can easily be run from the command line using a wildcard (*) pattern. VUnit would also ensure that each test case gets its own dedicated output folder to gather all simulation artifacts, such as the complete stderr/stdout, wlf file and transcript, as well as any other user-defined outputs such as images or other binary data files.

On Saturday, June 20, 2015 at 12:21:59 PM UTC+2, Brian Drummond wrote:
 
I made a small effort to split irqc_tb.vhd into separate test cases using VUnit. I had a problem with test independence - the "Check irq acknowledge and re-enable" test depended on a variable value from the previous "Check autonomy for all interrupts" test case - but I soon found and fixed it. I also ensured your _Alert.txt and _Log.txt files ended up in the test-specific output folders. I did not need to use any of your hardcoded compile scripts, since VUnit figured out the dependencies automatically. The non-verbose textual output when running looked like this (the "pass" being green in a real terminal):

Starting irqc_lib.irqc_tb.Check defaults on output ports
pass (P=1 S=0 F=0 T=7) irqc_lib.irqc_tb.Check defaults on output ports (0.9 seconds)

Starting irqc_lib.irqc_tb.Check register defaults and access write read
pass (P=2 S=0 F=0 T=7) irqc_lib.irqc_tb.Check register defaults and access write read (0.3 seconds)

Starting irqc_lib.irqc_tb.Check register trigger clear mechanism
pass (P=3 S=0 F=0 T=7) irqc_lib.irqc_tb.Check register trigger clear mechanism (0.3 seconds)

Starting irqc_lib.irqc_tb.Check interrupt sources IER IPR and irq2cpu
pass (P=4 S=0 F=0 T=7) irqc_lib.irqc_tb.Check interrupt sources IER IPR and irq2cpu (0.3 seconds)

Starting irqc_lib.irqc_tb.Check autonomy for all interrupts
pass (P=5 S=0 F=0 T=7) irqc_lib.irqc_tb.Check autonomy for all interrupts (0.3 seconds)

Starting irqc_lib.irqc_tb.Check irq acknowledge and re-enable
pass (P=6 S=0 F=0 T=7) irqc_lib.irqc_tb.Check irq acknowledge and re-enable (0.3 seconds)

Starting irqc_lib.irqc_tb.Check Reset
pass (P=7 S=0 F=0 T=7) irqc_lib.irqc_tb.Check Reset (0.3 seconds)

==== Summary =========================================================================
pass irqc_lib.irqc_tb.Check defaults on output ports (0.9 seconds)
pass irqc_lib.irqc_tb.Check register defaults and access write read (0.3 seconds)
pass irqc_lib.irqc_tb.Check register trigger clear mechanism (0.3 seconds)
pass irqc_lib.irqc_tb.Check interrupt sources IER IPR and irq2cpu (0.3 seconds)
pass irqc_lib.irqc_tb.Check autonomy for all interrupts (0.3 seconds)
pass irqc_lib.irqc_tb.Check irq acknowledge and re-enable (0.3 seconds)
pass irqc_lib.irqc_tb.Check Reset (0.3 seconds)
======================================================================================
pass 7 of 7
======================================================================================
Total time was 2.6 seconds
Elapsed time was 2.6 seconds
======================================================================================
All passed!


The run.py file used to drive everything looked like this:
from os.path import dirname, join
from vunit import VUnit

root = dirname(__file__)

ui = VUnit.from_argv()
bvul_lib = ui.add_library("bitvis_util")
bvul_lib.add_source_files(join(root, "bitvis_util", "src2008", "*.vhd"))

bitvis_vip_spi_lib = ui.add_library("bitvis_vip_sbi")
bitvis_vip_spi_lib.add_source_files(join(root, "bitvis_vip_sbi", "src", "*.vhd"))

irqc_lib = ui.add_library("irqc_lib")
irqc_lib.add_source_files(join(root, "bitvis_irqc", "src", "*.vhd"))
irqc_lib.add_source_files(join(root, "bitvis_irqc", "tb", "*.vhd"))
ui.main()

The modified irqc_tb.vhd looked like this (the name collision between your log method and ours prevented me from using our VHDL-2008 context for the VUnit packages, forcing me to reference them individually to avoid exposing our log):

--=======================================================================================================================
-- Copyright (c) 2015 by Bitvis AS. All rights reserved.
-- A free license is hereby granted, free of charge, to any person obtaining
-- a copy of this VHDL code and associated documentation files (for 'Bitvis Utility Library'),
-- to use, copy, modify, merge, publish and/or distribute - subject to the following conditions:
-- - This copyright notice shall be included as is in all copies or substantial portions of the code and documentation
-- - The files included in Bitvis Utility Library may only be used as a part of this library as a whole
-- - The License file may not be modified
-- - The calls in the code to the license file ('show_license') may not be removed or modified.
-- - No other conditions whatsoever may be added to those of this License

-- BITVIS UTILITY LIBRARY AND ANY PART THEREOF ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
-- INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-- IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-- WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH BITVIS UTILITY LIBRARY.
--=======================================================================================================================
------------------------------------------------------------------------------------------
-- VHDL unit : Bitvis IRQC Library : irqc_tb
--
-- Description : See dedicated powerpoint presentation and README-file(s)
------------------------------------------------------------------------------------------


library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;

library STD;
use std.textio.all;

-- library ieee_proposed;
-- use ieee_proposed.standard_additions.all;
-- use ieee_proposed.std_logic_1164_additions.all;

library vunit_lib;
use vunit_lib.run_types_pkg.all;
use vunit_lib.run_pkg.all;
use vunit_lib.run_base_pkg.all;

library bitvis_util;
use bitvis_util.types_pkg.all;
use bitvis_util.string_methods_pkg.all;
use bitvis_util.adaptations_pkg.all;
use bitvis_util.methods_pkg.all;

library bitvis_vip_sbi;
use bitvis_vip_sbi.sbi_bfm_pkg.all;

use work.irqc_pif_pkg.all;


-- Test case entity
entity irqc_tb is
generic (runner_cfg : runner_cfg_t);
end entity;

-- Test case architecture
architecture func of irqc_tb is

-- DSP interface and general control signals
signal clk : std_logic := '0';
signal arst : std_logic := '0';
-- CPU interface
signal cs : std_logic := '0';
signal addr : unsigned(2 downto 0) := (others => '0');
signal wr : std_logic := '0';
signal rd : std_logic := '0';
signal din : std_logic_vector(7 downto 0) := (others => '0');
signal dout : std_logic_vector(7 downto 0) := (others => '0');
signal rdy : std_logic := '1'; -- Always ready in the same clock cycle
-- Interrupt related signals
signal irq_source : std_logic_vector(C_NUM_SOURCES-1 downto 0) := (others => '0');
signal irq2cpu : std_logic := '0';
signal irq2cpu_ack : std_logic := '0';


signal clock_ena : boolean := false;

constant C_CLK_PERIOD : time := 10 ns;


procedure clock_gen(
signal clock_signal : inout std_logic;
signal clock_ena : in boolean;
constant clock_period : in time
) is
variable v_first_half_clk_period : time := clock_period / 2; -- use the parameter, not C_CLK_PERIOD
begin
loop
if not clock_ena then
wait until clock_ena;
end if;
wait for v_first_half_clk_period;
clock_signal <= not clock_signal;
wait for (clock_period - v_first_half_clk_period);
clock_signal <= not clock_signal;
end loop;
end;

subtype t_irq_source is std_logic_vector(C_NUM_SOURCES-1 downto 0);

-- Trim (cut) a given vector to fit the number of irq sources (i.e. pot. reduce width)
function trim(
constant source : std_logic_vector;
constant num_bits : positive := C_NUM_SOURCES)
return t_irq_source is
variable v_result : std_logic_vector(source'length-1 downto 0) := source;
begin
return v_result(num_bits-1 downto 0);
end;

-- Fit a given vector to the number of irq sources by masking with zeros above irq width
function fit(
constant source : std_logic_vector;
constant num_bits : positive := C_NUM_SOURCES)
return std_logic_vector is
variable v_result : std_logic_vector(source'length-1 downto 0) := (others => '0');
variable v_source : std_logic_vector(source'length-1 downto 0) := source;
begin
v_result(num_bits-1 downto 0) := v_source(num_bits-1 downto 0);
return v_result;
end;




begin

-----------------------------------------------------------------------------
-- Instantiate DUT
-----------------------------------------------------------------------------
i_irqc: entity work.irqc
port map (
-- DSP interface and general control signals
clk => clk,
arst => arst,
-- CPU interface
cs => cs,
addr => addr,
wr => wr,
rd => rd,
din => din,
dout => dout,
-- Interrupt related signals
irq_source => irq_source,
irq2cpu => irq2cpu,
irq2cpu_ack => irq2cpu_ack
);


-- Set up clock generator
clock_gen(clk, clock_ena, 10 ns);

------------------------------------------------
-- PROCESS: p_main
------------------------------------------------
p_main: process
constant C_SCOPE : string := C_TB_SCOPE_DEFAULT;

procedure pulse(
signal target : inout std_logic;
signal clock_signal : in std_logic;
constant num_periods : in natural;
constant msg : in string
) is
begin
if num_periods > 0 then
wait until falling_edge(clock_signal);
target <= '1';
for i in 1 to num_periods loop
wait until falling_edge(clock_signal);
end loop;
else
target <= '1';
wait for 0 ns; -- Delta cycle only
end if;
target <= '0';
log(ID_SEQUENCER_SUB, msg, C_SCOPE);
end;

procedure pulse(
signal target : inout std_logic_vector;
constant pulse_value : in std_logic_vector;
signal clock_signal : in std_logic;
constant num_periods : in natural;
constant msg : in string) is
begin
if num_periods > 0 then
wait until falling_edge(clock_signal);
target <= pulse_value;
for i in 1 to num_periods loop
wait until falling_edge(clock_signal);
end loop;
else
target <= pulse_value;
wait for 0 ns; -- Delta cycle only
end if;
target(target'range) <= (others => '0');
log(ID_SEQUENCER_SUB, "Pulsed to " & to_string(pulse_value, HEX, AS_IS, INCL_RADIX) & ". " & msg, C_SCOPE);
end;



  -- Log overloads for simplification
  procedure log(
    msg : string) is
  begin
    log(ID_SEQUENCER, msg, C_SCOPE);
  end;

  -- Overloads for PIF BFMs for SBI (Simple Bus Interface)
  procedure write(
    constant addr_value : in natural;
    constant data_value : in std_logic_vector;
    constant msg        : in string) is
  begin
    sbi_write(to_unsigned(addr_value, addr'length), data_value, msg,
              clk, cs, addr, rd, wr, rdy, din, C_CLK_PERIOD, C_SCOPE);
  end;

  procedure check(
    constant addr_value  : in natural;
    constant data_exp    : in std_logic_vector;
    constant alert_level : in t_alert_level;
    constant msg         : in string) is
  begin
    sbi_check(to_unsigned(addr_value, addr'length), data_exp, alert_level, msg,
              clk, cs, addr, rd, wr, rdy, dout, C_CLK_PERIOD, C_SCOPE);
  end;

  procedure set_inputs_passive(
    dummy : t_void) is
  begin
    cs          <= '0';
    addr        <= (others => '0');
    wr          <= '0';
    rd          <= '0';
    din         <= (others => '0');
    irq_source  <= (others => '0');
    irq2cpu_ack <= '0';
    log(ID_SEQUENCER_SUB, "All inputs set passive", C_SCOPE);
  end;



  variable v_time_stamp   : time := 0 ns;
  variable v_irq_mask     : std_logic_vector(7 downto 0);
  variable v_irq_mask_inv : std_logic_vector(7 downto 0);

begin
  test_runner_setup(runner, runner_cfg);

  -- Use VUnit output path
  set_log_file_name(output_path(runner_cfg) & "_Log.txt");
  set_alert_file_name(output_path(runner_cfg) & "_Alert.txt");

  -- Print the configuration to the log
  report_global_ctrl(VOID);
  report_msg_id_panel(VOID);

  enable_log_msg(ALL_MESSAGES);
  --disable_log_msg(ALL_MESSAGES);
  --enable_log_msg(ID_LOG_HDR);

  log(ID_LOG_HDR, "Start Simulation of TB for IRQC", C_SCOPE);
  ------------------------------------------------------------

  set_inputs_passive(VOID);
  clock_ena <= true; -- to start clock generator
  pulse(arst, clk, 10, "Pulsed reset-signal - active for 10T");
  v_time_stamp := now; -- time from which irq2cpu should be stable off until triggered

  check_value(C_NUM_SOURCES > 0, FAILURE, "Must be at least 1 interrupt source", C_SCOPE);
  check_value(C_NUM_SOURCES <= 8, TB_WARNING, "This TB is only checking IRQC with up to 8 interrupt sources", C_SCOPE);

  while test_suite loop
    if run("Check defaults on output ports") then
      check_value(irq2cpu, '0', ERROR, "Interrupt to CPU must be default inactive", C_SCOPE);
      check_value(dout, x"00", ERROR, "Register data bus output must be default passive");

    elsif run("Check register defaults and access write read") then
      log("\nChecking Register defaults");
      check(C_ADDR_IRR, x"00", ERROR, "IRR default");
      check(C_ADDR_IER, x"00", ERROR, "IER default");
      check(C_ADDR_IPR, x"00", ERROR, "IPR default");
      check(C_ADDR_IRQ2CPU_ALLOWED, x"00", ERROR, "IRQ2CPU_ALLOWED default");

      log("\nChecking Register Write/Read");
      write(C_ADDR_IER, fit(x"55"), "IER");
      check(C_ADDR_IER, fit(x"55"), ERROR, "IER pure readback");
      write(C_ADDR_IER, fit(x"AA"), "IER");
      check(C_ADDR_IER, fit(x"AA"), ERROR, "IER pure readback");
      write(C_ADDR_IER, fit(x"00"), "IER");
      check(C_ADDR_IER, fit(x"00"), ERROR, "IER pure readback");

    elsif run("Check register trigger clear mechanism") then
      write(C_ADDR_ITR, fit(x"AA"), "ITR : Set interrupts");
      check(C_ADDR_IRR, fit(x"AA"), ERROR, "IRR");
      write(C_ADDR_ITR, fit(x"55"), "ITR : Set more interrupts");
      check(C_ADDR_IRR, fit(x"FF"), ERROR, "IRR");
      write(C_ADDR_ICR, fit(x"71"), "ICR : Clear interrupts");
      check(C_ADDR_IRR, fit(x"8E"), ERROR, "IRR");
      write(C_ADDR_ICR, fit(x"85"), "ICR : Clear interrupts");
      check(C_ADDR_IRR, fit(x"0A"), ERROR, "IRR");
      write(C_ADDR_ITR, fit(x"55"), "ITR : Set more interrupts");
      check(C_ADDR_IRR, fit(x"5F"), ERROR, "IRR");
      write(C_ADDR_ICR, fit(x"5F"), "ICR : Clear interrupts");
      check(C_ADDR_IRR, fit(x"00"), ERROR, "IRR");

    elsif run("Check interrupt sources IER IPR and irq2cpu") then
      log("\nChecking interrupts and IRR");
      write(C_ADDR_ICR, fit(x"FF"), "ICR : Clear all interrupts");
      pulse(irq_source, trim(x"AA"), clk, 1, "Pulse irq_source 1T");
      check(C_ADDR_IRR, fit(x"AA"), ERROR, "IRR after irq pulses");
      pulse(irq_source, trim(x"01"), clk, 1, "Add more interrupts");
      check(C_ADDR_IRR, fit(x"AB"), ERROR, "IRR after irq pulses");
      pulse(irq_source, trim(x"A1"), clk, 1, "Repeat same interrupts");
      check(C_ADDR_IRR, fit(x"AB"), ERROR, "IRR after irq pulses");
      pulse(irq_source, trim(x"54"), clk, 1, "Add remaining interrupts");
      check(C_ADDR_IRR, fit(x"FF"), ERROR, "IRR after irq pulses");
      write(C_ADDR_ICR, fit(x"AA"), "ICR : Clear half the interrupts");
      pulse(irq_source, trim(x"A0"), clk, 1, "Add more interrupts");
      check(C_ADDR_IRR, fit(x"F5"), ERROR, "IRR after irq pulses");
      write(C_ADDR_ICR, fit(x"FF"), "ICR : Clear all interrupts");
      check(C_ADDR_IRR, fit(x"00"), ERROR, "IRR after clearing all");

      log("Checking IER IPR and irq2cpu");
      write(C_ADDR_ICR, fit(x"FF"), "ICR : Clear all interrupts");
      write(C_ADDR_IER, fit(x"55"), "IER : Enable some interrupts");
      write(C_ADDR_ITR, fit(x"AA"), "ITR : Trigger non-enabled interrupts");
      check(C_ADDR_IPR, fit(x"00"), ERROR, "IPR should not be active");
      check(C_ADDR_IRQ2CPU_ALLOWED, x"00", ERROR, "IRQ2CPU_ALLOWED should not be active");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Enable main interrupt to CPU");
      check(C_ADDR_IRQ2CPU_ALLOWED, x"01", ERROR, "IRQ2CPU_ALLOWED should now be active");
      check_value(irq2cpu, '0', ERROR, "Interrupt to CPU must still be inactive", C_SCOPE);
      check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu", C_SCOPE);
      pulse(irq_source, trim(x"01"), clk, 1, "Add a single enabled interrupt");
      await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt expected immediately", C_SCOPE);
      v_time_stamp := now; -- from time of stable active irq2cpu
      check(C_ADDR_IRR, fit(x"AB"), ERROR, "IRR should now be active");
      check(C_ADDR_IPR, fit(x"01"), ERROR, "IPR should now be active");

      log("\nMore details checked in the autonomy section below");
      check_value(irq2cpu, '1', ERROR, "Interrupt to CPU must still be active", C_SCOPE);
      check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu", C_SCOPE);

    elsif run("Check autonomy for all interrupts") then
      write(C_ADDR_ICR, fit(x"FF"), "ICR : Clear all interrupts");
      write(C_ADDR_IER, fit(x"FF"), "IER : Enable all interrupts");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Allow interrupt to CPU");
      for i in 0 to C_NUM_SOURCES-1 loop
        log(" ");
        log("- Checking irq_source(" & to_string(i) & ") and all corresponding functionality");
        log("- - Check interrupt activation not affected by non related interrupts or registers");
        v_time_stamp   := now; -- from time of stable inactive irq2cpu
        v_irq_mask     := (i => '1', others => '0');
        v_irq_mask_inv := (i => '0', others => '1');
        write(C_ADDR_IER, v_irq_mask, "IER : Enable selected interrupt");
        pulse(irq_source, trim(v_irq_mask_inv), clk, 1, "Pulse all non-enabled interrupts");
        write(C_ADDR_ITR, v_irq_mask_inv, "ITR : Trigger all non-enabled interrupts");
        check(C_ADDR_IRR, fit(v_irq_mask_inv), ERROR, "IRR not yet triggered");
        check(C_ADDR_IPR, x"00", ERROR, "IPR not yet triggered");
        check_value(irq2cpu, '0', ERROR, "Interrupt to CPU must still be inactive", C_SCOPE);
        check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu", C_SCOPE);
        pulse(irq_source, trim(v_irq_mask), clk, 1, "Pulse the enabled interrupt");
        await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt expected immediately", C_SCOPE);
        check(C_ADDR_IRR, fit(x"FF"), ERROR, "All IRR triggered");
        check(C_ADDR_IPR, v_irq_mask, ERROR, "IPR triggered for selected");

        log("\n- - Check interrupt deactivation not affected by non related interrupts or registers");
        v_time_stamp := now; -- from time of stable active irq2cpu
        write(C_ADDR_ICR, v_irq_mask_inv, "ICR : Clear all non-enabled interrupts");
        write(C_ADDR_IER, fit(x"FF"), "IER : Enable all interrupts");
        write(C_ADDR_IER, v_irq_mask, "IER : Disable non-selected interrupts");
        pulse(irq_source, trim(x"FF"), clk, 1, "Pulse all interrupts");
        write(C_ADDR_ITR, x"FF", "ITR : Trigger all interrupts");
        check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu (='1')", C_SCOPE);
        write(C_ADDR_IER, v_irq_mask_inv, "IER : Enable all interrupts but disable selected");
        check_value(irq2cpu, '1', ERROR, "Interrupt to CPU still active", C_SCOPE);
        check(C_ADDR_IRR, fit(x"FF"), ERROR, "IRR still active for all");
        write(C_ADDR_ICR, v_irq_mask_inv, "ICR : Clear all non-enabled interrupts");
        await_value(irq2cpu, '0', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt deactivation expected immediately", C_SCOPE);
        write(C_ADDR_IER, v_irq_mask, "IER : Re-enable selected interrupt");
        await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt reactivation expected immediately", C_SCOPE);
        check(C_ADDR_IPR, v_irq_mask, ERROR, "IPR still active for selected");
        write(C_ADDR_ICR, v_irq_mask, "ICR : Clear selected interrupt");
        check_value(irq2cpu, '0', ERROR, "Interrupt to CPU must go inactive", C_SCOPE);
        check(C_ADDR_IRR, x"00", ERROR, "IRR all inactive");
        check(C_ADDR_IPR, x"00", ERROR, "IPR all inactive");
        write(C_ADDR_IER, x"00", "IER : Disable all interrupts");
      end loop;

      report_alert_counters(INTERMEDIATE); -- Report intermediate counters

    elsif run("Check irq acknowledge and re-enable") then
      log("- Activate interrupt");
      write(C_ADDR_ITR, x"01", "ITR : Set single interrupt");
      write(C_ADDR_IER, x"01", "IER : Enable single interrupt");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Allow interrupt to CPU");
      await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt activation expected", C_SCOPE);
      v_time_stamp := now; -- from time of stable active irq2cpu

      log("\n- Try potential malfunction");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Allow interrupt to CPU again - should not affect anything");
      write(C_ADDR_IRQ2CPU_ENA, x"00", "IRQ2CPU_ENA : Set to 0 - should not affect anything");
      write(C_ADDR_IRQ2CPU_DISABLE, x"00", "IRQ2CPU_DISABLE : Set to 0 - should not affect anything");
      check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu (='1')", C_SCOPE);

      log("\n- Acknowledge and deactivate interrupt");
      pulse(irq2cpu_ack, clk, 1, "Pulse irq2cpu_ack");
      await_value(irq2cpu, '0', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt deactivation expected", C_SCOPE);
      v_time_stamp := now; -- from time of stable inactive irq2cpu

      log("\n- Test for potential malfunction");
      write(C_ADDR_IRQ2CPU_DISABLE, x"01", "IRQ2CPU_DISABLE : Disable interrupt to CPU again - should not affect anything");
      write(C_ADDR_IRQ2CPU_DISABLE, x"00", "IRQ2CPU_DISABLE : Set to 0 - should not affect anything");
      write(C_ADDR_IRQ2CPU_ENA, x"00", "IRQ2CPU_ENA : Set to 0 - should not affect anything");
      write(C_ADDR_ITR, x"FF", "ITR : Trigger all interrupts");
      write(C_ADDR_IER, x"FF", "IER : Enable all interrupts");
      pulse(irq_source, trim(x"FF"), clk, 1, "Pulse all interrupts");
      pulse(irq2cpu_ack, clk, 1, "Pulse irq2cpu_ack");
      check_stable(irq2cpu, (now - v_time_stamp), ERROR, "No spikes allowed on irq2cpu (='0')", C_SCOPE);

      log("\n- Re-/de-activation");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Reactivate interrupt to CPU");
      await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt reactivation expected", C_SCOPE);
      write(C_ADDR_IRQ2CPU_DISABLE, x"01", "IRQ2CPU_DISABLE : Deactivate interrupt to CPU");
      await_value(irq2cpu, '0', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt deactivation expected", C_SCOPE);
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Reactivate interrupt to CPU");
      await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt reactivation expected", C_SCOPE);

    elsif run("Check Reset") then
      log("- Activate all interrupts");
      write(C_ADDR_ITR, x"FF", "ITR : Set all interrupts");
      write(C_ADDR_IER, x"FF", "IER : Enable all interrupts");
      write(C_ADDR_IRQ2CPU_ENA, x"01", "IRQ2CPU_ENA : Allow interrupt to CPU");
      await_value(irq2cpu, '1', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt activation expected", C_SCOPE);
      pulse(arst, clk, 1, "Pulse reset");
      await_value(irq2cpu, '0', 0 ns, C_CLK_PERIOD, ERROR, "Interrupt deactivation", C_SCOPE);
      check(C_ADDR_IER, x"00", ERROR, "IER all inactive");
      check(C_ADDR_IRR, x"00", ERROR, "IRR all inactive");
      check(C_ADDR_IPR, x"00", ERROR, "IPR all inactive");
    end if;
  end loop;



  --================================================================================================
  -- Ending the simulation
  --------------------------------------------------------------------------------------
  wait for 1000 ns;             -- to allow some time for completion
  report_alert_counters(FINAL); -- Report final counters and print conclusion for simulation (Success/Fail)
  test_runner_cleanup(runner);
  wait;                         -- to stop completely

end process p_main;

end func;



On Saturday, June 20, 2015 at 1:50:45 PM UTC+2, olof.k...@gmail.com wrote:
I downloaded the BVUL to have a look. It looks very similar to what we have in VUnit. I noticed the example test bench bitvis_irqc/tb/irqc_tb.vhd could benefit from the VUnit Python/VHDL automation. My interpretation is that you use log messages with ID_LOG_HDR to visually/textually separate different independent test cases. I count 8 of those. With VUnit you could have those 8 as actual independent test cases run in different simulations (or optionally all in the same simulation) with individual pass/fail in the test report. An individual test can easily be run from the command line using a wildcard (*) pattern. VUnit would also ensure that each test case gets its own dedicated output folder to gather all simulation artifacts, such as the complete stderr/stdout, wlf and transcript, as well as any other user-defined outputs such as images or other binary data files.

On Saturday, June 20, 2015 at 12:21:59 PM UTC+2, Brian Drummond wrote:
On Fri, 19 Jun 2015 07:20:39 -0700, espen.tallaksen wrote:

Hi Olof,

I agree that BVUL only covers your item 2 above, but then again item 2
is where you can actually save by far the most hours in a complex FPGA
project. Items 1 and 3 are also important, and will save quite a few
hours.

I think perhaps it could be great if we could cooperate on the
combination Vunit and BVUL, so that we could get the best out of two
worlds, but we could take that discussion off line ;-)

By all means take the details off-line but please summarize the outcome
here!

Thanks to yourself, Olof and Lars for discussing - and indeed, creating -
these useful tools!

-- Brian
 
Hi Olof,
My explanation to why BVUL is not just another flavour was unfortunately far too brief. Let me elaborate a bit without doing any direct comparisons.
- BVUL has ID-based verbosity control. Most other systems are priority based. Prioritising log messages may seem like a good idea, but only works for very basic testbenches. Simple example: what has the higher priority, a message saying you have received a packet header, or a message saying you have received a complete packet? Obviously your priorities change from when you are debugging your receiver - detecting the header, address, correct CRC etc. - to when your receiver is properly debugged and you only want to know that you have received a correct packet, or even 100 packets. An ID-based verbosity system, where you enable say ID_PACKET_HDR, ID_PACKET_COMPLETE and ID_PACKET_DATA separately, allows full flexibility and lets you change your priorities as you develop your testbench. An ID-based verbosity control system is far easier to use, as you control things based on functionality, which is just what you want. It may even be used as a priority-based system if you really want to, but not the other way around.
- BVUL has positive acknowledge on all checks, so that you may get a message saying that a given check (with detailed info) has been executed and passed, and not just an alert if it fails (plus potential counting). The positive acknowledge may of course be turned off. Very few systems have this capability.
- BVUL also has some other very useful features that most other libraries do not have, but perhaps the most important aspect of BVUL is the extremely low user threshold. We advise browsing through our provided PPT to get an overview, but once you have done that, all feedback so far has been that it is dead simple to use.
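The ID-based verbosity control described above is not tied to VHDL; a minimal Python sketch of the idea (all names here are illustrative only, not the actual BVUL API):

```python
# Toy model of ID-based verbosity control: every log message carries a
# functional ID, and each ID is enabled or disabled independently.
# All names here are made up for illustration; this is not the BVUL API.

class IdLogger:
    def __init__(self):
        self.enabled = set()   # IDs currently allowed through
        self.lines = []        # captured output, for inspection

    def enable(self, msg_id):
        self.enabled.add(msg_id)

    def disable(self, msg_id):
        self.enabled.discard(msg_id)

    def log(self, msg_id, msg):
        # Filter on functionality (the ID), not on a global priority level
        if msg_id in self.enabled:
            self.lines.append(f"{msg_id}: {msg}")

logger = IdLogger()
# While debugging the receiver you would also enable ID_PACKET_HDR etc.;
# once it works, only the completion ID is of interest:
logger.enable("ID_PACKET_COMPLETE")
logger.log("ID_PACKET_HDR", "header received")         # suppressed
logger.log("ID_PACKET_COMPLETE", "packet 1 received")  # shown
```

The point of the sketch is that the filter is keyed on what the message is about, so shifting focus from header-level to packet-level debugging is a matter of enabling different IDs, with no global priority ladder to fight.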

In my experience as a consultant for 20 years now, testbench structure is the worst source of wasted time.
If we take an average 5000 man-hour FPGA development project, I would say that on average the verification part (say 2500 h) could have been reduced by at least 1000 hours if the team had structured their testbenches properly and provided good progress reports (logging) and alert handling.
The IRQC example is more like a 20-hour project. I have included two bugs in the design, presented this at universities and in our course 'FPGA Development Best Practices', and shown the audience the testbench log/alert transcript only. They have always found the source of the bugs just by looking at the transcript, in less than 30 seconds, with no need for the wave view. The Wilson report shows that nearly half of the verification time is spent on debugging. Thus a proper progress report is key to efficiency.
For larger designs of course even more so.

I have definitely seen that proper regression testing mechanisms are also important, which is why I say that your issues 1 and 3 do in fact matter as well, but my experience is that in this 5000 man-hour project they would give an average improvement of say 100 hours, which is still very worthwhile. Of course there are projects where this number is far higher, but similarly there are projects wasting another 2000-3000 hours due to bad testbench structure.

Immediate stop on an error is my preferred way as well for simpler debugging, but having a good progress report prior to the error helps a lot.
(Sometimes, however, you run your test overnight, and then you may want to detect more bugs - either by running more separate test cases, e.g. using VUnit, or by getting further into a test case that is really time consuming because it needs to be; some tests cannot be cut into short pieces.)

Again - I really do like the unit testing features of VUnit, and we do need a structured approach to regression testing. This is why I think it would have been great to cooperate to make all parts better. (I'll come back to that in a separate response "further down")
 
Hi Brian,
You wanted a summary of our off-line discussion on potential cooperation between VUnit and BVUL.
I think the very brief version is that we agree to look into collaboration opportunities when they appear, but just now we are not quite there.
Olof did however demonstrate the unit testing capabilities of VUnit, and the fact that BVUL may be used seamlessly with VUnit, without making changes to either of them.

-Espen
 
Espen,

I can agree that lack of proper testbench structure is a big cause of inefficiency in a typical FPGA project. I wouldn't say that this is mostly due to missing library features, but rather due to a lack of software/verification skills. It is quite possible to write a good test bench without any supporting library, and also possible to write a poor test bench using a good supporting library. People will not learn to write good test benches quickly by using either BVUL or VUnit, but rather primarily by working in teams with people that possess the skills, and secondarily by taking verification courses. Also, in my opinion, writing good test benches is more of a software skill. The people I know that are good at writing test benches also have software experience, and the people I know that are worse at it do not have much software experience and are more purely hardware oriented. The libraries provided by BVUL/VUnit/OSVVM will provide the good test bench developer with better tools than they would have taken the time to create for themselves, and that is still a big benefit. It is great that it is possible to use them all together, further increasing the toolbox of the test bench writer.

I also agree that debugging is a big time waster. The solution is primarily to avoid the need for debugging rather than making it easier. First let me identify two types of debugging:

I) The first kind is when a bug is reported in the field and it must be re-produced. The work performed while reproducing the bug can be called debugging.

II) The second kind is when an existing module is extended or modified and it causes a regression in an existing test. The process of figuring out why the regression test failed can be called debugging.

The road to reducing the need for debugging is the use of a better verification methodology. By having many small tests that each test a small piece of functionality, the debugging effort in II) is greatly reduced. A large module can be significantly harder to modify or re-use when it only has a large end-to-end test, compared to when it has many smaller tests or even tests for its sub-modules. Also, when having small and quick tests the designer can re-run the tests more often, even for smaller modifications; the easiest bug to find is in the line you just wrote 10 seconds ago. The end-to-end test is still necessary to ensure that the parts work as a whole, but it should only catch integration issues and not faults in the individual parts.

Lots of research and testimony from the software development world, which has many parallels to FPGA development and a lot to teach, has shown that writing many small tests of small parts increases the quality of the code base. It forces the design to be more modular and less tightly coupled, with more well-defined interfaces. This reduces the likelihood of bugs which cause I) and also reduces the cost of modification/re-use. I have seen many 3000-line state machines in large modules with only a big end-to-end test, which should really have been split up into multiple smaller parts. Such modules sooner or later just have to be re-written because they cannot accommodate new functionality. In many complex projects, or projects using agile methods, the development process is nothing more than a steady stream of modifications, making the reduction of their cost very beneficial. Many companies can also save money by re-using modules between product families or new products with more or less modification. In my opinion it is good regression testing that facilitates re-use, rather than any IP-packaging format or similar.

So how does the methodology described above relate to VUnit? Well, it was created to facilitate the methodology by significantly reducing the cost of having many tests per test bench and many test benches, both for the regression testing use case and for the daily edit/compile/run use case where a designer wants to quickly re-run multiple tests/test benches for each small modification. It is by using this methodology, enabled by VUnit, that a typical project can save a lot of time and dramatically increase quality and re-usability. Just having the VHDL part of VUnit or BVUL alone could not enable the above methodology nearly as well, but it is an important piece of the puzzle nonetheless.

I finally conclude that any BVUL user could benefit from using VUnit together with BVUL enabling the above methodology without using the parts of VUnit that are redundant with BVUL as I have shown in my previous posts.
 
Hi Olof,

A final comment from my side.
I agree with many of your points, but I think writing a good testbench depends far more on your experience, structure, awareness of the ROI (return on investment) of such structure, and a good methodology. The quality of the testbench is *the* main key to efficiency and quality.
And of course BVUL (or other similar libraries) is just one piece in this puzzle. For simple testbenches it is a major piece; for complex testbenches it is a minor piece, but still a cornerstone for other pieces. For complex testbenches their structure is by far the most important piece. To verify corner cases you need to be able to control different interfaces simultaneously in a controlled manner, and for this verification components are the best approach. We will hopefully present a solution for that at FPGAworld in September with 'UVVM', which handles this in a very structured manner. Other important pieces in this puzzle are constrained random, coverage, scoreboards, etc.
I still agree that unit testing is also very important, but unfortunately for some applications some simulations are time consuming because you just have to run for a long time before your DUT reaches a certain state, and you have to verify that e.g. lots of different submodules work together as expected. (By comparison verifying for instance the *implementation* of filters and sub-filters is dead simple.)
Most huge state machines are bad design structure, but I guess that is a different discussion. (We spend nearly a day on that alone in our course 'FPGA Development Best Practices', so I agree this is definitely a problem for many FPGA projects.)
Handling (or not handling) the complex verification scenarios is where lots of projects are wasting several hundred man-hours, and sometimes far more than a thousand, either because they do lab-test/patch iterations forever or because they don't structure their testbenches sufficiently. And for this they need methodology, awareness, structure at all levels, and debug support. BVUL is just a library that supports this approach very well, but only for the basic logging, alert handling, proper verbosity control, checks and awaits. OSVVM is a different library that is excellent for constrained random and coverage. Another library is UVVM (to be released in September), which provides a very structured verification component environment and TLM for a really understandable handling of simultaneous stimuli and checking of multiple interfaces. In fact the combination of these three libraries provides a unified testbench approach.
And they could all work together with VUnit for unit testing :)
 
Hello,

Interesting discussion!

As Espen mentioned, we will take small steps where we find common ground, and in the first iteration VUnit will provide better support for coexistence between the two libraries, in addition to the possibilities already existing. We will provide means to handle the name collision that exists for the log procedures, and we will also provide a thoroughly documented example of how coexistence is achieved. I'll get back when this is on Github.

Next, I would like to comment on the last few posts. Olof has already said a lot about the productivity gains of unit testing, but this is a very important point so I will add a bit more (and probably repeat a bit).

First of all, the productivity gain of unit testing is NOT a result of the time saved running all your self-testing testbenches automatically rather than opening the GUI, loading the testbench, and hitting run for each and every one. It comes because it enables very short code/test cycles, so you can start testing early and do it frequently. When I say frequently I mean at the pace you add bugs to your code, which is many times a day (at least for me). This frequency won't happen unless you have a tool chain supporting it. Some benefits are:

- The obvious one is that the sooner you find the bug the less damage for you and your team. Ideally you should find the bug when the code is still fresh in the developer's mind.
- When you have a fully automated test environment you also become very responsive to the changes in requirements and design that happen all the time in most projects. If you can handle these change requests and quickly make sure that everything still works, then you have a competitive edge. Take the VUnit project as an example. Since it was released about half a year ago, 8 contributors have made about 250 commits, which added about 30000 lines (code and documentation) and removed about 12000 lines. An enhanced version of the tool is typically made public one or several times a week (and it's not about continuous bug fixing). Considering that we support several simulators, VHDL standards, Python versions, and operating systems, this would not be possible unless we had test suites verifying the quality of each and every release. Ok, we don't have the many-hour tests, synthesis, and place and route, but even if you have those, the release cycle can be very quick if you automate.
- When testing becomes such an integral part of what the developer does, it also starts to affect the quality of the code in a positive way. Code that is hard to test is a bad code smell, i.e. an indication of bad quality. This means that the drive to do low-level testing also enhances the quality at that level. You may discover these bad smells when testing later at a higher level as well, but then it's much harder to correct. Since test drives the design, many unit test practitioners adopt test-driven design (TDD), where the basic concept is to write the test first and then implement the design that makes that test pass.

What I'm saying is that unit testing enables a way of working that affects many parts of the project, not just verification, and that's why it has such an impact. The effect on project success rates has been shown in research.

Some words about what we support and what we don't.

VUnit also uses "ID-based verbosity control", but we don't call it an ID but a source. For example, here I'm doing a debug log with no special source:

debug("This is a debug message");

But I can add a source (ID) if I want:

debug("This is a debug message", "Some source name");

If I want to stop messages from "Some source name" from appearing on stdout I can add

stop_source("Some source name", display_handler, filter);

What I've done is add a stop filter to the display handler. I can individually decide what filters to have for the log file (if any) by using the file_handler instead. The filter is returned so that it can be removed later. I can add many filters to a handler, have pass filters, filter on log level, e.g. log all debug messages to file but don't show them on the display. I can also filter hierarchies. https://github.com/LarsAsplund/vunit/blob/master/examples/logging/logging_example.vhd shows the different capabilities.

VUnit doesn't support positive acknowledge on the checker/assert/alert level. It hasn't really been asked for, but I opened this issue (https://github.com/LarsAsplund/vunit/issues/53) so that you can support its addition. It's an easy fix. I think the reason for not normally seeing this among unit test frameworks is that it yields a lot of text. For example, the VUnit VHDL part is verified with 22 test suites containing 280 test cases containing 1600 checks/asserts. We keep them public so that you can make your own modifications, possibly contribute code, and be confident you didn't destroy anything (see https://github.com/LarsAsplund/vunit/blob/master/developing.md). When developing a piece of code I mostly run the test suite for that code. Such a test suite contains, on average, 13 test cases and 75 checks. A summary of 13 test cases is a good overview which you might actually read. 75 "passed" messages is a bit much and may not even fit in your window.
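The trade-off discussed here, positive acknowledge per check versus a compact summary, can be sketched in a few lines of Python (the names below are illustrative only, taken from neither library):

```python
# Toy checker contrasting "positive acknowledge" (one transcript line per
# passing check) with pure statistics collapsed into a single summary line.
# Names are made up for illustration; this is neither the VUnit nor BVUL API.

class Checker:
    def __init__(self, positive_ack=False):
        self.positive_ack = positive_ack
        self.passed = 0
        self.failed = 0
        self.lines = []        # what would reach the transcript

    def check(self, condition, msg):
        if condition:
            self.passed += 1
            if self.positive_ack:
                self.lines.append(f"check OK: {msg}")
        else:
            self.failed += 1
            self.lines.append(f"ALERT: {msg}")

    def summary(self):
        return (f"Checks: {self.passed + self.failed}, "
                f"Passed: {self.passed}, Failed: {self.failed}")

quiet = Checker(positive_ack=False)
for i in range(75):                 # 75 passing checks ...
    quiet.check(True, f"check {i}")
print(quiet.summary())              # ... collapse into one summary line

verbose = Checker(positive_ack=True)
verbose.check(True, "addr readback")  # each pass gets its own line
```

With positive acknowledge off, 75 passing checks produce nothing but the counters behind the summary line; with it on, each pass is logged individually, which is exactly the volume concern raised above.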

When we describe VUnit we usually go directly for the end goal, full automation, since that provides the greatest value. However, you can do this in smaller steps to pick a threshold that fits you. I've seen different approaches to starting with VUnit depending on project background, but here is one way, starting with a pure VHDL testbench, which makes it similar to BVUL. For simplicity I've excluded any real DUT and only test basic VHDL behaviour.

library vunit_lib;
context vunit_lib.vunit_context; -- Get all VUnit-related functionality

entity tb_comp_lang_vhdl_example is
  generic (
    runner_cfg : runner_cfg_t := runner_cfg_default); -- Use configuration from script or defaults
end entity tb_comp_lang_vhdl_example;

architecture test_fixture of tb_comp_lang_vhdl_example is
begin
  -- Normally I would have a DUT, clock generators and so on here

  test_runner: process is
    variable filter : log_filter_t;
  begin
    test_runner_setup(runner, runner_cfg); -- Setup with provided configuration

    logger_init(runner_trace_logger, display_format => raw); -- Enable runner trace log on display with "raw" format. Only active on file by default
    pass_level(runner_trace_logger, info, display_handler, filter); -- Exclude details and only display basic info, in this case the currently active test case

    while test_suite loop -- Loop over the set of test cases
      if run("Test that addition works") then -- Every if statement branch like this defines a named test case
        check(1 + 1 = 2, "VHDL can't do addition!"); -- Use various checks for verification
      elsif run("Test that subtraction works") then
        check_equal(5 - 3, 2);
      end if;
    end loop;

    info(LF & "=== Summary ===");
    info(to_string(get_checker_stat)); -- Make an info message of basic statistics

    test_runner_cleanup(runner); -- Wrap-up
  end process test_runner;
end;

This will result in the following output in Modelsim

# Test case: Test that addition works
# Test case: Test that subtraction works
#
# === Summary ===
# Checks: 2
# Passed: 2
# Failed: 0

Making this fully automated with Python requires another baby step. Just add this run.py script

from vunit import VUnit
from os.path import join, dirname

ui = VUnit.from_argv()
lib = ui.add_library("lib")
lib.add_source_files(join(dirname(__file__), "*.vhd"))
ui.main()

The last three lines are the most interesting. First we create a VHDL library called lib. Then we add all the .vhd files found in the same directory as this script file (in this case we only have one file). Then we call main to run. It will find all VHDL files, figure out their dependencies so that they can be compiled in the correct order, and only compile what's needed based on changes. Then it will find all testbenches (here only one) and run their test cases. Just type

python run.py

and you'll get the following result

Starting lib.tb_comp_lang_vhdl_example.Test that addition works
pass (P=1 S=0 F=0 T=2) lib.tb_comp_lang_vhdl_example.Test that addition works (1.9 seconds)

Starting lib.tb_comp_lang_vhdl_example.Test that subtraction works
pass (P=2 S=0 F=0 T=2) lib.tb_comp_lang_vhdl_example.Test that subtraction works
(0.5 seconds)

==== Summary ==================================================================
pass lib.tb_comp_lang_vhdl_example.Test that addition works (1.9 seconds)
pass lib.tb_comp_lang_vhdl_example.Test that subtraction works (0.5 seconds)
===============================================================================
pass 2 of 2
===============================================================================
Total time was 2.4 seconds
Elapsed time was 2.4 seconds
===============================================================================
All passed!

So it's really not very complicated. From run.py you also get various options (do python run.py -h to see them all), like running the tests on many parallel cores or opening and running a specific test case in the GUI.

/Lars
 
On Friday, June 12, 2015 at 9:39:45 AM UTC-4, espen.t...@bitvis.no wrote:
> on average 50% of FPGA designers' time is spent on verification, and
> almost half of that verification time is spent on debugging. This means:
>
> 1. Good reports for unexpected design behaviour is critical.
> 2. Good progress reporting is also critical.
> 3. Good basic testbench features are required
>
> Thus we need a library with good functionality for mismatch reporting,
> progress reporting and for checks etc. that are needed for every single
> testbench; like

Since one can just as easily do all of the above with straight VHDL and be just as concise or even more so, it does not really follow that what is needed is a 'library with good...'.

> - checking value against expected
> - waiting for something to happen - with a timeout
> - checking stability of a signal
> - waiting for a signal to be stable for a given time (with timeout)

> The only free library (to my knowledge) to provide all this
> functionality is Bitvis Utility Library.

OK, but the VHDL language provides this as well.

> If this sounds interesting, you should read the below intro.
> You can download the library and PPT from
> http://bitvis.no/resources/utility-library-download/ without any
> registration.

Thanks for providing, that in itself is a useful service.

My read is that the library is way too low level to be an effective archive for capturing the original intent of the testbench, which means that a testbench written using Bitvis will be just as opaque as the testbenches you complain about now. As an example of checking register function (from slide 31 of the PowerPoint):

write(C_ADDR_ITR, x"AA", "ITR : Set interrupts");
check(C_ADDR_IRR, x"AA", ERROR, "IRR");

Some simple observations:
- The hard coded constants x"AA" are not independent. Changing one requires you to change the other, but this dependency is not enforced by the code in any way.
- Similarly, as one works through the rest of the script, there are other hidden dependencies.
- The expected response of the DUT is implicit (the reading back of data from IRR and expecting it to be the same as what was written into ITR). At first glance, one might almost think it was an error to write to one register and expect some other register to read back that same data. The code you have is actually just an undetectable typo away from being a 'write then read back' test of just ITR (or IRR).
- Although this particular test is simply testing the bits somewhat independently, those bits typically have definitions from a record, but here you're totally ignoring those definitions and turning bits on and off in a byte with no regard for what each bit defines. While OK for simple read/write testing as you're showing, there is nothing in Bitvis that would let you scale it up to something more general, which is what you would want once you get beyond the simple read/write tests. Consider now how this could be written in vanilla VHDL:

-- Let's check operation of the 'This' and 'That' interrupt bits
Fpga_Reg.ITR := (
  This_Interrupt => "1",
  That_Interrupt => "1",
  Reserved       => (others => '0')
);
reg_write(Fpga_Reg.ITR);
Fpga_Reg.IRR := Fpga_Reg.ITR; -- This is the DUT response that is expected
Fpga_Reg_Readback.IRR := reg_read;

assert (Fpga_Reg.IRR = Fpga_Reg_Readback.IRR) report
  "IRR register did not read back correctly" & LF &
  "Expected: Fpga_Reg.IRR=" & image(Fpga_Reg.IRR) & LF &
  "Actual:   Fpga_Reg_Readback.IRR=" & image(Fpga_Reg_Readback.IRR)
  severity ERROR;

While the code is wordier, it is also self-documenting. Using the Bitvis library, one would have to dig into a whole lot more design specific detail and documentation (that is outside the scope of the testbench itself) just to understand what the testbench is trying to accomplish. With what I've shown, that should not be the case. As a bonus, you don't have any of the shortcomings that I pointed out earlier either.

Of course the main issue is that Bitvis, since it is attempting to be generic, cannot be design specific, but in order to get good code clarity you do want the executable code to be design specific. You want executable code that looks like 'Control_Reg.Fire_The_Gun := "1"' and then a response of 'Status_Reg.Off_To_The_Races := "1"', not hard coded hex constants...that then have to change as the 'Off_To_The_Races' bit gets moved from bit 5 to bit 6.

Testbench modelling at the top design level should start by modeling the board that the design will be put into. This then naturally leads to modeling the system that the board goes into as well. Following that approach produces a library of parts that can then be reused in other testbenches because they are modeling actual parts, not just something cobbled together to control/check interface ABC of design XYZ. It will also produce XYZ design specific stuff as well.

I have yet to have a time when 'verbosity control' was something I would want. The testbench will stop on an error and I have the complete log file that I need to debug the problem. What you don't mention at all that is useful is simply to produce multiple log files. For example, logging transactions that occur on a particular interface to a CSV file so that it can be pulled up in Excel. Using those interface log files along with the main transcript log file is a powerful debug aid.

I don't want to seem too harsh; it's not that I think the library is 'bad'. Much of what you have is useful, but while the library may improve how some people write a testbench today, eventually it will stunt testbench development because it does not go far enough to produce maintainable code. While I accept that you have found users who find it useful, for me it would be a big step backwards since it would produce less maintainable code and probably take just as long, or longer, to develop in the first place.

Kevin Jennings
 
Kevin,

I agree with you that using low level checks in test benches makes them worse and hard to maintain. A good test bench will contain a lot of supporting code to enable the actual test to be as high level as possible and read almost as a specification.

This is why I previously argued that I do not think the check/log library is the most important part missing for experienced test bench writers. The designer still needs to write a lot of project specific supporting code to raise the abstraction level of the test bench (and design). Many designers would still write unmaintainable test benches using BVUL/VUnit/OSVVM without the proper experience.

On the other hand the unique features of VUnit such as the Python test running and compilation automation provides a feature rich and rock solid implementation of something that many VHDL teams re-invent all the time in the form of a pile of scripts of varying quality. Using VUnit the design team can focus entirely on writing their high level test benches while the entire test running and compilation is managed by VUnit.

I can testify that in my latest project we used VUnit to manage well over 200 test cases, with test benches automatically configured to run for all golden data in a folder and tests configured to run with all interesting combinations of generics. Many test benches contained multiple tests that were run in individual simulations but benefited from a shared test bench infrastructure. The test cases could be run on Jenkins using multiple machines and processor cores. A test case could be opened in the simulator GUI with everything set up by just issuing a command. Only the necessary files were recompiled when re-running. All it took was writing a VUnit run.py file of about 100 lines, where most of the code was related to project specifics such as enumerating the golden reference data and creating generic combinations to configure the multiple runs of the same test bench/test case.

VUnit provides the features that all serious verification efforts need right away, without modification, and the potential for re-use is the greatest. It gives every team access to a turn-key solution worth several man-months of work. I should also say that the low level functions found in the VHDL part of VUnit, as well as in BVUL/OSVVM, are still useful and save redoing some redundant work, but it would not take the experienced VHDL designer that many days to re-implement the essential parts of them. Implementing something comparable to the sophisticated automation features of VUnit, on the other hand, would not be possible within the budget of a single project.

I also should say there are a lot of advanced features of BVUL and especially OSVVM that can be really useful in some situations and that would take the average designer many weeks to implement. Although a lot of code has to be design specific due to the static nature of VHDL, it is great that people have taken the time to make the obviously general parts available as open source so that they do not need to be re-invented. In general I think the FPGA/VHDL community is too shy about sharing code and experiences compared to the software community. There is also a lack of standardization in verification tools and libraries compared to the software community, where specific languages have de-facto standard testing tools that everyone uses.

The top re-usable VHDL library features I tend to use are the check_equal procedures of VUnit, the dynamic array type of VUnit, and the random number generation from OSVVM. That does not mean I often use the check_equal procedures directly, but rather as part of higher level design specific checking procedures that I create. A higher level checking procedure still must sometimes contain a primitive check_equal of std_logic_vector, signed, unsigned, integer etc. It is nice not to have to create a "Got XXXX expected XXXX" message in manual assert statements all the time. Just because there is a primitive method does not mean that you are encouraged to use it directly. I suspect that the BVUL authors realize this, but when showing examples one does not want to obfuscate the usage too much, so a simple case is shown. Our VUnit examples are also simplistic for the same reasons.

Due to the static nature of VHDL a library cannot provide much general functionality, and one must often write redundant code even with VHDL-2008 generic packages. Since VUnit is not just a VHDL library but also a Python framework which parses the code, we can do better in removing the need for redundant code by using code generation, where packages are scanned for records for which automatic to_string and check_equal functions can be generated. We have some such code-generation/pre-processing features already and plan to add more as use cases are identified.
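The record-scanning idea can be sketched with a toy generator (a hypothetical illustration only, not VUnit's actual preprocessor; the record and field names are made up):

```python
# Toy sketch: generate a VHDL check_equal overload from a record
# description, mimicking the idea of scanning packages for records.
def gen_check_equal(record_name, fields):
    """fields: list of (field_name, vhdl_type) tuples."""
    lines = [f"procedure check_equal(got, expected : {record_name}) is",
             "begin"]
    for name, _vhdl_type in fields:
        # Delegate each field to a primitive check_equal overload
        lines.append(f'  check_equal(got.{name}, expected.{name}, "{name} mismatch");')
    lines.append("end procedure;")
    return "\n".join(lines)

print(gen_check_equal("fpga_reg_t",
                      [("ITR", "std_logic_vector"),
                       ("IRR", "std_logic_vector")]))
```

A real implementation would parse the VHDL source to discover the records; here the description is passed in by hand to keep the sketch short.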
 
Hi Kevin,

Although I agree that a lot can be done with plain VHDL I still think there are use cases for both verbosity control and support procedures replacing assert statements.

One place where I think verbosity control is useful is when you have a trace log like the one I showed in the previous comment (runner_trace_logger). I don't want to filter what goes to file because I don't know what will be interesting before I have the problems that caused me to open the file. The file may become very large, but as you suggest the CSV format (we call it verbose_csv) enables you to use the power of your spreadsheet tool to reduce the file to the interesting parts. During long simulations I may also want to get some progress from the trace log on stdout, but there's no point if I can't reduce that output to a message pace which I can read. For that I use verbosity control on the log level.

When it comes to assert statements like the one you mention I think they can be reduced to (here I'm assuming that IRR and ITR are std_logic).

check_equal(Fpga_Reg.IRR, Fpga_Reg_Readback.IRR, "IRR register did not read back correctly");

while still being self-documenting. Given that a check for equality is one of the most common ones this will save you a lot of redundant typing. In case of an error you will get the following output (level format).

ERROR: Equality check failed! Got 1. Expected 0. IRR register did not read back correctly.

Another good thing about standard check and log procedures, at least for us writing tools, is that they are easier to parse, such that you can add code-related features not available in VHDL itself. For example, if you enable the location preprocessor in your VUnit run script (ui.enable_location_preprocessing()) and then have a log like this

info("Some log message");

the output will be like this (verbose format)

0 ps: INFO in (tb_demo.vhd:27): Some log message

This is useful when finding things and filtering your CSV file in Excel. A drawback is that it is the preprocessed file you will see in the simulator, and you may be tempted to edit that one and not the original file.

Another issue with convenience procedures like check_equal is that we support a commonly used but limited set of data types, so if you want to make an equality check between other types you're back to the plain assert. But to do the assert in your example you need to define "=" and image() for that type. If you define a to_string() function instead of image() and enable the check preprocessing (ui.enable_check_preprocessing()) you can do this

check_relation(Fpga_Reg = Fpga_Reg_Readback, "Registers did not read back correctly");

and a failed test will output this

ERROR: Relation Fpga_Reg = Fpga_Reg_Readback failed! Left is ('1', '1'). Right is ('1', '0'). Registers did not read back correctly

check_relation can be used with any relational operator and type as long as the operator and to_string() functions are defined. There are some drawbacks, primarily if an operand is a function with side effects. Such a function is called twice: once when checking the relation and once when calculating the error response string. This is not obvious when looking at the procedure call before it has been preprocessed. For more details see https://github.com/LarsAsplund/vunit/blob/master/examples/vhdl/check/check_example.vhd
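The double-evaluation hazard is easy to mimic in any language; here is a small Python sketch of why evaluating an operand once for the check and once more for the message goes wrong when it has side effects (an illustration of the pitfall, not VUnit code):

```python
calls = []

def next_sample():
    # A "function with side effects": each call consumes a new value.
    calls.append(1)
    return len(calls)

# A naive macro-style expansion evaluates the operand once for the
# comparison and once more to build the error message:
left = next_sample()              # evaluation 1 (the check itself)
msg = f"Left is {next_sample()}"  # evaluation 2 (the report string)

# The operand has now been evaluated twice, so the reported value (2)
# is not the value that was actually checked (1).
print(len(calls), left, msg)
```

The same mismatch would appear after textual preprocessing of a check_relation call whose operand is such a function.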

/Lars
 
On Tuesday, June 30, 2015 at 4:22:16 PM UTC-4, Lars Asplund wrote:
> Hi Kevin,

> One place where I think verbosity control is useful is when you have a trace
> log like the one I showed in the previous comment (runner_trace_logger). I
> don't want to filter what goes to file because I don't know what will be
> interesting before I have the problems that caused me to open the file.

Exactly. No filtering. At the point where the sim stops at an assertion, I have everything I need so there is no need to filter anything. I also don't necessarily need to look at everything in the file since I'm debugging a specific problem. I would look at the transcript window or file for basic information, I would look at auxiliary files if necessary but primarily I will be looking at the signals and variables at the point where the sim stopped in order to determine why the assertion condition failed.

> During long simulations I may also want to get some progress from the trace
> log on stdout but there's no point if I can't reduce that output to a message
> pace which I can read. For that I use verbosity control on the log level.

For that I simply grab the scroll bar which effectively pauses the window. Or, if I have the transcript set to go to an output file rather than the GUI, I simply open the file in a text editor while the sim keeps on running.

> When it comes to assert statements like the one you mention I think they can
> be reduced to (here I'm assuming that IRR and ITR are std_logic).
>
> check_equal(Fpga_Reg.IRR, Fpga_Reg_Readback.IRR, "IRR register did not read
> back correctly");

Not quite. Your example, which I was following, looked like ITR and IRR were both software registers. Both of them looked to be eight bits wide which implies to me, that the individual bits would be defined in a record and used the way that I was showing. So the comparison between the ITR and IRR would be between two design specific record types, not just std_logic.

This means that your example of check_equal wouldn't work without first creating an override of check_equal that works with those specific record types. That's OK, but it means that now when you define new record types, you'll also have to create a check_equal override. Right now, when I create a record type, there will typically be overridden functions of to_std_ulogic_vector, from_std_logic_vector and frequently, but not always, image. Having to add another override for check_equal is more work, so I would have to be convinced of the value first. Actually, of late, what I've found to be more useful is a 'diff_image' function that takes two record type arguments and returns an image only where record elements between the two are different. That way, I'm not eyeballing 10 different fields that are the same to weed out the one or two that are different when the assertion fails and prints the diff_image.
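The diff_image idea carries over to other languages too; a minimal Python sketch of the concept, using a dict to stand in for a VHDL record (field names are hypothetical):

```python
def diff_image(expected, actual):
    """Return a string describing only the fields that differ between
    two record-like dicts -- the idea behind a diff_image function."""
    diffs = [f"{name}: expected={expected[name]} actual={actual[name]}"
             for name in expected if expected[name] != actual[name]]
    return "; ".join(diffs) if diffs else "<no differences>"

exp = {"This_Interrupt": "1", "That_Interrupt": "1", "Reserved": "000000"}
act = {"This_Interrupt": "1", "That_Interrupt": "0", "Reserved": "000000"}
print(diff_image(exp, act))  # only the differing field is reported
```

In VHDL this would be a per-record function living in the same package as the record type, like the other conversion overloads mentioned above.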

> while still being self-documenting. Given that a check for equality is one of
> the most common ones this will save you a lot of redundant typing.

I agree that wrapping the assertion into a procedure will typically save typing. On the other hand, many times that typing is only actually done one time, within a procedure that may get called all over the place, so the savings on typing isn't really there.

> Another good thing about standard check and log procedures, at least for us
> writing tools, is that they are easier to parse such that you can add code-
> related features not available in VHDL itself. For example, if you enable the
> location preprocessor in your VUnit run script
> (ui.enable_location_preprocessing()) and then have a log like this
>
> info("Some log message");
>
> the output will be like this (verbose format)
>
> 0 ps: INFO in (tb_demo.vhd:27): Some log message
>
> This is useful when finding things and filtering your CSV file in Excel. A
> drawback is that it is the preprocessed file you will see in the simulator
> and you may be tempted to edit that one and not the original file.

We may be talking about different things. Whereas the assertion output logging and whatever one puts to the console is one thing, the CSV files I would typically generate are totally separate files that are not nearly so free form as what would go to the console/transcript window.

As an example, there might be a monitor procedure which takes address, data, read/write commands, and wait/ack signals as inputs and, whenever a transaction on that bus completes, writes a new line with sim time, address, command and data. No filtering, or simply using Excel's built-in filtering, has always been enough; no pre-processing needed. The one drawback is that Excel locks the file when it opens it, so one has to either make a copy of the file if the sim is still running, or stop the sim. Otherwise, the sim quickly stops because it won't be able to write out that new line. But even that isn't all that bad, because it doesn't actually crash Modelsim; it just stops the sim on the file_open, but the subsequent writes to the file complete normally (since I don't restart the sim until I have finished looking at the CSV file), so I haven't actually lost anything. It seems to be an odd, but fortuitous, feature/bug of Modelsim.
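Such a transaction log can also be reduced outside Excel; a minimal Python sketch, assuming hypothetical columns time, address, command and data as a bus monitor might write them:

```python
import csv
import io

# Hypothetical transaction log, as a bus monitor procedure might write it.
log = io.StringIO(
    "time,address,command,data\n"
    "100 ns,0x10,write,0xAA\n"
    "250 ns,0x10,read,0xAA\n"
    "400 ns,0x14,write,0x55\n"
)

# Keep only transactions touching address 0x10 -- the kind of reduction
# one would otherwise do with Excel's built-in filtering.
rows = [r for r in csv.DictReader(log) if r["address"] == "0x10"]
for r in rows:
    print(r["time"], r["command"], r["data"])
```

Reading a copy of the file this way also sidesteps the file-locking problem described above, since nothing holds the live file open.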

> Another issue with convenience procedures like check_equal is that we support
> a commonly used but limited set of data types so if you want to make an
> equality check between other types you're back to the plain assert.

I think the better approach as I mentioned earlier would be to override check_equal to work with the custom type. Within that overridden procedure one would call the Bitvis library check_equal procedure on the individual elements of the custom type.

> But to do the assert in your example you need to define "=" and image() for that type.

Yes for 'image', but no you don't for '='.

> If you define a to_string() function instead of image() and enable the check
> preprocessing (ui.enable_check_preprocessing()) you can do this
>
> check_relation(Fpga_Reg = Fpga_Reg_Readback, "Registers did not read back correctly");
>
> and a failed test will output this
>
> ERROR: Relation Fpga_Reg = Fpga_Reg_Readback failed! Left is ('1', '1'). Right is ('1', '0'). Registers did not read back correctly

I don't see how you can only pass in a Boolean (i.e. "Fpga_Reg = Fpga_Reg_Readback") and have it print out the individual 'left' and 'right' sides of that comparison. Did you not include something? Looking through the Zip file from Bitvis, I couldn't find any 'check_relation' so it's not clear to me what is going on here.

> check_relation can be used with any relational operator and type as long as
> the operator and to_string() functions are defined.

Again, it's not clear to me what 'check_relation' actually is since it appears to take as input a Boolean and a text string.

> For more details see https://github.com/LarsAsplund/vunit/blob/master/examples/vhdl/check/check_example.vhd

Unfortunately, that file does not have the source for 'check_relation', only examples (which are essentially like you've shown here) which would not allow you to separate the two things that are being compared (i.e. This = That) to report individually on 'This' and 'That'. All you can report on is the Boolean that is the result of comparing 'This' with 'That'.

Anyway, the bigger issue I thought was the way that the PPT example seemed to indicate how clean and easy it is to have simple read and write procedures that are only working with hard coded constants. The reality is that those hard coded constants would be a maintenance nightmare since they are completely separated from the underlying design elements. The effect of changing a constant from x"AA" to x"AB" and how that change would then ripple into and affect upcoming statements was not addressed at all. Removing the design specific elements and working only with std_logic/std_logic_vectors at the 'higher level' testbench source code level is a mistake and will become a maintenance nightmare for whoever follows this approach. The proper place to work with std_logic/std_logic_vectors is only within helper functions and procedures that encapsulate something. The 'higher level' testbench code would only be working with this encapsulating function/procedure so the fact that it then converts something to std_logic/std_logic_vectors is just something that happens behind the scenes...which is exactly what you would want.

I realize the PPT is taking an easy to understand example so as to focus on the testbench aspects, but that is not an excuse for this kind of oversight. As a thought experiment, take the PPT example and simply add the condition that certain bits in either ITR or IRR or both will always be 0 even if they are written as 1 (i.e. the bits are being reserved for future use, which is not an uncommon thing). To put that change in, the way I approach it would involve changes only to the to_std_ulogic_vector and from_std_logic_vector functions that I mentioned previously. These functions exist in the same package as the defining record type. Edit those two functions to force 0 on the appropriate bit fields, recompile and you're done. Zero changes at the testbench level. I could also easily take it a step further and add an assertion inside those functions that checks whether those bit fields really are 0. Now look at what you would have to edit with the PPT source that is shown...a whole bunch of editing of hard coded constants.
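The thought experiment can be sketched in a few lines of Python (the bit layout here is hypothetical; in VHDL this masking would live inside the conversion functions next to the record type):

```python
# Hypothetical ITR layout: bit 0 = This_Interrupt, bit 1 = That_Interrupt,
# bits 7..2 reserved and always reading back as 0.
DEFINED_BITS_MASK = 0b00000011

def to_reg(value):
    """Model a to_std_logic_vector-style conversion: force reserved bits
    to 0 in this one place, so nothing at the testbench level changes
    when a bit becomes reserved."""
    return value & DEFINED_BITS_MASK

# Writing x"AA" now yields only the defined bits; every hard coded
# x"AA" expectation elsewhere would be wrong, but code going through
# this conversion stays correct.
print(f"{to_reg(0xAA):#04x}")
```

This is the point of the encapsulation argument: one edit in the conversion function versus a sweep over every hard coded constant in the test script.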

Kevin Jennings
 
