VHDL Verification Components – the obvious solution to efficient reuse

Guest
How would you ensure safe and efficient reuse of an FPGA design module for some stand-alone functionality?

Let's consider this for a simple example like a UART. Now what would you do?

You could of course just write lots of functions, procedures, processes and concurrent statements, and then include all of this in your FPGA top level whenever you need a UART... But no serious FPGA designer would ever do this.

Why? Because we all know it is much better to put all of this into a component (a VHDL entity), as this has the following benefits:

- Everything is encapsulated in an entity containing all needed elements
- No risk of forgetting parts or functionality
- No need to understand the implementation
- A simple port interface for integration into the FPGA top level
- A simple generic interface for parameterisation of the module
- Internal modifications may be done locally - invisible at the FPGA top level
- New functionality may be added inside the encapsulation
- Reuse is safe and efficient

Now - give me one reason why all of this does not apply to verification in exactly the same way.
Yes - we could still just use lots of processes, sub-programs, etc., but just as for design, that would be very inefficient and risky.

What we need is of course a VHDL entity - a VHDL Verification Component (VVC) - encapsulating the complete verification functionality for a given design interface, where the VVC should be characterized by:

- An easy to understand component interface (ports and generics)
- A clearly defined internal functionality, where the internal implementation is of no interest when integrating the VVC
- An easy to understand command interface to control and monitor the behaviour of the VVC

This is exactly how the VVCs of UVVM (Universal VHDL Verification Methodology, free and Open source) are made.

(For a figure of the UART VVC please see http://bitvis.no/products/uvvm-vvc-framework/vvc_efficient_reuse/)
The VVC for a UART has two simple physical ports (TX and RX), and is thus very easy to integrate in a testbench. All the functionality is included inside and thus well encapsulated and easy to reuse. Once included in the testbench, the test sequencer/driver/controller may then execute commands to transmit and receive data in many different ways. This command interface is predefined in UVVM, which thus provides a common and standardised way of communicating with any VVC independent of type - again, just like a CPU may communicate with any design module inside an FPGA via a predefined bus interface.
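
To give a feel for that command interface, here is a minimal test-sequencer sketch. It assumes the UART VVC is instantiated as instance 1 in the test harness (not shown), and uses the uart_transmit / uart_expect / await_completion VVC methods from the UVVM UART VIP; check the UVVM quick references for the exact signatures, as the calls below are written from memory:

  library ieee;
  use ieee.std_logic_1164.all;

  library uvvm_util;
  context uvvm_util.uvvm_util_context;

  library bitvis_vip_uart;
  use bitvis_vip_uart.vvc_methods_pkg.all;
  use bitvis_vip_uart.td_vvc_framework_common_methods_pkg.all;

  -- Central test sequencer (the UART VVC and the DUT are instantiated
  -- elsewhere in the test harness)
  p_sequencer : process
  begin
    -- Queue a byte for transmission towards the DUT RX input
    uart_transmit(UART_VVCT, 1, TX, x"2A", "Send 0x2A to the DUT");
    -- Expect the DUT to send the same byte back to the VVC RX channel
    uart_expect(UART_VVCT, 1, RX, x"2A", "Check the byte looped back from the DUT");
    -- Wait for the queued commands to finish before ending the test
    await_completion(UART_VVCT, 1, RX, 10 ms, "Await UART transactions");
    report_alert_counters(FINAL);
    wait;  -- end of sequencer
  end process p_sequencer;

Because each command is just a procedure call with a VVC target, an instance index and a channel, the same style applies to any other VVC type.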

Major additional benefits of the UVVM VVCs are the ease of integration, the very structured internal architecture and the extreme reuse-friendliness.

UVVM is free and open source, and may be downloaded from GitHub: https://github.com/UVVM/UVVM_All

For a simple and fast introduction to UVVM and VHDL Verification Components see: http://bitvis.no/media/21190/UVVM_Advanced_Verif_made_simple_1.pdf
 
A simpler approach is to simply model the system in which the design operates. An FPGA for example is rarely an entire system. Many times the FPGA is just one part on a PCBA. Model all of the other parts and you now have a simulation model of the PCBA. A PCBA is also rarely the entire system. Many times the PCBA is connected to sensors, actuators, etc. Model those parts and typically you do now have a full system model.

All of those part models are derived from published specifications, so when the entire system does not behave as you think it should and you're trying to debug whether you have a bug in your design or in the model, you refer back to the part specification to validate whether the part model is working or not.

Part models do not necessarily have to model every nuance of the part; processors are a good example. These models only have to model enough behavior for the entire system to function somewhat like the real system.

One can also 'design in' whatever types of fault models one wants to be able to emulate and test. This goes somewhat beyond the part's specification, since no commercial part is designed to fault intentionally.
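
As a small illustration of both points (model only the behavior the system test needs, plus a 'designed in' fault), a hypothetical part model might look like the sketch below. The part, its interface and the fault mode are invented for illustration, not taken from any datasheet:

  -- Hypothetical partial model of an 8-bit parallel-output sensor.
  -- Only the behavior the system test needs is modelled (a readable value),
  -- plus one designed-in fault mode.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity sensor_model is
    generic (
      G_FAULT_STUCK_LOW : boolean := false  -- emulate a dead sensor output
    );
    port (
      clk      : in  std_logic;
      value_in : in  unsigned(7 downto 0);  -- stimulus value set by the testbench
      data_out : out std_logic_vector(7 downto 0)
    );
  end entity sensor_model;

  architecture model of sensor_model is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if G_FAULT_STUCK_LOW then
          data_out <= (others => '0');              -- injected fault
        else
          data_out <= std_logic_vector(value_in);   -- normal behavior
        end if;
      end if;
    end process;
  end architecture model;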

This approach is much more straightforward than trying to come up with some abstract "VHDL Verification Component (VVC) - encapsulating the complete verification functionality for a given design interface". Each of the parts being modeled is an individual entity; each PCBA, sensor, actuator, etc. is also an entity. Nothing is abstract or subject to individual interpretation, since the behavior of those part models is governed by published specifications.

Just my two cents.

Kevin Jennings
 
I agree that partial (or complete) modelling of your system is also a good approach, and in my opinion (and experience) combining this with verification components is a very good solution. (And sometimes one is better than the other - depending on the actual system you need to verify.)

The great thing about verification components is that for a given interface you don't need to bother about the details of interface protocols or timing, but can access your system via high level SW-like commands (e.g. uart_transmit(data) or uart_transmit(byte array) or uart_transmit(send N * random data and also forward to a model), etc...)
Similarly you can add functionality to provoke bugs, check bit timing, etc., and everything can be reused 100% from one project to another, and also from module level to FPGA level.
This means interface-dedicated functionality is dead simple to use and reuse, and it also makes it much simpler to control/monitor the FPGA functionality that differs from one FPGA to another, via commands that anyone can understand.
 
And the exact same things can be said about using models of real physical parts and other components.

I disagree with your statement "you don't need to bother about the details of interface protocols or timing", sure you do. You need to create that first model of that interface protocol and that needs to be correct. Again, this comment is the same as with a model of a true physical component. The advantage of using a model of a real part is that you have a true datasheet to use to validate that the model itself is correct. With some virtual verification component, you don't have that datasheet which can mean that you come up with a verification component that does not match reality. Every line of code that gets written, whether as part of a design or part of a verification model has a chance of being wrong. What is your strategy for validating that your model itself is correct?

Kevin Jennings
 
There are of course different scenarios here. One is to access the rest of your FPGA through an interface/protocol. This is the default for most BFMs and VVCs, and allows a testbench to be up and running in a very short time. The other is a protocol checker that also checks your interface/protocol thoroughly. We have heard of several users who have actually found design bugs in their interface design even with a non-protocol-checker BFM/VVC, so they may be partially useful for that as well. And in many cases you don't need a protocol checker, just access into and out of the "rest" of your FPGA.
What I meant by the statement "you don't need to bother about the details of interface protocols or timing" is that this is already taken care of inside a good BFM or VVC, a bit like when you use IP (your own, FPGA vendor, tool vendor, 3rd party) for, say, a complex interface or some complex data processing: you don't really want to waste time understanding the details of the implementation if you are provided with a simplified user interface.
Also, in many projects several designers may need to access the same interface, and it would be a total waste of time if everybody had to go into the details of that interface/protocol when they could instead use a simple transaction-level procedure that hides the details for them.
In fact, in most cases where people talk about models they tend to mix interface and internal functionality, or to combine multiple interface layers inside the same model. For those cases it is obvious that layering is more efficient, more reusable and safer, and one of those layers is very often a BFM or VVC.
I think the best strategy for validating any type of model/BFM/VVC (at the level of ambition for which it is intended) is to have as many designers as possible using and checking them, and I must admit that we have already improved/fixed our BFMs/VVCs several times due to feedback from UVVM users all over the world.
 
On Friday, June 30, 2017 at 3:27:12 AM UTC-4, espen.t...@bitvis.no wrote:
> What I meant by the statement "you don't need to bother about the details of interface protocols or timing" is that this is already taken care of inside a good BFM or VVC, a bit like when you use IP (your own, FPGA vendor, tool vendor, 3rd party) for, say, a complex interface or some complex data processing: you don't really want to waste time understanding the details of the implementation if you are provided with a simplified user interface.

You have to understand the complex interface once in order to write the model. It's also likely that you'll need to delve into those details on occasion afterwards in order to fix the bugs. That's called support, which hopefully decreases over time. But yes, of course, the idea is always that you're transforming a 'complex' interface into a simpler, easier-to-use one, so you're not stating anything that isn't obvious.

> Also, in many projects several designers may need to access the same interface, and it would be a total waste of time if everybody had to go into the details of that interface/protocol when they could instead use a simple transaction-level procedure that hides the details for them.

That's called design reuse, again nothing new. Whether the 'design' being reused is part of an actual physical design or a simulation model is not important.

> In fact, in most cases where people talk about models they tend to mix interface and internal functionality, or to combine multiple interface layers inside the same model. For those cases it is obvious that layering is more efficient, more reusable and safer, and one of those layers is very often a BFM or VVC.

Yet when I look at the UART example you posted (http://bitvis.no/products/uvvm-vvc-framework/vvc_efficient_reuse/), the figure (maybe the code too) does not reflect your statement. The DUT has unconnected interfaces that are required in order to actually test functionality. Had that figure been drawn properly, there would be some form of processor model to the left of the DUT UART and one to the right of the UART_VVC. In fact, the functionality of UART_VVC would simply be that of some known good UART model, such as an actual commercial part that has a datasheet, as I've said all along. The processor models would be the thing that provides the translation between high level statements like print("Hello world\n") and the processor interface. Ideally this would be an abstract interface that would then connect to a converter translating between the abstract interface and the DUT UART's 'Register IF' or the VVC UART's register interface. The same processor model is used; they are only 'running' a different high level program, such as "while TRUE loop, ch = getc(); putc(ch);". That model simply echoes back everything that comes in. The VVC model would then check all incoming messages to make sure they match what was sent out. Graphically, the connections are:

DUT Proc <--> Intf Conv <--> DUT UART <--RX/TX--> VVC UART <--> Intf Conv <--> VVC Proc

Not shown in the figures would be some fault model that actually disrupts RX/TX in some fashion if that level of testing is required.

That's how I would draw it. That method actually does break down the 'layers' as you called them, but there is no mystery about which layer something belongs to. Within the processor model, one can build up device-specific layers of code so that one is not having to read/write hard-coded addresses. Instead you build up device-specific code so that you can eventually say print("Hello world\n") in the source code. But you wouldn't stop there. You would also build up test procedures so that maybe at the top level your code is just "Test(Uart);". But all of this is exactly the same as writing traditional software, so again, not breaking new ground, just using established principles. The only thing you don't need to create or emulate here is a compiler to translate source code into object code for the processor. The VHDL compiler is essentially performing that function.

The VVC Proc runs the program that generates the messages; the DUT Proc runs the program that simply echoes back whatever came in. The VVC Proc then has a checker to validate that what comes back is what was sent out, that all messages did arrive, that operation is checked for all baud rates, etc.
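
For illustration, the echo program described above could be sketched in VHDL roughly as follows. The abstract byte interface and the getc/putc procedures are hypothetical placeholders for whatever the 'Intf Conv' block actually presents:

  -- Minimal sketch of the DUT-side processor model: echo every received byte.
  library ieee;
  use ieee.std_logic_1164.all;

  entity dut_proc_model is
    port (
      rx_data  : in  std_logic_vector(7 downto 0);
      rx_valid : in  std_logic;
      tx_data  : out std_logic_vector(7 downto 0);
      tx_valid : out std_logic := '0'
    );
  end entity dut_proc_model;

  architecture model of dut_proc_model is
  begin
    p_program : process
      variable ch : std_logic_vector(7 downto 0);

      -- getc: block until a byte arrives on the abstract interface
      procedure getc(variable data : out std_logic_vector(7 downto 0)) is
      begin
        wait until rx_valid = '1';
        data := rx_data;
        wait until rx_valid = '0';
      end procedure getc;

      -- putc: present a byte for one transfer slot on this toy interface
      procedure putc(constant data : in std_logic_vector(7 downto 0)) is
      begin
        tx_data  <= data;
        tx_valid <= '1';
        wait for 10 ns;
        tx_valid <= '0';
      end procedure putc;

    begin
      -- "while TRUE loop, ch = getc(); putc(ch);" -- echo everything back
      loop
        getc(ch);
        putc(ch);
      end loop;
    end process p_program;
  end architecture model;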

While your web page touts reuse, no actual examples are really shown. For instance, when would one ever have a need to reuse the UART VVC? You can't use it to test a disk controller. Once you have it all working with the UART design, there is never a need to even re-run the simulation, let alone reuse the UART VVC, unless the DUT UART design is changed. But even there the most likely scenario is that you simply rerun the simulation tests with the new DUT UART rather than reuse anything in some new code. At best, you've added some new functionality which then requires additional testing software to be added to the processor model. Not design reuse.

On the other hand, my drawing is already reusing a processor model and perhaps using the same 'abstract processor to UART Register' converter if DUT UART and VVC UART happen to have the same interface. Actual design reuse achieved by using models that reflect reality.

> I think the best strategy for validating any type of model/BFM/VVC (at the level of ambition for which it is intended) is to have as many designers as possible using and checking them, and I must admit that we have already improved/fixed our BFMs/VVCs several times due to feedback from UVVM users all over the world.

Well, that's the strategy of Microsoft and all the other software providers. Put it out there and let the users find the bugs. Not saying it's not a successful strategy, but it's one that has had nearly every user cursing at their device or the software.

Kevin Jennings
 
There are very many ways of making testbenches, and I do understand that for some approaches VVCs may not fit in. But it seems to me that UVM's UVCs (UVM Verification Components) have been very successful and an efficient approach for making good testbenches. UVVM and VVCs basically provide much of the same concept, but for VHDL, with a far lower user threshold, and allowing the use of low-cost tools. So at least for quite a few testbench approaches, VVCs are a good thing.

As to your comment that VVCs are no longer needed once your interface has been checked - that might apply to protocol checkers, but it would still be a huge advantage to have a protocol checker for everyone who makes a new interface. The VVCs that we have provided with UVVM are not intended as protocol checkers, but as interface access procedures. E.g. when writing to anything inside your FPGA via an Avalon interface, a procedural access like avalon_write(my_addr, my_data) is the preferred approach in most testbenches, i.e. as a BFM. Some added benefits of a VVC (sketched below) are:
- Queueing of commands
- Very simple control of simultaneous access on multiple interfaces
- Encapsulation of all relevant Avalon MM functionality (including Avalon access initiation and completion for pipelined accesses)
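
As a minimal sketch of what the first two points look like in a test sequencer, assume an Avalon MM VVC and a UART VVC are instantiated as instance 1 of each in the test harness. The procedure names follow the UVVM VIP packages (avalon_mm_write, uart_transmit, await_completion), but check the quick references for the exact signatures; the register address and values are hypothetical:

  -- Excerpt from the central test sequencer process (cf. the earlier UART sketch).
  -- Commands are queued on each VVC and execute concurrently, one per interface.
  avalon_mm_write(AVALON_MM_VVCT, 1, x"0010", x"000000AB",
                  "Configure a (hypothetical) DUT register");
  uart_transmit(UART_VVCT, 1, TX, x"AB",
                "Send a byte while the bus access may still be in progress");

  -- Synchronise: block until each VVC has drained its command queue
  await_completion(AVALON_MM_VVCT, 1, 1 ms, "Avalon MM access done");
  await_completion(UART_VVCT, 1, TX, 10 ms, "UART transmission done");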

We could of course provide more figures for how the VVCs may fit in, but we have so far seen that VVC users have lots of different use cases. Thus we explain how they work, and users may apply them any way they want.

I don't really see any similarity with the Microsoft approach you mention. UVVM, including all the VVCs, is free and open source, so it's a bit more like other open source projects... My point was that with many users we ALL benefit from the fact that any module used by many designers on average has better quality than something used by just one person or a small team.

We really hope other designers/companies will soon start making their own VVCs and make them available to the VHDL community. That would be really great from both an efficiency and a quality point of view. Whether they make them available as open source or commercial IP is up to each contributor, but the sum of it all will make VHDL testbench development faster and better - at least for a large number of users.
 
