M
Maciej Sobczak
Guest
Hi, my first post here, I'm glad to join the group.
I'm a software engineer with an interest in embedded designs, and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by the authors of several books I have seen so far, is that writing VHDL is largely a software experience. This in turn suggests that, with respect to quality and verification in industrial practice, software standards should be relevant. All is well until we reach the point where the easy analogies break down - one such point is the notion of code coverage.
The software world has a pretty good understanding of code coverage and of the techniques to measure it. That is, my 100% object-code coverage has a precise meaning, and I know how to gather evidence that my test suite actually exercises the given design to that extent. This is how I can convince myself, the customers, the certification authorities, or whoever else, that the software was done right.
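To make the software-side notion concrete, here is a minimal sketch of line-coverage measurement in Python (chosen only as a convenient illustration - the function names and the toy `saturate` "design" are my own invention, not from any real coverage tool):

```python
import dis
import sys

def saturate(x, lo, hi):
    # Toy "design": clamp x into the range [lo, hi].
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def measure_line_coverage(func, test_inputs):
    # Run func on each input tuple while recording which source lines execute.
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)

    # All source lines that carry bytecode, excluding the `def` line itself.
    first = func.__code__.co_firstlineno
    body = {ln for _, ln in dis.findlinestarts(func.__code__)
            if ln is not None and ln != first}
    covered = executed & body
    return len(covered), len(body)
```

With the single input `(5, 0, 10)` only three of the five body lines execute (neither saturation branch is taken); adding `(-1, 0, 10)` and `(11, 0, 10)` brings the count to 5 of 5. The point is that the evidence is mechanical and exact - which is precisely what seems hard to carry across synthesis.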
But what is the analogue of code coverage in VHDL? I can imagine that if we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the simulator instructed to gather coverage traces; this is all a software exercise. But synthesis seems to be a black box, further concealed by the IP concerns of the toolset and chip vendors. That is, because the synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage traces are not necessarily indicative of the actual coverage in the programmed chip.
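The mismatch can be sketched in purely software terms (a Python analogy of my own making, not a claim about what any particular synthesis tool does): the "source" form below has three distinct decision points, while a logically equivalent "flattened" form - standing in for the netlist an optimizing back end might produce - has none of that structure left.

```python
def make_instrumented_source():
    # Source-level view of the design: three distinct decision points,
    # each instrumented with a hit counter (as a simulator could do).
    hits = {"admin": 0, "owner": 0, "deny": 0}

    def grant_access_source(is_admin, is_owner):
        if is_admin:
            hits["admin"] += 1
            return True
        if is_owner:
            hits["owner"] += 1
            return True
        hits["deny"] += 1
        return False

    return grant_access_source, hits

def grant_access_flattened(is_admin, is_owner):
    # What an optimizing back end might produce: a single expression,
    # with no structure corresponding to the source-level branches.
    return is_admin or is_owner

# Two test vectors execute every line of the flattened form,
# yet never exercise the "owner" path of the source:
source, hits = make_instrumented_source()
for a, o in [(True, False), (False, False)]:
    assert source(a, o) == grant_access_flattened(a, o)
assert hits["owner"] == 0
```

A test suite that "fully covers" one form says nothing precise about the other, which is exactly my worry about source-level coverage versus the synthesized netlist.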
In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to retain some level of confidence, but the coverage evidence is gone. This seems to conflict with some software quality standards.
Is this considered to be an issue?
--
Maciej Sobczak * http://www.inspirel.com