
creating program




rickman
Guest

Tue Nov 17, 2015 12:40 am   



On 11/16/2015 2:22 AM, Igmar Palsenberg wrote:
Quote:

I'm used to languages with strong typing. I consider that a big plus myself, especially if you have a decent compiler that warns you when things look weird.

I like it in some cases, but until recently it was a PITA to use, as it requires a lot more typing. I am willing to give Verilog a shot if I can find a good book.

In my experience, a good IDE can help. Sigasi helps a lot, it also has features that at least the Quartus editor doesn't have.


I've heard a lot of good about Emacs in this regard.


Quote:
In what sense? Cutting it up into the right modules, you mean? I
especially found the VHDL variable vs. signal distinction confusing, and the
fact that it looks sequential, but isn't.

Any time you need to break a problem down into parallel tasks in software
it gets *much* more difficult. In VHDL this is not so much an issue.

The sequential part of VHDL (processes) is *exactly* like software when
you use variables. Signals are only different in that they are not
updated until the process suspends. This is because signals are
intended to model hardware with delays. So all signal
assignments are made with at least a delta delay, which is zero time
(think infinitesimal in math) and so won't take effect until the process
ends. All statements in a process happen without time advancing, not even
delta time.
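The variable-vs-signal distinction described above can be mimicked in a toy sketch. Below is a minimal Python illustration (a hypothetical two-phase update, not a real simulator): variable updates are visible immediately, while signal assignments are queued and applied only when the "process" suspends.

```python
# Toy illustration of VHDL variable vs. signal semantics (hypothetical,
# not a real simulator): signal writes are deferred to the end of the
# "process", like a delta-cycle update.

class Signal:
    def __init__(self, value):
        self.value = value        # current (readable) value
        self._next = value        # pending value, applied at process end

    def assign(self, value):      # like `sig <= value;`
        self._next = value

    def update(self):             # the simulator applies deltas here
        self.value = self._next


def process(sig):
    v = sig.value                 # variable: updated immediately
    v = v + 1
    sig.assign(sig.value + 1)     # signal: new value not visible yet
    seen_inside = sig.value       # still reads the old value
    return v, seen_inside


s = Signal(0)
var_result, seen = process(s)     # inside: variable is 1, signal still 0
s.update()                        # process suspends; delta applied
print(var_result, seen, s.value)  # -> 1 0 1
```

The key point the sketch makes: reading a signal inside the process still returns the old value, exactly as described above.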

Clear. I find debugging programs with a large number of threads a huge PITA. If something goes wrong, it's sometimes nearly impossible to trace, especially since debuggers change behaviour (read: timing).

Delta delays get around repeatability issues, while Verilog can be a
killer because of them. If you are single-stepping to debug code you
are most likely doing it wrong. That is a poor technique in nearly any
language.

Personally, I use logging to a file a lot. And breakpoints at certain points, mainly in async code.


That requires a lot of manual work. If you are trying to find a problem
it can be useful. But for verification it is better to automate the
process.


Quote:
I'm still struggling with testing in VHDL. With software, I'm more
comfortable: JUnit, gtest, Mockito, pick one or combine them. That's
getting harder in the modern async world: Akka for example is
message based, highly parallel.

I guess it has been a while since I've done C development. I've never
heard of these tools. Mostly my software is done in Forth.

I've never done that. Only C, C++, Pascal, PHP, Java, Scala, Python and bash.

I'm looking at VUnit for VHDL at the moment, but it's still a bit
confusing: waveform in, waveform out. With software it's value in,
value out.

VHDL has built-in testing tools. ASSERT statements are how I do it.
There are other features provided by SystemVerilog that are even
fancier, I hear. Even so, it's not about waveforms really. It's about
inputs and outputs. VHDL describes the inputs. VHDL verifies the
outputs. Waveforms are for viewing by the user when you have a problem.

Well, take for example the LD A, 0xFF Z80 instruction. That roughly does:

1) Fetch the instruction byte from PC and increase PC
2) Fetch the operand byte from PC and increase PC
3) Move the operand to the A register

How do you test that? You also want to test timing, since on a real Z80 this instruction takes a fixed amount of time (no pipeline, no cache, no prefetching, no nothing).

I would say confirm each step, but I'm still working that out.

Again, you only care about inputs and outputs. Make the opcode
available at the last point in time it can be read by the timing spec or
model the memory as its own function (which you then need to verify). I
don't know that the timing of internal events is required other than
clock cycle alignment. Even then it is hard to test internal features
other than functionally. So execute the next instruction to read the A
register and read it out. That also verifies the PC increment. Test it
as you would a Z80 chip.
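The "test it as you would a Z80 chip" advice can be sketched in Python against a hypothetical behavioral model (the class, the memory layout, and the two opcodes handled are assumptions for illustration; the 7 and 13 T-state counts match the documented Z80 timings for LD A,n and LD (nn),A): the A register is never probed directly, it is observed through a following instruction, and PC and cycle count are checked as outputs.

```python
# Black-box test of a tiny, hypothetical Z80-like model: only inputs
# (memory contents) and outputs (bus writes, PC, cycle count) are
# observed, as one would test a real chip.

class TinyZ80:
    """Illustrative model supporting LD A,n (0x3E) and LD (nn),A (0x32)."""
    def __init__(self, mem):
        self.mem = mem
        self.pc = 0
        self.a = 0
        self.cycles = 0

    def step(self):
        op = self.mem[self.pc]
        if op == 0x3E:                          # LD A,n : 7 T-states
            self.a = self.mem[self.pc + 1]
            self.pc += 2
            self.cycles += 7
        elif op == 0x32:                        # LD (nn),A : 13 T-states
            addr = self.mem[self.pc + 1] | (self.mem[self.pc + 2] << 8)
            self.mem[addr] = self.a
            self.pc += 3
            self.cycles += 13
        else:
            raise NotImplementedError(hex(op))


mem = [0x3E, 0xFF,                    # LD A,0xFF
       0x32, 0x10, 0x00] + [0] * 32   # LD (0x0010),A -- exposes A on the bus
cpu = TinyZ80(mem)
cpu.step()
assert cpu.pc == 2 and cpu.cycles == 7   # PC increment and timing, observably
cpu.step()
assert mem[0x0010] == 0xFF               # A verified via the write, not internals
print("ok")
```

The second instruction plays the role of "execute the next instruction to read the A register out": the write to memory is the observable output that confirms both the load and the PC increment.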

I'll just start trying. The Z80 is pretty simple, which makes it a lot easier to get started with. Good exercise.

I believe I have found ways to read internal signals of modules in VHDL.
I think this is a simulator feature rather than a language feature
though. Verilog supports this directly.

What issues are you concerned about?

I normally test internals (in software, that is), since it makes determining what broke easier than just testing the public interfaces.
I probably just need to get started, and not think about this too much.


Ok

--

Rick


Guest

Fri Nov 20, 2015 11:27 pm   



On Sunday, November 15, 2015 at 6:57:33 PM UTC, Lars Asplund wrote:
Quote:
@Andy

As far as testing, we use a continuous integration flow with Jenkins. Our testbenches are 100% self-checking (waveforms are for debugging only), and use constrained-random stimulus, while monitoring all DUT outputs for comparison in scoreboards with an untimed reference model. Coverage models for the stimulus ensure we cover what we need to cover in terms of functionality. This can all be done in SystemVerilog using UVM, or in VHDL using OSVVM. We do not do unit level testing. We may use test versions of DUT entities to make it easier to get to an internal entity's functionality, but we always use the DUT interface to simplify the stimulus application (drivers) and response capture (monitors). The scoreboards hook up to the monitors, not the DUT, and to the reference model. Monitors can also verify interface protocols, without having to know anything about the stimulus or expected response.
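The scoreboard-with-untimed-reference-model structure described here can be sketched in Python (all names are illustrative assumptions, standing in for OSVVM/UVM components): a monitor feeds observed DUT transactions to the scoreboard, which compares them against the reference model's expected stream without reference to timing.

```python
from collections import deque

# Sketch of a scoreboard comparing DUT output transactions against an
# untimed reference model (all names are illustrative assumptions).

def reference_model(stimulus):
    """Untimed golden model: here, a toy DUT that doubles each input."""
    return [2 * x for x in stimulus]

class Scoreboard:
    def __init__(self):
        self.expected = deque()
        self.errors = 0

    def push_expected(self, items):
        self.expected.extend(items)

    def on_dut_output(self, value):   # called by a monitor, timing-agnostic
        want = self.expected.popleft()
        if value != want:
            self.errors += 1

sb = Scoreboard()
stim = [1, 2, 3]
sb.push_expected(reference_model(stim))
for out in [2, 4, 6]:                 # what the monitor saw on the DUT
    sb.on_dut_output(out)
assert sb.errors == 0 and not sb.expected
print("scoreboard: pass")
```

Because the scoreboard hooks up to the monitor and the reference model, not the DUT, it stays valid even when the stimulus is constrained-random.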

When I promote the use of VUnit it's usually very easy when people have previous experience with unit testing tools for SW. They know what to expect and they know that they want it. It seems to me that you may have such experience but decided to do only top-level testing anyway. That makes me a bit curious about the reasons. Are you working as a verification engineer, an RTL designer, or both?

Anyway, it might be interesting for you to know that VUnit doesn't know what a unit is, it doesn't care about your test strategy as long as your testbenches are self-checking, and it has support for Jenkins integration. So if you wrap your testbench in this

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : runner_cfg_t);
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);

    -- Put whatever your "main process" is doing here

    test_runner_cleanup(runner); -- Simulation ends here
  end process;

  -- Put your DUT, scoreboards, monitors, reference models here

end architecture;

and create a Python script (run.py) like this

from vunit import VUnit
vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("*.vhd")

# Create as many libraries as needed and add source files to them

vu.main()

and do

python run.py -x test_report.xml

from the command line, you will have something that (assuming you're using ModelSim, Riviera-PRO, Active-HDL or GHDL) compiles your source files in dependency order based on what has been modified. The script then finds and runs your testbench(es) and generates a test report in an xUnit-style XML format that Jenkins can consume.

Regards,

Lars


Hmmm, I don't really do too much with this now, but there are echoes of what I was doing in what you are doing here:

http://www.p-code.org/ttask.html

For automatic checking I was just logging to files with this:

http://www.p-code.org/tbmsgs.html

As you can see that's pretty basic, the VHDL code is here:

http://www.p-code.org/tbmsgs.fossil/artifact/19271c770959048b

There is also a verilog version of that if you root around.

I see you are using VUnit as a build tool as well as the testing framework. I did that differently, as I kept the build system separate. Well, as much as I could, because the build system scans log files and prints a summary if it finds tbmsgs messages. But that can easily be changed with customized extensions (see the bottom of the first link).

Andy
Guest

Mon Nov 23, 2015 10:02 pm   



On Sunday, November 15, 2015 at 12:57:33 PM UTC-6, Lars Asplund wrote:
Quote:
@Andy
When I promote the use of VUnit it's usually very easy when people have previous experience with unit testing tools for SW. They know what to expect and they know that they want it. It seems to me that you may have such experience but decided to do only top-level testing anyway. That makes me a bit curious about the reasons. Are you working as a verification engineer, an RTL designer, or both?


Mostly design, but some verification experience.

Perhaps I should say, we use unit level testing, but only at the top (design) level! Under SW testing standards, we test using the production compiler on representative production HW, or as close as practical. Simulation and modeling are forms of analysis, not testing, strictly speaking.

Given that the simulator and synthesis, place & route tool are very different "compilers", and the simulation server bears no resemblance to the target HW, the only way to perform testing is by using verification systems that capture stimulus & DUT response from simulation(s), and then play that stimulus against the real programmed FPGA while comparing the FPGA's response to the simulation response, using Aldec CTS for example.

So, if we have to show, under a SW test approach, how all of our verification coverage goals are met, on representative HW, then we have to attain that coverage by stimulating the RTL at the top level during simulation. Then we replay that stimulus to the programmed device, and show that the DUT response was the same as simulated. Note that the simulation showed the response met requirements.
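The capture-and-replay flow described above can be sketched abstractly in Python (simulate() and hardware() are hypothetical stand-ins for the RTL simulator and the programmed FPGA, not real tool interfaces): stimulus/response pairs are recorded from simulation, then the same stimulus is replayed against hardware and the responses compared.

```python
# Abstract sketch of capture-and-replay: record stimulus/response from
# simulation, replay the same stimulus against hardware, compare.
# simulate() and hardware() are illustrative stand-ins; here both
# implement the same toy behavior (inverting each byte), so they match.

def simulate(stimulus):           # RTL simulation (golden behavior)
    return [x ^ 0xFF for x in stimulus]

def hardware(stimulus):           # programmed FPGA under test
    return [x ^ 0xFF for x in stimulus]

def capture(stimuli):
    """Record (stimulus, simulated response) pairs at the FPGA boundary."""
    return [(s, simulate(s)) for s in stimuli]

def replay_and_compare(recording):
    """Drive the recorded stimulus into hardware; collect any mismatches."""
    mismatches = []
    for stim, expected in recording:
        got = hardware(stim)
        if got != expected:
            mismatches.append((stim, expected, got))
    return mismatches

rec = capture([[1, 2], [0xAA, 0x55]])
assert replay_and_compare(rec) == []
print("replay: match")
```

The structure mirrors the flow: simulation establishes that the response meets requirements, and the replay only has to show that the real device produces the same response.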

Other than speeding up some simulations, there is no advantage for us in lower level unit testing (directly stimulating/monitoring a lower level module at its ports). We still have to test it via the top level (device) interface. There are still some functional coverage items we cannot cover at the hardware level, especially things that require "white box" verification, like FSM illegal state recovery, internal memory EDAC, etc.

Andy

Andy
Guest

Mon Nov 23, 2015 10:14 pm   



On Monday, November 16, 2015 at 11:40:32 AM UTC-6, rickman wrote:
Quote:
On 11/16/2015 2:22 AM, Igmar Palsenberg wrote:
In my experience, a good IDE can help. Sigasi helps a lot, it also has features that at least the Quartus editor doesn't have.

I've heard a lot of good about Emacs in this regard.


Even the free version of Sigasi is incredibly useful, if you (and your employer) can tolerate the mandatory talk-back feature in the free version. And the paid version has LOTS more features. IDEs that are truly language-aware are extremely valuable.

For example, even in the free version, you can set Sigasi to fontify subprogram names differently than other text. If it cannot find a matching subprogram within scope, WITH MATCHING ARGUMENT SIGNATURE, it won't fontify it, telling you immediately that you either misspelled it or misused it with your arguments/types.

Andy

Lars Asplund
Guest

Thu Nov 26, 2015 7:02 pm   



@Andy

Different standards have different opinions on unit testing and to what extent unit test results can be used to prove correctness of a design. Even if you're allowed to use them you might be required to motivate why the results gained from a test run in a non-product environment are valid for the real thing as well.

Anyway, the key value of unit testing isn't in building test coverage, it's in the productivity and design quality boost. A fully automated unit test methodology enables short code/test cycles such that a developer can test frequently and start to test early. With early and continuous feedback it's much easier to keep the work on track, considering that most people produce bugs, misinterpret requirements and make bad design decisions on a daily basis. If you only test at the system level you can't test early and frequently, partly because there is a delay before there is a system level to test at all, partly because there is a delay before an already existing system is compatible with your new piece of code, and partly because system level testing is slower. There are a number of problems with this, and their significance depends on project size:

1. A bug discovered late might not affect you much if it's something as simple as a faulty value of a constant. But if the bug reveals a design flaw there is a risk that you have had the time to add a considerable amount of code based on that design and have to make significant changes.
2. A bug released to be integrated with other code before system testing can take place can cause significant harm. We all know this but here are some personal experiences. After you release your bug there is a delay before the code is used at all because not all teams are synchronized. When the code is used and the bug starts to show, people spend time debugging their own code and there is a delay before the bug report finds its way to your team. Once there it's often incomplete; information is missing to recreate and debug the problem. Meanwhile team schedules are slipping and workarounds are added to not stop progress. Once the root cause has been fixed and the workarounds can be removed, it turns out that newly written code depends on these workarounds, so more work is needed before development can proceed smoothly. Unfortunately, a workaround was part of a customer release and now they rely on it, which becomes apparent when you make the next release. They are not willing to take the consequences of removing that workaround on short notice, so you end up supporting two different variants. At some point, often much later, all dependencies on the workaround are gone. Someone finds the FIXME in the code but the memory of why it was introduced is gone or in the heads of people no longer available. Potentially dead code is bad practice so it has to be investigated to decide if it can be removed or not. And so on...

In the same way that the lint support of Sigasi adds value by providing continuous feedback based on static code analysis, unit testing adds value by providing continuous feedback based on dynamic code analysis.

Testing at unit level also drives design. Creating test cases for the units forces you to think about clear functional responsibilities for each unit. This promotes strong cohesion of the unit and loose coupling to other units, which are signs of a good (readable and maintainable) design.

/Lars

Andy
Guest

Fri Dec 11, 2015 10:40 pm   



On Thursday, November 26, 2015 at 11:02:27 AM UTC-6, Lars Asplund wrote:
Quote:
@Andy

Different standards have different opinions on unit testing and to what extent unit test results can be used to prove correctness of a design. Even if you're allowed to use them you might be required to motivate why the results gained from a test run in a non-product environment is valid for the real thing as well.

If you only test at the system level you can't test early and frequently, partly because there is a delay before there is a system level to test at all, partly because there is a delay before an already existing system is compatible with your new piece of code, and partly because system level testing is slower. There are a number of problems with this, and their significance depends on project size

....
/Lars


Lars,

Sorry it's been a while...

First off, when I say "system level" I mean "FPGA level". With that in mind....

We don't wait for the whole system to be anywhere near complete before we test. See Jim Lewis' <excellent> paper "Accelerating Verification Through Pre-Use of System-Level Testbench Components". We use this approach to get units integrated into a partially implemented system & test early, with no wasted time developing unit level tests that won't need to be used again.

The difficulty in unit level testing is the quantity of unique interfaces for all the units, and all the monitors, drivers and often-unique transactions required for them. By developing the units in a sensible order to allow a testbench with a system-level interface to exercise the functionality provided by lower-level units, we can test early and often with one TB <-> DUT interface, drivers, monitors, etc.

This approach also makes it easier to refine the system architecture for improved performance, utilization, functionality, etc. because our verification is immune to changes in unit level interfaces that change with the architecture.

Andy

Lars Asplund
Guest

Mon Dec 21, 2015 11:18 am   



Quote:

Lars,

Sorry it's been a while...

First off, when I say "system level" I mean "FPGA level". With that in mind...

We don't wait for the whole system to be anywhere near complete before we test. See Jim Lewis' <excellent> paper "Accelerating Verification Through Pre-Use of System-Level Testbench Components". We use this approach to get units integrated into a partially implemented system & test early, with no wasted time developing unit level tests that won't need to be used again.

The difficulty in unit level testing is the quantity of unique interfaces for all the units, and all the monitors, drivers and often-unique transactions required for them. By developing the units in a sensible order to allow a testbench with a system-level interface to exercise the functionality provided by lower-level units, we can test early and often with one TB <-> DUT interface, drivers, monitors, etc.

This approach also makes it easier to refine the system architecture for improved performance, utilization, functionality, etc. because our verification is immune to changes in unit level interfaces that change with the architecture.

Andy


Hi Andy,

Thanks for the link to Jim's paper. I've seen a very similar approach at a company I was helping to get started with VUnit, but I didn't know the origin. I think there's much to say about this, but here is a short summary:

- The paper describes the "traditional" approach as a methodology that tests the same things at all levels, and once you have the top-level testbench you throw away the testbenches at the unit (subblock in the paper) level, which is a waste. This is not an inherent property of multi-level testing but rather a result of bad practices. When unit testing I aim for full functional coverage at the unit level; higher levels of testing are focused on verifying integration issues that can't be tested at the lower levels. Since all testbenches add unique value, you don't throw them away.

- To people sceptical about unit testing I recommend starting with a unit testing framework and applying it to their current testbenches. It provides many features valuable at all levels of testing. Once you understand how it works and how it removes various obstacles in testing, you will hopefully see how unit testing becomes more available and less cumbersome. Once there, you can take advantage of the values unit testing provides, values that are lost if you just do higher level testing or variants thereof like the one in the paper.

- When adopting unit testing there is a degree of personal preference and project-specific circumstance that affects how it is applied. However, I have never met any HW or SW developer who wants to go back to where they were before once they have been properly introduced to a good unit testing framework. I've also seen a number of HW and SW teams make the transition, and they have all noticed a productivity and quality boost.


========================================================================
DETAILS:


- In the end we are all looking for improved productivity and there are a number of ways in which we can achieve this. First we should maximize the work not done (avoid waste) and if it's something that must be done we should look for ways to automate the work, speed it up, or make it simpler in some other way. Whatever methodology we use it should scale with design size and complexity and respond well to change. The need for handling change whether it's in requirements, design or implementation is an inevitable part of larger and more complex designs. We are simply not capable of figuring out everything in advance but have to adapt as we move along.

- A short code/test cycle helps us find our mistakes ASAP to avoid wasting time heading in the wrong direction. But how short is short enough? Suppose I'm developing a simple UART, 200 lines of code or so, let's say a day of work. For that I have a handful of test cases, maybe a little bit more: sending and receiving a byte, several bytes, verifying reset conditions, special cases like overflow. On average I will have a new test case/feature that can be verified every hour. It's not unlikely that I will introduce a bug during an hour of coding, and I have to test this eventually anyway, so why not do it immediately?

- One thing that may prevent me from taking this approach is overly long simulation times. The method proposed in the paper would on average contain half the complete system's worth of logic in addition to the unit being tested. That is significant in a larger system where the total amount of logic is much larger than that of a unit.

- As you mention, all the interfaces you need to address can become a burden when you have one testbench for each unit. However, you're allowed to take well-motivated shortcuts even if you're into unit testing, and people often do. Given that the CPU interface in Jim's example doesn't affect your ability to observe and control the units behind it, and it doesn't add significant delays, it may save you time to test the CPU interface (CIF) + the timer as a "unit", CIF+UART as a unit, and CIF+memory interface as a unit. To verify integration of the full system I need a testbench with a few read/writes to make sure that decoding works and I don't have conflicts on the internal bus. Same number of testbenches and unique interfaces to handle as in the paper, but no superfluous logic and no abandoned testbenches.

- The reason that unit grouping like the one I explained can be motivated is that the "main" units in the paper sit right behind a "transparent" CPU interface and they are largely independent. A larger and more complex design would have more units, they would be more "embedded" and there would be more dependencies. Figuring out how to control and observe the unit under test becomes harder from the system boundary. There may also be significant delays in the paths before the targeted unit becomes activated (= simulation time). If testing is hard and slow you won't do it as often (= longer code/test cycles). Also, the amount of logic per I/O of a modern FPGA is many times higher than it was in 2003 when this paper was written. The observability and controllability from the system interfaces are constantly getting worse.

- In general it's not possible to add units to a system in such an order that you can test them individually from the system boundary. This is the case with the CPU interface logic in the examples, which is partially tested with manual inspection. An alternative would of course be to make that self-checking and fully tested, that is, develop a proper unit test. Another approach would be to postpone testing (= longer code/test cycle) and wait until you have more units in place. This is what I do when grouping the CPU interface with the timer. This problem will get worse with larger and more complex designs.

- Some other obstacles to the short code/test cycle that VUnit removes:
- If you want frequent testing your testbenches must be self-checking. VUnit provides a check package that improves over plain VHDL asserts (but you can use assert as well) and a test runner to organize the test cases in your testbench
- With many testbenches and test cases it has to be convenient to run them frequently or you won't. VUnit will automatically compile all your files in dependency order, find and execute all your test cases (or a subset you specify), and present the result.
- VUnit can with a command line option split your test case simulations between many threads/CPU cores which can run in the background while you continue to work interactively with another instance of your simulator and the next piece of code. If you can hide the simulation time altogether there's no excuse for not running the tests.
- If you run many simulations in parallel licensing may become an issue. VUnit supports the free and open source GHDL simulator to remove that obstacle. Use whatever simulator you prefer for interactive work and let GHDL handle batch simulations.
- Developing transactions may be cumbersome as you noted. VUnit has a package providing message passing combined with code generation for your transactions.


- In a previous post I described how unit testing drives an architecture with highly cohesive and loosely coupled units, that is, a modular design. Such a design is easier to maintain since changes tend to be more localized. More localized changes mean fewer interface changes and less rework of testbenches when making the type of optimizations you mention. One thing we changed from the first generation of VUnit to where we are today is that our test cases are no longer procedures defined in a separate package, since interface changes mean that you have to update the procedure declaration as well. Instead our test cases are defined in the same scope as the unit instance so that they have access to the interfaces directly. I guess you have similar problems when the test cases are defined in architectures of a separate test control entity. Again, remove obstacles whenever you can.
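As a software-side analogy to the UART example earlier in this post, here is a hypothetical 8N1 frame encoder/decoder in Python with the kind of small, immediately runnable checks a short code/test cycle relies on (the helper functions are assumptions for illustration, not VUnit code):

```python
# Toy 8N1 UART frame encoder/decoder with small per-feature unit tests
# (hypothetical helpers, for illustration only).

def encode_frame(byte):
    """Start bit (0), 8 data bits LSB-first, stop bit (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def decode_frame(bits):
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

# One tiny test per feature, runnable the moment the feature exists:
assert encode_frame(0x00) == [0] + [0] * 8 + [1]       # idle data byte
assert decode_frame(encode_frame(0xA5)) == 0xA5        # round trip
for b in range(256):                                   # exhaustive round trip
    assert decode_frame(encode_frame(b)) == b
print("uart frame tests: pass")
```

Each assert corresponds to one of the "new test case per hour" increments: write the feature, write its check, run both immediately.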

Andy
Guest

Tue Dec 22, 2015 8:11 pm   



Lars,

A fundamental aspect of SW test that is not possible in unit level "test" is that, by definition, software "test" must use the production compiler to produce production object code, which is then run and tested on production-representative hardware. In reality, simulation is analysis, not test.

In HDL, the production compiler is a synthesis, place & route tool (not a simulator!), and the production-representative hardware/system is the target FPGA. None of these are possible at the unit level. SPR does not SPR each module and "call" it multiple times. Each instance is separately SPR'd as part of the system, and each instance is uniquely optimized for its environment within the system.

In order to perform testing for programmable logic, all stimulus/response must be provided/captured at observable interfaces of the FPGA. RTL simulation at the FPGA level can be used to verify and capture the stimulus and response at the FPGA level, which can then be applied to a real FPGA, using the real bit file (the "production" object code).

Even after that, we still have to run integration testing using real system hardware to provide the stimulus/response to the FPGA, to verify that simulation models of those external components, were accurate models.

Therefore, even if we wanted to avail ourselves of the virtues of unit level testing, all coverage must still be achieved at the FPGA level, since that is the only level at which the PL can be truly tested.

Necessary exceptions to this include white-box testing at the RTL or gate level.

All that said, I agree with your statement about the virtues of loosely coupled modules, which are encouraged by unit level development and testing. There are other ways to encourage these virtues, including coding/design standards and reviews.

Just keep in mind that what is loosely coupled in the design is not always loosely coupled in the FPGA after optimization, placement and routing, particularly if physical synthesis is employed.

Andy

Lars Asplund
Guest

Tue Jan 05, 2016 1:16 am   



Hi Andy,

I've probably been a bit careless with the word "system" as well. What I'm trying to say is that combining unit testing with higher-level, integration-focused testing is an effective way of developing your code *before* you hit the synthesis button, i.e. the scope described in Jim's paper.

*After* you hit that button, assuming HW is available, you will need additional testing and I agree that you won't see the true nature of your PL until it has been integrated into the complete system, tested over full operating conditions and so on but that doesn't take away the value of getting to this point efficiently.

To me there's also a difference between the full functional testing done at higher levels and that done at the unit level. The PL units that were hard to test from the PL boundary will be even harder to test at the higher levels when additional HW and SW are involved. Due to the sheer number of PL, SW and HW units in a larger system it would also be unmanageable to expose them all at every level of testing; it's just too much detail. For example, if you let system testing focus on verifying system functions you will handle this and also make sure that the decomposition of system functionality into unit functionality was done correctly. Unit functionality will be tested indirectly of course, but if the unit details aren't exposed to system testing there may be corner cases only visible to and verified by the unit tests. Both types of testing add unique value.

Your point that there are other activities than test to get valuable feedback on your design is important and often forgotten. Especially reviews, which have been shown to be an important ingredient in quality work. The goal to simplify, automate and shorten the feedback loop applies to these activities as well, and unit testing plays a role in doing so. Here are some ways to improve on code reviews:
* You can let a lint tool check some of your coding guidelines rather than having that manually reviewed.
* If you review code and unit tests at the same time the reviewers won't waste their time trying to figure out if the code works at all. The reviewer can also learn a lot by looking at what was tested and how it was tested. For example, if he/she expects some functionality but there are no test cases for that it means that the functionality is missing or it's not properly verified.
* If you set up a formal code review meeting you have to find a time that fits everyone so the feedback cycle can be long. Since most of the work is done by the reviewers before the meeting you can shorten that time with a tool that allows everyone to review and comment code inline and submit when done.
* Live reviewing through pair programming will make the feedback loop much shorter. To what extent pair programming is cost-efficient is often debated, but my personal view is that it should be used when *you* feel that you're about to take on a tricky piece of code and could use another pair of eyes. In the end someone else should review your code, one way or the other.
* Self-reviews also provide fast feedback and should always be done. The act of writing unit tests helps you with that. It helps you take a step back and think critically about your code. I would say that I find/avoid at least as many flaws writing the unit tests as I do running them. So self-reviews are in a way more powerful than unit testing, but here is an interesting question. Would you be as rigorous about self-reviewing if you didn't have a precise method "forcing" you into it? Even when all your test cases pass you should have another look. It works, but can I make it work more efficiently, can I make the code more readable, is code coverage sufficient, and so on. When you have a set of fast and working unit tests the fear of changing something that works is much reduced.

Reviews are powerful, but a problem is that we tend to be lazy. "There's no need to review this small change or run the slow top-level simulations because it's not going to affect anything other than what I was trying to fix. I'm just going to build a new FPGA and release it." Running a selected set of fast regression unit tests in a background thread is an effortless way of stopping you when you're wrong.

The benefits of loose coupling and high cohesion I was thinking about are those associated with the source code. Better readability, maintainability, and also significantly lower bug rates. The fact that the tools may destroy that modularity when optimizing is actually good because it means, to some extent, that I can meet my constraints while still enjoying the benefits of modular code. The alternative would be to write optimized code, destroy the code modularity and lose the benefits.
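To make the self-checking unit test idea above concrete, here is a minimal sketch in plain VHDL (the entity, signal names, and values are purely illustrative, not from any specific framework): a testbench that exercises a trivial unit and fails loudly on a wrong result, so a regression is caught without anyone inspecting waveforms.

```vhdl
-- Hypothetical self-checking testbench sketch: exercises a unit and
-- reports a failure on its own if the result is wrong.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_tb is
end entity;

architecture test of adder_tb is
  signal a, b, sum : unsigned(7 downto 0) := (others => '0');
begin
  -- unit under test: a trivial combinational adder
  sum <= a + b;

  stimulus : process
  begin
    a <= to_unsigned(2, 8);
    b <= to_unsigned(3, 8);
    wait for 1 ns;  -- let the combinational assignment settle
    assert sum = to_unsigned(5, 8)
      report "2 + 3 /= 5" severity failure;
    report "all tests passed" severity note;
    wait;  -- done, suspend forever
  end process;
end architecture;
```

A set of such testbenches can be run in batch by any simulator, which is what makes the background regression runs mentioned above cheap.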

/Lars

valtih1978
Guest

Tue Jan 05, 2016 7:20 pm   



Quote:
I don't think you understand VHDL. VHDL has sequential code, that is
what a process is.


I prefer to think that VHDL processes do not affect each other while
executing. A process is affected by others only when it goes to sleep.
http://www.sigasi.com/content/vhdls-crown-jewel


Quote:
Huh??? I use single stepping with VHDL at times. Normally it isn't
that useful because there is so much parallelism, things tend to jump
around as one process stops and another starts... same as software on a
processor with interrupts or multitasking.


You do not have interrupts in VHDL. There is no preemption. VHDL is
graceful, "cooperative" multitasking.

valtih1978
Guest

Tue Jan 05, 2016 7:29 pm   



Quote:
That's also getting more common in software these days. Ever tried debugging
a 1M messages / second Akka application ?


Thanks. Now nobody can say that a VHDL program is a program, after you
pointed this out. Indeed, if a VHDL program is indistinguishable from a
modern (Akka) application, it cannot be called a program. By the way,
what do you call a piece of Akka code? Why is Akka code not a "description"?

There was no reason to resort to this killer argument. Because VHDL is
an executable "description", it clearly cannot be a program. Being
executable and being a "program" are mutually exclusive, even disjoint,
things, especially if you execute a description.

Nicholas Collin Paul de G
Guest

Tue Jan 05, 2016 11:27 pm   



On January 5th, 2016, Valtih1978 claimed:
|---------------------------------------------|
|"[. . .] |
| |
|[. . .] |
|[. . .] Being executable |
|and "program" are mutually exclusive, [. . .]|
|[. . .]" |
|---------------------------------------------|

False.

Truly,
Paul Colin Gloster

rickman
Guest

Tue Jan 05, 2016 11:46 pm   



On 1/5/2016 7:20 AM, valtih1978 wrote:
Quote:

I don't think you understand VHDL. VHDL has sequential code, that is
what a process is.

I prefer to think that VHDL processes do not affect each other while
executing. A process is affected by others only when it goes to sleep.
http://www.sigasi.com/content/vhdls-crown-jewel


Huh??? I use single stepping with VHDL at times. Normally it isn't
that useful because there is so much parallelism, things tend to jump
around as one process stops and another starts... same as software on a
processor with interrupts or multitasking.

You do not have interrupts in VHDL. There is no preemption. VHDL is a
graceful, "cooperative" multitasking.


Uh, interrupts are about a single processor shared between multiple
tasks. VHDL allows you to describe multiple hardware "execution units"
which all operate in parallel. If you want a single execution unit
shared between tasks you can describe that too - complete with
interrupts. Your choice.

What I was describing is the behavior of the simulator which
essentially *is* a single processor shared between VHDL tasks. No, it
doesn't have interrupts, but the task switching is very messy to try to
follow while single stepping.

--

Rick

Nicholas Collin Paul de G
Guest

Tue Jan 05, 2016 11:47 pm   



On January 5th, 2016, Valtih1978 sent:
|-------------------------------------------------------------------------|
|"[. . .] |
| |
|> I don't think you understand VHDL. VHDL has sequential code, that is |
|> what a process it. |
| |
|I prefer" |
|-------------------------------------------------------------------------|

I would prefer to be a millionaire but this does not make it so.

|-------------------------------------------------------------------------|
|"to think that VHDL processes do not affect each other while |
|executing. The process is affected by others only when goes sleeping." |
|-------------------------------------------------------------------------|

Therefore you admit that processes are affected by other processes.

|-------------------------------------------------------------------------|
|" http://www.sigasi.com/content/vhdls-crown-jewel " |
|-------------------------------------------------------------------------|

I quote from this webpage:
"[. . .]
[. . .] A signal value update may trigger a number of
processes. [. . .]
[. . .]"

|-------------------------------------------------------------------------|
|"> Huh??? I use single stepping with VHDL at times. Normally it isn't |
|> that useful because there is so much parallelism, things tend to jump |
|> around as one process stops and another starts... same as software on a|
|> processor with interrupts or multitasking. |
| |
|You do not have interrupts in VHDL. There is no preemption." |
|-------------------------------------------------------------------------|

It was a comparison - like saying that stars are like very big fires
in the sky. There are not very big fires in the sky - there are stars
and stars are like very big fires.

|-------------------------------------------------------------------------|
|"VHDL is a |
|graceful," |
|-------------------------------------------------------------------------|

VHDL is graceful.

|-------------------------------------------------------------------------|
|""cooperative" multitasking."" |
|-------------------------------------------------------------------------|

The SystemC(R) standard is defined in terms of cooperative so-called
multitasking, which (unlike VHDL) does not have true concurrency.

Regards,
Paul Colin Gloster

valtih1978
Guest

Wed Jan 06, 2016 12:44 am   



> In VHDL you can describe a CPU, shared between processes with interrupts

Sorry, I was sure that you were talking about a simple simulation of VHDL
processes. Now you say that my TV has apples in it. Indeed, I can
broadcast the garden and have apples in my TV; it therefore has apples
in it. Gotcha! That is a feature of the TV set. A VHDL process cannot be
interrupted. It is dedicated to its program. It cannot execute anything
else at all.


Quote:
Processes can be affected by others only while waiting for events
Therefore you admit that processes are affected by other processes.


So they can be interrupted, you say, and I admit it. Indeed, when you have
nothing to do and come to the task table to request a new task, and they
give you that task, your execution is "interrupted"! You got me! Indeed,
I did not consider that waiting for a task can be interrupted by the
response.

This also means that Akka actors are interruptible. I do not understand
what people debate in this case:
https://groups.google.com/forum/#!msg/akka-user/TlIbfaC1eb8/EQ8S2NoI-oIJ. They
seem unable to find a way to interrupt the actors. What is the
problem? Every actor is interruptible, as you say. Just let it finish
its work and read your interrupt/task from the queue!
