
Phrasing!


Tim Wescott
Guest

Tue Nov 22, 2016 8:30 am   



On Tue, 22 Nov 2016 01:33:12 +0000, Tom Gardner wrote:

Quote:
On 22/11/16 00:33, Tim Wescott wrote:
On Mon, 21 Nov 2016 14:51:13 -0800, Kevin Neilson wrote:

I actually came back a bit let down from a recent Xilinx user's
meeting at just how much focus Xilinx is putting on their 'high
level' tools. I'm of the opinion that Xilinx is sinking a ton of
resources into something that a small minority will ever use. (And
will probably not last long either). To Xilinx, RTL design is
dead...

--Mark

I wish they would just focus all their effort on the synthesizer and
placer. The chips get better and better, but the software seems
stuck.
I think the high-level tools are not for serious users. You can only
use them if you don't care about clock speed, and if you don't care
about clock speed, you should be using a processor or something.

Maybe if the synthesizer got better the demand for hugely fast chips
would go down, and thus they'd shoot themselves in the foot -- at least
from their perspective.

Synthesis is easy. Place and route is hard. A big question is how to
either decouple or integrate them.

Particularly when you see the size of the big Xilinx chips and consider
the relative time taken to get across the chip and through a single LUT
(and then through the integrated ARM cores :) )

But I suspect I'm close to teaching you how to suck eggs :)


Nah -- about the teaching me to suck eggs part, at least. I understand
the principles involved, but it's not something I've ever done.

Assuming that people know what the hell they're doing it can't be an easy
problem, because it hasn't been fully solved. At least -- to my
knowledge the process is still an iterative one that's at least partially
based on some sort of a pseudo-random process (presumably simulated
annealing).

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!

rickman
Guest

Tue Nov 22, 2016 8:30 am   



On 11/21/2016 3:47 PM, GaborSzakacs wrote:
Quote:
Tim Wescott wrote:
On Mon, 21 Nov 2016 10:07:41 +0000, Tom Gardner wrote:

On 20/11/16 22:43, Tim Wescott wrote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with
Vivado for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a
23-long carry chain (6 CARRY4 blocks). This is twice as big as it
should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...
I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the
assembly that the thing was spitting out. Now, if you've got a good
optimizer (and the gnu C optimizer is better than I am on all but a
very few of the processors I've worked with recently), you just express
your intent and the compiler makes it happen most efficiently.

Clearly, that's not yet the case, at least for that particular
synthesis tool. It's a pity.
Of course sometimes you don't want optimisation. Consider, for example,
bridging terms in an asynchronous circuit.

OK. I give up -- what do you mean by "bridging terms"?

In general, I would say that if this is an issue, then (as with the
'volatile' and 'mutable' keywords in C++), there should be a way in
the language to express your intent to the synthesizer -- either a way
to say "don't optimize this section", or a way to say "keep this
signal no matter what", or a syntax that lets you lay down literal
hardware, etc.


Bridging terms refers to terms that cover transitions in an asynchronous
sequential circuit. Xilinx tools specifically do not honor this sort of
logic and it really has no business in their FPGA's. However, if you
insist on generating asynchronous sequential logic in a Xilinx FPGA, you
will need to instantiate LUTs to get the coverage you're looking for.


Xilinx parts do not require bridging terms. If two canonical terms,
adjacent in the Karnaugh map, are set to the same value in the LUT there
is no glitch if a single input transitions from one term to another.
This is because they use transmission gates for the multiplexer and
there is enough capacitance to hold the signal on the output if neither
input is driving the output as the switches transition.

If you think about it just a bit, you will realize most FPGA LUTs only
have canonical product terms and so can't have "cover terms" or
"bridging terms".

--

Rick C

Tom Gardner
Guest

Tue Nov 22, 2016 4:23 pm   



On 22/11/16 01:50, Tim Wescott wrote:
Quote:
On Tue, 22 Nov 2016 01:33:12 +0000, Tom Gardner wrote:

On 22/11/16 00:33, Tim Wescott wrote:
On Mon, 21 Nov 2016 14:51:13 -0800, Kevin Neilson wrote:

I actually came back a bit let down from a recent Xilinx user's
meeting at just how much focus Xilinx is putting on their 'high
level' tools. I'm of the opinion that Xilinx is sinking a ton of
resources into something that a small minority will ever use. (And
will probably not last long either). To Xilinx, RTL design is
dead...

--Mark

I wish they would just focus all their effort on the synthesizer and
placer. The chips get better and better, but the software seems
stuck.
I think the high-level tools are not for serious users. You can only
use them if you don't care about clock speed, and if you don't care
about clock speed, you should be using a processor or something.

Maybe if the synthesizer got better the demand for hugely fast chips
would go down, and thus they'd shoot themselves in the foot -- at least
from their perspective.

Synthesis is easy. Place and route is hard. A big question is how to
either decouple or integrate them.

Particularly when you see the size of the big Xilinx chips and consider
the relative time taken to get across the chip and through a single LUT
(and then through the integrated ARM cores :) )

But I suspect I'm close to teaching you how to suck eggs :)

Nah -- about the teaching me to suck eggs part, at least. I understand
the principles involved, but it's not something I've ever done.

Assuming that people know what the hell they're doing it can't be an easy
problem, because it hasn't been fully solved. At least -- to my
knowledge the process is still an iterative one that's at least partially
based on some sort of a pseudo-random process (presumably simulated
annealing).


I'm sure heuristics are involved, of course, but even
they will only get you so far.

From memory, a CLB "gate" delay is of the order of 100ps
and it can take ~1ns for a logic signal to cross the chip
(clocks can be a bit faster due to dedicated drivers
and tracks). Even a "global reset" becomes a heretical
concept.

Now, what delay should you guess a particular gate+track
will have, and where should you place it? Ditto the
100,000 others - to maximise the clock rate of the
ensemble.

As you might guess, the workflow is
1 design
2 synthesise (from RTL/behavioural/system design)
3 simulate, to get an idea of speed
4 place and route
5 simulate, with "actual" delays
6 utter expletive deleteds
7 goto 1

Yes, there are many means to constrain the designs and
help the place and route, from specifying which timings
matter to nailing down functions in individual LUT/CLBs.
But they only go so far.

Kevin Neilson
Guest

Tue Nov 22, 2016 7:48 pm   



Quote:
I'm sure heuristics are involved, of course, but even
they will only get you so far.

From memory, a CLB "gate" delay is of the order of 100ps
and it can take ~1ns for a logic signal to cross the chip
(clocks can be a bit faster due to dedicated drivers
and tracks). Even a "global reset" becomes a heretical
concept.

In the part I'm using, LUT delays are 43 ps and net delays between them can easily be 1 ns. I'm looking at a net segment right now that is 950 ps yet only spans about 3% of the width of the die. It's short. (It does cross an IOB column, which is probably part of the problem.) The synthesizer's heuristics seem to dislike using MUXF7s and MUXCYs, even though those have dedicated routing, because a plain LUT's 43 ps delay makes it look like the better choice on paper. But when the route to that LUT is >500 ps, the advantage is lost.


These are nice chips, but the synthesizer is still weak. And it seems odd that a slight rephrasing resulting in an equivalent Boolean expression would yield an entirely different synthesis result.

Richard Damon
Guest

Wed Nov 23, 2016 11:33 pm   



On 11/21/16 5:07 AM, Tom Gardner wrote:
Quote:
On 20/11/16 22:43, Tim Wescott wrote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

Clearly, that's not yet the case, at least for that particular synthesis
tool. It's a pity.

Of course sometimes you don't want optimisation.
Consider, for example, bridging terms in an asynchronous
circuit.


If you are thinking in terms of an AND-OR tree for the typical LUT-based
FPGA, you aren't going to get it right. Most FPGAs now use LUTs which, at
least for a single LUT, are normally guaranteed to be glitch-free for
single-input transitions (so no need for the bridging terms). If you need
more inputs than a single LUT provides, and you need the glitch-free
performance, then trying to force a massive AND-OR tree is normally going
to be very inefficient, and I find it worth building the exact structure I
need from the low-level, vendor-provided fundamental LUT/carry primitives.
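
As a rough illustration of what I mean, for the 69-bit |x case in this
thread the hand-built structure would look something like the sketch below.
It is untested and from memory: the LUT6/CARRY4 port names are the 7-series
unisim ones, but the INIT value and constant hookup would need checking
against the libraries guide before trusting it.

// Hand-built 69-bit reduction OR: 12 NOR6 LUTs feeding a 12-stage carry
// chain (3 CARRY4s).  Each carry stage passes the carry along when its
// group is all zeros (S = 1) and injects a 1 otherwise (DI = 1), so the
// last carry out is |x.
module wide_or_69 (
    input         clk,
    input  [68:0] x,
    output reg    x_neq_0
);
    wire [71:0] xp = {3'b000, x};      // pad to 12 groups of 6 bits
    wire [11:0] grp_is_zero;
    wire [12:0] chain;

    assign chain[0] = 1'b0;            // nothing seen yet at the bottom

    genvar i;
    generate
        for (i = 0; i < 12; i = i + 1) begin : nor6
            // INIT bit 0 set => O = 1 only when all six inputs are 0
            LUT6 #(.INIT(64'h0000_0000_0000_0001)) lut (
                .O (grp_is_zero[i]),
                .I0(xp[6*i+0]), .I1(xp[6*i+1]), .I2(xp[6*i+2]),
                .I3(xp[6*i+3]), .I4(xp[6*i+4]), .I5(xp[6*i+5])
            );
        end
        for (i = 0; i < 3; i = i + 1) begin : cy
            CARRY4 carry4 (
                .CO    (chain[4*i+4 : 4*i+1]),
                .O     (),                     // XOR outputs unused
                .CI    (chain[4*i]),
                .CYINIT(1'b0),
                .DI    (4'b1111),
                .S     (grp_is_zero[4*i+3 : 4*i])
            );
        end
    endgenerate

    always @(posedge clk) x_neq_0 <= chain[12];
endmodule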

Tom Gardner
Guest

Wed Nov 23, 2016 11:40 pm   



On 23/11/16 16:33, Richard Damon wrote:
Quote:
On 11/21/16 5:07 AM, Tom Gardner wrote:
On 20/11/16 22:43, Tim Wescott wrote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

Clearly, that's not yet the case, at least for that particular synthesis
tool. It's a pity.

Of course sometimes you don't want optimisation.
Consider, for example, bridging terms in an asynchronous
circuit.


If you are thinking in terms of an AND-OR tree for the typical LUT-based FPGA,
you aren't going to get it right. Most FPGAs now use LUTs which, at least
for a single LUT, are normally guaranteed to be glitch-free for single-input
transitions (so no need for the bridging terms). If you need more inputs than a
single LUT provides, and you need the glitch-free performance, then trying
to force a massive AND-OR tree is normally going to be very inefficient, and I
find it worth building the exact structure I need from the low-level, vendor-
provided fundamental LUT/carry primitives.


Agreed.

Tim Wescott
Guest

Sat Nov 26, 2016 4:26 am   



On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Quote:
Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...


Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!

Tim Wescott
Guest

Sat Nov 26, 2016 8:30 am   



On Fri, 25 Nov 2016 23:57:31 -0500, rickman wrote:

Quote:
On 11/25/2016 4:26 PM, Tim Wescott wrote:

Reading this whole thread, I'm reminded of a gripe I have about the
FPGA manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive
university research on FPGA optimization that you might desire, and
possibly even see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to
more and better optimization, and lots of people experimenting with
different optimization approaches.

Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?


"You have reached the Xilinx automated help line. To ask about problems
using our chips with unapproved tools, please hang up now..."

But yes, I see your point.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!

rickman
Guest

Sat Nov 26, 2016 8:30 am   



On 11/25/2016 4:26 PM, Tim Wescott wrote:
Quote:

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.


Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?

--

Rick C

Tom Gardner
Guest

Sat Nov 26, 2016 4:52 pm   



On 25/11/16 21:26, Tim Wescott wrote:
Quote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.


Many people have suggested that advantage.

I presume the information will never be released because
the information is
- not part of an "API" that is guaranteed over time
- highly proprietary
- highly device specific
- liable to vary and be corrected over time (Xilinx
is good at issuing tool-suite updates)
- only available in a form that is directly relevant
to their design tools (i.e. tightly coupled)
and hence
- is difficult for someone else to interpret and process
correctly
- opens all sorts of cans of worms if a third party
gets it wrong

Analogy: Intel guarantee the "machine code API" of
their processors, but the detailed internal structure
is closely guarded and varies significantly across
processor generations.

Kevin Neilson
Guest

Sat Nov 26, 2016 8:41 pm   



Quote:
So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?


I can totally understand why Xilinx wouldn't want to mess with this. It's a support nightmare and customers would definitely associate poor open-source software with the chips. I'm not even convinced that open-source tools would be any better.

At least, in the worst case, I can instantiate primitives, but I wish the tools gave me a little more ability to override bad decisions without doing that. There are a lot of cases in which I can do a better job, but if I put in KEEPs, the tools ignore them, leaving me little choice but to instantiate primitives. Another thing they could do is to have more synthesis directives. There are lots of good structures in the hardware, such as the F7-F9 muxes, that the synthesizer almost refuses to use and that I can only reach by instantiating primitives. Perhaps a directive like (* USE_F7 *) would allow me to infer a mux but force it onto the F7 resources.

And the built-in FIFOs: why can't I infer them? Preferably using the push/pop operations from SystemVerilog queues. That is the kind of "high-level synthesis" I am looking for: not having to write structural HDL. These kinds of incremental changes to their tools would be far more useful than their HLS or AccelDSP or whatever.

I wish that, after the industry has been working on this since 1984, they had more solid synthesis. There is supposed to be an intermediate layer between language parsing and synthesis to primitives. When I write the same logic in two slightly different ways and get totally different primitives, I know that something is kludged. When I have to DeMorganize by hand to get better synthesis, something is wrong. I am being paid to work out complex problems in Galois arithmetic, not to do freshman-level Boolean logic.
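
For the record, this is roughly what I end up writing today when I want the
F7 mux. It's a sketch from memory rather than a verified design -- the
primitive and port names are the 7-series unisim ones, and dont_touch is
the Vivado attribute spelling, as far as I recall:

// Select between two LUT6 results with a hand-instantiated MUXF7 so the
// tool can't fold it back into ordinary LUTs.
module mux_via_f7 (
    input  [5:0] a_bits,
    input  [5:0] b_bits,
    input        sel,
    output       y
);
    (* dont_touch = "true" *) wire a_or, b_or;

    // Two reduction ORs, one per LUT6 (INIT = all ones except bit 0).
    LUT6 #(.INIT(64'hFFFF_FFFF_FFFF_FFFE)) lut_a (
        .O(a_or), .I0(a_bits[0]), .I1(a_bits[1]), .I2(a_bits[2]),
        .I3(a_bits[3]), .I4(a_bits[4]), .I5(a_bits[5]));
    LUT6 #(.INIT(64'hFFFF_FFFF_FFFF_FFFE)) lut_b (
        .O(b_or), .I0(b_bits[0]), .I1(b_bits[1]), .I2(b_bits[2]),
        .I3(b_bits[3]), .I4(b_bits[4]), .I5(b_bits[5]));

    // Dedicated F7 mux combining the two LUT outputs.
    MUXF7 u_f7 (.O(y), .I0(a_or), .I1(b_or), .S(sel));
endmodule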

Theo Markettos
Guest

Sat Nov 26, 2016 9:40 pm   



Tom Gardner <spamjunk_at_blueyonder.co.uk> wrote:
Quote:
Analogy: Intel guarantee the "machine code API" of
their processors, but the detailed internal structure
is closely and varies significantly across processor
generations.


There is indeed work going on into FPGA 'virtualisation' - creating
vendor-neutral intermediate structures that open tools can compile down to,
which either the vendor tools can then pick up and compile, or which map
onto some pre-synthesised FPGA-on-FPGA overlay.

I'm not sure if there's anything near mainstream, but I can see it's going
to become increasingly relevant - if Microsoft have a datacentre containing
a mix of Virtex 6, Virtex 7, Ultrascale, Stratix V, Stratix 10, ... FPGAs,
based on whatever models were cheap when they bought that batch of servers,
the number of images that needs to be supported will start multiplying and
so an 'ISA' for FPGAs would help the heterogeneity problem.

Theo

Richard Damon
Guest

Sat Nov 26, 2016 10:38 pm   



On 11/25/16 4:26 PM, Tim Wescott wrote:
Quote:
Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.


The big issue is that much of the information that would be included in
such a publication is information the companies classify as highly
competition-sensitive. While companies document quite well the basic
structure of the fundamental logic elements and I/O blocks (and the other
special computational blocks), what is normally not well described,
except in very general terms, is the routing. In many ways the routing
is the secret sauce that will make or break a product line. If the
routing is too weak, users will find they can't use a lot of the logic
in the device (what they think they are paying for); too much routing
and the chips get slower and too expensive (since building the routing
IS a significant part of the cost of a device).

To 'open source' the bitfiles, you by necessity need to explain and
document how you configured your sparse routing matrix, which may well
help your competitors' next generation of products, at the cost of your
own future products.

mac
Guest

Tue Nov 29, 2016 11:45 pm   



Kevin Neilson <kevin.neilson_at_xilinx.com> wrote:
Quote:
I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

I know! I often feel like I'm a software guy, but stuck in the 80s,
poring over every line generated by the assembler to make sure it's optimized.


There is an IEEE standard for synthesizable VHDL.
https://standards.ieee.org/findstds/standard/1076.6-2004.html

But it *is* like writing C pre-ANSI, when every compiler had its own
variant.

--
mac the naf

Mark Curry
Guest

Wed Nov 30, 2016 12:19 am   



In article <1973855991.502148597.323655.acolvin-efunct.com_at_news.eternal-september.org>,
mac <acolvin_at_efunct.com> wrote:
Quote:
Kevin Neilson <kevin.neilson_at_xilinx.com> wrote:
I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

I know! I often feel like I'm a software guy, but stuck in the 80s,
poring over every line generated by the assembler to make sure it's optimized.


There is an IEEE standard for synthesizable VHDL.
https://standards.ieee.org/findstds/standard/1076.6-2004.html

But is *is* like writing C per-ANSI, when every compiler had its own
variant.


There's an IEEE standard for the synthesizable subset of Verilog-2001 too
(IEEE 1364.1-2002). I know it well, as I contributed to it. It's a shame they
never did one for SystemVerilog. It was suggested, but some internal politicking
on the working group struck it down.

It's left us with a hit-and-miss method of finding the common ground between
toolsets. We're actively struggling with this now.

But this doesn't change Kevin's observations much. Defining what the tool should
accept still gives the tool a LOT of leeway on HOW to build it - as Kevin's
shown with this example. After all, all the implementations shown in this example
are "correct". Some are just closer to optimal than others (and as always the
definition of "optimal" isn't concrete...)

Regards,

Mark
