FPGA Market Entry Barriers



Guest

Sat Oct 20, 2018 11:45 pm   



On Saturday, October 20, 2018 at 12:28:51 PM UTC-4, Richard Damon wrote:
Quote:
On 10/19/18 12:39 PM, Kevin Neilson wrote:
For some satellite work I used the Microsemi RTAX, which filled a niche for rad-hard designs. It was slow, had few gates, could only be burned once, and had poor tools, but still had a small market. They made up for low volume with high prices. I think they're still around. Since I work with a lot of Galois arithmetic, one thing I'd like to see is an FPGA with special structures for Galois matrix multipliers (instead of, say, DSP48s) and matrix transposers, but I don't think the demand is enough to warrant a special architecture.

Microsemi is still around (though part of Microchip now). They have a
number of FPGA families that are somewhat distinct from the big two.


I've never found the Actel devices to be a good solution for any of my problems. Mostly they follow the same path that the big two follow regarding packages, namely larger, more pins and more dollars than optimal for my designs. I find Lattice is the only company that has much in the smaller packages with a low enough price.

I wonder what Microchip will do with the FPGA product line now. I see they have Atmel's CPLD/SPLD line as well as the rather obsolete AT40K products, but nothing of the Actel devices. I guess they are running Microsemi as a separate company for now.

Rick C.

Anssi Saari
Guest

Tue Oct 23, 2018 12:45 pm   



gnuarm.deletethisbit_at_gmail.com writes:

Quote:
I believe Achronix started out with the idea of asynchronous logic.
I'm not clear if they continue to use that or not, but it is not
apparent from their web site. Their target is ultra-fast clock speeds
enabling FPGAs in new markets. I don't see them showing up on FPGA
vendor lists so I assume they are still pretty low volume.


I think Achronix is embedded FPGAs only at this point.

Quote:
Tabula was based on 3D technology, but they don't appear to have
lasted. I believe they were also claiming an ability to reconfigure
logic in real time which sounds like a very complex technology to
master. Not sure what market they were targeting.


I spoke with an ex-Achronix guy a few years ago at a conference. He was
confident that Tabula never had their mystical reconfiguring tech
working and that the whole company was a scam. He basically said they
are good with muxes and suck with random logic. Tabula was at the
conference demonstrating some kind of ethernet switch which would be
mostly muxes... No idea if he was right or wrong. He was expecting
Achronix to go under too. Tabula closed in 2015 and apparently Altera
hired some of the team and maybe some of Tabula's IP. But I don't see
them putting out anything resembling what Tabula claimed to have.

> Other than the technologies, what other barriers do new FPGA companies face?

Really the question is, how much better than Xilinx or Intel would you
need to be to break into the market? You'd probably need a billionaire
who'd want to disrupt the market. Musk and Tesla come to mind. Another
possibility I can think of is state-funded efforts from China; I think I
read they've increased funding for hardware research considerably.

I remember an interview with a Flex Logix founder, they do embedded
FPGAs only. He basically said he had no interest in trying to compete
with the two giants and so he found a niche. The niche may have grown,
Achronix is there too and I think I heard old timer QuickLogic also
plays in that business now. Probably some other startups too.

Come to think of it, it would be interesting to know what companies and
what chips actually integrate FPGAs and do they farm out the design
work? Or is it the embedded FPGA provider who does the design work for
the programmable part?


Guest

Wed Oct 24, 2018 3:45 am   



On Tuesday, October 23, 2018 at 6:50:48 AM UTC-4, Anssi Saari wrote:
Quote:
gnuarm.deletethisbit_at_gmail.com writes:

I believe Achronix started out with the idea of asynchronous logic.
I'm not clear if they continue to use that or not, but it is not
apparent from their web site. Their target is ultra-fast clock speeds
enabling FPGAs in new markets. I don't see them showing up on FPGA
vendor lists so I assume they are still pretty low volume.

I think Achronix is embedded FPGAs only at this point.

Tabula was based on 3D technology, but they don't appear to have
lasted. I believe they were also claiming an ability to reconfigure
logic in real time which sounds like a very complex technology to
master. Not sure what market they were targeting.

I spoke with an ex-Achronix guy a few years ago at a conference. He was
confident that Tabula never had their mystical reconfiguring tech
working and that the whole company was a scam. He basically said they
are good with muxes and suck with random logic. Tabula was at the
conference demonstrating some kind of ethernet switch which would be
mostly muxes... No idea if he was right or wrong. He was expecting
Achronix to go under too. Tabula closed in 2015 and apparently Altera
hired some of the team and maybe some of Tabula's IP. But I don't see
them putting out anything resembling what Tabula claimed to have.

Other than the technologies, what other barriers do new FPGA companies face?

Really the question is, how much better than Xilinx or Intel would you
need to be to break into the market? You'd probably need a billionaire
who'd want to disrupt the market. Musk and Tesla come to mind. Another
possibility I can think of is state-funded efforts from China; I think I
read they've increased funding for hardware research considerably.

I remember an interview with a Flex Logix founder, they do embedded
FPGAs only. He basically said he had no interest in trying to compete
with the two giants and so he found a niche. The niche may have grown,
Achronix is there too and I think I heard old timer QuickLogic also
plays in that business now. Probably some other startups too.


Yes, I guess embedded PIP (Programmable Intellectual Property) is a niche. I think there are other niches. Lattice found (or bought a company that found) a niche for low-power, small FPGAs. I expect there are others. Or you can market devices differently. I suppose X and A know where their bread is buttered, but I have always felt that FPGAs were underexploited and could very easily be used like MCUs if they were marketed like MCUs. Lots of flavors in lots of packages. Xilinx has always acted like it can't afford to produce a wider range of packages. 10,000 LUTs is still a small chip. Give it as many I/Os as a 48-pin QFP will support and I think it will be able to do a lot more than an MCU. Some people think the Propeller is great, but you could have 30, 40 or even 50 small soft cores in a small FPGA all working independently.
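
To make that concrete, here is a quick Verilog sketch of what I mean (just an illustration: tiny_core is a hypothetical placeholder, a free-running counter standing in for any small soft CPU, so the example is self-contained):

module tiny_core (
    input  wire clk,
    input  wire rst,
    output wire io_pin
);
    // placeholder "core": a free-running counter driving one pin
    reg [23:0] count;
    always @(posedge clk) begin
        if (rst) count <= 24'd0;
        else     count <= count + 24'd1;
    end
    assign io_pin = count[23];
endmodule

module core_array #(
    parameter N = 32            // 30-50 copies is plausible in a ~10k-LUT part
) (
    input  wire         clk,
    input  wire         rst,
    output wire [N-1:0] io_pins
);
    // tile N independent cores, each owning one I/O pin
    genvar i;
    generate
        for (i = 0; i < N; i = i + 1) begin : cores
            tiny_core u_core (
                .clk    (clk),
                .rst    (rst),
                .io_pin (io_pins[i])
            );
        end
    endgenerate
endmodule

Swap tiny_core for any small soft CPU and you get the Propeller-style array of independent processors, scaled by one parameter.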


Quote:
Come to think of it, it would be interesting to know what companies and
what chips actually integrate FPGAs and do they farm out the design
work? Or is it the embedded FPGA provider who does the design work for
the programmable part?


Having all the work done by the FPGA provider would limit the number of applications.

Rick C.

HT-Lab
Guest

Fri Oct 26, 2018 3:45 pm   



On 19/10/2018 17:14, Kevin Neilson wrote:
Quote:
Wouldn't it be nice if you could write your design in sequential untimed
code and use a tool to generate the architecture for you based on
resource and timing constraints?

Well, yes, it would be nice if such a tool existed. It doesn't.


It does. If you ever visit DVCon, DAC, DATE, etc., go to the
Mentor/Synopsys/Cadence/Xilinx stand and tell them you are interested in
architectural exploration using untimed C/C++ code.

Quote:
If it did, people wouldn't be paying me to make hand-pipelined designs.


What about a) your clients are not willing to change their design to
untimed C/C++ code? b) it is cheaper to pay a contractor for two months
than it is to pay $100K+ for an HLS tool?

Quote:
People wouldn't pay me to spend two months doing what I can model in
Matlab in two lines of code.


I do hope you are not working for Xilinx as they will call you in for a
mandatory training session on Vivado's HLS :-)

https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html

After watching this consider what a top of the range HLS tool can do.

Hans
www.ht-lab.com

Jecel
Guest

Fri Oct 26, 2018 8:45 pm   



Niches have to be large enough to allow the costs of the masks and engineering to be recovered.

The use of open source tools lets the iCE40 be used in applications such as Raspberry Pi "hats", but the 8K-LUT limit is restricting this niche.

Some hobbyists long for DIP packages and 5 V I/O. Though this is a tiny niche, using an obsolete node (like 250 nm or 350 nm) might make crowdfunding practical. It would probably be more popular than

https://www.crowdsupply.com/chips4makers/retro-uc

About SiliconBlue, part of their motivation was the expiration of a bunch of FPGA patents. Even more have expired since then.

-- Jecel


Guest

Sat Oct 27, 2018 1:45 am   



On Friday, October 26, 2018 at 3:24:34 PM UTC-4, Jecel wrote:
Quote:
Niches have to be large enough to allow the costs of the masks and engineering to be recovered.

The use of open source tools lets the iCE40 be used in applications such as Raspberry Pi "hats", but the 8K-LUT limit is restricting this niche.


This isn't a niche; it would need to grow to be a micro-niche. No FPGA vendor even thinks about this.


Quote:
Some hobbyists long for DIP packages and 5 V I/O. Though this is a tiny niche, using an obsolete node (like 250 nm or 350 nm) might make crowdfunding practical. It would probably be more popular than

https://www.crowdsupply.com/chips4makers/retro-uc


Time expired with only 18% funding raised. Sad

Not sure you need to go way up to 350 nm. I expect the fab costs at 150 nm aren't so bad. That equipment was amortized a long time ago. I guess the real issue is mask costs. Not sure how bad that is at 150 nm, but I believe you can still support 5 volt I/Os since many MCUs do it.


> About SiliconBlue, part of their motivation was the expiration of a bunch of FPGA patents. Even more have expired since then.

That may be, but expired patents aren't really significant. The basic functionality of the LUT/FF and routing has been available for quite some time now. The details of FPGA architectures only matter when you are competing head to head. That's why SiliconBlue focused on a market segment that was ignored by the big players. The big two chase the telecom market with max-capacity, high-pin-count barn burners, and the other markets are addressed with the same technology, making it impossible to compete in the low-power areas. In the end no significant user who is considering an iCE40 part even looks at a part from Xilinx or Altera.

Rick C.

Kevin Neilson
Guest

Sat Oct 27, 2018 3:45 am   



Quote:
That may be, but expired patents aren't really significant. The basic functionality of the LUT/FF and routing has been available for quite some time now. The details of FPGA architectures only matter when you are competing head to head. That's why SiliconBlue focused on a market segment that was ignored by the big players. The big two chase the telecom market with max-capacity, high-pin-count barn burners, and the other markets are addressed with the same technology, making it impossible to compete in the low-power areas. In the end no significant user who is considering an iCE40 part even looks at a part from Xilinx or Altera.

Rick C.


I don't do hobbyist stuff anymore since I'm too busy with work but I would think one could just use eval boards. I don't know why a DIP would be required. I don't know about the cost of the tools for a hobbyist, though.

As for the high pin counts, I would think that the need would be mitigated with all the high-speed serial interfaces.

Kevin Neilson
Guest

Sat Oct 27, 2018 3:45 am   



Quote:
https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html

After watching this consider what a top of the range HLS tool can do.


I know you're joking about HLS but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction. It really does take months to do what you can do in Matlab in a couple of hours. It's not really any different in that respect than it was fifteen years ago. The market would be so much bigger if things were just a bit easier. I haven't figured out if this is because it's an inherently difficult problem or if the tool designers aren't that skilled. A little of both, I think. I still think the people working on HLS are going in the wrong direction and need to spend more time refining the HDL tools. They aim too high and fail utterly. Why can't I infer a FIFO? Just start with that. The amount of time I spend on very basic structures is crazy. I don't think any of this will change in the near future, though.

(PS, I know you were probably actually serious about HLS.)


Guest

Sat Oct 27, 2018 4:45 am   



On Friday, October 26, 2018 at 9:49:03 PM UTC-4, Kevin Neilson wrote:
Quote:
https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html

After watching this consider what a top of the range HLS tool can do.


I know you're joking about HLS but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction. It really does take months to do what you can do in Matlab in a couple of hours.


I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???

Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.

Even then I have worked on a number of projects where the design was size and/or speed constrained and it wasn't a debilitating burden... just like the CPU coding projects I've worked on.

I think the real difference is that the tasks done in FPGAs tend to be fairly complex. I have done a number of FPGA projects where most of the work could have been done in a CPU, and the FPGA project went rather quickly.


> It's not really any different in that respect than it was fifteen years ago. The market would be so much bigger if things were just a bit easier. I haven't figured out if this is because it's an inherently difficult problem or if the tool designers aren't that skilled. A little of both, I think. I still think the people working on HLS are going in the wrong direction and need to spend more time refining the HDL tools. They aim too high and fail utterly. Why can't I infer a FIFO? Just start with that. The amount of time I spend on very basic structures is crazy. I don't think any of this will change in the near future, though.

I don't know, why can't *you* infer a fifo? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer fifos. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity. Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.
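
For reference, that gray-counter technique boils down to something like the sketch below: plain RTL that synthesis tools can infer, with no vendor macro. This is a from-memory paraphrase of the well-known approach, so the names, pointer widths and flag timing are illustrative rather than anything from a specific app note.

module async_fifo #(
    parameter DATA_W = 8,
    parameter ADDR_W = 4                    // depth = 2**ADDR_W entries
) (
    // write side
    input  wire              wclk, wrst, wr_en,
    input  wire [DATA_W-1:0] wdata,
    output wire              full,
    // read side
    input  wire              rclk, rrst, rd_en,
    output reg  [DATA_W-1:0] rdata,
    output wire              empty
);
    reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];      // inferred RAM

    // pointers carry one extra bit so full and empty can be told apart
    reg  [ADDR_W:0] wbin, wgray, rbin, rgray;
    reg  [ADDR_W:0] rgray_w1, rgray_w2;          // read pointer synced into wclk
    reg  [ADDR_W:0] wgray_r1, wgray_r2;          // write pointer synced into rclk

    wire [ADDR_W:0] wbin_next  = wbin + (wr_en && !full);
    wire [ADDR_W:0] wgray_next = (wbin_next >> 1) ^ wbin_next;   // binary -> gray
    wire [ADDR_W:0] rbin_next  = rbin + (rd_en && !empty);
    wire [ADDR_W:0] rgray_next = (rbin_next >> 1) ^ rbin_next;

    // write domain
    always @(posedge wclk) begin
        if (wrst) begin
            wbin <= 0;  wgray <= 0;
        end else begin
            if (wr_en && !full) mem[wbin[ADDR_W-1:0]] <= wdata;
            wbin  <= wbin_next;
            wgray <= wgray_next;
        end
    end

    // read domain (data appears one cycle after rd_en)
    always @(posedge rclk) begin
        if (rrst) begin
            rbin <= 0;  rgray <= 0;
        end else begin
            if (rd_en && !empty) rdata <= mem[rbin[ADDR_W-1:0]];
            rbin  <= rbin_next;
            rgray <= rgray_next;
        end
    end

    // only gray-coded pointers cross clock domains, through two-flop synchronizers
    always @(posedge wclk) begin
        if (wrst) begin rgray_w1 <= 0; rgray_w2 <= 0; end
        else      begin rgray_w1 <= rgray; rgray_w2 <= rgray_w1; end
    end
    always @(posedge rclk) begin
        if (rrst) begin wgray_r1 <= 0; wgray_r2 <= 0; end
        else      begin wgray_r1 <= wgray; wgray_r2 <= wgray_r1; end
    end

    // empty: read pointer has caught up with the synced write pointer
    assign empty = (rgray == wgray_r2);
    // full: write pointer is one full wrap ahead; in gray code that means the
    // two most significant bits are inverted and the rest are equal
    assign full  = (wgray == {~rgray_w2[ADDR_W:ADDR_W-1], rgray_w2[ADDR_W-2:0]});
endmodule

The flags are deliberately conservative: full and empty can assert a couple of cycles late because of the synchronizers, which wastes a little throughput but can never corrupt data.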

Rick C.


Guest

Sat Oct 27, 2018 4:45 am   



On Friday, October 26, 2018 at 10:25:57 PM UTC-4, Kevin Neilson wrote:
Quote:
That may be, but expired patents aren't really significant. The basic functionality of the LUT/FF and routing has been available for quite some time now. The details of FPGA architectures only matter when you are competing head to head. That's why SiliconBlue focused on a market segment that was ignored by the big players. The big two chase the telecom market with max-capacity, high-pin-count barn burners, and the other markets are addressed with the same technology, making it impossible to compete in the low-power areas. In the end no significant user who is considering an iCE40 part even looks at a part from Xilinx or Altera.

Rick C.

I don't do hobbyist stuff anymore since I'm too busy with work but I would think one could just use eval boards. I don't know why a DIP would be required. I don't know about the cost of the tools for a hobbyist, though.


Tools are zero cost, no? I bought tools once, $1500 I believe. Ever since I just use the free versions.


> As for the high pin counts, I would think that the need would be mitigated with all the high-speed serial interfaces.

Uh, tell that to Xilinx, Altera, Lattice and everyone else (which I guess means Microsemi). Lattice has some low-pin-count parts in the iCE40 line, but they are very fine-pitch BGA-type devices which are hard to route. Otherwise the pin counts tend to be much higher than those of what I consider a comparable MCU, if not outright high by any measure.

Rick C.

Kevin Neilson
Guest

Sat Oct 27, 2018 5:45 am   



Quote:
I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???

Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.

It's not necessarily that it's in Matlab that makes it easy, but that it's very abstracted. It might not be that much harder in abstract SystemVerilog. What takes months is converting to a parallelized design, adding pipelining, meeting timing, placing, dealing with domain crossings, instantiating primitives when necessary, debugging, etc. The same would be true of any language. I suppose you can get an FPGA written in C to work as well, but it's not going to be *abstract* C. It's going to be the kind of C that looks like assembly, in which the actual algorithm is indiscernible without extensive comments.
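
As a toy illustration of the pipelining part (a contrived example of my own, just to show the flavor): the abstract statement y = a*b + c is one line, but closing timing usually means cutting it into register stages by hand.

// y = a*b + c as one combinational lump: short to write, long critical path
module mac_comb #(parameter W = 18) (
    input  wire signed [W-1:0]   a, b, c,
    output wire signed [2*W-1:0] y
);
    assign y = a * b + c;
endmodule

// the same thing cut into two register stages so it can run at a higher clock
module mac_pipe #(parameter W = 18) (
    input  wire                  clk,
    input  wire signed [W-1:0]   a, b, c,
    output reg  signed [2*W-1:0] y          // valid two cycles after the inputs
);
    reg signed [2*W-1:0] prod;
    reg signed [W-1:0]   c_d;               // delay c so it lines up with prod
    always @(posedge clk) begin
        prod <= a * b;                      // stage 1: multiply
        c_d  <= c;
        y    <= prod + c_d;                 // stage 2: add
    end
endmodule

Two stages is trivial; the pain starts when the whole datapath has to be rebalanced like this while keeping every side path aligned in latency.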


Quote:
Even then I have worked on a number of projects where the design was size and/or speed constrained and it wasn't a debilitating burden... just like the CPU coding projects I've worked on.

I think the real difference is that the tasks done in FPGAs tend to be fairly complex. I have done a number of FPGA projects where most of the work could have been done in a CPU, and the FPGA project went rather quickly.


I don't know, why can't *you* infer a fifo? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer fifos. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity. Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.

Rick C.


True--I instantiate FIFOs, but they themselves are actually written in HDL, Gray counters and all, though often the RAMs are instantiated for various reasons. (If you want to use Xilinx's hard FIFOs, I believe you have to instantiate those.) What I meant to say was that they should be, as a commonly-used element, much more abstracted. I ought to be able to do it as a function call, such as:

if (wr_en && fifo.size<256) fifo.push_front(wr_data);

I *can* do that, in SystemVerilog simulation, but any synthesizer would just scoff at that, though I don't see why it should be impossible to turn that into a FIFO. Forcing designers to know about Gray counters and clock-domain crossings means that FPGA design will continue to be a recondite art limited to the few.


Guest

Sat Oct 27, 2018 7:45 am   



On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:
Quote:
I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???

Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.

It's not necessarily that it's in Matlab that makes it easy, but that it's very abstracted. It might not be that much harder in abstract SystemVerilog. What takes months is converting to a parallelized design, adding pipelining, meeting timing, placing, dealing with domain crossings, instantiating primitives when necessary, debugging, etc. The same would be true of any language. I suppose you can get an FPGA written in C to work as well, but it's not going to be *abstract* C. It's going to be the kind of C that looks like assembly, in which the actual algorithm is indiscernible without extensive comments.


And that is exactly my point. The problem you point out is not a problem related in any way to implementing in FPGAs, it's that the design is inherently complex. While you may be able to define the design in an abstract way in Matlab, that is not the same thing as an implementation in *any* medium or target.

Your claim was, "the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction". This isn't a problem with the FPGA aspect, it is a problem with the task being implemented since it would be the same problem with any target.


Quote:
Even then I have worked on a number of projects where the design was size and/or speed constrained and it wasn't a debilitating burden... just like the CPU coding projects I've worked on.

I think the real difference is that the tasks done in FPGAs tend to be fairly complex. I have done a number of FPGA projects where most of the work could have been done in a CPU, and the FPGA project went rather quickly.


I don't know, why can't *you* infer a fifo? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer fifos. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity. Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.

Rick C.

True--I instantiate FIFOs, but they themselves are actually written in HDL, Gray counters and all, though often the RAMs are instantiated for various reasons. (If you want to use Xilinx's hard FIFOs, I believe you have to instantiate those.) What I meant to say was that they should be, as a commonly-used element, much more abstracted. I ought to be able to do it as a function call, such as:

if (wr_en && fifo.size<256) fifo.push_front(wr_data);

I *can* do that, in SystemVerilog simulation, but any synthesizer would just scoff at that, though I don't see why it should be impossible to turn that into a FIFO. Forcing designers to know about Gray counters and clock-domain crossings means that FPGA design will continue to be a recondite art limited to the few.


I'm not sure how you can do that in any language unless fifo.push_front() is already defined. Are you suggesting it be part of the language? In C there are many libraries for various commonly used functions. In VHDL there are some libraries for commonly used but low-level functions, nothing like a FIFO. If you write a procedure to define fifo.push_front() you can do exactly this, but there is none written for you.

Rick C.

HT-Lab
Guest

Sat Oct 27, 2018 10:45 am   



On 27/10/2018 02:48, Kevin Neilson wrote:
Quote:
https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html

After watching this consider what a top of the range HLS tool can do.


I know you're joking about HLS but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction.


Wow... HLS is not perfect by far and you still need to have RTL
knowledge, but I think your understanding of HLS is about 10 years in the
past. Can I suggest you do a bit of googling to see what the current
state of HLS is.

Quote:
It really does take months to do what you can do in Matlab in a couple
of hours.


Yes, but to be fair the power comes from some powerful library functions
and not from basic m-code. Equally, it takes very little effort to
instantiate some very complex IP cores.

Quote:
It's not really any different in that respect than it was fifteen years
ago. The market would be so much bigger if things were just a bit
easier. I haven't figured out if this is because it's an inherently
difficult problem or if the tool designers aren't that skilled. A
little of both, I think.


It is a complex problem; the EDA industry is worth many billions, so
there is no lack of financial incentive to develop these tools.

Quote:
I still think the people working on HLS are going in the wrong direction
and need to spend more time refining the HDL tools.


That is what they are doing, by removing the timing and architectural
requirements from the input design. I agree that I would have preferred
they used another language than C/C++, but at least the simulation is
very, very fast compared to RTL.

Quote:
They aim too high and fail utterly.


No they don't; companies like Google, Nvidia, Qualcomm and many more are
all very successful with HLS tools.

Quote:
Why can't I infer a FIFO? Just start with that.


Again, look at Vivado HLS: they have full support for FIFOs and can
infer one from a stream array.

Quote:
The amount of time I spend on very basic structures is crazy.


Why? Write one, stick it in a library and instantiate it as often as you
like. Most synthesis tools are pretty good at inferring the required
memory type.
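
For example, a registered-read array like the sketch below is what most tools will map onto a block RAM, with no vendor macro instantiated (a generic illustration, not tied to any particular tool):

module sdp_ram #(
    parameter DATA_W = 16,
    parameter ADDR_W = 10
) (
    input  wire              clk,
    input  wire              we,
    input  wire [ADDR_W-1:0] waddr, raddr,
    input  wire [DATA_W-1:0] wdata,
    output reg  [DATA_W-1:0] rdata
);
    reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];
    always @(posedge clk) begin
        if (we) mem[waddr] <= wdata;        // write port
        rdata <= mem[raddr];                // registered read -> block RAM
    end
endmodule

Make the read asynchronous instead and the same tools will generally fall back to distributed RAM.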

Quote:
I don't think any of this will change in the near future, though.

(PS, I know you were probably actually serious about HLS.)


I am, and I am just surprised that you have such a low appreciation of the
current technology.

HLS is happening, but it will be many decades before our skill set
becomes obsolete (assuming we don't keep up).

Hans
www.ht-lab.com

HT-Lab
Guest

Sat Oct 27, 2018 10:45 am   



On 27/10/2018 07:22, gnuarm.deletethisbit_at_gmail.com wrote:
Quote:
On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:
...
It's not necessarily that it's in Matlab that makes it easy, but that it's very abstracted. It might not be that much harder in abstract SystemVerilog. What takes months is converting to a parallelized design, adding pipelining, meeting timing, placing, dealing with domain crossings, instantiating primitives when necessary, debugging, etc. The same would be true of any language. I suppose you can get an FPGA written in C to work as well, but it's not going to be *abstract* C. It's going to be the kind of C that looks like assembly, in which the actual algorithm is indiscernible without extensive comments.

And that is exactly my point. The problem you point out is not a problem related in any way to implementing in FPGAs, it's that the design is inherently complex. While you may be able to define the design in an abstract way in Matlab, that is not the same thing as an implementation in *any* medium or target.

Your claim was, "the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction". This isn't a problem with the FPGA aspect, it is a problem with the task being implemented since it would be the same problem with any target.


Well said.

...
Quote:

I don't know, why can't *you* infer a fifo? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer fifos. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity.


really, what aspect of VHDL is slowing you down that would be quicker in
Verilog?

www.synthworks.com/papers/VHDL_2008_end_of_verbosity_2013.pdf

Personally I think verbosity is a good thing as it makes it easier to
understand somebody else's code.

>Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.

I would forget about Verilog as it has too many quirks, go straight to
SystemVerilog (or just stick with VHDL).

Hans
www.ht-lab.com

Theo
Guest

Sat Oct 27, 2018 1:45 pm   



HT-Lab <hans64_at_htminuslab.com> wrote:
Quote:
On 27/10/2018 07:22, gnuarm.deletethisbit_at_gmail.com wrote:
On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:

I don't know, why can't *you* infer a fifo? The code required is not
complex. Are you saying you feel you have to instantiate a vendor
module for that??? I recall app notes from some time ago that
explained how to use gray counters to easily infer fifos. Typically
the thing that slows me down in HDL is the fact that I'm using VHDL
with all its verbosity.

really, what aspect of VHDL is slowing you down that would be quicker in
Verilog?

www.synthworks.com/papers/VHDL_2008_end_of_verbosity_2013.pdf

Personally I think verbosity is a good thing as it makes it easier to
understand somebody else's code.


I don't have much to do with VHDL, but that sounds like it's making a bad
thing slightly less bad. I'd be interested if you could point me towards an
example of tight VHDL?

The other issue is that a lot of these updated VHDL and Verilog standards
take a long time to make it into the tools. So if you code in a style
that's above the lowest common denominator, you're now held hostage to
using the particular tool that supports your chosen constructs.

There's another type of tool out there that compiles to Verilog as its
'assembly language'. Basic register-transfer Verilog is pretty universally
supported, and so they support most toolchains.

As regards FIFOs, here's a noddy example:


import FIFO::*;

interface Pipe_ifc;
    method Action send(Int#(32) a);
    method ActionValue#(Int#(32)) receive();
endinterface

module mkDoubler(Pipe_ifc);
    FIFO#(Int#(32)) firstfifo <- mkFIFO;
    FIFO#(Int#(32)) secondfifo <- mkFIFO;

    rule dothedoubling;
        let in = firstfifo.first();
        firstfifo.deq;
        secondfifo.enq ( in * 2 );
    endrule

    method Action send(Int#(32) a);
        firstfifo.enq(a);
    endmethod

    method ActionValue#(Int#(32)) receive();
        let result = secondfifo.first();
        secondfifo.deq;
        return result;
    endmethod

endmodule


This creates a module containing two FIFOs, with a standard pipe interface -
a port for sending it 32 bit ints, and another for receiving 32 bit ints
back from it. Inside, one FIFO is wired to the input of the module, the
other to the output. When data comes in, it's stored in the first FIFO.
When there is space in the second FIFO, it's dequeued from the first,
doubled, and enqueued in the second. If any FIFO becomes full, backpressure
is automatically applied. There's no chance of data getting lost by missing
control signals.

This is Bluespec's BSV, not VHDL or Verilog. The compiler type checked it
for me, so I'm very confident it will work first time. I could have made it
polymorphic (there's nothing special about 32 bit ints here) with only a
tiny bit more work. It compiles to Verilog which I can then synthesise.

Notice there are no clocks or resets (they're implicit unless you say you
want multiple clock domains), no 'if valid is high then' logic, it's all
taken care of. This means you can write code that does a lot of work very
concisely.
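
For comparison, hand-writing roughly the same doubler in SystemVerilog with the valid/ready handshake spelled out looks something like this (my own sketch, collapsing the two FIFOs into a single register stage; it is not the BSV compiler's actual output, just the kind of control boilerplate it spares you):

module doubler (
    input  logic        clk, rst,
    // input side
    input  logic        in_valid,
    output logic        in_ready,
    input  logic [31:0] in_data,
    // output side
    output logic        out_valid,
    input  logic        out_ready,
    output logic [31:0] out_data
);
    // accept a new word whenever the output register is empty or being drained
    assign in_ready = !out_valid || out_ready;

    always_ff @(posedge clk) begin
        if (rst) begin
            out_valid <= 1'b0;
        end else begin
            if (in_valid && in_ready) begin
                out_data  <= in_data * 2;   // the actual "algorithm"
                out_valid <= 1'b1;
            end else if (out_ready) begin
                out_valid <= 1'b0;          // word taken, nothing new to offer
            end
        end
    end
endmodule

Every valid, ready and enable here is something the designer has to get right by hand; in the BSV version the compiler derives all of it from the FIFO methods and rule conditions.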

Theo
