Is FPGA code called firmware?

Marko
Traditionally, firmware was defined as software that resided in ROM.
So, my question is, what do you call FPGA code? Is "firmware"
appropriate?
 
The difference is execution versus synthesis. Firmware is indeed
embedded and dedicated code, but the code is executed. FPGA code is
written in a _description_ language, is then interpreted, synthesized,
and ultimately produces hardware.

So I see fit to refer to the FPGA, once configured, as hardware, and
to the code itself as a description.

Julian
 
Julian Kain wrote:
The difference is execution versus synthesis. Firmware is indeed
embedded and dedicated code, but the code is executed. FPGA code is
written in a _description_ language, is then interpreted, synthesized,
and ultimately produces hardware.
VHDL may "produce" hardware when doing ASIC design. In an FPGA, not a
single gate is "produced"; everything is already fixed. Only the
connections between the elements are set, so all that happens in an
FPGA is configuration.
Not surprisingly, this process has been called configuration for quite a long time ;-)

So I see fit to refer to the FPGA, once configured, as hardware, and
to the code itself as a description.
What code? The source code (VHDL/Verilog/whatever) or the final
bitstream? The same distinction applies to microprocessor "Firmware",
where you have the source code and the compiled binary.
Hmm, the more I write and think about it (though it should really be
the other way around ;-)), the more FPGA bitstreams and microprocessor
binaries look like similar kinds of "firmware".

Regards
Falk
 
On Mon, 20 Feb 2006 11:32:38 -0800, Marko <cantsay@comcast.net> wrote:

On Mon, 20 Feb 2006 19:47:16 +0100, Falk Brunner <Falk.Brunner@gmx.de>
wrote:

Marko wrote:
Traditionally, firmware was defined as software that resided in ROM.
So, my question is, what do you call FPGA code? Is "firmware"
appropriate?

Why not?

Are you answering my question with a question?

Seriously, I just want to know what other people call it. It seems
incorrect to refer to FPGA code by the same name used for ROMable S/W.
At my previous job, we just called it "FPGA Code". At my new job,
it's called "firmware".
At one previous place of employment, we called it Roland C.
Higgenbottom the Third. We figured we could charge extra by making it
sound more dignified than it actually was.

Bob Perlman
Cambrian Design Works
 
One should check out the source of the term "firmware". I suspect that
most of you weren't around in the early 70's when the term was invented
(at least not in the professional sense). Firmware was invented with
the advent of microcoded computers. Microcode is "software", but a
different kind of software than most of us were familiar with. And,
usually, it wasn't something that the user could change (with a very
few exceptions). So they came up with "firmware", meaning it was
programmed into read-only memories (ROM). Eventually, anything
programmed into ROM, EPROM, EEPROM, etc. became known as firmware. But
it was, still, instructions that were fed to an instruction execution
unit, one at a time sequentially. FPGA code is logic, not programmable
instructions (spare me comments on the Micro-Blaze and its ilk).
Personally, I would like to see a different term because non-FPGA people
will think that you are talking about a general-purpose programmable
computer. How about "coreware"? Don't like it? Then invent your own.

Tom
 
reiner@hartenstein.de wrote:
Firmware is instruction-stream-based - more precisely:
micro-instructions for programming the inner machine of a nested
machine, where an instruction of the outer machine is executed by a
sequence of micro-instructions on the inner machine.
This may be your personal point of view, but this is not common sense.

Programming FPGAs etc., however, is NOT instruction-stream-based (i.e.
NOT a procedural issue), and it should not be called "software" as
some people do, because a software compiler generates an instruction
schedule. Configuration code, however, is NOT an instruction schedule.
It is a structural issue (NOT a procedural one).
Right.

The sources for compilation into configuration code should be called
CONFIGWARE, and not software, nor firmware. A typical configware
compilation process uses placement and routing, however, NOT
instruction scheduling.

FPGA code I call configware code - the output of the configware
compilation design flow.
Welcome to the post-Babylonian world ;-)

Regards
Falk
 
Wow, I wouldn't use such a word...

stiffware
noun

1. computing.
Software that is difficult or impossible to modify because
it has been customized or there is incomplete documentation, etc.


Well, googlefight can confirm:

http://www.googlefight.com/index.php?lang=en_GB&word1=configware&word2=stiffware

Kolja Sulimma wrote:
reiner@hartenstein.de wrote:


[...] Configuration code, however, is NOT an instruction schedule.
It is a structural issue (but NOT procedural).

[...] A typical configware
compilation process uses placement and routing, however, NOT
instruction scheduling.


Now, this discussion is going to become really interesting with
multi-context FPGAs. In that case there is a stream of very few, very
large instructions.

Configware is ok, but I like stiffware better because it fits nicely
between software, firmware and hardware.

Kolja Sulimma
 
I've heard it referred to as gateware for a long time where I work. I
think that's a fine name.
 
On Tue, 21 Feb 2006 10:52:42 -0800, Mike Treseler
<mike_treseler@comcast.net> wrote:

Brannon wrote:
I've heard it referred to as gateware for a long time where I work. I
think that's a fine name.

I fifth the motion.
All in favor of gateware say aye.

-- Mike Treseler
Sounds good to me. Anything that disabuses people of the notion that
FPGA design is the same as or similar to software design meets with my
approval.

Bob Perlman
Cambrian Design Works
http://www.cambriandesign.com
 
Mike Treseler wrote:
Brannon wrote:
I've heard it referred to as gateware for a long time where I work. I
think that's a fine name.

I fifth the motion.
All in favor of gateware say aye.

-- Mike Treseler
aye,

For me the biggest hurdle of learning to utilize VHDL was programming
my brain to not think of it as a programming language. Then everything
began to fall into place.

-Isaac
 
Here, we usually call it "broken". On rare occasions we will call it
"the bitstream" or "the configuration file".

$0.02
 
Isaac Bosompem wrote:

aye,

For me the biggest hurdle of learning to utilize VHDL was programming
my brain to not think of it as a programming language.
Yes. The trick is that synthesis code describes
a testable simulation model, not hardware.

Hardware which matches this model is inferred
based on the device, Fmax and other constraints.

The "software" aspect to rtl code
just answers the question:

"Oh, another clock, what should we output this time?"

-- Mike Treseler
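 
As a minimal VHDL sketch of that "another clock" view (a hedged
illustration - the entity and signal names below are invented, not
taken from any poster's design): the clocked process simply decides,
at each rising edge, what the register should hold next.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity tick_counter is
  port (clk   : in  std_logic;
        reset : in  std_logic;
        count : out unsigned(7 downto 0));
end entity tick_counter;

architecture rtl of tick_counter is
  signal count_reg : unsigned(7 downto 0) := (others => '0');
begin
  process (clk) is
  begin
    if rising_edge(clk) then          -- "oh, another clock ..."
      if reset = '1' then
        count_reg <= (others => '0');
      else
        count_reg <= count_reg + 1;   -- "... what should we output this time?"
      end if;
    end if;
  end process;

  count <= count_reg;
end architecture rtl;

Nothing in the process names gates; synthesis infers a register and an
adder that match the simulated behavior, subject to the device and
timing constraints.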
 
fpga_toys@yahoo.com wrote:

Interesting discussion. In a prior discussion regarding "programming"
or "designing" with C-syntax HLLs or HDLs, it was interesting how many
people took up arms that they could do everything in VHDL or Verilog
that could be done with a C-based FPGA design language such as
Celoxica's Handel-C, Impulse-C, FpgaC or similar tools. The argument
was that VHDL/Verilog really isn't any different from C-based HLLs/HDLs
for FPGA design, frequently with the assertion that VHDL/Verilog
was better.
Yes. The only advantage to C-based HDLs
is that lots of people already know C.

Now, we have the looks like a duck, walks like a duck, quacks like a
duck, must be a duck argument: if in fact VHDL/Verilog is somehow
equivalent to C-based HDLs/HLLs, then it probably has some
significant aspects of software development, rather than gate-level
schematic-based hardware design.
Yes. Version control, checkout into an empty directory,
Makefile generation and regression testing are just as effective
on an HDL project as they are on a C++/Java project.

So is an FPGA design in VHDL/Verilog hardware, and the same realized
equivalent gates written in Celoxica's Handel-C software, just because
of the choice of language? Or is a VHDL/Verilog design that is the same
as a Handel-C design software?
They are just different simulation models of the same thing.

-- Mike Treseler
 
fpga_toys@yahoo.com wrote:
Isaac Bosompem wrote:
For me the biggest hurdle of learning to utilize VHDL was programming
my brain to not think of it as a programming language. Then everything
began to fall into place.

Interesting discussion. In a prior discussion regarding "programming"
or "designing" with C-syntax HLLs or HDLs, it was interesting how many
people took up arms that they could do everything in VHDL or Verilog
that could be done with a C-based FPGA design language such as
Celoxica's Handel-C, Impulse-C, FpgaC or similar tools. The argument
was that VHDL/Verilog really isn't any different from C-based HLLs/HDLs
for FPGA design, frequently with the assertion that VHDL/Verilog
was better.
Definitely C is linked fairly closely to VHDL/Verilog. But there are a
few key differences that I had to consider when learning HDLs to truly
understand what was going on - for example, the non-blocking signal
assignments in a clocked sequential process in VHDL. I originally
assumed that, as in software, signal assignments would take effect
instantly after the line executed, but I was wrong. A few minutes
playing around with ModelSim revealed that they take effect on the
following clock edge (when the flip-flops sample the data input).
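
A small self-contained VHDL sketch of exactly that behavior (the
swap_regs entity and its signals are hypothetical, invented for
illustration): inside the clocked process, both right-hand sides read
the values the signals held before the clock edge, so the two
assignments swap a and b cleanly rather than copying one value twice.

library ieee;
use ieee.std_logic_1164.all;

entity swap_regs is
  port (clk          : in  std_logic;
        a_out, b_out : out std_logic);
end entity swap_regs;

architecture rtl of swap_regs is
  signal a : std_logic := '0';
  signal b : std_logic := '1';
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      a <= b;  -- reads the pre-edge value of b
      b <= a;  -- reads the pre-edge value of a, not the value just assigned
    end if;
  end process;

  a_out <= a;
  b_out <= b;
end architecture rtl;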

So there was a bit of a retraining process even though the syntax was
somewhat familiar.


So is an FPGA design in VHDL/Verilog hardware, and the same realized
equivalent gates written in Celoxica's Handel-C software, just because
of the choice of language? Or is a VHDL/Verilog design that is the same
as a Handel-C design software?
This is a fairly tough question, as we wouldn't be discussing it if it
were something that we could all agree on. I believe that both are
hardware, and I will explain my reasoning:

FpgaC, for example, is a totally different ball game from VHDL/Verilog,
but both ultimately result in a piece of hardware at the output.

FpgaC (from the example posted at the TMCC website at U of Toronto,
where I happen to live :) ) completely hides the hardware elements from
the designer, allowing them to give a software-like *DESCRIPTION* (key
word) of the hardware. What you get is ultimately hardware that
implements your "program".

VHDL/Verilog, on the other hand, do hide most of the grunt work of
doing digital design, but you still have some things left over, like
what I pointed out above about the non-blocking signal assignments.

We have always progressed towards abstraction in the software world;
similar pushes have also been made in the hardware world with EDA and
CAD software packages like MATLAB, which automate most of the grunt
work. Perhaps program-like HDLs are the next progression.

All I can say, though, is that only time will tell. It depends on how
well compilers like FpgaC will be able to convert a program to a
hardware description, and also how well they will be able to extract
and find opportunities for concurrency.


-Isaac
 
fpga_toys@yahoo.com wrote:
Isaac Bosompem wrote:
For me the biggest hurdle of learning to utilize VHDL was programming
my brain to not think of it as a programming language. Then everything
began to fall into place.

Interesting discussion. In a prior discussion regarding "programming"
or "designing" with C-syntax HLLs or HDLs, it was interesting how many
people took up arms that they could do everything in VHDL or Verilog
that could be done with a C-based FPGA design language such as
Celoxica's Handel-C, Impulse-C, FpgaC or similar tools. The argument
was that VHDL/Verilog really isn't any different from C-based HLLs/HDLs
for FPGA design, frequently with the assertion that VHDL/Verilog
was better.
snipping

So is an FPGA design in VHDL/Verilog hardware, and the same realized
equivalent gates written in Celoxica's Handel-C software, just because
of the choice of language? Or is a VHDL/Verilog design that is the same
as a Handel-C design software?
It really depends on the style of design and the synthesis level.
Traditionally, Synopsys-style DC synthesis performed on RTL code is
hardware design, more or less, no matter how the RTL is prepared - even
JavaScript, if that's possible. But Handel-C and the other new
entrants, from my knowledge, are usually based on behavioural
synthesis; the whole point of their existence is to raise productivity
by letting these tools figure out how to construct the RTL dataflow
code, so that mere mortal software engineers don't have to be familiar
with RTL design. They still find out about real hardware issues sooner
or later, though.

On the ASIC side, Synopsys BC didn't fare too well with hardcore ASIC
guys; better with system guys. Since FPGAs let software and system guys
into the party, BC-style synthesis is going to be far more acceptable
and widespread, the cost of failure being so much lower than with
ASICs. As for whether it is HW or SW, I decline to say, but the ASIC
guys would tend to call BC design too software-like given the early
results, and I don't think their opinion of C-style behavioural design
has changed any either.

If BC-style design produces results as good as those typically achieved
by DC synthesis, then it is every bit as good as hardware - but does it
produce such results? In hardcore hardware land we expect to drive
upwards of 300MHz cycle rates and plan a hardware design onto a
floorplan, but I wouldn't expect such performance or efficiency from
these BC tools. Do they routinely produce results as good? I very much
doubt it. Replacing RTL with behavioural design may raise productivity,
but it is not the same thing as replacing assembler coding with HLL
coding, IMHO, given the current nature of OoO CPUs.
 
JJ wrote:
On the ASIC side, Synopsys BC didn't fare too well with hardcore ASIC
guys; better with system guys. Since FPGAs let software and system guys
into the party, BC-style synthesis is going to be far more acceptable
and widespread, the cost of failure being so much lower than with
ASICs. As for whether it is HW or SW, I decline to say, but the ASIC
guys would tend to call BC design too software-like given the early
results, and I don't think their opinion of C-style behavioural design
has changed any either.
As a half-EE, half-CSc guy from the 1970's, I spent more than a few
years doing hard-core assembly language systems programming and device
driver work. The argument about C for systems programming was much
louder and much more opinionated about "what a real systems programmer
can do". As an early Unix evangelist and systems programmer, it didn't
take long to discover that I could easily code C to produce exactly the
asm I needed. As the DEC systems guys and the UNIX systems guys warred
over what was best, it was more than fun to ask them for their Rosetta
assembly language, and frequently knock out a faster C design in a few
hours for a piece of code that took weeks to fine-tune in asm. It was
almost always because they got fixated on micro-optimization of a few
loops, and missed the big-picture optimizations. Rewriting asm
libraries in C as we ported to microprocessors and away from PDP-11's
was seldom a performance hit at all.

I see similar things happening with large ASIC and FPGA designs, as the
real performance gains are in highly optimized but more complex
architectures, and less in the performance of any particular FSM or
data path. Doing the very best gate-level designs, just like the very
best asm designs, at some point is just a distraction when you start
looking at complex systems with high degrees of parallelism and
specialized functional units, where the system design/architecture is
the win, not a few cycles at the bottom of some subsystem.

The advantage of transferring optimization knowledge into HLL tools is
that they do it right EVERY time after that, whereas the same energy
spent optimizing one low-level design is seldom leveraged into other
designs. Because of this, HLL programming languages routinely deliver
three or four nines of the performance hand coding will get, and
frequently better, as all optimizations are automatically taken and
applied, where a hand coder would not be able to.

We see the same evolution in bit-level Boolean design for hardware
engineers. A little over a decade ago, all equations were
hand-optimized ... today that is a lost art. As the tools do more of
it, probably in another decade it will no longer be taught as a core
subject to EEs, if not sooner. There are far more important things for
them to learn that they WILL actually use and need. That will not stop
the oldie moldies from lamenting how little the kids today know, and
claiming they don't even know their trade. The truth is that the kids
will have new skills that leave the dinos just that.

If BC-style design produces results as good as those typically achieved
by DC synthesis, then it is every bit as good as hardware - but does it
produce such results? In hardcore hardware land we expect to drive
upwards of 300MHz cycle rates and plan a hardware design onto a
floorplan, but I wouldn't expect such performance or efficiency from
these BC tools. Do they routinely produce results as good? I very much
doubt it. Replacing RTL with behavioural design may raise productivity,
but it is not the same thing as replacing assembler coding with HLL
coding, IMHO, given the current nature of OoO CPUs.
I believe, having seen this same technology war from the other side,
that it is not only the same, but will actually evolve better, because
the state of the art and the knowledge of how to take optimizations and
exploit them are much better understood these days. The limits in
software technology and machine performance that slowed extensive
computational optimizations in software compilers are much less of a
problem with VERY fast CPUs and large memory systems today. Probably
the hard part will be yanking the knowledge needed to do a good job
from the minds of heavily protesting EEs worried about job security. I
suspect a few will see the handwriting on the wall, and will become
coders for tools, just to have a job.
 
Hal Murray wrote:
Back in the old days, it was common to build FSMs using ROMs. That
approach makes it natural to think of the problem as software - each word
in the ROM holds the instruction you execute at that PC plus the right
external conditions.
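
A hedged VHDL sketch of that ROM-driven FSM idea (the widths, encodings
and ROM contents below are invented for illustration): the current
state acts as the "PC" and, together with the external condition,
addresses the ROM; each addressed word supplies the next state and the
outputs.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity rom_fsm is
  port (clk  : in  std_logic;
        cond : in  std_logic;                      -- external condition
        outp : out std_logic_vector(1 downto 0));
end entity rom_fsm;

architecture rtl of rom_fsm is
  signal state : unsigned(1 downto 0) := "00";     -- the "PC"
  -- each word: next_state (bits 3..2) & outputs (bits 1..0)
  type rom_t is array (0 to 7) of std_logic_vector(3 downto 0);
  constant rom : rom_t := ("0100", "1000", "1001", "0011",
                           "1110", "0010", "0111", "0001");
  signal word : std_logic_vector(3 downto 0);
begin
  word <= rom(to_integer(state & cond));           -- address = state & condition

  process (clk) is
  begin
    if rising_edge(clk) then
      state <= unsigned(word(3 downto 2));         -- jump to the next "instruction"
    end if;
  end process;

  outp <= word(1 downto 0);                        -- outputs come from the ROM word
end architecture rtl;

In a modern FPGA the same table would typically land in a LUT or block
RAM, which is part of why the software/hardware line blurs.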
Any of us educated in engineering school in the 1970's have probably
done it more than a few times. On the other hand, I also built a DMA
engine out of an M68008 using address-space microcoding, which saved a
bunch of expensive PALs and board space, plus used the baby 68k to
implement a SCSI protocol engine to emulate a WD1000 chipset. The whole
design took a couple of months to production.

Having done it the hard way with bipolar bit slices just gives you the
tools to take a more powerful piece of silicon and refine it better.
That is the beauty of FPGAs as computational engines today: looking
past what the part is meant to do, and looking forward to what you can
do with it tomorrow, by exploiting the parallelism and avoiding the
sequential bottlenecks of CPU/memory designs. Designers that only know
how to use a CPU plus memory, and lack the skills of designing with
lower-level building blocks, miss the big picture - at both the
software and hardware level.

I've done a reasonable amount of hack programming where I count
every cycle to get the timing right. I could probably have done
it in C, but I'm a bottom-up rather than top-down sort of person.
It's not about up or down, it's simply about learning your tools. For
30 years I've written C thinking asm, coding C line for line with the
asm it produced. For the last couple of years, after learning TMCC and
taking a one-day Celoxica intro seminar, I started writing C thinking
gates, just as a VHDL/Verilog engineer writes in that syntax thinking
gates. Hacking on and extending TMCC as FpgaC has only widened my
visualization of what we can do with the tool. The things that
TMCC/FpgaC does wrong are almost entirely masked by the back-end
Boolean optimizer, which comes very close to getting it right. Where it
doesn't, it's because its synthesis rules are targeted at a generic
device, and it lacks device-specific optimizations to target the
available CLB/slice implementation. That will come with time, but it
really doesn't have that big an impact today.

Right now we are focused on implementing the rest of the C language
that was left out of TMCC, which is mostly parser work, plus some
utility routine work inside the compiler. That will be completed
March/April; then we can move on to back-end work and target what I
call compile, load and go work, which will focus on targeting the back
end to current devices. With that will come distributed arithmetic
optimized for several platforms, as well as use of the carry chains and
muxes available in the slices these days. At that point, FpgaC will be
very close to fitting designs as current HDLs do ... if you can learn
to write C thinking gates. FpgaC will over time hide most of that from
less skilled programmers, requiring only modest retraining.

The focus will be computing with FPGAs, not FPGA designs to support
computing hardware development. RC.

For the last couple of weeks we have been doing integration and cleanup
after a number of major internal changes, mostly a symbol table manager
and scoping rules fix to bring TMCC in line with standard C scoping and
naming, so that we could support structures, typedef and enum. We've
also implemented a first crack at traditional typing in preparation for
enum/typedef, allowing unsigned and floating point in the process. Both
are likely to be in beta-2 this month. The work I checked in last night
has the core code for FP using intrinsic functions, and probably needs
a few days to finish. It also now has do-while and for loops, along
with structures and small LUT-based arrays or BRAM arrays.
It currently regression-tests pretty well, with a couple of minor
problems left, including one that causes some temp symbols to end up
with the same names. That should be gone by this weekend, as FP gets
finished and hopefully unsigned is done too.

svn co https://svn.sourceforge.net/svnroot/fpgac/trunk/fpgac fpgac

alpha/beta testers and other developers welcome :)
 
Gabor wrote:
At our company we call FPGA configuration code "software" if it
is stored on the hard drive and uploaded at run time by a user
application. When it is stored in a serial PROM or flash memory
on the board we call it firmware.

I don't think the terms "firmware" or "software" have as much to do
with the programming paradigm as with the delivery of the bits to
make the actual hardware run.
At my current and previous jobs, FPGA "loads" are/were considered
firmware, for the same reason that processor boot code and the
lowest-level debug monitor were considered firmware: the images are
stored on board in some kind of non-volatile memory.

-a
 
I think what is surprising to some is that low-level software design is
long gone,
?????

No-one ever programs in assembly language any more then?


and low-level hardware design is soon to be long gone, for all the
same reasons of labor cost vs. hardware cost.

Where price, performance and power consumption don't matter, a
higher-level language might become more prevalent. I think we'll always
need to be able to get down to a lower-level hardware description
to get the best out of the latest devices, stretch the performance
of devices, or squeeze what needs to be done into a smaller device.


I also wonder if price/performance/power consumption will become much
less important in the future, as it has with software. These days
you can assume application software will be run on a 'standard'
sufficiently powerful PC. It won't be the case that at the start of
every hardware project you can assume you have a multi-million-gate
FPGA (or whatever) at your disposal.



Nial.
 
Nial Stewart wrote:
I think what is surprising to some is that low-level software design is long gone,
No-one ever programs in assembly language any more then?
When I started doing systems programming in the late 1960's, everything
core-system-critical was written in assembly language on big iron
IBMs - 1401's, 1410's, 360's - with applications in RPG, COBOL, and
Fortran. It was pretty much the same when I started programming
minicomputers - DG's, DEC's, and Varian's - except that some flavor of
BASIC was the applications language of choice.

99% of systems code - operating system, utilities, compilers, linkers,
assemblers, etc. - was assembly language.

I suspect that number is less than a very small fraction of a percent
today, at least for new designs.

and low-level hardware design is soon to be long gone, for all the
same reasons of labor cost vs. hardware cost.
Where price, performance and power consumption don't matter, a
higher-level language might become more prevalent.
The power argument is moot, as the difference in power between a good C
coder and a good asm coder is probably less than a fraction of a
percent. Ditto for performance. The only case I can think of is using
the wrong compiler for the job, such as using an ANSI 32-bit compiler
on a micro that is natively 8 or 16 bits, rather than downsizing to a
subset compiler that was designed to target that architecture. And
there are a lot of really good subset 8/16-bit compilers for micros,
and it only takes a man-week or two to adapt SmallC/TinyC derivatives
to a new architecture.

Cost, on the other hand, I believe is strongly in favor of using C or
some other low-level HLL instead of assembly. When the burdened cost of
good software engineers is roughly $200-250K/yr and the productivity
factor between asm and HLLs is better than 3-10x over the
lifecycle of the product, it's nearly impossible to make up the
difference in volume.

Using junior labor that chooses the wrong tools and botches the
design is a management problem. There are plenty of good HLL coders
that can do low-level software fit to hardware and get it right. And
when experienced low-level guys are in high demand, it only takes a
good person a few weeks to train an experienced coder how to do it.

I think we'll always need to be able to get down to a lower-level
hardware description to get the best out of the latest devices, stretch
the performance of devices, or squeeze what needs to be done into a
smaller device.
Yep ... but coding C or other systems-class HLLs line for line as asm,
the fraction of a percent gained by coding more than a few lines of asm
is quickly lost in time to market, labor cost, and the ability to
maintain the product long term, including porting to a different micro
to chase the low-cost components over time.

I also wonder if price/performance/power consumption will become much
less important in the future, as it has with software. These days
you can assume application software will be run on a 'standard'
sufficiently powerful PC. It won't be the case that at the start of
every hardware project you can assume you have a multi-million-gate
FPGA (or whatever) at your disposal.
Today, the price difference between low-end FPGA boards and
million-gate boards is getting pretty small, with megagate FPGAs in
high volume. Five, or even two, years ago it was pretty different.

The real issue is that FPGAs with CPUs, and softcore CPUs, allow you
to implement the bulk of the software design on a traditional
sequential architecture where performance is acceptable, and push the
most performance-sensitive part of the design down to raw logic.

The Google description for this group is: Field Programmable Gate Array
based computing systems, under Computer Architecture FPGA. And, after
a few years, I think we are finally getting there ... FPGA-based
computing instead of CPU-based computing.

The days of FPGAs being only for hardware design are slipping away.
While this group has been dominated by hardware designers using FPGAs
for hardware designs, I suspect that we will see more and more
engineers of all kinds here doing computing on FPGAs, at all levels.
 
