
EDK : FSL macros defined by Xilinx are wrong


Philipp Klaus Krause
Guest

Thu Aug 13, 2015 6:07 pm   



On 11.08.2015 08:32, David Brown wrote:
Quote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than gcc,
since he writes compilers that - like SDCC - target small, awkward 8-bit
architectures. In that world, there are often many variants of the cpu
- the 8051 is particularly notorious - and getting the best out of these
devices often means making sure you use the extra architectural features
your particular device provides. SDCC is an excellent tool, but as
Walter says it works with various subsets of ISA provided by common
8051, Z80, etc., variants. The big commercial toolchains for such
devices, such as from Keil, IAR and Walter's own Bytecraft, provide
better support for the range of commercially available parts.

gcc is in a different world - it is a much bigger compiler suite, with
more developers than SDCC, and a great deal more support from the cpu
manufacturers and other commercial groups. One does not need to dig
further than the manual pages to see the huge range of options for
optimising use of different variants of the many targets it supports -
including not just use of differences in the ISA, but also differences
in timings and instruction scheduling.


I'd say the SDCC situation is more complex, and it seems to do quite
well compared to other compilers for the same architectures. On the one
hand, SDCC has always had few developers. It has some quite advanced
optimizations, but on the other hand it is lacking in some standard
optimizations and features (SDCC's pointer analysis is not that good, we
don't have generalized constant propagation yet, and some standard
C features are still missing - see below, after the discussion of the
ports). IMO, the biggest weaknesses are there, and not in the use of
exotic instructions.

The 8051 has many variants, and SDCC currently does not support some of
the advanced features available in some of them, such as 4 dptrs, etc. I
do not know how SDCC compares to non-free compilers in that respect.

The Z80 is already a bit different. We use the differences in the
instruction sets of the Z80, Z180, LR35902, Rabbit, TLCS-90. SDCC does
not use the undocumented instructions available in some Z80 variants,
and does not use the alternate register set for code generation; there
definitely is potential for further improvement, but: Last time I did a
comparison of compilers for these architectures, IAR was the only one
that did better than SDCC for some of them.

Newer architectures supported by SDCC are the Freescale HC08, S08 and
the STMicroelectronics STM8. The non-free compilers for these targets
seem to be able to often generate better code, but SDCC is not far behind.

The SDCC PIC backends are not up to the standard of the others.

In terms of standards compliance, IMO, SDCC is doing better than the
non-free compilers, with the exception of IAR. Most non-free compilers
support something resembling C90 with a few deviations from the
standard; IAR seems to support mostly standard C99. SDCC has a few gaps,
even in C90 (such as K&R functions and assignment of structs). On the
other hand, SDCC supports most of the new features of C99 and C11 (the
only missing feature introduced in C11 seems to be UTF-8 strings).

Philipp

hamilton
Guest

Thu Aug 13, 2015 8:11 pm   



On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
> The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?

If there is a better place to discuss this, please let me know.

rickman
Guest

Thu Aug 13, 2015 8:20 pm   



On 8/13/2015 10:11 AM, hamilton wrote:
Quote:
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?

If there is a better place to discuss this, please let me know.


I don't know tons about the 32 bit chips which are mostly ARMs. But the
initialization is more complex. It is a good idea to let the tools
handle that for you. All of the 8 bit chips I've used were very simple
to get off the ground.

--

Rick

Philipp Klaus Krause
Guest

Thu Aug 13, 2015 10:07 pm   



On 13.08.2015 16:11, hamilton wrote:
Quote:
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?


I don't consider 32-bit chips more fun. I like CISC 8-bitters, but I
prefer those that seem better suited for C. Again, SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the PICs.

Also, the situation is quite different between the pic14 and pic16
backends. The pic16 backend is not that bad. If someone puts a few weeks
of work into it, it could probably make it up to the standard of the
other ports in terms of correctness; it already passes large parts of
the regular regression test suite. The pic14 would require much more work.

Quote:

If there is a better place to discuss this, please let me know.


The sdcc-user and sdcc-devel mailing lists seem a better place than comp.arch.fpga.

Philipp


Guest

Fri Aug 14, 2015 3:44 am   



Quote:
Again SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the pics.

Back to the topic of the open FPGA tool chain, I think there would be many "PICs", i.e. topics which are addressed by no / too few developers.


But the whole discussion is quite theoretical as long as A & X do not open their bitstream formats. And I do not think that they will do anything that will support an open source solution, as software is the main entry obstacle for FPGA startups. If there were a flexible open-source tool chain with a large developer and user base that could be ported to new architectures easily, it would make things much easier for new competitors. (Think gcc...)

Also (as mentioned above) I think that with the good and free tool chains from the suppliers, there would not be much demand for such an open source tool chain. There are other points where I would see more motivation, and even there not much is happening:
- Good open source Verilog/VHDL editor (Yes, I have heard of Emacs...) as the integrated editors are average (Altera) or bad (Xilinx). (Currently I am evaluating two commercial VHDL editors...)
- A kind of graphical editor for VHDL and Verilog as the top/higher levels of bigger projects are often a pain IMHO (like writing netlists by hand). I would even start such a project myself if I had the time...

But even with such things where I think would be quite some demand, the "critical mass" of the FPGA community is too low to get projects started and especially keep them running.

Thomas

Richard Damon
Guest

Fri Aug 14, 2015 7:30 am   



On 8/13/15 9:44 PM, thomas.entner99_at_gmail.com wrote:
Quote:
Again SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the pics.

Back to the topic of the open FPGA tool chain, I think there would
be many "PICs", i.e. topics which are addressed by no / too few developers.

But the whole discussion is quite theoretical as long as A & X do
not open their bitstream formats. And I do not think that they will do
anything that will support an open source solution, as software is the
main entry obstacle for FPGA startups. If there would be a flexible
open-source tool-chain with large developer and user-base that can be
ported to new architectures easily, this would make it much easier for
new competition. (Think gcc...)

Also (as mentioned above) I think with the good and free tool chains
from the suppliers, there would not be much demand for such an open
source tool chain. There are other points where I would see more
motivation, and even there not much is happening:
- Good open source Verilog/VHDL editor (Yes, I have heard of
Emacs...)
as the integrated editors are average (Altera) or bad (Xilinx).
(Currently I am evaluating two commercial VHDL editors...)
- A kind of graphical editor for VHDL and Verilog as the top/higher

levels of bigger projects are often a pain IMHO (like writing netlists
by hand). I would even start such a project myself if I had the time...
Quote:

But even with such things where I think would be quite some demand,
the "critical mass" of the FPGA community is too low to get projects
started and especially keep them running.

Thomas


One big factor against an open source tool chain is that while the FPGA
vendors describe in general terms the routing inside the devices, the
precise details are not given, and I suspect that these details may be
considered as part of the "secret sauce" that makes the device work. The
devices have gotten so big and complicated that it is impractical to
use fully populated muxes, and how you choose what gets to what is important.

Processors can also have little details like this, but for processors it
tends to just affect the execution speed, and a compiler that doesn't
take them into account can still do a reasonable job. For an FPGA,
without ALL the details for this you can't even do the routing.

rickman
Guest

Fri Aug 14, 2015 11:29 pm   



On 8/13/2015 11:17 PM, Richard Damon wrote:
Quote:
On 8/13/15 9:44 PM, thomas.entner99_at_gmail.com wrote:
Again SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the pics.

Back to the topic of the open FPGA tool chain, I think there would
be many "PICs", i.e. topics which are addressed by no / too few
developers.

But the whole discussion is quite theoretical as long as A & X do
not open their bitstream formats. And I do not think that they will do
anything that will support an open source solution, as software is the
main entry obstacle for FPGA startups. If there would be a flexible
open-source tool-chain with large developer and user-base that can be
ported to new architectures easily, this would make it much easier for
new competition. (Think gcc...)

Also (as mentioned above) I think with the good and free tool chains
from the suppliers, there would not be much demand for such an open
source tool chain. There are other points where I would see more
motivation, and even there not much is happening:
- Good open source Verilog/VHDL editor (Yes, I have heard of
Emacs...)
as the integrated editors are average (Altera) or bad (Xilinx).
(Currently I am evaluating two commercial VHDL editors...)
- A kind of graphical editor for VHDL and Verilog as the top/higher
levels of bigger projects are often a pain IMHO (like writing netlists
by hand). I would even start such a project myself if I had the time...

But even with such things where I think would be quite some demand,
the "critical mass" of the FPGA community is too low to get projects
started and especially keep them running.

Thomas


One big factor against an open source tool chain is that while the FPGA
vendors describe in general terms the routing inside the devices, the
precise details are not given, and I suspect that these details may be
considered as part of the "secret sauce" that makes the device work. The
devices have gotten so big and complicated, that it is impractical to
use fully populated muxes, and how you choose what gets to what is
important.


I'm not sure what details of routing aren't available. There may not
be a document which details it all, but last I saw, there were chip
level design tools which allow you to see all of the routing and
interconnects. The delay info can be extracted from the timing analysis
tools. As far as I am aware, there is no "secret sauce".


Quote:
Processors can also have little details like this, but for processors it
tends to just affect the execution speed, and a compiler that doesn't
take them into account can still do a reasonable job. For an FPGA,
without ALL the details for this you can't even do the routing.


Timing data in an FPGA may be difficult to extract, but otherwise I
think all the routing info is readily available.

--

Rick

Richard Damon
Guest

Sat Aug 15, 2015 7:30 am   



On 8/14/15 1:29 PM, rickman wrote:
Quote:
On 8/13/2015 11:17 PM, Richard Damon wrote:

One big factor against an open source tool chain is that while the FPGA
vendors describe in general terms the routing inside the devices, the
precise details are not given, and I suspect that these details may be
considered as part of the "secret sauce" that makes the device work. The
devices have gotten so big and complicated, that it is impractical to
use fully populated muxes, and how you choose what gets to what is
important.

I'm not sure what details of routing aren't available. There may not
be a document which details it all, but last I saw, there were chip
level design tools which allow you to see all of the routing and
interconnects. The delay info can be extracted from the timing analysis
tools. As far as I am aware, there is no "secret sauce".


Processors can also have little details like this, but for processors it
tends to just affect the execution speed, and a compiler that doesn't
take them into account can still do a reasonable job. For an FPGA,
without ALL the details for this you can't even do the routing.

Timing data in an FPGA may be difficult to extract, but otherwise I
think all the routing info is readily available.


My experience is that you get to see what location a given piece of
logic occupies, and which channels its signal travels through. You do NOT see which particular
wire in that channel is being used. In general, each logic cell does not
have routing to every wire in that channel, and every wire does not have
access to every cross wire. These details tend to be the secret sauce,
as when they do it well, you aren't supposed to notice the incomplete
connections.

I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.

rickman
Guest

Sat Aug 15, 2015 7:30 am   



On 8/14/2015 9:32 PM, Richard Damon wrote:
Quote:
On 8/14/15 1:29 PM, rickman wrote:
On 8/13/2015 11:17 PM, Richard Damon wrote:

One big factor against an open source tool chain is that while the FPGA
vendors describe in general terms the routing inside the devices, the
precise details are not given, and I suspect that these details may be
considered as part of the "secret sauce" that makes the device work. The
devices have gotten so big and complicated, that it is impractical to
use fully populated muxes, and how you choose what gets to what is
important.

I'm not sure what details of routing aren't available. There may not
be a document which details it all, but last I saw, there were chip
level design tools which allow you to see all of the routing and
interconnects. The delay info can be extracted from the timing analysis
tools. As far as I am aware, there is no "secret sauce".


Processors can also have little details like this, but for processors it
tends to just affect the execution speed, and a compiler that doesn't
take them into account can still do a reasonable job. For an FPGA,
without ALL the details for this you can't even do the routing.

Timing data in an FPGA may be difficult to extract, but otherwise I
think all the routing info is readily available.


My experience is that you get to see what location a given piece of
logic occupies, and which channels its signal travels through. You do NOT see which particular
wire in that channel is being used. In general, each logic cell does not
have routing to every wire in that channel, and every wire does not have
access to every cross wire. These details tend to be the secret sauce,
as when they do it well, you aren't supposed to notice the incomplete
connections.


Don't they still have the chip editor? That *must* show everything of
importance.


Quote:
I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.


Yeah, that's pretty full. I start to worry around 80%, but I've never
actually had one fail to route other than the ones I tried to help by
doing placement, lol.

--

Rick

Richard Damon
Guest

Sat Aug 15, 2015 6:32 pm   



On 8/14/15 10:59 PM, rickman wrote:
Quote:
On 8/14/2015 9:32 PM, Richard Damon wrote:

My experience is that you get to see what location a given piece of
logic occupies, and which channels its signal travels through. You do NOT see which particular
wire in that channel is being used. In general, each logic cell does not
have routing to every wire in that channel, and every wire does not have
access to every cross wire. These details tend to be the secret sauce,
as when they do it well, you aren't supposed to notice the incomplete
connections.

Don't they still have the chip editor? That *must* show everything of
importance.


The chip editors tend to just show the LOGIC resources, not the details
of the routing resources. The manufacturers tend to do a good job of
giving the detail of the logic blocks you are working with, as this is
the part of the design you tend to specify. Routing on the other hand
tends to not be something you care about, just that the routing 'works'.
When they have done a good job at designing the routing you don't notice
it, but there have been cases where the routing turned out not quite
flexible enough and you notice that you can't fill the device as well
before hitting routing issues.
Quote:


I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.

Yeah, that's pretty full. I start to worry around 80%, but I've never
actually had one fail to route other than the ones I tried to help by
doing placement, lol.


They suggest that you consider 75-80% to be "full". This design started
at the 70% level, but we were adding capability to the system and the
density grew. (And we were already using the largest chip for the
footprint.) Our next step was to redo the board and get the usage back
down. When we hit the issue we had a mostly working design but were
fixing the one last bug, and that was when the fitter threw its fit.

KJ
Guest

Tue Aug 18, 2015 8:21 pm   



On Tuesday, August 18, 2015 at 11:35:55 AM UTC-4, rickman wrote:
Quote:

I'm not sure what details of the routing the chip editors leave out.
You only need to know what is connected to what, through what and what
the delays for all those cases are.


If you're trying to implement an open source toolchain you would likely need to know *how* to specify those connections via the programming bitstream.

Kevin

rickman
Guest

Tue Aug 18, 2015 9:35 pm   



On 8/15/2015 8:32 AM, Richard Damon wrote:
Quote:
On 8/14/15 10:59 PM, rickman wrote:
On 8/14/2015 9:32 PM, Richard Damon wrote:

My experience is that you get to see what location a given piece of
logic occupies, and which channels its signal travels through. You do NOT see which particular
wire in that channel is being used. In general, each logic cell does not
have routing to every wire in that channel, and every wire does not have
access to every cross wire. These details tend to be the secret sauce,
as when they do it well, you aren't supposed to notice the incomplete
connections.

Don't they still have the chip editor? That *must* show everything of
importance.

The chip editors tend to just show the LOGIC resources, not the details
of the routing resources. The manufacturers tend to do a good job of
giving the detail of the logic blocks you are working with, as this is
the part of the design you tend to specify. Routing on the other hand
tends to not be something you care about, just that the routing 'works'.
When they have done a good job at designing the routing you don't notice
it, but there have been cases where the routing turned out not quite
flexible enough and you notice that you can't fill the device as well
before hitting routing issues.


I'm not sure what details of the routing the chip editors leave out.
You only need to know what is connected to what, through what and what
the delays for all those cases are. Other than that, the routing does
just "work".


Quote:
I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.

Yeah, that's pretty full. I start to worry around 80%, but I've never
actually had one fail to route other than the ones I tried to help by
doing placement, lol.


They suggest that you consider 75-80% to be "Full". This design started
in the 70% level but we were adding capability to the system and the
density grew. (And were already using the largest chip for the
footprint). Our next step was to redo the board and get the usage back
down. When we hit the issue we had a mostly working design but were
fixing the one last bug, and that was when the fitter threw its fit.


The "full" utilization number is approximate because it depends on the
details of the design. Some designs can get to higher utilization
numbers, others less. As a way of pointing out that the routing is the
part of the chip that uses the most space while the logic is smaller,
Xilinx sales people used to say, "We sell you the routing and give you
the logic for free." The point is the routing usually limits your
design rather than the logic. If you want to be upset about utilization
numbers, ask them how much of your routing gets used! It's *way* below
80%.

--

Rick

Richard Damon
Guest

Wed Aug 19, 2015 7:40 am   



On 8/18/15 11:35 AM, rickman wrote:
Quote:
On 8/15/2015 8:32 AM, Richard Damon wrote:

The chip editors tend to just show the LOGIC resources, not the details
of the routing resources. The manufacturers tend to do a good job of
giving the detail of the logic blocks you are working with, as this is
the part of the design you tend to specify. Routing on the other hand
tends to not be something you care about, just that the routing 'works'.
When they have done a good job at designing the routing you don't notice
it, but there have been cases where the routing turned out not quite
flexible enough and you notice that you can't fill the device as well
before hitting routing issues.

I'm not sure what details of the routing the chip editors leave out. You
only need to know what is connected to what, through what and what the
delays for all those cases are. Other than that, the routing does just
"work".


Look closely. The chip editor will normally show you the exact logic
element you are using, with a precise location. The output will then go
out into a routing channel and on to the next logic cell(s) it drives.
It may even show you the various rows and columns of routing it passes
through. Those rows and columns are made of a (large) number of distinct
wires, with routing resources connecting outputs to select lines and
select lines being brought into the next piece of routing/logic. Which
wire is being used will not be indicated, nor are all the wires
interchangeable, so which wire is chosen can matter for fitting. THIS
is the missing information.
Quote:

I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was
probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.

Yeah, that's pretty full. I start to worry around 80%, but I've never
actually had one fail to route other than the ones I tried to help by
doing placement, lol.


They suggest that you consider 75-80% to be "Full". This design started
in the 70% level but we were adding capability to the system and the
density grew. (And were already using the largest chip for the
footprint). Our next step was to redo the board and get the usage back
down. When we hit the issue we had a mostly working design but were
fixing the one last bug, and that was when the fitter threw its fit.

The "full" utilization number is approximate because it depends on the
details of the design. Some designs can get to higher utilization
numbers, others less. As a way of pointing out that the routing is the
part of the chip that uses the most space while the logic is smaller,
Xilinx sales people used to say, "We sell you the routing and give you
the logic for free." The point is the routing usually limits your
design rather than the logic. If you want to be upset about utilization
numbers, ask them how much of your routing gets used! It's *way* below
80%.

And this is why they keep the real details of the routing proprietary
(not to keep you from getting upset). The serious design work goes into
figuring out how much routing they really need per cell. If they could
figure out a better allocation that let them cut the routing per cell
by 10%, they could give you 10% more logic for free. If they goof and
provide too little routing, you see the resources that you were sold
(since they advertise the logic capability) as being wasted by some
'dumb design limitation'. There have been families that got black eyes
for having routing problems, and thus were avoided for 'serious' work.

rickman
Guest

Wed Aug 19, 2015 10:02 am   



On 8/18/2015 2:21 PM, KJ wrote:
Quote:
On Tuesday, August 18, 2015 at 11:35:55 AM UTC-4, rickman wrote:

I'm not sure what details of the routing the chip editors leave out.
You only need to know what is connected to what, through what and what
the delays for all those cases are.

If you're trying to implement an open source toolchain you would likely need to know *how* to specify those connections via the programming bitstream.


Well... yeah. That's the sticky wicket, knowing how to generate the
bitstream. I think you missed the point of this subthread.

--

Rick

rickman
Guest

Wed Aug 19, 2015 10:09 am   



On 8/18/2015 9:40 PM, Richard Damon wrote:
Quote:
On 8/18/15 11:35 AM, rickman wrote:
On 8/15/2015 8:32 AM, Richard Damon wrote:

The chip editors tend to just show the LOGIC resources, not the details
of the routing resources. The manufacturers tend to do a good job of
giving the detail of the logic blocks you are working with, as this is
the part of the design you tend to specify. Routing on the other hand
tends to not be something you care about, just that the routing 'works'.
When they have done a good job at designing the routing you don't notice
it, but there have been cases where the routing turned out not quite
flexible enough and you notice that you can't fill the device as well
before hitting routing issues.

I'm not sure what details of the routing the chip editors leave out. You
only need to know what is connected to what, through what and what the
delays for all those cases are. Other than that, the routing does just
"work".


Look closely. The chip editor will normally show you the exact logic
element you are using, with a precise location. The output will then go
out into a routing channel and on to the next logic cell(s) it drives.
It may even show you the various rows and columns of routing it passes
through. Those rows and columns are made of a (large) number of distinct
wires, with routing resources connecting outputs to select lines and
select lines being brought into the next piece of routing/logic. Which
wire is being used will not be indicated, nor are all the wires
interchangeable, so which wire is chosen can matter for fitting. THIS
is the missing information.


I can't speak with total authority since I have not used a chip editor
in a decade. But when I used them, they showed sufficient detail
that I could control every aspect of the routing. In fact, they showed
every routing resource in sufficient detail that the logic components
were rather small and a bit hard to see.

When you say which wire is used is not shown, how would you be able to
do manual routing if the details are not there? Manual routing and
logic is the purpose of the chip editors, no?


Quote:
I have had to work with the factory on things like this. I had a very
full FPGA and needed to make a small change. With the change I had
some
over clogged routing, but if I removed all internal constraints the
fitter couldn't find a fit. Working with someone who did know the
details, we were able to relax just a few internal constraints and get
the system to fit the design. He did comment that my design was
probably
the fullest design he had seen in the wild, we had grown to about 95%
logic utilization.

Yeah, that's pretty full. I start to worry around 80%, but I've never
actually had one fail to route other than the ones I tried to help by
doing placement, lol.


They suggest that you consider 75-80% to be "Full". This design started
in the 70% level but we were adding capability to the system and the
density grew. (And were already using the largest chip for the
footprint). Our next step was to redo the board and get the usage back
down. When we hit the issue we had a mostly working design but were
fixing the one last bug, and that was when the fitter threw its fit.

The "full" utilization number is approximate because it depends on the
details of the design. Some designs can get to higher utilization
numbers, others less. As a way of pointing out that the routing is the
part of the chip that uses the most space while the logic is smaller,
Xilinx sales people used to say, "We sell you the routing and give you
the logic for free." The point is the routing usually limits your
design rather than the logic. If you want to be upset about utilization
numbers, ask them how much of your routing gets used! It's *way* below
80%.

And this is why they keep the real details of the routing proprietary
(not to keep you from getting upset). The serious design work goes into
figuring out how much routing they really need per cell. If they could
figure out a better allocation that let them cut the routing per cell
by 10%, they could give you 10% more logic for free. If they goof and
provide too little routing, you see the resources that you were sold
(since they advertise the logic capability) as being wasted by some
'dumb design limitation'. There have been families that got black eyes
for having routing problems, and thus were avoided for 'serious' work.


I don't follow the logic. There are always designs that deviate from
the typical utilization in both directions. What details you can see
in the chip editor has nothing to do with user satisfaction, since you
can read the utilization numbers in the reports and don't need to see
any routing, etc.

--

Rick
