EDK : FSL macros defined by Xilinx are wrong

elektroda.net NewsGroups Forum Index - FPGA - EDK : FSL macros defined by Xilinx are wrong


Aleksandar Kuktin
Guest

Sat Aug 08, 2015 6:51 pm   



On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote:

Quote:
Of course you
would find some students who are contributing (e.g. for their thesis),
but I doubt that it will be enough to get a competitive product and to
maintain it. New devices should be supported with short delay, otherwise
the tool would not be very useful.


With GCC, Linux and their ilk, it's actually the other way around: they add
support for new CPUs before the new CPUs hit the market (x86_64, for example).
This is partly because hardware producers understand they need
toolchain support and work actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack on those leading FOSS
toolchains, for the benefit of all.

rickman
Guest

Mon Aug 10, 2015 1:45 am   



On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:
Quote:
On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote:

Of course you
would find some students which are contributing (e.g. for their thesis),
but I doubt that it will be enough to get a competitive product and to
maintain it. New devices should be supported with short delay, otherwise
the tool would not be very useful.

With GCC, Linux and the ilk, it's actually the other way around. They add
support for new CPUs before the new CPUs hit the market (quoting x86_64).
This is partially due to hardware producers understanding they need
toolchain support and working actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack those leading FOSS
toolchains, for the benefit of all.


How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

--

Rick

glen herrmannsfeldt
Guest

Mon Aug 10, 2015 8:33 am   



rickman <gnuarm_at_gmail.com> wrote:
> On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:

(snip)
Quote:
With GCC, Linux and the ilk, it's actually the other way around. They add
support for new CPUs before the new CPUs hit the market (quoting x86_64).
This is partially due to hardware producers understanding they need
toolchain support and working actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack those leading FOSS
toolchains, for the benefit of all.


OK, but that is relatively recent (in the life of gcc).

The early gcc releases were replacements for existing C compilers on
systems that already had C compilers.

Quote:
How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?


Only the final stage of processing needs to know the real details
of the bitstream. I don't know the current tool chain very well, but
it might be that you could replace most of the steps and use the
vendor-supplied final step.

Remember the early gcc before glibc? They used the vendor-supplied
libc, which meant that it had to use the same calling convention.

Quote:
Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.


If FOSS tools were available, there would be reason to release
those details. But note that you don't really need bit-level detail to
write assembly code, only to write assemblers. You need to know how
many bits there are (for example, in an address) but not which bits.

Now, most assemblers do print out the hex codes, but most often
that isn't needed for actual programming, only sometimes for debugging.
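To make that distinction concrete, here is a toy sketch (the 16-bit ISA and its field layout are invented for illustration): the person writing assembly only needs the mnemonic-level view, while the assembler itself is what needs the bit-level encoding.

```python
# Toy assembler for a hypothetical 16-bit ISA. The assembly programmer
# writes "ADD r1, r2" knowing only mnemonics and register names; the
# bit-level layout below is needed only by the assembler itself.
OPCODES = {"ADD": 0b0001, "SUB": 0b0010, "LD": 0b0011}

def assemble(line):
    """Encode one 'OP rD, rS' line into a 16-bit word:
    bits [15:12] opcode, [11:8] dest reg, [7:4] source reg, [3:0] unused."""
    op, args = line.split(None, 1)
    rd, rs = (int(r.strip().lstrip("r")) for r in args.split(","))
    return OPCODES[op] << 12 | rd << 8 | rs << 4

word = assemble("ADD r1, r2")
print(f"{word:#06x}")  # prints 0x1120 -- the hex listing assemblers can emit
```

The hex listing at the end is exactly the "printed hex codes" mentioned above: useful for debugging, not required for writing the assembly source.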

-- glen

rickman
Guest

Mon Aug 10, 2015 12:56 pm   



On 8/10/2015 2:33 AM, glen herrmannsfeldt wrote:
Quote:
rickman <gnuarm_at_gmail.com> wrote:
On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:

(snip)

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Only the final stage of processing needs to know the real details
of the bitstream. I don't know so well the current tool chain, but
it might be that you could replace most of the steps, and use the
vendor supplied final step.

Remember the early gcc before glibc? They used the vendor supplied
libc, which meant that it had to use the same call convention.


We have had open source compilers and simulators for some time now.
When will the "similar penetration" happen?


Quote:
Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

If FOSS tools were available, there would be reason to release
those details. But note that you don't really need bit level to
write assembly code, only to write assemblers. You need to know how
many bits there are (for example, in an address) but not which bits.

Now, most assemblers do print out the hex codes, but most often
that isn't needed for actual programming, sometimes for debugging.


I'm not sure what your point is. You may disagree with details of what
I wrote, but I don't get what you are trying to say about the topic of
interest.

--

Rick

Walter Banks
Guest

Tue Aug 11, 2015 3:07 am   



On 09/08/2015 3:45 PM, rickman wrote:

Quote:

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.


This thread implies that a bitstream is like a processor ISA, and to some
extent it is. FOSS tools have for the most part avoided the minor variations in
processor ISAs, preferring to use a subset of the instruction set to
support a broad base of processors in a family.

The problem in some FPGA devices is that a much larger set of
implementation rules is required to produce effective code
implementations. This doesn't say that FOSS can't or won't do it, but it
would require much closer attention to detail than I have seen in the
FOSS tools I have looked at.

w..

rickman
Guest

Tue Aug 11, 2015 3:58 am   



On 8/10/2015 5:07 PM, Walter Banks wrote:
Quote:
On 09/08/2015 3:45 PM, rickman wrote:


How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.


This thread implies that a bitstream is like a processor ISA, and to some
extent it is. FOSS tools have for the most part avoided the minor variations in
processor ISAs, preferring to use a subset of the instruction set to
support a broad base of processors in a family.

The problem in some FPGA devices is that a much larger set of
implementation rules is required to produce effective code
implementations. This doesn't say that FOSS can't or won't do it, but it
would require much closer attention to detail than I have seen in the
FOSS tools I have looked at.


I don't know for sure, but I think this is carrying the analogy a bit
too far. If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain, that's
fine. Obviously the pressure to further optimize the output code just
isn't there.

I have no reason to think the tools for FPGA development don't have
their own set of tradeoffs and unique pressures for optimization. So it
is hard to tell where they will end up if they become mainstream FPGA
development tools, which I don't believe they are currently, regardless
of the issues of bitstream generation.

I can't say just how important users find the various optimizations
possible with different FPGAs. I remember working for a test equipment
maker who was using Xilinx in a particular product. They did not want
us to code the unique HDL patterns required to utilize some of the
architectural features because the code would not be very portable to
other brands which they might use in other products in the future. In
other words, they didn't feel the optimizations were worth limiting
their choice of vendors in the future.

I guess that is another reason why the FPGA vendors like having their
own tools. They want to be able to control the optimizations for their
architectural features. I think they could do this just fine with FOSS
tools as well as proprietary, but they would have to share their code
which the competition might be able to take advantage of.

--

Rick

DJ Delorie
Guest

Tue Aug 11, 2015 6:51 am   



rickman <gnuarm_at_gmail.com> writes:
Quote:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.


As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

Theo Markettos
Guest

Tue Aug 11, 2015 10:59 am   



DJ Delorie <dj_at_delorie.com> wrote:
Quote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.


But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is that the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

The nearest we came was VLIW designs like Itanium, where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact instance of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.

Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged

David Brown
Guest

Tue Aug 11, 2015 12:32 pm   



On 11/08/15 02:51, DJ Delorie wrote:
Quote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than gcc,
since he writes compilers that - like SDCC - target small, awkward 8-bit
architectures. In that world there are often many variants of the cpu
- the 8051 is particularly notorious - and getting the best out of these
devices often means making sure you use the extra architectural features
your particular device provides. SDCC is an excellent tool, but as
Walter says it works with various subsets of the ISA provided by common
8051, Z80, etc. variants. The big commercial toolchains for such
devices, such as those from Keil, IAR and Walter's own Bytecraft, provide
better support for the range of commercially available parts.

gcc is in a different world - it is a much bigger compiler suite, with
more developers than SDCC, and a great deal more support from the cpu
manufacturers and other commercial groups. One does not need to dig
further than the manual pages to see the huge range of options for
optimising use of different variants of many of the targets it supports -
including not just use of differences in the ISA, but also differences
in timings and instruction scheduling.

David Brown
Guest

Tue Aug 11, 2015 3:14 pm   



On 11/08/15 10:59, Theo Markettos wrote:
Quote:
DJ Delorie <dj_at_delorie.com> wrote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.


As you note below, that is true regarding the functional execution
behaviour - but not regarding the speed. For many targets, gcc can take
such non-ISA details into account as well as a large proportion of the
device-specific ISA (contrary to what Walter thought).

Quote:

The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.


Indeed. The bitstream, and the match between configuration bits and
functionality in an FPGA, do not really correspond to a cpu's ISA. They
are at a level of detail and complexity that is /way/ beyond an ISA.

Quote:
Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged


Walter Banks
Guest

Tue Aug 11, 2015 5:20 pm   



On 11/08/2015 2:32 AM, David Brown wrote:
Quote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain
that's fine.

As one of the GCC maintainers, I can tell you that the opposite is
true. We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than
gcc, since he writes compilers that - like SDCC - target small,
awkward 8-bit architectures. In that world, there are often many
variants of the cpu - the 8051 is particularly notorious - and
getting the best out of these devices often means making sure you use
the extra architectural features your particular device provides.
SDCC is an excellent tool, but as Walter says it works with various
subsets of ISA provided by common 8051, Z80, etc., variants. The big
commercial toolchains for such devices, such as from Keil, IAR and
Walter's own Bytecraft, provide better support for the range of
commercially available parts.


That frames the point I was making about bitstream information. My
limited understanding of the issue is that getting the bitstream
correct for a specific part goes beyond making the internal
interconnects functional, and extends to issues of timing,
power, gate position and data loads.

I am not saying that FOSS couldn't or shouldn't do it, but it would
change a lot of things in both the FOSS and fpga worlds. The chip
companies have traded speed for detail complexity, in the same way that
speed has been traded for ISA use restrictions (specific instruction
combinations) in many of the embedded system processors we have supported.

w..

David Brown
Guest

Tue Aug 11, 2015 8:45 pm   



On 11/08/15 13:20, Walter Banks wrote:
Quote:
On 11/08/2015 2:32 AM, David Brown wrote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain
that's fine.

As one of the GCC maintainers, I can tell you that the opposite is
true. We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than
gcc, since he writes compilers that - like SDCC - target small,
awkward 8-bit architectures. In that world, there are often many
variants of the cpu - the 8051 is particularly notorious - and
getting the best out of these devices often means making sure you use
the extra architectural features your particular device provides.
SDCC is an excellent tool, but as Walter says it works with various
subsets of ISA provided by common 8051, Z80, etc., variants. The big
commercial toolchains for such devices, such as from Keil, IAR and
Walter's own Bytecraft, provide better support for the range of
commercially available parts.

That frames the point I was making about bitstream information. My
limited understanding of the issue is getting the bitstream information
correct for a specific part goes beyond getting the internal
interconnects being functional and goes to issues dealing with timing,
power, gate position and data loads.

It is not saying that FOSS couldn't or shouldn't do it but it would
change a lot of things in both the FOSS and fpga world. The chip
companies have traded speed for detail complexity. In the same way that
speed has been traded for ISA use restrictions (specific instruction
combinations) in many of the embedded system processors we have supported.


This is not really a FOSS / closed-software issue (despite the thread).
Bitstream information in FPGAs is not really suitable for /any/ third
party - it doesn't matter significantly whether they are open or closed
development. When an FPGA company makes a new design, the details flow
automatically from the FPGA design into the placer/router/generator
software - the information content and the level of detail are far too
high to deal with sensibly through documentation or any other
interchange between significantly separated groups.

Though I have no "inside information" about how FPGA companies do their
development, I would expect there is a great deal of back-and-forth work
between the hardware designers, the software designers, and the groups
testing simulations to figure out how well the devices work in practice.
Whereas with a cpu design the ISA is at least mostly fixed early in
the design process, and the chip can be simulated and tested
without compilers or anything more than a simple assembler, for FPGAs
the bitstream will not be solidified until the final hardware design is
complete, and you are totally dependent on the placer/router/generator
software while doing the design.

All this means that it is almost infeasible for anyone to make a
sensible third-party generator, at least for large FPGAs. And the FPGA
manufacturers cannot avoid making such tools anyway. At best,
third-parties (FOSS or not) can hope to make limited bitstream models of
a few small FPGAs, and get something that works but is far from optimal
for the device.

Of course, there are many interesting ideas that can come out of even
such limited tools as this, so it is still worth making them and
"opening" the bitstream models for a few small FPGAs. For some uses, it
is an advantage that all software in the chain is open source, even if
the result is not as speed or space optimal. For academic use, it makes
research and study much easier, and can lead to new ideas or algorithms
for improving the FPGA development process. And you can do weird things
- I remember long ago reading of someone who used a genetic algorithm on
bitstreams for a small FPGA to make a filter system without actually
knowing /how/ it worked!
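That anecdote (likely Adrian Thompson's evolved-hardware experiments) can be miniaturized as a sketch. Everything below is hypothetical: a plain bitstring and an invented match-the-target fitness function stand in for a real bitstream scored by measuring programmed hardware.

```python
import random

random.seed(1)  # deterministic toy run

# Evolve a "bitstream" toward a target configuration. In the real
# experiment, fitness came from measuring the programmed FPGA itself;
# here TARGET is an invented stand-in so the sketch is self-contained.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4

def fitness(bits):
    """Count positions matching the target (higher is better)."""
    return sum(b == t for b, t in zip(bits, TARGET))

def evolve(pop_size=30, generations=200, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TARGET))  # one-point crossover
            children.append([bit ^ (random.random() < mutation)  # mutation
                             for bit in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive unmutated, the best fitness never decreases, and a short run climbs close to the target; the unsettling part of the original experiment was that the evolved configuration worked without anyone knowing why.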

rickman
Guest

Tue Aug 11, 2015 9:29 pm   



On 8/10/2015 8:51 PM, DJ Delorie wrote:
Quote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.


You are replying to the wrong person. I was not saying GCC limited the
instruction set used; I was positing a reason for Walter Banks' claim
that this was true. My point is that there are different pressures in
compiling for FPGAs and for CPUs.

--

Rick

rickman
Guest

Tue Aug 11, 2015 9:41 pm   



On 8/11/2015 5:14 AM, David Brown wrote:
Quote:
On 11/08/15 10:59, Theo Markettos wrote:
DJ Delorie <dj_at_delorie.com> wrote:

rickman <gnuarm_at_gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

As you note below, that is true regarding the functional execution
behaviour - but not regarding the speed. For many targets, gcc can take
such non-ISA details into account as well as a large proportion of the
device-specific ISA (contrary to what Walter thought).


I'm not clear on what is being said about speed. It is my understanding
that compiler writers often consider the speed of the output and try
hard to optimize it for each particular generation of processor ISA, or
even for versions of processors with the same ISA. So I don't see that
as being particularly different from FPGAs.

Sure, FPGAs require a *lot* of work to get the routing to meet timing. That
is the primary purpose of one of the three steps in FPGA design tools:
compile, place, route. I don't see this as fundamentally different from
CPU compilers in a way that affects the FOSS issue.


Quote:
The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.


Indeed. The bitstream and the match between configuration bits and
functionality in an FPGA do not really correspond to cpu's ISA. They
are at a level of detail and complexity that is /way/ beyond an ISA.


I think that is not a useful distinction. If you include all aspects of
writing compilers, the ISA has to be supplemented by other information
to get good output code. If you only consider the ISA, your code will
never be very good. In the end, the only useful distinction between
CPU tools and FPGA tools is that FPGA users are, in general, not as
capable of modifying the tools.

--

Rick


Guest

Wed Aug 12, 2015 1:11 am   



On Tuesday, August 11, 2015 at 3:59:22 AM UTC-5, Theo Markettos wrote:
Quote:
DJ Delorie <dj@....com> wrote:

rickman <gnuarm@....com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.

Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged


One could make the analogy that an FPGA's ISA is the LUT, register, ALU & RAM primitives that the mapper generates from the EDIF.

There is no suitable analogy for the router phase of bitstream generation. The routing resources are a hierarchy of variable-length wires in an assortment of directions (horizontal, vertical, sometimes diagonal), with pass transistors used to connect wires, sources and destinations.

Timing-driven place & route is easy to express, difficult to implement. Register and/or logic replication may be performed to improve timing.
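The "easy to express" half of that claim can be shown with a minimal Lee-style maze router. This is a hypothetical grid model, not a real FPGA routing graph: real routers work on the segmented-wire hierarchy described above and add timing-driven costs and congestion negotiation.

```python
from collections import deque

def route(grid, src, dst):
    """BFS a shortest path of free cells from src to dst.
    grid[r][c] == 1 marks a routing resource that is already used.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}                     # also serves as the visited set
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:                    # reached the sink: backtrace
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                frontier.append(nxt)
    return None

# A wall of already-used resources forces the net around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = route(grid, (0, 0), (2, 0))
```

The "difficult to implement" half is everything this omits: rip-up and re-route when nets compete for the same wires, and costs that reflect actual delay rather than hop count.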

There are some (open?) router tools at the University of Toronto:
http://www.eecg.toronto.edu/~jayar/software/software.html

Jim Brakefield
