EDK : FSL macros defined by Xilinx are wrong

On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote:

Of course you
would find some students which are contributing (e.g. for their thesis),
but I doubt that it will be enough to get a competitve product and to
maintain it. New devices should be supported with short delay, otherwise
the tool would not be very useful.

With GCC, Linux and the ilk, it's actually the other way around. They add
support for new CPUs before the new CPUs hit the market (x86_64 being a case in point).
This is partially due to hardware producers understanding they need
toolchain support and working actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack those leading FOSS
toolchains, for the benefit of all.
 
On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:
On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote:

Of course you
would find some students which are contributing (e.g. for their thesis),
but I doubt that it will be enough to get a competitive product and to
maintain it. New devices should be supported with short delay, otherwise
the tool would not be very useful.

With GCC, Linux and the ilk, it's actually the other way around. They add
support for new CPUs before the new CPUs hit the market (x86_64 being a case in point).
This is partially due to hardware producers understanding they need
toolchain support and working actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack those leading FOSS
toolchains, for the benefit of all.

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

--

Rick
 
rickman <gnuarm@gmail.com> wrote:
> On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:

(snip)
With GCC, Linux and the ilk, it's actually the other way around. They add
support for new CPUs before the new CPUs hit the market (x86_64 being a case in point).
This is partially due to hardware producers understanding they need
toolchain support and working actively on getting that support. If even a
single FOSS FPGA toolchain gets to a similar penetration, you can count
on FPGA houses paying their own people to hack those leading FOSS
toolchains, for the benefit of all.

OK, but that is relatively (in the life of gcc) recent.

The early gcc releases were replacements for existing C compilers, on
systems that already had them.

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Only the final stage of processing needs to know the real details
of the bitstream. I don't know the current tool chain all that well, but
it might be that you could replace most of the steps, and use the
vendor supplied final step.

Remember the early gcc before glibc? They used the vendor supplied
libc, which meant they had to use the same calling convention.

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

If FOSS tools were available, there would be reason to release
those details. But note that you don't really need bit-level detail to
write assembly code, only to write assemblers. You need to know how
many bits there are (for example, in an address) but not which bits.

Now, most assemblers do print out the hex codes, but most often
that isn't needed for actual programming, only occasionally for debugging.
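
To make that concrete, here is a toy encoder for a made-up 16-bit
"LDI reg, imm8" instruction (an invented format, not any real ISA): the
assembly programmer only ever writes "LDI r3, 42", while the knowledge
of exactly which bits hold the opcode, register and immediate lives
inside the assembler.

/* Hypothetical 16-bit "LDI reg, imm8" encoding, invented for
 * illustration: bits [15:12] opcode, [11:8] register, [7:0] immediate.
 * The programmer writes "LDI r3, 42"; only this encoder needs to know
 * which bits go where. */
#include <stdio.h>
#include <stdint.h>

#define OPC_LDI 0x7u                    /* made-up opcode value */

static uint16_t encode_ldi(unsigned reg, unsigned imm)
{
    return (uint16_t)((OPC_LDI << 12) | ((reg & 0xFu) << 8) | (imm & 0xFFu));
}

int main(void)
{
    printf("LDI r3, 42 -> 0x%04X\n", encode_ldi(3, 42));   /* 0x732A */
    return 0;
}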

-- glen
 
On 8/10/2015 2:33 AM, glen herrmannsfeldt wrote:
rickman <gnuarm@gmail.com> wrote:
On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote:

(snip)

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Only the final stage of processing needs to know the real details
of the bitstream. I don't know the current tool chain all that well, but
it might be that you could replace most of the steps, and use the
vendor supplied final step.

Remember the early gcc before glibc? They used the vendor supplied
libc, which meant they had to use the same calling convention.

We have had open source compilers and simulators for some time now.
When will the "similar penetration" happen?


Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

If FOSS tools were available, there would be reason to release
those details. But note that you don't really need bit-level detail to
write assembly code, only to write assemblers. You need to know how
many bits there are (for example, in an address) but not which bits.

Now, most assemblers do print out the hex codes, but most often
that isn't needed for actual programming, only occasionally for debugging.

I'm not sure what your point is. You may disagree with details of what
I wrote, but I don't get what you are trying to say about the topic of
interest.

--

Rick
 
On 09/08/2015 3:45 PM, rickman wrote:

How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.

This thread implies that a bitstream is like a processor ISA, and to
some extent it is. FOSS tools have for the most part avoided the minor
variations in processor ISAs, preferring to use a subset of the
instruction set to support a broad base of processors in a family.

The problem with some FPGA devices is that a much larger set of
implementation rules is required to produce effective implementations.
That is not to say FOSS can't or won't do it, but it would require much
closer attention to detail than the FOSS tools I have looked at provide.

w..
 
On 8/10/2015 5:07 PM, Walter Banks wrote:
On 09/08/2015 3:45 PM, rickman wrote:


How will any FPGA toolchain get "a similar penetration" if the vendors
don't open the spec on the bitstream? Do you see lots of people coming
together to reverse engineer the many brands and flavors of FPGA devices
to make this even possible?

Remember that CPU makers have *always* released detailed info on their
instruction sets because it was useful even if, no, *especially if*
coding in assembly.


This thread implies that a bitstream is like a processor ISA, and to
some extent it is. FOSS tools have for the most part avoided the minor
variations in processor ISAs, preferring to use a subset of the
instruction set to support a broad base of processors in a family.

The problem with some FPGA devices is that a much larger set of
implementation rules is required to produce effective implementations.
That is not to say FOSS can't or won't do it, but it would require much
closer attention to detail than the FOSS tools I have looked at provide.

I don't know for sure, but I think this is carrying the analogy a bit
too far. If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain that's
fine. Obviously the pressure to further optimize the output code just
isn't there.

I have no reason to think the tools for FPGA development don't have
their own set of tradeoffs and unique pressures for optimization. So it
is hard to tell where the FOSS tools would end up if they became
mainstream FPGA development tools (which I don't believe they are
currently), regardless of the issues of bitstream generation.

I can't say just how important users find the various optimizations
possible with different FPGAs. I remember working for a test equipment
maker who was using Xilinx in a particular product. They did not want
us to code the unique HDL patterns required to utilize some of the
architectural features because the code would not be very portable to
other brands which they might use in other products in the future. In
other words, they didn't feel the optimizations were worth limiting
their choice of vendors in the future.

I guess that is another reason why the FPGA vendors like having their
own tools. They want to be able to control the optimizations for their
architectural features. I think they could do this just fine with FOSS
tools as well as proprietary, but they would have to share their code
which the competition might be able to take advantage of.

--

Rick
 
rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.
 
DJ Delorie <dj@delorie.com> wrote:
rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.

Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged
 
On 11/08/15 02:51, DJ Delorie wrote:
rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

My guess is that Walter's experience here is with SDCC rather than gcc,
since he writes compilers that - like SDCC - target small, awkward 8-bit
architectures. In that world, there are often many variants of the cpu
- the 8051 is particularly notorious - and getting the best out of these
devices often means making sure you use the extra architectural features
your particular device provides. SDCC is an excellent tool, but as
Walter says it works with various subsets of ISA provided by common
8051, Z80, etc., variants. The big commercial toolchains for such
devices, such as from Keil, IAR and Walter's own Bytecraft, provide
better support for the range of commercially available parts.

gcc is in a different world - it is a much bigger compiler suite, with
more developers than SDCC, and a great deal more support from the cpu
manufacturers and other commercial groups. One does not need to dig
further than the manual pages to see the huge range of options for
optimising use of different variants of many of the targets it supports -
including not just use of differences in the ISA, but also differences
in timings and instruction scheduling.
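
As a concrete (and purely illustrative) sketch of that variant
selection, the same trivial C function can be built with different
target and tuning flags; the flag spellings below are the ones I believe
current gcc releases accept, so check the manual for your version.

/* saturating_add.c - a trivial function whose generated code differs
 * between target variants. The compile commands in the comments are
 * illustrative only. */
#include <stdint.h>

uint16_t saturating_add(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    return sum > 0xFFFFu ? 0xFFFFu : (uint16_t)sum;
}

/* Same source, different variant selection and tuning:
 *
 *   gcc -O2 -march=native -mtune=native -S saturating_add.c
 *   arm-none-eabi-gcc -O2 -mthumb -mcpu=cortex-m0 -S saturating_add.c
 *   arm-none-eabi-gcc -O2 -mthumb -mcpu=cortex-m4 -S saturating_add.c
 *
 * -march/-mcpu choose which ISA variant's instructions may be emitted;
 * -mtune adjusts instruction selection and scheduling for a particular
 * core without changing the ISA subset. */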
 
On 11/08/15 10:59, Theo Markettos wrote:
DJ Delorie <dj@delorie.com> wrote:

rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

As you note below, that is true regarding the functional execution
behaviour - but not regarding the speed. For many targets, gcc can take
such non-ISA details into account as well as a large proportion of the
device-specific ISA (contrary to what Walter thought).

The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.

Indeed. The bitstream and the match between configuration bits and
functionality in an FPGA do not really correspond to a CPU's ISA. They
are at a level of detail and complexity that is /way/ beyond an ISA.

Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged
 
On 11/08/2015 2:32 AM, David Brown wrote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain
that's fine.

As one of the GCC maintainers, I can tell you that the opposite is
true. We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than
gcc, since he writes compilers that - like SDCC - target small,
awkward 8-bit architectures. In that world, there are often many
variants of the cpu - the 8051 is particularly notorious - and
getting the best out of these devices often means making sure you use
the extra architectural features your particular device provides.
SDCC is an excellent tool, but as Walter says it works with various
subsets of ISA provided by common 8051, Z80, etc., variants. The big
commercial toolchains for such devices, such as from Keil, IAR and
Walter's own Bytecraft, provide better support for the range of
commercially available parts.

That frames the point I was making about bitstream information. My
limited understanding of the issue is that getting the bitstream correct
for a specific part goes beyond making the internal interconnects
functional and extends to issues of timing, power, gate position and
data loads.

That is not to say that FOSS couldn't or shouldn't do it, but it would
change a lot of things in both the FOSS and FPGA worlds. The chip
companies have traded speed for detail complexity, in the same way that
speed has been traded for ISA use restrictions (specific instruction
combinations) in many of the embedded-system processors we have supported.

w..
 
On 11/08/15 13:20, Walter Banks wrote:
On 11/08/2015 2:32 AM, David Brown wrote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets
of instructions to make the compiler easier to code and maintain
that's fine.

As one of the GCC maintainers, I can tell you that the opposite is
true. We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than
gcc, since he writes compilers that - like SDCC - target small,
awkward 8-bit architectures. In that world, there are often many
variants of the cpu - the 8051 is particularly notorious - and
getting the best out of these devices often means making sure you use
the extra architectural features your particular device provides.
SDCC is an excellent tool, but as Walter says it works with various
subsets of ISA provided by common 8051, Z80, etc., variants. The big
commercial toolchains for such devices, such as from Keil, IAR and
Walter's own Bytecraft, provide better support for the range of
commercially available parts.

That frames the point I was making about bitstream information. My
limited understanding of the issue is that getting the bitstream correct
for a specific part goes beyond making the internal interconnects
functional and extends to issues of timing, power, gate position and
data loads.

That is not to say that FOSS couldn't or shouldn't do it, but it would
change a lot of things in both the FOSS and FPGA worlds. The chip
companies have traded speed for detail complexity, in the same way that
speed has been traded for ISA use restrictions (specific instruction
combinations) in many of the embedded-system processors we have supported.

This is not really a FOSS / closed-software issue (despite the thread).
Bitstream information in FPGAs is not really suitable for /any/ third
parties - it doesn't matter much whether their development is open or
closed. When an FPGA company makes a new design, the details will flow
automatically from the hardware design into the placer/router/generator
software - the information content and the level of detail are far too
high to handle sensibly through documentation or any other interchange
between significantly separated groups.

Though I have no "inside information" about how FPGA companies do their
development, I would expect there is a great deal of back-and-forth work
between the hardware designers, the software designers, and the groups
testing simulations to figure out how well the devices work in practice.
Whereas with a cpu design, the ISA is at least mostly fixed early in
the design process, and also the chip can be simulated and tested
without compilers or anything more than a simple assembler, for FPGAs
your bitstream will not be solidified until the final hardware design is
complete, and you are totally dependent on the placer/router/generator
software while doing the design.

All this means that it is almost infeasible for anyone to make a
sensible third-party generator, at least for large FPGAs. And the FPGA
manufacturers cannot avoid making such tools anyway. At best,
third-parties (FOSS or not) can hope to make limited bitstream models of
a few small FPGAs, and get something that works but is far from optimal
for the device.

Of course, there are many interesting ideas that can come out of even
such limited tools as this, so it is still worth making them and
"opening" the bitstream models for a few small FPGAs. For some uses, it
is an advantage that all software in the chain is open source, even if
the result is not as speed or space optimal. For academic use, it makes
research and study much easier, and can lead to new ideas or algorithms
for improving the FPGA development process. And you can do weird things
- I remember long ago reading of someone who used a genetic algorithm on
bitstreams for a small FPGA to make a filter system without actually
knowing /how/ it worked!
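
That sort of experiment is easy to sketch in the abstract. Below is a
toy genetic algorithm over raw bitstrings with a stand-in fitness
function; in the real experiment the fitness would instead come from
loading each candidate bitstring into the device and measuring the
resulting circuit's behaviour. Everything here is illustrative only.

/* Toy genetic algorithm over fixed-length bitstrings - a sketch of the
 * technique, not of any real bitstream format. A real "evolve a circuit
 * on an FPGA" setup would replace fitness() with: configure the device
 * with the bitstring, stimulate it, score the measured response. Here
 * fitness just rewards matching an arbitrary target pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BITS     64        /* length of each candidate bitstring */
#define POP      32        /* population size                    */
#define GENS     200       /* number of generations              */
#define MUT_RATE 0.02      /* per-bit mutation probability       */

typedef struct { unsigned char bit[BITS]; } genome_t;

static genome_t target;    /* stand-in for "the behaviour we want" */

static int fitness(const genome_t *g)
{
    int score = 0;
    for (int i = 0; i < BITS; i++)
        score += (g->bit[i] == target.bit[i]);
    return score;
}

static void randomize(genome_t *g)
{
    for (int i = 0; i < BITS; i++)
        g->bit[i] = (unsigned char)(rand() & 1);
}

/* one-point crossover followed by per-bit mutation */
static void breed(const genome_t *a, const genome_t *b, genome_t *child)
{
    int cut = rand() % BITS;
    for (int i = 0; i < BITS; i++) {
        child->bit[i] = (i < cut) ? a->bit[i] : b->bit[i];
        if ((double)rand() / RAND_MAX < MUT_RATE)
            child->bit[i] ^= 1;
    }
}

int main(void)
{
    genome_t pop[POP], next[POP];
    srand(1);
    randomize(&target);
    for (int i = 0; i < POP; i++)
        randomize(&pop[i]);

    for (int gen = 0; gen < GENS; gen++) {
        for (int i = 0; i < POP; i++) {
            /* crude tournament selection: breed the winners of two
             * random pairings */
            const genome_t *p1 = &pop[rand() % POP], *p2 = &pop[rand() % POP];
            const genome_t *p3 = &pop[rand() % POP], *p4 = &pop[rand() % POP];
            const genome_t *a = fitness(p1) > fitness(p2) ? p1 : p2;
            const genome_t *b = fitness(p3) > fitness(p4) ? p3 : p4;
            breed(a, b, &next[i]);
        }
        memcpy(pop, next, sizeof pop);
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(&pop[i]) > fitness(&pop[best]))
            best = i;
    printf("best fitness after %d generations: %d/%d\n",
           GENS, fitness(&pop[best]), BITS);
    return 0;
}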
 
On 8/10/2015 8:51 PM, DJ Delorie wrote:
rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

You are replying to the wrong person. I was not saying GCC limited the
instruction set used, I was positing a reason for Walter Banks' claim
this was true. My point is that there are different pressures in
compiling for FPGAs and CPUs.

--

Rick
 
On 8/11/2015 5:14 AM, David Brown wrote:
On 11/08/15 10:59, Theo Markettos wrote:
DJ Delorie <dj@delorie.com> wrote:

rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

As you note below, that is true regarding the functional execution
behaviour - but not regarding the speed. For many targets, gcc can take
such non-ISA details into account as well as a large proportion of the
device-specific ISA (contrary to what Walter thought).

I'm not clear on what is being said about speed. It is my understanding
that compiler writers often consider the speed of the output and try
hard to optimize it for each particular generation of a processor ISA,
or even for different versions of a processor with the same ISA. So I
don't see that as being particularly different from FPGAs.

Sure, FPGAs require a *lot* of work to get routing to meet timing. That
is the primary purpose of one of the three steps in FPGA design tools:
compile, place, and route. I don't see this as fundamentally different from
CPU compilers in a way that affects the FOSS issue.


The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.


Indeed. The bitstream and the match between configuration bits and
functionality in an FPGA do not really correspond to a CPU's ISA. They
are at a level of detail and complexity that is /way/ beyond an ISA.

I think that is not a useful distinction. If you include all aspects of
writing compilers, the ISA has to be supplemented by other information
to get good output code. If you only consider the ISA, your code will
never be very good. In the end, the only useful distinction between the
CPU tools and the FPGA tools is that FPGA users are, in general, not as
capable of modifying the tools.

--

Rick
 
On Tuesday, August 11, 2015 at 3:59:22 AM UTC-5, Theo Markettos wrote:
DJ Delorie <dj@....com> wrote:

rickman <gnuarm@....com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.

But the point is the ISA is the software-level API for the processor.
There's a lot more fancy stuff in the microarchitecture that you don't get
exposed to as a compiler writer[1]. The contract between programmers and
the CPU vendor is the vendor will implement the ISA API, and software
authors can be confident their software will work.[2]

You don't get exposed to things like branch latency, pipeline hazards,
control flow graph dependencies, and so on, because microarchitectural
techniques like branch predictors, register renaming and out-of-order
execution do a massive amount of work to hide those details from the
software world.

The nearest we came is VLIW designs like Itanium where more
microarchitectural detail was exposed to the compiler - which turned out to
be very painful for the compiler writer.

There is no such API for FPGAs - the compiler has to drive the raw
transistors to set up the routing for the exact example of the chip being
programmed. Not only that, there are no safeguards - if you drive those
transistors wrong, your chip catches fire.

Theo


[1] There is a certain amount of performance tweaking you can do with
knowledge of caching, prefetching, etc - but you rarely have the problem of
functional correctness; the ISA is not violated, even if slightly slower

[2] To a greater or lesser degree - Intel takes this to extremes,
supporting binary compatibility of OSes back to the 1970s; ARM requires the
OS to co-evolve but userland programs are (mostly) unchanged

One could make the analogy that an FPGA's ISA is the set of LUT, register, ALU and RAM primitives that the mapper generates from the EDIF.
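
To make the LUT part of that analogy concrete (toy code only; real parts number their LUT configuration bits in their own ways), configuring a 4-input LUT is just tabulating a Boolean function over all sixteen input combinations:

/* Map a 4-input Boolean function onto the 16 configuration bits of a
 * 4-input LUT. The bit ordering here is the obvious one (bit i = f at
 * input value i); real devices may order the INIT bits differently. */
#include <stdio.h>
#include <stdint.h>

/* example function: f = (a & b) ^ (c | d) */
static int f(int a, int b, int c, int d)
{
    return (a & b) ^ (c | d);
}

int main(void)
{
    uint16_t init = 0;
    for (int i = 0; i < 16; i++) {
        int a = (i >> 0) & 1, b = (i >> 1) & 1;
        int c = (i >> 2) & 1, d = (i >> 3) & 1;
        if (f(a, b, c, d))
            init |= (uint16_t)(1u << i);
    }
    printf("LUT4 INIT = 0x%04X\n", init);
    return 0;
}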

There is no suitable analogy for the router phase of bitstream generation. The routing resources are a hierarchy of variable-length wires in an assortment of directions (horizontal, vertical, sometimes diagonal), with pass transistors used to connect wires, sources and destinations.

Timing driven place & route is easy to express, difficult to implement. Register and/or logic replication may be performed to improve timing.

There are some open(?) router tools at the University of Toronto:
http://www.eecg.toronto.edu/~jayar/software/software.html

Jim Brakefield
 
On 11.08.2015 08:32, David Brown wrote:
On 11/08/15 02:51, DJ Delorie wrote:

rickman <gnuarm@gmail.com> writes:
If FOSS compilers for CPUs have mostly limited code to subsets of
instructions to make the compiler easier to code and maintain that's
fine.

As one of the GCC maintainers, I can tell you that the opposite is true.
We take advantage of everything the ISA offers.


My guess is that Walter's experience here is with SDCC rather than gcc,
since he writes compilers that - like SDCC - target small, awkward 8-bit
architectures. In that world, there are often many variants of the cpu
- the 8051 is particularly notorious - and getting the best out of these
devices often means making sure you use the extra architectural features
your particular device provides. SDCC is an excellent tool, but as
Walter says it works with various subsets of ISA provided by common
8051, Z80, etc., variants. The big commercial toolchains for such
devices, such as from Keil, IAR and Walter's own Bytecraft, provide
better support for the range of commercially available parts.

gcc is in a different world - it is a much bigger compiler suite, with
more developers than SDCC, and a great deal more support from the cpu
manufacturers and other commercial groups. One does not need to dig
further than the manual pages to see the huge range of options for
optimising use of different variants of many of the targets it supports -
including not just use of differences in the ISA, but also differences
in timings and instruction scheduling.

I'd say the SDCC situation is more complex, and it seems to do quite
well compared to other compilers for the same architectures. On the one
hand, SDCC has always had few developers. It has some quite advanced
optimizations, but on the other hand it is lacking in some standard
optimizations and features (SDCC's pointer analysis is not that good, we
don't have generalized constant propagation yet, and some standard C
features are still missing - see below, after the discussion of the
ports). IMO, the biggest weaknesses are there, and not in the use of
exotic instructions.
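
For readers who don't follow compiler internals: constant propagation
is the pass that tracks known values through assignments and branches
so later operations can be folded. A minimal, purely illustrative C
example follows; it makes no claim about what any particular SDCC
release does with this case.

/* Illustration of constant propagation across a branch: both paths
 * leave 'scale' holding the known value 8, so a compiler doing
 * generalized (control-flow-aware) constant propagation can fold the
 * multiply into a shift, or fold the whole expression if 'x' is known
 * at the call site. */
#include <stdint.h>

uint16_t scaled(uint16_t x, int fast)
{
    uint16_t scale;
    if (fast)
        scale = 8;
    else
        scale = 4 * 2;               /* also 8, just spelled differently */
    return (uint16_t)(x * scale);    /* foldable to x << 3 */
}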

The 8051 has many variants, and SDCC currently does not support some of
the advanced features available in some of them, such as four DPTRs. I
do not know how SDCC compares to non-free compilers in that respect.

The Z80 is already a bit different. We use the differences in the
instruction sets of the Z80, Z180, LR35902, Rabbit, TLCS-90. SDCC does
not use the undocumented instructions available in some Z80 variants,
and does not use the alternate register set for code generation; there
definitely is potential for further improvement. But the last time I did a
comparison of compilers for these architectures, IAR was the only one
that did better than SDCC for some of them.

Newer architectures supported by SDCC are the Freescale HC08, S08 and
the STMicroelectronics STM8. The non-free compilers for these targets
often seem to generate better code, but SDCC is not far behind.

The SDCC PIC backends are not up to the standard of the others.

In terms of standards compliance, IMO, SDCC is doing better than the
non-free compilers, with the exception of IAR. Most non-free compilers
support something resembling C90 with a few deviations from the
standard, while IAR seems to support mostly standard C99. SDCC has a few
gaps, even in C90 (such as K&R functions and assignment of structs). On
the other hand, SDCC supports most of the new features of C99 and C11
(the only missing feature introduced in C11 seems to be UTF-8 strings).

Philipp
 
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
> The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?

If there is a better place to discuss this, please let me know.
 
On 8/13/2015 10:11 AM, hamilton wrote:
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?

If there is a better place to discuss this, please let me know.

I don't know tons about the 32 bit chips which are mostly ARMs. But the
initialization is more complex. It is a good idea to let the tools
handle that for you. All of the 8 bit chips I've used were very simple
to get off the ground.

--

Rick
 
On 13.08.2015 16:11, hamilton wrote:
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
The SDCC PIC backends are not up to the standard of the others.

Is the PIC too much of an odd ball to keep up with
or
is there no future in 8-bit PIC ?
or
are 32-bit chips more fun ?

I don't consider 32-bit chips more fun. I like CISC 8-bitters, but I
prefer those that seem better suited for C. Again SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the PICs.

Also, the situation is quite different between the pic14 and pic16
backends. The pic16 backend is not that bad. If someone puts a few weeks
of work into it, it could probably make it up to the standard of the
other ports in terms of correctness; it already passes large parts of
the regular regression test suite. The pic14 would require much more work.

If there is a better place to discuss this, please let me know.

The sdcc-user and sdcc-devel mailing lists seem a better place than comp.arch.fpga.

Philipp
 
Again SDCC has few
developers, and at least recently, the most active ones don't seem that
interested in the PICs.

Back to the topic of the open FPGA tool chain: I think there would be many "PICs", i.e. topics that are addressed by no developers, or by too few.

But the whole discussion is quite theoretical as long as Altera and Xilinx do not open their bitstream formats. And I do not think that they will do anything to support an open-source solution, as software is the main entry obstacle for FPGA startups. If there were a flexible open-source tool chain with a large developer and user base that could be ported to new architectures easily, this would make it much easier for new competition. (Think gcc...)

Also (as mentioned above) I think that, with the good and free tool chains from the suppliers, there would not be much demand for such an open-source tool chain. There are other points where I would see more motivation, and even there not much is happening:
- A good open-source Verilog/VHDL editor (yes, I have heard of Emacs...), as the integrated editors are average (Altera) or bad (Xilinx). (Currently I am evaluating two commercial VHDL editors...)
- A kind of graphical editor for VHDL and Verilog, as the top/higher levels of bigger projects are often a pain IMHO (like writing netlists by hand). I would even start such a project myself if I had the time...

But even for such things, where I think there would be quite some demand, the "critical mass" of the FPGA community is too low to get projects started and, especially, to keep them running.

Thomas
 
