Tiny CPUs for Slow Logic

On Wednesday, March 20, 2019 at 6:41:55 AM UTC-4, already...@yahoo.com wrote:
On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
On 19/03/19 17:35, already5chosen@yahoo.com wrote:
On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:

The UK Parliament is an unmitigated dysfunctional mess.


Do you prefer a dysfunctional mesh? ;)

:) I'll settle for anything that /works/ predictably :(


The UK political system is completely off-topic in comp.arch.fpga. However, I'd say that IMHO your parliament is facing an unusually difficult problem right now, while at the same time it's not really a "life or death" sort of problem. Having trouble and appearing indecisive in such a situation is normal. It does not mean that the system is broken.

I was watching a video of a guy who bangs together Teslas from salvage cars. This one was about him actually buying a used Tesla from Tesla and the many trials and tribulations he had. He had traveled to a dealership over an hour's drive away, and they said they didn't have anything for him. At one point he says he is not going to get too wigged out over all this because it is a "first world problem". That gave me insight into my own issues: what seems at first to me to be a major issue is an issue that much of the world would LOVE to have.

I'm wondering if Brexit is not one of those issues... I'm just sayin'...

FPGA design is similar. Consider which of your issues are "first world" issues when you design.

Rick C.
 
On Wednesday, March 20, 2019 at 6:53:07 AM UTC-4, Theo wrote:
gnuarm.deletethisbit@gmail.com wrote:
On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos wrote:

When people talk about things like "software running on such heterogeneous
cores" it makes me think they don't really understand how this could be
used. If you treat these small cores like logic elements, you don't have
such lofty descriptions of "system software" since the software isn't
created out of some global software package. Each core is designed to do
a specific job just like any other piece of hardware and it has discrete
inputs and outputs just like any other piece of hardware. If the hardware
clock is not too fast, the software can synchronize with and literally
function like hardware, while implementing more complex logic than the same
area of FPGA fabric could.

The point is that we need to understand what the whole system is doing. In
the XMOS case, we can look at a piece of software with N threads, running
across the cores provided on the chip. One piece of software, distributed
over the hardware resource available - the system is doing one thing.

Your bottom-up approach means it's difficult to see the big picture of
what's going on. That means it's hard to understand the whole system, and
to program from a whole-system perspective.

I never mentioned a bottom-up or a top-down approach to design. Nothing about using these small CPUs dictates the design "direction". I am pretty sure that you have to define the circuit they will work in before you can start designing the code.


Not sure what is hard to think about. It's a small CPU with limited
memory for implementing small tasks. It can do rather complex operations
compared to a state machine, and it includes memory, arithmetic and
logic as well as I/O, without your having to write a single line of HDL.
Only the actual app needs to be written.

Here are the semantic descriptions of the basic logic elements:

LUT: q = f(x,y,z)
FF: q <= d_in (delay of one cycle)
BRAM: q = array[addr]
DSP: q = a*b + c

A P&R tool can build a system out of these building blocks. It's notable
that the state-holding elements in this schema do nothing except hold
state. That makes writing the tools easier (and we all know how
difficult the tools already are). In general, we don't tend to instantiate
these primitives manually but describe the higher-level functions (e.g. a 64-
bit add) in HDL and allow the tools to select appropriate primitives for us
(e.g. a number of fast-adder blocks chained together).
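
For illustration (a sketch of mine, not from any vendor's library), a
behavioural fragment like this is enough for the tools to pick those
primitives - the addition maps onto carry chains, the clocked assignments
onto FFs, and the array onto a block RAM:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity infer_demo is
    port (
      clk  : in  std_logic;
      a, b : in  unsigned(63 downto 0);
      addr : in  unsigned(9 downto 0);
      we   : in  std_logic;
      q    : out unsigned(63 downto 0)
    );
  end entity infer_demo;

  architecture rtl of infer_demo is
    type ram_t is array (0 to 1023) of unsigned(63 downto 0);
    signal ram : ram_t;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if we = '1' then
          ram(to_integer(addr)) <= a + b;  -- 64-bit add: chained fast-adder blocks
        end if;
        q <= ram(to_integer(addr));        -- registered read: inferred block RAM
      end if;
    end process;
  end architecture rtl;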

What's the logic equation of a processor?

Obviously it is like a combination of LUTs and FFs, able to implement any logic you wish, including math. BTW, in many devices the elements are not nearly so simple. Xilinx LUTs can be used as shift registers. There is additional logic within the logic blocks that allows math with carry chains, combining LUTs to form larger LUTs, and breaking LUTs into smaller LUTs - and let's not forget about routing, which may not be used much anymore, I'm not sure.

So your simple world of four elements is really not so valid.


It has state, but vastly more
state than the simplicity of a flipflop. What pattern does the P&R tool
need to match to infer a processor?

Why does it need to be inferred? If you want to write a tool to turn HDL into processor code, have at it. But there are other methods. Someone mentioned that his MO is to use other tools for designing his algorithms and letting that tool generate the software for a processor or the HDL for an FPGA. That would seem easy enough to integrate.


How is any verification tool going
to understand whether the processor with software is doing the right thing?

Huh? You can't simulate code on a processor???


If your answer is 'we don't need verification tools, we program by hand'
then a) software has bugs, and automated verification is a handy way to
catch them, and b) you're never going to be writing hundreds of different
mini-programs to run on each core, let alone make them correct.

You seem to have left the roadway here. I'm lost.


If we scale the processors up a bit, I could see the merits in say a bank
of, say, 32 Cortex M0s that could be interconnected as part of the FPGA
fabric and programmed in software for dedicated tasks (for instance, read
the I2C EEPROM on the DRAM DIMM and configure the DRAM controller at boot).

I don't follow your logic. What is different about the ARM processor from the stack processor other than that it is larger and slower and requires a royalty on each one? Are you talking about writing the code in C vs. what ever is used for the stack processor?


But this is an SoC construct (built using SoC builder tools, and over which
the programmer has some purview although, as it turns out, sketchier than
you might think[1]). Such CPUs would likely be running bigger corpora of
software (for instance, the DRAM controller vendor's provided initialisation
code) which would likely be in C. But in this case we could just use a
soft-core today (the CPU ISA is mostly irrelevant for this application, so a
RISC-V/Microblaze/NIOS would be fine).

[1] https://inf.ethz.ch/personal/troscoe/pubs/hotos15-gerber.pdf

The point of the many hard cores is the saving of resources. Soft cores would be the most wasteful way to implement logic. If the application is large enough they can implement things in software that aren't as practical in HDL, but that would be a different class of logic from the tiny CPUs I'm talking about.


I can also see another niche, at the extreme bottom end, where a CPLD might
have one of your processors plus a few hundred logic cells. That's
essentially a microcontroller with FPGA, or an FPGA with microcontroller -
which some of the vendors already produce (although possibly not
small/cheap/low power enough). Here I can't see the advantages of using a
stack-based CPU versus paying a bit more to program in C. Although I don't
have experience in markets where the retail price of the product is $1, and so
every $0.001 matters.

I would be interested to know what applications might use heterogenous
many-cores and what performance is achievable.

Yes, clearly not getting the concept. Asking about heterogeneous
performance is totally antithetical to this idea.

You keep mentioning 700 MIPS, which suggests performance is important. If
these are simple state machine replacements, why do we care about
performance?

You lost me with the gear shift. The mention of instruction rate is about the CPU being fast enough to keep up with FPGA logic. The issue with "heterogeneous performance" is the "heterogeneous" part - lumping the many CPUs together to create some sort of number cruncher. That's not what this is about. As in the GA144, I fully expect most CPUs to be sitting around most of the time, idling while waiting for data. This is actually a good thing: these CPUs could consume significant current if they ran at GHz rates all the time. I believe that in the GA144 each running processor can draw around 2..5 mA. Not sure if a smaller process would use more or less power when running flat out. It's been too many years since I worked with those sorts of numbers.


In essence, your proposal has a disconnect between the situations existing
FPGA blocks are used (implemented automatically by P&R tools) and the
situations software is currently used (human-driven software and
architectural design). It's unclear how you claim to bridge this gap.

I don't usually think of designing in those terms. If I want to design something, I design it. I ignore many tools, only using the ones I find useful. In this case I would have no problem writing code for the processor and, if needed, rolling a model of the processor into the FPGA simulation to run the code. In a professional implementation I would expect these models to be written for me, in modules that run much faster than HDL, so the simulation speed is not impacted.

I certainly don't see how P&R tools would be a problem. They accommodate multipliers, DSP blocks, memory blocks and many, many special bits of assorted components inside the FPGAs, which vary from vendor to vendor. Clock generation and distribution is pretty unique to each manufacturer. Lattice has all sorts of modules to offer, like I2C and embedded Flash. Then there are entire CPUs embedded in FPGAs. Why would supporting them be so different from what I am talking about?

Rick C.
 
On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown wrote:
On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote:
On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos
wrote:
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
Understand XMOS's xCORE processors and xC language, see how
they complement and support each other. I found the net
result stunningly easy to get working first time, without
having to continually read obscure errata!

I can see the merits of the XMOS approach. But I'm unclear
how this relates to the OP's proposal, which (I think) is
having tiny CPUs as hard logic blocks on an FPGA, like DSP
blocks.

I completely understand the problem of running out of hardware
threads, so a means of 'just add another one' is handy. But
the issue is how to combine such things with other synthesised
logic.

The XMOS approach is fine when the hardware is uniform and the
software sits on top, but when the hardware is synthesised and
the 'CPUs' sit as pieces in a fabric containing random logic
(as I think the OP is suggesting) it becomes a lot harder to
reason about what the system is doing and what the software
running on such heterogeneous cores should look like. Only the
FPGA tools have a full view of what the system looks like, and
it seems stretching them to have them also generate software to
run on these cores.

When people talk about things like "software running on such
heterogeneous cores" it makes me think they don't really
understand how this could be used. If you treat these small
cores like logic elements, you don't have such lofty descriptions
of "system software" since the software isn't created out of some
global software package. Each core is designed to do a specific
job just like any other piece of hardware and it has discrete
inputs and outputs just like any other piece of hardware. If the
hardware clock is not too fast, the software can synchronize with
and literally function like hardware, while implementing more
complex logic than the same area of FPGA fabric could.


That is software.

If you want to try to get cycle-precise control of the software and
use that precision for direct hardware interfacing, you are almost
certainly going to have a poor, inefficient and difficult design.
It doesn't matter if you say "think of it like logic" - it is /not/
logic, it is software, and you don't use that for cycle-precise
control. You use it when you need flexibility, calculations, and
decisions.

I suppose you can make anything difficult if you try hard enough.

Equally, you can make anything sound simple if you are vague enough and
wave your hands around.

The point is you don't have to make it difficult by talking about
"software running on such heterogeneous cores". Just talk about it
being a small hunk of software that is doing a specific job. Then
the mystery is gone and the task can be made as easy as the task is.

I did not use the phrase "software running on such heterogeneous cores"
- and I am not trying to make anything difficult. You are making cpu
cores. They run software. Saying they are "like logic elements" or
"they connect directly to hardware" does not make it so - and it does
not mean that what they run is not software.

In VHDL this would be a process(). VHDL programs are typically chock
full of processes and no one wrings their hands worrying about how
they will design the "software running on such heterogeneous cores".


BTW, VHDL is software too.

I agree that VHDL is software. And yes, there are usually processes in
VHDL designs.

I am not /worrying/ about these devices running software - I am simply
saying that they /will/ be running software. I can't comprehend why you
want to deny that. It seems that you are frightened of software or
programmers, and want to call it anything /but/ software.

If the software a core is running is simple enough to be described in
VHDL, then it should be a VHDL process - not software in a cpu core. If
it is too complex for that, it is going to have to be programmed
separately in an appropriate language. That is not necessarily harder
or easier than VHDL design - it is just different.

If you try to force the software to be synchronous with timing on the
hardware, /then/ you are going to be in big difficulties. So don't do
that - use hardware for the tightest timing, and software for the bits
that software is good for.


There is no need to think about how the CPUs would communicate
unless there is a specific need for them to do so. The F18A uses
a handshaked parallel port in their design. They seem to have
done a pretty slick job of it and can actually hang the processor
waiting for the acknowledgement saving power and getting an
instantaneous wake up following the handshake. This can be used
with other CPUs or


Fair enough.

Ok, that's a start.

I'd expect that the sensible way to pass data between these, if you need
to do so much, is using FIFOs.
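
A minimal synchronous FIFO of the kind I mean might look like this (my
sketch - the depth, width and reset style are arbitrary choices):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity tiny_fifo is
    generic (DEPTH_LOG2 : natural := 4; WIDTH : natural := 8);
    port (
      clk   : in  std_logic;
      rst   : in  std_logic;
      wr_en : in  std_logic;
      din   : in  std_logic_vector(WIDTH-1 downto 0);
      rd_en : in  std_logic;
      dout  : out std_logic_vector(WIDTH-1 downto 0);
      empty : out std_logic;
      full  : out std_logic
    );
  end entity tiny_fifo;

  architecture rtl of tiny_fifo is
    type mem_t is array (0 to 2**DEPTH_LOG2 - 1)
      of std_logic_vector(WIDTH-1 downto 0);
    signal mem        : mem_t;
    -- pointers carry one extra wrap bit so full/empty can be told apart
    signal wptr, rptr : unsigned(DEPTH_LOG2 downto 0) := (others => '0');
  begin
    full  <= '1' when (wptr - rptr) = 2**DEPTH_LOG2 else '0';
    empty <= '1' when wptr = rptr else '0';
    dout  <= mem(to_integer(rptr(DEPTH_LOG2-1 downto 0)));

    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          wptr <= (others => '0');
          rptr <= (others => '0');
        else
          if wr_en = '1' and (wptr - rptr) /= 2**DEPTH_LOG2 then
            mem(to_integer(wptr(DEPTH_LOG2-1 downto 0))) <= din;
            wptr <= wptr + 1;
          end if;
          if rd_en = '1' and wptr /= rptr then
            rptr <= rptr + 1;
          end if;
        end if;
      end if;
    end process;
  end architecture rtl;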
 
On Wednesday, March 20, 2019 at 6:56:51 AM UTC-4, already...@yahoo.com wrote:
On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
On 19/03/19 17:35, already5chosen@yahoo.com wrote:
On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
The "granularity" of the computation and communication will be a key to
understanding what the OP is thinking.

I don't know what Rick had in mind. I personally would go for one "hard-CPU"
block per 4000-5000 6-input logic elements (i.e. Altera ALMs or Xilinx CLBs).
Each block could be configured either as one 64-bit core or pair of 32-bit
cores. The block would contain hard instruction decoders/ALUs/shifters and
hard register files. It can optionally borrow adjacent DSP blocks for
multipliers. Adjacent embedded memory blocks can be used for data memory.
Code memory should be a bit more flexible, giving the designer a choice between
embedded memory blocks or distributed memory (Xilinx) / MLABs (Altera).
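
To make that concrete, here is how such a block might be instantiated -
purely hypothetical, every name and generic below is invented for
illustration; no vendor library has a primitive like this today:

  library ieee;
  use ieee.std_logic_1164.all;

  entity cpu_block_demo is
    port (
      clk    : in  std_logic;
      io_in  : in  std_logic_vector(31 downto 0);
      io_out : out std_logic_vector(31 downto 0)
    );
  end entity cpu_block_demo;

  architecture rtl of cpu_block_demo is
    -- Hypothetical hard-CPU primitive (invented names).
    component HARD_CPU
      generic (
        MODE     : string := "DUAL32";  -- "ONE64" or "DUAL32"
        CODE_MEM : string := "BLOCK"    -- "BLOCK" RAM or "MLAB"/distributed
      );
      port (
        clk   : in  std_logic;
        d_in  : in  std_logic_vector(31 downto 0);
        d_out : out std_logic_vector(31 downto 0)
      );
    end component;
  begin
    u0 : HARD_CPU
      generic map (MODE => "DUAL32", CODE_MEM => "BLOCK")
      port map (clk => clk, d_in => io_in, d_out => io_out);
  end architecture rtl;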

It would be interesting to find an application level
description (i.e. language constructs) that
- could be automatically mapped onto those primitives
by a toolset
- was useful for more than a niche subset of applications
- was significantly better than existing tools

I wouldn't hold my breath :)


I think you are looking at it from the wrong angle.
One doesn't really need new tools to design and simulate such things. What's needed is a combination of existing tools - compilers, assemblers, probably software-simulator plug-ins for existing HDL simulators, though the latter is just a luxury for speeding up simulations; in principle, feeding the HDL simulator an RTL model of the CPU core will work too.

I agree, but I think it would be very useful to have a proper model of the CPUs for faster simulations. If it were one CPU it would be different, but using 100 CPUs would very likely make simulation a real chore without a fast model.


> As to niches, all "hard" blocks that we currently have in FPGAs are about niches. It's extremely rare that a user's design uses all or a majority of the features of a given FPGA device and needs LUTs, embedded memories, PLLs, multipliers, SERDESes, DDR DRAM I/O blocks, etc. in the exact amounts appearing in the device.

This is exactly the reason why FPGA companies resisted even incorporating block RAM initially. I recall conversations with Xilinx representatives about these issues here. It was indicated that the cost of the added silicon was significant and they would be "seldom" used. Now many people would not buy an FPGA without multipliers and/or DSP blocks. This is really just another step in the same direction.


> It still makes sense, economically, to have them all built in, because masks and other NREs are mighty expensive while silicon itself is relatively cheap. Multiple small hard CPU cores are really not very different from the features mentioned above.

I don't know the details of costs for FPGAs. What I do know is that the CPUs I am talking about would use the silicon area of rather few logic blocks. The reference design I use is in a 180 nm process and is an eighth of a square mm. In an 18 nm process the die area would be about 1,260 sq um. That's not very big. 100 of them would occupy 0.126 sq mm. If they get much use, that's a pretty small die area. For comparison, an XC7A200T has a die area of about 132 sq mm and 33,650 slices, for an area of about 3,923 sq um per slice. Of course this is loaded with overhead, which is likely more than half the area, but it gives you some perspective on the cost of adding these CPUs... very, very little - around the die area of a single slice each. It also gives you an idea of how large the FPGA logic functions have grown.
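
Working the scaling through (assuming area simply scales with the square
of the feature size):

  ~0.126 sq mm at 180 nm = ~126,000 sq um per CPU
  (18/180)^2 = 1/100, so ~1,260 sq um per CPU at 18 nm
  100 CPUs: ~0.126 sq mm total
  XC7A200T: ~132 sq mm / 33,650 slices = ~3,923 sq um per slice

So each CPU lands in the ballpark of a single slice.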

Rick C.
 
On 20/03/19 14:51, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 4:31:27 PM UTC+2, Tom Gardner wrote:
On 20/03/19 14:11, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 3:37:17 PM UTC+2, Tom Gardner wrote:

But more difficult than creating such a toolset is defining an application
level description that a toolset can munge.

So, define (initially by example, later more formally) inputs to the
toolset and outputs from it. Then we can judge whether the concepts are
more than handwaving wishes.


I don't understand what you are asking for.

Go back and read the parts of my post that you chose to snip.

Give a handwaving indication of the concepts that avoid the
conceptual problems that I mentioned.

Frankly, it starts to sound like you have never used soft CPU cores in your designs.
So, for somebody like myself, who has used them routinely for different tasks since 2006, you are really not easy to understand.

Professionally, since 1978 I've done everything from low noise
analogue electronics, many hardware-software systems using
all sorts of technologies, networking at all levels of the
protocol stack, "up" to high availability distributed soft
real-time systems.

And almost all of that has been on the bleeding edge.

So, yes, I do have more than a passing acquaintance with
the characteristics of many hardware and software technologies,
and where partitions between them can, should and should not
be drawn.


> Concept? Concepts are good for new things, not for something that is a variation of something old and routine and obviously working.

Whatever is being proposed, is it old or new?

If old then the OP needs enlightenment and concrete
examples can easily be noted.

If new, then provide the concepts.


Or better still, get the OP to do it.


With that part I agree.
 
On 20/03/19 15:30, David Brown wrote:
If the software a core is running is simple enough to be described in
VHDL, then it should be a VHDL process - not software in a cpu core. If
it is too complex for that, it is going to have to be programmed
separately in an appropriate language. That is not necessarily harder
or easier than VHDL design - it is just different.

Precisely.


If you try to force the software to be synchronous with timing on the
hardware, /then/ you are going to be in big difficulties. So don't do
that - use hardware for the tightest timing, and software for the bits
that software is good for.

Precisely.


There is no need to think about how the CPUs would communicate
unless there is a specific need for them to do so. The F18A uses
a handshaked parallel port in their design. They seem to have
done a pretty slick job of it and can actually hang the processor
waiting for the acknowledgement saving power and getting an
instantaneous wake up following the handshake. This can be used
with other CPUs or


Fair enough.

Ok, that's a start.


I'd expect that the sensible way to pass data between these, if you need
to do so much, is using FIFOs.

And that raises the question of the "comms protocols" or
"programming model" between each side, e.g. rendezvous,
FIFO depth, blocking, non-blocking, timeouts, etc.
 
On Wednesday, March 20, 2019 at 11:30:15 AM UTC-4, David Brown wrote:
On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown wrote:
On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote:
On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos
wrote:
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
Understand XMOS's xCORE processors and xC language, see how
they complement and support each other. I found the net
result stunningly easy to get working first time, without
having to continually read obscure errata!

I can see the merits of the XMOS approach. But I'm unclear
how this relates to the OP's proposal, which (I think) is
having tiny CPUs as hard logic blocks on an FPGA, like DSP
blocks.

I completely understand the problem of running out of hardware
threads, so a means of 'just add another one' is handy. But
the issue is how to combine such things with other synthesised
logic.

The XMOS approach is fine when the hardware is uniform and the
software sits on top, but when the hardware is synthesised and
the 'CPUs' sit as pieces in a fabric containing random logic
(as I think the OP is suggesting) it becomes a lot harder to
reason about what the system is doing and what the software
running on such heterogeneous cores should look like. Only the
FPGA tools have a full view of what the system looks like, and
it seems stretching them to have them also generate software to
run on these cores.

When people talk about things like "software running on such
heterogeneous cores" it makes me think they don't really
understand how this could be used. If you treat these small
cores like logic elements, you don't have such lofty descriptions
of "system software" since the software isn't created out of some
global software package. Each core is designed to do a specific
job just like any other piece of hardware and it has discrete
inputs and outputs just like any other piece of hardware. If the
hardware clock is not too fast, the software can synchronize with
and literally function like hardware, while implementing more
complex logic than the same area of FPGA fabric could.


That is software.

If you want to try to get cycle-precise control of the software and
use that precision for direct hardware interfacing, you are almost
certainly going to have a poor, inefficient and difficult design.
It doesn't matter if you say "think of it like logic" - it is /not/
logic, it is software, and you don't use that for cycle-precise
control. You use it when you need flexibility, calculations, and
decisions.

I suppose you can make anything difficult if you try hard enough.


Equally, you can make anything sound simple if you are vague enough and
wave your hands around.

Not trying to make it sound "simple". Just saying it can be useful and not the same as designing a chip with many CPUs for the purpose of providing lots of MIPS to crunch numbers. Those ideas and methods don't apply here.


The point is you don't have to make it difficult by talking about
"software running on such heterogeneous cores". Just talk about it
being a small hunk of software that is doing a specific job. Then
the mystery is gone and the task can be made as easy as the task is.


I did not use the phrase "software running on such heterogeneous cores"
- and I am not trying to make anything difficult. You are making cpu
cores. They run software. Saying they are "like logic elements" or
"they connect directly to hardware" does not make it so - and it does
not mean that what they run is not software.

You don't need to complicate the design by applying all the limitations of multi-processing when this is NOT at all the same. I call them logic elements because that is the intent, for them to implement logic. Yes, it is software, but that in itself creates no problems I am aware of.

As to the connection, I really don't get your point. They either connect directly to the hardware because that's how they are designed, or they don't... because that's how they are designed. I don't know what you are saying about that.


In VHDL this would be a process(). VHDL programs are typically chock
full of processes and no one wrings their hands worrying about how
they will design the "software running on such heterogeneous cores".


BTW, VHDL is software too.

I agree that VHDL is software. And yes, there are usually processes in
VHDL designs.

I am not /worrying/ about these devices running software - I am simply
saying that they /will/ be running software. I can't comprehend why you
want to deny that.

Enough! The CPUs run software. Now, what is YOUR point?


It seems that you are frightened of software or
programmers, and want to call it anything /but/ software.

If the software a core is running is simple enough to be described in
VHDL, then it should be a VHDL process - not software in a cpu core.

Ok, now you have crossed into a philosophical domain. If you want to think in these terms I won't dissuade you, but it has no meaning in digital design and I won't discuss it further.


If
it is too complex for that, it is going to have to be programmed
separately in an appropriate language. That is not necessarily harder
or easier than VHDL design - it is just different.

Ok, so what?


If you try to force the software to be synchronous with timing on the
hardware, /then/ you are going to be in big difficulties. So don't do
that - use hardware for the tightest timing, and software for the bits
that software is good for.

LOL! You are thinking in terms that are very obsolete. Read about how the F18A synchronizes with other processors and you will find that this is an excellent way to interface to the hardware as well. Just like logic, when the CPU handshakes with a logic clock, it only has to meet the timing of a clock cycle, just like all the logic in the same design. In a VHDL process the steps are written out in sequence and not assumed to be running in parallel, just like software. When the process reaches a point of synchronization it will halt, just like logic.
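
For instance (my sketch, with invented signal names, in the multiple-wait
style that simulators and some synthesis tools accept), such a process
steps through its statements in order and simply stops at each
synchronization point until the handshake completes:

  library ieee;
  use ieee.std_logic_1164.all;

  entity handshake_rx is
    port (
      clk      : in  std_logic;
      req      : in  std_logic;
      data_in  : in  std_logic_vector(7 downto 0);
      ack      : out std_logic;
      data_reg : out std_logic_vector(7 downto 0)
    );
  end entity handshake_rx;

  architecture beh of handshake_rx is
  begin
    process
    begin
      ack <= '0';
      -- halt here until the sender raises req, like logic waiting on a strobe
      wait until rising_edge(clk) and req = '1';
      data_reg <= data_in;  -- step 1: capture the word
      ack      <= '1';      -- step 2: acknowledge
      -- halt again until the sender drops req, completing the handshake
      wait until rising_edge(clk) and req = '0';
    end process;
  end architecture beh;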


There is no need to think about how the CPUs would communicate
unless there is a specific need for them to do so. The F18A uses
a handshaked parallel port in their design. They seem to have
done a pretty slick job of it and can actually hang the processor
waiting for the acknowledgement saving power and getting an
instantaneous wake up following the handshake. This can be used
with other CPUs or


Fair enough.

Ok, that's a start.


I'd expect that the sensible way to pass data between these, if you need
to do so much, is using FIFOs.

Between what exactly??? You are designing a system that is not before you. More importantly you don't actually know anything about the ideas used in the F18A and GA144 designs.

I'm not trying to be rude, but you should learn more about them before you assume they need to work like every other processor you've ever used. The F18A and GA144 really only have two truly novel ideas. One is that the processor is very, very small and, as a consequence, fast. The other is the communications technique.

Charles Moore is a unique thinker and he realized that with the advance of processing technology CPUs could be made very small and so become MIPS fodder. By that I mean you no longer need to focus on utilizing all the MIPS in a CPU. Instead, they can be treated as disposable and only a tiny fraction of the available MIPS used to implement some function... usefully.

While the GA144 is a commercial failure for many reasons, it does illustrate some very innovative ideas and is what prompted me to consider what happens when you can scatter CPUs around an FPGA as if they were logic blocks.

No, I don't have a fully developed "business plan". I am just interested in exploring the idea. Moore's (Green Array's actually, CM isn't actively working with them at this point I believe) chip isn't very practical because Moore isn't terribly interested in being practical exactly. But that isn't to say it doesn't embody some very interesting ideas.

Rick C.
 
On Wednesday, March 20, 2019 at 5:51:21 PM UTC+2, Tom Gardner wrote:
On 20/03/19 14:51, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 4:31:27 PM UTC+2, Tom Gardner wrote:
On 20/03/19 14:11, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 3:37:17 PM UTC+2, Tom Gardner wrote:

But more difficult than creating such a toolset is defining an application
level description that a toolset can munge.

So, define (initially by example, later more formally) inputs to the
toolset and outputs from it. Then we can judge whether the concepts are
more than handwaving wishes.


I don't understand what you are asking for.

Go back and read the parts of my post that you chose to snip.

Give a handwaving indication of the concepts that avoid the
conceptual problems that I mentioned.

Frankly, it starts to sound like you have never used soft CPU cores in your designs.
So, for somebody like myself, who has used them routinely for different tasks since 2006, you are really not easy to understand.

Professionally, since 1978 I've done everything from low noise
analogue electronics, many hardware-software systems using
all sorts of technologies, networking at all levels of the
protocol stack, "up" to high availability distributed soft
real-time systems.

And almost all of that has been on the bleeding edge.

So, yes, I do have more than a passing acquaintance with
the characteristics of many hardware and software technologies,
and where partitions between them can, should and should not
be drawn.

Is that a sort of admission that you indeed never designed with soft cores?

Concept? Concepts are good for new things, not for something that is a variation of something old and routine and obviously working.

Whatever is being proposed, is it old or new?

If old then the OP needs enlightenment and concrete
examples can easily be noted.

If new, then provide the concepts.

It is a new variation on an old concept.
A cross between the PPCs in the ancient Virtex-II Pro and the soft cores used virtually everywhere in more modern times.
It is probably best characterized by what it is not like: it is not like the Xilinx Zynq or the Altera Cyclone V HPS.

"New" part comes more from new economics of sub-20nm processes than from abstractions that you try to draf into it. NRE is more and more expensive, gates are more and more cheap (Well, the cost of gates started to stagnate in last couple of years, but that does not matter. What's matter is that at something like TSMC 12nm gate are already quite cheap). So, adding multiple small CPU cores that could be used as replacement for multiple soft CPU cores that people already used to use today, now starts to make sense. May be, it's not a really good proposition, but at these silicon geometries it can't be written out as obviously stupid proposition.

It appears that I don't agree with Rick about "how small is small", and accordingly about how many of them should be placed on a die, but we probably agree about the percentage of FPGA area that intuitively seems worth allocating to such a feature - more than 1% but less than 5%.
Also, he appears to like stack-based ISAs, while I lean toward a more conventional 32-bit or 32/64-bit RISC, or maybe even toward a modern CISC akin to the Renesas RX, but those are relatively minor details.


Or better still, get the OP to do it.


With that part I agree.
 
On 20/03/2019 17:30, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 11:30:15 AM UTC-4, David Brown
wrote:
On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown
wrote:
On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote:
On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo
Markettos wrote:
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
Understand XMOS's xCORE processors and xC language, see
how they complement and support each other. I found the
net result stunningly easy to get working first time,
without having to continually read obscure errata!

I can see the merits of the XMOS approach. But I'm
unclear how this relates to the OP's proposal, which (I
think) is having tiny CPUs as hard logic blocks on an FPGA,
like DSP blocks.

I completely understand the problem of running out of
hardware threads, so a means of 'just add another one' is
handy. But the issue is how to combine such things with
other synthesised logic.

The XMOS approach is fine when the hardware is uniform and
the software sits on top, but when the hardware is
synthesised and the 'CPUs' sit as pieces in a fabric
containing random logic (as I think the OP is suggesting)
it becomes a lot harder to reason about what the system is
doing and what the software running on such heterogeneous
cores should look like. Only the FPGA tools have a full
view of what the system looks like, and it seems stretching
them to have them also generate software to run on these
cores.

When people talk about things like "software running on such
heterogeneous cores" it makes me think they don't really
understand how this could be used. If you treat these small
cores like logic elements, you don't have such lofty
descriptions of "system software" since the software isn't
created out of some global software package. Each core is
designed to do a specific job just like any other piece of
hardware and it has discrete inputs and outputs just like any
other piece of hardware. If the hardware clock is not too
fast, the software can synchronize with and literally
function like hardware, while implementing more complex logic
than the same area of FPGA fabric could.


That is software.

If you want to try to get cycle-precise control of the software
and use that precision for direct hardware interfacing, you are
almost certainly going to have a poor, inefficient and
difficult design. It doesn't matter if you say "think of it
like logic" - it is /not/ logic, it is software, and you don't
use that for cycle-precise control. You use it when you need
flexibility, calculations, and decisions.

I suppose you can make anything difficult if you try hard
enough.


Equally, you can make anything sound simple if you are vague enough
and wave your hands around.

Not trying to make it sound "simple". Just saying it can be useful
and not the same as designing a chip with many CPUs for the purpose
of providing lots of MIPS to crunch numbers. Those ideas and methods
don't apply here.

Fair enough. I have not suggested it was like using lots of CPUs for
number crunching. (That is not what I would think the GA144 is good for
either.)

The point is you don't have to make it difficult by talking
about "software running on such heterogeneous cores". Just talk
about it being a small hunk of software that is doing a specific
job. Then the mystery is gone and the task can be made as easy
as the task is.


I did not use the phrase "software running on such heterogeneous
cores" - and I am not trying to make anything difficult. You are
making cpu cores. They run software. Saying they are "like logic
elements" or "they connect directly to hardware" does not make it
so - and it does not mean that what they run is not software.

You don't need to complicate the design by applying all the
limitations of multi-processing when this is NOT at all the same. I
call them logic elements because that is the intent, for them to
implement logic. Yes, it is software, but that in itself creates no
problems I am aware of.

I agree that software should not in itself create a problem. Trying to
think of them as "logic" /would/ create problems. Think of them as
software, and program them as software. I expect you'd think of them as
entirely independent units with independent programs, rather than as a
multi-cpu or heterogeneous system.

As to the connection, I really don't get your point. They either
connect directly to the hardware because that's how they are
designed, or they don't... because that's how they are designed. I
don't know what you are saying about that.

"Synchronise directly with hardware" might be a better phrase.

In VHDL this would be a process(). VHDL programs are typically
chock full of processes and no one wrings their hands worrying
about how they will design the "software running on such
heterogeneous cores".


BTW, VHDL is software too.

I agree that VHDL is software. And yes, there are usually
processes in VHDL designs.

I am not /worrying/ about these devices running software - I am
simply saying that they /will/ be running software. I can't
comprehend why you want to deny that.

Enough! The CPUs run software. Now, what is YOUR point?

My point was that these are not logic, they are not logic elements (even
if they could be physically small and cheap and scattered around a chip
like logic elements). Thinking about them as "sequential logic
elements" is not helpful. Think of them as small processors running
simple and limited /software/. Unless you can find a way to
automatically generate code for them, then they will be programmed using
a /software/ programming language, not a logic or hardware programming
language. If you are happy to accept that now, then great - we can move on.

It seems that you are frightened of software or programmers, and
want to call it anything /but/ software.

If the software a core is running is simple enough to be described
in VHDL, then it should be a VHDL process - not software in a cpu
core.

Ok, now you have crossed into a philosophical domain. If you want to
think in these terms I won't dissuade you, but it has no meaning in
digital design and I won't discuss it further.


If it is too complex for that, it is going to have to be
programmed separately in an appropriate language. That is not
necessarily harder or easier than VHDL design - it is just
different.

Ok, so what?


If you try to force the software to be synchronous with timing on
the hardware, /then/ you are going to be in big difficulties. So
don't do that - use hardware for the tightest timing, and software
for the bits that software is good for.

LOL! You are thinking in terms that are very obsolete. Read about
how the F18A synchronizes with other processors and you will find
that this is an excellent way to interface to the hardware as well.
Just like logic, when the CPU hand shakes with a logic clock, it only
has to meet the timing of a clock cycle, just like all the logic in
the same design.

That is not using software for synchronising with hardware (or other
cpus) - it is using hardware.

When a processor's software has a loop waiting for an input signal to go
low, then it reads a byte input, then it waits for the first signal to
go high again - that is using software for synchronisation. That's okay
for slow interfacing. When it waits for one signal, then uses three
NOPs before setting another signal to get the timing right, that is
using software for accurate timing - a very fragile solution.

When it is reading from a register that is latched by an external enable
signal, it is using hardware for the interfacing and synchronisation.
When the cpu has signals that can pause its execution at the right steps
in handshaking, it is using hardware synchronisation. That is, of
course, absolutely fine - that is using the right tools for the right jobs.


In a VHDL process the steps are written out in
sequence and not assumed to be running in parallel, just like
software. When the process reaches a point of synchronization it
will halt, just like logic.

You use VHDL processes for cycle-precise, simple sequences. You use
software on a processor for less precise, complex sequences.

There is no need to think about how the CPUs would
communicate unless there is a specific need for them to do
so. The F18A uses a handshaked parallel port in their
design. They seem to have done a pretty slick job of it and
can actually hang the processor waiting for the
acknowledgement saving power and getting an instantaneous
wake up following the handshake. This can be used with other
CPUs or


Fair enough.

Ok, that's a start.


I'd expect that the sensible way to pass data between these, if you
need to do so much, is using FIFOs.

Between what exactly??? You are designing a system that is not
before you. More importantly you don't actually know anything about
the ideas used in the F18A and GA144 designs.

Between whatever you want as you pass data around your chip.

I'm not trying to be rude, but you should learn more about them
before you assume they need to work like every other processor you've
ever used. The F18A and GA144 really only have two truly novel
ideas. One is that the processor is very, very small and, as a
consequence, fast. The other is the communications technique.

Communication between the nodes is with a synchronising port. A write
to the port blocks until the receiving node does a read - similarly, a
read blocks until the sending node does a write. Hardware
synchronisation, not software, and not entirely unlike an absolutely
minimal blocking FIFO. It is an interesting idea, though somewhat limiting.
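
In RTL terms, the port behaves roughly like a depth-zero rendezvous
channel. A sketch of the idea (mine, not GreenArrays' actual circuit; the
18-bit width matches the F18A word size):

  library ieee;
  use ieee.std_logic_1164.all;

  entity sync_port is
    port (
      -- writer side
      wr_valid : in  std_logic;                      -- writer is trying to send
      wr_data  : in  std_logic_vector(17 downto 0);
      wr_stall : out std_logic;                      -- writer's cpu halts while high
      -- reader side
      rd_ready : in  std_logic;                      -- reader is trying to receive
      rd_data  : out std_logic_vector(17 downto 0);
      rd_stall : out std_logic                       -- reader's cpu halts while high
    );
  end entity sync_port;

  architecture rtl of sync_port is
  begin
    -- The transfer happens in the first cycle both sides are present;
    -- until then, whichever side arrived first is simply held (and can
    -- power down, as the F18A does).
    wr_stall <= wr_valid and not rd_ready;
    rd_stall <= rd_ready and not wr_valid;
    rd_data  <= wr_data;
  end architecture rtl;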

Charles Moore is a unique thinker and he realized that with the
advance of processing technology CPUs could be made very small and so
become MIPS fodder. By that I mean you no longer need to focus on
utilizing all the MIPS in a CPU. Instead, they can be treated as
disposable and only a tiny fraction of the available MIPS used to
implement some function... usefully.

While the GA144 is a commercial failure for many reasons, it does
illustrate some very innovative ideas and is what prompted me to
consider what happens when you can scatter CPUs around an FPGA as if
they were logic blocks.

As I said before, it is a very interesting and impressive concept, with
a lot of cool ideas - despite being a commercial failure.

I think one of the biggest reasons for its failure is that it is a
technologically interesting solution, but with no matching problems -
there is no killer app for it. That is compounded by a significant
learning curve and development challenge compared to alternative,
established solutions.

I want to know if that is going to happen with your ideas here. Sure,
you don't have a full business plan - but do you at least have thoughts
about the kind of usage where these mini cpus would be a technologically
superior choice compared to using state machines in VHDL (possibly
generated with external programs), sequential logic generators (like C
to HDL compilers, matlab tools, etc.), normal soft processors, or normal
hard processors?

Give me a /reason/ to all this - rather than just saying you can make a
simple stack-based cpu that's very small, so you could have lots of them
on a chip.

No, I don't have a fully developed "business plan". I am just
interested in exploring the idea. Moore's (Green Array's actually,
CM isn't actively working with them at this point I believe) chip
isn't very practical because Moore isn't terribly interested in being
practical exactly. But that isn't to say it doesn't embody some very
interesting ideas.

Rick C.
 
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:
I agree that software should not in itself create a problem. Trying to
think of them as "logic" /would/ create problems. Think of them as
software, and program them as software. I expect you'd think of them as
entirely independent units with independent programs, rather than as a
multi-cpu or heterogeneous system.

Ok, please tell me what those problems would be. I have no idea what you mean by what you say. You are likely reading a lot into this that I am not intending.


As to the connection, I really don't get your point. They either
connect directly to the hardware because that's how they are
designed, or they don't... because that's how they are designed. I
don't know what you are saying about that.


"Synchronise directly with hardware" might be a better phrase.

I don't know why, and likely I'm not going to care. I think you need to learn more about how the F18A works.


Enough! The CPUs run software. Now, what is YOUR point?


My point was that these are not logic, they are not logic elements (even
if they could be physically small and cheap and scattered around a chip
like logic elements). Thinking about them as "sequential logic
elements" is not helpful. Think of them as small processors running
simple and limited /software/. Unless you can find a way to
automatically generate code for them, then they will be programmed using
a /software/ programming language, not a logic or hardware programming
language. If you are happy to accept that now, then great - we can move on.

You have it backwards. Please show me what you think the problems are. I don't care if they run software or have a Maxwell demon tossing bits about as long as it does what I need. You seem to get hung up on terminology so easily.


LOL! You are thinking in terms that are very obsolete. Read about
how the F18A synchronizes with other processors and you will find
that this is an excellent way to interface to the hardware as well.
Just like logic, when the CPU hand shakes with a logic clock, it only
has to meet the timing of a clock cycle, just like all the logic in
the same design.

That is not using software for synchronising with hardware (or other
cpus) - it is using hardware.

So??? You are the one who keeps talking about software/hardware whatever. I'm talking about the software being able to synchronize with the clock of the other hardware. When that happens there are tight timing constraints, in the same sense as software sampling an ADC on a periodic basis and having to process the resulting data before the next sample is ready. The only difference is that something like the F18A running at a few GHz can do a lot in a 10 ns clock cycle.


When a processor's software has a loop waiting for an input signal to go
low, then it reads a byte input, then it waits for the first signal to
go high again - that is using software for synchronisation. That's okay
for slow interfacing. When it waits for one signal, then uses three
NOPs before setting another signal to get the timing right, that is
using software for accurate timing - a very fragile solution.

That is your construct because you know nothing of how the F18A works. As I've mentioned before, you would do well to read some of the app notes on this device. It really does have some good ideas to offer.


When it is reading from a register that is latched by an external enable
signal, it is using hardware for the interfacing and synchronisation.
When the cpu has signals that can pause its execution at the right steps
in handshaking, it is using hardware synchronisation. That is, of
course, absolutely fine - that is using the right tools for the right jobs.

Duh!


In a VHDL process the steps are written out in
sequence and not assumed to be running in parallel, just like
software. When the process reaches a point of synchronization it
will halt, just like logic.


You use VHDL processes for cycle-precise, simple sequences. You use
software on a processor for less precise, complex sequences.

You are making arbitrary distinctions. The point is that if these CPUs are available they can be used to implement significant sections of logic in less space on the die than in the FPGA fabric.


> Between whatever you want as you pass data around your chip.

FIFOs are used for specific purposes. Not every interface needs them. Your suggestion that they should be used without an understanding of why is pretty pointless.


I'm not trying to be rude, but you should learn more about them
before you assume they need to work like every other processor you've
ever used. The F18A and GA144 really only have two particularly
unique ideas. One is that the processor is very, very small and as a
consequence, fast. The other is the communications technique.

Communication between the nodes is with a synchronising port. A write
to the port blocks until the receiving node does a read - similarly, a
read blocks until the sending node does a write. Hardware
synchronisation, not software, and not entirely unlike an absolutely
minimal blocking FIFO. It is an interesting idea, though somewhat limiting.

Oh, what are the limitations? Also be aware that the blocking doesn't need to work as you describe it. Mostly the block would be on the read side: a processor would block until the data it needs is available... or until a clock signal transitions to indicate that the data that has been calculated can be output... just like the other logic, the LUT/FF blocks, of an FPGA.


Charles Moore is a unique thinker and he realized that with the
advance of processing technology CPUs could be made very small and so
become MIPS fodder. By that I mean you no longer need to focus on
utilizing all the MIPS in a CPU. Instead, they can be treated as
disposable and only a tiny fraction of the available MIPS used to
implement some function... usefully.

While the GA144 is a commercial failure for many reasons, it does
illustrate some very innovative ideas and is what prompted me to
consider what happens when you can scatter CPUs around an FPGA as if
they were logic blocks.

As I said before, it is a very interesting and impressive concept, with
a lot of cool ideas - despite being a commercial failure.

I think one of the biggest reasons for its failure is that it is a
technologically interesting solution, but with no matching problems -
there is no killer app for it. That is compounded by a significant
learning curve and development challenge compared to alternative,
established solutions.

Saying there is no killer app is rather the result than the problem. Yes, it was designed out of the idea of "what happens when I interconnect a bunch of these processors?" without considering a lot of real-world design needs. The chip has limited RAM, which could have been augmented in some way even if not on each processor. There is no Flash, which again could have been included. The I/Os are all 1.8 volts. There was no real memory interface provided; rather, a DRAM interface was emulated in firmware and actually doesn't work, so one had to be written for static RAM, which is hard to come by these days. I don't recall the full list.

But this is not about the GA144.


I want to know if that is going to happen with your ideas here. Sure,
you don't have a full business plan - but do you at least have thoughts
about the kind of usage where these mini cpus would be a technologically
superior choice compared to using state machines in VHDL (possibly
generated with external programs), sequential logic generators (like C
to HDL compilers, matlab tools, etc.), normal soft processors, or normal
hard processors?

The point wasn't that I don't have a business plan. The point was that I haven't given this as much thought as would have been done if I were working on a business plan. I'm kicking around an idea. I'm not in a position to create FPGA with or without small CPUs.


Give me a /reason/ to all this - rather than just saying you can make a
simple stack-based cpu that's very small, so you could have lots of them
on a chip.

Why? Why don't you give ME a reason? Why don't you switch your point of view and figure out how this would be useful? Neither of us has anything to gain or lose.


No, I don't have a fully developed "business plan". I am just
interested in exploring the idea. Moore's (Green Array's actually,
CM isn't actively working with them at this point I believe) chip
isn't very practical because Moore isn't terribly interested in being
practical exactly. But that isn't to say it doesn't embody some very
interesting ideas.

Rick C.
 
On 21/03/2019 03:21, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:


I want to know if that is going to happen with your ideas here.
Sure, you don't have a full business plan - but do you at least
have thoughts about the kind of usage where these mini cpus would
be a technologically superior choice compared to using state
machines in VHDL (possibly generated with external programs),
sequential logic generators (like C to HDL compilers, matlab tools,
etc.), normal soft processors, or normal hard processors?

The point wasn't that I don't have a business plan. The point was
that I haven't given this as much thought as would have been done if
I were working on a business plan. I'm kicking around an idea. I'm
not in a position to create FPGA with or without small CPUs.


Give me a /reason/ to all this - rather than just saying you can
make a simple stack-based cpu that's very small, so you could have
lots of them on a chip.

Why? Why don't you give ME a reason? Why don't you switch your
point of view and figure out how this would be useful? Neither of us
has anything to gain or lose.

I don't have any good ideas of what these might be used for. And I
can't see how it ends up as /my/ responsibility to figure out why /your/
idea might be a good idea.

You presented an idea - having several small, simple cpus on a chip.
It's taken a long time, and a lot of side-tracks, to drag out of you
what you are really thinking about. (Perhaps you didn't have a clear
idea in your mind with your first post, and it has solidified along the way -
in which case, great, and I'm glad the thread has been successful there.)

I've been trying to help by trying to look at how these might be used,
and how they compare to alternative existing solutions. And I have been
trying to get /you/ to come up with some ideas about when they might be
useful. All I'm getting is a lot of complaints, insults, condescension,
patronisation. You tell me I don't understand what these are for - yet
you refuse to say what they are for (the nearest we have got in any post
in this thread to evidence that there is any use-case, is you telling me
you have ideas but refuse to tell me as I am not an FPGA designer by
profession). You are forever telling me about the wonders of the F18A
and the GA144, and how I can't understand your ideas because I don't
understand that device - while simultaneously telling me that device is
irrelevant to your proposal. You are asking for opinions and thoughts
about how people would program these devices, then tell me I am wrong
and closed-minded when I give you answers.

Hopefully, you have got /some/ ideas and thoughts out of this thread.
You can take a long, hard look at the idea in that light, and see if it
really is something that could be useful - in today's world with today's
tools and technology, or tomorrow's world with new tools and development
systems.

But next time you want to start a thread asking for ideas and opinions,
how about responding with phrases like "I hadn't thought of it that
way", "I think FPGA designers IME would like this" - not "You are wrong,
and clearly ignorant".

You are a smart guy, and you are great at answering other people's
questions and helping them out - but boy, are you bad at asking for help
yourself.
 
On Thursday, March 21, 2019 at 4:21:13 AM UTC+2, gnuarm.del...@gmail.com wrote:
So??? You are the one who keeps talking about software/hardware whatever. I'm talking about the software being able to synchronize with the clock of the other hardware. When that happens there are tight timing constraints, in the same sense of software sampling an ADC on a periodic basis and having to process the resulting data before the next sample is ready. The only difference is that something like the F18A, running at a few GHz, can do a lot in a 10 ns clock cycle.

I certainly don't like the "few GHz" part.
Distributing a single multi-GHz clock over the full area of an FPGA is a non-starter from the power perspective alone, but even ignoring the power, such distribution takes significant area, making the whole proposition unattractive. As I understand it, the whole point is that these things take little area, so they are not harmful even for those buyers of the device who don't utilize them at all, or utilize them very little.
Alternatively, multi-GHz clocks can be generated by local specialized PLLs, but I am afraid that the PLLs would be several times bigger than the cores themselves, and would need good, quiet power supplies and grounds that are probably hard to get in the middle of the chip, etc. I really know too little about PLLs, but I think I know enough to conclude that it's not a much better idea than chip-wide clock distribution at multi-GHz.

My idea of small hard cores is completely different in that regard. IMHO, they should run either with the same clock as the surrounding FPGA fabric or with a clock delivered by a simple clock doubler. Even clock quadrupling does not strike my engineering intuition as a good idea.
 
On 21/03/19 02:21, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:

I agree that software should not in itself create a problem. Trying to
think of them as "logic" /would/ create problems. Think of them as
software, and program them as software. I expect you'd think of them as
entirely independent units with independent programs, rather than as a
multi-cpu or heterogeneous system.

Ok, please tell me what those problems would be. I have no idea what you
mean by what you say. You are likely reading a lot into this that I am not
intending.

I have no difficulty understanding what he is saying.

Several people have difficulty understanding what you
are proposing.

You are proposing vague ideas, so the onus is on you
to make your ideas clear.


As to the connection, I really don't get your point. They either connect
directly to the hardware because that's how they are designed, or they
don't... because that's how they are designed. I don't know what you are
saying about that.


"Synchronise directly with hardware" might be a better phrase.

I don't know why, and likely I'm not going to care. I think you need to
learn more about how the F18A works.

No, we really don't have to learn more about one specific
processor - especially if it is just to help you.

If, OTOH, you succinctly summarise its key points and
how that achieves benefits, then we might be interested.


Enough! The CPUs run software. Now, what is YOUR point?


My point was that these are not logic, they are not logic elements (even if
they could be physically small and cheap and scattered around a chip like
logic elements). Thinking about them as "sequential logic elements" is not
helpful. Think of them as small processors running simple and limited
/software/. Unless you can find a way to automatically generate code for
them, then they will be programmed using a /software/ programming language,
not a logic or hardware programming language. If you are happy to accept
that now, then great - we can move on.

You have it backwards. Please show me what you think the problems are. I
don't care if they run software or have a Maxwell's demon tossing bits about as
long as it does what I need. You seem to get hung up on terminology so
easily.

You need to explain your points better.

There's the old adage that "you only realise how little
you know about a subject when you try to teach it to
other people".


That is your construct because you know nothing of how the F18A works. As
I've mentioned before, you would do well to read some of the app notes on
this device. It really does have some good ideas to offer.

Give us the elevator pitch, so we can estimate whether
it would be a beneficial use of our remaining life.



The point wasn't that I don't have a business plan. The point was that I
haven't given this as much thought as would have been done if I were working
on a business plan. I'm kicking around an idea. I'm not in a position to
create FPGA with or without small CPUs.


Give me a /reason/ to all this - rather than just saying you can make a
simple stack-based cpu that's very small, so you could have lots of them on
a chip.

Why? Why don't you give ME a reason? Why don't you switch your point of
view and figure out how this would be useful? Neither of us have anything to
gain or lose.

Why? Because you are trying to propagate your ideas.
The onus is on you to convince us, not the other way
around.
 
On 20/03/19 16:32, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 5:51:21 PM UTC+2, Tom Gardner wrote:
On 20/03/19 14:51, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 4:31:27 PM UTC+2, Tom Gardner wrote:
On 20/03/19 14:11, already5chosen@yahoo.com wrote:
On Wednesday, March 20, 2019 at 3:37:17 PM UTC+2, Tom Gardner wrote:

But more difficult than creating such a toolset is defining an
application-level description that a toolset can munge.

So, define (initially by example, later more formally) inputs to
the toolset and outputs from it. Then we can judge whether the
concepts are more than handwaving wishes.


I don't understand what you are asking for.

Go back and read the parts of my post that you chose to snip.

Give a handwaving indication of the concepts that avoid the conceptual
problems that I mentioned.

Frankly, it starts to sound like you have never used soft CPU cores in your
designs. So, for somebody like myself, who has used them routinely for
different tasks since 2006, you are really not easy to understand.

Professionally, since 1978 I've done everything from low noise analogue
electronics, many hardware-software systems using all sorts of
technologies, networking at all levels of the protocol stack, "up" to high
availability distributed soft real-time systems.

And almost all of that has been on the bleeding edge.

So, yes, I do have more than a passing acquaintance with the
characteristics of many hardware and software technologies, and where
partitions between them can, should and should not be drawn.


Is that a sort of admission that you indeed never designed with soft cores?

No, it is not.


Concept? Concepts are good for new things, not for something that is a
variation of something old and routine and obviously working.

Whatever is being proposed, is it old or new?

If old then the OP needs enlightenment and concrete examples can easily be
noted.

If new, then provide the concepts.


It is a new variation on an old concept. A cross between the PPCs in the
ancient Virtex-II Pro and the soft cores found virtually everywhere in more
modern times. Probably best characterized by what it is not like: it is not
like Xilinx Zynq or Altera Cyclone V HPS.

"New" part comes more from new economics of sub-20nm processes than from
abstractions that you try to draf into it. NRE is more and more expensive,
gates are more and more cheap (Well, the cost of gates started to stagnate in
last couple of years, but that does not matter. What's matter is that at
something like TSMC 12nm gate are already quite cheap). So, adding multiple
small CPU cores that could be used as replacement for multiple soft CPU cores
that people already used to use today, now starts to make sense. May be, it's
not a really good proposition, but at these silicon geometries it can't be
written out as obviously stupid proposition.

The starting points are fine, but so what?

There's little point building something if it
isn't useful in practice.

For examples of that, see Intel's 432 and 860 processors, among others.
 
gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:53:07 AM UTC-4, Theo wrote:
Your bottom-up approach means it's difficult to see the big picture of
what's going on. That means it's hard to understand the whole system, and
to program from a whole-system perspective.

I never mentioned a bottom up or a top down approach to design. Nothing
about using these small CPUs is about the design "direction". I am pretty
sure that you have to define the circuit they will work in before you can
start designing the code.

Your approach is 'I have this low-level thing (a tiny CPU), what can I use
it for?'. That's bottom up. A top down view would be 'my problem is X,
what's the best way to solve it?'. The advantage of the latter view is you
can explore some of the architectural space before targeting a solution
that's appropriate to the problem (with metrics to measure it), aiming to
find the global maximum. In a bottom-up approach you need to sell to users
that your idea will help their problem, but until you build a system they
don't know that it will even be a local maximum.

What's the logic equation of a processor?

Obviously it is like a combination of LUTs with FFs, able to implement
any logic you wish, including math. BTW, in many devices the elements are
not at all so simple. Xilinx LUTs can be used as shift registers. There
is additional logic within the logic blocks that allows math with carry
chains, combining LUTs to form larger LUTs, breaking LUTs into smaller
LUTs, and let's not forget about routing, which may not be used much anymore,
not sure.

You can still reason about blocks as combinations of basic functions. A
block that is LUT+FF can still be analysed in separate parts.
A processor is a 'black box' as far as the tools go. That means any
software is opaque to analysis of correctness. The tools therefore can't
know that the circuit they produced matches the input HDL.

Simulation does not give you equivalence checking of the form of LVS (layout
versus schematic) or compiler correctness testing; it only tests a
particular set of (usually hand-defined) test cases. There's much less
coverage than with equivalence checking tools.
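
A toy illustration of that distinction, as a hedged sketch (both functions here are invented for illustration): for a block with a small input space, equivalence to a reference function can be proven by exhausting all inputs, which a hand-picked set of simulation test cases does not do. Real equivalence checkers do this symbolically rather than by enumeration.

    #include <assert.h>
    #include <stdint.h>

    static uint8_t reference_lut(uint8_t in)   /* the intended logic function  */
    {
        return (uint8_t)((in & 0x0Fu) ^ (in >> 4));
    }

    static uint8_t cpu_program(uint8_t in)     /* the software implementation  */
    {
        return (uint8_t)((in ^ (in >> 4)) & 0x0Fu);
    }

    int main(void)
    {
        for (unsigned in = 0; in < 256; in++)  /* every possible 8-bit input   */
            assert(cpu_program((uint8_t)in) == reference_lut((uint8_t)in));
        return 0;                              /* exhaustive, hence a proof    */
    }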

Why does it need to be inferred? If you want to write an HDL tool to turn
HDL into processor code, have at it. But then there are other methods.
Someone mentioned his MO is to use other tools for designing his
algorithms and letting that tool generate the software for a processor or
the HDL for an FPGA. That would seem easy enough to integrate.

That's roughly what OpenCL and friends can do. But those are top-down
architecturally (starting with a chip block diagram), rather than starting
with tiny building blocks as you're suggesting.

> Huh? You can't simulate code on a processor???

Verification is greater than simulation, as described above.

If we scale the processors up a bit, I could see the merits in say a
bank of, say, 32 Cortex M0s that could be interconnected as part of the
FPGA fabric and programmed in software for dedicated tasks (for
instance, read the I2C EEPROM on the DRAM DIMM and configure the DRAM
controller at boot).
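
As a hedged illustration of that boot-time task (every register name, address, and SPD offset below is invented; a real SPD read and DRAM controller bring-up involve considerably more), the software on such a core might look like:

    #include <stdint.h>

    #define I2C_BASE ((volatile uint32_t *)0x40000000u) /* hypothetical I2C master     */
    #define DDR_BASE ((volatile uint32_t *)0x40001000u) /* hypothetical DDR controller */
    #define SPD_ADDR 0x50u                              /* common SPD slave address    */

    /* Read one byte from the DIMM's SPD EEPROM over I2C (invented register map). */
    static uint8_t spd_read(uint8_t offset)
    {
        I2C_BASE[0] = (SPD_ADDR << 1) | 0u;  /* start + write: select the device  */
        I2C_BASE[1] = offset;                /* byte address inside the EEPROM    */
        I2C_BASE[0] = (SPD_ADDR << 1) | 1u;  /* repeated start + read             */
        while (!(I2C_BASE[2] & 1u))          /* poll a "data ready" status flag   */
            ;
        return (uint8_t)I2C_BASE[3];         /* received data byte                */
    }

    void configure_dram(void)
    {
        uint8_t geometry = spd_read(4);      /* density/banks byte (illustrative) */
        uint8_t timing   = spd_read(18);     /* cycle-time byte (illustrative)    */

        DDR_BASE[0] = geometry;              /* program controller geometry       */
        DDR_BASE[1] = timing;                /* program controller timing         */
        DDR_BASE[2] = 1u;                    /* enable the controller             */
    }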

I don't follow your logic. What is different about the ARM processor from
the stack processor other than that it is larger and slower and requires a
royalty on each one? Are you talking about writing the code in C vs.
whatever is used for the stack processor?

If you have an existing codebase (supplied by the vendor of your external
chip, for example), it'll likely be in C. It won't be in
special-stack-assembler, and your architecture seems to be designed to not
be amenable to compilers.

The point of the many hard cores is the saving of resources. Soft cores
would be the most wasteful way to implement logic. If the application is
large enough they can implement things in software that aren't as
practical in HDL, but that would be a different class of logic from the
tiny CPUs I'm talking about.

'Wastefulness' is one parameter. But you can also consider that every
unused hard-core is also wasteful in terms of silicon area. Can you show
that the hard-cores would be used enough of the time to outweigh the space
they waste on other people's designs?

You lost me with the gear shift. The mention of instruction rate is about
the CPU being fast enough to keep up with FPGA logic. The issue with
"heterogeneous performance" is the "heterogeneous" part, lumping the many
CPUs together to create some sort of number cruncher. That's not what
this is about. Like in the GA144, I fully expect most CPUs to be sitting
around most of the time idling, waiting for data. This is a good thing
actually. These CPUs could consume significant current if they run at GHz
all the time. I believe in the GA144 at that slower rate each processor
can use around 2.5 mA. Not sure if a smaller process would use more or
less power when running flat out. It's been too many years since I worked
with those sorts of numbers.

OK, so once we drop any idea of MIPS, we're talking about something simpler
than a Cortex M0. You should be able to make a design that clocks at a few
hundred MHz on an FPGA process. You could choose to run it synchronously
with your FPGA logic, or on an internal clock and synchronise inputs and
outputs. You probably wouldn't tile these, but you could deploy them as a
'hardware thread' in places you need a complicated state machine.

In essence, your proposal has a disconnect between the situations existing
FPGA blocks are used (implemented automatically by P&R tools) and the
situations software is currently used (human-driven software and
architectural design). It's unclear how you claim to bridge this gap.

I certainly don't see how P&R tools would be a problem. They accommodate
multipliers, DSP blocks, memory block and many, many special bits of
assorted components inside the FPGAs which vary from vendor to vendor.
Clock generators and distribution is pretty unique to each manufacturer.
Lattice has all sorts of modules to offer like I2C and embedded Flash.
Then there are entire CPUs embedded in FPGAs. Why would supporting them
be so different from what I am talking about?

If this is a module that the tools have no visibility over, i.e. just a blob
with inputs and outputs, then they can implement that. In that instance
there is a manageability problem - beyond a handful of processes, writing
heterogeneous distributed software is hard. Unless each processor is doing
a very small, well-defined, task, I think the chances of bugs are high.

If instead you want interaction with the toolchain in terms of
generating/checking the software running on such cores, that's also
problematic.


I hadn't seen Picoblaze before, but that seems a strong fit with what you're
suggesting. So a question: why isn't it more successful? And why isn't
Xilinx putting hard Picoblazes into their FPGAs, which they could do
tomorrow if they felt the need?

Theo
 
On 21/03/19 10:49, Theo wrote:
gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:53:07 AM UTC-4, Theo wrote:
Your bottom-up approach means it's difficult to see the big picture of
what's going on. That means it's hard to understand the whole system, and
to program from a whole-system perspective.

I never mentioned a bottom up or a top down approach to design. Nothing
about using these small CPUs is about the design "direction". I am pretty
sure that you have to define the circuit they will work in before you can
start designing the code.

Your approach is 'I have this low-level thing (a tiny CPU), what can I use
it for?'. That's bottom up. A top down view would be 'my problem is X,
what's the best way to solve it?'.

The OP's attitude and responses have puzzled me. However, they
make more sense if that is indeed his design strategy - and I
suspect it is, based on comments he has made in other parts
of this thread.

That attitude surprises me, since all my /designs/ have been
based on "what do I need to achieve" plus "what can individual
technologies achieve" plus "which combination of technologies
is best at achieving my objectives". I.e. top down with a
knowledge of the bottom pieces.

Of course I /implement/ my designs in a more bottom up way.

(I agree with the rest of your statements)
 
On Thursday, March 21, 2019 at 5:22:09 AM UTC-4, already...@yahoo.com wrote:
On Thursday, March 21, 2019 at 4:21:13 AM UTC+2, gnuarm.del...@gmail.com wrote:

So??? You are the one who keeps talking about software/hardware whatever. I'm talking about the software being able to synchronize with the clock of the other hardware. When that happens there are tight timing constraints, in the same sense of software sampling an ADC on a periodic basis and having to process the resulting data before the next sample is ready. The only difference is that something like the F18A, running at a few GHz, can do a lot in a 10 ns clock cycle.



I certainly don't like the "few GHz" part.
Distributing a single multi-GHz clock over the full area of an FPGA is a non-starter from the power perspective alone, but even ignoring the power, such distribution takes significant area, making the whole proposition unattractive. As I understand it, the whole point is that these things take little area, so they are not harmful even for those buyers of the device who don't utilize them at all, or utilize them very little.

There is no multi-GHz clock distribution. These CPUs can be self-timed. The F18A is. Think of asynchronous logic. It's not literally asynchronous, but similar, with internal delays setting the speed so all the internal logic works correctly. The only clock would be whatever clock the rest of the logic is using.

Think of these CPUs running from a clock generated by a ring oscillator in each CPU. There would be a minimum CPU speed over PVT (Process, Voltage, Temperature). That's all you need to make this work.


> Alternatively, multi-GHz clocks can be generated by local specialized PLLs, but I am afraid that the PLLs would be several times bigger than the cores themselves, and would need good, quiet power supplies and grounds that are probably hard to get in the middle of the chip, etc. I really know too little about PLLs, but I think I know enough to conclude that it's not a much better idea than chip-wide clock distribution at multi-GHz.

That's the advantage of synchronizing at the interface rather than trying to run in lock step. The CPUs free-run at some fast speed. They sit waiting for data on a clock transition, not clocking, using very little power. On receiving the same clock edge the rest of the chip is using, the CPU starts running: data previously generated is output (like a FF), data on the inputs is read and processed, and the result is held while the CPU pends on the next clock edge, again going into a sleep state.

You can read how the F18A does it at an atomic level in the clock management. The wake up is *very* fast.


> My idea of small hard cores is completely different in that regard. IMHO, they should run either with the same clock as the surrounding FPGA fabric or with a clock delivered by a simple clock doubler. Even clock quadrupling does not strike my engineering intuition as a good idea.

This would make the CPU ridiculously slow and not a good trade off for fabric logic.

CPUs can be size efficient when they do a lot of sequential calculations. This essentially takes advantage of the enormous multiplexer in the memory to allow it to replace a larger amount of logic. But if the needs are faster than a slow processor can handle, the processor needs to run at a much higher clock speed. This allows even higher space efficiency, since now the logic in the CPU is executing more instructions in a single system clock.

So let a small CPU run at a very high rate and synchronize at the system clock rate by handshaking, just like a LUT/FF logic block, without worrying about the fact that it is running a lot of instructions. It just needs to run enough to get the job done. The timing is like the logic in a data path between FFs. It has to run fast enough to reach the next FF before the next clock edge. It won't matter if it is faster. So the CPU only needs a minimum spec on the internal clock speed.
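
A minimal sketch of that handshake loop, in C for illustration only: wait_clock_edge(), read_inputs(), and write_outputs() are assumed primitives standing in for whatever pin-wait and I/O mechanism such a core would actually provide (the F18A sleeps on a pin transition; these names are invented).

    #include <stdint.h>

    extern void     wait_clock_edge(void);   /* sleep until the fabric clock edge (assumed) */
    extern uint32_t read_inputs(void);       /* sample the fabric inputs (assumed)          */
    extern void     write_outputs(uint32_t); /* drive the fabric outputs (assumed)          */

    void logic_block_task(void)
    {
        uint32_t next = 0;
        for (;;) {
            wait_clock_edge();          /* like a FF: update on the system clock edge  */
            write_outputs(next);        /* present the result computed last cycle      */
            uint32_t in = read_inputs();
            /* Any amount of sequential work fits here, as long as it finishes
             * before the next edge, like a data path settling before the next FF. */
            next = in * 3u + 1u;        /* placeholder for the block's actual function */
        }
    }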

Rick C.
 
On Thursday, March 21, 2019 at 3:37:14 AM UTC-4, David Brown wrote:
On 21/03/2019 03:21, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:


I want to know if that is going to happen with your ideas here.
Sure, you don't have a full business plan - but do you at least
have thoughts about the kind of usage where these mini cpus would
be a technologically superior choice compared to using state
machines in VHDL (possibly generated with external programs),
sequential logic generators (like C to HDL compilers, matlab tools,
etc.), normal soft processors, or normal hard processors?

The point wasn't that I don't have a business plan. The point was
that I haven't given this as much thought as would have been done if
I were working on a business plan. I'm kicking around an idea. I'm
not in a position to create FPGA with or without small CPUs.


Give me a /reason/ to all this - rather than just saying you can
make a simple stack-based cpu that's very small, so you could have
lots of them on a chip.

Why? Why don't you give ME a reason? Why don't you switch your
point of view and figure out how this would be useful? Neither of us
have anything to gain or lose.


I don't have any good ideas of what these might be used for. And I
can't see how it ends up as /my/ responsibility to figure out why /your/
idea might be a good idea.

You presented an idea - having several small, simple cpus on a chip.
It's taken a long time, and a lot of side-tracks, to drag out of you
what you are really thinking about. (Perhaps you didn't have a clear
idea in your mind with your first post, and it has solidified underway -
in which case, great, and I'm glad the thread has been successful there.)

I've been trying to help by trying to look at how these might be used,
and how they compare to alternative existing solutions. And I have been
trying to get /you/ to come up with some ideas about when they might be
useful. All I'm getting is a lot of complaints, insults, condescension,
patronisation. You tell me I don't understand what these are for - yet
you refuse to say what they are for (the nearest we have got in any post
in this thread to evidence that there is any use-case, is you telling me
you have ideas but refuse to tell me as I am not an FPGA designer by
profession). You are forever telling me about the wonders of the F18A
and the GA144, and how I can't understand your ideas because I don't
understand that device - while simultaneously telling me that device is
irrelevant to your proposal. You are asking for opinions and thoughts
about how people would program these devices, then tell me I am wrong
and closed-minded when I give you answers.

Hopefully, you have got /some/ ideas and thoughts out of this thread.
You can take a long, hard look at the idea in that light, and see if it
really is something that could be useful - in today's world with today's
tools and technology, or tomorrow's world with new tools and development
systems.

But next time you want to start a thread asking for ideas and opinions,
how about responding with phrases like "I hadn't thought of it that
way", "I think FPGA designers IME would like this" - not "You are wrong,
and clearly ignorant".

You are a smart guy, and you are great at answering other people's
questions and helping them out - but boy, are you bad at asking for help
yourself.

I think if you go back and read, I said it all before. But because there is a lot of new thinking involved, it was very hard to get you to understand what was being said rather than continuing to look at it the way you have been looking at it for the last few decades.

Rick C.
 
On Thursday, March 21, 2019 at 5:40:30 AM UTC-4, Tom Gardner wrote:
On 21/03/19 02:21, gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:

I agree that software should not in itself create a problem. Trying to
think of them as "logic" /would/ create problems. Think of them as
software, and program them as software. I expect you'd think of them as
entirely independent units with independent programs, rather than as a
multi-cpu or heterogeneous system.

Ok, please tell me what those problems would be. I have no idea what you
mean by what you say. You are likely reading a lot into this that I am not
intending.

I have no difficulty understanding what he is saying.

Several people have difficulty understanding what you
are proposing.

You are proposing vague ideas, so the onus is on you
to make your ideas clear.

There is no onus. This is not a business proposal. If you want to discuss it, do so. If not, don't.

If you can't tell me what your concerns are, I can't address them. If no one can tell me what problems are being talked about by "Trying to think of them as 'logic' /would/ create problems", I can't possibly address those concerns.


As to the connection, I really don't get your point. They either connect
directly to the hardware because that's how they are designed, or they
don't... because that's how they are designed. I don't know what you are
saying about that.


"Synchronise directly with hardware" might be a better phrase.

I don't know why, and likely I'm not going to care. I think you need to
learn more about how the F18A works.

No, we really don't have to learn more about one specific
processor - especially if it is just to help you.

If, OTOH, you succinctly summarise its key points and
how that achieves benefits, then we might be interested.

I don't see a question. Are you trying to teach me how to post in newsgroups? lol

Ask a question if you have one. Explain something I've said that is wrong. But if you don't have anything better to say, I can't help you.


Enough! The CPUs run software. Now, what is YOUR point?


My point was that these are not logic, they are not logic elements (even if
they could be physically small and cheap and scattered around a chip like
logic elements). Thinking about them as "sequential logic elements" is not
helpful. Think of them as small processors running simple and limited
/software/. Unless you can find a way to automatically generate code for
them, then they will be programmed using a /software/ programming language,
not a logic or hardware programming language. If you are happy to accept
that now, then great - we can move on.

You have it backwards. Please show me what you think the problems are. I
don't care if they run software or have a Maxwell's demon tossing bits about as
long as it does what I need. You seem to get hung up on terminology so
easily.

You need to explain your points better.

There's the old adage that "you only realise how little
you know about a subject when you try to teach it to
other people".

Which points? I'm starting to think you are not here for the hunting.


That is your construct because you know nothing of how the F18A works. As
I've mentioned before, you would do well to read some of the app notes on
this device. It really does have some good ideas to offer.

Give us the elevator pitch, so we can estimate whether
it would be a beneficial use of our remaining life.

If you don't have any idea what I'm talking about at this point, an elevator pitch won't help.


The point wasn't that I don't have a business plan. The point was that I
haven't given this as much thought as would have been done if I were working
on a business plan. I'm kicking around an idea. I'm not in a position to
create FPGA with or without small CPUs.


Give me a /reason/ to all this - rather than just saying you can make a
simple stack-based cpu that's very small, so you could have lots of them on
a chip.

Why? Why don't you give ME a reason? Why don't you switch your point of
view and figure out how this would be useful? Neither of us have anything to
gain or lose.

Why? Because you are trying to propagate your ideas.
The onus is on you to convince us, not the other way
around.

No, I'm trying to discuss an idea. If you don't wish to discuss the idea, then that's fine.

Rick C.
 
On Thursday, March 21, 2019 at 6:49:11 AM UTC-4, Theo wrote:
gnuarm.deletethisbit@gmail.com wrote:
On Wednesday, March 20, 2019 at 6:53:07 AM UTC-4, Theo wrote:
Your bottom-up approach means it's difficult to see the big picture of
what's going on. That means it's hard to understand the whole system, and
to program from a whole-system perspective.

I never mentioned a bottom up or a top down approach to design. Nothing
about using these small CPUs is about the design "direction". I am pretty
sure that you have to define the circuit they will work in before you can
start designing the code.

Your approach is 'I have this low-level thing (a tiny CPU), what can I use
it for?'. That's bottom up. A top down view would be 'my problem is X,
what's the best way to solve it?'. The advantage of the latter view is you
can explore some of the architectural space before targeting a solution
that's appropriate to the problem (with metrics to measure it), aiming to
find the global maximum. In a bottom-up approach you need to sell to users
that your idea will help their problem, but until you build a system they
don't know that it will even be a local maximum.

I'm not designing anything so I can't be designing bottom up. I'm not selling anything, so I don't have users.

I'm discussing an idea. I'm kicking a can. I'm running a flag up the flagpole.

If you aren't interested in discussing this, then that's ok. But there's no point at all in having a meta-discussion.


What's the logic equation of a processor?

Obviously it is like a combination of LUTs with FFs, able to implement
any logic you wish, including math. BTW, in many devices the elements are
not at all so simple. Xilinx LUTs can be used as shift registers. There
is additional logic within the logic blocks that allows math with carry
chains, combining LUTs to form larger LUTs, breaking LUTs into smaller
LUTs, and let's not forget about routing, which may not be used much anymore,
not sure.

You can still reason about blocks as combinations of basic functions. A
block that is LUT+FF can still be analysed in separate parts.
A processor is a 'black box' as far as the tools go. That means any
software is opaque to analysis of correctness. The tools therefore can't
know that the circuit they produced matches the input HDL.

"Correctness" in what sense? I've never worked with tools that could analyze my HDL to tell me if it was logically correct. I really have no idea what you are talking about here. I also don't see the point of your pointing out the LUT can be separate from the FF in a LUT/FF combination. You can model the CPU as a large LUT with FFs. It can do the same job. The FF can be removed. The logic can be removed. Whatever analysis that can be done on the LUT/FF can be applied to the CPU.

If you want to verify the "correctness" of parts of a design my inspection, I would expect that to be done on the HDL anyway, not on the generated logic... unless you thought the tools were suspect.
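
To make that "large LUT with FFs" model concrete, here is a hypothetical sketch in C (the function, types, and logic are invented for illustration): per system clock, the program is just a mapping from inputs and stored state to outputs and next state, i.e. a Mealy machine, which is exactly the semantics of a LUT feeding a FF.

    #include <stdint.h>

    typedef struct { uint32_t q; } state_t;   /* the "FF" contents */

    /* The "LUT": a pure combinational mapping, evaluated once per cycle. */
    static uint32_t step(uint32_t in, state_t *s)
    {
        uint32_t out = s->q ^ in;   /* placeholder logic function */
        s->q = in;                  /* next-state update          */
        return out;
    }

Any analysis you can run on a LUT/FF pair (exhaustive simulation of the input/state space, for instance) can in principle be run on step() as well.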


Simulation does not give you equivalence checking of the form of LVS (layout
versus schematic) or compiler correctness testing; it only tests a
particular set of (usually hand-defined) test cases. There's much less
coverage than with equivalence checking tools.

So those techniques can't be applied to software?


Why does it need to be inferred? If you want to write an HDL tool to turn
HDL into processor code, have at it. But then there are other methods.
Someone mentioned his MO is to use other tools for designing his
algorithms and letting that tool generate the software for a processor or
the HDL for an FPGA. That would seem easy enough to integrate.

That's roughly what OpenCL and friends can do. But those are top-down
architecturally (starting with a chip block diagram), rather than starting
with tiny building blocks as you're suggesting.

Huh? You can't simulate code on a processor???

Verification is greater than simulation, as described above.

If we scale the processors up a bit, I could see the merits in say a
bank of, say, 32 Cortex M0s that could be interconnected as part of the
FPGA fabric and programmed in software for dedicated tasks (for
instance, read the I2C EEPROM on the DRAM DIMM and configure the DRAM
controller at boot).

I don't follow your logic. What is different about the ARM processor from
the stack processor other than that it is larger and slower and requires a
royalty on each one? Are you talking about writing the code in C vs.
whatever is used for the stack processor?

If you have an existing codebase (supplied by the vendor of your external
chip, for example), it'll likely be in C. It won't be in
special-stack-assembler, and your architecture seems to be designed to not
be amenable to compilers.

You can write any compiler you want. I don't know what libraries you would be using to replace FPGA logic with software. Are we talking about print statements?

How do you port C libraries to logic in an FPGA now? Do it the same way.


The point of the many hard cores is the saving of resources. Soft cores
would be the most wasteful way to implement logic. If the application is
large enough they can implement things in software that aren't as
practical in HDL, but that would be a different class of logic from the
tiny CPUs I'm talking about.

'Wastefulness' is one parameter. But you can also consider that every
unused hard-core is also wasteful in terms of silicon area. Can you show
that the hard-cores would be used enough of the time to outweigh the space
they waste on other people's designs?

That assumes some number of CPUs on the FPGA. We don't have those numbers. We also don't have any real data on how large a logic block is in an FPGA, at least I don't.

I think you are making silly points when we are discussing a concept. Of course we won't have the sort of data you are talking about.


You lost me with the gear shift. The mention of instruction rate is about
the CPU being fast enough to keep up with FPGA logic. The issue with
"heterogeneous performance" is the "heterogeneous" part, lumping the many
CPUs together to create some sort of number cruncher. That's not what
this is about. Like in the GA144, I fully expect most CPUs to be sitting
around most of the time idling, waiting for data. This is a good thing
actually. These CPUs could consume significant current if they run at GHz
all the time. I believe in the GA144 at that slower rate each processor
can use around 2.5 mA. Not sure if a smaller process would use more or
less power when running flat out. It's been too many years since I worked
with those sorts of numbers.

OK, so once we drop any idea of MIPS, we're talking about something simpler
than a Cortex M0. You should be able to make a design that clocks at a few
hundred MHz on an FPGA process.

I don't think a few hundred MIPS is fast enough to actually be useful. GIPS is required. At a 100 MHz system clock, a few hundred MIPS is only a handful of instructions per fabric cycle; a few GIPS gives you tens of instructions to work with.


You could choose to run it synchronously
with your FPGA logic, or on an internal clock and synchronise inputs and
outputs. You probably wouldn't tile these, but you could deploy them as a
'hardware thread' in places you need a complicated state machine.

A state machine is one application. But I don't see them being limited in any way in replacing logic other than logic that is too small for this to be efficient.

Xilinx makes a big deal of their shift registers built from a LUT. I've seen designs where many stages of shift register were needed. This CPU could replace a large number of those at a data clock rate of some hundreds of MHz.
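
For instance, a hedged sketch of what such a replacement might look like in software (the sizes and names are invented): a 128-stage, one-bit delay line held in four words, shifted once per fabric clock inside the same kind of edge-synchronized loop sketched earlier.

    #include <stdint.h>

    static uint32_t stages[4];                    /* 128 one-bit stages         */

    /* Shift one bit in, return the bit falling off the far end. */
    static uint32_t shift_in(uint32_t bit)
    {
        uint32_t out = stages[3] >> 31;           /* oldest bit exits here      */
        for (int i = 3; i > 0; i--)               /* ripple the MSBs along      */
            stages[i] = (stages[i] << 1) | (stages[i - 1] >> 31);
        stages[0] = (stages[0] << 1) | (bit & 1u);
        return out;
    }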


In essence, your proposal has a disconnect between the situations existing
FPGA blocks are used (implemented automatically by P&R tools) and the
situations software is currently used (human-driven software and
architectural design). It's unclear how you claim to bridge this gap.

I certainly don't see how P&R tools would be a problem. They accommodate
multipliers, DSP blocks, memory block and many, many special bits of
assorted components inside the FPGAs which vary from vendor to vendor.
Clock generators and distribution is pretty unique to each manufacturer.
Lattice has all sorts of modules to offer like I2C and embedded Flash.
Then there are entire CPUs embedded in FPGAs. Why would supporting them
be so different from what I am talking about?

If this is a module that the tools have no visibility over, i.e. just a blob
with inputs and outputs, then they can implement that.

Why no visibility?


In that instance
there is a manageability problem - beyond a handful of processes, writing
heterogeneous distributed software is hard. Unless each processor is doing
a very small, well-defined, task, I think the chances of bugs are high.

You need to explain to me what is hard about *this*. Giving it a label and then saying anything with that label is hard doesn't mean much. I don't think the label fits.


If instead you want interaction with the toolchain in terms of
generating/checking the software running on such cores, that's also
problematic.

I don't follow. In the design it's logic. You keep trying to think of it the way you think of all software. It's logic: inputs and outputs. You only need to dig into the code after you find there is something wrong with the mapping of inputs to outputs, like any other logic module. Presumably the code would have been simulated with appropriate inputs and outputs.


I hadn't seen Picoblaze before, but that seems a strong fit with what you're
suggesting. So a question: why isn't it more successful? And why isn't
Xilinx putting hard Picoblazes into their FPGAs, which they could do
tomorrow if they felt the need?

More successful than what? The Volkswagen Beetle?

I can't explain much of what Xilinx does except they respond to their largest customers who pay thousands of dollars for a single FPGA chip. They say what goes into Xilinx FPGAs and the rest of us are tag-alongs. Literally.

Rick C.
 
