AREF bypass capacitance on ATMega2560?

On 9/7/2013 6:23 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 4:46 PM, Joerg wrote:
Paul Rubin wrote:
Joerg<invalid@invalid.invalid> writes:
I don't see how the equivalent of a TMS320 or a big MSP430 could fit
into one of these small Lattice devices.

I had thought the parts of those processors that would bloat up badly
(instruction decode etc.) are pretty simple, so the overall effect of the
bloat is ok in the scheme of things. The parts doing the most work
(memory, arithmetic) are done in the FPGA hardware (RAM and DSP blocks,
adders connected to the LUTs somehow) as efficiently as on the MCUs.

I do think softcores seem like a silly idea a lot of the time, and am
looking forward to more low end FPGAs with MCU blocks.


Much of it has to do with legacy code. Yes, some things could even be
done more efficiently in the FPGA because you can actually streamline
the HW to the task, something neither uC nor DSP allows. For example, why
have a 32-bit HW multiplier when you know you'll never exceed 23 bits?
But legacy code won't run anymore and you need FPGA specialists to make
it all work.
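
(A concrete illustration of what streamlining the width buys -- a
minimal Verilog sketch, module name and widths purely illustrative:

  // A multiplier sized to the data, not to a core. A synthesizer can
  // map this onto the FPGA's DSP blocks; the pipeline register on the
  // product helps timing.
  module mul23 (
      input  wire               clk,
      input  wire signed [22:0] a,
      input  wire signed [22:0] b,
      output reg  signed [45:0] p   // full-width product
  );
      always @(posedge clk)
          p <= a * b;
  endmodule

The hardware is exactly as wide as the problem, which a fixed 32-bit
multiplier in a uC or DSP can never be.)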

No, you would need a DSP specialist. The FPGA designer only needs to
know how to code the FPGA.


So for this kind of solution in an FPGA you need a DSP specialist and an
FPGA specialist? That would be a problem.

You can do it any way you want. I'm just making the distinction between
DSP knowledge and FPGA knowledge. They aren't very much the same. I
also make a distinction between a DSP designer and a DSP coder. Again,
not much in common. Coding doesn't really require a lot of DSP
knowledge and DSP designers often aren't experts at coding the finicky
chips.


But that is exactly the point of the FPGA in DSP apps. You code to the
app, not to a processor.


How long do the usual FPGAs stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

Life span is typically *much* better than for MCUs.

First, there are *no* second sources so whatever chip family you select
is the only one that will fit your layout. There *may* be more than one
member of that family that will fit the same socket, that is common, but
not guaranteed. So you often will get a choice of two, three or four
sizes and you often get an upgrade path from your first selection. In
case you are not familiar with the compilation process: in an FPGA you
*always* have to recompile for the target. Even if they are pin
compatible you can't load a design for a whatever-02 chip into a
whatever-03 part. Those are the limitations.

As to the market life, that is typically well over 10 years. Spartan 3
was introduced some 10 years ago and it is not yet the oldest chip
Xilinx has in current full production. I'm still considering using it
for new designs. Similar situation for Altera. I was just burned by
Lattice announcing EOL of their XP line. This was because they got a
new guy in at the top with a new broom I suppose.

I'm sure you can find various MCUs which have been in production for 10
years, but I know Atmel likes to replace products from time to time with
similar "pin compatible" devices which are 99.9% compatible. I expect
for the 8 bit parts life span is not such an issue. For the larger
parts I expect life span is a bit more limited and for the top end
chips, I'm pretty sure their life span is measured in double digit
months. Can you still buy any of the Pentium 4s that were all over the
place seven or eight years ago? I can't even find a Core 2 Duo.

What lifespan have you seen for MCUs?

--

Rick
 
On 9/7/2013 5:33 PM, Paul Rubin wrote:
rickman<gnuarm@gmail.com> writes:
How about an MCU array instead? http://www.greenarraychips.com/

Yes, we've had many discussions about that part ;-).

considering a softcore "silly" is not a useful engineering analysis.

The engineering analysis is implied: it takes far more silicon to
implement a microprocessor in LUTs than directly in silicon, plus you
lose a lot of speed because of all the additional layers and lookups.

That is a pointless comparison. I have never once opened up a chip to
see how much silicon it used. I compare the things I can see from the
outside, cost, power consumption, etc... You can infer anything you
wish. The proof of the pudding is in the eating.

This is exactly the type of bias I'd like to overcome.


Bernd Paysan rolled his own small processor design for an ASIC

Yes, the ASIC bypassed the relative inefficiency of doing the same thing
in FPGAs. It would be cool to have some tiny processors like that
available as hard cells on small FPGAs.

Ok, but your "efficiency" rating is not of any real value in a design.
Stop limiting yourself by pointless metrics. If you like the idea of a
lot of processors on a chip, then design one on an FPGA and see how it
works.

Do you see what I'm trying to say?

--

Rick
 
rickman <gnuarm@gmail.com> writes:
That is a pointless comparison. I have never once opened up a chip to
see how much silicon it used. I compare the things I can see from the
outside, cost, power consumption, etc...

Yes, and those are quite closely dependent on the amount of silicon used.

You can infer anything you wish. The proof of the pudding is in the
eating.

OK. That GA144 you mentioned has 144 CPU nodes made in a rather old
process technology (0.18 micron, I guess 1990's vintage). They still
manage to run the thing at 700+ MHz, keep power consumption to around
0.5W with all CPUs running full speed, and sell it for $20 in small
quantity. Can you do anything like that with an FPGA? What will it
cost? How much power will it use? I'll accept the b16 as a comparable
processor to a GA144 node. Bernd's paper mentions the b16 ran at 25 MHz
in a Flex10K30E, a 30-to-1 slowdown, power consumption not mentioned.
But I don't know how the underlying silicon processes compare.
 
On 9/7/2013 5:48 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 3:39 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 1:59 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 11:10 AM, Joerg wrote:
For example, this:

http://www.ti.com/lit/ds/symlink/tms320c5535.pdf

I don't see it for $3. Did you get a quote for your project? TI says
it is $5 to $8 at qty 1k depending on the flavor. You still need
to add
Flash.


1k qty is $3.67 at Digikey:

http://www.digikey.com/product-detail/en/TMS320C5532AZHH10/296-32741-ND/2749713



Not the same part, pal. You're trying to pull a fast one on me? Are we
talking about the TMS320C5535 with "tons" of memory or the TMS320C5532
with *much* less memory?


It doesn't have the single access RAM but it does have 64k dual access
RAM. That's a lot of RAM in embedded.

You do this often. Start talking about one thing and shift the context
to another. ...


I didn't. I said it is a DSP with large memory, which it is.

You first give a part, the C5535, as the chip with big memory, then it
becomes the C5532 which is less memory and less expensive. I can't tell
what you are talking about when the subject changes.


... Projects have design requirements. I often am able to meet
my design requirements with an FPGA and no MCU. I often can't say the
opposite, being able to use an MCU without the FPGA.


$3.02 with 12wks leadtime at Arrow:

http://components.arrow.com/part/detail/51425505S8988412N7713?region=na

ROM is included.

ROM is not Flash... is it? Are you thinking in terms of a mask ROM?


You can use the bootloader or OTP your own bootloader if you don't want
to store your programming in ROM. In most situations this is part of a
larger computerized system from where it can download its programming.

That's a different wrinkle. It is common to have a micro load an FPGA,
many don't contain their own Flash. But I haven't seen this done with
DSPs as often and almost never with MCUs. But if that works for your
project, great. You certainly wouldn't have a problem loading an FPGA
then.


The fact that most FPGA don't have flash is fine ... but ... there must
be a decent bootloader inside. In one of the upcoming projects it must
be able to bootload via USB. So the device must wake up with a certain
minimum in brain functionality to handle the USB stuff. With FPGA that
can become a challenge unless you provide a serial memory device (which
adds cost).

No, you won't find any FPGAs which can wake up talking over USB. But
you will find FPGAs with internal Flash if you wish to design a USB
bootloader.


... It has been a while since I looked
hard at DSP chips, but I don't recall any I would call remotely
"big"
for $3. The TI chips that would be "big" are the TMS6xxx line which
start somewhere around $20 the last time I looked and that requires
all
memory and I/O to be separate. The smaller DSP chips that you
can get
for the $3 range are not "big" in any sense and only a very few of
them
include Flash memory. So you still need another chip.


It has tons of memory on board.

Yes, and many FPGAs have "tons" of memory on board although not for
$3... but then this isn't a $3 part either...


It is a $3 part. See above.

No, you need to pick a part number and stick with it.


I gave a part number. Still waiting for your $3 FPGA part number :)

Actually you gave me two part numbers, one for $5 and one for just over
$3. What's your point? I gave you info to find the iCE40 line. Xilinx
also makes FPGAs that are very affordable, as do Altera and Lattice.


Both of the ones I gave you are $3. The DSP costs $3.02 and the MSP430
is $3.09. These are over-the-counter no-haggle prices. Can a $3 iCE40
device emulate a TMS320 or a big MSP430? I can't tell because I don't
know this Lattice series and I am not an FPGA expert. But it sure looks
like they'd have a hard time.

No, the C5535 part is not $3. That is what I mean by two part numbers.


I have already explained that I would never do a design in an FPGA to
wholly incorporate a DSP or MCU. That would be absurd. So why do you
keep asking about that?


Because you wrote yesterday, quote "For $3 however, you can get a chip
large enough for a CPU (with math) and room for your special logic".

I said "a CPU" not "any CPU". I never said it would duplicate a
commercial device. I'm talking about function.


Depending on your design requirements there are any number of FPGAs that
will do the job and some may be $3. What are your design requirements?


As I said, I do not have any hammered out ones yet but it'll come. This
was just about your $3 claim. So I gave some examples of devices that
cost $3.

Yes, and there are FPGAs in that price range which can be used to
implement a CPU plus other logic.


I understand the concept of work that can't be moved. You don't need to
continue to explain that. I was asking why you said most of your work
didn't have that requirement and yet you still were debating the point.
Now I get it, you are talking about two different things, work that can
be moved and work that can't be moved.


Yup. Hence the need for availability of local programmer talent. Less
local availability means potential problems. That is because (where
possible) I like to use architectures I am familiar with.

Programmer talent means long-term availability. For example, if a client has an issue
with an 8051 design long after the original programmer has moved on I
could find programmers within a 10-mile radius. Try that with an FPGA.
In San Jose it may be possible but not out here.

I can't speak of your environment. I know my friend of many years
stayed away from FPGAs in spite of the fact that he is a very capable
designer. He finally paid me for a week of FPGA design work which I
then turned over to him and helped him get started with HDL. It's not
hard at all. You don't really need anyone special. That is the sort of
thinking I am trying to dispel.

Another example. A software designer came to a newsgroup looking for
info on programming FPGAs. He used the mindset of a software guy and
wanted to do a "hello world" program. We tried to explain to him that
hardware isn't software and HDL isn't C. But he persisted and I gave
him advice over a week or so. I tried to turn it into a consulting gig
but his bosses didn't want to pay the bucks. He ended up doing just
fine with his software mindset and convinced his boss to pay me $500
over my protests. I cashed the check when it came.
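
(For what it's worth, the nearest HDL equivalent of "hello world" is an
LED blinker. A minimal Verilog sketch, clock frequency assumed, nothing
from his actual project:

  // Divide the clock down until the LED blinks at a visible rate.
  module hello (
      input  wire clk,           // 25 MHz assumed
      output wire led
  );
      reg [23:0] div = 0;
      always @(posedge clk)
          div <= div + 1;
      assign led = div[23];      // blinks at roughly 1.5 Hz
  endmodule

Even this tiny example shows the mindset shift: nothing "runs" line by
line, everything exists at once and ticks in parallel.)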

The point is that FPGAs are not so hard that you need a unique talent to
design them. That may have been true 10+ years ago, but they are very
mainstream now and much easier to work with. I bet even *you* could do
an FPGA design, lol.

I don't care where you are located, if you can't find an FPGA designer,
you aren't looking very hard.


Not sure what the requirements are for your CODEC, but I have been using
the AKM parts with good results. Minimal or *no* programming,
configuration is done by a small number of select pins, very simple. I
have yet to find another one as small as the AK4552 or AK4556.


Plus their prices are quite good.

Which, AKM or the other? I'd like to think I can get a CD quality CODEC
for $3 from nearly anyone. I mainly picked AKM because of the size, 6x6
mm without going to a microBGA.


AKM has good prices.

Ok. I have no complaints on prices. Their lead time can be a problem.
I had a conversation, disti, manufacturers guy and me. I was
complaining about a 14 week lead time and he bragged that a 14 week lead
time was *good*. I give my customers a 10 week lead time... see the
problem? Digikey sells them now so it is not such an issue. I even
ended up speaking with a buyer or planner who was coordinating the
shipment of an order last spring. Once you reach them they are very nice.


High 10s or low 10s. Up to, say, 20 or 30 ksps is easy to do in an
FPGA with decent resolution, 12 bits. Getting 16 bits is harder, I've
not tried it, but should be possible.


Mostly I need 40-50 ksps. But 20 is often ok.

I haven't done 12 bits at 50 ksps, but I expect it is doable. Just
cross the t's and dot the i's.


I was looking at using an LVDS input for a comparator and Xilinx did it
in a demo of an SDR. They are very short on details, but they talk about
1 mV on the input. I know that's not anything special, I'm hoping to do
better, much better.


If you can keep substrate noise in check it could work. Try to remain
fully differential as much as you can. Not sure if FPGA design suites
still let you hand-place blocks so you can avoid making a big racket
right next to the ADC area.

*Everything* in an FPGA makes noise, it's all digital. Yes, you can
hand-place logic if you want. That is the sort of thing best done at the end
if possible when you are ready to finalize the chip. But what would you
have in an FPGA design that makes more noise than anything else? Each
logic block is very small and has a pretty low power consumption. It
would be the I/O that has significant power spikes and you have total
control over that.
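
(For reference, the usual form of the LVDS trick: the input pair acts
as the comparator of a 1-bit sigma-delta modulator, with an external RC
from a feedback pin closing the loop. A minimal sketch of the digital
side -- Verilog assumed, a 50 MHz modulator clock assumed, and the
plain accumulate-and-dump decimator is illustrative, not Xilinx's demo:

  module sd_adc #(
      parameter DECIM = 1024          // oversampling ratio
  )(
      input  wire        clk,         // 50 MHz assumed
      input  wire        comp,        // LVDS comparator result
      output reg         fb,          // drives the external RC
      output reg  [15:0] sample,
      output reg         sample_valid
  );
      reg [15:0] acc = 0;
      reg [15:0] cnt = 0;
      always @(posedge clk) begin
          fb  <= comp;                // 1-bit feedback closes the loop
          acc <= acc + comp;          // count ones over the window
          sample_valid <= 1'b0;
          if (cnt == DECIM-1) begin
              sample <= acc;          // one result per DECIM clocks
              acc    <= comp;
              cnt    <= 0;
              sample_valid <= 1'b1;
          end else
              cnt <= cnt + 1;
      end
  endmodule

At 50 MHz and DECIM=1024 that is roughly 49 ksps; a real design would
use a proper sinc/CIC decimator to buy more ENOB at the same rate.)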


I wasn't referring to a specific project, just your claim that FPGA can
do the same job as processors at the same price.

Yes, that is my claim. The obvious exception is when some feature is
needed that just isn't available in an FPGA. I'm not saying *every*
project can be done better in an FPGA. I'm saying that designers tend
to just not consider FPGAs when they are often viable solutions.


In most of my apps I need much of the functionality that a decent uC
affords, like the $3 device from the MSP430 series I mentioned.

If you need 256 kB of memory then you won't reach a $3 price tag. If
you need something more like the low end processor you mentioned that
might be doable in the low end FPGAs. They have block RAM, but it
scales with the size of the chip. When you have a specific requirement
we can look and see what matches.


One project will probably require something of the caliber of a
MSP430F6733. Whether this kind or a DSP, what is key is that we are able
to use pre-cooked library routines. In my case for complex (I/Q) signal
processing, FFT, non-linear filtering and so on. Sometimes legacy
routines must be kept and in an FPGA that would require an IP block that
can emulate the respective processor well enough.

Ok, that is likely a no-go. If you really want to emulate a DSP chip
then an FPGA is not likely to be a useful way to proceed. Wanting to
run DSP precompiled library code is a bit of an extreme requirement. If
the customer wants a DSP, then by all means give them a DSP. But don't
automatically exclude an FPGA from the task.


Sometimes it would also be ok if there were similar pre-cooked FPGA
routines (I/Q signal processing, non-linear filters et cetera).

There are design tools that will generate function blocks, filters, etc.
I have not had to deal with them. The DSP stuff I have done I just
coded up in HDL.
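
(To give an idea of what hand-coded HDL DSP looks like -- a minimal
Verilog sketch, coefficients and widths purely illustrative:

  // 4-tap FIR, coefficients 1,3,3,1 (a crude low-pass); the constant
  // multiply by 3 reduces to a shift and add, so no multiplier needed.
  module fir4 (
      input  wire               clk,
      input  wire signed [15:0] x,
      output reg  signed [18:0] y    // headroom for the tap sum
  );
      reg signed [15:0] d1 = 0, d2 = 0, d3 = 0;
      always @(posedge clk) begin
          d1 <= x;  d2 <= d1;  d3 <= d2;
          y  <= (x + d3) + 3*(d1 + d2);  // normalize by >>> 3 downstream
      end
  endmodule

One tap structure like this per channel, clocked well above the sample
rate, is typical of how DSP gets squeezed into small FPGAs.)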


But what I see most is this: The respective client has in-house talent
for writing code. They are familiar with a particular architecture, have
a vast arsenal of re-usable code built up, and naturally they do not
wish to give this up. If it's a dsPIC like last year, then that goes in
there. If it's Atmel and MSP430 like this year, then that is used. Has
to be, the customer is king.

Yeah, well that is a deal killer for *any* other alternative. That is
not related to what I was saying. My point is that if you don't have
any specific requirement that dictates the use of a given chip, an FPGA
has as good a chance at meeting the requirements as an MCU or DSP. In
fact, FPGAs are what get used when DSPs aren't fast enough. My point is
you don't have to limit them to the high end. They also do very well at
the low end.


No disagreement there, programmables have come a long way since the days
of GALs. Which I never used because they were expensive power guzzlers.

One other challenge that needs to be met in most of my cases is
longevity of the design. An FPGA would have to remain available for more
than just a few years. For example, one of my uC-based designs from the
mid 90's is still in production. Since I kind of had a hunch that this
would happen I used an 8051 family uC. Is there something similar in the
world of FPGA?

That is typically not a problem, but pick a device that is relatively
new to start with. The vendors are *all* about their latest and
greatest products. I guess they need a critical mass of design wins up
front which they get revenue from over the life of the part. So they
push the newest stuff and let you ask about the older parts.

I don't think there is anything like the 8051 other than the 22V10
perhaps. The 8051 is an anomaly in the MCU world. You won't see a DSP
equivalent for example. So far users typically want more, more, more
from FPGAs. So a stationary design would not have a market. Even
though there are ever larger markets for low end parts, they keep
redesigning them to make them cheaper. When they do that they add
incompatibility because it doesn't affect the bulk of the users,
recompile and you are good to go. But pin compatibility, no, that just
doesn't exist other than within a single family. Fortunately product
life is typically not an issue.

--

Rick
 
rickman wrote:
On 9/7/2013 6:23 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 4:46 PM, Joerg wrote:
Paul Rubin wrote:
Joerg<invalid@invalid.invalid> writes:
I don't see how the equivalent of a TMS320 or a big MSP430 could fit
into one of these small Lattice devices.

I had thought the parts of those processors that would bloat up badly
(instruction decode etc.) are pretty simple, so the overall effect of the
bloat is ok in the scheme of things. The parts doing the most work
(memory, arithmetic) are done in the FPGA hardware (RAM and DSP blocks,
adders connected to the LUTs somehow) as efficiently as on the MCUs.

I do think softcores seem like a silly idea a lot of the time, and am
looking forward to more low end FPGAs with MCU blocks.


Much of it has to do with legacy code. Yes, some things could even be
done more efficiently in the FPGA because you can actually streamline
the HW to the task, something neither uC nor DSP allows. For example, why
have a 32-bit HW multiplier when you know you'll never exceed 23 bits?
But legacy code won't run anymore and you need FPGA specialists to make
it all work.

No, you would need a DSP specialist. The FPGA designer only needs to
know how to code the FPGA.


So for this kind of solution in an FPGA you need a DSP specialist and an
FPGA specialist? That would be a problem.

You can do it any way you want. I'm just making the distinction between
DSP knowledge and FPGA knowledge. They aren't very much the same. I
also make a distinction between a DSP designer and a DSP coder. Again,
not much in common. Coding doesn't really require a lot of DSP
knowledge and DSP designers often aren't experts at coding the finicky
chips.

I have learned that with uC as well. There are lots of programmers but
not too many who can lay down a realtime program architecture. So I do
that a lot. And I am a guy who cannot really program uCs very easily, I
don't speak much C.

With DSP it's the same and I would expect that also from FPGA. I propose
an architecture, what needs to be calculated, and when. Then a
programmer takes over. But I can't justify more than one person for
that. So if some or a lot of a uC or DSP core has to be poured into an
FPGA, then I guess the FPGA guy has to take over both.

But that is exactly the point of the FPGA in DSP apps. You code to the
app, not to a processor.


How long do the usual FPGAs stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

Life span is typically *much* better than for MCUs.

First, there are *no* second sources so whatever chip family you select
is the only one that will fit your layout. ...

That is one of my concerns. With 8051 uCs you have multiple sources as
long as you stick to customary packages such as a 44-pin flat-pack.


... There *may* be more than one
member of that family that will fit the same socket, that is common, but
not guaranteed. So you often will get a choice of two, three or four
sizes and you often get an upgrade path from your first selection. In
case you are not familiar with the compilation process: in an FPGA you
*always* have to recompile for the target. Even if they are pin
compatible you can't load a design for a whatever-02 chip into a
whatever-03 part. Those are the limitations.

Yeah, that I was aware of. And changing to a whatever-03 would be a
major headache in many of my cases. Because it's medical, aerospace or
similar, where that can trigger a complete re-cert.


As to the market life, that is typically well over 10 years. Spartan 3
was introduced some 10 years ago and it is not yet the oldest chip
Xilinx has in current full production. I'm still considering using it
for new designs. Similar situation for Altera. ...

Well over 10 years is good. But only if that means no change to any new
versions that require a re-compile. Early on in my career that happened
and one guy promptly got busy with three months of regression testing.
Oh what fun.


... I was just burned by
Lattice announcing EOL of their XP line. This was because they got a
new guy in at the top with a new broom I suppose.

Not so cool :-(


I'm sure you can find various MCUs which have been in production for 10
years, but I know Atmel likes to replace products from time to time with
similar "pin compatible" devices which are 99.9% compatible. I expect
for the 8 bit parts life span is not such an issue. For the larger
parts I expect life span is a bit more limited and for the top end
chips, I'm pretty sure their life span is measured in double digit
months. Can you still buy any of the Pentium 4s that were all over the
place seven or eight years ago?

Yup:

http://components.arrow.com/part/detail/41596500S6440784N2936?region=na


... I can't even find a Core 2 Duo.

No problem either:

http://components.arrow.com/part/detail/42952225S9497728N2936?region=na

What lifespan have you seen for MCUs?

The 89C51 I designed in back in the mid-90's is still living. Not sure how
long it was in production when I designed it in. The nice thing is that
these are made by several companies, even Asian ones such as Winbond. So
it was no surprise when I took apart our pellet stove for maintenance
and found one of those in there as well.

2nd source is important to me, and my clients.

--
Regards, Joerg

http://www.analogconsultants.com/
 
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Nope. Not if it's in the worlds of medical or aerospace. There you have
a huge re-cert effort on your hands for changes. New layout? Back to the
end of the line.

That is not always true (at least for medical equipment, no experience
with aerospace). If the change is minor enough, it may be enough to
write a rationale that explains the change and how it does not impact
the function of the equipment. If the notified body agrees with the
rationale, only a limited effort is required to re-cert.

I really doubt they would agree if a code re-compilation was required to
make this work. With code and firmware they have become very careful
because there have been too many mishaps.

Most of the time the notified bodies or even the FDA do not care much
about the code, they care about your process. So then the onus is on the
company, and there mostly on the VP of Quality Control. He or she will
normally not take a re-compile lightly, or as something that can be
brushed under the carpet as "not too risky".

It is the same with some hardware. I went through a whole re-cert once
just because we had to switch the manufacturer of one little transformer.

The bottom line is that in the unlikely but possible situation where
something bad happens you need to be prepared. Then there will be a
barrage of requests for documents from the regression testing and all
that. Woe to those who then don't have them.


Sometimes changing is very time consuming. I recently learned that this
is even the case for alarm systems. "If we even add as much as one
capacitor for EMC we have to go through the whole insurer certification
process again".

Weird, I would expect a similar approach with a rationale or something
would be enough.

There are many other markets with similar requirements. One of them is
railroad electronics, especially for countries like Germany.

--
Regards, Joerg

http://www.analogconsultants.com/
 
rickman <gnuarm@gmail.com> writes:
The point is that FPGAs are not so hard that you need a unique talent
to design them. That may have been true 10+ years ago, but they are
very mainstream now and much easier to work with.

There still appears to be a complete absence of FOSS toolchains, at
least for any current interesting parts.

like the $3 device from the MSP430 series I mentioned.
If you need 256 kB of memory then you won't reach a $3 price tag.

That part (MSP430F6733) has 64k of flash and 4k of ram, not out of
reach. It does have some nice other features that may be hard to
duplicate with an fpga, like quite low power consumption:
http://www.ti.com/product/msp430f6733
 
On 9/7/2013 8:39 PM, Paul Rubin wrote:
rickman<gnuarm@gmail.com> writes:
The point is that FPGAs are not so hard that you need a unique talent
to design them. That may have been true 10+ years ago, but they are
very mainstream now and much easier to work with.

There still appears to be a complete absence of FOSS toolchains, at
least for any current interesting parts.

There are no FOSS bit stream generators and there never will be. If
that is a no-go for you, then you will never use FPGAs from any of the
existing companies.


like the $3 device from the MSP430 series I mentioned.
If you need 256 kB of memory then you won't reach a $3 price tag.

That part (MSP430F6733) has 64k of flash and 4k of ram, not out of
reach. It does have some nice other features that may be hard to
duplicate with an fpga, like quite low power consumption:
http://www.ti.com/product/msp430f6733

You have been reading old books. Not all FPGAs are power hungry. Check
the Lattice site for the iCE40 line. Very low power.

--

Rick
 
On 9/7/2013 7:45 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 6:23 PM, Joerg wrote:

How long do the usual FPGAs stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

Life span is typically *much* better than for MCUs.

First, there are *no* second sources so whatever chip family you select
is the only one that will fit your layout. ...


That is one of my concerns. With 8051 uCs you have multiple sources as
long as you stick to customary packages such as a 44-pin flat-pack.

Yes, but 8051s aren't DSPs either, are they? You seem to be switching
gears again. I can't keep up. I know you do different designs, but can
the FPGA be wrong for *all* of them? You seem to have all requirements
for all designs.


... There *may* be more than one
member of that family that will fit the same socket, that is common, but
not guaranteed. So you often will get a choice of two, three or four
sizes and you often get an upgrade path from your first selection. In
case you are not familiar with the compilation process: in an FPGA you
*always* have to recompile for the target. Even if they are pin
compatible you can't load a design for a whatever-02 chip into a
whatever-03 part. Those are the limitations.


Yeah, that I was aware of. And changing to a whatever-03 would be a
major headache in many of my cases. Because it's medical, aerospace or
similar, where that can trigger a complete re-cert.

Why would you need to change to the whatever-03? Once it is qualified
you can stick with it. My point is that you have flexibility in the
device, no one is making you switch.


As to the market life, that is typically well over 10 years. Spartan 3
was introduced some 10 years ago and it is not yet the oldest chip
Xilinx has in current full production. I'm still considering using it
for new designs. Similar situation for Altera. ...


Well over 10 years is good. But only if that means no change to any new
versions that require a re-compile. Early on in my career that happened
and one guy promptly got busy with three months of regression testing.
Oh what fun.

Why not talk to the vendors?


... I was just burned by
Lattice announcing EOL of their XP line. This was because they got a
new guy in at the top with a new broom I suppose.


Not so cool :-(

Yeah, I'm unhappy about it. I thought I could get more development
funds for a redo but the division reselling this board in their product
doesn't want to spend any cash on it. I've been asked to spend my dime
and I likely will. I make good money on this product.


I'm sure you can find various MCUs which have been in production for 10
years, but I know Atmel likes to replace products from time to time with
similar "pin compatible" devices which are 99.9% compatible. I expect
for the 8 bit parts life span is not such an issue. For the larger
parts I expect life span is a bit more limited and for the top end
chips, I'm pretty sure their life span is measured in double digit
months. Can you still buy any of the Pentium 4s that were all over the
place seven or eight years ago?


Yup:

http://components.arrow.com/part/detail/41596500S6440784N2936?region=na


... I can't even find a Core 2 Duo.


No problem either:

http://components.arrow.com/part/detail/42952225S9497728N2936?region=na

How do you know these are the parts that were designed in the system of
interest? They made a huge number of variants and I know I have seen
EOL notices for Pentium 4s.


What lifespan have you seen for MCUs?


The 89C51 I designed in back in the mid-90's is still living. Not sure how
long it was in production when I designed it in. The nice thing is that
these are made by several companies, even Asian ones such as Winbond. So
it was no surprise when I took apart our pellet stove for maintenance
and found one of those in there as well.

2nd source is important to me, and my clients.

If you really need that level of consistency, then you will be using
nothing but 8051s all your career. I don't know of any digital
component that has lived as long as the 8051 other than perhaps LS-TTL.
I also don't know of any other MCU that is second sourced. If the
8051 does what you need, then go for it. But again you are mixing
conversations. That's why it is so frustrating to have a conversation
with you. You talk about not being able to use a part unless it has a
product life as long as the 8051 and then you talk about using various
DSP chips in the same context. I *know* you won't be able to buy those
DSP chips 10 years from now. TI just doesn't provide that level of
support unless they have a special program for long lived parts I'm not
aware of. I've seen lightly selling DSPs drop from the marketplace
after less than 5 years.

The DSP market was just a tiny exploration by TI initially. Then they
saw cell phones as a way to utilize that capability. They actually
reorganized the entire company to take full advantage of it. As a
result they ended up with four segments for DSPs.

1) Cell phone devices - small, low power and cheap in large quantities.
Not much need for longevity at all... basically the C5xxx line.

2) Cell base stations - powerful devices that can handle multiple
channels, power consumption not important and cost is secondary. This
is the C6xxx line. Again, they focus on new, not longevity.

3) Scientific DSP - floating point. C67xx lines. Relatively low
volumes compared to the other two, but they seem to think it is an
important market. New designs are not as frequent. Longevity might be
better than the other two, but no promises.

4) Motor control, white goods, etc - fixed point with price the major
factor. These have appeared in a range of variations, some with flash,
some with ADCs, etc. These are almost MCUs with similar performance,
slow compared to segment 1 and 2. Intended for high volume apps, but
again, longevity is not important.

So if you are going to consider DSPs for your apps, I expect you would
be looking at the last category. I'm pretty sure I wouldn't be
designing from this group if I wanted to be building this board 10 years
from now though. Have you talked to TI about longevity?

--

Rick
 
On 9/7/2013 7:31 PM, Paul Rubin wrote:
rickman<gnuarm@gmail.com> writes:
That is a pointless comparison. I have never once opened up a chip to
see how much silicon it used. I compare the things I can see from the
outside, cost, power consumption, etc...

Yes, and those are quite closely dependent on the amount of silicon used.

Nonsense. If you want to know how much water has collected in the
basement because of a burst pipe, do you call the water authority to
read the meter? No, you put a stick in the water and measure it. If
you want to know a parameter, then measure that parameter, don't infer
it from something only vaguely related.

You have a bias against soft cores because you want to analyze them in a
meaningless way. How about analyzing them in the terms that you care
about?


You can infer anything you wish. The proof of the pudding is in the
eating.

OK. That GA144 you mentioned has 144 CPU nodes made in a rather old
process technology (0.18 micron, I guess 1990's vintage). They still
manage to run the thing at 700+ MHz, keep power consumption to around
0.5W with all CPUs running full speed, and sell it for $20 in small
quantity. Can you do anything like that with an FPGA?

Like what exactly? Do 700 MIPS, of course you can. An FPGA can be
configured to run your algorithm more exactly than any processor and so
it can get very low power.

BTW, you know the GA144 doesn't do 700 MIPS either. It is less than
half that with most code. The GA144 isn't 0.5 Watts either, it is close
to 1 Watt with all nodes running. It also doesn't cost $20 to use
because it requires a *ton* of support devices, boot prom, RAM, clock,
1.8 volt to *everything else* voltage translation, etc...

I actually considered using it in my board redesign. I might have to
add a RAM chip to it, but all the clocks are external to the board
anyway and there is already a low voltage power supply. So the main
issue is the voltage translation which is partly dealt with currently
since the current FPGA had to be buffered to some of the I/O for 5 volt
logic. So the GA144 might do ok in that design. But then there is the
reason I am doing a redesign... the FPGA is EOL. I don't have much
confidence GA will be around in 10 years. Do you know of one major
design win they have had?


What will it
cost?

You haven't told me what the design requirements are... how can I
possibly give you a price?


> How much power will it use?

How long is a piece of string?


I'll accept the b16 as a comparable
processor to a GA144 node. Bernd's paper mentions the b16 ran at 25 MHz
in a Flex10K30E, a 30-to-1 slowdown, power consumption not mentioned.
But I don't know how the underlying silicon processes compare.

You are trying to compare apples to horses. No, you can't use an FPGA
to implement some existing processor and improve on cost, power or any
other parameter. I never said you could. That would be like using a
kitchen knife as a razor. It won't work so well and has little value.
But if you have an application - it may well be easier to implement in
an FPGA than in a GA144... in fact, I can almost guarantee that!

--

Rick
 
rickman <gnuarm@gmail.com> writes:

On 9/7/2013 4:24 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

If your FPGA designs are expensive or power hungry, then you are doing
things you can't do in an MCU or you are not using FPGAs properly.
They don't need to use any more power than an MCU and in many cases
less. They certainly don't need to be significantly more expensive
unless you consider every dollar in your designs. At the very low end
MCUs can be under $1 and still have reasonable performance. For $1
you can't get much in the way of programmable logic. For $3 however,
you can get a chip large enough for a CPU (with math) and room for
your special logic.

I've never used an FPGA, microcontrollers have increased in speed faster
than my needs so far. So I can usually bitbang everything or use a
peripheral. I used PLDs for glue logic back in the day but that's it. Oh,
and I bought a small Xilinx dev kit which I got to make an LED flash, then
put in a drawer for 15 years.

So your use of MCUs is based on inertia?

Partly I suppose.

Or I could say that my projects so far all require a microcontroller
anyway, and it seemed likely that a separate FPGA was always going to be
more expensive than, say, choosing a faster CPU.

An STM32F4 can bitbang a PIO at 84 MHz. (It can't do anything else then,
but still...)

But could you give an example of your $3 one? Or a favorite?

A startup company called Silicon Blue came out with a line of FPGAs
targeted to the high volume, low power market that exists for portable
devices. They were preparing their second device family and were
bought by Lattice Semi. The first family was dropped and the second
family is the iCE40 (for 40 nm). They are very low power although
smallish. The largest one has 8 kLUTs, the smallest 384 LUTs.

Last winter I was looking at designing a very low power radio
controlled clock to run in one of these. They were still playing a
shell game with the devices in the lineup and the 640 LUT part I
wanted to use was dropped... :( The only real problem I have with
these devices is the packaging. Because of the target market the
packages are mostly fine pitch BGAs. Great if you are making a cell
phone, not so great if you are designing other equipment.

You can get the 1 kLUT parts for under $3 and possibly the 4 kLUT
parts. It has been a while since I got a quote. The 1 kLUT part is
big enough for a soft core MCU plus some custom logic.

OK, thanks, will check them out.

BTW, with MCUs Digikey will give you a realistic price quote. In the
FPGA world the distis never give you a good price unless you ask for a
quantity quote. I have gotten prices quoted to me that were half the
list price. FPGA companies play a different marketing game and have a
lot of room to negotiate in order to buy a socket.

--

John Devereux
 
On 9/8/2013 2:04 PM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

On 9/8/2013 4:04 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

On 9/7/2013 4:24 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

If your FPGA designs are expensive or power hungry, then you are doing
things you can't do in an MCU or you are not using FPGAs properly.
They don't need to use any more power than an MCU and in many cases
less. They certainly don't need to be significantly more expensive
unless you consider every dollar in your designs. At the very low end
MCUs can be under $1 and still have reasonable performance. For $1
you can't get much in the way of programmable logic. For $3 however,
you can get a chip large enough for a CPU (with math) and room for
your special logic.

I've never used an FPGA, microcontrollers have increased in speed faster
than my needs so far. So I can usually bitbang everything or use a
peripheral. I used PLDs for glue logic back in the day but that's it. Oh,
and I bought a small Xilinx dev kit which I got to make an LED flash, then
put in a drawer for 15 years.

So your use of MCUs is based on inertia?

Partly I suppose.

Or I could say that my projects so far all require a microcontroller
anyway, and it seemed likely that a separate FPGA was always going to be
more expensive than, say, choosing a faster CPU.

An STM32F4 can bitbang a PIO at 84 MHz. (It can't do anything else then,
but still...)

I think what you are saying is that the MCU is a key part of your
design and you use a lot of code in it.

Yes, basically. "A lot" being only about 64k probably, not much for
an MCU but it would push the price up for an FPGA, I think.

Ok, if your emphasis is on using a commercial MCU that will do the
job. But unless your MCU needs are just too large for something that
fits in an FPGA, you have it backwards in my opinion. Why have both
when you can just use an FPGA?

I'm pretty sure that an FPGA with enough RAM would be far too expensive
(compared to the $3 200 MIPS CPU).

I won't pretend that an FPGA is the right solution for every task. But
I think MCUs are often used because that is what the designer is used to
and FPGAs aren't understood well enough to consider. Is "enough" RAM
more than what a given FPGA has? I don't know, how much RAM do you
really need? Most MCU projects I have worked on never had a realistic
RAM estimate, it was all by the seat of the pants. The fact that code
uses RAM makes it harder to estimate. FPGAs are a lot easier to design
with in that regard. RAM quantities have to be known exactly. LUT
counts have to be estimated though, so it's not totally different.


An M3 or M4 with attached FPGA + memories would be interesting, if it was
at a reasonable price.

Or even an AVR... are you reading Ulf? I think the requirements for
MCUs are often overstated. Most of the sort of work I do could be done
with an 8051 (ugh!), especially one of the higher performance devices,
but I often don't have the real estate for a separate MCU unless I can
treat it as an I/O expander.


NXP have an M4 with an attached M0, which sort of goes in that direction; the
M0 does the more deterministic simple stuff, the M4 does the number
crunching and runs the more complicated software.

Hell, I'd be ecstatic if they provided FPGAs in small enough packages so
I can use a 32 pin QFN for an MCU and the same footprint for an FPGA.
Well, Lattice *does* put an XO2 in a 32 QFN, but only 256 LUTs, which is
not big enough for much. Why not 1 or 2 or 4 kLUT? For some reason
FPGA vendors all think you need more I/O and fewer LUTs.


But could you give an example of your $3 one? Or a favorite?


[...]

You can get the 1 kLUT parts for under $3 and possibly the 4 kLUT
parts. It has been a while since I got a quote. The 1 kLUT part is
big enough for a soft core MCU plus some custom logic.

OK, thanks, will check them out.

I haven't gotten a quote on these parts since they were bought by
Lattice. I'd appreciate a pricing update if you get one. They should
be able to do a lot better than the Digikey price, I know Xilinx and
Altera always do. Heck, the Digikey pricing for most FPGAs doesn't go
above qty 1... if nothing else there should be some quantity price
breaks.

Unfortunately I don't really have a live application, so would only be
able to buy them as "education" at this stage.

I got a freebie eval board for the iCE40 but haven't fired it up. I
want to measure some power consumption numbers. The data sheets changed
the static current a while back, well after they had been out, just
after Lattice bought SiliconBlue, so I'm not sure what that was about.
The 1 kLUT part went from around 40 uA to 100 uA quiescent current. The
dynamic current is still very low though, single digit mA with the
device full of 16 bit counters running at 32 MHz. But they seem to have
removed that data when they changed data sheet formats.
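
The kind of test fixture meant here, as a hedged Verilog sketch (module
name and parameter are illustrative; pick N to fill the LUTs of the
device under test):

  // Fill the part with free-running 16-bit counters and measure the
  // dynamic supply current. Outputs are kept live so the synthesizer
  // doesn't optimize the counters away.
  module power_fill #(parameter N = 64)(
      input  wire         clk,      // e.g. 32 MHz
      output wire [N-1:0] toggle
  );
      genvar i;
      generate
          for (i = 0; i < N; i = i + 1) begin : g
              reg [15:0] c = 0;
              always @(posedge clk) c <= c + 1;
              assign toggle[i] = c[15];
          end
      endgenerate
  endmodule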

--

Rick
 
rickman wrote:
On 9/7/2013 5:48 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 3:39 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 1:59 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 11:10 AM, Joerg wrote:
For example, this:

http://www.ti.com/lit/ds/symlink/tms320c5535.pdf

I don't see it for $3. Did you get a quote for your project? TI
says
it is $5 to $8 at qty 1k depending on the flavor. You still need
to add
Flash.


1k qty is $3.67 at Digikey:

http://www.digikey.com/product-detail/en/TMS320C5532AZHH10/296-32741-ND/2749713




Not the same part, pal. You're trying to pull a fast one on me? Are we
talking about the TMS320C5535 with "tons" of memory or the TMS320C5532
with *much* less memory?


It doesn't have the single access RAM but it does have 64k dual access
RAM. That's a lot of RAM in embedded.

You do this often. Start talking about one thing and shift the context
to another. ...


I didn't. I said it is a DSP with large memory, which it is.

You first give a part, the C5535, as the chip with big memory, then it
becomes the C5532 which is less memory and less expensive. I can't tell
what you are talking about when the subject changes.

There are no subject changes. Did you even click on the link? The
datasheet is for the _whole_ series, _including_ the 5532. It clearly
says so in the first line on the first page.

[...]

The fact that most FPGA don't have flash is fine ... but ... there must
be a decent bootloader inside. In one of the upcoming projects it must
be able to bootload via USB. So the device must wake up with a certain
minimum in brain functionality to handle the USB stuff. With FPGA that
can become a challenge unless you provide a serial memory device (which
adds cost).

No, you won't find any FPGAs which can wake up talking over USB. But
you will find FPGAs with internal Flash if you wish to design a USB
bootloader.

Then one of those would be required I guess. USB connectivity is
important these days.

... It has been a while since I
looked
hard at DSP chips, but I don't recall any I would call remotely
"big"
for $3. The TI chips that would be "big" are the TMS6xxx line
which
start somewhere around $20 the last time I looked and that
requires
all
memory and I/O to be separate. The smaller DSP chips that you
can get
for the $3 range are not "big" in any sense and only a very few of
them
include Flash memory. So you still need another chip.


It has tons of memory on board.

Yes, and many FPGAs have "tons" of memory on board although not for
$3... but then this isn't a $3 part either...


It is a $3 part. See above.

No, you need to pick a part number and stick with it.


I gave a part number. Still waiting for your $3 FPGA part number :)

Actually you gave me two part numbers, one for $5 and one for just over
$3. What's your point? I gave you info to find the iCE40 line. Xilinx
also makes FPGAs that are very affordable, as do Altera and Lattice.


Both of the ones I gave you are $3. The DSP costs $3.02 and the MSP430
is $3.09. These are over-the-counter no-haggle prices. Can a $3 iCE40
device emulate a TMS320 or a big MSP430? I can't tell because I don't
know this Lattice series and I am not an FPGA expert. But it sure looks
like they'd have a hard time.

No, the C5535 part is not $3. That is what I mean by two part numbers.

The C5532 is $3. That is the part in the Digikey link I gave. Datasheets
are often for a whole series, economy to deluxe. I thought that became
clear when you looked at the datasheet.

[...]


Depending on your design requirements there are any number of FPGAs that
will do the job and some may be $3. What are your design requirements?


As I said, I do not have any hammered out ones yet but it'll come. This
was just about your $3 claim. So I gave some examples of devices that
cost $3.

Yes, and there are FPGAs in that price range which can be used to
implement a CPU plus other logic.

Well, yeah, but we were talking about an appropriate and similarly
classed CPU, not a 30c 8-bitter from China.

I understand the concept of work that can't be moved. You don't need to
continue to explain that. I was asking why you said most of your work
didn't have that requirement and yet you still were debating the point.
Now I get it, you are talking about two different things, work that can
be moved and work that can't be moved.


Yup. Hence the need for availability of local programmer talent. Less
local availability means potential problems. That is because (where
possible) I like to use architectures I am familiar with.

Programmer talent means long-term availability. For example, if a client has an issue
with an 8051 design long after the original programmer has moved on I
could find programmers within a 10-mile radius. Try that with an FPGA.
In San Jose it may be possible but not out here.

I can't speak of your environment. I know my friend of many years
stayed away from FPGAs in spite of the fact that he is a very capable
designer. He finally paid me for a week of FPGA design work which I
then turned over to him and helped him get started with HDL. It's not
hard at all. You don't really need anyone special. That is the sort of
thinking I am trying to dispel.

For you it may be easy. I am somehow not the kind of guy that easily
learns programming languages. Human languages, yes. Really weird analog
or RF tricks, yes. C, C++ or HDL, not really. I can read through code to
some extent but it is like having to plow through a document in
Portuguese (which I had to do).


Another example. A software designer came to a newsgroup looking for
info on programming FPGAs. He used the mindset of a software guy and
wanted to do a "hello world" program. We tried to explain to him that
hardware isn't software and HDL isn't C. But he persisted and I gave
him advice over a week or so. I tried to turn it into a consulting gig
but his bosses didn't want to pay the bucks. He ended up doing just
fine with his software mindset and convinced his boss to pay me $500
over my protests. I cashed the check when it came.

The point is that FPGAs are not so hard that you need a unique talent to
design them. That may have been true 10+ years ago, but they are very
mainstream now and much easier to work with. I bet even *you* could do
an FPGA design, lol.

Maybe, but it'll take a while. I did some uC programming though so maybe
that helps.


I don't care where you are located, if you can't find an FPGA designer,
you aren't looking very hard.

In Cameron Park? Most if not all FPGA guys out here work for Intel, they
won't have time for consulting gigs and may not even be allowed to do it.

Not sure what the requirements are for your CODEC, but I have been
using
the AKM parts with good results. Minimal or *no* programming,
configuration is done by a small number of select pins, very
simple. I
have yet to find another one as small as the AK4552 or AK4556.


Plus their prices are quite good.

Which, AKM or the other? I'd like to think I can get a CD quality CODEC
for $3 from nearly anyone. I mainly picked AKM because of the size, 6x6
mm without going to a microBGA.


AKM has good prices.

Ok. I have no complaints on prices. Their lead time can be a problem.
I had a conversation, disti, manufacturers guy and me. I was
complaining about a 14 week lead time and he bragged that a 14 week lead
time was *good*. I give my customers a 10 week lead time... see the
problem? Digikey sells them now so it is not such an issue. I even
ended up speaking with a buyer or planner who was coordinating the
shipment of an order last spring. Once you reach them they are very nice.

Yes, Digikey has them. My rule is that if Digikey doesn't have something
I try to avoid the part. Except for Coilcraft.

High 10s or low 10s. Up to, say, 20 or 30 ksps is easy to do in an
FPGA with decent resolution, 12 bits. Getting 16 bits is harder, I've
not tried it, but should be possible.


Mostly I need 40-50 ksps. But 20 is often ok.

I haven't done 12 bits at 50 ksps, but I expect it is doable. Just
cross the t's and dot the i's.

It's not just getting it done in principle but also getting it to yield
at least 10.5 bits ENOB or so at that speed. Even with a uC that can be
a challenge.

I was looking at using an LVDS input for a comparator and Xilinx did it
in a demo of an SDR. They are very short on details, but they talk about
1 mV on the input. I know that's not anything special, I'm hoping to do
better, much better.


If you can keep substrate noise in check it could work. Try to remain
fully differential as much as you can. Not sure if FPGA design suites
still let you hand-place blocks so you can avoid making a big racket
right next to the ADC area.

*Everything* in an FPGA makes noise, it's all digital. Yes, you can
hand-place logic if you want. That is the sort of thing best done at the end
if possible when you are ready to finalize the chip. But what would you
have in an FPGA design that makes more noise than anything else? Each
logic block is very small and has a pretty low power consumption. It
would be the I/O that has significant power spikes and you have total
control over that.

What sometimes causes issues are the FLLs or PLLs in there that create
the master clock. But I only know that from uCs. With FPGAs we had EMI
problems and sometimes they required unorthodox measures. Had the same
thing with a discrete RAM bank: we had to run other parts in the FPGA as
dummy loads, ping-pong style, to reduce the noise energy. Their FPGA guy
almost threw me out of his cubicle when I suggested that, but then it
worked. He bought me a coffee at the canteen :)

I wasn't referring to a specific project, just your claim that FPGA can
do the same job as processors at the same price.

Yes, that is my claim. The obvious exception is when some feature is
needed that just isn't available in an FPGA. I'm not saying *every*
project can be done better in an FPGA. I'm saying that designers tend
to just not consider FPGAs when they are often viable solutions.


In most of my apps I need much of the functionality that a decent uC
affords, like the $3 device from the MSP430 series I mentioned.

If you need 256 kB of memory then you won't reach a $3 price tag. If
you need something more like the low end processor you mentioned that
might be doable in the low end FPGAs. They have block RAM, but it
scales with the size of the chip. When you have a specific requirement
we can look and see what matches.

It'll be a while until I know for sure. Because whether or not I need
some massive compensator routine depends on the performance of a
complicated mechanical part that we won't have before spring next year.

[...]


But what I see most is this: The respective client has in-house talent
for writing code. They are familiar with a particular architecture,
have
a vast arsenal of re-usable code built up, and naturally they do not
wish to give this up. If it's a dsPIC like last year, then that goes in
there. If it's Atmel and MSP430 like this year, then that is used. Has
to be, the customer is king.

Yeah, well that is a deal killer for *any* other alternative. That is
not related to what I was saying. My point is that if you don't have
any specific requirement that dictates the use of a given chip, an FPGA
has as good a chance at meeting the requirements as an MCU or DSP. In
fact, FPGAs are what get used when DSPs aren't fast enough. My point is
you don't have to limit them to the high end. They also do very well at
the low end.


No disagreement there, programmables have come a long way since the days
of GALs. Which I never used because they were expensive power guzzlers.

One other challenge that needs to be met in most of my cases is
longevity of the design. An FPGA would have to remain available for more
than just a few years. For example, one of my uC-based designs from the
mid 90's is still in production. Since I kind of had a hunch that this
would happen I used an 8051 family uC. Is there something similar in the
world of FPGA?

That is typically not a problem, but pick a device that is relatively
new to start with. The vendors are *all* about their latest and
greatest products. I guess they need a critical mass of design wins up
front which they get revenue from over the life of the part. So they
push the newest stuff and let you ask about the older parts.

As long as they do not require a formal RFQ from the (not yet existing)
purchasing department of the company. Then I'd walk. No kidding, this
happened on a programmable device in the 90's.


I don't think there is anything like the 8051 other than the 22V10
perhaps. The 8051 is an anomaly in the MCU world. ...

Not an anomaly, it was bound to happen. There are many areas where 2nd
source is a must. The usual paranoia by manufacturers that this puts
downward pressure on the price was debunked by this very uC. It is the
only gripe I have with it, that it is expensive compared to more modern
ones. But you have no choice if there must be a 2nd source and they know it.

So it was also not very surprising that "Hayabusa editions" came out,
screaming along at around 100 MHz.


... You won't see a DSP
equivalent for example. So far users typically want more, more, more
from FPGAs. So a stationary design would not have a market. Even
though there are ever larger markets for low end parts, they keep
redesigning them to make them cheaper. When they do that they add
incompatibilities, because that doesn't affect the bulk of the users:
recompile and you are good to go. But pin compatibility, no, that just
doesn't exist other than within a single family. Fortunately product
life is typically not an issue.

It is for me. In the old days we preferred Analog Devices DSP (2110?
Forgot the part number) because they were the staple. We just used lots
of them per board. Cheap, too.

Come to think of it, in most of my designs that had a DSP it cost
around $5-10; nowadays they are down to around $3. It's not very
expensive anymore, but finding a programmer to work locally can be tough.

--
Regards, Joerg

http://www.analogconsultants.com/
 
rickman wrote:
On 9/7/2013 7:45 PM, Joerg wrote:
rickman wrote:
On 9/7/2013 6:23 PM, Joerg wrote:

How long do the usual FPGA stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

Life span is typically *much* better than MCUs.

First, there are *no* second sources so whatever chip family you select
is the only one that will fit your layout. ...


That is one of my concerns. With 8051 uCs you have multiple sources as
long as you stick to customary packages such as a 44-pin flat-pack.

Yes, but 8051s aren't DSPs either, are they? You seem to be switching
gears again. I can't keep up. I know you do different designs, but can
the FPGA be wrong for *all* of them? You seem to apply the requirements
of every design to all of them.

I do various designs, sometimes simultaneously. For DSP we often just
plop down a TMS320 bare-bones edition and be done with it. It's like
buying a Ford F-150 for the ranch: it may be too big, but it is not
expensive and you almost can't go wrong with it. I had designs where the
DSP workload ended up at 5%, but at $3 a pop nobody was concerned.

And no, mostly I don't even have the requirements until after the
project already started. Sometimes weeks down the road the sensor guys
call in, "Houston, we have a problem". This is the kind of project
companies like to use consultants for, since it can be utterly
frustrating for engineers. Us guys are used to this stuff.

... There *may* be more than one
member of that family that will fit the same socket, that is common, but
not guaranteed. So you often will get a choice of two, three or four
sizes and you often get an upgrade path from your first selection. Just
in case you are familiar with the compilation process, in an FPGA you
*always* have to recompile for the target. Even if they are pin
compatible you can't load a design for an whatever-02 chip into a
whatever-03 part. Those are the limitations.


Yeah, that I was aware of. And changing to a whatever-03 would be a
major headache in many of my cases, because it's medical, aerospace or
similar, where that can trigger a complete re-cert.

Why would you need to change to the whatever-03. Once it is qualified
you can stick with it. My point is that you have flexibility in the
device, no one is making you switch.

I was thinking about the case where whatever-02 becomes unobtanium.

As to the market life, that is typically well over 10 years. Spartan 3
was introduced some 10 years ago and it is not yet the oldest chip
Xilinx has in current full production. I'm still considering using it
for new designs. Similar situation for Altera. ...


Well over 10 years is good. But only if that means no change to any new
versions that require a re-compile. Early on in my career that happened
and one guy promptly got busy with three months of regression testing.
Oh what fun.

Why not talk to the vendors?

We did that and all we got was a "Sorry about that". The designed-in
device was discontinued.

... I was just burned by
Lattice announcing EOL of their XP line. This was because they got a
new guy in at the top with a new broom I suppose.


Not so cool :-(

Yeah, I'm unhappy about it. I thought I could get more development
funds for a redo but the division reselling this board in their product
doesn't want to spend any cash on it. I've been asked to spend my dime
and I likely will. I make good money on this product.

Looks like a good business opportunity for you :)

I'm sure you can find various MCUs which have been in production for 10
years, but I know Atmel likes to replace products from time to time with
similar "pin compatible" devices which are 99.9% compatible. I expect
for the 8 bit parts life span is not such an issue. For the larger
parts I expect life span is a bit more limited and for the top end
chips, I'm pretty sure their life span is measured in double digit
months. Can you still buy any of the Pentium 4s that were all over the
place seven or eight years ago?


Yup:

http://components.arrow.com/part/detail/41596500S6440784N2936?region=na


... I can't even find a Core 2 Duo.


No problem either:

http://components.arrow.com/part/detail/42952225S9497728N2936?region=na

How do you know these are the parts that were designed in the system of
interest? They made a huge number of variants and I know I have seen
EOL notices for Pentium 4s.

Well, you do have to look in the schematics. You only asked whether one
can still buy Pentium 4 and I said yes, and gave evidence. Are you
changing the game now? :)

Legacy stuff in the PC world does not go away fast. To this day you can
still easily buy brand-new ISA-bus PCs. Because scores of them are used
in production facilities. I helped replace one a few years ago and it
also had a processor from the days of Methuselah.

What lifespan have you seen for MCUs?


The 89C51 I designed in back in the mid-90's is still alive. Not sure how
long it was in production when I designed it in. The nice thing is that
these are made by several companies, even Asian ones such as Winbond. So
it was no surprise when I took apart our pellet stove for maintenance
and found one of those in there as well.

2nd source is important to me, and my clients.

If you really need that level of consistency, then you will be using
nothing but 8051s all your career. I don't know of any digital
component that has lived as long as the 8051 other than perhaps LS-TTL.
I also don't know of any other MCU that is second sourced. If the
8051 does what you need, then go for it. But again you are mixing
conversations. That's why it is so frustrating to have a conversation
with you. ...

I merely said it matters in some cases. Not in all cases.


... You talk about not being able to use a part unless it has a
product life as long as the 8051 and then you talk about using various
DSP chips in the same context. I *know* you won't be able to buy those
DSP chips 10 years from now. TI just doesn't provide that level of
support unless they have a special program for long lived parts I'm not
aware of. I've seen lightly selling DSPs drop from the marketplace
after less than 5 years.

Well, let me show you a blast from the past ...

http://www.rocelec.com/search/finished/TMS320C10NL/0/1/contains/?utm_source=supplyFrame&utm_medium=buyNow

20,286 in stock, ready to ship.


The DSP market was just a tiny exploration by TI initially. Then they
saw cell phones as a way to utilize that capability. They actually
reorganized the entire company to take full advantage of it. As a
result they ended up with four segments for DSPs.

Analog Devices had the market first; they really ruled in the early
90's. We had boards with about a dozen 16-bit FP DSPs on there.


1) Cell phone devices - small, low power and cheap in large quantities.
Not much need for longevity at all... basically the C5xxx line.

2) Cell base stations - powerful devices that can handle multiple
channels, power consumption not important and cost is secondary. This
is the C6xxx line. Again, they focus on new, not longevity.

3) Scientific DSP - floating point. C67xx lines. Relatively low
volumes compared to the other two, but they seem to think it is an
important market. New designs are not as frequent. Longevity might be
better than the other two, but no promises.

4) Motor control, white goods, etc - fixed point with price the major
factor. These have appeared in a range of variations, some with flash,
some with ADCs, etc. These are almost MCUs with similar performance,
slow compared to segment 1 and 2. Intended for high volume apps, but
again, longevity is not important.

So if you are going to consider DSPs for your apps, I expect you would
be looking at the last category. I'm pretty sure I wouldn't be
designing from this group if I wanted to be building this board 10 years
from now though. Have you talked to TI about longevity?

Not yet. That comes if I decide to have a DSP in a project. But mostly
my clients have those discussions because that's their turf, I am more
the analog guy. On large projects stuff gets put in writing about
guaranteed years of supply. On some chips it goes as far as putting the
mask data in escrow, especially with smaller companies where there is a
chance of them going belly-up down the road.

--
Regards, Joerg

http://www.analogconsultants.com/
 
On Sat, 07 Sep 2013 13:23:48 -0400, rickman <gnuarm@gmail.com> wrote:

On 9/7/2013 11:17 AM, krw@attt.bizz wrote:
On Fri, 06 Sep 2013 23:59:59 -0400, rickman<gnuarm@gmail.com> wrote:

On 9/6/2013 7:10 PM, Joerg wrote:
That is often the problem. Sometimes a buck fifty is the pain threshold.
Not in this ATMega case, the 2560 is very expensive but comes with lots
of ADC and analog muxes and all that. Things that will cost extra with
an FPGA solution and eat real estate.


For $3 however, you can get a
chip large enough for a CPU (with math) and room for your special logic.


For $3 I can get a big DSP.

What "big" DSP can you get for $3? It has been a while since I looked
hard at DSP chips, but I don't recall any I would call remotely "big"
for $3. The TI chips that would be "big" are the TMS320C6xxx line, which
start somewhere around $20 the last time I looked and that requires all
memory and I/O to be separate. The smaller DSP chips that you can get
for the $3 range are not "big" in any sense and only a very few of them
include Flash memory. So you still need another chip.

We pay less than that for the largest of the ADI Sigma DSPs. I just
received a quote for the smallest CPLD for around $.75. I have use
for CPLDs and FPGAs but they're simply too expensive for most of my
applications.

It seems the prices have come down in recent years, but still, the parts
I have seen have no Flash. So you need to add in that cost. But the
Sigma parts aren't really general purpose. They are good if you can
make your app fit the DSP design; otherwise they aren't much use. I
pursued them hard a few years ago until an FAE just threw in the towel
and said I couldn't do my app on their part.

Good grief. The issue wasn't to show YOU that YOUR application was
better in a DSP. Like many FPGA weenies, you're trying to sell a part
that has a niche market as the universal hammer.

Even a "small" FPGA can run rings around a DSP when it comes to
performance. Usually "big" in DSPs means fast and when you want really
fast DSP you use an FPGA with all the parallelism you can handle. DSPs
can't touch FPGAs for speed, even with low power.
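
To make the parallelism point concrete, here is a hedged sketch
(coefficients and widths are arbitrary placeholders): a 4-tap FIR in
which all four multiply-accumulates complete in a single clock, where a
DSP would sequence them through one MAC unit.

  module parallel_fir (
      input  wire               clk,
      input  wire signed [15:0] x_in,
      output reg  signed [33:0] y_out
  );
      // placeholder coefficients, not from any real design
      localparam signed [15:0] C0 = 16'sd1024, C1 = 16'sd2048,
                               C2 = 16'sd2048, C3 = 16'sd1024;
      reg signed [15:0] x0 = 0, x1 = 0, x2 = 0, x3 = 0;

      always @(posedge clk) begin
          x3 <= x2;  x2 <= x1;  x1 <= x0;  x0 <= x_in;  // delay line
          // four multipliers evaluated in parallel, one result per clock
          y_out <= C0*x0 + C1*x1 + C2*x2 + C3*x3;
      end
  endmodule

Replicate that for as many channels or taps as the fabric holds and the
throughput scales; a DSP has to time-share its MAC units instead.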

Comparing the two is silly. Each has its place.

That makes no sense.

Hammer, meet nail.

There will always be some designs that a given
part is a perfect fit for, but that doesn't mean different devices can't
be compared. The question is what is the best fit for a given job.

That is *NOT* what you're arguing. You're making the general case
that FPGA >> DSP >> uC, which is just silly.

I am hearing some say that FPGAs aren't the best fit and I find they often
are a better fit than an MCU.

Hammer, meet nail.

Much of it has to do with mis-information
about what FPGAs can and can't do and what is required to make them run.

Nonsense.

Just read Joerg's post.

I have.

Much of the stuff he objects to is specific
to the individual devices he has worked with.

Like DSPs. I agree with him. FPGAs aren't in his future. You keep
sugar-coating FPGAs and (erroneously) tear down DSPs. Note that I'm
more of an FPGA kind of guy than a DSP sort but in this case Joerg is
absolutely right. FPGAs only compete in small niche markets and those
where money is no object.

What I often find is people only doing Altera or only Xilinx. With uC
it's a bit easier, a PIC guy can be cajoled into programming an AVR,
usually.

I'm totally device agnostic. I have worked with all brands other than
MicroSemi (formerly Actel). I even worked with Lucent which was bought
by Lattice and I believe is still sold and supported (but not the GD XP
line which I had designed into a cash cow product and will have to
redesign now). Ever hear of Concurrent? They were bought by Atmel.
Their devices were followed by the AT40K. I worked with the Concurrent
devices. lol So you can see I go way back.

The difference anymore is very small. The only reason I prefer one
over the other is software and that takes a back seat to most other
variables (in rough order of importance, 1. cost, 2. cost, 3. cost).

I have not found a big difference in software. The software is
different, but those differences are not important. It all compiles my
HDL fine (mostly because they often use the same third party tool
vendors) and simulation just works anymore.

The software is different in how it works, not what it does. That
difference makes *NO* difference to the end result or the cost of the
product. IOW, it's completely irrelevant. At one time it may have
been important, but only insofar as much of it didn't work
(making the hardware useless).

The one feature that isn't universal is programming modes. This can
make a big difference in indirect costs (field upgrade, SKU
personalization, etc.) that may not show up directly on the raw BOM.

I don't know what devices you work with, but the ones I use are easy to
program.

Pile on more sugar. You clearly don't work where time is money.

I've used schematic based tools and both VHDL and Verilog. I've worked
with the vendor's tools and third party tools including the NeoCAD tools
which became Xilinx tools when Xilinx bought them.

If anyone tells you they only know one brand of FPGA you are talking to
an FPGA weenie. I find MCUs to vary a *great* deal more than FPGAs in
terms of usage. MCUs need all sorts of start up code and peripheral
drivers, clock control, etc, etc, etc. FPGAs not so much. They mostly
have the same features and most of that can be inferred from the HDL so
you never need to look too hard under the hood.
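
For instance (a generic sketch, not tied to any one vendor), a
behavioral description like this infers a block RAM on most current
FPGA families, with no primitives or vendor library calls:

  module inferred_ram #(parameter AW = 10, DW = 16) (
      input  wire          clk,
      input  wire          we,
      input  wire [AW-1:0] waddr, raddr,
      input  wire [DW-1:0] wdata,
      output reg  [DW-1:0] rdata
  );
      reg [DW-1:0] mem [0:(1<<AW)-1];
      always @(posedge clk) begin
          if (we)
              mem[waddr] <= wdata;
          rdata <= mem[raddr];  // registered read maps to a hard RAM block
      end
  endmodule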

Sure, the feature set and peripherals of micros varies widely. We use
a variety of SoCs from just about everyone. Since most are settling
on ARM, switching from one to the other is pretty simple. Our last
port from one manufacturer to the other took a couple of weeks.

The CPU is the easy part to port, the compiler handles that for you. It
is the drivers for the I/O that are harder.

That's all included in the port. I'm talking from working hardware to
working hardware (the target system not qualified, of course). There
is only about 10% of the code that even has to be looked at.

Their libraries have to have
compatible interfaces and every port is a port.

Wrong. That's all included.

With FPGAs, all you
need to do to switch between brands is normally a new pin list and
timing constraints.

Bullshit! More sugar!

The HDL just compiles to suit the new device.

Oh, you never use libraries? Yet you (erroneously) add that cost into
the DSP/uC bucket.

It has been a while since I ported between brands but it would make sense
if they provide tools to port the timing constraints. That is the only
part that might be any work at all.

In short, there is a lot of FUD about FPGAs. Talk to someone who
doesn't buy into the FUD.

The FUD is on both sides. The support costs aren't as low as you
pretend.

Care to elaborate?

You've TOTALLY forgotten about simulation, for instance. That's a
huge effort that you simply sweep under the rug.

Things quickly unravel when you start relying on real hardware that is
on uC but not on FPGA. Comparators, ADCs, analog muxes, for example.

If you really need it all on a single chip, then yes, you won't find
that on so many FPGAs although Microsemi has their Fusion line with
analog. My cash cow uses a single FPGA and a stereo CODEC. That was
smaller than any MCU design because the MCU would still require the
CODEC (CD quality) and some of the control logic and interface could not
be done with any conventional MCU. I had to vary the speed of the CODEC
clock via an ADPLL to synchronize it with an incoming data stream. I
don't know how to do that with an MCU and no logic. But I can do it all
with FPGA logic and no MCU.
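
For flavor, a stripped-down sketch of the ADPLL idea (not the actual
product design; widths, the nominal rate word and the FIFO-error
feedback are all assumptions): a phase accumulator generates the CODEC
sample enable, and the elastic-buffer fill error nudges the increment
so the rate tracks the incoming stream.

  module adpll_sketch #(
      parameter ACC_W = 24,
      parameter [ACC_W-1:0] NOMINAL = 24'd8053  // ~48 kHz from 100 MHz, assumed
  )(
      input  wire              clk,       // fast system clock
      input  wire signed [7:0] fifo_err,  // elastic FIFO fill minus midpoint
      output wire              codec_ce   // one pulse per CODEC sample tick
  );
      reg [ACC_W:0] acc = 0;              // extra bit holds the carry

      // sign-extend the error and trim the rate word up or down
      wire [ACC_W-1:0] incr = NOMINAL + {{(ACC_W-8){fifo_err[7]}}, fifo_err};

      always @(posedge clk)
          acc <= {1'b0, acc[ACC_W-1:0]} + {1'b0, incr};

      assign codec_ce = acc[ACC_W];       // carry-out marks each wrap
  endmodule

In the real thing the error term would be filtered, but the loop idea
is the same.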

Nonsense. DSPs are also available with CODECs, as are UCs.

You can find a small number of DSPs with CD-quality CODECs and the
same for MCUs. I know, I did this search recently. I didn't find much
and none that suited my other criteria. So the redo of my board will
likely have another FPGA on it.

Goal post shift added to the hammer.

I would appreciate a list of the MCUs/DSPs which have stereo CD quality
CODECs on chip. The Sigma parts from ADI don't count because their DSPs
can *only* be used for certain coding like filters, not general purpose
use.

Sigmas have them. I haven't looked for others.

Last week I reviewed a design with some larger FPGA on there. What I
found fairly disgusting was how much they had to be babied with the
power sequencing. uCs don't have that problem.

If you want to work with the wrong device, then you will find it hard to
work with. There are still single voltage devices on the market. If
this was an old design, most likely it was a Spartan 3 or similar era
device when they (for still unknown reasons) used three, yes, count
them, *three* voltages on the FPGA. The 2.5 volt aux supply was there
solely for the configuration interface which was normally to a 3.3 volt
device! Only from Xilinx...

If this was a new device, then I guess they picked one based on
something other than ease of use, eh? Don't assume all FPGAs are the same.

I thought you just said that there weren't many differences between
FPGA manufacturers?

You are mixing apples and oranges. One manufacturer has many different
families of FPGAs, no? Some are huge power hungry devices that burn a
hole in your board. Others are much lower power and don't burn a hole
in your pocketbook either.

The families all look the same and vary only in density and mix of
memory, speed, MCU, DSP (hmm), and other features.

Good grief, you're arguing both sides.
 
On Sat, 07 Sep 2013 13:35:24 -0400, rickman <gnuarm@gmail.com> wrote:

On 9/7/2013 11:32 AM, krw@attt.bizz wrote:
On Sat, 07 Sep 2013 10:00:51 -0400, rickman<gnuarm@gmail.com> wrote:

On 9/7/2013 4:24 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

If your FPGA designs are expensive or power hungry, then you are doing
things you can't do in an MCU or you are not using FPGAs properly.
They don't need to use any more power than an MCU and in many cases
less. They certainly don't need to be significantly more expensive
unless you consider every dollar in your designs. At the very low end
MCUs can be under $1 and still have reasonable performance. For $1
you can't get much in the way of programmable logic. For $3 however,
you can get a chip large enough for a CPU (with math) and room for
your special logic.

I've never used an FPGA, microcontrollers have increased in speed faster
than my needs so far. So I can usually bitbang everything or use a
peripheral. I used PLDs for glue logic back in the day, but that's it. Oh,
and I bought a small Xilinx dev kit which I got to make an LED flash, then
put in a drawer for 15 years.

So your use of MCUs is based on inertia?

It seems that the "when all you have is a nail..." argument is
prevalent on both sides of this discussion.

Nonsense. I constantly look for MCU solutions for my designs.

You certainly don't look very hard. I keep looking for FPGA solutions
and haven't found one yet. ;-)

But could you give an example of your $3 one? Or a favorite?

A startup company called SiliconBlue came out with a line of FPGAs
targeted to the high volume, low power market that exists for portable
devices. They were preparing their second device family and were bought
by Lattice Semi. The first family was dropped and the second family is
the iCE40 (for 40 nm). They are very low power although smallish. The
largest one has 8 kLUTs, the smallest 384 LUTs.

Everyone has $3 parts, now. It's a matter of finding the FPGA that
fits the application. The line between CPLDs and FPGAs isn't very
sharp anymore and is mostly marketing. CPLDs go well under a buck.

I won't argue that. I don't consider CPLDs in the same vein as
FPGAs, but you are right, the distinction is blurring.

The architecture is the same. They're the same.

OK, let me ask the question(s) I've asked every one of the FPGA
suppliers: define FPGA. Define CPLD. They can't. It *IS* marketing.


You can get the 1 kLUT parts for under $3 and possibly the 4 kLUT parts.
It has been a while since I got a quote. The 1 kLUT part is big
enough for a soft core MCU plus some custom logic.
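
Not the soft core itself, but a sketch of the kind of "special logic"
that shares such a part with one: a rising-edge pulse counter the core
could poll as a register. All names are illustrative.

  module pulse_counter (
      input  wire       clk,
      input  wire       pulse_in,
      output wire [7:0] count       // the soft core reads this as a register
  );
      reg [7:0] cnt = 0;
      reg       pulse_d = 0;
      always @(posedge clk) begin
          pulse_d <= pulse_in;
          if (pulse_in & ~pulse_d)  // detect a rising edge
              cnt <= cnt + 1'b1;
      end
      assign count = cnt;
  endmodule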

IMO, a soft core MCU negates the whole reason for an FPGA. You're
using *expensive* real estate to do what a cheap piece of silicon can
easily do. There is probably an application somewhere that makes
sense but I've always found a better/cheaper solution.

That is the sort of thinking that is just a pair of blinders. I don't
care if the real estate is "expensive". I care about my system cost.

Part cost ~= system cost. MCUs are so cheap that any soft core is
useless. The development costs are a lot less, too. The tool chains
for the embedded stuff suck.

Gates in an FPGA are very *inexpensive*. If I want to use them for a
soft core CPU that is just as good a use as a USB or SPI interface.

Good grief. Do you have diabetes?

BTW, with MCUs Digikey will give you a realistic price quote. In the
FPGA world the distis never give you a good price unless you ask for a
quantity quote. I have gotten prices quoted to me that were half the
list price. FPGA companies play a different marketing game and have a
lot of room to negotiate in order to buy a socket.

"DigiKey" and "realistic quote" don't belong in the same sentence. For
any quantity catalogs just don't cut it.

Your opinion. I don't sell 100,000 quantities, so the prices I get at
Digikey are often competitive with the other distis. Certainly they
give you a ball park number for comparison purposes.

If you're buying no more than 1K pieces, you're sorta stuck with the
DigiKeys of the world. I am for prototypes, though I build as many
prototypes as I did in production at my last job. ;-)

The point is that
with FPGAs, *no one* gives you a good price unless you get the
manufacturer involved. That is one down side to FPGAs.

Sure, I'll buy that but it just solidifies the fact that FPGAs really
aren't mainstream components. They are a niche and probably always
will be, unfortunately.
 
On Sat, 07 Sep 2013 12:46:59 -0700, Joerg <invalid@invalid.invalid>
wrote:

I don't see how the equivalent of a TMS320 or a big MSP430 could fit
into one of these small Lattice devices.

BTW, watch the TMS320C5000-series parts. The DMA is seriously broken
if you're using the BSP (I2S/TDM interfaces). The McBSP sucks, too,
but that's a different issue. Last I knew they had no intention of
fixing I2S/TDM DMA, either.
 
On Sat, 07 Sep 2013 15:23:39 -0700, Joerg <invalid@invalid.invalid>
wrote:

rickman wrote:
On 9/7/2013 4:46 PM, Joerg wrote:
Paul Rubin wrote:
Joerg<invalid@invalid.invalid> writes:
I don't see how the equivalent of a TMS320 or a big MSP430 could fit
into one of these small Lattice devices.

I had thought the parts of those processors that would bloat up badly
(instruction decode etc.) are pretty simple so the overall effect of the
bloat is ok in the scheme of things. The parts doing the most work
(memory, arithmetic) are done in the FPGA hardware (RAM and DSP blocks,
adders connected to the LUT's somehow) as efficiently as on the MCU's.

I do think softcores seem like a silly idea a lot of the time, and am
looking forward to more low end FPGA's with MCU blocks.


Much of it has to do with legacy code. Yes, some things could even be
done more efficiently in the FPGA because you can actually streamline
the HW to the task, something neither uC nor DSP allow. For example, why
have a 32-bit HW multiplier when you know you'll never exceed 23 bits?
But legacy code won't run anymore and you need FPGA specialists to make
it all work.

No, you would need a DSP specialist. The FPGA designer only needs to
know how to code the FPGA.


So for this kind of solution in an FPGA you need a DSP specialist and an
FPGA specialist? That would be a problem.

Pick ones that are in the automotive market. Support for fifteen
years is required.

But that is exactly the point of the FPGA in DSP apps. You code to the
app, not to a processor.


How long do the usual FPGA stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

The usual? About 30 minutes. ;-)
 
On Sat, 07 Sep 2013 19:02:53 -0400, rickman <gnuarm@gmail.com> wrote:

On 9/7/2013 6:23 PM, Joerg wrote:

How long do the usual FPGA stay in the market? Meaning plop-in
replaceable, same footprint, same code, no changes.

Life span is typically *much* better than MCUs.

Like that new kid on the block, the 8051?
 
On Sat, 07 Sep 2013 14:33:41 -0700, Paul Rubin
<no.email@nospam.invalid> wrote:

rickman <gnuarm@gmail.com> writes:
How about an MCU array instead? http://www.greenarraychips.com/

Yes, we've had many discussions about that part ;-).

considering a softcore "silly" is not a useful engineering analysis.

The engineering analysis is implied: it takes far more silicon to
implement a microprocessor in LUTs than directly in silicon, plus you
lose a lot of speed because of all the additional layers and lookups.

Yes, and add the "markup" for *being* an FPGA (production quantities).
Bernd Paysan rolled his own small processor design for an ASIC

Even more silly. I've done silly things like this but only because of
stupid management edicts (*NOT* based on any engineering analysis).

Yes, the ASIC bypassed the relative inefficiency of doing the same thing
in FPGA's. It would be cool to have some tiny processors like that
available as hard cells on small FPGA's.

Yes, but it would still be quite expensive, compared to a similar
(external) uC, including the I/O.
 
