AREF bypass capacitance on ATMega2560?

On 9/12/2013 1:03 PM, Joerg wrote:
Piotr Wyderski wrote:
Joerg wrote:

Question: What do you do with an FPGA in a radio?

As a matter of fact, it is one of the best places to use an FPGA. :)
SDR with FPGA reconfiguration capabilities is the ideal solution.
Not for a regular FM noisemaker, though...

Best regards, Piotr

*) It was exactly the only time in my life when I used an FPGA.
+ a 14-bit@65MHz Analog Devices ADC + 104 MHz DAC.


It's going to be a tough sell for a radio. Even if they found one for $3
that's too much and they also can't stomach the 10sec to download the
compiled data in production.

You really have no interest in learning anything about FPGAs that has
happened in the last 10 or 15 years do you?


Everything has to fit into the $149.95 sale price at the auto parts
place, with fat profit margins for everyone and their middlemen,
speakers, wires, neon-colored huge window sticker, a discount coupon for
installation, free coffee and free waffles :)

You two are talking about totally different radios.

--

Rick
 
On 12.9.13 10:41 , Joerg wrote:
Tauno Voipio wrote:
On 12.9.13 9:23 , Joerg wrote:
Tauno Voipio wrote:
On 12.9.13 8:24 , Piotr Wyderski wrote:
Joerg wrote:

It's going to be a tough sell for a radio.

A car radio? Well, there's nothing to reconfigure, maybe
except of the station. :) It had to be a homebrew wideband
every-conceivable-modulation-capable scanner. It works to
some extent at the hardware level but I've kind of lost my
interest in SDR to finish the software side. Read: got married. ;-)

Best regards, Piotr


That's the right attitude: Forget radio, you have better things to do.


But those have serious financial consequences down the road. Diapers,
braces, tuition costs, driver license, higher car insurance premiums,
the family car wrapped around a power pole, car insurance going up some
more because of that, and so on.


Yep - but that's a blunder everybody has to make for themselves.


So according to that we'd both be results of blunders? :)

Yes - and winners, at odds of about 1 in a few hundred million in the great
race of Nature.

--

-T.
 
rickman wrote:
On 9/12/2013 1:03 PM, Joerg wrote:
Piotr Wyderski wrote:
Joerg wrote:

Question: What do you do with an FPGA in a radio?

As a matter of fact, it is one of the best places to use an FPGA. :)
SDR with FPGA reconfiguration capabilities is the ideal solution.
Not for a regular FM noisemaker, though...

Best regards, Piotr

*) It was exactly the only time in my life when I used an FPGA.
+ a 14-bit@65MHz Analog Devices ADC + 104 MHz DAC.


It's going to be a tough sell for a radio. Even if they found one for $3
that's too much and they also can't stomach the 10sec to download the
compiled data in production.

You really have no interest in learning anything about FPGAs that has
happened in the last 10 or 15 years do you?

Oh, I do. I just reviewed a design that has some rather fat ones in
there and it would hardly have been possible to do this without FPGA.
However, there are circuits where FPGAs are a perfect fit and others
where they just aren't. In ordinary radios they usually aren't.

Everything has to fit into the $149.95 sale price at the auto parts
place, with fat profit margins for everyone and their middlemen,
speakers, wires, neon-colored huge window sticker, a discount coupon for
installation, free coffee and free waffles :)

You two are talking about totally different radios.

The last few posts were about car radios, because the topic was
electronics and temperature exposure in cars.

--
Regards, Joerg

http://www.analogconsultants.com/
 
On Thu, 12 Sep 2013 09:58:33 -0700, Joerg <invalid@invalid.invalid>
wrote:

krw@attt.bizz wrote:
On Wed, 11 Sep 2013 07:53:16 -0700, Joerg <invalid@invalid.invalid
wrote:

krw@attt.bizz wrote:
On Mon, 09 Sep 2013 16:38:32 -0700, Joerg <invalid@invalid.invalid
wrote:

krw@attt.bizz wrote:
On Mon, 09 Sep 2013 12:17:34 -0700, Joerg <invalid@invalid.invalid
wrote:

Joerg wrote:
krw@attt.bizz wrote:
On Sun, 08 Sep 2013 17:15:18 -0700, Joerg <invalid@invalid.invalid
wrote:

krw@attt.bizz wrote:
[...]

... Design to 85C isn't just a good idea, it's the
spec (-40C to 85C).
It is a bad spec. The result of such flawed design evidences itself over
and over. For example, the minivan of friends of ours would not start
when parked at a mall on a hot day after more than 15mins of driving. It
would (sometimes) come back to life if you let it sit for half an hour
so the radiated engine heat became less. In the winter it was mostly ok.
That design is IMHO junk.
Oh, the spec I mentioned above isn't for the engine compartment or any
of the ignition or safety gadgets. It's for the noise makers. ;-)
yes, that's the unpowered temperature. Temp rise has to be added to
that.

Ok, if the radio quits on a hot day that isn't going to cause much
grief. Happened to me but luckily within the warranty period. It's
annoying though, leaves kind of a cheap feeling about the whole car even
though the car doesn't deserve that. Radios are usually in the dash and
that can exceed 80C on hot days. Then the driver hops into the car,
turns on the stereo, pops in the Eric Clapton CD, listens to "Cocaine"
with the volume on 10 and ... *PHUT* ... :)

Question: What do you do with an FPGA in a radio?
Lotsa possibilities. So far very few real applications; too
expensive. ...
I look at radio and other consumer gear a lot, mainly to spot
interesting ICs and other parts that I might be able to use on my
designs. Never seen an FPGA in there, ever.
They're not common yet but they will find their way in. Again, cost
is the biggest barrier. OTOH, function will eventually demand them.

... These are not your father's Blaupunkts. ;-)
I know, dad's Blaupunkts were better :-(
Hardly. Dad never had a 17" LCD display on his and the XM reception
sucked. ;-)

Those things aren't important to me. What is important to me is large
signal handling. For example, the new radios fall apart on the Bass Lake
Grade on Hwy 50 because there are all the TV towers for the valley. The
Blaupunkt doesn't fall apart.

The reality is that these things are important to the vast majority of
consumers, therefore customers. Your wishes don't make a significant
market.


Vast majority = GUM :)

What matters is a sticker with very high PMPO number on there and lots
of blue LEDs.

Blue LEDs are *so* last century. The 17" color LCD display is where
it's at.

Since new radios are mostly junk I haven't bought one in over 20 years
except a new living room stereo. And only because SWMBO kept complaining
that the old (well running ...) Kenwood tower was too large and "ugly".
Boy did I regret that, the new stereo is completely useless on the AM band.

Ours is likely 10 years old. No reason to buy a new one, it's rarely used.
Still got one in the garage. In terms of large signal handling and
intermodulation it runs circles around just about anything made today.
You want to listen to AM/FM radio stations? What kind of nut are you,
anyway? ;-)

I actually still listen to AM a lot. Oh, and I do not have a smart phone :)

So do I, actually, but also XM, now (very little FM - decent reception
area is too small to bother). I also use an MP3 player and I'd like
to pipe my smart phone's navi through it, but it's broken (designed
broken by Microsoft).


I have yet to migrate into that sort of gear. Didn't have a need yet.

You're just a stubborn old geezer. ;-)
 
On Thu, 12 Sep 2013 11:19:31 -0700, Joerg <invalid@invalid.invalid>
wrote:

Piotr Wyderski wrote:
Joerg wrote:

It's going to be a tough sell for a radio.

A car radio? Well, there's nothing to reconfigure, maybe
except of the station. :)


But wait, there's more. Blue blinkenlights, yellow blinkenlights. When
we rented a Mustang I thought I'd stepped into a disco.

Our Mustang's instrument cluster can be color coordinated. It's
something to do while driving down the road, I suppose.

... It had to be a homebrew wideband
every-conceivable-modulation-capable scanner. It works to
some extent at the hardware level but I've kind of lost my
interest in SDR to finish the software side. Read: got married. ;-)


Oh. You, too? :)

My ham radio days are more or less over for the same reason but maybe
I'll pick it up again when I gradually retire. Some day. But first I
want to get back into beer brewing.

First things first!

I never had much fun with SDR because it's expensive and generally
inferior to the classic circuits when it comes to performance. I do now
have one spectrum analyzer (the Signalhound) that is basically an SDR.
But that is because I need it for my job, not for fun.

That's sorta the way I feel about the whole topic.
 
On 9/12/2013 5:42 PM, Joerg wrote:
rickman wrote:
On 9/12/2013 1:03 PM, Joerg wrote:
Piotr Wyderski wrote:
Joerg wrote:

Question: What do you do with an FPGA in a radio?

As a matter of fact, it is one of the best places to use an FPGA. :)
SDR with FPGA reconfiguration capabilities is the ideal solution.
Not for a regular FM noisemaker, though...

Best regards, Piotr

*) It was exactly the only time in my life when I used an FPGA.
+ a 14-bit@65MHz Analog Devices ADC + 104 MHz DAC.


It's going to be a tough sell for a radio. Even if they found one for $3
that's too much and they also can't stomach the 10sec to download the
compiled data in production.

You really have no interest in learning anything about FPGAs that has
happened in the last 10 or 15 years do you?


Oh, I do. I just reviewed a design that has some rather fat ones in
there and it would hardly have been possible to do this without FPGA.
However, there are circuits where FPGAs are a perfect fit and others
where they just aren't. In ordinary radios they usually aren't.


Everything has to fit into the $149.95 sale price at the auto parts
place, with fat profit margins for everyone and their middlemen,
speakers, wires, neon-colored huge window sticker, a discount coupon for
installation, free coffee and free waffles :)

You two are talking about totally different radios.


The last few posts were about car radios, because the topic was
electronics and temperature exposure in cars.

When Piotr started talking about SDR he wasn't talking about car radios
anymore.

--

Rick
 
On 9/12/2013 8:57 AM, Piotr Wyderski wrote:
rickman wrote:

That is the sort of thinking that is just a pair of blinders. I don't
care if the real estate is "expensive". I care about my system cost.
Gates in an FPGA are very *inexpensive*. If I want to use them for a
soft core CPU that is just as good a use as a USB or SPI interface.

As long as the softcore is just a supplement to the really complex
logic. Otherwise it is not the best way to use the FPGA resources
in order to emulate an ARM. There already are dirt-cheap ARMs on
the market.

The issue is not the cost of an MCU. There are any number of reasons
why not to use a separate chip for the CPU. The one I encounter most
often is board space. In a design I did some years ago I barely had
room for the logic at all, but there was no way I would have been able
to squeeze in a separate MCU package. There is also the cost. Although
a $3 CPU seems cheap, if it can't do the entire job, you can likely
include a soft CPU on the FPGA and only have one part rather than two.

If an MCU does the entire job you need, then fine, use it. But there
are plenty of times you need both and you can always include a CPU on
your FPGA, but it is hard to make a CPU do what the FPGA does.


I've done just a single hobby project based on FPGA
(and think about the next one), but it works and my experience is
as follows. Pros:

1. You can solder the chip and *then* start thinking what
actually it should do. :)

That's not a good idea for either an MCU or an FPGA.


2. The PCB routing in case of not very demanding signals
is so easy...

There are plenty of dedicated I/Os on an FPGA. Clock lines are the most
obvious. Although you can bring a clock into the chip on any I/O pin,
if you don't use a clock pin it will have extra delay getting to the
clocked elements. This can cause timing issues on clocked I/Os.


3. There is no problem with resource conflicts. When I want
an ARM with 6 UARTS, there are not many of them and then...
look... I can't use CAN because it reuses the pins of one
of the UARTs... remap... damn, now the Ethernet MII bus
collides with that. On an FPGA I can have 134 UARTS if
I want and no pin collisions.

The routing on an FPGA is one of its claims to fame. It is also the
major source of the "inefficiency" and extra cost compared to dedicated
logic. But most of the time the "cost" of using an FPGA is mitigated by
Moore's law and the result of FPGAs often using newer and more efficient
fabrication processes.


4. There is simply no competition for FPGAs when precise
timing is required. It is just so indecently easy. Doing
something as "complex" as 40 PWM channels (which is what
I need now) on an MCU is a nightmare.

Yeah, I looked at using a 144 core MCU to control an SDRAM. Although
each MCU could run at up to 700 MIPS, the timing was not specified well
enough to bit-bang the I/Os for the SDRAM at any speed above some 25 MHz
(ball park).


Cons:

1. You can simulate basically anything, but most of the
time there is no need for that. Some functional blocks
have the only sacred specification and you better not
"improve" them. Memory controllers, serial protocol
controllers, the CPU.

I'm not sure what this means.


2. Why on Earth is there something as bizarre as NIOS
or *blaze? We don't need yet another reinvention of the wheel.
The world pretty much converged to the ARM architecture,
so I do not want to waste my time on learning about
a niche design just because it is the only thing I can
have there because it doesn't exceed the resources
available (opencores) or because of legal issues. I do
want an ARM in every Spartan-class FPGA. I am not going
to buy a Virtex just for that purpose.

It is not possible to implement an ARM on an FPGA without paying huge
license fees. But no matter, what is magical about an ARM? No
instruction set owns the market. ARM has no real advantage over other
CPUs in an engineering sense. There is little utility to sticking with
one instruction set unless you program in assembly language. However,
if you have a large code base that is not easy to port to a different
processor, then stick with what you know.


3. No analog stuff. Everything interesting needs to
be external. Welcome back to the 80s... :-/

I suggest you look at the Microsemi Fusion and Smart Fusion devices.
Fusion means it has analog on chip and Smart Fusion means it has analog
plus an ARM CPU. I have not worked with it, so I don't know much about
it. It is also a bit pricey for the original conversation we had in
this thread.


> 4. Supply voltage issues. Do we really need 3 of them?

I don't know, do you? My current board has four power supply voltages
and an extra 3.3 volt on board for the CODEC (a total of 5 separate
rails), but only one is for the FPGA. How about your designs?


> 5. Why does it take so long to recompile the design?

How long is a piece of string? What?

Are you talking about the place and route time? That depends on the
size of the chip and the speed of your processor (and amount of memory).
I tend to use smaller designs which compile fairly fast. But more
importantly that is not a real issue because I don't compile until the
design passes simulation and finding bugs usually takes a while longer
than the compile.


6. Using a ready-made MCU, at least the core has been debugged.
Do I really need to debug the processor in addition to debugging
its program?

I don't know, do you? Which processor are you thinking of using? What
is the application? What are your requirements? BTW, have you read the
errata sheet for any MCU? They are typically very long with lots of
bugs they don't plan to fix. Pick a new member of the family and you
get to find the bugs!

--

Rick
 
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Nope. Not if it's in the worlds of medical or aerospace. There you have
a huge re-cert effort on your hands for changes. New layout? Back to the
end of the line.
That is not always true (at least for medical equipment, no experience
with aerospace). If the change is minor enough, it may be enough to
write a rationale that explains the change and how it does not impact
the function of the equipment. If the notified body agrees with the
rationale, only a limited effort is required to re-cert.

I really doubt they would agree if a code re-compilation was required to
make this work. With code and firmware they have become very careful
because there have been too many mishaps.
That depends on the software risk class. If the software imposes no risk
(risk mitigated in hardware for example), I don't think they would care.

That is almost never the case. An FPGA, just like a DSP or uC, is too
close to the game to be able to make that kind of safety claim in most
cases.

Why do you mention the FPGA? I think we are not talking about the same
thing here.


The thread has moved there, Rickman advocates that a lot of things can
be better handled by FPGA. In the end it doesn't matter, programmable is
programmable and that gets scrutinized. Has to be.


What I meant was adding some 'real' hardware to limit things that could
get dangerous and are under software control. It is very common to add
such protection to reduce the software risk class.

Example:
A device generates a train of current pulses that pass through a
patient's body for some kind of measurement. The safety of these pulses
depends on the duration, frequency and current. All of these parameters
are under software control, so in theory the software can create a
dangerous situation. This puts the software in a high risk class.

If you add hardware that monitors the output signal (timers,
comparators) and that switches off the output when the signal goes out
of bounds, the software can no longer create a dangerous situation.
That reduces the risk class of that piece of software.

This practice of adding hardware to remove the safety risk from
software is very common.


I generally have that. But this does not always suffice. Take dosage,
for example. Suppose a large patient needs a dose of 25 units while a
kid should never get more than 5. How would the hardware limiter know
whether the person sitting outside the machine is a heavy-set adult or a
skinny kid?

There are cases where a hardware limiter is an option, there are cases
where that's not an option. In your example above the biggest risk is
however the nurse calculating and setting the dose, but that's another
part of the risk analysis. ;-)

snip, a whole lot we mostly agree on

In my experience, in medical devices you can sometimes do changes without
re-certification, but certainly not always. That's why I started with "That
is not always true".

In the end the effort is mostly about the same. In SW or firmware it's
regression testing et cetera. For safety boundary changes it's module
tests. And there it hardly makes a difference how much of it must be
re-tested.

That's where I don't agree. If I change something in my software that
is protected by hardware, like in the above example, I can do my
internal tests and write a document for the notified body. This
of course takes time and care but it is much less work than a full
re-certification effort. I don't need to repeat my safety tests, EMC
tests etc.


You can take your chances but it carries risks. For example, I have seen
a system blowing EMC just because the driver software for the barcode
reader was changed. The reason turned out not to be the machine but the
barcode reader itself. One never knows.

Yes, there are always chances and you have to weigh the risks. Making
sure all units pass EMC testing can only be done by fully testing each
unit under all circumstances. Which is of course impossible.

Your barcode scanner example is unfortunate. But such a scanner could
also change its behaviour on scanning different codes and lighting
conditions. Did you perform EMC testing with all available barcodes
and foreseeable lighting conditions?

The other factor is the agency. If they mandate re-testing after certain
changes you have to do it. In aerospace it can also be the customer
demanding it, for example an airline.

Yes, if the agency or customer demands re-testing, there's nothing you
can do but re-test.

--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

She asked me, "What's your sign?"
I blinked and answered "Neon,"
I thought I'd blow her mind...
 
Joerg wrote:

> But wait, there's more. Blue blinkenlights, yellow blinkenlights.

Sure you need them! A friend of my brother produces and sells some
simple ultrasound marten repelling devices. The sales went through
the roof when he added a blinking LED to the box. So blinking is of
crucial importance. :)

> I never had much fun with SDR because it's expensive

You calculate costs differently when it's a hobby project.
I.e. your time is basically free and the parts are expensive,
which is exactly the opposite of professional prototyping.

> and generally inferior to the classic circuits when it comes to performance.

Why should it be? The RF front-end is mostly the same.
What is the difference between feeding an ADC and feeding
an I/Q demodulator?

> I do now have one spectrum analyzer (the Signalhound) that is basically an SDR.

I think most (or at least a significant fraction of) the modern
radio receivers are a form of SDR. I mean the radios in the cell
phones. You have a powerful CPU on board, often with DSP capabilities
like the NEON instruction set, so all you need to do is to build
a homodyne and move everything else to the digital domain. FM
demodulation, stereo and RDS decoding -- all that is easy.
All the PC TV USB dongles also work on this principle.
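
As a small, concrete example of "easy in the digital domain": once you have baseband I/Q, a cross-product FM discriminator needs only two multipliers, two delay registers and a subtraction. The sketch below is just an illustration, assuming an already decimated, roughly constant-amplitude I/Q stream; the entity, port names and widths are invented, not taken from any design mentioned in this thread.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Cross-product FM discriminator on a baseband I/Q stream:
--   d[n] ~ I[n]*(Q[n]-Q[n-1]) - Q[n]*(I[n]-I[n-1])
-- which approximates the derivative of the instantaneous phase without an
-- arctan. Amplitude normalisation (divide by I^2+Q^2, or hard limiting
-- upstream) is deliberately left out of the sketch.
entity fm_discriminator is
  generic (
    IQW : positive := 12                       -- I/Q sample width
  );
  port (
    clk      : in  std_logic;
    iq_valid : in  std_logic;
    i_in     : in  signed(IQW - 1 downto 0);
    q_in     : in  signed(IQW - 1 downto 0);
    d_valid  : out std_logic;
    d_out    : out signed(2*IQW + 1 downto 0)  -- full product width plus growth
  );
end entity;

architecture rtl of fm_discriminator is
  signal i_prev, q_prev : signed(IQW - 1 downto 0) := (others => '0');
begin
  process (clk)
    variable di, dq : signed(IQW downto 0);
    variable p1, p2 : signed(d_out'range);
  begin
    if rising_edge(clk) then
      d_valid <= iq_valid;
      if iq_valid = '1' then
        di := resize(i_in, IQW + 1) - resize(i_prev, IQW + 1);
        dq := resize(q_in, IQW + 1) - resize(q_prev, IQW + 1);
        p1 := resize(i_in * dq, p1'length);
        p2 := resize(q_in * di, p2'length);
        d_out  <= p1 - p2;                     -- demodulated output sample
        i_prev <= i_in;                        -- remember previous I/Q pair
        q_prev <= q_in;
      end if;
    end if;
  end process;
end architecture;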

In my design I also moved the IF filtering to the FPGA.
It was so much easier to build a digital filter with
configurable passband than to do it the old school way...

Best regards, Piotr
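
To make the "configurable passband" point concrete, here is a minimal VHDL sketch of a FIR whose coefficient memory can be rewritten at run time by a soft CPU or host interface, so the response changes without rebuilding the bitstream. Everything here (entity, generics, widths) is an invented illustration, not Piotr's actual filter, and a real design with this many taps would time-share a few DSP multipliers instead of instantiating one per tap.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Run-time reloadable FIR: y[n] = sum_i c[i] * x[n-i].
-- Assumes TAPS <= 2**coef_addr'length so every tap is addressable.
entity fir_reload is
  generic (
    TAPS  : positive := 64;   -- number of coefficients
    DATAW : positive := 14;   -- input sample width (e.g. a 14-bit ADC)
    COEFW : positive := 16    -- coefficient width
  );
  port (
    clk          : in  std_logic;
    -- coefficient write port, e.g. driven by a soft CPU or host bus
    coef_we      : in  std_logic;
    coef_addr    : in  unsigned(5 downto 0);
    coef_data    : in  signed(COEFW - 1 downto 0);
    -- sample stream, one sample per sample_valid pulse
    sample_valid : in  std_logic;
    sample_in    : in  signed(DATAW - 1 downto 0);
    result_valid : out std_logic;
    result_out   : out signed(DATAW + COEFW + 6 downto 0)
  );
end entity;

architecture rtl of fir_reload is
  type coef_array_t is array (0 to TAPS - 1) of signed(COEFW - 1 downto 0);
  type data_array_t is array (0 to TAPS - 1) of signed(DATAW - 1 downto 0);
  signal coefs : coef_array_t := (others => (others => '0'));
  signal dline : data_array_t := (others => (others => '0'));
begin
  -- coefficient memory: writing a new set changes the passband on the fly
  coef_wr : process (clk)
  begin
    if rising_edge(clk) then
      if coef_we = '1' then
        coefs(to_integer(coef_addr)) <= coef_data;
      end if;
    end if;
  end process;

  -- delay line plus multiply-accumulate, fully parallel for readability
  mac : process (clk)
    variable acc : signed(result_out'range);
  begin
    if rising_edge(clk) then
      result_valid <= sample_valid;
      if sample_valid = '1' then
        dline <= sample_in & dline(0 to TAPS - 2);   -- shift in newest sample
        acc := resize(sample_in * coefs(0), acc'length);
        for i in 1 to TAPS - 1 loop
          acc := acc + resize(dline(i - 1) * coefs(i), acc'length);
        end loop;
        result_out <= acc;
      end if;
    end if;
  end process;
end architecture;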
 
rickman wrote:

The issue is not the cost of an MCU. There are any number of reasons
why not to use a separate chip for the CPU.

I don't mean a separate chip. All I want to say is that you don't
want your MCU to be reconfigurable. It should be as standard as
possible for many reasons. The most important of them is that
a standard CPU has a mature and debugged toolchain. The next one is
that my personal memory is too precious a resource to remember
the peculiarities of niche architectures, e.g. NIOS. What is NIOS?
A variant of an ARM core? Or a PowerPC? Or perhaps a brand name
of one of the MIPSes? No, it turns out not to be the case. Then,
my dear FPGA vendor, please go and... :)

I don't insist the "standard" word should mean ARM, however it would
be good. The Virtex family used to have up to 4 PowerPC cores, which
is fine. The problem is that a soft core eats my reconfigurable
resources which artificially boosts the cost of the FPGA chip, because
I need one or two grades bigger chip than is really necessary with
a hard core. And the el cheapo lines do not have them -- a Spartan
is all I would ever need; I am not crazy enough to buy a Virtex or
Stratix for advanced amateur designs. Not to mention that most of them
come in BGA which is a big no-go for many reasons.

> The one I encounter most often is board space.

In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.

If an MCU does the entire job you need, then fine, use it. But there
are plenty of times you need both and you can always include a CPU on
your FPGA, but it is hard to make a CPU do what the FPGA does.

Exactly. But then you use 60% of the chip's reconfigurable
capacity and most of the BRAMs just to embed an MCU? Makes
no sense to me. The need for a hard macrocell providing
a decent 32-bitter is so obvious...

As I said, I need ~40 PWM channels, 4 CANs and/or 6 RS485 + Ethernet.
Impossible on an MCU, so an FPGA (probably a Spartan 6 in TQFP144)
is the most promising candidate. But the rest is so much CPUish...

1. You can solder the chip and *then* start thinking what
actually it should do. :)

That's not a good idea for either an MCU or an FPGA.

Taken literally -- surely. But often you don't know all
the requirements or just didn't have the right idea at
the time of initial design. If you have spare FPGA resources,
you can use them to extend the device abilities.

There are plenty of dedicated I/Os on an FPGA. Clock lines are the most
obvious. Although you can bring a clock into the chip on any I/O pin,
if you don't use a clock pin it will have extra delay getting to the
clocked elements. This can cause timing issues on clocked I/Os.

Most of my signals are so slow that the signal integrity issues
rarely matter.

1. You can simulate basically anything, but most of the
time there is no need for that. Some functional blocks
have the only sacred specification and you better not
"improve" them. Memory controllers, serial protocol
controllers, the CPU.

I'm not sure what this means.

I mean you don't need to design your own PCIe controller.
Or a DDRx controller. Or Ethernet. Or CANBUS. Name your
own typical interface. It's a waste of resources (and power)
to implement them using the reconfigurable fabric. You
would like to have a hardware core + be able to add your
own when you need something fancy. Some of these have
already been "hardened", e.g. the multipliers (all
post-Cyclone era FPGAs) or the DDR controller (in Spartan 6).
An MCU, please?

It is not possible to implement an ARM on an FPGA without paying huge
license fees.

It was just an example, my favourite architecture.
Any standard core would be fine.

> But no matter, what is magical about an ARM?

The toolchain is mature.

There is little utility to sticking with
one instruction set unless you program in assembly language.

No, there is no need to be "portable" when there is no
real reason for that.

4. Supply voltage issues. Do we really need 3 of them?

I don't know, do you?

In the digital part? 3.3V is enough for me. Most of the
ARMs (from the ball park I care about) need 1.8V, but are
kind enough to produce it themselves with a built-in LDO.
You can override it with a switcher if you care about
efficiency. No Cyclone/Spartan is equally kind.

> How about your designs?

The current one: 12V rectified AC for the power part,
~8V DC (for the remote boards to compensate the wire resistance
and to feed the gates of power MOSFETs), 5V (for CAN), 4.4V for
the GSM module and 3.3V for the rest. I can get rid of the
5V rail if I buy the 3.3V TI CAN PHYs.

> Are you talking about the place and route time?

The time between the act of pressing "build" and getting the resulting
bitstream in Quartus. Don't care what the tool does there.

> I don't know, do you? Which processor are you thinking of using?

I mostly use AVRs and ARMs.

> BTW, have you read the errata sheet for any MCU?

Touche! :)

I suggest you look at the Microsemi Fusion and Smart Fusion devices.
Fusion means it has analog on chip and Smart Fusion means it has
analog plus an ARM CPU. I have not worked with it, so I don't know
much about it. It is also a bit pricey for the original conversation
we had in this thread.

Sounds very interesting, will have a look. What can I have for, say,
$50? Something comparable to Spartan 3S200 would be enough...
And where can I buy them at the tremendous amount of 1 piece? :)

Best regards, Piotr
 
rickman wrote:

When Piotr started talking about SDR he wasn't talking
about car radios anymore.

Yes, something more like this class stuff:

https://en.wikipedia.org/wiki/Joint_Tactical_Radio_System

Best regards, Piotr
 
On 9/13/2013 8:08 AM, Piotr Wyderski wrote:
rickman wrote:

The issue is not the cost of an MCU. There are any number of reasons
why not to use a separate chip for the CPU.

I don't mean a separate chip. All I want to say is that you don't
want your MCU to be reconfigurable. It should be as standard as
possible for many reasons. The most important of them is that
a standard CPU has a mature and debugged toolchain. The next one is
that my personal memory is too precious a resource to remember
the peculiarities of niche architectures, e.g. NIOS. What is NIOS?
A variant of an ARM core? Or a PowerPC? Or perhaps a brand name
of one of the MIPSes? No, it turns out not to be the case. Then,
my dear FPGA vendor, please go and... :)

I will grant you that the tools may be better for a mainstream
instruction set, but otherwise I don't see the advantage. I don't see
how your personal memory has much to do with it. Most programmers use
the tools, meaning HLL compilers. The main point of using an HLL is to
*not* need to know anything about the instruction set. If you can't
write HLL code for the specific CPU involved, then you have other issues.


I don't insist the "standard" word should mean ARM, however it would
be good. The Virtex family used to have up to 4 PowerPC cores, which
is fine. The problem is that a soft core eats my reconfigurable
resources which artificially boosts the cost of the FPGA chip, because
I need one or two grades bigger chip than is really necessary with
a hard core. And the el cheapo lines do not have them -- a Spartan
is all I would ever need; I am not crazy enough to buy a Virtex or
Stratix for advanced amateur designs. Not to mention that most of them
come in BGA which is a big no-go for many reasons.

You are creating a straw man argument. I can fit a soft core CPU into
nearly any FPGA you hand me. Lattice is making some very tiny ones
without memory blocks, but otherwise soft cores are not so large. In
fact, it may have been this thread where someone pointed out that the
NIOS2, a 32 bit processor, can fit in fewer than 600 LUTs! That is
small enough to allow multiple CPUs in most FPGAs with tons of room left
over for peripherals and custom logic.


The one I encounter most often is board space.

In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.

Uh, what monster???


If an MCU does the entire job you need, then fine, use it. But there
are plenty of times you need both and you can always include a CPU on
your FPGA, but it is hard to make a CPU do what the FPGA does.

Exactly. But then you use 60% of the chip's reconfigurable
capacity and most of the BRAMs just to embed an MCU? Makes
no sense to me. The need for a hard macrocell providing
a decent 32-bitter is so obvious...

I won't argue that having a hard CPU on an FPGA isn't a nice thing. But
it is not such a bad thing to use a soft core CPU. 60% of the
*smallest* FPGA with block RAM I have seen is enough to embed a 32 bit
soft CPU. In fact, I had to do a bitware upgrade to an existing design
and was worried about not having enough room for the logic. My fall
back plan was to remove the slow functions from the logic and use a soft
CPU because it would be so compact in comparison.


As I said, I need ~40 PWM channels, 4 CANs and/or 6 RS485 + Ethernet.
Impossible on an MCU, so an FPGA (probably a Spartan 6 in TQFP144)
is the most promising candidate. But the rest is so much CPUish...

I'm not sure that is impossible on an MCU. Maybe it is impossible on an
MCU you can buy, but a custom design might do nicely. Just saying 40
PWM channels doesn't tell me how many CPU cycles are needed. But since
you can have so many CPUs on even a smallish FPGA, I expect you could
divide and conquer quite easily. Or you can use dedicated logic and a
soft core for the Ethernet and any other functions that are better suited.


1. You can solder the chip and *then* start thinking what
actually it should do. :)

That's not a good idea for either an MCU or an FPGA.

Taken literally -- surely. But often you don't know all
the requirements or just didn't have the right idea at
the time of initial design. If you have spare FPGA resources,
you can use them to extend the device abilities.

You can wave the same hands for CPU cycles. The only difference is MCU
I/Os are not as flexible, typically being constrained to one set of pins
or a small selection of I/Os. I guess that was your point?


There are plenty of dedicated I/Os on an FPGA. Clock lines are the most
obvious. Although you can bring a clock into the chip on any I/O pin,
if you don't use a clock pin it will have extra delay getting to the
clocked elements. This can cause timing issues on clocked I/Os.

Most of my signals are so slow that the signal integrity issues
rarely matter.

I'm not talking about SI, I'm talking about the large (by comparison)
delays in I/O routing to the clock net. The clock inputs have a direct
connection which allow deterministic timing for I/O setup and hold
times. If your designs are so slow that isn't an issue, fine, but that
is not very common.


1. You can simulate basically anything, but most of the
time there is no need for that. Some functional blocks
have the only sacred specification and you better not
"improve" them. Memory controllers, serial protocol
controllers, the CPU.

I'm not sure what this means.

I mean you don't need to design your own PCIe controller.
Or a DDRx controller. Or Ethernet. Or CANBUS. Name your
own typical interface. It's a waste of resources (and power)
to implement them using the reconfigurable fabric. You
would like to have a hardware core + be able to add your
own when you need something fancy. Some of these have
already been "hardened", e.g. the multipliers (all
post-Cyclone era FPGAs) or the DDR controller (in Spartan 6).
An MCU, please?

Ok, I agree. I think that is the direction FPGAs will be headed, but
not very fast. FPGA makers see their market as the deep pockets of the
data/telecoms providers pumping out many, many boxes that keep our
communications running at warp speed (lol). So far those markets have
been served well by the same model, bigger, faster, very expensive FPGAs
pushing the limits of silicon processing just like the mainstream GP
CPUs. But just like GP CPUs, I see the market changing over the next 5
or 10 years. I think FPGAs will need to incorporate CPU cores, but not
exactly like embedding an MCU, possibly more like making a small CPU a
functional block like a LUT.

Check out the GA144 from greenarrays.com. They don't market their chip
this way, but I see it as a Field Programmable Processor Array (FPPA) to
be used in a similar manner to an FPGA. These CPUs can't be used like
conventional CPUs. The memory is too small and the CPUs are only
connected to adjacent CPUs. But if you can master the concept, I think
it can be powerful.


It is not possible to implement an ARM on an FPGA without paying huge
license fees.

It was just an example, my favourite architecture.
Any standard core would be fine.

But no matter, what is magical about an ARM?

The toolchain is mature.

How mature do you need?


There is little utility to sticking with
one instruction set unless you program in assembly language.

No, there is no need to be "portable" when there is no
real reason for that.

4. Supply voltage issues. Do we really need 3 of them?

I don't know, do you?

In the digital part? 3.3V is enough for me. Most of the
ARMs (from the ball park I care about) need 1.8V, but are
kind enough to produce it themselves with a built-in LDO.
You can override it with a switcher if you care about
efficiency. No Cyclone/Spartan is equally kind.

Then don't use a Cyclone/Spartan part...


How about your designs?

The current one: 12V rectified AC for the power part,
~8V DC (for the remote boards to compensate the wire resistance
and to feed the gates of power MOSFETs), 5V (for CAN), 4.4V for
the GSM module and 3.3V for the rest. I can get rid of the
5V rail if I buy the 3.3V TI CAN PHYs.

So you have some five power rails? Sounds to me like adding a 1.x
supply would be no big deal. I included a tiny 1.2 volt switcher on my
last design so I could use either version of the FPGA, the one with the
internal LDO or the one without.


Are you talking about the place and route time?

The time between the act of pressing "build" and getting the resulting
bitstream in Quartus. Don't care what the tool does there.

I know you don't care, but what takes so long? My compiles only take a
few minutes.


I don't know, do you? Which processor are you thinking of using?

I mostly use AVRs and ARMs.

BTW, have you read the errata sheet for any MCU?

Touche! :)

I suggest you look at the Microsemi Fusion and Smart Fusion devices.
Fusion means it has analog on chip and Smart Fusion means it has
analog plus an ARM CPU. I have not worked with it, so I don't know
much about it. It is also a bit pricey for the original conversation
we had in this thread.

Sounds very interesting, will have a look. What can I have for, say,
$50? Something comparable to Spartan 3S200 would be enough...
And where can I buy them at the tremendous amount of 1 piece? :)

Try Digikey, I believe they sell Microsemi. I know I have checked
prices and if I didn't get it from Digikey I don't know where I did.
The ballpark price for the Smart Fusion is under $50 I believe, but not
by a lot. The packages may not make you happy though. They seem to
think the Smart Fusion chips need a bazillion I/Os. I have no idea what
market they are pursuing. I guess they are trying to keep the ASP high.
The CPU is an ARM CM3 which should make you happy... :)

--

Rick
 
rickman wrote:

I don't see how your personal memory has much to do with it. Most programmers use
the tools, meaning HLL compilers. The main point of using an HLL is to
*not* need to know anything about the instruction set. If you can't
write HLL code for the specific CPU involved, then you have other issues.

Rickman, professionally I am a low-level programmer. I mostly do
weird optimizations, often at the assembly level. Can read assembly
output generated by a compiler for several ISAs without any problem.
Everyone is smart when things go right. When something fails, without
that knowledge you are like a child in the fog. Please, do not teach
me my craft. :)

In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.

Uh, what monster???

A great big FPGA chip whose package imposes crazy
(for a hobbyist) PCB routing requirements.

I'm not sure that is impossible on an MCU. Maybe it is impossible on an
MCU you can buy, but a custom design might do nicely.

Rickman, please... ;-)

But since you can have so many CPUs on even a smallish FPGA, I expect you could
divide and conquer quite easily.

But what for? A PWM generator is a no-brainer in VHDL. One can also
stream out the content of a BRAM in a loop directly to the IO pins,
which allows one to implement fancy spectrum spreading techniques,
equalize the amount of power consumed by shifting the relative
phases of the PWM channels, etc. One BRAM = 18 channels. Cheap. :)
And a CPU to generate the actual waveforms off-line, even a tiny one.
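
For reference, the plain counter/comparator flavour of a PWM bank (without the BRAM streaming and phase shifting described above) really is only a few dozen lines of VHDL. This is just a sketch with invented generic and port names; a CPU or BRAM would supply the duty words.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- N independent PWM outputs sharing one free-running period counter.
-- A duty word of 0 gives a constant-low output; all-ones is nearly 100%.
entity pwm_bank is
  generic (
    N_CH  : positive := 40;   -- number of PWM channels
    WIDTH : positive := 10    -- counter / duty resolution in bits
  );
  port (
    clk     : in  std_logic;
    rst     : in  std_logic;
    -- one WIDTH-bit duty word per channel, channel 0 packed in the low bits
    duty    : in  std_logic_vector(N_CH*WIDTH - 1 downto 0);
    pwm_out : out std_logic_vector(N_CH - 1 downto 0)
  );
end entity;

architecture rtl of pwm_bank is
  signal cnt : unsigned(WIDTH - 1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        cnt     <= (others => '0');
        pwm_out <= (others => '0');
      else
        cnt <= cnt + 1;                        -- free-running period counter
        for i in 0 to N_CH - 1 loop
          -- output high while the counter is below the channel's duty word
          if cnt < unsigned(duty((i + 1)*WIDTH - 1 downto i*WIDTH)) then
            pwm_out(i) <= '1';
          else
            pwm_out(i) <= '0';
          end if;
        end loop;
      end if;
    end if;
  end process;
end architecture;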

You can wave the same hands for CPU cycles. The only difference is MCU
I/Os are not as flexible, typically being constrained to one set of pins
or a small selection of I/Os. I guess that was your point?

More or less. I wanted to highlight that a CPU has e.g. a timer input
with input capture timestamping connected to a dedicated pin. When the
PCB is etched and you discover that connecting another pin to that input
capture allows you to do something smart, you have a problem. In case
of an FPGA you just provide an additional internal "wire" in the
routing section and presto, problem solved.

If your designs are so slow that isn't an issue, fine, but that
is not very common.

If it is necessary, I can use a dedicated pin. But I don't use
those multi-gigabit transceivers etc., so, except for the clock,
a generic IO pin is fine for most of my signals.

> Check out the GA144 from greenarrays.com.

I've checked that years ago and it still is in the mental bin
labelled "bizarre". It is neither a CPU, nor an FPGA, gracefully
merging disadvantages of both. :)

The toolchain is mature.

How mature do you need?

The so-called industrial quality. As seen in case of
x86/ARM/PowerPC/SPARC and maybe MIPS.

> Then don't use a Cyclone/Spartan part...

That's complicated. First of all, I (would) use an FPGA I can
easily buy in small quantities. Secondly, I know the toolchain.
The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

The packages may not make you happy though. They seem to think
the Smart Fusion chips need a bazillion I/Os.

TQFP is the only package I can handle. But will have a look,
just to learn something new.

Best regards, Piotr
 
Piotr Wyderski wrote:

The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

Sorry, it was ACTEL, not Lattice, and the family was ProASIC with some
number.

The packages may not make you happy though. They seem to think
the Smart Fusion chips need a bazillion I/Os.

There is a TQFP144-variant of SmartFusion. But hear, hear!
It's an improved and rebranded ProASIC3... :-D

Best regards, Piotr
 
On 13/09/13 14:08, Piotr Wyderski wrote:
As I said, I need ~40 PWM channels, 4 CANs and/or 6 RS485 + Ethernet.
Impossible on an MCU, so an FPGA (probably a Spartan 6 in TQFP144)
is the most promising candidate. But the rest is so much CPUish...

The Freescale MPC5675K has 54 channels of PWM, 4 CANs, 4 UARTs, and
Ethernet - plus loads of other peripherals. That's only 2 UARTs short
of your requirements there, but maybe there are other MPC devices that
cover everything (I haven't looked at them all - just the one device I
have used myself). Certainly it would not be hard to use a couple of
timer channels connected to DMA to make the final two UARTs - or you
could do it in pure software, as you have plenty of processor power (2
PPC cores at 180 MHz).

An FPGA may be the best choice for your application - but it is
certainly not the only choice.
 
On 9/13/2013 12:43 PM, Piotr Wyderski wrote:
rickman wrote:

I don't see how your personal memory has much to do with it. Most
programmers use
the tools, meaning HLL compilers. The main point of using an HLL is to
*not* need to know anything about the instruction set. If you can't
write HLL code for the specific CPU involved, then you have other issues.

Rickman, professionally I am a low-level programmer. I mostly do
weird optimizations, often at the assembly level. Can read assembly
output generated by a compiler for several ISAs without any problem.
Everyone is smart when things go right. When something fails, without
that knowledge you are like a child in the fog. Please, do not teach
me my craft. :)

I won't teach you anything if you don't want to learn.


In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.

Uh, what monster???

A great big FPGA chip whose package imposes crazy
(for a hobbyist) PCB routing requirements.

You mean like a 100 pin quad flat pack? Are you even trying to look at
possibilities? This is the sort of bias about FPGAs that I keep running
into. Here is a board I make with an FPGA, a CODEC, some buffering and
analog drivers.

http://arius.com/images/IRIGB_board_1-0.png

The board is 0.85" x 4.5". An MCU could not provide the SPI "like"
control interface from the motherboard and it would have been *very*
hard to generate the clock timing for the CODEC which in one mode has to
be slaved to the incoming data rate on an RS-422 interface.


I'm not sure that is impossible on an MCU. Maybe it is impossible on an
MCU you can buy, but a custom design might do nicely.

Rickman, please... ;-)

Please what?

Is it possible that you aren't aware of all CPUs out there?


But since you can have so many CPUs on even a smallish FPGA, I expect
you could
divide and conquer quite easily.

But what for? A PWM generator is a no-brainer in VHDL. One can also
stream out the content of a BRAM in a loop directly to the IO pins,
which allows one to implement fancy spectrum spreading techniques,
equalize the amount of power consumed by shifting the relative
phases of the PWM channels, etc. One BRAM = 18 channels. Cheap. :)
And a CPU to generate the actual waveforms off-line, even a tiny one.

But something has to control the PWM. So if you have software
controlling the PWM you have to decide how much is in software and how
much is in hardware. I don't know your requirements so I can't speak as
to where the optimal trade off point would be.


You can wave the same hands for CPU cycles. The only difference is MCU
I/Os are not as flexible, typically being constrained to one set of pins
or a small selection of I/Os. I guess that was your point?

More or less. I wanted to highlight that a CPU has e.g. a timer input
with input capture timestamping connected to a dedicated pin. When the
PCB is etched and you discover that connecting another pin to that input
capture allows you to do something smart, you have a problem. In case
of an FPGA you just provide an additional internal "wire" in the
routing section and presto, problem solved.

If your designs are so slow that isn't an issue, fine, but that
is not very common.

If it is necessary, I can use a dedicated pin. But I don't use
those multi-gigabit transceivers etc., so, except for the clock,
a generic IO pin is fine for most of my signals.

You are thinking of SERDES which are specialized functions... because
they are impossible to do in the FPGA fabric. But dedicated clock pins
have been around almost since the beginning of FPGAs.


Check out the GA144 from greenarrays.com.

I've checked that years ago and it still is in the mental bin
labelled "bizarre". It is neither a CPU, nor an FPGA, gracefully
merging disadvantages of both. :)

It does have its limitations, I agree. But it has some great features.
I would use it to redo the board in the image above but I don't have
enough confidence in the survival of the company. The redo is because
of one of the very few EOL notices on an FPGA that I just happen to be
using. The GA144 could do the job pretty well I think.


The toolchain is mature.

How mature do you need?

The so-called industrial quality. As seen in case of
x86/ARM/PowerPC/SPARC and maybe MIPS.

I don't know what "industrial quality" is. I think the tools for the
microBlaze, the NIOS, NIOS2, etc are all widely used and well debugged.
Have you heard any complaints?


Then don't use a Cyclone/Spartan part...

That's complicated. First of all, I (would) use an FPGA I can
easily buy in small quantities. Secondly, I know the toolchain.
The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

The project I gave the image for above uses a Lattice part and I had no
trouble with the tools. We all have our biases.


The packages may not make you happy though. They seem to think
the Smart Fusion chips need a bazillion I/Os.

TQFP is the only package I can handle. But will have a look,
just to learn something new.

Then I think you are out of luck with the SmartFusion. The GA144 is
available on a mounting board from Schmartboard. They sent me one of
the boards without the chip. Interesting. They route the top layer of
fiberglass down to an inner copper layer and drop a bead of solder in
it. I think the idea is to work with leaded parts like the QFP, but
they say it works with leadless parts too. Essentially the PCB forms a
one or two mm high solder mask and the solder acts like a heat pipe to
allow connection to QFN pins on the underside of the chip.

http://blog.schmartboard.com/blogschmartboard/2013/09/greenarrays-month-at-schmartboardand-you-can-try-it-save-some-green.html

--

Rick
 
On 9/13/2013 1:14 PM, Piotr Wyderski wrote:
Piotr Wyderski wrote:

The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

Sorry, it was ACTEL, not Lattice, and the family was ProASIC with some
number.

Ah yes, I've heard pro and con about the Actel software.


The packages may not make you happy though. They seem to think
the Smart Fusion chips need a bazillion I/Os.

There is a TQFP144-variant of SmartFusion. But hear, hear!
It's an improved and rebranded ProASIC3... :-D

Yes, similar. There is the Smartfusion and the Smartfusion2. I don't
think the SF2 comes in any TQFPs. The proASIC lines are so old I don't
think I would design them into anything. They are available in 100 pin
QFPs though.

--

Rick
 
On Fri, 13 Sep 2013 00:24:44 -0400, rickman <gnuarm@gmail.com> wrote:

On 9/12/2013 5:42 PM, Joerg wrote:
rickman wrote:
On 9/12/2013 1:03 PM, Joerg wrote:
Piotr Wyderski wrote:
Joerg wrote:

Question: What do you do with an FPGA in a radio?

As a matter of fact, it is one of the best places to use an FPGA. :)
SDR with FPGA reconfiguration capabilities is the ideal solution.
Not for a regular FM noisemaker, though...

Best regards, Piotr

*) It was exactly the only time in my life when I used an FPGA.
+ a 14-bit@65MHz Analog Devices ADC + 104 MHz DAC.


It's going to be a tough sell for a radio. Even if they found one for $3
that's too much and they also can't stomach the 10sec to download the
compiled data in production.

You really have no interest in learning anything about FPGAs that has
happened in the last 10 or 15 years do you?


Oh, I do. I just reviewed a design that has some rather fat ones in
there and it would hardly have been possible to do this without FPGA.
However, there are circuits where FPGAs are a perfect fit and others
where they just aren't. In ordinary radios they usually aren't.


Everything has to fit into the $149.95 sale price at the auto parts
place, with fat profit margins for everyone and their middlemen,
speakers, wires, neon-colored huge window sticker, a discount coupon for
installation, free coffee and free waffles :)

You two are talking about totally different radios.


The last few posts were about car radios, because the topic was
electronics and temperature exposure in cars.

When Piotr started talking about SDR he wasn't talking about car radios
anymore.

He may not have been but (parts of) car radios are SDR.
 
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Stef wrote:
In comp.arch.embedded,
Joerg <invalid@invalid.invalid> wrote:
Stef wrote:

[...]

In my experience, in medical devices you can sometimes do changes without
re-certification, but certainly not always. That's why I started with "That
is not always true".

In the end the effort is mostly about the same. In SW or firmware it's
regression testing et cetera. For safety boundary changes it's module
tests. And there it hardly makes a difference how much of it must be
re-tested.
That's where I don't agree. If I change something in my software that
is protected by hardware, like in the above example, I can do my
internal tests and write a document for the notified body. This
of course takes time and care but it is much less work than a full
re-certification effort. I don't need to repeat my safety tests, EMC
tests etc.

You can take your chances but it carries risks. For example, I have seen
a system blowing EMC just because the driver software for the barcode
reader was changed. The reason turned out not to be the machine but the
barcode reader itself. One never knows.

Yes, there are always chances and you have to weigh the risks. Making
sure all units pass EMC testing can only be done by fully testing each
unit under all circumstances. Which is of course impossible.

Some companies EMC-test every machine that leaves production though.


Your barcode scanner example is unfortunate. But such a scanner could
also change its behaviour on scanning different codes and lighting
conditions. Did you perform EMC testing with all available barcodes
and foreseeable lighting conditions?

That usually isn't necessary. I told the client to get lots of different
new readers, and fast. They did that and it turned out that many that
were claimed as "class B" failed majorly. One didn't and it had so much
margin that it didn't need to be tested under lots of conditions. I took
it apart to make sure that the designers had done a good job.

[...]

--
Regards, Joerg

http://www.analogconsultants.com/
 
Piotr Wyderski wrote:
Joerg wrote:

But wait, there's more. Blue blinkenlights, yellow blinkenlights.

Sure you need them! A friend of my brother produces and sells some
simple ultrasound marten repelling devices. The sales went through
the roof when he added a blinking LED to the box. So blinking is of
crucial importance. :)

Oh yeah :)


I never had much fun with SDR because it's expensive

You calculate costs differently when it's a hobby project.
I.e. your time is basically free and the parts are expensive,
which is exactly the opposite of professional prototyping.

I am usually a cheapskate when it comes to hobby. Not because of budget
issues like I had when I was a kid but because I like finding a real
McGyver solution.


and generally inferior to the classic circuits when it comes to
performance.

Why should it be? The RF front-end is mostly the same.
What is the difference between feeding an ADC and feeding
an I/Q demodulator?

My gear mostly has nice 8-pole crystal filters. Neither ADC nor I/Q
demodulator can (so far) touch that when it comes to large signal
handling. On shortwave the only fence between you and a plethora of
close-by noise is this filter.


I do now have one spectrum analyzer (the Signalhound) that is
basically an SDR.

I think most (or at least a significant fraction of) the modern
radio receivers are a form of SDR. I mean the radios in the cell
phones. You have a powerful CPU on board, often with DSP capabilities
like the NEON instruction set, so all you need to do is to build
a homodyne and move everything else to the digital domain. FM
demodulation, stereo and RDS decoding -- all that is easy.
All the PC TV USB dongles also work on this principle.

Even modern ham radio gear is like that. Which is why I prefer the older
rigs.


In my design I also moved the IF filtering to the FPGA.
It was so much easier to build a digital filter with
configurable passband than to do it the old school way...

Well, head to the shortwave band on a very busy day and compare it to an
NRD-515, a Drake TR-7 or something similar. That's where the rubber
really meets the road.

--
Regards, Joerg

http://www.analogconsultants.com/
 
