Writing to MCU flash


Guest

Tue Jan 29, 2019 8:45 pm   



On Sunday, January 27, 2019 at 1:44:14 PM UTC-5, Lasse Langwadt Christensen wrote:
Quote:
On Sunday, January 27, 2019 at 19:35:06 UTC+1, 69883925...@nospam.org wrote:
John Larkin
The c is usually much easier to change than recompiling the FPGA. We
have avoided some interesting FPGA families because we couldn't even
get the demo tools to run. FPGA tools are usually tangled in FlexLM
horrors, whereas the c suite is public domain.

c builds take seconds, and FPGA builds take hours.

c builds are command-line driven, whereas most FPGA tools are click
driven. It's hard to archive clicks.

Yes, although...

I have used iverilog (Icarus Verilog command line verilog compiler) for Xilinx,
http://iverilog.icarus.com/

it is pretty much only for simulation

and verilog 'style' is much like C

but you have to think very differently


Yes, it is much easier to think of real time, parallel processes in HDL when you just write the code as if everything is in parallel because... it is.


Quote:
It has been a while but I did that xilinx thing on Linux from the command line too:
http://panteltje.com/panteltje/fpga/index.html
a very very long time ago...

you can still run the Xilinx tools from a command line if you want to


You can run all FPGA tools from the command line as far as I know. The GUI is just a layer that invokes the command line tools.

Rick C.

--+ Get 6 months of free supercharging
--+ Tesla referral code - https://ts.la/richard11209


Guest

Tue Jan 29, 2019 8:45 pm   



On Sunday, January 27, 2019 at 12:48:40 PM UTC-5, John Larkin wrote:
Quote:
On Sun, 27 Jan 2019 09:17:09 -0800 (PST), Lasse Langwadt Christensen
langwadt_at_fonz.dk> wrote:

On Sunday, January 27, 2019 at 18:00:53 UTC+1, David Brown wrote:
On 27/01/2019 16:52, John Larkin wrote:
On Sun, 27 Jan 2019 15:27:12 +0100, David Brown
david.brown_at_hesbynett.no> wrote:

On 26/01/2019 00:24, gnuarm.deletethisbit_at_gmail.com wrote:
On Friday, January 25, 2019 at 2:12:13 AM UTC-5, David Brown wrote:
On 25/01/2019 02:47, Phil Hobbs wrote:
On 1/24/19 3:59 PM, speff wrote:
On Thursday, 24 January 2019 10:25:20 UTC-5, Phil Hobbs  wrote:
So we've got this custom product that includes a voltage-controlled
amplifier.

VCA chips as used in ultrasound and so on have nice low noise at the
highest gains, but at low gain they stink on ice.  The same is true of
all transconductance-based VCAs unless you use a zillion stages.

Sooo, we're faking it with a dpot, an op amp with a mux-controlled
resistor ladder, and an LPC804 Cortex M0+.

The resistor ladder is made out of standard-value Susumu 25-ppm
resistors, so it's better than the dpot except that the switchable gains
aren't exactly powers of two.

The simple way of handling this is to have the thing self-calibrate.
That could be done at power-up and the cal table kept in RAM, or at test
time with the table in flash.

There's some lore on the net that having the firmware write to the MCU
flash is a bad idea.

Experiences? Opinions?

Cheers

Phil Hobbs


There's EEPROM emulation code out there for that MCU. It should work
okay,
provided you don't need smooth functioning to continue during the
write and
are not too tight on the code space so that granularity matters.

I also note the endurance is claimed to be 200K for that part, which
is not
terrible (typically it's 1,000,000 for a serial EEPROM).

On the other hand, I think I'd at least consider laying out the PCB
for an I2C
EEPROM such as the AT24xx series. If even one piece of product gets
bricked
that would pay for a lot of 15 cent EEPROMs.

Not a bad idea at all.  In this case the cal will be pretty stable--the
dpot only has 256 steps--so we can avoid that problem by doing the cal
once at test time.


Putting a little EEPROM on the board is often the simplest solution. It
is entirely /possible/ to store configuration data along with program
code on most microcontrollers, but it can be complicated. You typically
have to pause processing while programming, and you might not be able to
erase the segment for the configuration without also erasing the main
program.

Keep the program flash for programs, and write it when you are updating
the software. Use a small data EEPROM for the data. It keeps
everything clear and neat - saving significantly on development costs
for the price of a tiny chip.

This is why I like FPGAs. Real time in FPGAs is "REAL TIME" and you don't even need to think much about it. Trying to simulate real time functions on an MCU with interrupts is a PITA. With FPGAs you can just focus on the problem, rather than the limitations of the solution.

Rick C.


FPGAs and MCUs have their advantages and disadvantages.

He has a microcontroller for a few dollars and maybe a cm² of board
space, and adding an EEPROM means perhaps twenty cents, six mm², and a
100 lines of code. How would changing this to an FPGA affect board
complexity, price, development time? Let's assume - to be realistic -
that the OP or his group are happy with microcontroller development but
inexperienced with FPGAs.

Sometimes FPGAs /are/ the right answer. And for some things, either
FPGAs or microcontrollers are good choices, so you use the solutions you
are familiar with. But this is not a case for an FPGA.


We have advocates for running soft-core low-performance 8-bit CPUs
inside FPGAs, microBlaze and such, but it doesn't make sense to me. It
would take a new infrastructure (compilers, libraries, debug). RAM is
expensive inside an FPGA, external DRAM is a big deal, and separate
ARM chips are cheap.


Agreed. I don't think cpus on an FPGA make any sense unless you need
the FPGA anyway, and even then it is usually simpler and cheaper to use
an external microcontroller. As you say, development of microcontroller
stuff and FPGA stuff is mostly a different culture, and the whole
project will be simpler to do and easier to manage if they are mostly
separate. It's a different matter if you have special requirements for
the integration - FPGA acceleration of cpu instructions, tightly coupled
peripherals, etc.


one case where it might make sense is if you have slowish state machines
for something like setup and want it to be easy to change by someone who
only knows c


The c is usually much easier to change than recompiling the FPGA. We
have avoided some interesting FPGA families because we couldn't even
get the demo tools to run. FPGA tools are usually tangled in FlexLM
horrors, whereas the c suite is public domain.

c builds take seconds, and FPGA builds take hours.

c builds are command-line driven, whereas most FPGA tools are click
driven. It's hard to archive clicks.


This is just ignorance on the part of the people using the tools. Both C and HDL tools are command line driven or have a GUI to suit your preferences. Many HDL designers use the command line interface since it gives you better control and, as you indicate, archivable scripts to run the tools.

Rick C.

--- Get 6 months of free supercharging
--- Tesla referral code - https://ts.la/richard11209
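
To make the option asked about in the original post concrete (the cal table written to flash once at test time and checked at boot), here is a minimal sketch. The flash_erase_page()/flash_write() names and the reserved-page layout are placeholders rather than the LPC804 IAP API, and the fake_flash buffer stands in for the reserved page so the sketch can be compiled and tried on a PC.

/* Minimal sketch: calibration record in a reserved flash page, written once
 * at test time and validated at boot.  flash_erase_page()/flash_write() are
 * placeholders for the MCU's real IAP/HAL routines (hypothetical names); here
 * they operate on a RAM buffer so the sketch compiles and runs on a PC. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define CAL_PAGE_SIZE 64u               /* one erasable page reserved for cal  */
#define CAL_MAGIC     0xCA1DA7A5u
#define CAL_POINTS    8u                /* one entry per ladder gain setting   */

static uint8_t fake_flash[CAL_PAGE_SIZE];            /* stand-in for the page */

static void flash_erase_page(uint8_t *page) { memset(page, 0xFF, CAL_PAGE_SIZE); }
static void flash_write(uint8_t *page, const void *src, size_t n) { memcpy(page, src, n); }

typedef struct {
    uint32_t magic;                     /* marks the record as programmed      */
    uint16_t gain_code[CAL_POINTS];     /* measured code for each ladder step  */
    uint16_t crc;                       /* CRC-16 over everything above        */
} cal_record_t;

static uint16_t crc16(const uint8_t *p, size_t n)     /* CRC-16/CCITT, bitwise */
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Called once at test time, after the self-calibration run. */
static void cal_save(const uint16_t gain_code[CAL_POINTS])
{
    cal_record_t rec = { .magic = CAL_MAGIC };
    memcpy(rec.gain_code, gain_code, sizeof rec.gain_code);
    rec.crc = crc16((const uint8_t *)&rec, offsetof(cal_record_t, crc));
    flash_erase_page(fake_flash);
    flash_write(fake_flash, &rec, sizeof rec);
}

/* Called at boot; returns false if the record is missing or corrupt, so the
 * firmware can fall back to the nominal (power-of-two) gain settings. */
static bool cal_load(uint16_t gain_code[CAL_POINTS])
{
    cal_record_t rec;
    memcpy(&rec, fake_flash, sizeof rec);
    if (rec.magic != CAL_MAGIC)
        return false;
    if (rec.crc != crc16((const uint8_t *)&rec, offsetof(cal_record_t, crc)))
        return false;
    memcpy(gain_code, rec.gain_code, sizeof rec.gain_code);
    return true;
}

int main(void)
{
    uint16_t measured[CAL_POINTS] = { 1000, 1990, 4020, 7950, 16080, 31900, 64100, 65535 };
    uint16_t loaded[CAL_POINTS];
    cal_save(measured);
    printf("cal record valid: %s\n", cal_load(loaded) ? "yes" : "no");
    return 0;
}

On the real part the same two calls would wrap the vendor's in-application-programming routines, typically with interrupts masked during the erase and write, and the reserved page would be one the linker keeps clear of the program image.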


Guest

Tue Jan 29, 2019 8:45 pm   



On Sunday, January 27, 2019 at 10:53:01 AM UTC-5, John Larkin wrote:
Quote:
On Sun, 27 Jan 2019 15:27:12 +0100, David Brown
david.brown_at_hesbynett.no> wrote:

On 26/01/2019 00:24, gnuarm.deletethisbit_at_gmail.com wrote:
On Friday, January 25, 2019 at 2:12:13 AM UTC-5, David Brown wrote:
On 25/01/2019 02:47, Phil Hobbs wrote:
On 1/24/19 3:59 PM, speff wrote:
On Thursday, 24 January 2019 10:25:20 UTC-5, Phil Hobbs  wrote:
So we've got this custom product that includes a voltage-controlled
amplifier.

VCA chips as used in ultrasound and so on have nice low noise at the
highest gains, but at low gain they stink on ice.  The same is true of
all transconductance-based VCAs unless you use a zillion stages.

Sooo, we're faking it with a dpot, an op amp with a mux-controlled
resistor ladder, and an LPC804 Cortex M0+.

The resistor ladder is made out of standard-value Susumu 25-ppm
resistors, so it's better than the dpot except that the switchable gains
aren't exactly powers of two.

The simple way of handling this is to have the thing self-calibrate.
That could be done at power-up and the cal table kept in RAM, or at test
time with the table in flash.

There's some lore on the net that having the firmware write to the MCU
flash is a bad idea.

Experiences? Opinions?

Cheers

Phil Hobbs


There's EEPROM emulation code out there for that MCU. It should work
okay,
provided you don't need smooth functioning to continue during the
write and
are not too tight on the code space so that granularity matters.

I also note the endurance is claimed to be 200K for that part, which
is not
terrible (typically it's 1,000,000 for a serial EEPROM).

On the other hand, I think I'd at least consider laying out the PCB
for an I2C
EEPROM such as the AT24xx series. If even one piece of product gets
bricked
that would pay for a lot of 15 cent EEPROMs.

Not a bad idea at all.  In this case the cal will be pretty stable--the
dpot only has 256 steps--so we can avoid that problem by doing the cal
once at test time.


Putting a little EEPROM on the board is often the simplest solution. It
is entirely /possible/ to store configuration data along with program
code on most microcontrollers, but it can be complicated. You typically
have to pause processing while programming, and you might not be able to
erase the segment for the configuration without also erasing the main
program.

Keep the program flash for programs, and write it when you are updating
the software. Use a small data EEPROM for the data. It keeps
everything clear and neat - saving significantly on development costs
for the price of a tiny chip.

This is why I like FPGAs. Real time in FPGAs is "REAL TIME" and you don't even need to think much about it. Trying to simulate real time functions on an MCU with interrupts is a PITA. With FPGAs you can just focus on the problem, rather than the limitations of the solution.

Rick C.


FPGAs and MCUs have their advantages and disadvantages.

He has a microcontroller for a few dollars and maybe a cm² of board
space, and adding an EEPROM means perhaps twenty cents, six mm², and a
100 lines of code. How would changing this to an FPGA affect board
complexity, price, development time? Let's assume - to be realistic -
that the OP or his group are happy with microcontroller development but
inexperienced with FPGAs.

Sometimes FPGAs /are/ the right answer. And for some things, either
FPGAs or microcontrollers are good choices, so you use the solutions you
are familiar with. But this is not a case for an FPGA.


We have advocates for running soft-core low-performance 8-bit CPUs
inside FPGAs, microBlaze and such, but it doesn't make sense to me. It
would take a new infrastructure (compilers, libraries, debug). RAM is
expensive inside an FPGA, external DRAM is a big deal, and separate
ARM chips are cheap.

The advantage of an internal CPU is that one (might maybe) save pins
and (might maybe) do a synchronous interface from uP to fabric.
External ARM to FPGA interfaces tend to be async, which is emotionally
distasteful but not a big deal in real life. One could even go SPI.

SOCs, an FPGA with a hard ARM core or two, are getting cheap.

Embedded c and VHDL are sort of different cultures.


You are looking at FPGAs through the eyes of a dinosaur. A soft core in an FPGA can be thought of as a logic block rather than in terms of the complexity of a typical MCU. Small CPUs can be built in just a few hundred LUTs and be dedicated to a specific task, rather than using a single MCU to task-switch between many tasks, which greatly increases the complexity of the programming, the communications and the memory requirements.

Again, people are limited in their thinking to what they are used to. One of the big ones is the idea that you need to program in C. Not only is C complex to code for embedded use, it's also very hard to debug. There are better alternatives, such as Forth.

Rick C.

++ Get 6 months of free supercharging
++ Tesla referral code - https://ts.la/richard11209


Guest

Tue Jan 29, 2019 8:45 pm   



On Sunday, January 27, 2019 at 9:27:17 AM UTC-5, David Brown wrote:
Quote:
On 26/01/2019 00:24, gnuarm.deletethisbit_at_gmail.com wrote:
On Friday, January 25, 2019 at 2:12:13 AM UTC-5, David Brown wrote:
On 25/01/2019 02:47, Phil Hobbs wrote:
On 1/24/19 3:59 PM, speff wrote:
On Thursday, 24 January 2019 10:25:20 UTC-5, Phil Hobbs  wrote:
So we've got this custom product that includes a voltage-controlled
amplifier.

VCA chips as used in ultrasound and so on have nice low noise at the
highest gains, but at low gain they stink on ice.  The same is true of
all transconductance-based VCAs unless you use a zillion stages.

Sooo, we're faking it with a dpot, an op amp with a mux-controlled
resistor ladder, and an LPC804 Cortex M0+.

The resistor ladder is made out of standard-value Susumu 25-ppm
resistors, so it's better than the dpot except that the switchable gains
aren't exactly powers of two.

The simple way of handling this is to have the thing self-calibrate.
That could be done at power-up and the cal table kept in RAM, or at test
time with the table in flash.

There's some lore on the net that having the firmware write to the MCU
flash is a bad idea.

Experiences? Opinions?

Cheers

Phil Hobbs


There's EEPROM emulation code out there for that MCU. It should work
okay,
provided you don't need smooth functioning to continue during the
write and
are not too tight on the code space so that granularity matters.

I also note the endurance is claimed to be 200K for that part, which
is not
terrible (typically it's 1,000,000 for a serial EEPROM).

On the other hand, I think I'd at least consider laying out the PCB
for an I2C
EEPROM such as the AT24xx series. If even one piece of product gets
bricked
that would pay for a lot of 15 cent EEPROMs.

Not a bad idea at all.  In this case the cal will be pretty stable--the
dpot only has 256 steps--so we can avoid that problem by doing the cal
once at test time.


Putting a little EEPROM on the board is often the simplest solution. It
is entirely /possible/ to store configuration data along with program
code on most microcontrollers, but it can be complicated. You typically
have to pause processing while programming, and you might not be able to
erase the segment for the configuration without also erasing the main
program.

Keep the program flash for programs, and write it when you are updating
the software. Use a small data EEPROM for the data. It keeps
everything clear and neat - saving significantly on development costs
for the price of a tiny chip.

This is why I like FPGAs. Real time in FPGAs is "REAL TIME" and you don't even need to think much about it. Trying to simulate real time functions on an MCU with interrupts is a PITA. With FPGAs you can just focus on the problem, rather than the limitations of the solution.

Rick C.


FPGAs and MCUs have their advantages and disadvantages.

He has a microcontroller for a few dollars and maybe a cm² of board
space, and adding an EEPROM means perhaps twenty cents, six mm², and a
100 lines of code. How would changing this to an FPGA affect board
complexity, price, development time? Let's assume - to be realistic -
that the OP or his group are happy with microcontroller development but
inexperienced with FPGAs.

Sometimes FPGAs /are/ the right answer. And for some things, either
FPGAs or microcontrollers are good choices, so you use the solutions you
are familiar with. But this is not a case for an FPGA.


Let's just be realistic without assuming, eh? I don't know what factors are significant in his project; he hasn't shared that with us. We also don't know what else is going on in the processor. You made all manner of assumptions without regard to anything you actually know about the project, especially the 100 lines of code. I would never hire you to design an embedded board for a project of mine.

I guess the most reasonable thing you wrote was your preference for sticking with a bad solution primarily because you have never bothered to learn about better solutions.


Rick C.

+- Get 6 months of free supercharging
+- Tesla referral code - https://ts.la/richard11209


Guest

Tue Jan 29, 2019 8:45 pm   



On Sunday, January 27, 2019 at 3:25:42 AM UTC-5, 69883925...@nospam.org wrote:
Quote:
gnuarm wrote
Not sure what a Dino is,

https://en.wikipedia.org/wiki/Dinosaur
dinosaurs are extinct
The story goes that, because they were so big, if one was bitten in the tail it took
almost a second for the nerve signal to reach the brain and be processed,
making them easy victims for other animals and human hunters.


Story is wrong though - just like the human brain doesn't need to process an injury to a hand to cause the arm muscles to retract the hand. "Dinos" most likely had multiple brains to control the body. I still don't know what you meant by "Dinos are dead". Obviously some sort of comparison that is clear in your mind, but you didn't make any connections, so I have no idea what you are talking about.


Quote:
but your idea of dedicating a sector per data record
is poor in the extreme. Sectors on Flash are not very reliable. It is a
good idea to have a flash file system to manage the good/bad blocks for you.
I guess you can do that yourself, but are you thinking of that?

No, bad sector handling is done by the SDcard firmware
we are talking about SDcards for data storage.


"Bad sector handling" doesn't prevent all data loss. Flash memory is particularly unreliable perhaps only better than spinning rust. Since sectors can be lost putting a record within a single sector is not a very reliable way to prevent data loss.

Rick C.

-+ Get 6 months of free supercharging
-+ Tesla referral code - https://ts.la/richard11209


Guest

Tue Jan 29, 2019 8:45 pm   



On Monday, January 28, 2019 at 11:38:26 AM UTC-5, John Larkin wrote:
Quote:
On Mon, 28 Jan 2019 08:27:19 +0000, Tom Gardner
spamjunk_at_blueyonder.co.uk> wrote:

On 28/01/19 06:30, David Brown wrote:
People who write C code professionally usually know what their lines of code do.

Older people in the embedded world, yes. Younger people
in other fields - sometimes :(

I've seen too many that only have a vague concept of
what a cache is. And even getting them to outline what
a processor does during an unoptimised subroutine call
can be like pulling teeth.

PEBCAK in a stark form.


/Anything/ is better than doing PIC assembly.

I once looked at a PIC's assembler, and thought
"life is too short".


We had a weird problem last week. An ARM talks to an FPGA which
implements a bunch of DDS sine wave generators, driving a mess of
serial DACs. The sinewaves had weird erratic spikey glitches, which
were suspected to be SPI transmission-line problems, but weren't.

Much experimenting and thinking led us to the real problem: a VOLATILE
declaration in c wasn't always working, so the sinewave amplitude
multiplier values would occasionally get zeroed. One clue was that the
glitches were erratic but quantized to 1 ms time ticks, and the ARM
runs a 1 KHz interrupt.

I solved the problem by applying the universal principle of "always
blame the software."


The difficulty in isolating the bug comes from the lack of appropriate self-test capability designed into the system. It is always a good idea, even if after the fact, to provide a way for the hardware sub-system to be tested independently of the software. Here the DACs were used to generate sine waves, so I would have added a feature to drive the DACs from a lookup table embedded in the HDL code. That sine wave could have been tested for spurs with no artifacts from the DDS or the software. Then the DDS should have a free-running mode, independent of the software, which would provide a sine wave output with no influence from the software, allowing you to analyze the spurs added by the DDS.

Rick C.

-+- Get 6 months of free supercharging
-+- Tesla referral code - https://ts.la/richard11209
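
Without seeing the code, a glitch that is quantized to the interrupt tick is the classic signature of a shared, multi-word value being updated non-atomically with respect to the ISR: volatile stops the compiler caching the value, but it does not make a two-word update indivisible. A generic sketch of that pattern and the usual fix follows; the gain pair, the ISR and spi_send_gain() are hypothetical stand-ins, and the IRQ macros would be the CMSIS __disable_irq()/__enable_irq() intrinsics on a real Cortex-M part.

/* Sketch: a DDS amplitude pair shared between the main loop and a 1 kHz timer
 * ISR.  'volatile' alone does not make the two writes indivisible; if the ISR
 * fires between them it can ship a half-updated pair (e.g. one word still
 * zero).  Making the update atomic closes that window.  The IRQ macros are
 * no-ops here so the sketch compiles anywhere; all names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define DISABLE_IRQ() do { } while (0)   /* __disable_irq() on the real part */
#define ENABLE_IRQ()  do { } while (0)   /* __enable_irq()  on the real part */

typedef struct {
    uint32_t amplitude_i;                /* multiplier words sent to the FPGA */
    uint32_t amplitude_q;
} dds_gain_t;

static volatile dds_gain_t dds_gain = { 0x8000u, 0x8000u };

/* Main-loop side: update both words inside a critical section so the ISR can
 * never observe one new word and one stale (or cleared) word. */
static void dds_set_gain(uint32_t i_gain, uint32_t q_gain)
{
    DISABLE_IRQ();
    dds_gain.amplitude_i = i_gain;
    dds_gain.amplitude_q = q_gain;
    ENABLE_IRQ();
}

/* Stand-in for the SPI transfer to the DACs/FPGA. */
static void spi_send_gain(uint32_t i_gain, uint32_t q_gain)
{
    printf("gain -> %08lx %08lx\n", (unsigned long)i_gain, (unsigned long)q_gain);
}

/* 1 kHz ISR side: take one consistent snapshot of the volatile pair, then use it. */
static void timer_1khz_isr(void)
{
    uint32_t i_gain = dds_gain.amplitude_i;
    uint32_t q_gain = dds_gain.amplitude_q;
    spi_send_gain(i_gain, q_gain);
}

int main(void)
{
    dds_set_gain(0x4000u, 0x2000u);
    timer_1khz_isr();                    /* on the target this runs from the timer */
    return 0;
}

A sequence counter or a double-buffered pointer swap does the same job without masking interrupts, if the added interrupt latency matters.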


Guest

Tue Jan 29, 2019 10:45 pm   



gnuarm wrote
Quote:
On Sunday, January 27, 2019 at 3:25:42 AM UTC-5, 69883925...@nospam.org wrote:

gnuarm wrote
Not sure what a Dino is,

https://en.wikipedia.org/wiki/Dinosaur
dinosaurs are extinct
The story goes that, because they were so big, if one was bitten in the tail
it took
almost a second for the nerve signal to reach the brain and be processed,
making them easy victims for other animals and human hunters.

Story is wrong though - just like the human brain doesn't need to process an
injury to a hand to cause the arm muscles to retract the hand. "Dinos" most
likely had multiple brains to control the body. I still don't know what
you meant by "Dinos are dead". Obviously some sort of comparison that is
clear in your mind, but you didn't make any connections, so I have no idea
what you are talking about.


Yes, there are reflexes, but it (the dino) also needs to take evasive action, for example
strike back, etc.
I don't know about multiple brains, but I do know the nerve signals for pain are very slow:
From https://hypertextbook.com/facts/2002/DavidParizh.shtml
0.61 m/s (pain)
So if the end of a dino's tail is 20 meters from its head, it would not notice the rat? tiger? whatever
taking a bite for about 12 seconds! The tiger would easily get away, same for a human hunter.
Well, theory of course, never tried it on a live one...




Quote:
but your idea of dedicating a sector per data record
is poor in the extreme. Sectors on Flash are not very reliable. It is a

good idea to have a flash file system to manage the good/bad blocks for you.

I guess you can do that yourself, but are you thinking of that?

No, bad sector handling is done by the SDcard firmware
we are talking about SDcards for data storage.

"Bad sector handling" doesn't prevent all data loss. Flash memory is particularly
unreliable perhaps only better than spinning rust. Since sectors can
be lost putting a record within a single sector is not a very reliable way
to prevent data loss.


See the link I gave:
https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey

I record a lot on FLASH, and now I am talking about SDcards; if you check the write call's error return you will know
when a card is defective, or simply full.
And look at that link: not all cards are the same or allow the same random access.

Think about it: All those digital cameras, HD recorders, no problem.

As to writing to flash in a micro, I do not do that,
I do not like the idea of boot loaders and sending the executable over the net, encrypted or not.

Simply hand or mail a new chip.
The BIG advantage of that is that if things do not work for any reason (and it would not be the first time a software update
introduces new bugs), then you can simply put the old chip back.
No FLASH is really reliable,

As to dinos are dead: in my view many things can be done faster and more reliably by small micros or FPGAs than by huge
multi-tasking multi-cores running Linux.
The software in a few Microchip PICs for my drone is an example of that.


Guest

Tue Jan 29, 2019 10:45 pm   



Oops
I don't know about multiple brains, but I do know the nerve signals for pain are very slow:
From https://hypertextbook.com/facts/2002/DavidParizh.shtml
0.61 m/s (pain)
So if the end of a dino's tail is 20 meters from its head, it would not notice the rat? tiger? whatever
taking a bite for about 12 seconds! The tiger would easily get away, same for a human hunter.

20 / .61 = 33 seconds....

David Brown
Guest

Tue Jan 29, 2019 11:45 pm   



On 29/01/2019 20:05, gnuarm.deletethisbit_at_gmail.com wrote:
Quote:
On Sunday, January 27, 2019 at 9:27:17 AM UTC-5, David Brown wrote:
On 26/01/2019 00:24, gnuarm.deletethisbit_at_gmail.com wrote:
On Friday, January 25, 2019 at 2:12:13 AM UTC-5, David Brown
wrote:
On 25/01/2019 02:47, Phil Hobbs wrote:
On 1/24/19 3:59 PM, speff wrote:
On Thursday, 24 January 2019 10:25:20 UTC-5, Phil Hobbs
wrote:
So we've got this custom product that includes a
voltage-controlled amplifier.

VCA chips as used in ultrasound and so on have nice low
noise at the highest gains, but at low gain they stink on
ice. The same is true of all transconductance-based VCAs
unless you use a zillion stages.

Sooo, we're faking it with a dpot, an op amp with a
mux-controlled resistor ladder, and an LPC804 Cortex
M0+.

The resistor ladder is made out of standard-value Susumu
25-ppm resistors, so it's better than the dpot except
that the switchable gains aren't exactly powers of two.

The simple way of handling this is to have the thing
self-calibrate. That could be done at power-up and the
cal table kept in RAM, or at test time with the table in
flash.

There's some lore on the net that having the firmware
write to the MCU flash is a bad idea.

Experiences? Opinions?

Cheers

Phil Hobbs


There's EEPROM emulation code out there for that MCU. It
should work okay, provided you don't need smooth
functioning to continue during the write and are not too
tight on the code space so that granularity matters.

I also note the endurance is claimed to be 200K for that
part, which is not terrible (typically it's 1,000,000 for a
serial EEPROM).

On the other hand, I think I'd at least consider laying out
the PCB for an I2C EEPROM such as the AT24xx series. If
even one piece of product gets bricked that would pay for a
lot of 15 cent EEPROMs.

Not a bad idea at all. In this case the cal will be pretty
stable--the dpot only has 256 steps--so we can avoid that
problem by doing the cal once at test time.


Putting a little EEPROM on the board is often the simplest
solution. It is entirely /possible/ to store configuration
data along with program code on most microcontrollers, but it
can be complicated. You typically have to pause processing
while programming, and you might not be able to erase the
segment for the configuration without also erasing the main
program.

Keep the program flash for programs, and write it when you are
updating the software. Use a small data EEPROM for the data.
It keeps everything clear and neat - saving significantly on
development costs for the price of a tiny chip.

This is why I like FPGAs. Real time in FPGAs is "REAL TIME" and
you don't even need to think much about it. Trying to simulate
real time functions on an MCU with interrupts is a PITA. With
FPGAs you can just focus on the problem, rather than the
limitations of the solution.

Rick C.


FPGAs and MCUs have their advantages and disadvantages.

He has a microcontroller for a few dollars and maybe a cm² of
board space, and adding an EEPROM means perhaps twenty cents, six
mm², and a 100 lines of code. How would changing this to an FPGA
affect board complexity, price, development time? Let's assume -
to be realistic - that the OP or his group are happy with
microcontroller development but inexperienced with FPGAs.

Sometimes FPGAs /are/ the right answer. And for some things,
either FPGAs or microcontrollers are good choices, so you use the
solutions you are familiar with. But this is not a case for an
FPGA.

Let's just be realistic without assuming, eh? I don't know what
factors are significant in his project, he hasn't shared that with
us. We also don't know what else is going on in the processor. You
made all manner of assumptions without consideration to anything you
know about the project especially the 100 lines of code. I would
never hire you to design an embedded board for a project of mine.

I guess the most reasonable thing you wrote was your preference for
sticking with a bad solution primarily because you have never
bothered to learn about better solutions.



Of course I am making assumptions - extrapolating from the little
information we have. And of course if this were a customer, I would be
expecting a great deal more information before giving suggestions. It's
not a matter of not hiring me for this "job" - I simply wouldn't accept
a job with that level of requirement detail.

What we know of the situation is that the OP has a small, cheap
microcontroller. His problem is that he wants to store some calibration
data, and is not happy about storing it in program flash. The obvious
answer - barring other factors that are missing from this information -
is to add a small, cheap EEPROM. It will solve his problem with minimal
cost in hardware, and minimal cost in code complexity (100 lines is a
perfectly good order-of-magnitude estimation).

If he already has an FPGA on board, then it might be a different matter.
Then the solution could be to add a small, cheap EEPROM and put those
100 lines of C code in an embedded (soft or hard) processor on the FPGA.
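
For a feel of what that order-of-magnitude estimate covers, a sketch of a cal record on an AT24-style I2C EEPROM. The assumptions here (to be checked against the actual part) are a two-byte word address, a write page large enough to hold the whole record, and a roughly 5 ms write cycle; i2c_write(), i2c_write_read() and delay_ms() are placeholder names for whatever I2C driver the project already has, with small stand-ins at the bottom so the sketch compiles and runs on its own.

/* Sketch: calibration record in a small AT24-style I2C EEPROM instead of the
 * MCU's program flash.  Assumptions (check the datasheet of the actual part):
 * 7-bit device address 0x50, two-byte word address, the record fits in one
 * write page, ~5 ms write cycle.  The I2C calls are hypothetical names for
 * the project's own driver; the stubs below fake the EEPROM for a PC build. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define EE_ADDR    0x50u                /* 7-bit I2C address of the 24Cxx      */
#define CAL_BASE   0x0000u              /* word address where the record lives */
#define CAL_MAGIC  0xA5u
#define CAL_POINTS 8u

bool i2c_write(uint8_t addr, const uint8_t *buf, uint8_t len);
bool i2c_write_read(uint8_t addr, const uint8_t *wbuf, uint8_t wlen,
                    uint8_t *rbuf, uint8_t rlen);
void delay_ms(uint32_t ms);

typedef struct {
    uint8_t  magic;
    uint8_t  count;                     /* number of valid gain entries          */
    uint16_t gain_code[CAL_POINTS];
    uint8_t  pad;                       /* keep the layout free of hidden padding */
    uint8_t  checksum;                  /* two's-complement sum of the bytes above */
} cal_record_t;

static uint8_t sum8(const uint8_t *p, size_t n)
{
    uint8_t s = 0;
    while (n--) s += *p++;
    return s;
}

bool cal_save(const uint16_t gain_code[CAL_POINTS])
{
    cal_record_t rec = { .magic = CAL_MAGIC, .count = CAL_POINTS };
    memcpy(rec.gain_code, gain_code, sizeof rec.gain_code);
    rec.checksum = (uint8_t)(0u - sum8((const uint8_t *)&rec, offsetof(cal_record_t, checksum)));

    uint8_t buf[2 + sizeof rec];                       /* word address + data  */
    buf[0] = (uint8_t)(CAL_BASE >> 8);
    buf[1] = (uint8_t)(CAL_BASE & 0xFF);
    memcpy(&buf[2], &rec, sizeof rec);
    if (!i2c_write(EE_ADDR, buf, (uint8_t)sizeof buf))
        return false;
    delay_ms(5);                                       /* wait out the write cycle */
    return true;
}

bool cal_load(uint16_t gain_code[CAL_POINTS])
{
    cal_record_t rec;
    uint8_t waddr[2] = { CAL_BASE >> 8, CAL_BASE & 0xFF };
    if (!i2c_write_read(EE_ADDR, waddr, 2, (uint8_t *)&rec, sizeof rec))
        return false;
    if (rec.magic != CAL_MAGIC || rec.count != CAL_POINTS)
        return false;
    if ((uint8_t)(sum8((const uint8_t *)&rec, offsetof(cal_record_t, checksum)) + rec.checksum) != 0)
        return false;
    memcpy(gain_code, rec.gain_code, sizeof rec.gain_code);
    return true;
}

/* ---- PC stand-ins for the real I2C driver, just for trying the sketch ---- */
static uint8_t fake_eeprom[256];
bool i2c_write(uint8_t addr, const uint8_t *buf, uint8_t len)
{ (void)addr; memcpy(&fake_eeprom[((uint16_t)buf[0] << 8) | buf[1]], &buf[2], (size_t)len - 2); return true; }
bool i2c_write_read(uint8_t addr, const uint8_t *wbuf, uint8_t wlen, uint8_t *rbuf, uint8_t rlen)
{ (void)addr; (void)wlen; memcpy(rbuf, &fake_eeprom[((uint16_t)wbuf[0] << 8) | wbuf[1]], rlen); return true; }
void delay_ms(uint32_t ms) { (void)ms; }

int main(void)
{
    uint16_t measured[CAL_POINTS] = { 1010, 2005, 3990, 8020, 15980, 32100, 64050, 65535 };
    uint16_t loaded[CAL_POINTS];
    bool saved = cal_save(measured);
    bool ok = cal_load(loaded);
    printf("save ok: %d, load ok: %d\n", saved, ok);
    return 0;
}

If the part only has 8-byte pages (the smallest 24C02-class devices), the write would need to be split into page-sized chunks with a write-cycle wait after each; either way it stays in the region of the 100-line estimate.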


Guest

Wed Jan 30, 2019 1:45 am   



On Tuesday, January 29, 2019 at 3:59:12 PM UTC-5, 69883925...@nospam.org wrote:
Quote:
gnuarm wrote
On Sunday, January 27, 2019 at 3:25:42 AM UTC-5, 69883925...@nospam.org wrote:

gnuarm wrote
Not sure what a Dino is,

https://en.wikipedia.org/wiki/Dinosaur
dinosaurs are extinct
The story goes that, because they were so big, if one was bitten in the tail
it took
almost a second for the nerve signal to reach the brain and be processed,
making them easy victims for other animals and human hunters.

Story is wrong though - just like the human brain doesn't need to process an
injury to a hand to cause the arm muscles to retract the hand. "Dinos" most
likely had multiple brains to control the body. I still don't know what
you meant by "Dinos are dead". Obviously some sort of comparison that is
clear in your mind, but you didn't make any connections, so I have no idea
what you are talking about.

Yes there are reflexes, but it (the dino) also needs to take evasive action for example,
strike back, etc.
I don't know about multiple brains, I do know the nerve signals for pain are very slow:
From https://hypertextbook.com/facts/2002/DavidParizh.shtml
0.61m/s (pain)
So if the end of a dino's tail is 20 meters from its head, it would not notice the rat? tiger? whatever
taking a bite for about 12 seconds! The tiger would easily get away, same for a human hunter.
Well theory of course, never tried it on a live one...


Likely you are extrapolating beyond your data set.


Quote:
but your idea of dedicating a sector per data record
is poor in the extreme. Sectors on Flash are not very reliable. It is a

good idea to have a flash file system to manage the good/bad blocks for you.

I guess you can do that yourself, but are you thinking of that?

No, bad sector handling is done by the SDcard firmware
we are talking about SDcards for data storage.

"Bad sector handling" doesn't prevent all data loss. Flash memory is particularly
unreliable perhaps only better than spinning rust. Since sectors can
be lost putting a record within a single sector is not a very reliable way
to prevent data loss.

See the link I gave:
https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey


I don't see anything on error recovery or failure rates.


Quote:
I record a lot on FLASH, and now I am talking about SDcards, and if you check write call error return you will know
when a card is defective, or simply full.
And look at that link, not all cards are the same and allow the same random access.


So?


> Think about it: All those digital cameras, HD recorders, no problem.

Where do you get the idea there is "no problem"???


Quote:
As to writing to flash in a micro, I do not do that,
I do not like the idea of boot loaders and sending the executable over the net, encrypted or not.

Simply hand or mail a new chip.
The BIG advantage of that is, that if things do not work for any reason (and it would not be the first time a software update
introduces new bugs), then you can simply put the old chip back.
No FLASH is really reliable,


Agreed on this point.


Quote:
As to dinos are dead: in my view many things can be done faster and more reliably by small micros or FPGAs than by huge
multi-tasking multi-cores running Linux.
The software in a few Microchip PICs for my drone is an example of that.


Horses for courses. The only reason for using an OS is if it provides some functionality that is hard to get any other way. If you want to communicate over wide bandwidth wifi or any of a dozen other interfaces that an OS provides easily, then an OS is justified. Using Linux simply because you can get a processor on the FPGA chip is not a very good reason for using it with Linux.

By the same token, many people have a mistaken impression that only MCUs can be small and cheap. Sure, there are no $0.25 FPGAs available in an 8 pin DIP, but for many lower end apps a $2 FPGA works as well if not better than a $2 MCU. It's not uncommon to have some unique interface that is hard to do in an MCU even with all the various peripherals on chip.

Rick C.

-++ Get 6 months of free supercharging
-++ Tesla referral code - https://ts.la/richard11209


Guest

Wed Jan 30, 2019 9:45 am   



gnuarm wrote
Quote:
On Tuesday, January 29, 2019 at 3:59:12 PM UTC-5, 69883925...@nospam.org wrote:
"Bad sector handling" doesn't prevent all data loss. Flash memory is particularly

unreliable perhaps only better than spinning rust. Since sectors can
be lost putting a record within a single sector is not a very reliable way

to prevent data loss.

See the link I gave:

https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey

I don't see anything on error recovery or failure rates.


Internally in the card, a failed write to a physical sector results in that sector being marked as 'bad' and the
translation table from logical sector to physical sector being updated (also held in flash in that card)
to point to a good sector, which is then used for the write.
If this process fails you get a write error.
Write returning OK means the data is OK on the card.
In the case of data logging, writing sector after sector, or block after block (whatever is best for the card),
is no different from sequentially writing via some OS using some filesystem.
I have inspected and recovered data (images and video) from cards that were erased (report was in sci.crypt)
and found it all neatly sequentially written, even in the case of that MS OS.
Fragmentation only occurs after many reads, writes and erasures of files of different lengths by the OS.
So why use a filesystem if you only have one data stream?
Cards can go bad: use a hammer and nail, heat them, saw them, maybe big sparks (have not tried), EMP,
to either physically damage the on-board controller or cause enough charge change in the storage cells.

Other than that you can drop them and shake them, unlike harddisks.





Quote:
I record a lot on FLASH, and now I am talking about SDcards, and if you check
write call error return you will know
when a card is defective, or simply full.
And look at that link, not all cards are the same and allow the same random
access.

So?


Think about it: All those digital cameras, HD recorders, no problem.

Where do you get the idea there is "no problem"???


Because there is no problem :-)

I have SDcards in use for 6 years or more all day, cheap ones too, and those still work.
All old cards from video cameras still work.
I do make backups on harddisk and optical though, so I can compare.


Quote:
As to writing to flash in a micro, I do not do that,
I do not like the idea of boot loaders and sending the executable over the
net, encrypted or not.

Simply hand or mail a new chip.
The BIG advantage of that is, that if things do not work for any reason (and
it would not be the first time a software update
introduces new bugs), then you can simply put the old chip back.
No FLASH is really reliable,

Agreed on this point.


I think that line by me, 'No FLASH is really reliable,' can be interpreted two ways; I did mean it without the 'No' :-)


Quote:
As to dinos are dead: in my view many things can be done faster and more reliably
by small micros or FPGAs than by huge
multi-tasking multi-cores running Linux.
The software in a few Microchip PICs for my drone is an example of that.

Horses for courses. The only reason for using an OS is if it provides some
functionality that is hard to get any other way. If you want to communicate
over wide bandwidth wifi or any of a dozen other interfaces that an OS provides
easily, then an OS is justified. Using Linux simply because you can
get a processor on the FPGA chip is not a very good reason for using it with
Linux.

By the same token, many people have a mistaken impression that only MCUs can
be small and cheap. Sure, there are no $0.25 FPGAs available in an 8 pin
DIP, but for many lower end apps a $2 FPGA works as well if not better than
a $2 MCU. It's not uncommon to have some unique interface that is hard to
do in an MCU even with all the various peripherals on chip.


I think we agree on the main point, apart from SDcards perhaps. What is your problem with SDcards?
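
As a concrete version of the raw, filesystem-less logging described above: fixed-size records are staged into 512-byte blocks and written to the card sequentially, and every write's error return is checked so a defective or full card is noticed immediately. sd_write_block() is a hypothetical name for the underlying SD/MMC driver; the stub at the bottom just writes into a small RAM array so the sketch runs stand-alone.

/* Sketch: raw sequential data logging to an SD card, one 512-byte block at a
 * time, no filesystem.  sd_write_block() is a placeholder for the real SD/MMC
 * driver; the stub at the bottom fakes a tiny card in RAM so the sketch can
 * be compiled and run anywhere.  All names are hypothetical. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define SD_BLOCK_SIZE   512u
#define LOG_FIRST_BLOCK 1u              /* block 0 kept free for a small header */

bool sd_write_block(uint32_t lba, const uint8_t *buf);

typedef struct {
    uint32_t next_lba;                  /* next block to be written            */
    uint32_t fill;                      /* bytes used in the staging buffer    */
    uint8_t  buf[SD_BLOCK_SIZE];
} logger_t;

static void log_init(logger_t *lg)
{
    lg->next_lba = LOG_FIRST_BLOCK;
    lg->fill = 0;
    memset(lg->buf, 0xFF, sizeof lg->buf);   /* 0xFF padding marks unused space */
}

/* Append one record, flushing full blocks to the card.  Returns false on the
 * first write error so the caller can stop and flag the card as bad or full. */
static bool log_append(logger_t *lg, const void *rec, uint32_t len)
{
    const uint8_t *p = rec;
    while (len) {
        uint32_t room = SD_BLOCK_SIZE - lg->fill;
        uint32_t n = (len < room) ? len : room;
        memcpy(&lg->buf[lg->fill], p, n);
        lg->fill += n;  p += n;  len -= n;
        if (lg->fill == SD_BLOCK_SIZE) {
            if (!sd_write_block(lg->next_lba, lg->buf))
                return false;
            lg->next_lba++;
            lg->fill = 0;
            memset(lg->buf, 0xFF, sizeof lg->buf);
        }
    }
    return true;
}

/* ---- RAM stand-in for the card, just so the sketch runs on a PC ---- */
#define FAKE_BLOCKS 4u
static uint8_t fake_card[FAKE_BLOCKS][SD_BLOCK_SIZE];

bool sd_write_block(uint32_t lba, const uint8_t *buf)
{
    if (lba >= FAKE_BLOCKS)
        return false;                   /* "card full" / write error           */
    memcpy(fake_card[lba], buf, SD_BLOCK_SIZE);
    return true;
}

int main(void)
{
    logger_t lg;
    uint8_t sample[64] = { 0 };
    log_init(&lg);
    for (int i = 0; i < 40 && log_append(&lg, sample, sizeof sample); i++)
        ;
    printf("stopped at block %lu\n", (unsigned long)lg.next_lba);
    return 0;
}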


Guest

Wed Jan 30, 2019 2:45 pm   



On Wednesday, January 30, 2019 at 3:31:29 AM UTC-5, 69883925...@nospam.org wrote:
Quote:
gnuarm wrote
On Tuesday, January 29, 2019 at 3:59:12 PM UTC-5, 69883925...@nospam.org wrote:
"Bad sector handling" doesn't prevent all data loss. Flash memory is particularly

unreliable perhaps only better than spinning rust. Since sectors can
be lost putting a record within a single sector is not a very reliable way

to prevent data loss.

See the link I gave:

https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey

I don't see anything on error recovery or failure rates.

Internally in the card a failed write to a physical sector results in that sector marked as 'bad' and the
translation table from logical sector to physical sector being updated (also in flash in that card)
to point to a good sector that is then used for write.
If this process fails you get a write error.
Write returning OK is data OK on the card.
In case of data logging, writing sector after sector, or block after block (whatever is best for the card)
is no different from sequentially writing via some OS using some filesystem.
I have inspected and recovered data (images and video) from cards that were erased (report was in sci.crypt)
found all was neatly sequentially written, even in case of that MS OS.
Fragmentation only occurs after many read writes erasures of files of different length by the OS.
So why use a filesystem if you only have one data stream.
Cards can go bad, use a hammer and nail, heat those, saw, maybe big sparks (have not tried), EMP,
to either physically damage the on board controller or cause enough charge change in the storage cells.

For the rest you can drop those, shake those, unlike harddisks,


If the errors are well behaved, then they will be mitigated. But if a sector is not bad at the time of writing, or the error is not apparent when written, and an error only develops later when it is read, the bad-sector handling does not mitigate that problem.

You keep talking about file systems. I've never said anything about file systems. I've simply pointed out that Flash memory is not very reliable compared to the rest of a properly designed electronic system. I've also said nothing that is particular to SD cards.


Quote:
I record a lot on FLASH, and now I am talking about SDcards, and if you check
write call error return you will know
when a card is defective, or simply full.
And look at that link, not all cards are the same and allow the same random
access.

So?


Think about it: All those digital cameras, HD recorders, no problem.

Where do you get the idea there is "no problem"???

Because there is no problem :-)

I have SDcards in use for 6 years or more all day, cheap ones too, and those still work.
All old cards from video cameras still work.
I do make backups on harddisk and optical though, so I can compare.


I don't have a lot of evidence to offer showing the failure rates among flash devices, but I have personally had flash devices fail and I have read many, many warnings that flash devices have a relatively high failure rate and should not be used as solitary backup media for important information. Much of that has been here in this group.

I've also never had a hard drive fail that I can recall. Does that mean I should consider hard drives to be reliable?


Quote:
As to writing to flash in a micro, I do not do that,
I do not like the idea of boot loaders and sending the executable over the
net, encrypted or not.

Simply hand or mail a new chip.
The BIG advantage of that is, that if things do not work for any reason (and
it would not be the first time a software update
introduces new bugs), then you can simply put the old chip back.
No FLASH is really reliable,

Agreed on this point.

I think that line by me 'No FLASH is really reliable,' can be interpreted 2 ways, I did mean without the No :-)


As to dinos are dead: in my view many things can be done faster and more reliably
by small micros or FPGAs than by huge
multi-tasking multi-cores running Linux.
The software in a few Microchip PICs for my drone is an example of that.

Horses for courses. The only reason for using an OS is if it provides some
functionality that is hard to get any other way. If you want to communicate
over wide bandwidth wifi or any of a dozen other interfaces that an OS provides
easily, then an OS is justified. Using Linux simply because you can
get a processor on the FPGA chip is not a very good reason for using it with
Linux.

By the same token, many people have a mistaken impression that only MCUs can
be small and cheap. Sure, there are no $0.25 FPGAs available in an 8 pin
DIP, but for many lower end apps a $2 FPGA works as well if not better than
a $2 MCU. It's not uncommon to have some unique interface that is hard to
do in an MCU even with all the various peripherals on chip.

I think we agree on the main point, apart from SDcards perhaps, what is your problem with SDcards?


I think you have failed to read properly what I have written and have read much that I did not write.

Rick C.

+-- Get 6 months of free supercharging
+-- Tesla referral code - https://ts.la/richard11209
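
One way to act on the point above, that remapping at write time cannot catch corruption which only shows up when the data is read back later: keep two copies of anything important, each with its own checksum, and let the reader take whichever copy still validates. sd_read_block() is again a hypothetical driver name; the stub and test in main() fake two blocks, one deliberately corrupted, to show the fallback working.

/* Sketch: tolerating corruption that only appears at read time by storing two
 * copies of a record in different blocks, each protected by a checksum.
 * sd_read_block() is a placeholder for the real driver; the stub below fakes
 * a two-block card so the sketch runs stand-alone.  All names are hypothetical. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define SD_BLOCK_SIZE 512u

bool sd_read_block(uint32_t lba, uint8_t *buf);

typedef struct {
    uint32_t seq;                        /* sequence number of this record      */
    uint8_t  payload[500];
    uint32_t checksum;                   /* 32-bit sum of all bytes above       */
} stored_rec_t;

static uint32_t sum32(const uint8_t *p, size_t n)
{
    uint32_t s = 0;
    while (n--) s += *p++;
    return s;
}

/* Try the primary copy, then the backup; false only if both fail to validate. */
static bool record_read(uint32_t primary_lba, uint32_t backup_lba, stored_rec_t *out)
{
    const uint32_t lba[2] = { primary_lba, backup_lba };
    uint8_t buf[SD_BLOCK_SIZE];

    for (int i = 0; i < 2; i++) {
        if (!sd_read_block(lba[i], buf))
            continue;                                   /* read error: try the other copy */
        memcpy(out, buf, sizeof *out);
        if (sum32((const uint8_t *)out, offsetof(stored_rec_t, checksum)) == out->checksum)
            return true;
    }
    return false;
}

/* ---- RAM stand-in: block 0 holds a corrupted copy, block 1 a good one ---- */
static uint8_t fake_card[2][SD_BLOCK_SIZE];

bool sd_read_block(uint32_t lba, uint8_t *buf)
{
    if (lba >= 2) return false;
    memcpy(buf, fake_card[lba], SD_BLOCK_SIZE);
    return true;
}

int main(void)
{
    stored_rec_t rec = { .seq = 42 };
    memset(rec.payload, 0x5A, sizeof rec.payload);
    rec.checksum = sum32((const uint8_t *)&rec, offsetof(stored_rec_t, checksum));

    memcpy(fake_card[0], &rec, sizeof rec);
    fake_card[0][100] ^= 0xFF;                          /* latent corruption in the primary */
    memcpy(fake_card[1], &rec, sizeof rec);

    stored_rec_t back = { 0 };
    bool ok = record_read(0, 1, &back);
    printf("recovered: %d, seq = %lu\n", ok, (unsigned long)back.seq);
    return 0;
}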


Guest

Wed Jan 30, 2019 3:45 pm   



gnuarm wrote
Quote:
I don't have a lot of evidence to offer showing the failure rates among flash
devices, but I have personally had flash devices fail and I have read many,
many warnings that flash devices have a relatively high failure rate and
should not be used as solitary backup media for important information. Much
of that has been here in this group.


Well, I am not much into arguing for the sake of arguing.
Sure there are many warnings for everything, including glowballworming.
I have had a harddisk fail simply because I dropped it;
the FLASH-based memory cards I have dropped many times did survive.
I have seen optical media fail on write (I always run a compare against the source),
likely due to dust particles, and older ones that had mold spots in them that I returned.
I have seen writes to SDcard with 'dd' fail on verification in read-back,
but there were too many other 'on the border' things happening to blame the card (card images are a different matter).
I have never seen a data record fail to an SDcard in my designs.

That all means very little, but the mechanical strength, weight, speed (no seek times) and low power use
make these cards a first choice among those media.

You stated that sequentially writing to FLASH was dangerous;
I hope I made it clear to you and everybody else that that is bull.
Every OS also does that.

It's simple.


Guest

Wed Jan 30, 2019 4:45 pm   



On Wednesday, January 30, 2019 at 9:44:05 AM UTC-5, 69883925...@nospam.org wrote:
Quote:

You stated that sequentially writing to FLASH was dangerous,
I hope I made it clear to you and everybody else that is bull.
Every OS also does that.


I've never said that. My point is that Flash is not reliable. Rely on it at your own risk.


> It's simple.

Yes, it is.


Rick C.

+-+ Get 6 months of free supercharging
+-+ Tesla referral code - https://ts.la/richard11209


Guest

Wed Jan 30, 2019 6:45 pm   



On Wednesday, January 30, 2019 at 12:12:11 PM UTC-5, 69883925...@nospam.org wrote:
Quote:
On a sunny day (Wed, 30 Jan 2019 07:21:55 -0800 (PST)) it happened
gnuarm wrote
On Wednesday, January 30, 2019 at 9:44:05 AM UTC-5, 69883925...@nospam.org wrote:

You stated that sequentially writing to FLASH was dangerous,
I hope I made it clear to you and everybody else that is bull.
Every OS also does that.

I've never said that.

quote
but your idea of dedicating a sector per data record
is poor in the extreme. Sectors on Flash are not very reliable. It is a
good idea to have a flash file system to manage the good/bad blocks for you.
I guess you can do that yourself, but are you thinking of that?
end quote


When you wrote that, you understood neither how an OS works nor how the SDcard's internals work.
I hope you do now; otherwise read that link again:
https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey
For SDcards the bad-block management is in the card.


You are making my case. I didn't say anything about "sequentially writing to FLASH was dangerous".

I didn't say anything about how an "operating system" works. Or do I misunderstand what "OS" means? As usual, you read what you want to read.

Try opening your mind to understand what I meant, not what you want my words to mean.


Rick C.

++- Get 6 months of free supercharging
++- Tesla referral code - https://ts.la/richard11209
