EDAboard.com | EDAboard.de | EDAboard.co.uk | WTWH Media

Using many cheap accelerometers to reduce error


elektroda.net NewsGroups Forum Index - Electronics Design - Using many cheap accelerometers to reduce error



Guest

Mon Feb 11, 2019 4:45 am   



klaus.kragelund_at_gmail.com wrote in
news:952919e3-708f-472a-94e1-f91ecad67965_at_googlegroups.com:

Quote:
On Sunday, 10 February 2019 05:08:13 UTC+1,
DecadentLinux...@decadence.org wrote:
Sylvia Else <sylvia_at_email.invalid> wrote in news:gc9co2Fmqn1U1
@mid.individual.net:

Any approach using large numbers of less accurate parts is not
guaranteed to give you the accuracy you want.


One exception might be when paralleling resistors. 1% resistors
in parallel will generally be more accurate than the original
spec, maybe due to the way precision-classed resistor sets get
matched and culled.

One can generally count on the members of the set to actually be
more accurate than the spec they claim to be at least as good as.

That is not my experience. Resistors are tuned in the process,
which means that one lot will typically have more or less the same
distribution, but offset from the nominal. If they measure it and
it's within the specs (1%), then they do not alter the process to
pull it in. They just press the big "GO" button.

Thus more resistors in parallel get you nowhere

Cheers

Klaus


They do when they are multi-gigohm. So two 10G in parallel make
a closer-than-rated 5G resistor. Usually.

Sylvia Else
Guest

Mon Feb 11, 2019 4:45 am   



On 10/02/2019 2:38 pm, gnuarm.deletethisbit_at_gmail.com wrote:
Quote:
On Saturday, February 9, 2019 at 8:23:20 PM UTC-5, Sylvia Else wrote:
On 10/02/2019 9:39 am, George Herold wrote:
On Saturday, February 9, 2019 at 4:53:39 PM UTC-5, gnuarm.del...@gmail.com wrote:
On Saturday, February 9, 2019 at 3:47:32 PM UTC-5, JS wrote:
Hi all,

Given that the random error in a sample is proportional to 1/sqrt(sample size), does having many accelerometers and then averaging their output therefore reduce their overall error?

So would it be worthwhile to have say 100 or 1000 cheap accelerometers rather than one expensive one like a laser ring gyro?

It would if the errors are all random. Are they?

Rick C.

Right, systematic vs random differences.
A systematic error (like a DC offset vs. bias
voltage (or temperature)) Can't be improved
(as much) with averaging.

George H.

Even if there is no systematic error, the random errors may still be
skewed in one direction, since the set of all sets of random values has
to contain sets with that property.

Seems to me that if you need a particular accuracy, your options are
limited.

a) Get a part specified to have that accuracy.

b) Get a part not so specified, but which is specified not to drift, and
which you have measured to determine that it has the required accuracy [*].

Any approach using large numbers of less accurate parts is not
guaranteed to give you the accuracy you want.

Sylvia.

[*] If such a part even exists - why wouldn't the manufacturer measure the
part and sell it at a higher price?

If the starting point is that the error is random, your arguments all fade away. Averaging many measurements will result in a lower range of error with some probability. No measurement is contained in an error window with 100% probability.

Rick C.


Random doesn't mean equally distributed. If you found that every set you
obtained were equally distributed, you'd be forced to conclude that they
were not random.

So the question becomes that of how likely it is that a random selection
will be distributed in a way that the combined error exceeds what you want.

If I select 1% resistors, I'll be pretty annoyed if more than a very
small number show a 2% error.

By contrast, if I select 10% resistors, and group them in sets of ten in
parallel, I'll expect the combined result to exceed a 2% error quite often.

Sylvia.
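Sylvia's parallel-resistor scenario is easy to test numerically. Here is a minimal Monte-Carlo sketch, assuming a hypothetical 1 kOhm nominal value and independent errors drawn uniformly from +/-10%; real reels are often offset from nominal as a lot, which this model deliberately ignores:

```python
import random
import statistics

random.seed(0)
NOMINAL = 1000.0   # ohms, hypothetical nominal value
TOL = 0.10         # +/-10% parts
N_PAR = 10         # resistors per parallel group
TRIALS = 10000

def parallel(rs):
    # equivalent resistance of resistors in parallel
    return 1.0 / sum(1.0 / r for r in rs)

errors = []
for _ in range(TRIALS):
    rs = [NOMINAL * (1 + random.uniform(-TOL, TOL)) for _ in range(N_PAR)]
    # ten equal-nominal resistors in parallel -> nominal/10
    err = parallel(rs) / (NOMINAL / N_PAR) - 1.0
    errors.append(err)

spread = statistics.pstdev(errors)
worst = max(abs(e) for e in errors)
print(f"std dev of combined error: {spread:.4f}")
print(f"worst case over {TRIALS} trials: {worst:.4f}")
```

Under these (idealized) independence assumptions the combined spread shrinks by roughly sqrt(10); with a lot-wide offset, as several posters note, it would not.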


Guest

Mon Feb 11, 2019 5:45 am   



On Mon, 11 Feb 2019 14:11:22 +1100, Sylvia Else <sylvia_at_email.invalid>
wrote:

Quote:
<snip>
If I select 1% resistors, I'll be pretty annoyed if more than a very
small number show a 2% error.

By contrast, if I select 10% resistors, and group them in sets of ten in
parallel, I'll expect the combined result to exceed a 2% error quite often.


I'd be very surprised if you actually did get 2% error. I'd expect a
reel to be skewed one way or another with perhaps very few individual
resistors in tens of reels within 5%, even.


Guest

Mon Feb 11, 2019 6:45 am   



On Monday, February 11, 2019 at 2:59:12 PM UTC+11, k...@notreal.com wrote:
Quote:
<snip>

I'd be very surprised if you actually did get 2% error. I'd expect a
reel to be skewed one way or another with perhaps very few individual
resistors in tens of reels within 5%, even.


If the errors were normally distributed and the 10% tolerance represented three standard deviations away from the mean, about 68% of the resistors in the sample would lie within +/-3.3% (one standard deviation) of the mean. Putting ten resistors out of such a selection in parallel would mean that about 68% of your samples of ten in parallel would be within roughly +/-1% of the mean.

There's no obligation on the manufacturer to make the resistors in a way that generates a normal distribution, and some manufacturers are claimed to measure all the resistors that they did make and sort them into bins.

The +/-1% bin would then get the centre of the distribution, the +/-2% bin would get the two bands around it, the +/-5% bin gets the next two bands out from there, and the +/-10% bin gets all the resistors between +5% and +10% as well as all the resistors between -5% and -10%.

I've no idea precisely what they actually do, but anybody whose distribution is centred 5% or more away from the target value would end up throwing out a lot of resistors.

--
Bill Sloman, Sydney
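Bill's back-of-envelope numbers can be reproduced in closed form under his stated assumptions (normal distribution, tolerance equal to three standard deviations, independent parts):

```python
import math

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_within(limit_pct, sigma_pct):
    # probability a zero-mean normal error lies within +/-limit
    return phi(limit_pct / sigma_pct) - phi(-limit_pct / sigma_pct)

sigma = 10.0 / 3.0                  # +/-10% tolerance taken as 3 sigma
p_one = prob_within(sigma, sigma)   # one part within 1 sigma (~3.3%)
# paralleling ten independent equal parts shrinks sigma by sqrt(10)
p_ten = prob_within(1.0, sigma / math.sqrt(10))

print(f"single part within +/-3.3%: {p_one:.1%}")
print(f"ten in parallel within +/-1%: {p_ten:.1%}")
```

Both probabilities come out near two-thirds, matching the figure in the post; the whole calculation collapses if the manufacturer bins out the centre of the distribution.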


Guest

Mon Feb 11, 2019 7:45 am   



On Sunday, February 10, 2019 at 7:51:42 PM UTC-5, k...@notreal.com wrote:
Quote:
On Sun, 10 Feb 2019 08:28:39 -0800 (PST),
gnuarm.deletethisbit_at_gmail.com wrote:

On Sunday, February 10, 2019 at 10:57:41 AM UTC-5, John Larkin wrote:
On Sat, 9 Feb 2019 21:58:58 -0800 (PST), JS <js5071921_at_gmail.com
wrote:

On Sunday, February 10, 2019 at 1:13:07 AM UTC+2, John Larkin wrote:
On Sat, 9 Feb 2019 12:47:28 -0800 (PST),
wrote:

Hi all,

Given that the random error in a sample is proportional to 1/sqrt(sample size), does having many accelerometers and then averaging their output therefore reduce their overall error?

So would it be worthwhile to have say 100 or 1000 cheap accelerometers rather than one expensive one like a laser ring gyro?

Thanks.

sqrt(1000) is only 32. I'd expect the ring gyro to be vastly better
than a cheap MEMS or some such.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

OK I did the sums. Based on the random walk of a laser ring gyro (0.0035 deg/sqrt-hour) and that of a MEMS accelerometer (2.25 deg/sqrt-hour) [1], you need about 400k MEMS accelerometers to approach the accuracy of a laser ring gyro.

It sounds like a lot of components to solder together but if done in a chip fab, it should be possible.

Is it possible to make a commercial accelerometer with no export restrictions by using such an array? Or will ITAR or the like be slapped on such a device once its accuracy is published in a brochure?

Refs:
[1] Honeywell GG1320AN Digital Laser Gyro brochure
[2] Error and Performance Analysis of MEMS-based Inertial Sensors with a Low-Cost GPS Receiver. Park, M & Gao, Y. [2008] Sensors Vol 8

If the MEMS parts use vibrating cantilevers, they would want to sync
up. I don't know if that is good or bad.

They might "want" to sync up, but I'm not sure they would. If the platform has rotational acceleration there would be a difference in the acceleration on each device depending on its distance from the center. That would keep them out of sync.

Would it? If they're on the same platform, the coupling is the same
no matter what additional force is on them (superposition).


Eh? If there is rotation each sensor will have a separate acceleration with different distances from the center of rotation. The different forces stimulate the different sensors to different frequencies. Don't the frequencies vary with force?

Rick C.
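Rick's point can be quantified with a = omega^2 * r: sensors at slightly different radii on a spinning platform see measurably different centripetal accelerations. A sketch with made-up numbers (10 rev/s spin rate, sensors at 10 mm and 11 mm from the axis - both are assumptions, not values from the thread):

```python
import math

omega = 2 * math.pi * 10.0        # assumed spin rate: 10 rev/s, in rad/s
for r_mm in (10.0, 11.0):         # hypothetical sensor radii, mm
    a = omega ** 2 * (r_mm / 1000.0)   # centripetal acceleration, m/s^2
    print(f"r = {r_mm:4.1f} mm -> a = {a:5.2f} m/s^2")
```

Even a 1 mm spacing difference gives about a 10% difference in acceleration at these numbers, which is Rick's argument for why identical cantilevers would be driven differently.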


Guest

Mon Feb 11, 2019 7:45 am   



On Sunday, February 10, 2019 at 10:11:29 PM UTC-5, Sylvia Else wrote:
Quote:
<snip>
If I select 1% resistors, I'll be pretty annoyed if more than a very
small number show a 2% error.

By contrast, if I select 10% resistors, and group them in sets of ten in
parallel, I'll expect the combined result to exceed a 2% error quite often.

Sylvia.


What's your point? Did you do the math correctly? Maybe that's why you don't see the right values.

To get a five-fold improvement in accuracy, I believe 10 resistors is not the correct number. I'd have to do some digging to get the right number and come up with a probability of being within 2%, so I'll let you do your own homework. Bottom line is you can get whatever accuracy you desire to whatever probability you desire by combining resistors or any other components if the values are randomly distributed with a known average.

Rick C.


Guest

Mon Feb 11, 2019 8:45 am   



On Monday, February 11, 2019 at 5:30:27 PM UTC+11, gnuarm.del...@gmail.com wrote:
Quote:
<snip>

What's your point? Did you do the math correctly? Maybe that's why you don't see the right values.

To get a five-fold improvement in accuracy, I believe 10 resistors is not the correct number. I'd have to do some digging to get the right number and come up with a probability of being within 2%, so I'll let you do your own homework.


It's 25. The probability of the set being within 2% of the nominal value depends on the way the resistance values are distributed within the set of parts you are selecting from, which is not guaranteed by anybody, and I've never seen it published.

> Bottom line is you can get whatever accuracy you desire to whatever probability you desire by combining resistors or any other components if the values are randomly distributed with a known average.

Sadly, that's not what the manufacturers claim. All they say is that none of the resistors that they sell you with +/-10% tolerance lies outside that tolerance.

The joke example is where they have a process that generates nice stable resistors but with a perfect Gaussian distribution around the nominal value, and they measure everything.

+/-1% are samples taken from the peak of the distribution, and the probability distribution would be pretty much flat.

+/-2% are actually -2% to -1% and +1% to +2%, with nothing within the +/-1% band.

And so on.

In reality, tight tolerance resistors are almost always trimmed after manufacture, but the faster you do it, the less precise the trimming process is. It presumably leads to rather messy statistics.

--
Bill Sloman, Sydney
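Bill's figure of 25 follows directly from the 1/sqrt(N) rule: a k-fold reduction in random error needs k squared independent parts. A one-liner to check it:

```python
import math

def parts_needed(k):
    # 1/sqrt(N) averaging: a k-fold reduction in random error needs k^2 parts
    return math.ceil(k * k)

print(parts_needed(5))   # five-fold improvement, e.g. 10% parts down to 2%
```

As the posts above stress, this only holds if the part values are independent and centred on nominal, which manufacturers do not guarantee.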


Guest

Mon Feb 11, 2019 9:45 am   



On Sun, 10 Feb 2019 22:57:40 -0800 (PST), bill.sloman_at_ieee.org wrote:

Quote:
<snip>
Sadly, that's not what the manufacturers claim. All they say is that none of the resistors that they sell you with +/-10% tolerance lies outside that tolerance.


If the resistor is supposed to belong to the E12 series, an inaccuracy
of more than +/-10% would fall into the next or previous bin and should
be labeled as such.

Quote:
The joke example is where they have a process that generates nice stable resistors but with a perfect Gaussian distribution around the nominal value, and they measure everything.

+/-1% are samples taken from the peak of the distribution, and the probability distribution would be pretty much flat.

+/-2% are actually -2% to -1% and +1% to +2%, with nothing within the +/-1% band.


Do they really measure each and every such low-cost component
individually?



Guest

Mon Feb 11, 2019 11:45 am   



On Monday, February 11, 2019 at 7:29:20 PM UTC+11, upsid...@downunder.com wrote:
Quote:
<snip>
If the resistor is supposed to belong to the E12 series, an inaccuracy
more than +/-10 % would be falling in next or previous bin and should
be labeled as such.


That depends on when the value is marked on the resistor - before or after measurement.

Quote:
<snip>
Do they really measure each and every such low cost component
individually?


It seems unlikely. I did label it a joke example.

As I've posted elsewhere in the thread, I've no idea precisely what they actually do.

--
Bill Sloman, Sydney


Guest

Mon Feb 11, 2019 4:45 pm   



bill.sloman_at_ieee.org wrote in
news:14361849-d4c5-4f3a-a6b9-7b65f2ac7300_at_googlegroups.com:

Quote:
Depend when the value is marked on the resistor - before or after
measurement.


He said E12 series.

Resistors all follow a standard progression in values.

Usually a full deviation from center spec will NOT take the part into
the next value bin.

If the deviation is that wide, then the table of values it fits into
would be wider, IOW not E12.

Phil Hobbs
Guest

Mon Feb 11, 2019 5:45 pm   



On 2/9/19 6:13 PM, John Larkin wrote:
Quote:
On Sat, 9 Feb 2019 12:47:28 -0800 (PST), JS <js5071921_at_gmail.com
wrote:

Hi all,

Given that the random error in a sample is proportional to 1/sqrt(sample size), does having many accelerometers and then averaging their output therefore reduce their overall error?

So would it be worthwhile to have say 100 or 1000 cheap accelerometers rather than one expensive one like a laser ring gyro?

Thanks.

sqrt(1000) is only 32. I'd expect the ring gyro to be vastly better
than a cheap MEMS or some such.


Of course gyros don't measure acceleration. ;-)


Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

Phil Hobbs
Guest

Mon Feb 11, 2019 5:45 pm   



On 2/10/19 12:58 AM, JS wrote:
Quote:
<snip>
OK I did the sums. Based on the random walk of a laser ring gyro (0.0035 deg/sqrt-hour) and that of a MEMS accelerometer (2.25 deg/sqrt-hour) [1], you need about 400k MEMS accelerometers to approach the accuracy of a laser ring gyro.

It sounds like a lot of components to solder together but if done in a chip fab, it should be possible.

Is it possible to make a commercial accelerometer with no export restrictions by using such an array? Or will ITAR or the like be slapped on such a device once its accuracy is published in a brochure?

Refs:
[1] Honeywell GG1320AN Digital Laser Gyro brochure
[2] Error and Performance Analysis of MEMS-based Inertial Sensors with a Low-Cost GPS Receiver. Park, M & Gao, Y. [2008] Sensors Vol 8


If you do it all on one wafer, you'll certainly have systematic error.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
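JS's "about 400k" estimate quoted above is easy to check: with 1/sqrt(N) averaging of independent angle-random-walk errors, matching the ring gyro takes (2.25/0.0035) squared devices:

```python
# Checking JS's estimate using the figures quoted in the thread
mems_arw = 2.25       # deg/sqrt-hour, MEMS part (ref [2] above)
ring_arw = 0.0035     # deg/sqrt-hour, Honeywell GG1320AN (ref [1] above)

# 1/sqrt(N) averaging: need (ratio)^2 independent devices to match
n = (mems_arw / ring_arw) ** 2
print(f"devices needed: {n:,.0f}")
```

This comes out to roughly 413,000 devices, consistent with the 400k figure, and it assumes the device errors are fully independent, which Phil's one-wafer point argues against.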


Guest

Tue Feb 12, 2019 12:45 am   



On Tuesday, February 12, 2019 at 2:01:40 AM UTC+11, DecadentLinux...@decadence.org wrote:
Quote:
bill.sloman_at_ieee.org wrote in
news:14361849-d4c5-4f3a-a6b9-7b65f2ac7300_at_googlegroups.com:

Depend when the value is marked on the resistor - before or after
measurement.


He said E12 series.

Resistors all follow a standard progression in values.

Usually a full deviation from center spec will NOT take the part into
the next value bin.


If the resistor gets its value marking before its resistance is measured, you wouldn't put it into the next bin up, even if it did measure out as qualifying.

<snip>

--
Bill Sloman, Sydney


Guest

Tue Feb 12, 2019 1:45 am   



On Tuesday, February 12, 2019 at 10:50:30 AM UTC+11, klaus.k...@gmail.com wrote:
Quote:
<snip>
Practical example:

https://www.reddit.com/r/dataisbeautiful/comments/58y1yc/how_resistors_were_are_manufactured_oc/

Most nowadays look like the China one. Offset gaussian


Not all that gaussian. There's perceptible skew, and probably some kurtosis as well. The numbers of samples in each bin aren't large, so the standard deviations on each would be four or five, but my guess would be that there's something slightly odd going on - perhaps samples drawn from two adjacent (but slightly offset) gaussians.

--
Bill Sloman, Sydney



Guest

Tue Feb 12, 2019 1:45 am   



On Tuesday, 12 February 2019 00:13:26 UTC+1, bill....@ieee.org wrote:
Quote:
<snip>
If the resistor gets its value marking before its resistance was measured, you wouldn't put it into the next bin up, even if it did measure out as qualifying.

snip

Practical example:


https://www.reddit.com/r/dataisbeautiful/comments/58y1yc/how_resistors_were_are_manufactured_oc/

Most nowadays look like the China one. Offset gaussian.

Cheers

Klaus
