E96 Series Computation

Fred Bloggs

Robert Monsen wrote:

Ok, caching is usually a good strategy.

Sadly, I don't have a BASIC interpreter, so I guess I'm stuck with the
clumsy table lookup. It's ok, since I already have all the values typed
(actually cut and pasted) in here:

http://home.comcast.net/~rcmonsen/resistors.html

You have a cute routine, though. Curve fitting is fun. Somebody had a lot
of fun doing it, I think. That "if 919, output 920" is hilarious.

I'm hoping Bloggs posts the simpler scheme he claims to know.
If I said I have the answer then that means I have the answer.

For the **E96** series the basic calculation is something like this,
where logs are base 10, R is the input value normalized to 100<=R<1000,
and IROUND is the round-to-nearest-integer function:

X=1.5*LOG10(R) % compute and scale log base 10 of R
F=FRACT(X) % fractional part of X
Y=INT(X) % integer part of X
RSTD=IROUND(10^((Y+ IROUND(64*F)/64)/1.5)) % standard value output

In words, to convert a normalized R in the range 100 <= R < 1000 to a
standard value with a hand calculator (sketched in code below):

1) compute the logarithm base 10 and multiply by 1.5
2) note the integer portion and subtract it off
3) multiply the remaining fraction by 64
4) mentally round that result up/down to the nearest integer and then
divide by 64
5) add back in the original integer portion subtracted in step 2)
6) divide the above by 1.5
7) take the antilog (raise 10 to this power)
8) round the result to the nearest integer
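
A minimal sketch of the same steps in Python (the name e96_nearest is
mine; note Python's round() breaks .5 ties to even, which the pseudocode
above leaves unspecified):

import math

def e96_nearest(r):
    """Nearest E96 value for r normalized to 100 <= r < 1000."""
    x = 1.5 * math.log10(r)              # step 1: log10, scaled by 1.5
    y = math.floor(x)                    # step 2: integer part
    f = x - y                            #         fractional part
    q = round(64 * f) / 64               # steps 3-4: quantize fraction to 1/64
    return round(10 ** ((y + q) / 1.5))  # steps 5-8: restore, unscale, antilog, round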

It turns out that 920 is a genuine error in the original tabulation of
E192, not a part of the E96 and lower-precision series, and 919 should be
the E192 entry instead, unless there is an ancient multi-point
smoothing interpolation I am overlooking. This, plus a bit more logic to
filter the calculated RSTD, produces the correct results, and there is a
very simple theoretical basis for this that I am certain follows the
original thinking and computation. The original table was not generated
using the above formula- the formula is a luxury not available to the
original number crunchers and is an analog of their computation. The
formulation for the other series is similar, and for any other series
created with a specified tolerance it makes for an elegant
generalization of which the E96 formula is just one instance.

Some numbers from pi (31415927), say, to spot check 314, 141, 415, 159, 592, 927:

314 X=3.7453945 F=0.7453945 Y=3 RSTD=316
141 X=3.2238287 F=0.2238287 Y=3 RSTD=140
415 X=3.9270721 F=0.9270721 Y=3 RSTD=412
159 X=3.3020957 F=0.3020957 Y=3 RSTD=158
592 X=4.1584826 F=0.1584826 Y=4 RSTD=590
927 X=4.4506196 F=0.4506196 Y=4 RSTD=931

These results are right on. The formula has been checked in other ways
to confirm that 1) standard values are always returned and 2) deviations
from the absolute closest standard value for arbitrary input R are
hopelessly lost as noise compared to the tolerance.
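
Running the same spot-check digits through the e96_nearest sketch above
reproduces the table:

for r in (314, 141, 415, 159, 592, 927):
    print(r, e96_nearest(r))   # 316, 140, 412, 158, 590, 931, per the table above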
 
Fred Bloggs wrote:
[E96 formula post quoted in full; snipped]
Great! Thanks.

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 
The Phantom wrote:
I guess I don't understand what your algorithm is supposed to do. Assume for example that
the design of a filter required an exact value of 835.3 ohms. Your algorithm says the
nearest standard value (in the E96 series) is 825 ohms. But wouldn't 845 ohms be closer?
This whole thread rather amused me, as whatever you use to calculate the
range, if it doesn't agree with how whoever set the standard calculated
it, you're sunk. Assume the values are arbitrary and use a look up table.

Paul Burke
 
The Phantom wrote:
[quoted E96 formula post snipped]
I guess I don't understand what your algorithm is supposed to do. Assume for example that
the design of a filter required an exact value of 835.3 ohms. Your algorithm says the
nearest standard value (in the E96 series) is 825 ohms. But wouldn't 845 ohms be closer?
Statistically, no- it is a +/- 1.2% split. The maximum error due to
tolerance is 2.2% for either standard value selection, and the nominal
value advantage of the 845 over the 825 is only 0.07%, a fraction 1/30
of that. You can't make a 0.01% tolerance final value out of 1% parts-
choose another resistor line. If you want to delude yourself into
thinking you have the closest value then you would extend the algorithm
to compute:
RSTD1=IROUND(10^((Y+ FLOOR(64*F)/64)/1.5)) and
RSTD2=IROUND(10^((Y+ CEILING(64*F)/64)/1.5))
Choose whichever of RSTD1 and RSTD2 minimizes |RSTD-R|.
Happy now?
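
A sketch of that two-candidate extension in the same Python notation
(e96_closest is my name for it):

import math

def e96_closest(r):
    """Both E96 neighbours of r (100 <= r < 1000) via FLOOR/CEILING
    quantization of the fraction, returning whichever is nearer to r."""
    x = 1.5 * math.log10(r)
    y = math.floor(x)
    f = x - y
    rstd1 = round(10 ** ((y + math.floor(64 * f) / 64) / 1.5))
    rstd2 = round(10 ** ((y + math.ceil(64 * f) / 64) / 1.5))
    return rstd1 if abs(rstd1 - r) <= abs(rstd2 - r) else rstd2

For 835.3 this picks 845 rather than 825, which is the point of The
Phantom's example.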
 
This whole thread rather amused me, as whatever you use to calculate the
range, if it doesn't agree with how whoever set the standard calculated
it, you're sunk. Assume the values are arbitrary and use a look up table.
We know how to calculate *exact* standard resistor values and that is no
thanks to dull, unimaginative, smug and worthless people like you. Take
your so-called advice and shove it up your butt.
 
The Phantom wrote:
[quoted exchange on the E96 formula snipped]
You said in another post:

"I did post it under "E96 series computation", E192 and E48 left as
exercise for the student:"

It looks like the changes for E48 and E192 would be:

X=1.5*LOG10(R) % compute and scale log base 10 of R
F=FRACT(X) % fractional part of X
Y=INT(X) % integer part of X
RSTD=IROUND(10^((Y+ IROUND(32*F)/32)/1.5)) % standard value output

and

X=1.5*LOG10(R) % compute and scale log base 10 of R
F=FRACT(X) % fractional part of X
Y=INT(X) % integer part of X
RSTD=IROUND(10^((Y+ IROUND(128*F)/128)/1.5)) % standard value output

It would seem that a version for the E24 series would be:

X=1.5*LOG10(R) % compute and scale log base 10 of R
F=FRACT(X) % fractional part of X
Y=INT(X) % integer part of X
RSTD=IROUND(10^((Y+ IROUND(16*F)/16)/1.5)) % standard value output

But this version, when given an input value of 361, returns 348, which isn't a
standard value in the E24 series.

Do you have another algorithm which works for the E24 and E12 series?
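
The three variants differ only in the quantization step (32, 64, 128,
i.e. Exx/1.5 steps), so a generalized sketch in the same Python notation
might be (it reproduces the published tables only for E48 and up, for
the reasons given in the reply below):

import math

def exx_nearest(r, series=96):
    """Snap r (100 <= r < 1000) to the nearest Exx value by quantizing
    the scaled log to series/1.5 steps.  Reproduces the published tables
    only for E48/E96/E192; E24 and below need the exceptions discussed
    in the reply below."""
    n = series / 1.5                     # 32, 64 or 128 quantization steps
    x = 1.5 * math.log10(r)
    y = math.floor(x)
    f = x - y
    return round(10 ** ((y + round(n * f) / n) / 1.5))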

Okay- you're getting close. E24 and below depart from the formula due to
the historical assignment of 22, 33, and 47 as standard values, and the
fact that the span is 10-100 instead of 100-1000, so only two digits.
One idea that they did not depart from was the Pascal Triangle concept.
Here each row represents an Exx series, and successive rows are formed by
computing the geometric mean of the two adjacent elements from the row
above. You can think of it as two triangles: one where all values are
represented to infinite precision, and a second where all values are
standard values. Only the infinite precision triangle is used for
computation of successive rows; the standard value triangle is derived
from it by a combination of rounding and historical assignment applied to
the corresponding infinite precision elements. Under no circumstances is
the standard value triangle used for computation of successive rows;
that triangle is only used to produce the "tables". E48 is the first row
in the standard value triangle where rounding of the infinite precision
value is the governing assignment; for E24 and below it is a
combination of rounding and historical assignment of the infinite
precision values. Getting back to that E192 assignment of 920- I can see
they did an arithmetic mean instead of the geometric mean, so that is
clearly a goof. So I suppose the simplest way to handle these
discrepancies- 920 in E192, and only a few values in E24 and below- is to
first compute the infinite precision value within the infinite precision
triangle, and then use a CASE type program structure to catch the
historical assignments, using the ROUND function elsewhere.
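
A sketch of one decade of that two-triangle construction, under my
reading of the description (exact values only; rounding and the
historical overrides would be applied afterwards to get the published
tables):

import math

def next_row(row, decade_end=1000.0):
    """Double one decade of exact (unrounded) Exx values by inserting
    the geometric mean between each adjacent pair, wrapping the last
    element against the start of the next decade."""
    out = []
    for i, v in enumerate(row):
        out.append(v)
        nxt = row[i + 1] if i + 1 < len(row) else decade_end
        out.append(math.sqrt(v * nxt))
    return out

Between the exact E96 neighbours 908.5 and 930.6, the geometric mean is
919.5, which rounds to 919; the arithmetic mean of the rounded values
909 and 931 is 920, consistent with Fred's explanation of the odd E192
entry.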
 
Fred Bloggs <nospam@nospam.com> wrote:


So I suppose the simplest way to handle these
discrepancies, 920 in E192, and only a few values in E24 and below is to
first compute the infinite precision value within the infinite precision
triangle, and then use a CASE type program structure to catch the
historical assignments, using the ROUND function elsewhere.
The simplest way to handle the whole problem is to use a table.

A table containing E3 to E192 ranges occupies 900 bytes.

A function which, given a range and value, returns the closest value (and
the next >= and <= values, to allow simple iteration of a range in either
direction) occupies 540 bytes of 32-bit Intel code.

It was never coded particularly for speed. On my 2GHz Athlon XP system it
finds around 2.6 million E96 values per second.
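
nospam's code isn't shown; a rough Python sketch of the same table-lookup
idea (one decade of E24 as the example table, names mine) might look like:

import bisect

E24 = [10, 11, 12, 13, 15, 16, 18, 20, 22, 24, 27, 30, 33, 36,
       39, 43, 47, 51, 56, 62, 68, 75, 82, 91]   # one decade; real code holds E3..E192

def lookup(table, value):
    """Nearest table entry to value (assumed within the table's decade),
    plus its lower and upper neighbours for stepping through the series."""
    i = bisect.bisect_left(table, value)
    below = table[max(i - 1, 0)]
    above = table[min(i, len(table) - 1)]
    nearest = below if value - below <= above - value else above
    j = table.index(nearest)
    lower = table[j - 1] if j > 0 else table[-1] / 10             # wrap to previous decade
    upper = table[j + 1] if j + 1 < len(table) else table[0] * 10  # wrap to next decade
    return nearest, lower, upper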
 
[Fred's Pascal triangle post quoted in full; snipped]
Do you know the reason for the peculiar historical assignments?
 
nospam wrote:
[quoted table-lookup reply snipped]
You're missing the point-
 
The Phantom wrote:
[Fred's Pascal triangle post quoted in full; snipped]
Do you know the reason for the peculiar historical assignments?
I have no idea- apparently these were popular values before
standardization was even an idea.
 
