Preferred resistor range

On Tue, 15 Mar 2005 19:58:14 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

"Fred Bloggs" <nospam@nospam.com> wrote in message
news:423794F6.7010602@nospam.com...
Larry Brasfield wrote:

Can you reconsider that assertion? I cannot make sense of it, being
stuck in the following thought pattern: If there are 96 distinct
values per decade, then every 96 value steps traverse a whole power
of ten. Expressed mathematically (and ignoring the rounding
necessary to get standard values), the E96 set can be obtained as
10^(N * log10(10) / 96) == 10^(N/96) for the 96 integer values of N
from 0 to 95. This corresponds to a multiplicative interval equal to
the 96th root of 10, not the 97th root.
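
In Python terms, that computation is a one-liner (a minimal sketch; the
round to three significant figures is the step the published tables apply):

# Generate the E96 series as 10^(N/96) for N = 0..95, rounded to
# 3 significant figures (2 decimal places suffices within one decade).
# This reproduces the published E96 table; E192 has a few published
# oddballs (e.g. 9.20 where the formula gives 9.19).
e96 = [round(10 ** (n / 96), 2) for n in range(96)]
print(e96[:8])  # [1.0, 1.02, 1.05, 1.07, 1.1, 1.13, 1.15, 1.18]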

Nice guess, wimp- but entirely wrong.

Entirely right. What do you have besides bare assertion
and invective to make your case? Nothing I'll wager.

In addition to the problem outlined above, what you say is contrary
to the algorithm that I successfully applied to devise the program I
posted earlier on this thread.

You call that kluge an algorithm- it's slightly less than a table.

Tables are used only for certain limited arbitrary data
and to correct for those few results that do not agree
with the published tables when computed algorithmically.
Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.

Your inability to discern an algorithmic approach when
it is there in plain sight marks you as the pretender with
respect to claims of intellectual power.

I tested that quite a bit, so I am quite sure that the decade should
be split logarithmically into 96 equal steps (sans rounding). In my
testing, I compared results with the table published by several
resistor manufacturers.

I seriously doubt your competence at testing too. What a worthless fool
you are.

Fred, I wish you could comprehend how disappointed I
would be if you liked me or respected me or my work.
 
OK, but if you're doing an active filter and needing resistors closer
than 1%, don't you need capacitors to that precision as well?

My stuff wasn't using RC but sometimes a fairly precise voltage
reference using an AD588 for the +/- 5 volt references. Usually I could
juggle 2 series values to get within a 5 mV range.
GG
 
On Tue, 15 Mar 2005 14:22:00 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

"R.Lewis" <h.lewis@connect-2.co.uk> wrote in message
news:39p455F649cduU1@individual.net...
"Larry Brasfield" <donotspam_larry_brasfield@hotmail.com> wrote in message
news:Y6IZd.49$gd7.892@news.uswest.net...
...
Now, if I want a 63.54k resistor, I enter:
stdvals 1% 63.54k
snip

If you need 63.54K and you have the complete range of 1% tolerance resistors
available, you have a problem that no elementary, or sophisticated,
calculator is going to solve.

You appear to have assumed that the resistors
to be used would be 1% tolerance. It happens
that the 0.1% tolerance parts come in the same
values, (or more, for more money), so there is a
use for calculations such as the above example.
You don't have to be using 0.1% resistors to get a benefit.

Imagine that you have designed a filter and need a 134.985k resistor.
The nearest E96 values are 133k and 137k. As it happens, 134.985k is
about 1.5% greater than 133k and 1.5% less than 137k, so you can't
find a standard 1% value that is actually guaranteed to be within 1%
of 134.985k. In fact, you can't get any closer than about 1.5%.

There are gaps like that between nearly every pair of adjacent E96
standard values where, in that gap, you are further than 1% away from
either of the nearest standard values (all but 4 adjacent pairs are
like this).
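
That count is easy to verify: for adjacent values a < b, the in-between
point farthest from both is the harmonic mean 2ab/(a+b), at a relative
distance of (b-a)/(a+b) from each end. A quick Python check, with the
values generated by the root-of-10 formula above:

# Count adjacent E96 pairs whose in-between worst case is more than
# 1% from both neighbors; 10.0 is appended to close the decade.
e96 = [round(10 ** (n / 96), 2) for n in range(96)]
pairs = list(zip(e96, e96[1:] + [10.0]))
gapped = [(a, b) for a, b in pairs if (b - a) / (a + b) > 0.01]
print(len(gapped), "of", len(pairs))  # 92 of 96, i.e. all but 4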

But if you parallel the two standard values 255k and 287k, you are in
effect synthesizing a 135.027675k resistor that is guaranteed to be
within 1% of that value. Since this is within 0.03% of 134.985k, we
have a resistance that is guaranteed to be within just about 1% of the
134.985k we want, instead of only 1.5%.
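
That kind of search is easy to mechanize. A rough Python sketch (values
spread over three decades so pairs like 255k and 287k are candidates; the
search may turn up a pair even slightly closer than the one above):

# Find the E96 pair whose parallel combination best matches a target.
# Values are in kilohms.
e96 = [round(10 ** (n / 96), 2) for n in range(96)]
values = [v * dec for dec in (10, 100, 1000) for v in e96]

def parallel(a, b):
    return a * b / (a + b)

a, b = min(((x, y) for x in values for y in values if x <= y),
           key=lambda p: abs(parallel(*p) - 134.985))
r = parallel(a, b)
print("%.0fk || %.0fk = %.3fk (%.3f%% off)"
      % (a, b, r, 100 * (r - 134.985) / 134.985))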
 
Glenn Gundlach wrote:
OK, but if you're doing an active filter and needing resistors closer
than 1%, don't you need capacitors to that precision as well?
Depends on the sensitivity of the topology to the cap values, but maybe.

My stuff wasn't using RC but sometimes a fairly precise voltage
reference using an AD588 for the +/- 5 volt references. Usually I could
juggle 2 series values to get within a 5 mV range.
GG
Cheers
Terry
 
The Phantom wrote:
Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.
A table lookup is about 10000 times faster, though. Doing all that math
is going to hurt, particularly in Basic...

You could also just look here:

http://www.logwell.com/tech/components/resistor_values.html

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 
Robert Monsen wrote:
The Phantom wrote:


Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.


A table lookup is about 10000 times faster, though. Doing all that math
is going to hurt, particularly in Basic...
What the hell are you using for your calculations - an abacus?

I concede it would be a pain on a calculator, but that's about it.

You could also just look here:

http://www.logwell.com/tech/components/resistor_values.html
Although I confess I do most of this manually, without even a calculator :]

Cheers
Terry
 
"The Phantom" <phantom@aol.com> wrote in message
news:b3gf31tfc7k5gbukrkb5kfe29hjde93bc1@4ax.com...
On Tue, 15 Mar 2005 19:58:14 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

Tables are used only for certain limited arbitrary data
and to correct for those few results that do not agree
with the published tables when computed algorithmically.

Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.
I copied it for later study shortly after you posted it.
There is an interesting mix of both the straightforward
logarithmic decade splitting and some adjustment via
a cubic polynomial. It is quite a tangle, analytically.
I suspect it was quite a trial and error effort rather
than founded on anything like the original algorithm.

If it covers the whole set of tolerances without any
corrections, I will be amazed. But using that kind of
method is very fragile against changing requirements.
Using tables judiciously can make for better code.

--
--Larry Brasfield
email: donotspam_larry_brasfield@hotmail.com
Above views may belong only to me.
 
On Tue, 15 Mar 2005 22:49:00 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

"The Phantom" <phantom@aol.com> wrote in message
news:b3gf31tfc7k5gbukrkb5kfe29hjde93bc1@4ax.com...
On Tue, 15 Mar 2005 19:58:14 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

Tables are used only for certain limited arbitrary data
and to correct for those few results that do not agree
with the published tables when computed algorithmically.

Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.

I copied it for later study shortly after you posted it.
There is an interesting mix of both the straightforward
logarithmic decade splitting and some adjustment via
a cubic polynomial. It is quite a tangle, analytically.
I suspect it was quite a trial and error effort rather
than founded on anything like the original algorithm.

If it covers the whole set of tolerances without any
corrections, I will be amazed.
-------> But using that kind of
method is very fragile against changing requirements.
Using tables judiciously can make for better code.<-------
Can you explain what you mean by this? Maybe with an example of how
tables would be better than the little routine?

The routine I gave will take up less space than all the tables
for E12, E24, E48, E96 and E192.

And in answer to Robert Monsen, modern computers are *really* fast.
You don't really notice the time the routine takes. If you're only
going to do a few computations the results will seem instantaneous.
If you're going to do what Terry is doing, do what I describe below.

I have done the same sort of thing Terry Given describes, finding
optimum combinations of resistors. I just call the Basic routine a
few times and fill an array with the values. This is just as fast as
having a static table, but I don't have to have all the tables as I
mentioned above. I just create the tables or portions thereof that I
need.
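
In Python terms the idea looks like this (a sketch only; the thread's
routine is in Basic, and the plain root-of-10 generation below stands in
for it):

# Build each E-series slice once, on demand, instead of keeping
# every table statically.
from functools import lru_cache

@lru_cache(maxsize=None)
def e_series(n_per_decade):
    # Plain root-of-10 generation; fine for E48/E96/E192 (modulo the
    # few published oddballs), not for E24 and below.
    return tuple(round(10 ** (i / n_per_decade), 2)
                 for i in range(n_per_decade))

table = e_series(96)  # computed once; later calls hit the cache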
 
On Wed, 16 Mar 2005 19:39:12 +1300, Terry Given <my_name@ieee.org>
wrote:

Robert Monsen wrote:
The Phantom wrote:


Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.


A table lookup is about 10000 times faster, though. Doing all that math
is going to hurt, particularly in Basic...

What the hell are you using for your calculations - an abacus?

I concede it would be a pain on a calculator, but that's about it.
It's not even *much* of a pain at that. On my 20 year old HP-71, it
only takes about a second to return a value. The HP48 is faster. You
just leave this little routine in your calculator and if you need a
standard value, you have it in a second. Quicker even than looking it
up on a sheet of paper.

You could also just look here:

http://www.logwell.com/tech/components/resistor_values.html


Although I confess I do most of this manually, without even a calculator :]

Cheers
Terry
 
The Phantom wrote:
On Tue, 15 Mar 2005 22:49:00 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:


"The Phantom" <phantom@aol.com> wrote in message
news:b3gf31tfc7k5gbukrkb5kfe29hjde93bc1@4ax.com...

On Tue, 15 Mar 2005 19:58:14 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:

Tables are used only for certain limited arbitrary data
and to correct for those few results that do not agree
with the published tables when computed algorithmically.

Have a look at the little Basic routine I posted. Whoever
originally wrote it did a good job of curve fitting and came up with a
formula (algorithm?) that properly calculates those oddball values
that a simple root-of-10 method doesn't get right. Thus you don't
need any tables.

I copied it for later study shortly after you posted it.
There is an interesting mix of both the straightforward
logarithmic decade splitting and some adjustment via
a cubic polynomial. It is quite a tangle, analytically.
I suspect it was quite a trial and error effort rather
than founded on anything like the original algorithm.

If it covers the whole set of tolerances without any
corrections, I will be amazed.


-------> But using that kind of
method is very fragile against changing requirements.
Using tables judiciously can make for better code.<-------


Can you explain what you mean by this? Maybe with an example of how
tables would be better than the little routine?

The routine I gave will take up less space than all the tables
for E12, E24, E48, E96 and E192.

And in answer to Robert Monsen, modern computers are *really* fast.
You don't really notice the time the routine takes. If you're only
going to do a few computations the results will seem instantaneous.
If you're going to do what Terry is doing, do what I describe below.

I have done the same sort of thing Terry Given describes, finding
optimum combinations of resistors. I just call the Basic routine a
few times and fill an array with the values. This is just as fast as
having a static table, but I don't have to have all the tables as I
mentioned above. I just create the tables or portions thereof that I
need.
Ludicrous amounts of computing power are a truly wonderful thing. It
just pisses me off when it's used to animate a stupid puppy in a search
utility that's too fucking stupid to let me select a particular
directory within which to search. Thankfully I have Ztree.

Cheers
Terry
 
Fred Bloggs wrote:
Well- I am here to tell you that 1) the explanation about color contrast
is bunk, 2) no curve fit necessary, and 3) there exists a very-very-very
simple formulation to calculate precisely every single value to three
digits without the mysterious roundoff error effect due to preferred
values. You know how to do this too- just stop playing your games.
I don't know how to do this. The simple 10^(i/N) formula doesn't work,
particularly for the 20%, 10% and 5% values.
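
The mismatch in the coarse series is easy to demonstrate (a Python
check; the E24 list below is the standard published one):

# Compare the pure 10^(i/24) formula, rounded to 2 significant
# figures, against the published E24 values: 8 entries disagree.
computed = [round(10 ** (i / 24), 1) for i in range(24)]
e24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
print([(c, p) for c, p in zip(computed, e24) if c != p])
# [(2.6, 2.7), (2.9, 3.0), (3.2, 3.3), (3.5, 3.6), (3.8, 3.9),
#  (4.2, 4.3), (4.6, 4.7), (8.3, 8.2)]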

Could you post it? And please, no basic... :)

Thanks

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 
The Phantom wrote:
And in answer to Robert Monsen, modern computers are *really* fast.
You don't really notice the time the routine takes. If you're only
going to do a few computations the results will seem instantaneous.
If you're going to do what Terry is doing, do what I describe below.
Right, it was the generate-and-test algorithm I was a bit concerned with
when I posted.

I have done the same sort of thing Terry Given describes, finding
optimum combinations of resistors. I just call the Basic routine a
few times and fill an array with the values. This is just as fast as
having a static table, but I don't have to have all the tables as I
mentioned above. I just create the tables or portions thereof that I
need.
OK, caching is usually a good strategy.

Sadly, I don't have a Basic interpreter, so I guess I'm stuck with the
clumsy table lookup. It's OK, since I already have all the values typed
(actually cut and pasted) in here:

http://home.comcast.net/~rcmonsen/resistors.html

You have a cute routine, though. Curve fitting is fun. Somebody had a lot
of fun doing it, I think. That "if 919, output 920" is hilarious.

I'm hoping Bloggs posts the simpler scheme he claims to know.

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 
"The Phantom" <phantom@aol.com> wrote in message
news:lamf31112f3va294dhpbeehie0sip6hvtf@4ax.com...
On Tue, 15 Mar 2005 22:49:00 -0800, "Larry Brasfield"
<donotspam_larry_brasfield@hotmail.com> wrote:
I suspect it was quite a trial and error effort rather
than founded on anything like the original algorithm.

If it covers the whole set of tolerances without any
corrections, I will be amazed.
-------> But using that kind of
method is very fragile against changing requirements.
Using tables judiciously can make for better code.<-------

Can you explain what you mean by this? Maybe with an example of how
tables would be better than the little routine?
I am not saying that a complete change from a
formula based computation to table lookups
would be an improvement. In general, I try
to use tables for values that are inherently
arbitrary. For example, the allowed values
for tolerance might be 1%, 5%, and 10%.
There is no real reason, apart from people
favoring numbers related to how many
digits they find on their hands, for those
numbers to be used. Now imagine that
series of numbers being generated by, say, a
quadratic formula given inputs {1,2,3},
probably with rounding, to produce that
same set of values. Some people would
see it and say "Neat!" Other people may
notice it took 3 input numbers to get 3
output numbers. Still others wonder how
it will fare when a new value is added to
the series, say 0.5% (woops, gotta revise
the polynomial and the rounding scheme)
or 20% (probably just requires a cubic in
lieu of the quadratic).
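
In code, the table version of that inherently arbitrary mapping is about
as small as it gets (a sketch; the tolerance-to-series pairing shown is
the usual EIA convention):

# The tolerance -> values-per-decade pairing is pure convention, so a
# table states it directly; adding 0.5% or 20% is one new entry, with
# no polynomial to refit.
SERIES = {20.0: 6, 10.0: 12, 5.0: 24, 2.0: 48, 1.0: 96, 0.5: 192}

def values_per_decade(tolerance_percent):
    return SERIES[tolerance_percent]

print(values_per_decade(1.0))  # 96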

The routine I gave will take up less space than all the tables
for E12, E24, E48, E96 and E192.
I'm not sure that's true if the code space
needed for computing the transcendental
functions is charged against your "space".
It could be close, but I imagine you may be
right if an FPU or uncharged (DLL) library
handles the serious arithmetic.

[snip]
--
--Larry Brasfield
email: donotspam_larry_brasfield@hotmail.com
Above views may belong only to me.
 
On Tue, 15 Mar 2005 20:07:10 -0800, Robert Monsen wrote:
<snip>
Christ I hate perl. What a monstrosity. I'd learn javascript
snip

JS is much like C/C++ and so is PHP, which IMO is the preferred CGI
scripting language. Python isn't a bad language, either. I never did
like Perl, but reading about Perl did at least open my eyes to how
malformed queries can cause commands to be executed as root. IIRC,
the backtick " ` " in a Perl script tells the Perl executable to
execute a system command, and if it's slipped into a query properly
and there's no input validation, you're screwed.
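
The same hazard exists outside Perl whenever raw input reaches a shell.
A Python sketch of the distinction (the hostile string is invented for
illustration):

import subprocess

user_input = "foo; cat /etc/passwd"  # invented hostile query string
# UNSAFE: a shell would parse the ';' and run the injected command:
# subprocess.run("grep %s access.log" % user_input, shell=True)
# SAFER: pass an argument list so no shell ever interprets the input:
subprocess.run(["grep", user_input, "access.log"])
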
--
Best Regards,
Mike
 
Active8 wrote:
On Tue, 15 Mar 2005 20:07:10 -0800, Robert Monsen wrote:
snip

Christ I hate perl. What a monstrosity. I'd learn javascript

snip

JS is much like C/C++ and so is PHP, which IMO is the preferred CGI
scripting language. Python isn't a bad language, either. I never did
like Perl, but reading about Perl did at least open my eyes to how
malformed queries can cause commands to be executed as root. IIRC,
the backtick " ` " in a Perl script tells the Perl executable to
execute a system command, and if it's slipped into a query properly
and there's no input validation, you're screwed.
Cool trick. Thanks... ;)

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 
"John Woodgate" <jmw@jmwa.demon.contraspam.yuk> wrote in message
news:DAMV28BdaCOCFwiD@jmwa.demon.co.uk...
I managed to locate T Roddam's explanation, in a Letter to the Editor,
May 1984. This includes a very interesting piece about calculations
using preferred values. I can't render it in ASCII, because it includes
logs to base 6! So I'll scan it and put it on A.B.S.E. as 'Preferred
values and colour codes'.
John,

Did you post this? If so, my news service missed it. Would you be so kind as
to post it again?

Thanks.

--
James T. White
 
It took me about 60 minutes of sheer joy to break that code- I will not
reveal it yet- having too much fun reading the conjectures. Right now we
have Brasfield, Woodgate, and the Phantom in the race. Phantom is
leading, but Woodgate is starting to heat things up. Brasfield comes in
dead last because he thinks he knows everything and is content to stay
with his little Perl kluge, whereas Phantom and Woodgate are in the
numerical exploration phase, actively searching for answers and recognizing
inconsistencies- good for them.


I guess Fred isn't going to give us the answer. I was hoping. :)
 
"The Phantom" <phantom@aol.com> wrote in message
news:s6so511ubv8u7502cej78a979rmsk266rs@4ax.com...
It took me about 60 minutes of sheer joy to break that code- I will not
reveal it yet- having too much fun reading the conjectures. Right now we
have Brasfield, Woodgate, and the Phantom in the race. Phantom is
leading, but Woodgate is starting to heat things up. Brasfield comes in
dead last because he thinks he knows everything and is content to stay
with his little Perl kluge, whereas Phantom and Woodgate are in the
numerical exploration phase, actively searching for answers and recognizing
inconsistencies- good for them.


I guess Fred isn't going to give us the answer. I was hoping. :)

Fred posted something clever on this a few weeks back.
It looked like it might have nearly replicated the process
that was used to create the values long ago, when folks
used published logarithm tables with 4 or 5 decimal digits
of precision to do arithmetic. (Of course, his post speaks
for itself; I say this only to suggest it is worth a look.)

--
--Larry Brasfield
email: donotspam_larry_brasfield@hotmail.com
Above views may belong only to me.
 
