The Mathworks is offering more than 1600 dB of attenuation...

On 1/9/2022 2:34 AM, Jeroen Belleman wrote:
On 2022-01-09 02:10, Dennis wrote:
On 1/8/22 4:28 PM, Don Y wrote:
On 1/8/2022 10:55 AM, Martin Brown wrote:
The classic one when I was at university was that inevitably a
new graduate student would grind the Starlink VAX to a standstill
by transposing what was then a big image of 512x512 with nested
loops.

x[i,j] = x[j,i]

You don't even need to be transposing a large object to see this
problem. Just initializing an N-dimensional array is fraught with
pitfalls for a "programmer" (who likely has only knowledge of the
language and not the hardware, OS, etc.).

for (i = 0; i < MAX; i++) for (j = 0; j < MAX; j++) foo[i][j] = 0;

vs.

for (i = 0; i < MAX; i++) for (j = 0; j < MAX; j++) foo[j][i] = 0;

[In any of the equivalent forms]

And C and FORTRAN store multi-dimensional arrays in opposite order
(row-major vs. column-major), so translating one to the other can
cause cache problems.
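A quick C sketch of the difference (names and sizes are illustrative). C is row-major, so foo[j][i] in the inner loop strides a full row's worth of memory per store, touching a new cache line (or, on a paging system, a new page) nearly every time:

#include <stdio.h>
#include <time.h>

#define MAX 2048
static double foo[MAX][MAX];   /* row-major: foo[i][j] and foo[i][j+1] are adjacent */

int main(void)
{
    clock_t t0 = clock();
    for (int i = 0; i < MAX; i++)        /* sequential walk through memory */
        for (int j = 0; j < MAX; j++)
            foo[i][j] = 0;
    clock_t t1 = clock();
    for (int i = 0; i < MAX; i++)        /* strides MAX*sizeof(double) bytes per store */
        for (int j = 0; j < MAX; j++)
            foo[j][i] = 0;
    clock_t t2 = clock();
    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}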


will stump most \"programmers\". Much the same as:

double x, y; ... if (x == y)...
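The usual workaround is a tolerance test rather than ==; a minimal sketch (the scale factors here are illustrative and problem-dependent):

#include <math.h>
#include <float.h>
#include <stdbool.h>

/* Treat doubles as "equal" if they differ by a few ULP-scale parts, with
   an absolute fallback so values near zero don't fail the relative test. */
static bool nearly_equal(double x, double y)
{
    double diff  = fabs(x - y);
    double scale = fmax(fabs(x), fabs(y));
    return diff <= 16 * DBL_EPSILON * scale    /* relative test */
        || diff <= 16 * DBL_MIN;               /* absolute test near zero */
}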

I once had a colleague come to me when he had this sort of \"failure\".
[...]

Any programmer worth his salt should know that comparing floating
point values for equality doesn't work.

Any programmer worth his salt should know (fill in the blank)!

I don't think you realize the quality (lack thereof) of "programmers"
being churned out by diploma mills. **Anyone** can learn to write code.
**Anyone**!

At school, the standing joke was that the Physics majors would get
their BS, find they were unemployable and return for an MS. Then, find
they were STILL unemployable and return for a PhD. And, when they
discovered there were a shitload of PhDs out there competing for
the same FEW physics jobs, they'd go get a job writing code!

[To their credit, the Math majors would come to that realization after
their BS!]

But, getting code to compile/run doesn't mean it is well written (or
"correct") code. If you look at code written by "programmers" (esp.
those "self-taught"), it's often really amateurish. Their goal is often
just to get it to (appear to) work. Then, move on to something else.
They don't care how *well* it works or how maintainable it might (not!) be.

I saw a piece of code *count* (literally!) the bytes in a file to
determine its size. \"Um, didn\'t it occur to you that knowing the
size of a file is something MANY folks would want to do and, thus,
might have a mechanism in place to yield that value directly?\"
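The mechanism does exist, of course; on POSIX systems it is stat(2). A sketch:

#include <sys/stat.h>

/* Ask the filesystem for the size instead of reading every byte. */
static long file_size(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;                  /* errno says why */
    return (long)st.st_size;
}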

I saw a piece of code that stored three copies of a 128 bit value
as a way of ensuring its integrity. No knowledge of the math
behind such efforts. ("Hamming distance? What's that??")
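For the record, the textbook repair for three copies is a bitwise majority vote, so any single corrupted copy is simply outvoted (a proper error-correcting code does better per bit stored, which is where Hamming distance comes in). A sketch with 32-bit words standing in for the 128-bit value:

#include <stdint.h>

/* Each output bit takes the value held by at least two of the three
   copies, so one corrupted copy -- whichever it is -- never wins. */
static uint32_t majority3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}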

People tend to think the way they have always *expected* numbers to
behave and forget they aren't in the "real world" anymore.

[Everyone knows that N/3. * 3. = N, right?]

I've seen experienced programmers make these mistakes when converting
integer-based algorithms to FP. They see that it works in its
integer form and think just typedef-ing everything as floating
point will *continue* to work (\"But I haven\'t changed any of
the code!\")

[I implement BigRationals in my applet language so average joes
don't have to worry about rounding errors, overflows, etc. when
*they* "write code"]

Given:
then = 22:50:00
now = 23:10:00
there must be 20 minutes in the delimited interval, right?
Simple modular arithmetic problem... (oops!)
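With times held as minutes-since-midnight, the naive subtraction breaks the moment the interval crosses midnight; the usual fix is to do the arithmetic mod 1440, minding that C's % operator can return negatives. A sketch:

#include <stdio.h>

/* Minutes from 'then' to 'now', both given as minutes since midnight. */
static int interval(int then, int now)
{
    return ((now - then) % 1440 + 1440) % 1440;
}

int main(void)
{
    printf("%d\n", interval(22 * 60 + 50, 23 * 60 + 10)); /* 20 */
    printf("%d\n", interval(23 * 60 + 50, 0 * 60 + 10));  /* 20, across midnight */
    return 0;
}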

I watched a CNC router cut a perfect circle in a sheet of steel -- save
for the discontinuity as it *closed* the circle (\"oops! *that\'s* not
supposed to happen. That wasn\'t an expensive piece of steel, was it?\")

And, if you let the IDE present pretty-printed values to you, you
likely won't see the underlying values and wonder why everything
LOOKS correct...
 
On Sat, 8 Jan 2022 17:55:49 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 08/01/2022 17:25, Joe Gwinn wrote:
On Sat, 8 Jan 2022 11:19:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

I've been very impressed with the latest MSC 2019 code generator on some
of my stuff. It somehow groks tediously complex higher order difference
correctors in a way that no other compiler can match. A lucky
combination of out of order and speculative execution makes some things
run much faster with SEE, inlining and full optimisation all permitted.

What are \"MSC 2019 code generator\" and \"SEE\"?

MS C/C++ compiler under Visual Studio and \"SEE\" (sic) is a typo for SSE
(the extended floating point registers on modern Intel CPUs)
/SSE2 works best for me but YMMV.

Even so the compiler sometimes generates hybrid code with the x87 still
being used for some parts but not all of the computations.

MSC 2019 appears to have been replaced by 2022 so yet another compiler
to check my code against to see if there are any more improvements.

https://visualstudio.microsoft.com/downloads/

One of the tricky quirks I know about is that sincos can either speed
things up or slow them down. It depends on whether the result gets used
before it has been computed (pipeline stalls are hellishly expensive).

I'm assuming that you are coding in C here.

I have one similar surprise to report, but in MatLab:

The behavior of a megawatt power system for a shipboard radar was
modeled in Simulink (integrated with MatLab). This was circa 2000.
The simulations ran very slowly, but none of us thought much about it,
for lack of a comparison.

One day, I was working with the Mathematician who was running the
simulation, and idly watching the usual stream of sim-in-progress
messages roll by as we talked, and saw a message that I did not
recognize or understand. Turned out, those messages were relatively
common, but never really noticed in the blather from the sim.

Now curious, I dug into that message. -Saga omitted- It turns out
that the simulation was coded (by us the users) in such a way that the
solver was forced to solve an implicit equation at each solution time
step in a large system of coupled ODEs. So, instead of one or two big
matrix operations per step, it was one or two hundred operations per
step. Ouch! But why?

The presence of implicit forms was a byproduct of using a block
diagram and line language to describe the power system being modeled,
being programmed by placing standard blocks and connecting them with
standard connection lines on the computer screen. But what made sense
and looked simple on the screen was anything but under the covers.

Redesigning and recoding the simulation yielded a 100x speedup.

The classic one when I was at university was that inevitably a new
graduate student would grind the Starlink VAX to a standstill by
transposing what was then a big image of 512x512 with nested loops.

x[i,j] = x[j,i]

Generating roughly a quarter of a million page faults in the process.

There were libraries with algorithms we had sweated blood over to do
this efficiently with the minimum possible number of page faults.

We had the same problems in the early days of neural nets, coded in C,
running on VAX/VMS. The app folk never could fathom why reversing
inner and outer loop when processing the big connection matrix could
make a 1000-to-1 difference in run time, and declined to make the
change. Pretty soon, they had exhausted their computer time budget.

Problem solved.

Joe Gwinn
 
On Sat, 8 Jan 2022 13:13:52 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Joe Gwinn wrote:
On Sat, 8 Jan 2022 11:19:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/01/2022 15:43, Phil Hobbs wrote:
Martin Brown wrote:
On 06/01/2022 23:29, Simon S Aysdie wrote:
https://www.mathworks.com/help/rf/ug/richards-kuroda-workflow-for-rf-filter-circuit.html


Those plots go to -1800 dB. What an incredible toolbox.

That is perfectly possible depending on luck. The smallest numbers
that can arise in floating point are O(10^-308) which is -6160 dB
(actually denorms can go even smaller 10^-323 with lost precision)

They are not at all meaningful beyond about 10^-17 or so allowing for
the typical 53 bit mantissa and 64 bit intermediate results.

Realistically any plot going beyond -320dB is into rounding error noise.

You can occasionally get -inf if the computation produces exact zero.

I defend against it by adding 1e-20 which is different enough from the
nearest real non-denormalised answer 2.2e-16 to be obvious and doesn't
corrupt the output dataset in ways that disrupt further processing.
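A sketch of that guard, assuming the quantity being plotted is a magnitude being converted to dB:

#include <math.h>

/* Never returns -inf: the 1e-20 floor (-400 dB) is far below anything a
   double computation can mean, yet well away from the ~2.2e-16 scale of
   genuine rounding residue, so it stands out in a plot and doesn't
   disturb downstream processing. */
static double to_db(double mag)
{
    return 20.0 * log10(fabs(mag) + 1e-20);
}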

That happens sometimes in my high precision calculations for easier
problems with a near analytic solution. It is a bit annoying since it
causes discontinuities in otherwise smooth residual error curves.


Denormals are a huge pain.  Nice enough in theory, of course--why throw
away information you could keep?

The problem is that it's a waste of good silicon to make such marginal
creatures fast, so they aren't.

Tell me about it. One of my early contributions to that game was
noticing that a particular astrophysical plasma simulation was spending
all its runtime in interrupts handling denorm underflows. A couple of
orders of magnitude speed improvement was very welcome. It needed
rescaling to safer territory; x ~ h^2/c^3 is just asking for trouble in
single precision (it was a fluid dynamics code).

The thing I am working on at the moment involves powers of tan(x)^(2^N)
in the range -pi to pi. It gets quite hairy for even modest N and falls
over completely for N>5. I have a cunning fix that makes it work for any
N < 8 but by then it is almost all rounding error anyway.

In early versions of my clusterized FDTD simulator, the run time was
usually dominated by rounding error causing the simulation domain to
fill up with denormals until the actual simulated fields got to all the
corners.

They are sometimes better than having it hit zero (though not always).

I couldn't fix it by adding a DC offset, because that would go away
completely in two half steps, so I filled all the field arrays with very
low-level random noise.  Sped some simulations up by 100x.

A bit of judicious random noise can work wonders on breaking degeneracy.

At the time I was using the Intel C++ compiler, which didn't have an
option for flush-to-zero on underflow.
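These days the usual escape hatch is to set the FTZ/DAZ bits in the SSE control register directly; a sketch using the intrinsics that GCC, Clang, and MSVC all ship (per-thread, and only affecting SSE code paths):

#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

/* FTZ: denormal results become zero. DAZ: denormal inputs are read as
   zero. Both trade IEEE gradual underflow for speed. */
static void flush_denormals(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}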

I've been very impressed with the latest MSC 2019 code generator on some
of my stuff. It somehow groks tediously complex higher order difference
correctors in a way that no other compiler can match. A lucky
combination of out of order and speculative execution makes some things
run much faster with SEE, inlining and full optimisation all permitted.

What are \"MSC 2019 code generator\" and \"SEE\"?


2nd order Newton-Raphson and 3rd order Halley are almost the same
execution time now and the 4th order one just 10% slower. That's quite a
bonus when the functions f(x) and its derivatives being evaluated are
S-L--O---W.

In one instance a single apparently harmless line to protect against a
rounding error giving an impossible answer had an effective execution
time of -100 cycles because it prevented a pipeline stall. I could
hardly believe it so I double checked the same code with and without.

if (E<M) E=E+pi;

The really weird thing is that the branch statement is almost never
taken except when M is extremely close to pi, but doing the comparison
somehow gives the CPU enough recovery time to run so much faster.

I'm slowly collecting a library of short code fragments that give
different optimising compilers and certain Intel CPUs trouble.

I'm assuming that you are coding in C here.

[...]

Joe Gwinn


As one of my old colleagues used to say, \"Ah, Labview--spaghetti code
that even _looks_ like spaghetti.\"

Heh. Don't get me started on Labview.

But in the above simulation case, the code was not spaghetti, and was
actually beautiful to behold. Too bad about the runtime, though.

Joe Gwinn
 
On 9/1/22 9:42 pm, Don Y wrote:
On 1/9/2022 2:34 AM, Jeroen Belleman wrote:
On 2022-01-09 02:10, Dennis wrote:
On 1/8/22 4:28 PM, Don Y wrote:
will stump most \"programmers\".  Much the same as:
double x, y; ... if (x == y)...
Any programmer worth his salt should know that comparing floating
point values for equality doesn\'t work.

Any programmer worth his salt should know (fill in the blank)!

I don't think you realize the quality (lack thereof) of "programmers"
being churned out by diploma mills.

So you agree that it's not programmers who are the problem, but education?

Thanks for clearing that up for us - it's the opposite of what you
previously just said.
 
On Saturday, January 8, 2022 at 8:35:57 AM UTC-8, gnuarm.del...@gmail.com wrote:
That may well be the calculated attenuation of a filter with a node at that frequency and no resistive component...

Of course. No one is questioning whether or not the computer can calculate things that can't exist in nature: it can, and that isn't a bad thing at all. The question is whether plotting to greater than 1600 dB of attenuation has any practical value, product marketing or otherwise. After all, if we weren't concerned about practical matters, why is the Kuroda transform being used? We already know the "math hole" (attenuation pole) at the commensurate frequency exists. Adding 1800 dB of depth to the plot adds nothing and compresses the remainder of the visual data.

> The devil is in the details. You have to know what to expect and understand how the data is being presented.

100% agreement.
 
On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

Trying to explain cancellation to someone who was taught that
the two roots of a quadratic were readily obtainable from the
\"quadratic formula\" will always devolve into a concrete
example -- as folks can\'t visualize the finite nature of
the hardware.

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
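Both effects are a few lines of C. For the quadratic, the standard dodge is to compute the larger-magnitude root without cancellation and recover the small one from the product of roots (Vieta); the coefficients here are illustrative:

#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%.17g\n", 0.3 - 0.2 - 0.1);   /* -2.7755575615628914e-17 */

    /* x^2 - 1e8*x + 1 = 0 has roots near 1e8 and 1e-8. */
    double a = 1.0, b = -1e8, c = 1.0;
    double s = sqrt(b * b - 4 * a * c);
    double naive  = (-b - s) / (2 * a);          /* catastrophic cancellation */
    double q      = -0.5 * (b + copysign(s, b)); /* no cancellation here */
    double stable = c / q;                       /* small root via x1*x2 = c/a */
    printf("naive:  %.17g\n", naive);            /* wrong in nearly every digit */
    printf("stable: %.17g\n", stable);           /* ~1e-08, close to full precision */
    return 0;
}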
 
On Mon, 10 Jan 2022 11:54:38 -0800 (PST), Simon S Aysdie
<gwhite@ti.com> wrote:

On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

[...]

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.

Well, what it tells you is that single-precision floating point
arithmetic is used, because decimal tenths cannot be expressed as a
terminating binary fraction.

0.3 = 3/(2*5); 0.2 = 2/(2*5) = 1/5; 0.1 = 1/(2*5) = (1/2)(1/5)


The classic test is to compute (1/3 + 1/3 + 1/3)-1 without
optimization. The result will be the round off error to express 1/3
in a non-trinary number system. Often used to tell what kind of
arithmetic a hand calculator uses.

My HP 15C calculator yields 10^-10.


The parallel is (1/5 + 1/5 + 1/5 + 1/5 + 1/5)-1. The answer is exact
in decimal arithmetic.

My HP 15C calculator yields 0. So, it uses ten-digit decimal
arithmetic.


My HP 32S II calculator instead uses 12-digit decimal arithmetic.
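For comparison, the same two probes in IEEE-754 binary doubles; whatever residue (or exact zero) comes out fingerprints the arithmetic, which is the point of the test:

#include <stdio.h>

int main(void)
{
    double third = 1.0 / 3.0, fifth = 1.0 / 5.0;
    printf("%.17g\n", (third + third + third) - 1.0);
    printf("%.17g\n", (fifth + fifth + fifth + fifth + fifth) - 1.0);
    return 0;
}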


Joe Gwinn
 
On 1/10/2022 12:54 PM, Simon S Aysdie wrote:
On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

[...]

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.

Yes, but Joe Average User won't understand the "why" behind that
result.

Using BigRationals (a pair of BigDecimals per value -- plus a list of
units-of-measure), it's:

3/10 - 2/10 - 1/10 = 0/10 (0)

Likewise:

1/3 + 1/3 + 1/3 = 3/3

Memory/MIPS are cheap. Make life easier for the dweeb who won't
otherwise understand!

(Of course, irrational numbers are still a problem. But, folks
using them, hopefully, are prepared for "fuzz".)
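A minimal sketch of the idea, with native integers standing in for the BigDecimals (so no overflow handling) and the units-of-measure list omitted:

#include <stdio.h>

typedef struct { long n, d; } rat;            /* value = n/d, d > 0 */

static long gcd(long a, long b) { return b ? gcd(b, a % b) : (a < 0 ? -a : a); }

static rat mk(long n, long d)                 /* reduce to canonical form */
{
    if (d < 0) { n = -n; d = -d; }
    long g = gcd(n, d);
    return (rat){ n / g, d / g };
}

static rat sub(rat a, rat b) { return mk(a.n * b.d - b.n * a.d, a.d * b.d); }

int main(void)
{
    rat r = sub(sub(mk(3, 10), mk(2, 10)), mk(1, 10));
    printf("%ld/%ld\n", r.n, r.d);            /* 0/1 -- exactly zero, no 1e-17 residue */
    return 0;
}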
 
On Monday, January 10, 2022 at 12:44:11 PM UTC-8, Joe Gwinn wrote:
On Mon, 10 Jan 2022 11:54:38 -0800 (PST), Simon S Aysdie
gwh...@ti.com> wrote:

On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

[...]

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Well, what it tells you is that single-precision floating point
arithmetic is used, because decimal tenths cannot be expressed as a
terminating binary fraction.

It's double precision. Single won't have 15-16 SD.

0.3 = 3/(2*5); 0.2 = 2/(2*5) = 1/5; 0.1 = 1/(2*5) = (1/2)(1/5)


The classic test is to compute (1/3 + 1/3 + 1/3)-1 without
optimization. The result will be the round off error to express 1/3
in a non-trinary number system. Often used to tell what kind of
arithmetic a hand calculator uses.

My HP 15C calculator yields 10^-10.

Interesting. I did not know \"the test.\" My calculator gets -1e-12.

The parallel is (1/5 + 1/5 + 1/5 + 1/5 + 1/5)-1. The answer is exact
in decimal arithmetic.

My HP 15C calculator yields 0. So, it uses ten-digit decimal
arithmetic.


My HP 32S II calculator instead uses 12-digit decimal arithmetic.

There are some HP RPN religious zealots at my workplace.
 
On Monday, January 10, 2022 at 1:15:28 PM UTC-8, Don Y wrote:
On 1/10/2022 12:54 PM, Simon S Aysdie wrote:
On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

[...]

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Yes, but Joe Average User won't understand the "why" behind that
result.

That's kind of my point, and I agree. If they can't get this one---which is as simple as it gets---then they probably won't get the irrational cases either.

[...]
 
On 11.01.22 at 00:00, Simon S Aysdie wrote:
On Monday, January 10, 2022 at 12:44:11 PM UTC-8, Joe Gwinn wrote:

The classic test is to compute (1/3 + 1/3 + 1/3)-1 without
optimization. The result will be the round off error to express 1/3
in a non-trinary number system. Often used to tell what kind of
arithmetic a hand calculator uses.

My HP 15C calculator yields 10^-10.

Interesting. I did not know \"the test.\" My calculator gets -1e-12.

So does go41C on my Android phone.

In the evening in the pub, my boss from 35 years ago and then
an early adopter, now looong retired, was mildly irritated when
my phone suddenly transmogrified into an illuminated HP-41C.

For Windows, there is \"Virtual HP41CX\"

Gerhard
 
On 1/10/2022 4:10 PM, Simon S Aysdie wrote:
.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Yes, but Joe Average User won't understand the "why" behind that result.

That's kind of my point, and I agree. If they can't get this one---which is
as simple as it gets---then they probably won't get the irrational cases
either.
But, they will have *seen* some irrational value in their calculation
so likely won't expect a "clean" result. Even folks who "know" 1/3
to be 0.33333... still expect 3 times that value to be 1.0 and not
0.9999...!

I just want to eliminate/minimize the \"surprises\" when they perform
some calculation (in whatever order THEY chose) and discover an
unexpected result -- like adding one to ~10^LDBL_DIG and wondering
why the resulting value doesn\'t change. Yet, then *subtracting* one
gives a different value than the one they started with.
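The absorption half is easy to show at double precision (DBL_DIG is 15; above 2^53 ~ 9e15 the doubles step by 2, so around 1e16 adding 1 stops registering):

#include <stdio.h>
#include <float.h>

int main(void)
{
    double big = 1e16;
    printf("DBL_DIG = %d\n", DBL_DIG);    /* 15 */
    printf("%.1f\n", big + 1.0);          /* 10000000000000000.0 -- the 1 is absorbed */
    printf("%d\n", big + 1.0 == big);     /* 1 (true) */
    return 0;
}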
 
On Mon, 10 Jan 2022 15:00:23 -0800 (PST), Simon S Aysdie
<gwhite@ti.com> wrote:

On Monday, January 10, 2022 at 12:44:11 PM UTC-8, Joe Gwinn wrote:
On Mon, 10 Jan 2022 11:54:38 -0800 (PST), Simon S Aysdie
gwh...@ti.com> wrote:

On Saturday, January 8, 2022 at 7:57:38 PM UTC-8, Don Y wrote:

[...]

One of my favorites:

.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Well, what it tells you is that single-precision floating point
arithmetic is used, because decimal tenths cannot be expressed as a
terminating binary fraction.

It's double precision. Single won't have 15-16 SD.

Oops. Log2[3e-17] = -54.9 bits, which is about the mantissa of a
64-bit float. I did compute this, but for some reason called it
single.

For the record, 32-bit floats have about six decimal digits of
precision.


0.3 = 3/(2*5); 0.2 = 2/(2*5) = 1/5; 0.1 = 1/(2*5) = (1/2)(1/5)


The classic test is to compute (1/3 + 1/3 + 1/3)-1 without
optimization. The result will be the round off error to express 1/3
in a non-trinary number system. Often used to tell what kind of
arithmetic a hand calculator uses.

My HP 15C calculator yields 10^-10.

Interesting. I did not know \"the test.\" My calculator gets -1e-12.

As do most modern calculators.


The parallel is (1/5 + 1/5 + 1/5 + 1/5 + 1/5)-1. The answer is exact
in decimal arithmetic.

My HP 15C calculator yields 0. So, it uses ten-digit decimal
arithmetic.


My HP 32S II calculator instead uses 12-digit decimal arithmetic.

There are some HP RPN religious zealots at my workplace.

Well, I do much prefer RPN, but don't proselytize. Well, maybe a
little:

Saves me from having to lend my calculator. Well, except for one
RPN-qualified boss.

I do recall the RPN versus Algebraic input syntax wars of some decades
ago, where it was shown that RPN was far more efficient, for all the
difference it made in the market.

This was shown by comparing such standard things as mortgage interest
calculations as documented in the user manuals for HP and TI
calculators, on the theory that these programs were written by experts
in their respective calculators, and so represented the best one could
do.

Overall, RPN took 2/3 as many keystrokes as Algebraic.

It also automatically remembers some intermediate answers, so one can
do many unplanned follow-on calculations without recording and
re-entering 12-digit data.

Joe Gwinn
 
On Monday, January 10, 2022 at 3:31:17 PM UTC-8, Joe Gwinn wrote:
On Mon, 10 Jan 2022 15:00:23 -0800 (PST), Simon S Aysdie
gwh...@ti.com> wrote:

[...]

Overall, RPN took 2/3 as many keystrokes as Algebraic.

It also automatically remembers some intermediate answers, so one can
do many unplanned follow-on calculations without recording and
re-entering 12-digit data.

I always used TI just by the happenstance that TI was my first calculator purchase choice and I simply stuck with what I knew. I did have an HP20s at one time until the keypad went flake-o, but it was TI-like, not RPN.

I never argue with them. For example, the RPN fans often disdain parentheses. I totally get that. As a user of TI and its AOS, I learned to avoid parentheses like the plague because using them was inevitably error-prone. I suspect the RPN'ers are right.

I just never bothered changing and once computing became cheap, I never did anything more than the simplest things on a calculator. I don't "need" a good calculator, I think. I have Octave open as my "calculator" pretty much 100% of the time.
 
On Monday, January 10, 2022 at 3:27:02 PM UTC-8, Don Y wrote:
On 1/10/2022 4:10 PM, Simon S Aysdie wrote:
.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Yes, but Joe Average User won't understand the "why" behind that result.

That's kind of my point, and I agree. If they can't get this one---which is
as simple as it gets---then they probably won't get the irrational cases
either.
But, they will have *seen* some irrational value in their calculation
so likely won't expect a "clean" result. Even folks who "know" 1/3
to be 0.33333... still expect 3 times that value to be 1.0 and not
0.9999...!

Irrational numbers printed as decimal are always wrong, no matter how many digits they are rounded to. (Not saying they are rounded correctly.) So, I don't see why, as a case example, that would tickle their interest or curiosity. They'll easily see/know the answer to the super-simplistic ".3 - .2 - .1" is clearly wrong. If that doesn't interest them, nothing will. Most people don't care because they don't do computing/coding.

I just want to eliminate/minimize the \"surprises\" when they perform
some calculation (in whatever order THEY chose) and discover an
unexpected result -- like adding one to ~10^LDBL_DIG and wondering
why the resulting value doesn\'t change. Yet, then *subtracting* one
gives a different value than the one they started with.
 
On Tue, 11 Jan 2022 11:10:51 -0800 (PST), Simon S Aysdie
<gwhite@ti.com> wrote:

[...]

I always used TI just by the happenstance that TI was my first calculator purchase choice and I simply stuck with what I knew. I did have an HP20s at one time until the keypad went flake-o, but it was TI-like, not RPN.

Oddly, my first calculator purchase was a TI as well, but there is a
story. My first job out of school was at the US FCC (Federal
Communications Commission), in the Office of Chief Engineer. I did a
fair bit of programming on Univac 1100 series mainframes, which had
36-bit words.

Anyway, I saw an ad in a computer magazine (Computer World?) for a
hexadecimal desk calculator from TI. Whoever said that the Federal
Government is too slow to move - I hand-walked a purchase order out
the door that very day, product sight unseen.

Turns out that TI didn't actually make them, and were testing the
waters. Only to be deluged with purchase orders instead of the usual
tentative inquiries. TI took this as a divine sign.

A year later I had that calculator. It was very useful. It had one
thing I never saw anywhere else - it could do fractions (as in
floating-point mantissas) in hex and octal, not just decimal.


>I never argue with them. For example, the RPN fans often disdain parentheses. I totally get that. As a user of TI and its AOS, I learned to avoid parentheses like the plague because using them was inevitably error-prone. I suspect the RPN'ers are right.

Yeah. The code generator in a compiler generally has an RPN
intermediate language, because computer hardware has no parens
command.


>I just never bothered changing and once computing became cheap, I never did anything more than the simplest things on a calculator. I don't "need" a good calculator, I think. I have Octave open as my "calculator" pretty much 100% of the time.

Same. At work in the 1970s, I always had a midi-computer, programmed
in Fortran and assembly, costing the price of a nice row house, and
requiring a repair technician each and every week. So I was not all
that interested in PCs when they emerged.

Nowadays, I use Mathematica for most calculations. I do know MatLab
(and thus ~Octave), but I prefer Mathematica notebooks, which make it
easy to have my documentation and notes right with the code. So I can
read and understand the notebook and code years later.

Joe Gwinn
 
On 1/11/2022 12:21 PM, Simon S Aysdie wrote:
On Monday, January 10, 2022 at 3:27:02 PM UTC-8, Don Y wrote:
On 1/10/2022 4:10 PM, Simon S Aysdie wrote:
.3 - .2 - .1
ans = -2.775557561562891e-17

Stupid computer. lol.
Yes, but Joe Average User won't understand the "why" behind that
result.

That's kind of my point, and I agree. If they can't get this one---which
is as simple as it gets---then they probably won't get the irrational
cases either.
But, they will have *seen* some irrational value in their calculation so
likely won't expect a "clean" result. Even folks who "know" 1/3 to be
0.33333... still expect 3 times that value to be 1.0 and not 0.9999...!

Irrational numbers printed as decimal are always wrong, no matter how many
digits they are rounded to. (Not saying they are rounded correctly.) So, I
don't see why, as a case example, that would tickle their
interest or curiosity. They'll easily see/know the answer to the
super-simplistic ".3 - .2 - .1" is clearly wrong. If that doesn't interest
them, nothing will. Most people don't care because they don't do
computing/coding.

But their computations might not be as \"obvious\" to expose such issues.
If measuring a length, in mixed units:
1 inch + 2 ft 3.5 inches + 2 yds + 4 inches + ...
and then a width in the same combination of mixed units...
to compute an area, the difference between \"actual\" and \"computed\"
won\'t stand out to casual inspection. You want to trust the result, not
scratch your head wondering why there is some \"odd\" fraction involved.

But, if trying to compute the area of a circular region, you wouldn't
even question the "odd fraction" -- because you know there is an
irrational factor built into the calculation (i.e., you'd never
expect a nice "clean" answer).

When, as a youngster, I first encountered this sort of thing, it made
me wary of the calculator: how many OTHER calculations that "should be
(common-sense) correct" aren't?

I just want to eliminate/minimize the \"surprises\" when they perform some
calculation (in whatever order THEY chose) and discover an unexpected
result -- like adding one to ~10^LDBL_DIG and wondering why the resulting
value doesn\'t change. Yet, then *subtracting* one gives a different value
than the one they started with.
 
