0.0 / 0.0 = -NAN ?...

On 02/05/2022 22:02, Lasse Langwadt Christensen wrote:
On Monday, 2 May 2022 at 22:52:48 UTC+2, Phil Hobbs wrote:
jla...@highlandsniptechnology.com wrote:
On Sun, 1 May 2022 20:27:33 -0700 (PDT), whit3rd <whi...@gmail.com>
wrote:

On Saturday, April 30, 2022 at 1:23:21 PM UTC-7, legg wrote:
On Sat, 30 Apr 2022 10:47:30 -0700 (PDT), Ricky
<gnuarm.del...@gmail.com> wrote:

On Saturday, April 30, 2022 at 12:11:41 PM UTC-4, jla...@highlandsniptechnology.com wrote:

I wrote an s32.32 saturating math package for the 68K that always did
what was most reasonable.

0/0 = 0

I'm sure no one can explain why 0/0 = 0 makes sense. Zero does not represent just exactly zero. Just as 1 represents the range from 1/2 to 1-1/2, zero is the range from -1/2 to +1/2.

If the denominator is not zero, as the numerator approaches zero, yes, in the limit, the result approaches zero. But if the numerator is not zero, as the denominator approaches zero, in the limit, the result approaches infinity. So why would 0/0=0 make sense?

I would have expected 0/0 = 1, i.e. no rational difference.

That's the case if you consider lim x/x as x approaches zero. But,
what of the limit of 2x/x, or -x/x, as x approaches zero? NAN is the
best way to get a thinking human to understand what the computer
is trying to express.

What does a control system do when the heater voltage is computed to
be NAN?

It should worry about the skill of the programmer who wrote the code.

Log it, skip the update, and press on to the next measurement.

+1
Or maybe just count it. Not doing the divide saves enough time to do
something else that is *directly* under the programmer's control.

Divides particularly and sometimes multiplies have the possibility of
overflow or underflow if their inputs are unfriendly.

hoping that that doesn't slow the system down too much

Division even on the current crop of fast processors is already so slow
that explicitly defending against division by zero is invariably faster
than taking the trap and recovering afterwards.

Many times you can prove that the divisor will not ever be zero, but if
you can't then you should decide exactly how to handle that exception.
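In C, that guard can be as small as this (a minimal sketch; the helper
name and the fallback policy are illustrative choices, not from any
particular package):

static inline double div_guarded(double num, double den, double fallback)
{
    /* Guarded divide: num/den, or 'fallback' when den == 0.  What the
     * fallback should be (0, a saturated limit, the previous output,
     * ...) is a policy decision only the application can make. */
    return (den != 0.0) ? num / den : fallback;
}

The compare and branch cost a cycle or two; a divide-by-zero trap plus
its handler costs orders of magnitude more.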

Division is roughly an order of magnitude slower than any of the other
primitive operations on today's CPUs - it was even worse in the past.

+,-,* now execute in a single cycle and in combination with branch
prediction and speculative execution can appear to execute in fractions
of a cycle provided that the data they need is available quickly enough.

--
Regards,
Martin Brown
 
On 02/05/2022 16:54, Joe Gwinn wrote:
On Mon, 2 May 2022 15:28:46 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 02/05/2022 15:10, jlarkin@highlandsniptechnology.com wrote:

What does a control system do when the heater voltage is computed to
be NAN?

Shut down. The situation should never arise if you have scaled the
problem correctly and you should never be dividing by zero anyway.

If the denominator of a division is zero then you haven't thought out
the representation of your problem correctly. Testing for zero is
usually quick (and often implicitly available in integer arithmetic).

Umm. Testing for zero doesn't necessarily change anything.

Yes it does. If you know you are about to divide by zero you can do
something else instead and still save time. Divides are remarkably slow.
(even today that still holds true)

I'm presently working on an algorithm that minimises divides to obtain
higher speed - at least that was the original aim. Serendipitously I
also found that the new scheme simultaneously made the whole thing
considerably more accurate as well as faster.

Basically I can sometimes trade a hardware divide for a much more
horrible algebraic expression involving the other fast primitive
operations and still come out ahead on execution time and accuracy.
I have a war story here. Many decades ago, I was the software
architect for the mission software of a ship self defense system that
shoots incoming cruise missiles down, if it can. From detection at
the horizon to impact on ownship is maybe twenty seconds.

One fine day, a mob of software engineers turned up, locked in
argument about what to do if the engageability calculation suffered a
divide-by-zero exception.

The important question here is whether the divide by zero is a real
singularity or, as seems likely, a coordinate-transform singularity from
taking bearings and ranges into and out of x,y,z Cartesian coordinates.

Altaz telescope mounts have exactly the same problems as gun turrets.
Limited slew rates and allowable angles. It gets singular near the
zenith since the scope cannot spin fast enough to track the sky there.
This is a weak singularity but it has to be avoided.

Observation plans were always checked in simulated operation prior to
the actual telescope run to avoid that zone.

This is not a coding error per se, it's a mathematical singularity in
the equations - some engagement geometries will hit the singularity,
and the embedded realtime computers of that day could not handle the
more complicated math needed to avoid such things fast enough to
matter.

But is it a true singularity or an artefact of how you are doing the
computing? My instinct is that it is the latter or else it could only
arise so rarely that taking the next set of measurements and processing
them would get you out of the bind. I can see that there might be cases
where the matrix inversion was singular for a single instant but that
would only be true for that time slice.

There were two schools: Just provide a very large number and proceed,
praying. Stop and print out a bunch of diagnostic information.

Clearly in a combat situation you can't afford to do anything other than
reset the calculation and try again. Or if it is because you are naively
calculating a value of tan(x) then set it to >10^18 and pray. That value
being more than enough to ensure that x = pi/2 to double precision.

Hmm. So the user, an ordinary sailor operating the self-defense
system is in the middle of an engagement with an incoming cruise
missile, and is suddenly handed a bunch of error messages, with less
than twenty seconds to live ... Really??? No! Just silently return
the best answer possible given the situation and press on, praying.

Every division in safety-critical or mission-critical code should be
checked for whether or not it can fail with a divide by zero, and what
if anything should be done about it when it does.


--
Regards,
Martin Brown
 
On Wed, 4 May 2022 09:49:04 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 02/05/2022 16:54, Joe Gwinn wrote:
On Mon, 2 May 2022 15:28:46 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:


I'm presently working on an algorithm that minimises divides to obtain
higher speed - at least that was the original aim. Serendipitously I
also found that the new scheme simultaneously made the whole thing
considerably more accurate as well as faster.

Basically I can sometimes trade a hardware divide for a much more
horrible algebraic expression involving the other fast primitive
operations and still come out ahead on execution time and accuracy.

In my heater code I divided by the square of the unregulated supply
voltage to get fast line regulation onto the heater PWM. Since that
voltage doesn't change much, I guess I could have multiplied by
2*(32-V) or something, namely just correct for the slope around the
nominal supply voltage. Would have saved coding the divide, which I
only used once.
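For illustration, the linearized version might look like this in C
(V0 and K are made-up names, and the 24 V nominal is an assumed value,
not the actual supply):

#define V0 24.0   /* assumed nominal unregulated supply, volts */

/* exact line regulation: duty = K / V^2 */
static double duty_exact(double K, double V)
{
    return K / (V * V);
}

/* first-order expansion about V0: 1/V^2 ~= (3*V0 - 2*V) / V0^3,
 * which trades the runtime divide for multiplies by constants */
static const double INV_V0_CUBED = 1.0 / (V0 * V0 * V0); /* folded at compile time */

static double duty_linear(double K, double V)
{
    return K * (3.0 * V0 - 2.0 * V) * INV_V0_CUBED;
}

No runtime divide survives, at the cost of accuracy when V strays far
from V0.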


--

Anybody can count to one.

- Robert Widlar
 
On Wed, 4 May 2022 09:15:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 02/05/2022 22:02, Lasse Langwadt Christensen wrote:
On Monday, 2 May 2022 at 22:52:48 UTC+2, Phil Hobbs wrote:
jla...@highlandsniptechnology.com wrote:

It should worry about the skill of the programmer who wrote the code.

Log it, skip the update, and press on to the next measurement.

+1
Or maybe just count it.

No, count the number of superconductive magnets that were damaged. The
big ones had a spiral staircase to let users get to the top.

Not doing the divide saves enough time to do
something else that is *directly* under the programmer's control.

Divides particularly and sometimes multiplies have the possibility of
overflow or underflow if their inputs are unfriendly.

Or use a saturating math package.
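A sketch of what saturating s32.32 division semantics might look like
in C (a guess at the behaviour described upthread - 0/0 = 0, x/0
saturates - not the actual 68K package; it assumes a compiler with
__int128, e.g. GCC or Clang):

#include <stdint.h>

typedef int64_t s32_32;              /* fixed point, 32 fraction bits */
#define S32_32_MAX INT64_MAX
#define S32_32_MIN INT64_MIN

static s32_32 sat_div(s32_32 num, s32_32 den)
{
    if (den == 0)                    /* 0/0 = 0, x/0 saturates */
        return (num == 0) ? 0 : (num > 0 ? S32_32_MAX : S32_32_MIN);

    /* widen so the <<32 scaling cannot overflow mid-divide */
    __int128 q = ((__int128)num << 32) / den;

    if (q > S32_32_MAX) return S32_32_MAX;   /* clamp, don't wrap */
    if (q < S32_32_MIN) return S32_32_MIN;
    return (s32_32)q;
}

Every result is then at least a usable number, which is the whole point
of saturation in a control loop.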



--

Anybody can count to one.

- Robert Widlar
 
On Wednesday, 4 May 2022 at 14:31:37 UTC+1, jla...@highlandsniptechnology.com wrote:
On Wed, 4 May 2022 09:15:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:


Divides particularly and sometimes multiplies have the possibility of
overflow or underflow if their inputs are unfriendly.
Or use a saturating math package.

A more spectacular example of what can go wrong is the failure of
an Ariane 5 rocket:
https://web.archive.org/web/20000815230639/http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html

\"On 4 June 1996, the maiden flight of the Ariane 5 launcher ended in a failure. Only about 40 seconds
after initiation of the flight sequence, at an altitude of about 3700 m, the launcher veered off
its flight path, broke up and exploded.
....
The internal SRI software exception was caused during execution of a data conversion
from 64-bit floating point to 16-bit signed integer value. The floating point number which
was converted had a value greater than what could be represented by a 16-bit signed integer.
This resulted in an Operand Error.
....
Although the source of the Operand Error has been identified, this in itself did not cause the
mission to fail. The specification of the exception-handling mechanism also contributed to
the failure. In the event of any kind of exception, the system specification stated that: the
failure should be indicated on the databus, the failure context should be stored in an EEPROM
memory (which was recovered and read out for Ariane 501), and finally, the SRI processor
should be shut down.
It was the decision to cease the processor operation which finally proved fatal.\"

John
 
On 05/04/2022 02:15 AM, Martin Brown wrote:

Division is roughly an order of magnitude slower than any of the other
primitive operations on today's CPUs - it was even worse in the past.

+,-,* now execute in a single cycle and in combination with branch
prediction and speculative execution can appear to execute in fractions
of a cycle provided that the data they need is available quickly enough.

https://www.hpmuseum.org/srw.htm

The Fridens had no protection against a divide-by-zero operation despite
their sophistication. One would churn away until you unplugged it. I
would hope we've learned something in 70 years.
 
On Wednesday, 4 May 2022 at 16:13:11 UTC+2, rbowman wrote:
https://www.hpmuseum.org/srw.htm

The Fridens had no protection against a divide-by-zero operation despite
their sophistication. One would churn away until you unplugged it. I
would hope we've learned something in 70 years.

https://youtu.be/7Kd3R_RlXgc
 
On Wed, 4 May 2022 07:09:05 -0700 (PDT), John Walliker
<jrwalliker@gmail.com> wrote:

"It was the decision to cease the processor operation which finally proved fatal."

John

Exception handling, like a trap/interrupt on divide by zero or
overflow, is a hazard. After all, the flight continues.

Best to prevent exceptions by understanding the physical process.

PC programs can announce the exception and crash. Planes and rockets
really do crash.
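In that spirit, a sketch of the defensive narrowing conversion the
Ariane code path lacked (the saturation policy here, including
returning 0 for NaN, is an illustrative choice, not from the inquiry
report):

#include <stdint.h>
#include <math.h>

/* Clamp before narrowing, so an out-of-range double can never raise
 * an Operand Error the way the 64-bit-to-int16 conversion did on
 * Ariane 501. */
static int16_t to_i16_saturating(double x)
{
    if (isnan(x))       return 0;         /* policy choice */
    if (x >= 32767.0)   return INT16_MAX;
    if (x <= -32768.0)  return INT16_MIN;
    return (int16_t)x;
}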



--

Anybody can count to one.

- Robert Widlar
 
Martin Brown wrote:
On 02/05/2022 22:02, Lasse Langwadt Christensen wrote:

hoping that that doesn't slow the system down too much

Division even on the current crop of fast processors is already so slow
that explicitly defending against division by zero is invariably faster
than taking the trap and recovering afterwards.

And if it's _nearly_ singular, you can get denormals, which are really
really slow.

(I expect that Lasse was thinking more about the control loop stability
problem--with constant coefficients, all the pole and zero frequencies
are proportional to the update rate.)


Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
rbowman wrote:

https://www.hpmuseum.org/srw.htm

The Fridens had no protection against a divide-by-zero operation despite
their sophistication. One would churn away until you unplugged it. I
would hope we've learned something in 70 years.

Wow, an Italian tune-up for a desk calculator. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Wed, 4 May 2022 12:30:14 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:


Wow, an Italian tune-up for a desk calculator. ;)

Cheers

Phil Hobbs

I worked two summers at UNO, in a microwave spectroscopy project. I
designed hv pulsers for Stark effect spectroscopy, with big hard tubes
and thyratrons.

Two grad students spent the entire summer in a tiny room with two
Friden calculators calculating resonances. They did everything twice
and cross-checked. My PC would do that now in milliseconds.

I made 50 cents per hour and learned a lot. They could only pay
students so they gave me fake student ID number 20,000 on the theory
that they would never get that high. Some poor person is now confused
with me.



--

Anybody can count to one.

- Robert Widlar
 
jlarkin@highlandsniptechnology.com wrote:

I made 50 cents per hour and learned a lot. They could only pay
students so they gave me fake student ID number 20,000 on the theory
that they would never get that high. Some poor person is now confused
with me.



I doubt they issued any 1099s anyway, so no harm, no foul. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Wednesday, May 4, 2022 at 9:27:55 AM UTC-7, Phil Hobbs wrote:
Martin Brown wrote:

Division even on the current crop of fast processors is already so slow
that explicitly defending against division by zero is invariably faster
than taking the trap and recovering afterwards.
And if it's _nearly_ singular, you can get denormals, which are really
really slow.

(I expect that Lasse was thinking more about the control loop stability
problem--with constant coefficients, all the pole and zero frequencies
are proportional to the update rate.)

If you care about updating frequently, and if there's an issue with division,
just... don't divide. Do everything with a lookup table; memory is cheap.
Or, it's possible to recast a multivariable equation as a linear operation
(a matrix operation instead of special functions), and a matrix operates with
very simple formulae that don't include division or allow overflow.
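A toy instance of the lookup-table idea in C (table size, the Q16
format, and the saturated zero slot are all illustrative choices):
precompute 1/x once, and each "division" becomes a load and a multiply.

#include <stdint.h>

#define TBL_BITS 10
#define TBL_SIZE (1u << TBL_BITS)

static uint32_t recip_q16[TBL_SIZE];      /* ~1/x in Q16, truncated */

static void recip_init(void)
{
    recip_q16[0] = UINT32_MAX;            /* saturate the x == 0 slot */
    for (uint32_t x = 1; x < TBL_SIZE; x++)
        recip_q16[x] = (1u << 16) / x;    /* the only real divides */
}

/* approximate a/b without a runtime divide, for 0 < b < TBL_SIZE */
static uint32_t fast_div(uint32_t a, uint32_t b)
{
    return (uint32_t)(((uint64_t)a * recip_q16[b]) >> 16);
}

The table entries are truncated, so the quotient can come out low by a
few counts; whether that matters is application-dependent.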
 
On 05/04/2022 08:44 AM, Lasse Langwadt Christensen wrote:
On Wednesday, 4 May 2022 at 16:13:11 UTC+2, rbowman wrote:

https://youtu.be/7Kd3R_RlXgc

That's the animal. I don't remember the div stop but it's been a long
time. I worked for the NYS Education Department summers in the mid-'60s.
NYS has statewide Regents exams which are designed from question pools
to basically get a nice bell curve from the results. All the statistical
number crunching was done on Fridens at the time.

To answer the obvious question: they sometimes did blow it and wound up
flunking a lot of students, leading to some post hoc juggling.
 
On 04/05/2022 21:03, whit3rd wrote:
On Wednesday, May 4, 2022 at 9:27:55 AM UTC-7, Phil Hobbs wrote:

If you care about updating frequently, and if there's an issue with division,
just... don't divide. Do everything with a lookup table; memory is cheap.

Back then memory was (VERY) expensive and in short supply and machines
with a hardware divide always had the advantage if you needed to use it.

Or, it's possible to recast a multivariable equation as a linear operation
(a matrix operation instead of special functions), and a matrix operates with
very simple formulae that don't include division or allow overflow.

Most of the tricks I know for special functions - some of which I still
use - rely on precisely one divide, but with a bit of luck you can ensure
that the parameters it is passed will never make that divide misbehave.

Polynomial series with lousy convergence, e.g. ln(1+x):

ln(1+x) = x - x^2/2 + x^3/3 - ...

x = -1/2: 3-term sum = -2/3  = -0.6667
x = 1:    3-term sum =  5/6  =  0.8333

Pade approximation (3,2) of the above, with one divide:

ln(1+x) = x*(6+x)/(6+4x)

x = -1/2: Pade = -11/16 = -0.6875
x = 1:    Pade =  7/10  =  0.7

The exact answer to 5 significant figures is +/-0.69315, so the Pade
approximation is already good to 1% over the range -1/2 to 1 and <0.2%
on -1/3 to +1/2.
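A throwaway harness to check those numbers (illustrative only, not
production code):

#include <stdio.h>
#include <math.h>

static double series3(double x) { return x - x*x/2.0 + x*x*x/3.0; }
static double pade(double x)    { return x*(6.0 + x)/(6.0 + 4.0*x); }

int main(void)
{
    const double xs[] = { -0.5, -1.0/3.0, 0.5, 1.0 };
    for (unsigned i = 0; i < sizeof xs / sizeof xs[0]; i++) {
        double x = xs[i];
        printf("x=%7.4f  series3=%8.5f  pade=%8.5f  exact=%8.5f\n",
               x, series3(x), pade(x), log1p(x));  /* log1p(x) = ln(1+x) */
    }
    return 0;
}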

Something that has changed recently is that when a divide is pending,
other FP instructions that have no dependency on that particular result
can execute in parallel, making timing predictions very tricky indeed.

--
Regards,
Martin Brown
 
On Wed, 4 May 2022 09:49:04 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 02/05/2022 16:54, Joe Gwinn wrote:
On Mon, 2 May 2022 15:28:46 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 02/05/2022 15:10, jlarkin@highlandsniptechnology.com wrote:

What does a control system do when the heater voltage is computed to
be NAN?

Shut down. The situation should never arise if you have scaled the
problem correctly and you should never be dividing by zero anyway.

If the denominator of a division is zero then you haven't thought out
the representation of your problem correctly. Testing for zero is
usually quick (and often implicitly available in integer arithmetic).

Umm. Testing for zero doesn't necessarily change anything.

Yes it does. If you know you are about to divide by zero you can do
something else instead and still save time. Divides are remarkably slow.
(even today that still holds true).

The issue is not mathematical, it's geometric. Even if one can avoid
the actual division, the fundamental problem must still be addressed.
This requires domain knowledge, not just math, to know what's best to
do.


I'm presently working on an algorithm that minimises divides to obtain
higher speed - at least that was the original aim. Serendipitously I
also found that the new scheme simultaneously made the whole thing
considerably more accurate as well as faster.

Basically I can sometimes trade a hardware divide for a much more
horrible algebraic expression involving the other fast primitive
operations and still come out ahead on execution time and accuracy.

All true; relevance unclear.



The important question here is whether the divide by zero is a real
singularity or, as seems likely, a coordinate-transform singularity from
taking bearings and ranges into and out of x,y,z Cartesian coordinates.

Altaz telescope mounts have exactly the same problems as gun turrets.
Limited slew rates and allowable angles. It gets singular near the
zenith since the scope cannot spin fast enough to track the sky there.
This is a weak singularity but it has to be avoided.

Yes, but if the actual hardware is a gimbal, the singularity is
mechanical, not just mathematical. If avoidance is possible, that's
good. But ...


Observation plans were always checked in simulated operation prior to
the actual telescope run to avoid that zone.

Yep.


This is not a coding error per se, it's a mathematical singularity in
the equations - some engagement geometries will hit the singularity,
and the embedded realtime computers of that day could not handle the
more complicated math needed to avoid such things fast enough to
matter.

But is it a true singularity or an artefact of how you are doing the
computing? My instinct is that it is the latter or else it could only
arise so rarely that taking the next set of measurements and processing
them would get you out of the bind. I can see that there might be cases
where the matrix inversion was singular for a single instant but that
would only be true for that time slice.

I do recall doing a peer review of some algorithm design requirements
some time back. There was a vector math singularity if an airplane or
missile was flying on a line through the origin of the radar's local
coordinate system. This is a phased-array radar, so no gimbals. I
don't recall what that vector math was doing, but the solution was to
punch out a finite solid angle around that line (to accommodate
numerical noise and sensor imprecision and the like), and use special
processing there.

It does not matter if this situation is rare. Nor will the next
sample be different if the target is really flying along a radial.
Which could very well be true of say an incoming guided missile, a
situation of existential importance.

In my review example, I was lucky: with fresh eyes, I could see in
the vector math that a zero divisor was possible, and from that what
physical situation would cause it to happen.

But the hazard could have been buried a bit deeper. In an algorithm I
was developing, using linear algebra, I've chased my tail trying to
figure out where random errors were coming from. Turned out to be
that one of the inputs was being wrapped into 360 degrees where the
unwrapped value was needed. Inverting a matrix involves division, and
the truncation error power was sprayed everywhere, with no error
flags thrown. The specific error pattern was the key. Recasting the
math so the angle remained unwrapped solved the problem.
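For illustration, one standard way to keep an angle unwrapped in C (a
sketch of the general technique, not the code in question): accumulate
each new sample's delta after mapping it into (-180, 180].

#include <math.h>

/* Return the running angle extended by the latest wrapped reading,
 * so the output is continuous instead of jumping at the 360 seam. */
static double unwrap_deg(double prev_unwrapped, double new_wrapped)
{
    double delta = fmod(new_wrapped - prev_unwrapped, 360.0);
    if (delta > 180.0)   delta -= 360.0;
    if (delta <= -180.0) delta += 360.0;
    return prev_unwrapped + delta;
}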


There were two schools: Just provide a very large number and proceed,
praying. Stop and print out a bunch of diagnostic information.

Clearly in a combat situation you can't afford to do anything other than
reset the calculation and try again. Or if it is because you are naively
calculating a value of tan(x) then set it to >10^18 and pray. That value
being more than enough to ensure that x = pi/2 to double precision.

Perhaps a reset suffices, perhaps not. The key here was domain
knowledge that a big value was more likely to work than say zero. Or
how to recast the math, if necessary.


Hmm. So the user, an ordinary sailor operating the self-defense
system is in the middle of an engagement with an incoming cruise
missile, and is suddenly handed a bunch of error messages, with less
than twenty seconds to live ... Really??? No! Just silently return
the best answer possible given the situation and press on, praying.

Every division in safety-critical or mission-critical code should be
checked for whether or not it can fail with a divide by zero, and what
if anything should be done about it when it does.

Yes, but the point is broader, that one should consider all corner
cases and singularities, not just divide by zero. The hard part being
to think of them all, in advance.

Joe Gwinn
 
On 04/05/2022 17:27, Phil Hobbs wrote:
Martin Brown wrote:

Division even on the current crop of fast processors is already so
slow that explicitly defending against division by zero is invariably
faster than taking the trap and recovering afterwards.

And if it's _nearly_ singular, you can get denormals, which are really
really slow.

Tell me about it! My first job as a graduate student was to sort out
some FLIC code, written by a brilliant physicist, that was running
incredibly slowly because it used very unfriendly units. Scaled to
hbar^3/c^2 it was always teetering on the brink of denormals at every
step of the way.

It spent ~90% of its time in the Fortran denormal error handling code.

I had two solutions: redefine hbar^3/c^2 == 1, or mask off denormal
interrupts so that they at least ran at full (slow) hardware speed.
Denormals are a favour granted to us by hardware engineers that makes
very small numbers marginally safer than very big ones.

The users (physicists) decided they wanted the latter solution.

They were very pleased with the 10x speed up.
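The modern x86 equivalent of masking those traps is to set the SSE
flush-to-zero and denormals-are-zero bits (a sketch; x86-specific, and
it deliberately gives up gradual underflow, as the physicists did):

#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE, needs SSE3 */

static void disable_gradual_underflow(void)
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         /* tiny results -> 0 */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); /* tiny inputs  -> 0 */
}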

--
Regards,
Martin Brown
 
On Saturday, 30 April 2022 at 16:31:51 UTC+1, Skybuck Flying wrote:
When exception masks are all enabled, to stop the processor from throwing floating point exceptions, the following calculation produces a somewhat strange result:

0.0 / 0.0 = -nan

(At least in Delphi.)

For now I will assume this is the case in C/C++ as well; by that I mean on x86/x64, which should be (and seems to be) following the IEEE 754 floating-point format.

I am a little bit surprised by this and I want/need to know more. Where is it defined that 0.0 / 0.0 should be -NAN?!?

Problem is with the code, example:

T := 0;
D := 0.0 / 0.0;
P := T * D;

This screws up P: instead of P being zero, P is now also -NAN?!?

I find this very strange but ok.

I guess a simple solution could be to set D to 0 explicitly for this case; is there perhaps another solution? Maybe some kind of mask or rounding mode so that the additional branch is not necessary???

Bye for now,
Skybuck.

I can't remember where NAN is from, but it means the answer cannot be computed.
 
On Sun, 8 May 2022 13:44:45 -0700 (PDT), Tabby <tabbypurr@gmail.com>
wrote:


I can't remember where NAN is from, but it means the answer cannot be computed.

It comes from IEEE Std 754, and NaN means only that the answer cannot
be expressed as a real number (versus a poem or a picture), not that
the answer cannot be computed.

<https://en.wikipedia.org/wiki/IEEE_754>
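A minimal C version of Skybuck's experiment (with FP exceptions masked,
as is the C default, the invalid operation quietly yields a NaN that
then propagates through later arithmetic):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double T = 0.0;
    double D = 0.0 / 0.0;   /* invalid operation -> quiet NaN */
    double P = T * D;       /* NaN propagates: 0 * NaN is NaN  */

    printf("D = %f, P = %f\n", D, P);   /* often prints -nan on x86 */

    if (isnan(P))           /* the explicit repair Skybuck asks about */
        P = 0.0;
    printf("after the isnan() check, P = %f\n", P);
    return 0;
}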

Joe Gwinn
 
On Sun, 8 May 2022 13:44:45 -0700 (PDT), Tabby <tabbypurr@gmail.com>
wrote:


I can't remember where NAN is from, but it means the answer cannot be computed.

Not A Number.



--

Anybody can count to one.

- Robert Widlar
 
