MH370 crash site identified with amateur radio technology...

On Friday, September 1, 2023 at 7:39:45 PM UTC-7, John Smiht wrote:
On Friday, September 1, 2023 at 4:54:30 PM UTC-5, Flyguy wrote:
On Friday, September 1, 2023 at 2:48:13 PM UTC-7, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network developed by hams to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.
https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy where MH370 was hijacked by three Russians and flown to Kazakhstan. They
do this by entering the electronics bay, taking control of the aircraft, and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.
Here is the flight path report:
https://www.dropbox.com/s/k4fn8eec4z9np0z/GDTAAA%20WSPRnet%20MH370%20Analysis%20Flight%20Path%20Report.pdf
Captivating! I had no idea that WSPR analyses could produce such results.
Thanks for the link to the paper.
Cheers,
John

This is from the Comments section of the following article:

Dave Pergamon, Perth, Australia, 2 days ago

I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their names to this kind of pseudo-scientific BS.

https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html
 
On a sunny day (Mon, 4 Sep 2023 09:47:14 -0700 (PDT)) it happened Fred Bloggs
<bloggs.fredbloggs.fred@gmail.com> wrote in
<83dd46ae-0e3e-484b-9d87-d0eba79ed250n@googlegroups.com>:

On Saturday, September 2, 2023 at 11:17:32 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Sat, 2 Sep 2023 06:18:05 -0700 (PDT)) it happened Fred Bloggs
bloggs.fred...@gmail.com> wrote in
d8c35725-1608-4f22...@googlegroups.com>:

On Saturday, September 2, 2023 at 8:59:07 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Sat, 2 Sep 2023 04:42:02 -0700 (PDT)) it happened Fred Bloggs
bloggs.fred...@gmail.com> wrote in
e55243f3-fec7-4101...@googlegroups.com>:
On Friday, September 1, 2023 at 5:48:13 PM UTC-4, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network developed by hams to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.
https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy where MH370 was hijacked by three Russians and flown to Kazakhstan. They do this by entering the electronics bay, taking control of the aircraft, and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.


Can they get fentanyl implicated in some way? Or UFOs maybe.

Drug smuggler stashed something toxic in the OBOG central filtration maybe...


"In February 2022, the Australian Transport Safety Bureau and Geoscience Australia confirmed they were reviewing old data related to MH370, following the release of Godfrey's report.[15] In April 2022 the data review "concluded that it is highly unlikely there is an aircraft debris field within the reviewed search area."[16]"

https://www.atsb.gov.au/media/news-items/2022/mh370-data-review
After reading both papers it seems evident to me that it was a premeditated suicide by the pilot.
He must have been 100% conscious and in control to make all those small adjustments,
and the endpoint corresponds to / is the same as the one he had on his flight simulator at home.
That is the third pilot suicide, by one who does not give a shit about his passengers, that I have read about:
one in France, one in Africa and now this.
Maybe he locked everybody else out of the cockpit...

The location has several miles of uncertainty, but to look again now with the smaller area may make sense.

Not sure. But don't all suicide flights crash on the planned route? This one crashed because it ran out of fuel. The pilot could have programmed a death route and then shot himself or taken a pill.
The route has funny things like flying in a rectangle;
see page 49 and onwards of
GDTAAA_WSPRnet_MH370_Analysis_Flight_Path_Report.pdf

many other corrections too
Not sure you can program the MH370 auto-pilot to do all that all by itself.
My drone can, but then again... You need to enter GPS locations, altitude, speed.

The reality is Asian commercial pilots commit suicide like this ALL the time. But there are strong political and economic reasons for their phony accident investigations to come up short of making that finding.

The most recent one:

https://www.planeandpilotmag.com/news/the-latest/pilot-murder-suicide-likely-cause-of-china-eastern-air-disaster/

When you start losing parts of the wings and other control surfaces, that kind of gives away that the pilot was deliberately operating the aircraft outside its envelope. Boeing was convinced of their finding based upon the data. China was angry with it.

I was in a flight that had an engine catch fire on the way from Spain to the Netherlands many years ago.
We dumped fuel and landed safely back in Spain where we started, and people applauded the captain.
But before takeoff I noticed a pool of what looked like oil under one engine, but thought
'they must have checked that'.
The captain just walked in and never even walked around the plane to see if everything was in order.
When back home on a later flight I called the newspaper and reported it,
they then did an article on airplane maintenance.
I have been flying less since then, a boat from here to the UK does not even take
much longer, leaves on schedule too, couple of hours, I stay on deck :)
and then by train to London.

It is sometimes hard to tell exactly what caused an accident; there are good programs on German TV that go into
detail about it, following the whole search for the cause of plane crashes.
I am sure those can be downloaded too.


>MH370 was a mid-life crisis thing.
 
On Tue, 05 Sep 23 00:41:46 UTC, Loose Sphincter, the unhappily married gay
neo-nazitard, IMPERSONATING his master, jdyöung, whined again:


On Sunday, September 3, 2023 at 11:09:50 PM UTC-5, jdyöung wrote:

“Imitation is the sincerest form of flattery that mediocrity can pay to greatness.” - Oscar Wilde

Indeed it is

So why do you CONTINUE flattering him, you abysmally stupid gay
neo-nazitard? Because you ARE an abysmally stupid gay neo-nazitard? LOL

Owned!
ROFL!

Yes you are

LAME attempt at a come back by our resident abysmally stupid gay
neo-nazitard! LOL

jdyoung, Official
jdy...@gmail.com
www.splc.org

No you're not.
jdyöung, Official
jdyo...@gmail.com

Even LAMER attempt, you miserable abysmally stupid gay neo-nazitard! LOL

--
Loose Sphincter about his predilection:
"Foreskins, and only foreskins. That's my life."
MID: <5qopicpl2kogolncj5rj9q1c0g459m4m7a@4ax.com>
 
On 05-09-2023 00:39, John Larkin wrote:
On Mon, 4 Sep 2023 23:59:08 +0200, Klaus Vestergaard Kragelund
klauskvik@hotmail.com> wrote:

On 03-09-2023 18:05, Fred Bloggs wrote:
On Sunday, September 3, 2023 at 10:42:14 AM UTC-4, John Larkin wrote:
On Sun, 3 Sep 2023 05:38:52 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Sunday, September 3, 2023 at 4:15:30 AM UTC-4, John Larkin wrote:
On Sat, 2 Sep 2023 11:20:49 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Friday, September 1, 2023 at 4:53:24 PM UTC-4, John Larkin wrote:
On Fri, 1 Sep 2023 12:56:31 -0700 (PDT), Klaus Kragelund
klaus.k...@gmail.com> wrote:

Hi

I have a triac control circuit in which I supply gate current all the time to avoid zero crossing noise.

https://electronicsdesign.dk/tmp/TriacSolution.PNG

Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient with high dV/dt turning it on via the "rate of rise of off-state voltage" limit.

The triac used is BT137S-600:

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

I am using a snubber to divert energy, and also have a pulldown of 1kohm to shunt energy transients that capacitively couple into the gate.

The unit is at the client, so have not measured on it yet, so trying to guess what I should try to remove the problem.

I could:

Use a harder snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient

One thing I thought of, since I turn it on all the time and it is not very critical that the timing is perfect in terms of turning it on at the zero crossing, was to add a big capacitor on the gate in parallel with shunt resistor R543. That will act as a low impedance for high-speed transients.

Good idea, or better ideas?

Cheers

Klaus
It's a sensitive-gate triac. R542 and 543 look big to me. They could
be smaller and bypassed.

If there are motors in the vicinity, you want to at least use twisted leads in all feeds of the gate circuit.
I doubt that would make any difference.

Twisted pairs make a HUGE difference.
Sometimes. Probably not here.

I wonder how far from the triac the opto is.


The opto is just next to the Triac, and with a good ground plane, so no
twisting of the gate traces needed,

It drops 1.3V minimum at 10A. It has an R theta-JC of about 2. If the application is high current, it needs a heat sink, so it may be off board.

I^2T is only 21, which is kind of weak.

The max rate of rise of commutating voltage at turn-off is min. 10 V/us, again on the low side.

But dVD/dt is a minimum of 200 V/us with the gate open; that's to trigger a commutation from the off state, which is pretty good but not outstanding. It could be that, and if so a standard L shunt C off the line is all that's needed.

Don't know how you get sensitive gate with 30mA trigger current.

Some triacs need 150 mA and 1.5 volts to trigger. Some have low ohmic
paths from gate to MT1, which helps reduce spurious triggering.




The kicker is VGT, gate trigger voltage. At 400V across the main terminals, it could be as low as 0.25V at 125°C, making for 0.4V at 25°C. Table 6. That kind of number indicates a vulnerability. He definitely should guard the gate drive.

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

The combination of the voltage rate of rise and the capacitance from
M1/M2 to the gate is what triggers it, right?

So just adding a capacitor on the gate would be a good way to protect
against noise, right?

I'd bypass the gate and the optocoupler receiver. Either could be
triggered by a bit of capacitively-coupled noise.

As suggested, R542 and R543 could be smaller, both bypassed by as much
c as is compatible with your speed requirements.

Agreed, and thanks for the suggestions. Will try it out :)
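
The coupling mechanism discussed above is easy to sanity-check numerically. A rough sketch follows: the stray capacitance and transient slope are illustrative assumptions, not measurements; only the 1k shunt (R543) and the ~0.4V worst-case gate trigger voltage come from the thread and datasheet.

```python
# Sanity check: can dV/dt coupled through stray capacitance fire the gate?
# C_STRAY and DV_DT are assumed illustrative values, not measurements.
C_STRAY = 20e-12      # assumed MT2-to-gate stray capacitance, farads
DV_DT   = 500e6       # assumed transient slope, 500 V/us expressed in V/s
R_SHUNT = 1e3         # R543, the 1k gate-to-MT1 shunt from the schematic
V_GT    = 0.4         # worst-case gate trigger voltage at 25 C (datasheet Table 6)

i_coupled = C_STRAY * DV_DT        # displacement current injected into the gate
v_gate = i_coupled * R_SHUNT       # voltage that current develops across R543
print(f"{i_coupled*1e3:.0f} mA coupled -> {v_gate:.1f} V on the gate "
      f"(trigger threshold ~{V_GT} V)")

# A bypass capacitor across R543 turns the same charge into a small voltage:
# for a transient of duration t, the gate only rises by roughly i*t/C.
C_BYPASS = 0.1e-6                  # the 0.1 uF first try suggested in the thread
T_TRANSIENT = 1e-6                 # assumed 1 us transient duration
v_bypassed = i_coupled * T_TRANSIENT / C_BYPASS
print(f"with 0.1 uF bypass: ~{v_bypassed*1e3:.0f} mV, well below the threshold")
```

With these assumed numbers the bare 1k shunt lets the gate swing volts, while the bypass capacitor holds it to tens of millivolts, which is why the capacitor-across-R543 suggestion makes sense.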
 
On 04/09/2023 10:59 pm, Klaus Vestergaard Kragelund wrote:
On 03-09-2023 18:05, Fred Bloggs wrote:
On Sunday, September 3, 2023 at 10:42:14 AM UTC-4, John Larkin wrote:
On Sun, 3 Sep 2023 05:38:52 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Sunday, September 3, 2023 at 4:15:30 AM UTC-4, John Larkin wrote:
On Sat, 2 Sep 2023 11:20:49 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Friday, September 1, 2023 at 4:53:24 PM UTC-4, John Larkin wrote:
On Fri, 1 Sep 2023 12:56:31 -0700 (PDT), Klaus Kragelund
klaus.k...@gmail.com> wrote:

Hi

I have a triac control circuit in which I supply gate current
all the time to avoid zero crossing noise.

https://electronicsdesign.dk/tmp/TriacSolution.PNG

Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient with high dV/dt turning it on
via the "rate of rise of off-state voltage" limit.

The triac used is BT137S-600:

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

I am using a snubber to divert energy, and also have a pulldown
of 1kohm to shunt energy transients that capacitively couple
into the gate.

The unit is at the client, so have not measured on it yet, so
trying to guess what I should try to remove the problem.

I could:

Do a more hard snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient

One thing I thought of, since I turn it on all the time and it
is not very critical that the timing is perfect in terms of
turning it on at the zero crossing, was to add a big capacitor
on the gate in parallel with shunt resistor R543. That will act
as a low impedance for high-speed transients.

Good idea, or better ideas?

Cheers

Klaus
It's a sensitive-gate triac. R542 and 543 look big to me. They could
be smaller and bypassed.

If there are motors in the vicinity, you want to at least use
twisted leads in all feeds of the gate circuit.
I doubt that would make any difference.

Twisted pairs make a HUGE difference.
Sometimes. Probably not here.

I wonder how far from the triac the opto is.


The opto is just next to the Triac, and with a good ground plane, so no
twisting of the gate traces needed,

It drops 1.3V minimum at 10A. It has an R theta-JC of about 2. If the
application is high current, it needs a heat sink, so it may be off
board.

I^2T is only 21, which is kind of weak.

The max rate of rise of commutating voltage at turn-off is min. 10 V/us,
again on the low side.

But dVD/dt is a minimum of 200 V/us with the gate open; that's to
trigger a commutation from the off state, which is pretty good but not
outstanding. It could be that, and if so a standard L shunt C off the
line is all that's needed.

Don't know how you get sensitive gate with 30mA trigger current.

The kicker is VGT, gate trigger voltage. At 400V across the main
terminals, it could be as low as 0.25V at 125°C, making for 0.4V at
25°C. Table 6. That kind of number indicates a vulnerability. He
definitely should guard the gate drive.

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

The combination of the voltage rate of rise and the capacitance from
M1/M2 to the gate is what triggers it, right?

So just adding a capacitor on the gate would be a good way to protect
against noise, right?

Yes, I'd put a capacitor across R543. At a guess 0.1uF is a good first try,
maybe even up to 0.47uF. R543 at 1k is far too high and maybe not even
necessary, since a lot of triacs that size have on-die gate-MT1 resistors
- you could measure one.

Is there a special reason the triac side supply is negative ground? That
triac is capable of positive gate triggering but is much less sensitive.
I almost always have the logic supply positive ground triac side so the
gate is driven from the -3.3 or -5V.

piglet
 
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

--
Martin Brown
 
On Tuesday, 5 September 2023 at 06:17:17 UTC+1, gggg gggg wrote:
On Friday, September 1, 2023 at 7:39:45 PM UTC-7, John Smiht wrote:
On Friday, September 1, 2023 at 4:54:30 PM UTC-5, Flyguy wrote:
On Friday, September 1, 2023 at 2:48:13 PM UTC-7, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network developed by hams to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.
https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy where MH370 was hijacked by three Russians and flown to Kazakhstan. They
do this by entering the electronics bay, taking control of the aircraft, and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.
Here is the flight path report:
https://www.dropbox.com/s/k4fn8eec4z9np0z/GDTAAA%20WSPRnet%20MH370%20Analysis%20Flight%20Path%20Report.pdf
Captivating! I had no idea that WSPR analyses could produce such results.
Thanks for the link to the paper.
Cheers,
John
This is from the Comments section of the following article:

Dave Pergamon, Perth, Australia, 2 days ago

I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their names to this kind of pseudo-scientific BS.

The paper does state that the interaction with aircraft happens close to the locations where the sky wave refracts
down to the ground and reflects up again. This means that the claim quoted above must have been made by
somebody who had not actually read what they were claiming to be BS. Whether the results are accurate enough to
give a useful search area is another matter.
John
https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html
 
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.

The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.
 
On Tuesday, September 5, 2023 at 8:50:22 AM UTC-5, John Walliker wrote:
On Tuesday, 5 September 2023 at 06:17:17 UTC+1, gggg gggg wrote:
On Friday, September 1, 2023 at 7:39:45 PM UTC-7, John Smiht wrote:
On Friday, September 1, 2023 at 4:54:30 PM UTC-5, Flyguy wrote:
On Friday, September 1, 2023 at 2:48:13 PM UTC-7, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network developed by hams to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.
https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy where MH370 was hijacked by three Russians and flown to Kazakhstan. They
do this by entering the electronics bay, taking control of the aircraft, and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.
Here is the flight path report:
https://www.dropbox.com/s/k4fn8eec4z9np0z/GDTAAA%20WSPRnet%20MH370%20Analysis%20Flight%20Path%20Report.pdf
Captivating! I had no idea that WSPR analyses could produce such results.
Thanks for the link to the paper.
Cheers,
John
This is from the Comments section of the following article:

Dave Pergamon, Perth, Australia, 2 days ago

I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their names to this kind of pseudo-scientific BS.
The paper does state that the interaction with aircraft happens close to the locations where the sky wave refracts
down to the ground and reflects up again. This means that the claim quoted above must have been made by
somebody who had not actually read what they were claiming to be BS. Whether the results are accurate enough to
give a useful search area is another matter.
John

https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html

Yes, and those WSPR stations are all over the globe. In addition, I think I remember watching something on TV or YT about how the Russians or somebody used the disturbance of radio waves as a passive "radar" system. Actually, ISTR that it allowed the detection of a US stealth plane which was shot down.
Another John
 
On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.


The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans. Hardware
is far simpler in terms of lines of FPGA code. But it's creeping up.

On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don't recall, doesn't actually matter).

In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.
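
Joe's arithmetic checks out; here it is redone with exact integer arithmetic, a sketch using his stated 1-in-5 IF density assumption:

```python
# Reproduce the path-count estimate from the post above.
lines = 100_000
n_ifs = lines // 5                  # roughly one IF per five lines -> 20,000 branches

# Upper bound on distinct paths: every two-way IF doubles the count.
paths = 2 ** n_ifs                  # Python bignums handle this where a calculator chokes
digits = len(str(paths))            # number of decimal digits
print(f"2^{n_ifs} is a {digits}-digit number")   # a 6021-digit number, ~10^6020
```

At one test per nanosecond since the Big Bang you would still cover a vanishing fraction of 10^6020 paths, which is the point of the anecdote.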

The customer withdrew the requirement.

Joe Gwinn
 
On 05/09/2023 16:57, John Larkin wrote:
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.

Don\'t blame the engineers for that - it is the ship it and be damned
senior management that is responsible for most buggy code being shipped.
Even more so now that 1+GB upgrades are essentially free. :(

First to market is worth enough that people live with buggy code. The
worst major release I can recall in a very long time was MS Excel 2007
(although bugs in Vista took a lot more flack - rather unfairly IMHO).

(which reminds me it is a MS patch Tuesday today)

The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few
practitioners are able to do it properly and it is pretty much reserved
for ultra high reliability safety and mission critical code.

It could all be done to that standard iff commercial developers and
their customers were prepared to pay for it. However, they want it now
and they keep changing their minds about what it is they actually want
so the goalposts are forever shifting around. That sort of functionality
creep is much less common in hardware.

UK's NATS system is supposedly 6 sigma coding but its misbehaviour on
Bank Holiday Monday peak travel time was somewhat disastrous. It seems
someone managed to input the halt and catch fire instruction and the
buffers ran out before they were able to fix it. There will be a
technical report out in due course - my guess is that they have reduced
overheads and no longer have some of the key people who understand its
internals. Malformed flight plan data should not have been able to kill
it stone dead - but apparently that is exactly what happened!

https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)

If so Google \"UK air traffic control outage caused by unusual flight
plan data\"

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

But only by using design and simulation *software* that you fail to
acknowledge is actually pretty good. If you had to do it with pencil
and paper you would be there forever.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

So do physical mechanical interlocks. I don't trust software or even
electronic interlocks to protect me compared to a damn great beam stop
and a padlock on it with the key in my pocket.

--
Martin Brown
 
On 05/09/2023 17:45, Joe Gwinn wrote:
On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.


The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans. Hardware
is far simpler in terms of lines of FPGA code. But it\'s creeping up.

On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don\'t recall, doesn\'t actually matter).

In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP

Although that is true it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the most
frequently traversed paths in any given codebase. One snag is that
testing is invariably cut short by management when development overruns.

The bits that fail to get explored tend to be weird error recovery
routines. I recall one latent on the VAX for ages which was that when it
ran out of IO handles (because someone was opening them inside a loop)
the first thing the recovery routine tried to do was open an IO channel!

calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

McCabe\'s complexity metric provides a way to test paths in components
and subsystems reasonably thoroughly and catch most of the common
programmer errors. Static dataflow analysis is also a lot better now
than in the past.

Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).
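Martin's arithmetic can be checked directly. A sketch (function names are mine, not from the thread) contrasting the linear cost of branch coverage with the exponential blow-up of path coverage, using the thread's figures of 100,000 lines and one IF per five lines:

```c
#include <math.h>

/* The thread's figures: 100,000 lines, one IF per five lines,
   so 20,000 binary branch points. */

/* Branch coverage cost is linear: at most two vectors per IF,
   one for each arm. 20,000 IFs -> 40,000 vectors. */
long branch_vectors(long ifs) { return 2 * ifs; }

/* Path coverage cost is exponential: up to 2^ifs distinct paths.
   The count won't fit in any machine word, so report its decimal
   digit count via logarithms, as Joe did: 20000 * log10(2) ~ 6021,
   i.e. roughly 10^6021 paths. */
double path_digits(long ifs) { return ifs * log10(2.0); }
```

With 20,000 IFs, branch_vectors() gives 40,000 while path_digits() gives about 6021 decimal digits, matching the 40000-vector and 10^6021 figures quoted above.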

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.

--
Martin Brown
 
On 9/5/2023 5:13 AM, Martin Brown wrote:
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended to run
24/7 and recover gracefully from anything that may happen to it.

I\'m looking at the pre-release period (you wouldn\'t want to release
something that wasn\'t \"stable\").

I commit often (dozens of times a day) so I can have a record of
each problem encountered and, thereafter, how it was \"fixed\".
As the number of messages related to fixups decreases, confidence
in the codebase rises.

It is inevitable that a new release will have some bugs and minor differences
from its predecessor that real life users will find PDQ.

The \"bugs\" that tend to show up after release are specification
shortcomings. E.g., I had a case where a guy wired a motor
incorrectly and the software just kept driving it further and further
from its desired setpoint -- until it smashed into the "wrong"
limit switches (which, of course, weren\'t examined because it
wasn\'t SUPPOSED to be traveling in that direction).

When you\'ve got 7-figures at stake, you can\'t resort to blaming
the \"electrician\" for the failure (\"Why didn\'t the software
sense that it was running the wrong way?\" Um, why didn\'t it sense
that the electrician\'s wife had been ragging on him before he
came to work and left him in a distracted mood??)

Bugs (as in \"coding errors\") should never leave the lab.

The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without breaking
something else. Modern  optimisers make that more difficult now than it used to
be back when I was involved in commercial development.

Good problem decomposition goes a long way towards that goal.
If you try to do \"too much\" you quickly overwhelm the developer\'s
ability to manage complexity (7 items in STM?). And, as you can\'t
*see* the entire implementation, there\'s nothing to REMIND you
of some salient issue that might impact your local efforts.

[Hence the value of eschewing globals and the languages that
tolerate/encourage them! This dramatically cuts down the
number of ways X can influence Y.]
 
On Tue, 05 Sep 2023 12:45:01 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That\'s the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.


The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans.

After you type a line of code, read it. When we did that, entire
applications often worked first try.

Hardware
is far simpler in terms of lines of FPGA code. But it's creeping up.

FPGAs are at least (usually) organized state machines. Mistakes are
typically hard failures, not low-rate bugs discovered in the field.
Avoiding race and metastability hazards is common practice.

On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don\'t recall, doesn\'t actually matter).

Software provability was a brief fad once. It wasn\'t popular or, as
code is now done, possible.


In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

An FPGA is usually coded as a state machine, where the designer
understands that the machine has a finite number of states and handles
every one. A computer program has an impossibly large number of
states, unknown and certainly not managed. Code is like hairball async
logic design.
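The contrast can be made concrete. A toy, fully-enumerated machine in C (states, events, and names all invented for illustration): every state appears in the switch, and anything unexpected is trapped, which is the discipline that keeps the state count manageable.

```c
#include <assert.h>

/* Hypothetical three-state machine: every state handled explicitly,
   so the designer can see that no state is left unmanaged. */
typedef enum { IDLE, RUNNING, DONE } state;
typedef enum { EV_START, EV_FINISH, EV_RESET } event;

state step(state s, event e) {
    switch (s) {
    case IDLE:    return (e == EV_START)  ? RUNNING : IDLE;
    case RUNNING: return (e == EV_FINISH) ? DONE    : RUNNING;
    case DONE:    return (e == EV_RESET)  ? IDLE    : DONE;
    }
    assert(!"unhandled state");   /* hairball async logic ends up here */
    return IDLE;
}
```

Hairball code, by contrast, has its "states" smeared across dozens of variables, so no such exhaustive enumeration is even possible.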


The customer withdrew the requirement.

It was naive of him to want correct code.


Joe Gwinn
 
On Tue, 5 Sep 2023 17:47:41 +0100, Martin Brown
<\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

On 05/09/2023 16:57, John Larkin wrote:
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That\'s the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.

Don\'t blame the engineers for that - it is the ship it and be damned
senior management that is responsible for most buggy code being shipped.
Even more so now that 1+GB upgrades are essentially free. :(

First to market is worth enough that people live with buggy code. The
worst major release I can recall in a very long time was MS Excel 2007
(although bugs in Vista took a lot more flak - rather unfairly IMHO).

(which reminds me it is a MS patch Tuesday today)

The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few
practitioners are able to do it properly and it is pretty much reserved
for ultra high reliability safety and mission critical code.

It could all be done to that standard iff commercial developers and
their customers were prepared to pay for it. However, they want it now
and they keep changing their minds about what it is they actually want
so the goalposts are forever shifting around. That sort of functionality
creep is much less common in hardware.

UK\'s NATS system is supposedly 6 sigma coding but its misbehaviour on
Bank Holiday Monday peak travel time was somewhat disastrous. It seems
someone managed to input the halt and catch fire instruction and the
buffers ran out before they were able to fix it. There will be a
technical report out in due course - my guess is that they have reduced
overheads and no longer have some of the key people who understand its
internals. Malformed flight plan data should not have been able to kill
it stone dead - but apparently that is exactly what happened!

https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)

If so Google \"UK air traffic control outage caused by unusual flight
plan data\"

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

But using design and simulation *software* that you fail to acknowledge
is actually pretty good. If you had to do it with pencil and paper you
would be there forever.

We did serious electronic design without simulation, and most of it
worked first time, or had dumb mistake hard failures that were easily
hacked. It didn\'t take forever. If one didn\'t understand some part or
circuit, it could be breadboarded and tested.



FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

So do physical mechanical interlocks. I don\'t trust software or even
electronic interlocks to protect me compared to a damn great beam stop
and a padlock on it with the key in my pocket.
 
On 9/5/2023 9:47 AM, Martin Brown wrote:
Don\'t blame the engineers for that - it is the ship it and be damned senior
management that is responsible for most buggy code being shipped. Even more so
now that 1+GB upgrades are essentially free. :(

Note how the latest coding styles inherently acknowledge that.
Agile? How-to-write-code-without-knowing-what-it-has-to-do?

> First to market is worth enough that people live with buggy code. The worst

Of course! Anyone think their Windows/Linux box is bug-free?
USENET client? Browser? yet, somehow, they all seem to provide
real value to their users!

major release I can recall in a very long time was MS Excel 2007 (although bugs
in Vista took a lot more flak - rather unfairly IMHO).

Of course. Folks run Linux with 20M+ LoC? So, a ballpark estimate
of 20K+ *bugs* in the RELEASED product??

<https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Linux_kernel_map.png>

The era of monolithic kernels is over. Unless folks keep wanting
to DONATE their time to maintaining them.

<https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Redevelopment_costs_of_Linux_kernel.png>

Amusing that it\'s pursuing a 50 year old dream... (let\'s get together
an effort to recreate the Wright flyer so we can all take 100 yard flights!)

> (which reminds me it is a MS patch Tuesday today)

Surrender your internet connection, for the day...

The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few practitioners are
able to do it properly and it is pretty much reserved for ultra high
reliability safety and mission critical code.

And only applies to the smallest parts of the codebase. The \"engineering\"
comes in figuring out how to live with systems that aren\'t verifiable.
(you can\'t ensure hardware WILL work as advertised unless you have tested
every component that you put into the fabrication -- ah, but you can blame
someone else for YOUR system\'s failure)

It could all be done to that standard iff commercial developers and their
customers were prepared to pay for it. However, they want it now and they keep
changing their minds about what it is they actually want so the goalposts are
forever shifting around. That sort of functionality creep is much less common
in hardware.

Exactly. And, software often is told to COMPENSATE for hardware shortcomings.

One of the sound systems used in early video games used a CVSD as an ARB.
But, the idiot who designed the hardware was 200% clueless about how the
software would use the hardware. So, the (dedicated!) processor had
to sit in a tight loop SHIFTING bits into the CVSD. Of course, each
path through the loop had to be balanced in terms of execution time
lest you get a beat component (as every 8th bit requires a new byte
to be fetched -- which takes a different amount of time than shifting
the current byte by one bit).
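The loop described above can be sketched (names and framing are mine; the original ran on a dedicated processor in hand-counted cycles). The point is the two unequal paths: seven iterations out of eight only shift, the eighth also fetches a byte, and on the real hardware the fetch path had to be padded to the shift path's cycle count or the difference appears as a beat at 1/8 the bit rate.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the bit-banging loop feeding the CVSD. */
typedef struct {
    const uint8_t *buf;   /* sample buffer feeding the CVSD */
    size_t idx;           /* next byte to fetch */
    uint8_t cur;          /* byte currently being shifted out */
    int bits;             /* bits left in cur */
} bitstream;

int next_bit(bitstream *s) {
    if (s->bits == 0) {              /* slow path: byte fetch...        */
        s->cur = s->buf[s->idx++];   /* ...pad with NOPs to match the   */
        s->bits = 8;                 /* fast path's cycle count         */
    }
    int bit = (s->cur >> 7) & 1;     /* fast path: shift out MSB first */
    s->cur = (uint8_t)(s->cur << 1);
    s->bits--;
    return bit;
}

/* Shift all eight bits of one byte out and back in; the identity
   check exercises both the fetch path and the shift path. */
uint8_t roundtrip(uint8_t byte) {
    bitstream s = { &byte, 0, 0, 0 };
    uint8_t out = 0;
    for (int i = 0; i < 8; i++)
        out = (uint8_t)((out << 1) | (uint8_t)next_bit(&s));
    return out;
}
```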

Hardware designers are typically clueless as to how their decisions
impact the software. And, as the company may have invested a \"couple
of kilobucks\" on a design and layout, Manglement\'s shortsightedness
fails to realize the tens of kilobucks that their penny-pinching
will cost!

[I once had a spectacular FAIL in a bit of hardware that I designed.
It was a custom CPU (\"chip\"). The guy writing the code (and the
tools to write it!) assumed addresses were byte-oriented. But,
the processor was truly a 16b machine and all of the addresses
were for 16b objects. So, all of the addresses generated by his tools
were exactly twice what they should have been (\"Didn\'t you notice
how the LSb was ALWAYS '0'?") Simple fix but embarrassing as we each
relied on assumptions that seemed natural to us where the wiser
approach would have made that statement explicit]

UK\'s NATS system is supposedly 6 sigma coding but its misbehaviour on Bank
Holiday Monday peak travel time was somewhat disastrous. It seems someone
managed to input the halt and catch fire instruction and the buffers ran out
before they were able to fix it. There will be a technical report out in due
course - my guess is that they have reduced overheads and no longer have some
of the key people who understand its internals. Malformed flight plan data
should not have been able to kill it stone dead - but apparently that is
exactly what happened!

Lunar landers, etc. Software is complex. Hardware is a walk in the
park. For anything but a trivial piece of code, you can\'t see all of the
interconnects/interdependencies.

https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)

If so Google \"UK air traffic control outage caused by unusual flight plan data\"

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

But using design and simulation *software* that you fail to acknowledge is
actually pretty good. If you had to do it with pencil and paper you would be
there forever.

When was the last time your calculator PROGRAM produced a verifiable error?
And, desktop software is considerably less complex than software used
in products where interactions arising from temporal differences can
prove unpredictable.

We bought a new stove/oven some time ago. Specify which oven, heat source,
setpoint temperature and time. START.

Ah, but if you want to change the time remaining (because you peeked
at the item and realize it could use another few minutes) AND the
timer expires WHILE YOU ARE TRYING TO CHANGE IT, the user interface
locks up (!). Your recourse is to shut off the oven (abort the
process) and then restart it using the settings you just CANCELED.

It\'s easy to see how this can evade testing -- if the test engineer
didn\'t have a good understanding of how the code worked so he
could challenge it with specially crafted test cases.

When drafting system specifications, I (try to) imagine every
situation that can come up and describe how each should be handled.
So, the test scaffolding and actual tests can be designed to verify
that behavior in the resulting product.

[How do you test for the case where the user tries to change the
remaining time AS the timer is expiring? How do you test for the
case where the process on the remote host crashes AFTER it has
received a request for service but before it has acknowledged
that? Or, BEFORE it receives it? Or, WHILE acknowledging it?]
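One hedged answer to the first question: make time an injected dependency, so a test can deterministically land the expiry mid-edit. An entirely hypothetical sketch of the idea, not the oven's actual firmware:

```c
/* Inject the clock so the test, not the wall, decides when "now" is. */
typedef long (*clock_fn)(void);

typedef struct { long deadline; int editing; clock_fn now; } oven_timer;

/* If expiry lands while the user is mid-edit, defer it instead of
   letting the UI wedge (the failure mode described above). */
int oven_tick(oven_timer *t) {
    if (t->now() < t->deadline) return 0;   /* still running */
    if (t->editing) {                       /* race: expiry during edit */
        t->deadline += 60;                  /* defer; stay responsive */
        return 0;
    }
    return 1;                               /* normal expiry */
}

static long fake_time;
static long fake_now(void) { return fake_time; }

/* Replay the race deterministically: expiry arrives mid-edit. */
int race_handled(void) {
    oven_timer t = { 100, 0, fake_now };
    fake_time = 50;
    if (oven_tick(&t) != 0) return 0;       /* not yet expired */
    t.editing = 1;
    fake_time = 100;                        /* expiry during the edit */
    return oven_tick(&t) == 0 && t.deadline == 160;
}
```

In real firmware the expiry check and the edit flag would need to be examined atomically; the point here is only that the interleaving becomes testable once the clock is fake-able.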

Hardware is easy to test: set voltage/current/freq/etc. and
observe result.

[We purchased a glass titty many years ago. At one point, we turned
it on, then off, then on again -- in relatively short order. I
guess the guy who designed the power supply hadn\'t considered this
possibility as the magic smoke rushed out of it! How hard can it
be to design a power supply???]

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

No, the physical electronics are the EASIEST bits. If designing
hardware was so difficult, then the solution to the software
\"problem\" would be to just have all the hardware designers switch
over to designing software! Problem solved INSTANTLY!

In practice, the problem would be worsened by a few orders of
magnitude as they suddenly found themselves living in an opaque world.

So do physical mechanical interlocks. I don\'t trust software or even electronic
interlocks to protect me compared to a damn great beam stop and a padlock on it
with the key in my pocket.

Note the miswired motor example, above. If the limit switches had
been hardwired, the problem still would have been present as the
problem was in the hardware -- the wiring of the motor.
 
On 9/5/2023 9:45 AM, Joe Gwinn wrote:
There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans. Hardware
is far simpler in terms of lines of FPGA code. But it\'s creeping up.

Even small projects defy hardware implementations.

BUILD a speech synthesizer, entirely out of hardware.
Make sure there is a way the user can adjust the voice
their individual liking. (*you*, not your TEAM, have
3 months to produce a working prototype).

Or, something that recognizes faces, voices, etc.
Or, something that knows which plants should be watered,
today (if any), and how much water to dispense.
Or, something that examines the text in a document
and flags grammatical and spelling errors.
Or...

On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don\'t recall, doesn\'t actually matter).

In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

The *first* problem is codifying how the code should behave in
*each* of those test cases.

> The customer withdrew the requirement.

\"Verify your sqrt() function produces correct answers over the
range of inputs\"
 
On 9/5/2023 10:02 AM, Martin Brown wrote:
In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements.  So, there
are up to 2^20000 unique paths through the code.  Which chokes my HP

Although that is true it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the most
frequently traversed paths in any given codebase. One snag is that testing is
invariably cut short by management when development overruns.

\"We\'ll fix it in version 2\"

I always found this an amusing delusion.

If the product is successful, there will be lots of people clamoring
for fixes so you won\'t have any manpower to devote to designing
version 2 (but your competitors will see the appeal your product
has and will start designing THEIR replacement for it!)

If the product is a dud (possibly because of these problems),
there won\'t be a need for a version 2.

> The bits that fail to get explored tend to be weird error recovery routines. I

Because, by design, they are seldom encountered.
So, don\'t benefit from being exercised in the normal
course of operation.

recall one latent on the VAX for ages which was that when it ran out of IO
handles (because someone was opening them inside a loop) the first thing the
recovery routine tried to do was open an IO channel!

calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number.  The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

McCabe\'s complexity metric provides a way to test paths in components and
subsystems reasonably thoroughly and catch most of the common programmer
errors. Static dataflow analysis is also a lot better now than in the past.

But some test cases can mask other paths through the code.
There is no guarantee that a given piece of code *can* be
thoroughly tested -- especially if you take into account the
fact that the underlying hardware isn't infallible;
"if (x % 2)" can yield one result, now, and a different
result, 5 lines later -- even though x hasn't been
altered (but the hardware farted).

So:

if (x % 2) {
do this;
do that;
do another_thing;
} else {
do that;
}

can execute differently than:

if (x % 2) {
do this;
}

do that;

if (x % 2) {
do another_thing;
}

Years ago, this possibility wasn\'t ever considered.

[Yes, optimizers can twiddle this but the point remains]

And, that doesn\'t begin to address hostile actors in a
system!

Then you only need at most 40000 test vectors to take each branch of every
binary if statement (60000 if it is Fortran with 3 way branches all used). That
is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain latent
bugs - which makes it worth looking at more carefully.

\"A \'program\' should fit on a single piece of paper\"
 
On Tue, 5 Sep 2023 10:44:08 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

On 9/5/2023 9:47 AM, Martin Brown wrote:
Don\'t blame the engineers for that - it is the ship it and be damned senior
management that is responsible for most buggy code being shipped. Even more so
now that 1+GB upgrades are essentially free. :(

Note how the latest coding styles inherently acknowledge that.
Agile? How-to-write-code-without-knowing-what-it-has-to-do?

First to market is worth enough that people live with buggy code. The worst

Of course! Anyone think their Windows/Linux box is bug-free?
USENET client? Browser? yet, somehow, they all seem to provide
real value to their users!

major release I can recall in a very long time was MS Excel 2007 (although bugs
in Vista took a lot more flak - rather unfairly IMHO).

Of course. Folks run Linux with 20M+ LoC? So, a ballpark estimate
of 20K+ *bugs* in the RELEASED product??

https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Linux_kernel_map.png

The era of monolithic kernels is over. Unless folks keep wanting
to DONATE their time to maintaining them.

https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Redevelopment_costs_of_Linux_kernel.png

Amusing that it\'s pursuing a 50 year old dream... (let\'s get together
an effort to recreate the Wright flyer so we can all take 100 yard flights!)

(which reminds me it is a MS patch Tuesday today)

Surrender your internet connection, for the day...

The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few practitioners are
able to do it properly and it is pretty much reserved for ultra high
reliability safety and mission critical code.

And only applies to the smallest parts of the codebase. The \"engineering\"
comes in figuring out how to live with systems that aren\'t verifiable.
(you can\'t ensure hardware WILL work as advertised unless you have tested
every component that you put into the fabrication -- ah, but you can blame
someone else for YOUR system\'s failure)

It could all be done to that standard iff commercial developers and their
customers were prepared to pay for it. However, they want it now and they keep
changing their minds about what it is they actually want so the goalposts are
forever shifting around. That sort of functionality creep is much less common
in hardware.

Exactly. And, software often is told to COMPENSATE for hardware shortcomings.

One of the sound systems used in early video games used a CVSD as an ARB.
But, the idiot who designed the hardware was 200% clueless about how the
software would use the hardware. So, the (dedicated!) processor had
to sit in a tight loop SHIFTING bits into the CVSD. Of course, each
path through the loop had to be balanced in terms of execution time
lest you get a beat component (as every 8th bit requires a new byte
to be fetched -- which takes a different amount of time than shifting
the current byte by one bit).

Hardware designers are typically clueless as to how their decisions
impact the software. And, as the company may have invested a \"couple
of kilobucks\" on a design and layout, Manglement\'s shortsightedness
fails to realize the tens of kilobucks that their penny-pinching
will cost!

[I once had a spectacular FAIL in a bit of hardware that I designed.
It was a custom CPU (\"chip\"). The guy writing the code (and the
tools to write it!) assumed addresses were byte-oriented. But,
the processor was truly a 16b machine and all of the addresses
were for 16b objects. So, all of the addresses generated by his tools
were exactly twice what they should have been (\"Didn\'t you notice
how the LSb was ALWAYS '0'?") Simple fix but embarrassing as we each
relied on assumptions that seemed natural to us where the wiser
approach would have made that statement explicit]

UK\'s NATS system is supposedly 6 sigma coding but its misbehaviour on Bank
Holiday Monday peak travel time was somewhat disastrous. It seems someone
managed to input the halt and catch fire instruction and the buffers ran out
before they were able to fix it. There will be a technical report out in due
course - my guess is that they have reduced overheads and no longer have some
of the key people who understand its internals. Malformed flight plan data
should not have been able to kill it stone dead - but apparently that is
exactly what happened!

Lunar landers, etc. Software is complex. Hardware is a walk in the
park. For anything but a trivial piece of code, you can\'t see all of the
interconnects/interdependencies.

https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)

If so Google \"UK air traffic control outage caused by unusual flight plan data\"

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

But using design and simulation *software* that you fail to acknowledge is
actually pretty good. If you had to do it with pencil and paper you would be
there forever.

When was the last time your calculator PROGRAM produced a verifiable error?
And, desktop software is considerably less complex than software used
in products where interactions arising from temporal differences can
prove unpredictable.

We bought a new stove/oven some time ago. Specify which oven, heat source,
setpoint temperature and time. START.

Ah, but if you want to change the time remaining (because you peeked
at the item and realize it could use another few minutes) AND the
timer expires WHILE YOU ARE TRYING TO CHANGE IT, the user interface
locks up (!). Your recourse is to shut off the oven (abort the
process) and then restart it using the settings you just CANCELED.

It\'s easy to see how this can evade testing -- if the test engineer
didn\'t have a good understanding of how the code worked so he
could challenge it with specially crafted test cases.

When drafting system specifications, I (try to) imagine every
situation that can come up and describe how each should be handled.
So, the test scaffolding and actual tests can be designed to verify
that behavior in the resulting product.

[How do you test for the case where the user tries to change the
remaining time AS the timer is expiring? How do you test for the
case where the process on the remote host crashes AFTER it has
received a request for service but before it has acknowledged
that? Or, BEFORE it receives it? Or, WHILE acknowledging it?]

Hardware is easy to test: set voltage/current/freq/etc. and
observe result.

[We purchased a glass titty many years ago. At one point, we turned
it on, then off, then on again -- in relatively short order. I
guess the guy who designed the power supply hadn\'t considered this
possibility as the magic smoke rushed out of it! How hard can it
be to design a power supply???]

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That\'s ironic, when you think about it. The hardest bits, the physical
electronics, has the least bugs.

No, the physical electronics are the EASIEST bits. If designing
hardware was so difficult, then the solution to the software
\"problem\" would be to just have all the hardware designers switch
over to designing software! Problem solved INSTANTLY!

The state of software development is a disgrace. We are plagued with
absurd user interfaces, hidden states, and massive numbers of bugs.

There is no science, math, or discipline to programming. What famous
person said that "anybody can learn to code"? One study found that
English majors, on average, were better programmers than CE or CS
majors.

In practice, the problem would be worsened by a few orders of
magnitude as they suddenly found themselves living in an opaque world.

So do physical mechanical interlocks. I don\'t trust software or even electronic
interlocks to protect me compared to a damn great beam stop and a padlock on it
with the key in my pocket.

Note the miswired motor example, above. If the limit switches had
been hardwired, the problem still would have been present as the
problem was in the hardware -- the wiring of the motor.

I wonder if the programmer had ever wired or worked with actual
motors. One of our neighbors is a highly-paid Apple software engineer
and might kill himself if you handed him a screwdriver. He is entirely
clueless about electricity.

We always consider user wiring error effects, as in a recent
remote-sense power supply. No connection can damage it or make the
voltage go more than 2 volts over or under the programmed value.
 
