Conical inductors--still $10!...

On Thursday, August 13, 2020 at 7:30:25 PM UTC-7, jla...@highlandsniptechnology.com wrote:

When I was a kid, we knew that rocks came in barges from rock
factories. We had geology classes and didn't believe any of it.

In high school, we got a gravel track, with imported crushed rock.
From which I harvested a bunch of semiconductor crystals (iron pyrite).
But, aside from riverbeds, the soils were all sand and clay.

It is said that in parts of Kansas, every rock you find is...a meteorite.
Ditto some ice fields in Antarctica.
 
On a sunny day (Fri, 14 Aug 2020 09:46:09 +1000) it happened Clifford Heath
<no.spam@please.net> wrote in <6hkZG.279525$eN2.273478@fx47.iad>:

On 14/8/20 1:14 am, Jan Panteltje wrote:
On a sunny day (Thu, 13 Aug 2020 07:46:05 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?
Certainly not. Snap into skis. Get on the chair lift. Ascend 1800
feet. At the top, unload and ski straight down. That's the test.

Every year trains come back here from Switzerland with people in casts..
broken things...

You folk are good at horizontal things. Vertical, not so much :p

CH

Yes, we only got one small hill in the south,
the rest is flat, much even below sea level, protected by dikes and pumps.
Amsterdam is about 2 meters or so below sea level.
We can swim!
;-)
 
On 2020-08-14 02:15, Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

snip

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.


It's not even accounted for, nor is it an actual cost in the usual sense
of the word - nobody's trying to make this actually less expensive.


C invites certain dangerous practices that attackers ruthlessly exploit,
like loops copying until they hit a null byte.
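
For illustration, a minimal C sketch of both habits (the function names are
made up): the copy-until-NUL loop has no idea how big the destination is,
while a bounded version cannot overrun it.

    #include <stddef.h>

    /* The classic trap: copy until a '\0' turns up, with no idea how
       big dst really is.  An overlong src walks right past the end. */
    void copy_unbounded(char *dst, const char *src)
    {
        while ((*dst++ = *src++) != '\0')
            ;
    }

    /* A bounded copy: never writes more than dstsize bytes and always
       leaves dst NUL-terminated (when dstsize > 0). */
    void copy_bounded(char *dst, size_t dstsize, const char *src)
    {
        size_t i;

        if (dstsize == 0)
            return;
        for (i = 0; i + 1 < dstsize && src[i] != '\0'; i++)
            dst[i] = src[i];
        dst[i] = '\0';
    }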

Let bad programs malfunction or crash. But don't allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn't hard to understand, or even
hard to implement.
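
The hardware and OS hooks for that separation have been around for a long
time. A minimal sketch, assuming Linux/POSIX, of a data buffer mapped
read/write but not executable, so bytes smuggled into it cannot simply be
jumped to:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;   /* one page */

        /* Ask for a page that is readable and writable but NOT executable. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* ... use buf as ordinary data; any attempt to execute from it
           takes a fault instead of running attacker-supplied bytes ... */

        munmap(buf, len);
        return 0;
    }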

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.


That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.

It also goes lower than that. The processor internally decomposes
x86 ISA instructions into sequences of simpler micro operations that
are invisible externally. Yup, microcode :)


Or push everything into the cloud and not actually run application
programs on a flakey box or phone.


And there you have it. That's next.

.... the triumphant reinvention of bureaux timesharing systems.

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data

Until they realized how unreliable floppy discs were. ;)

Cheers

Phil Hobbs

(Who wouldn't go back to the '80s on a bet)

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 14/08/20 12:40, Phil Hobbs wrote:
On 2020-08-14 02:15, Tom Gardner wrote:
[snip]

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data

Until they realized how unreliable floppy discs were. ;)

s/floppy/internet access/

At least you could duplicate the floppy yourself.

Didn't early IBM PCs have hard disk options?
 
On 2020-08-14 08:09, Tom Gardner wrote:
[snip]

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data

Until they realized how unreliable floppy discs were. ;)

s/floppy/internet access/

At least you could duplicate the floppy yourself.

Didn't early IBM PCs have hard disk options?

The first PC I used belonged to my father, who bought it in 1981 when it
first came out. It had two *160 kB* single-sided floppy drives and a
modified Selectric as a printer. Initially you had to decide whether to
put your files on the OS disc or the WordStar disc. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 14/08/20 13:20, Phil Hobbs wrote:
[snip]

Didn't early IBM PCs have hard disk options?

The first PC I used belonged to my father, who bought it in 1981 when it first
came out.  It had two *160 kB* single-sided floppy drives and a modified
Selectric as a printer.  Initially you had to decide whether to put your files
on the OS disc or the WordStar disc. ;)

Oh, I realise not all had hard disks, but I thought there were
options.

I don't remember too well, since around that time I was on
PDP11s running Microsoft's operating system, and CP/M machines.

OTOH I have a 1986 Fat Mac on which I learned the power of
OOP via Apple's Smalltalk. The performance can charitably be
described as glacial.

The Smalltalk runtime was on one floppy, the image and changes.st
on another.
 
On 2020-08-14 08:52, Tom Gardner wrote:
[snip]

Oh, I realise not all had hard disks, but I thought there were
options.

The PC/XT with its capacious 10 MB hard drive didn't come out till '83.

I don't remember too well, since around that time I was on
PDP11s running Microsoft's operating system, and CP/M machines.

Between '78 and '81 I was mostly using UBC's Amdahl 470 V6 and then V8
machines, running the Michigan Terminal System. I really liked MTS, and
the computing centre had stacks of manuals that you could take away
free. I chucked them all when I went to grad school in California in
1983, but I recall them as being comprehensive and very well written.

OTOH I have a 1986 Fat Mac on which I learned the power of
OOP via Apple's Smalltalk. The performance can charitably be
described as glacial.

I bought a 128K Mac in 1985 to write my thesis on. (There was a student
discount.) I knew it didn't have enough memory, but having seen that
Apple treated their Lisa owners very well on hardware upgrades, I took a
flier despite having very little money and a kid on the way.

Then Apple decided to charge $1000 for sixteen 41256 DRAM chips and backed it
up with strong-arm tactics against anybody with the temerity to change
the chips themselves.

That was as much as I paid for the computer. I've never bought myself
another Apple product.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Fri, 14 Aug 2020 07:09:30 GMT, Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote:

[snip]

Yes, we only got one small hill in the south,
the rest is flat, much even below sea level, protected by dikes and pumps.
Amsterdam is about 2 meters or so below sea level.
We can swim!
;-)

I used to look UP from my house at ships on the Mississippi river. A
lot of New Orleans is below sea level too. NOLA has levees and pumps
too, which is why it's slowly sinking.

Our big adventure as kids was riding bicycles down from the top of
Monkey Hill, 12 feet of sheer terror.

When I reached the Age of Reason, 32 in my case, I moved to
California.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Fri, 14 Aug 2020 07:40:48 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

[snip]

Until they realized how unreliable floppy discs were. ;)

Cheers

Phil Hobbs

(Who wouldn't go back to the '80s on a bet)

It's stunning how reliable a $60 2-terabyte USB drive is now.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Wed, 12 Aug 2020 07:30:18 -0700, jlarkin@highlandsniptechnology.com
wrote:

[snip]
Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.

For the record, I have not - it's too slow to do it that way. But I
do have a war story from the early 1970s:

My first job out of school was as an engineer at the Federal
Communications Commission in Washington, DC. We had a Univac 1106
computer. That was in the days before Burroughs, as I recall.

One fine day, I was asked to help a much older engineer whose Fortran
program was tying the 1106 up for hours. This engineer was a very
methodical man. He would extensively desk-check his code, and was
proud that his code always ran the first time, without error.

The program estimated signal strengths of all other radio stations as
measured at the transmitting antenna of each radio system. The
propagation code required the distance from the station under
consideration to all other stations, which he calculated anew for each
and every station, so the scaling law was n^3, for a few thousand
stations. No wonder it ground away all day.

It turns out that he was unaware that disk storage was easy in Univac
Fortran, or that he could compute the distance matrix once, and reload
it whenever needed. Now, his program took five or ten minutes, not 8
to 10 hours.
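
Illustrative only, not his Fortran (the station record and function name are
invented): the fix amounts to computing the pairwise distances once, roughly
O(n^2) work, and then reusing or reloading the matrix instead of recomputing
it on every pass.

    #include <math.h>
    #include <stdlib.h>

    /* Hypothetical station record; x and y could be map coordinates in km. */
    struct station { double x, y; };

    /* Build the full distance matrix once.  Later passes just read
       d[i*n + j] instead of recomputing; the caller can also write the
       matrix to disk and reload it, as in the story above. */
    double *build_distance_matrix(const struct station *s, size_t n)
    {
        double *d = malloc(n * n * sizeof *d);

        if (d == NULL)
            return NULL;
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                d[i * n + j] = hypot(s[i].x - s[j].x, s[i].y - s[j].y);
        return d;
    }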

Joe Gwinn
 
On a sunny day (Fri, 14 Aug 2020 07:03:14 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
<pm5djf5hvdptk8vetugu9q6l19ll2t88sj@4ax.com>:

On Fri, 14 Aug 2020 07:09:30 GMT, Jan Panteltje
pNaOnStPeAlMtje@yahoo.com> wrote:

[snip]


I used to look UP from my house at ships on the Mississippi river. A
lot of New Orleans is below sea level too. NOLA has levees and pumps
too, which is why it's slowly sinking.

Our big adventure as kids was riding bicycles down from the top of
Monkey Hill, 12 feet of sheer terror.

When I reached the Age of Reason, 32 in my case, I moved to
California.

I grew up on the west edge of Amsterdam, canals everywhere; the street ended in the north at a canal.
That canal went into a ship-elevator, where the little boats with vegetables from the farmers
in even lower-level canals were lifted to Amsterdam city level so they could bring their stuff to market.
Water control is a big thing in the Netherlands, sensors everywhere, central computer.
It has to be maintained to about 10 cm accuracy: if too high the crops' roots will start to rot,
if too low the crops will die of thirst.
When I was about 10 we moved further south, and I have lived just about everywhere since then,
except further south, where that only hill is :)

Now for that ticket to Mars ...
 
On Fri, 14 Aug 2020 10:32:56 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

[snip]

While staying with a friend in Juneau, I wrote an RTOS on paper, with
a pencil, and mailed sheets back to the factory for them to enter and
assemble. Someone claimed that it had one bug.

I just checked my work, like good engineers do.
 
On Fri, 14 Aug 2020 10:30:12 -0700, John Larkin
<jlarkin@highland_atwork_technology.com> wrote:

On Fri, 14 Aug 2020 10:32:56 -0400, Joe Gwinn <joegwinn@comcast.net
wrote:

[snip]

While staying with a friend in Juneau, I wrote an RTOS on paper, with
a pencil, and mailed sheets back to the factory for them to enter and
assemble. Someone claimed that it had one bug.

For what machine, in what language, and how many lines of code?

I've done much the same, but the RTOS was usually purchased and then
modified. These RTOSs were all in assembly code. The largest was
35,000 lines of SEL 32/55 (an IBM 360 clone) assembly, including the
file system, if memory serves.

There was much integration and OS debugging involved. Usually the
applications programmers would find the problem, and be stuck.
Eventually, they would call me, and I'd go to the lab. In many cases,
I could tell them what they had done wrong. In a few cases, I'd say
that this smelled like an OS problem, and take over with kernel-level
tools, usually with immediate success because they had handed it over
to me right in the middle of the problem.


>I just checked my work, like good engineers do.

And so do we all, with varying degrees of success. I know from
experience when further checking is not worthwhile, and it's time for
the lab.

My favorite non-OS bug was actually in a program written in SEL 32/55
Fortran. When a certain subroutine was called, the sky fell. I
debugged at the Fortran level, which did isolate the problem to the
call of this specific subroutine, and then hit a wall. Dropped into
the assembly-level debugger, single-stepping the assembly code
generated by the Fortran compiler. Still no joy. Went back and
forth, multiple times. Stuck. Then, a crazy thought... Dropped a
level deeper, into the CPU microcode debugger, on the machine console,
and micro-stepped through the indexed load machine instruction where
the problem occurred. Bingo!

In the IBM 360 instruction set, there is no "load word" or "load
double word" instruction per se. There is a general "load"
instruction, the load width being determined by a field just above the
operand address field in the instruction word. What was happening was
that the indexed operations added the entire operand field to the
index register contents, and an overflow overlaid the load-width
field, changing the load width to a double, overlaying both the
intended register and an adjacent register. Oops. I forget which
value was incorrect, the operand or the register, but one or the other
had been stomped on; it didn't take long to trace it back to the
original cause.

The compiler had no idea that an uninvolved register had been stomped,
and no amount of staring at the Fortran code was going to help.
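
Not the actual SEL/360 encoding, but a small C illustration of that failure
mode: apply the index to the whole packed word instead of just the address
field, and the carry spills into the adjacent width bits.

    #include <stdint.h>
    #include <stdio.h>

    /* Purely illustrative field layout, not the real instruction format:
       bits 0-15 hold the operand address, bits 16-17 a load-width code. */
    #define ADDR_MASK   0x0000FFFFu
    #define WIDTH_MASK  0x00030000u
    #define WIDTH_SHIFT 16

    int main(void)
    {
        uint32_t insn  = (1u << WIDTH_SHIFT) | 0xFFF0u; /* width=1, addr=0xFFF0 */
        uint32_t index = 0x20;                          /* index register value */

        /* Wrong: indexing applied to the whole word, so the carry out of
           the address field lands in the width field. */
        uint32_t bad = insn + index;

        printf("addr  0x%04x -> 0x%04x\n",
               (unsigned)(insn & ADDR_MASK), (unsigned)(bad & ADDR_MASK));
        printf("width %u -> %u\n",
               (unsigned)((insn & WIDTH_MASK) >> WIDTH_SHIFT),
               (unsigned)((bad  & WIDTH_MASK) >> WIDTH_SHIFT));
        return 0;
    }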

In my whole career, I've had to resort to microcode-level debugging
only this one time.

Joe Gwinn
 
On Fri, 14 Aug 2020 15:39:26 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

On Fri, 14 Aug 2020 10:30:12 -0700, John Larkin
jlarkin@highland_atwork_technology.com> wrote:

[snip]

For what machine, in what language, and how many lines of code?

MC6800, assembly, can't remember how many lines. It included keyboard
and serial and video display drivers. I designed the hardware too.

MC6800 was a pain. You couldn't directly push the index register onto
the stack.

My point was that engineers usually check their work before applying
power, and programmers usually don't.



I've done much the same, but the RTOS was usually purchased and then
modified. These RTOSs were all in assembly code. The largest was
35,000 lines of SEL 32/55 (an IBM 360 clone) assembly, including the
file system, if memory serves.

Mine was a lot smaller, a couple thousand lines maybe?


[snip]


I just checked my work, like good engineers do.

And so do we all, with varying degrees of success. I know from
experience when further checking is not worthwhile, and it's time for
the lab.

The amount of checking pretty much tracks how easy it is to change it.
Code might be tested and fixed in minutes, so people just fire it up
and see what happens. It might take a month or two to spin a PCB
layout. An FPGA is intermediate. How about the Three Gorges Dam?



[snip]

BAH Branch and Hang

SCF Stop and Catch Fire

JRA Jump to Random Address




The compiler had no idea that an uninvolved register had been stomped,
and no amount of staring at the Fortran code was going to help.

In my whole career, I've had to resort to microcode-level debugging
only this one time.

Joe Gwinn
 
Tom Gardner wrote:
On 14/08/20 04:13, Les Cargill wrote:
Tom Gardner wrote:
Rust and Go are showing significant promise in the
marketplace,

Mozilla seems to have dumped at least some of the Rust team:

https://www.reddit.com/r/rust/comments/i7stjy/how_do_mozilla_layoffs_affect_rust/


I doubt they will remain unemployed. Rust is gaining traction
in wider settings.

I dunno - I can't separate the messaging from the offering. I'm
fine with a C/C++ compiler so I have less than no incentive to
even become remotely literate about Rust.

The Rustaceans seem obsessed with stuff my cohort (read: old people)
learned six months into their first C project. But there may
well be benefits I don't know about.

It is not-not a thing; the CVE list shows that. I am just appalled
that these defects are released.

Linus Torvalds is vociferously and famously opposed to having
C++ anywhere near the Linux kernel (good taste IMNSHO).

Don't take any cues from Linus Torvalds. He's why my deliverables
at one gig were patch files. I've no objection to that but geez...

And C++ is Just Fine. Now. It took what, 20 years?

The reasons for \"no C++ in the kernel\" are quite serious, valid and
worthy of our approval.

He
has given a big hint he wouldn't oppose Rust, by stating that
if it is there it should be enabled by default.

https://www.phoronix.com/scan.php?page=news_item&px=Torvalds-Rust-Kernel-K-Build

I've seen this movie before. It's yet another This Time It's Different
approach.

--
Les Cargill
 
Phil Hobbs wrote:
On 2020-08-13 23:35, Les Cargill wrote:
[snip]

That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.

Sort of like UCSD Pascal, circa 1975. ;)

Precisely.

All that's old is new again :)

Or push everything into the cloud and not actually run application
programs on a flakey box or phone.


And there you have it. That's next.

snip

Not in our shop.  We just had an 8-day phone/net/cell outage on account
of a dippy little 50kt storm that blew through in three hours (Isaias).
I was having to wardrive to find a place with enough cell bars that I
could use my phone's hotspot, so eventually I just started working from
home again, where there was still cell service.

I will not disagree, but that's where a lot of stuff is headed. I
suppose you would have been forgiven for going to the beach, or sitting
in a glade in the woods listening.

There were hurricane parties in Florida in '06. Alcohol figured
prominently.


And then there's the data security problem.

Actually mightily tractable, if a bit slow these days. It depends on
what level of resources your adversary is prepared to deploy against you.

I had to get a security cert last year; of course I whinged and moaned
but it's been one of the better uses of my time in the last 20 years.


Cheers

Phil Hobbs

--
Les Cargill
 
Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
[snip]


That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.

It also goes lower than that. The processor internally decomposes
x86 ISA instructions into sequences of simpler micro operations that
are invisible externally. Yup, microcode :)

I'm not 100% sure what you mean by that but it's way less than
important. We all know that x86 is microcoded. So was the HP3000.

Or push everything into the cloud and not actually run application
programs on a flakey box or phone.


And there you have it. That's next.

... the triumphant reinvention of bureaux timesharing systems.

One of the greatest reads in tech is McDysan and Spohn, "ATM Theory and
Applications". They go all Michener and start with the volcanoes that
created the island.

In this text, they prove the theory of "eternal return" w.r.t.
communications technology. The wheel in the sky, all
that.

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data

That was in the before times. But you better believe I have my own data
available 100%. I even test restores.

--
Les Cargill
 
Phil Hobbs wrote:
On 2020-08-14 02:15, Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
[snip]

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data

Until they realized how unreliable floppy discs were. ;)

Real men didn't have flo... oh bugger it :)

Cheers

Phil Hobbs

(Who wouldn't go back to the '80s on a bet)

These are the good old days.

--
Les Cargill
 
jlarkin@highlandsniptechnology.com wrote:
On Fri, 14 Aug 2020 07:40:48 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

[snip]

It's stunning how reliable a $60 2-terabyte USB drive is now.

Never mind the bandwidth of an NVMe M.2 SSD. It's several orders
of magnitude faster than the interface to a nice graphics card.

Higher-order USB standards are in a horse race with that. It's
an embarrassment of riches.

--
Les Cargill
 
Tom Gardner wrote:
On 14/08/20 04:21, Les Cargill wrote:
snip

The political economy of software is
very, very bent. Correctness is simply not a consideration unless you
can make it a billable thing, like the testing done in avionics.

Consequential damages would be a starting point.

There either aren't any or we whistle past the graveyard of them.

The great calculating machine of the economy is beating us at software.

To our benefit.

--
Les Cargill
 
