Conical inductors--still $10!...

On 12/08/20 15:30, jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 11/08/2020 17:10, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

On Tuesday, 11 August 2020 at 16:50:28 UTC+2, jla...@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be
a correct-by-design CPU, but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

what's the definition of "bad code"?

Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.

It is very obvious that you have no understanding of the basics of
computing. The halting problem shows that what you want is impossible.
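
For anyone who hasn't seen it: the argument is Turing's diagonalization,
and a sketch of it fits in a few lines:

  Suppose a total decider $H(p, x)$ returns 1 iff program $p$ halts on input $x$.
  Construct $D(p)$: if $H(p, p) = 1$ then loop forever, else halt.
  Then $D(D)$ halts iff $H(D, D) = 0$ iff $D(D)$ does not halt; contradiction.

So no such $H$ exists, and "reject exactly the bad programs" is
undecidable in general; any real checker has to over- or under-approximate.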

I've written maybe a million lines of code, mostly realtime stuff, and
three RTOSs and two or three compilers, and actually designed one CPU
from MSI TTL chips, that went into production. I contributed code to
FOCAL (I'm named in the source) and met with some of the guys that
invented the PDP-11 architecture, before they did it. Got slightly
involved in the dreadful HP 2114 thing too.

Have you done anything like that?

I'm sure you have done that, but Martin is correct.



Bulletproof memory management is certainly not impossible. It\'s just
that not enough people care.

The core problem is fundamental: data is numbers and
programs are numbers. The only difference is in how
the numbers are interpreted by the hardware.

So, to ensure bulletproof memory management, you have
to ensure data cannot be executed. That rules out things
like JITters and general purpose compilers.

I've never used them but I /believe/ the
only computers that achieve that are the Unisys/Burroughs
machines, by ensuring only their compilers can generate
code that can be executed - and keeping the compilers
under lock and key.
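
The modern descendant of that idea is W^X: a page may be writable or
executable, never both at once. A minimal POSIX sketch of why strict W^X
rules out naive JITting (assumes Linux/x86-64; the opcode bytes and page
size are illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42; ret */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Stage the bytes in a page that is writable but NOT executable. */
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }
    memcpy(page, code, sizeof code);

    /* Flip the page to read+execute, dropping write permission. Under a
       strict W^X policy (PaX MPROTECT, SELinux execmem) this transition
       itself can be refused, which is exactly what kills a naive JIT. */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    /* Data has become code; calling it before the mprotect would fault. */
    int (*fn)(void) = (int (*)(void))page;
    printf("%d\n", fn());    /* prints 42 */
    return 0;
}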


\"Computer Science\" theory has almost nothing to do with computers.
I\'ve told that story before.

It does, however, put solid limits on what computers can
and cannot achieve.

One hardware analogy is Shannon's law, but there
are others :)

People that blunder into electronics and make statements
equivalent to breaking Shannon\'s law are correctly
regarded as ignorant cranks.
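
For the lurkers, the limit in question: the capacity of a bandlimited
channel with additive Gaussian noise is

  $C = B \log_2(1 + S/N)$

so a 3 kHz line at 30 dB SNR tops out near 30 kbit/s no matter how clever
the modem. Claiming to beat it is the communications equivalent of
perpetual motion.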



You cannot tell reliably what code will do until it gets executed.

You can stop it from ransoming all the data on all of your servers
because some nurse opened an email attachment.

That's what anti-virus packages *attempt* to do. And my,
don't they work well.


A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

Most decent software does what it is supposed to most of the time. Bugs
typically reside for a long time in seldom-trodden paths that should
never normally be exercised, like error recovery in weird situations.

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.

Yup.


C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don't allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn't hard to understand, or even
hard to implement.
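
Concretely, the pattern under discussion is the unbounded NUL-terminated
copy. A minimal C sketch (the 16-byte buffer is hypothetical):

#include <stdio.h>
#include <string.h>

void risky(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* copies until the NUL byte, however far away;
                             a longer input smashes this stack frame */
    printf("%s\n", buf);
}

void safer(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);  /* bounded copy */
    buf[sizeof buf - 1] = '\0';           /* strncpy may not terminate */
    printf("%s\n", buf);
}

int main(void)
{
    safer("this string is far too long for a 16-byte buffer");
    /* risky() with the same argument corrupts the stack. With code, data,
       and stack separated (non-executable stack), the overwrite can still
       crash the process but cannot run as injected code. */
    return 0;
}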

We probably need to go to pseudocode-only programs.

Then you need to protect the pseudocode interpreter, and you
are back where you began.


The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.

Good luck with that AI project.


Or push everything into the cloud and not actually run application
programs on a flakey box or phone.

\"Oooh goodie\" say the malefactors. A single attack surface :)


The ship-it-and-be-damned software development culture persists, and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

what are the rules?


Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.

That is motherhood and apple pie. It allows other programs and tasks to
keep running and was one of the strengths of IBM's OS/2 but apart from
in bank machines and air traffic control hardly anyone adopted it :(

My point. Why do you call me ignorant for wanting hardware-based
security?

Your desire is understandable.

Your proposed implementation cannot work as you wish.

Whether it would be better than current standards
is a different question.


IBM soured the pitch by delivering it late and not quite working, and by
conflating it with the horrible PS/2 hardware lock-in that forced their
competitors to collaborate and design the EISA bus; the rest is history.

Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.

You are going to waste a lot of time checking against all these
rules, which will themselves contain inconsistencies after a while.

Let's get rid of virtual memory too.

Why? Disk is so much cheaper than RAM and plentiful. SSDs are fast too.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

One that you can do either in hardware or software is to catch any
attempt to fetch an undefined value from memory. These days there are a
few sophisticated compilers that can do this at *compile* time.
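
For example, the latent-fault class in question looks like this in C;
gcc (-Wmaybe-uninitialized, or -fanalyzer in recent versions) and clang's
static analyzer flag it at compile time, though exact diagnostics vary:

int scale(int x)
{
    int factor;           /* never assigned on the x <= 0 path */
    if (x > 0)
        factor = 2;
    return x * factor;    /* fetch of a possibly undefined value */
}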

The problem circles back: the compilers are written, and run, the same
way as the application programs. The software bad guys will always be
more creative than the software defenders.

Yup.

Plus the malefactors are highly incentivised, whereas the
capitalist business imperative doesn't incentivise the good guys.

Good luck fixing that :(


One I know (Russian as it happens) by default compiles a hard runtime
trap at the location of the latent fault. I have mine set to warning.

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.

Oh rubbish. You should stop using simulators and see how far you get -
since all software is so buggy that you can't trust it, can you?

I've done nontrivial OTP (antifuse) CPLDs and FPGAs that worked first
pass, without simulation. First pass. You just need to use state
machines and think before you compile. People who build dams
understand the concept. Usually.
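
In software terms the same discipline looks something like an explicitly
enumerated state machine; a toy C sketch (the traffic-light states are
hypothetical):

#include <stdio.h>

enum state { RED, GREEN, YELLOW };     /* every state named, none implicit */

static enum state next(enum state s)
{
    switch (s) {                       /* total: each state has a successor */
    case RED:    return GREEN;
    case GREEN:  return YELLOW;
    case YELLOW: return RED;
    }
    return RED;                        /* unreachable if the enum is exhaustive */
}

int main(void)
{
    enum state s = RED;
    for (int i = 0; i < 6; i++) {
        printf("%d\n", s);
        s = next(s);
    }
    return 0;
}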

Martin is correct.

I've created a semi-custom IC design with a three-month
fabrication turnaround, which worked first time.


Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.

Yes, I have.
 
On a sunny day (Wed, 12 Aug 2020 07:30:18 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
<j8t7jf1qv47p6l2qi72egnaq0gjcgb8ijq@4ax.com>:

Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.

That is silly.
For me I normally write incremental code,
one part at a time; thousands and thousands of lines of code that simply work,
in C or asm.
Any compiler complaints are usually typing errors.

For the code I release as open source you can see it for yourself.

There is more in the world than state machines.
Maybe the problem is that many people cannot think logically;
those should not be programming, but are likely good at other things.

The other thing is that over the years you build up a collection
of routines you can then just cut and paste into new projects.

From the other side (just to stay with verifiable stuff so you can check for yourself):
to hack the Hubsan drone and build an autopilot took maybe 2 weeks at a couple of hours a day
in PIC asm and C for the PC part;
http://panteltje.com/panteltje/quadcopter/index.html
There was a short discussion with somebody from Germany in the drone group about the secret
format on the board test point I used to grab the data and hack it.
By the time he got back I had already cracked it.
And that includes the electronic design to actually fly the thing.
As the thing is still in one piece, I think bugs, if at all present, are not an issue.
I know the limitations, but that is another issue.
Not done much flying lately; the gov created a no-fly zone as I am close to the mil airport
and they know I was targeting the new F35.. ;-)
Those fly over once a day to see I still behave, I guess.
Pity you cannot hear anything when that happens, not even with Sennheiser HD201 headphones on.

0) know the hardware
1) know how to write code
2) know in depth about what you code for.

With any of these 3 missing the result will not be optimal,
probably bloat, power sucking, slow booting ... what have you.

It is not so difficult to write down some instructions to a machine:
do this, then do that..
But if YOU have no clue, then the machine following those instructions will not work right either.

It, programming, is like learning a language, any language.
But to be able to say Hello World ? Or write a novel? Or even an instruction book,
bomb defusing manual:
1) turn big bolt 90 degrees left
2) before you do that pull pin.

got it?
 
On Wednesday, 12 August 2020 at 17:18:15 UTC+2, Tom Gardner wrote:
<snip>
The core problem is fundamental: data is numbers and
programs are numbers. The only difference is in how
the numbers are interpreted by the hardware.

and it all comes from the same hard drive
 
On Wed, 12 Aug 2020 15:23:46 GMT, Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote:

On a sunny day (Wed, 12 Aug 2020 07:30:18 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
j8t7jf1qv47p6l2qi72egnaq0gjcgb8ijq@4ax.com>:

Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.

That is silly.
For me I normally write incremental code,
one part at a time; thousands and thousands of lines of code that simply work.

Do you compile, run, test, and debug at each iteration? If not, why
iterate? Just write the whole thing.
 
On a sunny day (Wed, 12 Aug 2020 11:02:17 -0700) it happened John Larkin
<jlarkin@highland_atwork_technology.com> wrote in
<p6b8jf1f3haok5g7qpc21i0gvmgk37bl4o@4ax.com>:

On Wed, 12 Aug 2020 15:23:46 GMT, Jan Panteltje
pNaOnStPeAlMtje@yahoo.com> wrote:

On a sunny day (Wed, 12 Aug 2020 07:30:18 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
j8t7jf1qv47p6l2qi72egnaq0gjcgb8ijq@4ax.com>:

Have you ever written any code past Hello, World! that compiled
error-free and ran correctly the very first time? That's unheard of.

That is silly.
For me I normally write incremental code,
one part at a time; thousands and thousands of lines of code that simply work.

Do you compile, run, test, and debug at each iteration? If not, why
iterate? Just write the whole thing.

I am not talking about iteration, something you and Elon seem to be into.
Iteration comes when you have no grasp of what you want to do :)
I mentioned *incremental*: one part at a time, test it, next part.
Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?
Write part of your idea in code, test it.
Then the next part.
I must say that over all the years I have hardly encountered any structural
(if you want to call it that) issues (idea wrong, never).
Once I start coding I know exactly what I want the thing to do.
It is silly to start coding (or building anything) if you have no
clue what you want, costly too.

I was wondering .. how long it will be .. before we see a computah language
with the
but first
statement

I simply do not see the problems you guys have;
try doing some embedded in asm.

Freaking ... I do not even use a debugger,
neither do I use the Microchip tools; man, I even wrote the PIC programmer.
And in C I use printf(), what has the world come to?
 
Martin Brown wrote:
On 24/07/2020 23:34, John Larkin wrote:
On Fri, 24 Jul 2020 23:15:27 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

As for \"better\" languages, they help by reducing the
opportunities for making boring old preventable mistakes.

It should be flat impossible for any application program to compromise
the OS, or any other unrelated application. Intel and Microsoft are
just criminally stupid. I don't understand why they are not liable for
damages.

There is nothing much wrong with the Intel hardware; it has been able to
support fully segmented protected address spaces for processes ever
since the 386. IBM OS/2 (and to a lesser extent NT) was quite capable of
terminating a rogue process with extreme prejudice and no side effects.

Other CPUs have more elegant instruction sets but that is not relevant.

The trouble is that Windows made some dangerous compromises to make
games go 5% faster or whatever the actual figure may be. There is far
too much privileged kernel code and not enough parameter checking.

First, the entire Win32 API was a petri dish for pathology. They
pointerized parameters that didn't even need to be pointerized.

Second, for a long time all that crudity extended the natural life of
installed systems. Linux still doesn't properly do "multimedia" without
considerable value-added work.

In addition too many Windows users sit with super user privileges all
the time and that leaves them a lot more open to malware.

Not so much any more. Not so much for quite some time now.

We are in the dark ages of computing. Like steam engines blowing up
and poaching everybody nearby.

More like medieval cathedral builders making very large buildings - if
it is still standing in five years' time then it was a good one.

Lousy metaphor, really. Lousy when ESR coined it, and even less apt now :)

Ely and Durham cathedrals came uncomfortably close to falling down due
to different design defects. Several big UK churches famously have
crooked spires to say nothing of the leaning tower of Pisa.

In software, architecture won't save you.

--
Les Cargill
 
On Thu, 13 Aug 2020 06:38:18 GMT, Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote:

<snip>

Do you compile, run, test, and debug at each iteration? If not, why
iterate? Just write the whole thing.

I am not talking about iteration, something you and Elon seem to be into.
Iteration comes when you have no grasp of what you want to do :)
I mentioned *incremental*: one part at a time, test it, next part.

OK, you code, let the compiler find the spelling and syntax and typing
errors, run it, test it, debug it, and then when it looks OK, add more
source code and repeat.

That's standard operating procedure in the world of software. If we
did that in hardware design, we'd never ship a product. We have to get
the entire thing right first time, before we apply power.

Some people, probably you, have native talent to get it right. But
such people are a minority, and the world needs a lot of code, and
most of it is bad and buggy. A hard embedded system is a nice tight
single-person programming challenge, too. One person owns every line
of code, and there are mere thousands of lines of code.


I'm competing with an organization that does design that way. I could
have shipped in 4 or 5 months from "go." Their policy is incremental
design. They scheduled two big teams for two years. The latest
schedule calls for a final design (ready to qualify!) in 3.5 years
from start, but I doubt they will make that. Their sunk cost is
gigantic now.

Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?

Certainly not. Snap into skis. Get on the chair lift. Ascend 1800
feet. At the top, unload and ski straight down. That\'s the test.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On a sunny day (Thu, 13 Aug 2020 07:46:05 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
<tgjajfh203tlb06evvaa2j8cq45qcbvkkh@4ax.com>:

<snip>

OK, you code, let the compiler find the spelling and syntax and typing
errors, run it, test it, debug it, and then when it looks OK, add more
source code and repeat.

That's standard operating procedure in the world of software. If we
did that in hardware design, we'd never ship a product. We have to get
the entire thing right first time, before we apply power.

I do not mind saying you are right; for $100 I will say it once and give you a discount:
2 x for $199.
But that does not change the fact that you are on some strange trip.
In HARDWARE we always work incrementally: you use tested parts, like chips and switches,
and add them together to do more, like libraries or pieces of code in software.


>Some people, probably you, have native talent to get it right.

I do not start coding unless I know how to tell the thing to solve the problem.
A big advantage of open source is that zillions of people have solved zillions of problems,
and you can learn from their code and use their libraries if you want.
Add your own solutions to the public space...


But
such people are a minority, and the world needs a lot of code, and
most of it is bad and buggy. A hard embedded system is a nice tight
single-person programming challenge, too. One person owns every line
of code, and there are mere thousands of lines of code.


I'm competing with an organization that does design that way. I could
have shipped in 4 or 5 months from "go." Their policy is incremental
design. They scheduled two big teams for two years. The latest
schedule calls for a final design (ready to qualify!) in 3.5 years
from start, but I doubt they will make that. Their sunk cost is
gigantic now.

Capitalism is out to extract the maximum money out of 'things'.
You see that with many projects, military ones included, and often the taxpayer pays.
Once you could go to the moon; all that happened after that was driving around the block
at a cost of trillions for the taxpayer, and in human life.
Job creation program.
There was the Obamacare project; in a weekend some students made the website..



Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?

Certainly not. Snap into skis. Get on the chair lift. Ascend 1800
feet. At the top, unload and ski straight down. That\'s the test.

Every year trains come back here from Switzerland with people in casts..
broken things...
You left that part out.
But in cars airbags are required by law, I think.
Even the Mars landers used airbags! Get smart :)
Anyway, pointless to argue; let's agree on, eh... the taste of chocolate being good?
It is so hot here my chocolates melted.
 
On 14/8/20 1:14 am, Jan Panteltje wrote:
On a sunny day (Thu, 13 Aug 2020 07:46:05 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?
Certainly not. Snap into skis. Get on the chair lift. Ascend 1800
feet. At the top, unload and ski straight down. That\'s the test.

Every year trains come back here from Switzerland with people in casts..
broken things...

You folk are good at horizontal things. Vertical, not so much :p

CH
 
On 2020-08-13 19:46, Clifford Heath wrote:
On 14/8/20 1:14 am, Jan Panteltje wrote:
On a sunny day (Thu, 13 Aug 2020 07:46:05 -0700) it happened
jlarkin@highlandsniptechnology.com wrote in
Like put on one ski on one foot, make sure it is OK,
then the other on the other foot?
Certainly not. Snap into skis. Get on the chair lift. Ascend 1800
feet. At the top, unload and ski straight down. That\'s the test.

Every year trains come back here from Switzerland with people in casts..
broken things...

You folk are good at horizontal things. Vertical, not so much :p

CH

They do mud really well though. You have to give them that.

Cheers

Phil Hobbs

(Who used to spend time in New Mexico, where they specialize in dirt and
rocks.)

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Thu, 13 Aug 2020 20:45:07 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

<snip>

They do mud really well though. You have to give them that.

Cheers

Phil Hobbs

(Who used to spend time in New Mexico, where they specialize in dirt and
rocks.)

When I was a kid, we knew that rocks came in barges from rock
factories. We had geology classes and didn't believe any of it.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
Tom Gardner wrote:
On 11/08/20 10:02, Martin Brown wrote:
<snip>

Humans make mistakes and the least bad solution is to design tools
that can find the most commonly made mistakes as rapidly as possible.
Various dataflow methods can catch a whole host of classic bugs before
the code is even run but industry seems reluctant to invest so we have
the status quo. C isn't a great language for proof of correctness

It is worse than that. According to some that have sat
on the standards committees (e.g. WG14), and been at the
sharp end of user-compiler-machine "debates", even the
language designers and implementers have disagreements
and misunderstandings about what standards and
implementations do and say.

If they have those problems, automated systems and mere
users stand zero chance.


but the languages that tried to force good programmer behaviour have
never made any serious penetration into the commercial market. I know
this to my cost as I have in the past been involved with compilers.

Well, Java has been successful at preventing many
stupid mistakes seen with C/C++. That allows idiot
programmers to make new and more subtle mistakes :(

Rust and Go are showing significant promise in the
marketplace,

Mozilla seems to have dumped at least some of the Rust team:

https://www.reddit.com/r/rust/comments/i7stjy/how_do_mozilla_layoffs_affect_rust/


and will remove many traditional "infelicities"
in multicore and distributed applications.

Ada/SPARK is the best commercial example of a language
for high reliability applications. That it is a niche
market is an illustration of how difficult the problems
are.

It perhaps should not be niche, but it is.

The ship-it-and-be-damned software development culture persists, and it
existed long before there were online updates over the internet.

Yup. But now there are more under-educated programmers
and PHBs around :(

--
Les Cargill
 
jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

<snip>


No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

There are high-reliability methodologies where you exhaustively account
for all the invariants (things that must be true for correct
operation).

It's not a well-known methodology. During the CASE tool era, there were
several; just do that without the CASE tool. You can do it in Haskell
with the Actor pattern.

It's not that far from thinking like an FPGA designer.
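
A minimal C sketch of that invariant-driven style (assert-based,
hypothetical ring buffer; the real methodologies are more formal):

#include <assert.h>

#define CAP 8   /* must divide 2^32 for the free-running indices to work */

struct ring { int buf[CAP]; unsigned head, tail; };

/* Invariant: occupancy (head - tail) never exceeds CAP. */
static void check(const struct ring *r)
{
    assert(r->head - r->tail <= CAP);  /* safe under unsigned wraparound */
}

static void push(struct ring *r, int v)
{
    check(r);
    assert(r->head - r->tail < CAP);   /* precondition: not full */
    r->buf[r->head++ % CAP] = v;
    check(r);                          /* invariant re-established */
}

static int pop(struct ring *r)
{
    check(r);
    assert(r->head != r->tail);        /* precondition: not empty */
    int v = r->buf[r->tail++ % CAP];
    check(r);
    return v;
}

int main(void)
{
    struct ring r = {0};
    push(&r, 1);
    push(&r, 2);
    return pop(&r) + pop(&r) - 3;      /* exits 0 when behavior is correct */
}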

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

Green Hills has really tight memory protection built into its products.
It's ... bad, but not that bad. Clunky.

The ship-it-and-be-damned software development culture persists, and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

Online code updates should of course be disallowed by default. It's an
invitation to ship crap code now and assume it will be fixed some day.
And that the users will find the bugs and the black-hats will find the
vulnerabilities.

Why is there no legal liability for bad code?

Because the licensing prevents it. The political economy of software is
very, very bent. Correctness is simply not a consideration unless you
can make it a billable thing, like the testing done in avionics.


--
Les Cargill
 
Martin Brown wrote:
On 11/08/2020 17:10, jlarkin@highlandsniptechnology.com wrote:
<snip>

It is very obvious that you have no understanding of the basics of
computing. The halting problem shows that what you want is impossible.

You cannot tell reliably what code will do until it gets executed.

You can translate known models ( frequently state machine like ones )
directly into code. There will be minimal knuckle-busting and the
remaining defects are then all hard cases ( timing, race conditions,
the like ). After a span of time, you stop making hard cases too.
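
One direct translation is the table-driven form, where the model *is* the
transition table and the code merely walks it; a toy C sketch (states and
events hypothetical):

#include <stdio.h>

enum state { IDLE, BUSY, NSTATES };
enum event { START, DONE, NEVENTS };

static const enum state transition[NSTATES][NEVENTS] = {
    [IDLE] = { [START] = BUSY, [DONE] = IDLE },
    [BUSY] = { [START] = BUSY, [DONE] = IDLE },
};

int main(void)
{
    enum state s = IDLE;
    const enum event script[] = { START, DONE, START };
    for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
        s = transition[s][script[i]];   /* no hidden control flow to get wrong */
        printf("state %d\n", s);
    }
    return 0;
}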

A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.

Most decent software does what it is supposed to most of the time. Bugs
typically reside for a long time in seldom-trodden paths that should
never normally be exercised, like error recovery in weird situations.

C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Then that null better be there.

The ship-it-and-be-damned software development culture persists, and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

what are the rules?


Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.

That is motherhood and apple pie. It allows other programs and tasks to
keep running and was one of the strengths of IBM's OS/2 but apart from
in bank machines and air traffic control hardly anyone adopted it :(

There was something akin to a gold rush in PC software and IBM never
had their head in the game. Remember that this was when EDS was
one of IBM's biggest VARs, and IBM themselves were trying to embrace a
service model.

IBM soured the pitch by delivering it late and not quite working, and by
conflating it with the horrible PS/2 hardware lock-in that forced their
competitors to collaborate and design the EISA bus; the rest is history.

I personally saw lots of PS/2 machines installed.

Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.

You are going to waste a lot of time checking against all these
rules, which will themselves contain inconsistencies after a while.

Eh. It's all a game of constraints. Kids just aren't taught; the
emphasis is on getting butts in seats as fast as possible. The kids
are struggling, too. Lots of "impostor syndrome" and the like.

Let's get rid of virtual memory too.

Why? Disk is so much cheaper than RAM and plentiful. SSDs are fast too.

And getting faster.

Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.

One that you can do either in hardware or software is to catch any
attempt to fetch an undefined value from memory. These days there are a
few sophisticated compilers that can do this at *compile* time.

More than a few; it's moving mainstream. Visual Studio now has LLVM as
an option.

One I know (Russian as it happens) by default compiles a hard runtime
trap at the location of the latent fault. I have mine set to warning.

Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.

Oh rubbish. You should stop using simulators and see how far you get -
since all software is so buggy that you can't trust it, can you?

:) I've fixed sooooo many hardware problems in software...

Most of the protections we need here were common in 1975. Microsoft
and Intel weren't paying attention, and a culture of sloppiness and
tolerance of hazard resulted.

Intel hardware has the capability to do full segmented protected modes
where you only get allocated the memory you ask for and get zapped by
the OS if you try anything funny. But the world went with Windows :(

I blame IBM for their shambolic marketing of OS/2.

IBM wasn't actually interested in it. It's something that happened *to*
them.

--
Les Cargill
 
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

snip

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.

It's not even accounted for, nor is it an actual cost in the usual sense
of the word - nobody's trying to make this actually less expensive.

C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don't allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn't hard to understand, or even
hard to implement.

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.

That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.
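
The kernel of that idea fits in a page: a toy C sketch of a stack-based
bytecode interpreter (nothing to do with the JVM or CLR specifically):

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64], sp = 0;
    for (int pc = 0; ; pc++) {
        switch (code[pc]) {
        case OP_PUSH:  stack[sp++] = code[++pc]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]); break;
        case OP_HALT:  return;
        /* The interpreter, not the hardware, decides what each number
           means, so it can also refuse anything outside the rules. */
        default: fprintf(stderr, "bad opcode\n"); return;
        }
    }
}

int main(void)
{
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);   /* prints 5 */
    return 0;
}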

Or push everything into the cloud and not actually run application
programs on a flakey box or phone.

And there you have it. That's next.

<snip>

--
Les Cargill
 
On 2020-08-13 23:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

<snip>


That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.

Sort of like UCSD Pascal, circa 1975. ;)

Or push everything into the cloud and not actually run application
programs on a flakey box or phone.


And there you have it. That's next.

snip

Not in our shop. We just had an 8-day phone/net/cell outage on account
of a dippy little 50kt storm that blew through in three hours (Isaias).
I was having to wardrive to find a place with enough cell bars that I
could use my phone's hotspot, so eventually I just started working from
home again, where there was still cell service.

And then there's the data security problem.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 14/08/20 04:13, Les Cargill wrote:
Tom Gardner wrote:
Rust and Go are showing significant promise in the
marketplace,

Mozilla seems to have dumped at least some of the Rust team:

https://www.reddit.com/r/rust/comments/i7stjy/how_do_mozilla_layoffs_affect_rust/

I doubt they will remain unemployed. Rust is gaining traction
in wider settings.

Linus Torvalds is vociferously and famously opposed to having
C++ anywhere near the Linux kernel (good taste IMNSHO). He
has given a big hint he wouldn't oppose Rust, by stating that
if it is there it should be enabled by default.

https://www.phoronix.com/scan.php?page=news_item&px=Torvalds-Rust-Kernel-K-Build
 
On 14/08/20 04:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

<snip>


That's what "managed languages" like Java or C# do. It's all bytecode
in a VM.

It also goes lower than that. The processor internally decomposes
x86 ISA instructions into sequences of simpler micro operations that
are invisible externally. Yup, microcode :)


Or push everything into the cloud and not actually run application
programs on a flakey box or phone.


And there you have it. That's next.

... the triumphant reinvention of bureaux timesharing systems.

Greybeards remember the collective sighs of relief when users
realised PCs enabled them to finally get hold of and own their
own data.
 
On 11/08/20 17:10, jlarkin@highlandsniptechnology.com wrote:
Hardware designers usually get things right, which is why FPGAs seldom
have bugs but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.

Have you ever looked at the <cough>errata</cough> lists
associated with modern processors?
 
On 14/08/20 04:21, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:

There are high-reliability methodologies where you exhaustively account
for all the invariants (things that must be true for correct
operation).

SPARK/Ada is well known and well developed and used. It was
originated by one of my university lecturers, Bernard Carre.


It's not a well-known methodology. During the CASE tool era, there were several;
just do that without the CASE tool. You can do it in Haskell
with the Actor pattern.

There are many useful design methods. They all have two
fundamental problems:
- stuff that is outside the methodology, e.g. runtimes
and libraries
- incorrect/unproven specifications



Why is there no legal liability for bad code?


Because the licensing prevents it.

I doubt that, since if that was the case then other
engineering disciplines would use the same technique!

I\'m waiting to see how the legal system deals with
machine learning systems in terms of
- baked in illegal biases, where nobody knows why
\"the computer\" made a decision
- accident liability, e.g. with cars


The political economy of software is
very, very bent. Correctness is simply not a consideration unless you
can make it a billable thing, like the testing done in avionics.

Consequential damages would be a starting point.
 
