Conical inductors--still $10!...

I've replaced thousands of failed TTL ICs over the decades.
 
On 2020-07-25 18:37, Tom Gardner wrote:
On 25/07/20 19:51, Phil Hobbs wrote:
Check out Qubes OS, which is what I run daily.  It addresses most of
the problems you note by encouraging you to run browsers in disposable
VMs and otherwise containing the pwnage.

I did.

It doesn't like Nvidia graphics cards, and that's all my
new machine has :(

I mostly run it on $150 eBay Thinkpad T430s (and up) and Supermicro AMD
tower boxes. I wouldn't be without it at this point.

Cheers

Phil Hobbs
(posting from a $150 eBay T430s)

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Mon, 27 Jul 2020 19:58:27 -0700 (PDT), Flyguy
<soar2morrow@yahoo.com> wrote:

On Thursday, July 16, 2020 at 11:07:33 PM UTC-7, Ricketty C wrote:
On Thursday, July 16, 2020 at 1:23:33 PM UTC-4, Gerhard Hoffmann wrote:
Am 16.07.20 um 15:44 schrieb jlarkin@highlandsniptechnology.com:
On Thu, 16 Jul 2020 09:55:32 +0200, Gerhard Hoffmann <dk4xp@arcor.de
wrote:

Am 16.07.20 um 09:20 schrieb Bill Sloman:

James Arthur hasn't noticed that when people start dying of Covid-19, other people start practicing social distancing of their own accord. Note the Swedish example.

It still kills a lot of people, and even the Swedes aren't anywhere near herd immunity yet.

In fact, Sweden was just yesterday removed from the list of dangerous
countries by our (German) government. That has consequences for
insurance etc. if you insist on going there.

And herd immunity is a silly idea. It does not even work in Bad Ischgl,
the Austrian skiing resort where they sport 42% seropositives, which
is probably the world record, much higher than Sweden.

And herd immunity does not mean that you won't get it if you are
in the herd.

It means that if you get it and survive, you are out of the herd.

No, it means that you are now a badly needed part of the herd by
diluting the danger for the rest.

Some people seem to have serious problems helping others.

I was hoping Larkin would take his own advice and work on his personal herd immunity. Unfortunately the evidence is mounting that there will be no lasting immunity and so no herd immunity ever.

As with many situations this is a Darwinian event. Part of the trouble is that those who choose to ignore the danger put the rest of us at risk by continuing the propagation of the disease.

I know people in this group saw the video about the three general approaches to dealing with the disease. Ignore it and lots of people die. There is not so much impact on the economy and the disease reduces at some point.

Fight the disease with isolation, etc. to the detriment of the economy and save lives. Again, the disease does not last forever and at some point everything can reopen.

But the middle-of-the-road approach, where we try to "balance" fighting the disease with keeping the economy open, is insane because it continues the disease indefinitely, resulting in the most morbidity and mortality as well as the worst impact on the economy.

Dealing with this disease halfheartedly is worse than doing nothing at all. Doing nothing at all is still much worse than mounting an effective attack on the disease and saving lives as well as the economy.

I don't get why this is not well understood. I guess Kim was right.

--

Rick C.

-+-- Get 1,000 miles of free Supercharging
-+-- Tesla referral code - https://ts.la/richard11209

And you are a fucking SHILL - posting that shit about 1,000 miles of "free" supercharging to LINE YOUR FUCKING POCKETS!!!!!!

He's a penny-pincher. Some people are that way.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tuesday, July 28, 2020 at 10:29:36 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Mon, 27 Jul 2020 19:58:27 -0700 (PDT), Flyguy
soar2morrow@yahoo.com> wrote:

And you are a fucking SHILL - posting that shit about 1,000 miles of "free" supercharging to LINE YOUR FUCKING POCKETS!!!!!!

He's a penny-pincher. Some people are that way.

It's funny that Larkin is so intimidated by me. He never responds directly to my posts, but loves to take little digs by responding about me to others. Clearly he is afraid to actually engage me in conversation.

He's an ankle biter. Some people are that way.

--

Rick C.

+--+ Get 1,000 miles of free Supercharging
+--+ Tesla referral code - https://ts.la/richard11209
 
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what
an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out that
the
factors-of-10 productivity improvements of the early days were gained by
getting rid of extrinsic complexity--crude tools, limited hardware,
and so
forth.

Now the issues are mostly intrinsic to an artifact built of thought.
So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit you
quickly realised that the process which ensures all the other processes
are kept busy doing useful things is by far the most important.
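
Purely as a sketch of that coordinator idea in plain C++ (the names are illustrative, not taken from anyone's real code): one feeder keeps a shared queue stocked while the worker threads stay busy draining it.

// Minimal work-queue sketch: a coordinator feeds jobs, workers drain them.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkQueue {
    std::queue<std::function<void()>> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
public:
    void push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(job)); }
        cv.notify_one();
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
    }
    bool pop(std::function<void()> &job) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return done || !q.empty(); });
        if (q.empty()) return false;            // shut down and drained
        job = std::move(q.front()); q.pop();
        return true;
    }
};

int main() {
    WorkQueue wq;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&]{ std::function<void()> job;
                                  while (wq.pop(job)) job(); });
    for (int i = 0; i < 100; ++i) wq.push([]{ /* useful work goes here */ });
    wq.shutdown();
    for (auto &w : workers) w.join();
    return 0;
}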

> I'm talking about programmer productivity, not MIPS.

There is still scope for some improvement but most of the ways it might
happen have singularly failed to deliver. There are plenty of very high
quality code libraries in existence already but people still roll their
own :( An unwillingness of businesses to pay for licensed working code.

The big snag is that way too many programmers do the coding equivalent
in mechanical engineering terms of manually cutting their own non
standard pitch and diameter bolts - sometimes they make very predictable
mistakes too. The latest compilers and tools are better at spotting
human errors using dataflow analysis but they are far from perfect.

--
Regards,
Martin Brown
 
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what
an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited hardware,
and so
forth.

Now the issues are mostly intrinsic to an artifact built of thought.
So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit you
quickly realised that the process which ensures all the other processes
are kept busy doing useful things is by far the most important.

I wrote a clusterized optimizing EM simulator that I still use--I have a
simulation gig just starting up now, in fact. I learned a lot of ugly
things about the Linux thread scheduler in the process, such as that the
pthreads documents are full of lies about scheduling and that you can't
have a real-time thread in a user mode program and vice versa. This is
an entirely arbitrary thing--there's no such restriction in Windows or
OS/2. Dunno about BSD--I should try that out.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?
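
For what it's worth, here is roughly the experiment in question written against the portable pthreads calls; whether the request is honoured for one thread of an otherwise ordinary process is exactly the point in dispute, and it varies with privileges and rlimits. A minimal sketch, not a recommendation (build with g++ -pthread; the interesting part is the return code):

// Try to give a single thread the "real-time" SCHED_FIFO policy while the
// rest of the process stays SCHED_OTHER, and report what comes back.
// On Linux this typically needs root, CAP_SYS_NICE, or an RLIMIT_RTPRIO grant.
#include <cstdio>
#include <cstring>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static void *worker(void *) {
    for (;;) sleep(1);          // sleep() is a cancellation point
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);

    sched_param sp{};
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);
    int rc = pthread_setschedparam(t, SCHED_FIFO, &sp);
    if (rc != 0)
        std::printf("SCHED_FIFO refused: %s\n", std::strerror(rc));
    else
        std::printf("this system accepted one RT thread in a user process\n");

    pthread_cancel(t);
    pthread_join(t, nullptr);
    return 0;
}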

I'm talking about programmer productivity, not MIPS.

There is still scope for some improvement but most of the ways it might
happen have singularly failed to deliver. There are plenty of very high
quality code libraries in existence already but people still roll their
own :( An unwillingness of businesses to pay for licensed working code.

The big snag is that way too many programmers do the coding equivalent
in mechanical engineering terms of manually cutting their own non
standard pitch and diameter bolts - sometimes they make very predictable
mistakes too. The latest compilers and tools are better at spotting
human errors using dataflow analysis but they are far from perfect.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what
an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited hardware,
and so
forth.

Now the issues are mostly intrinsic to an artifact built of thought.
So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Fencepost errors are also relatively easy to test for. Having a good
method of generating test vectors is important.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit you
quickly realised that the process which ensures all the other processes
are kept busy doing useful things is by far the most important.

For things like CUDA or OpenCL you're smack up against the
von Neumann bottleneck. I expect the video card makers to
embrace interfaces faster than PCIe relatively soon, like M.2 NVMe.

I may get back to it, but for one thing I work on ( VST plugin
convolution ), the GPP approach wins for now.

Other than that, it depends on what you mean by "massively parallel."
With bog standard open/read/write/close Linux driver ioctl() and
event driven things like select()/poll()/epoll() it gets quite a bit
easier.

If that's not good enough, shared memory is a possibility. There are
other paradigms.
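
A bare-bones sketch of the select()/poll()/epoll() style being described, using epoll; the two descriptors here (stdin and a pipe) are just stand-ins for whatever devices or sockets you actually care about:

// One thread waits on several descriptors and services whichever is readable.
#include <cstdio>
#include <sys/epoll.h>
#include <unistd.h>

int main() {
    int pipefd[2];
    if (pipe(pipefd) < 0) return 1;

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;

    ev.data.fd = 0;                                   // stdin
    epoll_ctl(ep, EPOLL_CTL_ADD, 0, &ev);
    ev.data.fd = pipefd[0];                           // read end of the pipe
    epoll_ctl(ep, EPOLL_CTL_ADD, pipefd[0], &ev);

    epoll_event ready[8];
    int n = epoll_wait(ep, ready, 8, 1000);           // wait up to 1 s
    for (int i = 0; i < n; ++i) {
        char buf[256];
        ssize_t len = read(ready[i].data.fd, buf, sizeof buf);
        std::printf("fd %d: %zd bytes\n", ready[i].data.fd, len);
    }
    close(ep); close(pipefd[0]); close(pipefd[1]);
    return 0;
}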

I'm talking about programmer productivity, not MIPS.

There is still scope for some improvement but most of the ways it might
happen have singularly failed to deliver. There are plenty of very high
quality code libraries in existence already but people still roll their
own :( An unwillingness of businesses to pay for licensed working code.

When I think of libraries, I think of what's available for Fortran.


Writing libraries otherwise isn't that good of a business model. Open
source libraries are only as good as the people who steer them.

The "boost" library should be a wonderful thing; it is, sometimes, but
more often it's just a whacking great overhead.

The big snag is that way too many programmers do the coding equivalent
in mechanical engineering terms of manually cutting their own non
standard pitch and diameter bolts - sometimes they make very predictable
mistakes too. The latest compilers and tools are better at spotting
human errors using dataflow analysis but they are far from perfect.

But it's not like there are healthy markets around for "bolt makers" in
software. And there's got to be a limit to how good the tools actually
are. Turns out I can start using CLANG at home; I'll see how impressive
it is.

The problem in software is pretty simple: half the practitioners have
been doing it for less than five years. Throw in the distractions of
developing for the Web and it's even worse.

--
Les Cargill
 
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what
an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited hardware,
and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most important.

I wrote a clusterized optimizing EM simulator that I still use--I have a
simulation gig just starting up now, in fact.  I learned a lot of ugly
things about the Linux thread scheduler in the process, such as that the
pthreads documents are full of lies about scheduling and that you can't
have a real-time thread in a user mode program and vice versa.  This is
an entirely arbitrary thing--there's no such restriction in Windows or
OS/2.  Dunno about BSD--I should try that out.

In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don't think this does what you want.

"Cons: Tasks that runs in the real-time context does not have access to
all of the resources (drivers, services, etc.) of the Linux system."

https://github.com/MarineChap/Real-time-system-course

I haven't set any of this up in the past. Again - if we had an FPGA,
there was a device driver for it and the device driver kept enough FIFO
to prevent misses.



I'm talking about programmer productivity, not MIPS.

There is still scope for some improvement but most of the ways it
might happen have singularly failed to deliver. There are plenty of
very high quality code libraries in existence already but people still
roll their own :( An unwillingness of businesses to pay for licensed
working code.

The big snag is that way too many programmers do the coding equivalent
in mechanical engineering terms of manually cutting their own non
standard pitch and diameter bolts - sometimes they make very
predictable mistakes too. The latest compilers and tools are better at
spotting human errors using dataflow analysis but they are far from
perfect.

Cheers

Phil Hobbs

--
Les Cargill
 
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can't have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there's no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don't think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.

I'd be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say. So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.
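
To illustrate the complaint (this is just a sketch of the interface, not anyone's production code): under the ordinary SCHED_OTHER policy the only static priority value Linux accepts is 0, so the pthreads call that nominally sets per-thread priority has nothing to say.

// Under SCHED_OTHER the static priority range collapses to a single value,
// so pthread_setschedparam() cannot express "this thread matters less".
#include <cstdio>
#include <pthread.h>
#include <sched.h>

int main() {
    std::printf("SCHED_OTHER priority range: %d..%d\n",
                sched_get_priority_min(SCHED_OTHER),
                sched_get_priority_max(SCHED_OTHER));   // prints 0..0 on Linux

    sched_param sp{};
    sp.sched_priority = 0;                  // the only value that is accepted
    int rc = pthread_setschedparam(pthread_self(), SCHED_OTHER, &sp);
    std::printf("setting the one allowed value returns %d\n", rc);
    return 0;
}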

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
Phil Hobbs wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors
are typically fence post errors. Binary fence post errors being
about the most severe since you end up with the opposite of what you
intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively
parallel hardware. If you have ever done any serious programming on
such kit you quickly realised that the process which ensures all the
other processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I
have a simulation gig just starting up now, in fact.  I learned a lot
of ugly things about the Linux thread scheduler in the process, such
as that the pthreads documents are full of lies about scheduling and
that you can't have a real-time thread in a user mode program and
vice versa.  This is an entirely arbitrary thing--there's no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit
of a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee
them to run. You may be able to get close if you remove unnecessary
services.

I don't think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well.  Any compute-bound thread in a realtime process will
bring the UI to its knees.

I'd be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo.  They all have to have the same priority,
despite what the pthreads docs say.

That last bit makes me wonder. Priority settings have to conform to
\"policies\" but there have to be more options than \"the same\".

This link sends me to the \"sched(7)\" man page.

https://man7.org/linux/man-pages/man2/sched_setscheduler.2.html

Might have to use sudo to start the thing, which is bad form in
some domains these days.

So in Linux there is no way to
express the idea that some threads in a process are more important than
others.  That destroys the otherwise-excellent scaling of my simulation
code.

There's a lot of info on the web now that seems to indicate you can
probably do what you need done.

FWIW:
http://www.yonch.com/tech/82-linux-thread-priority


Cheers

Phil Hobbs

--
Les Cargill
 
On a sunny day (Sun, 2 Aug 2020 16:26:26 -0400) it happened Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote in
<35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>:

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Assuming RT means 'real time'
No
First Unix / Linux (whatever version) is not a real time system.
It is a multi-tasker and so sooner or later it will have to do other things than your code.
For a kernel module to some extent you can service interrupts and keep some data in memory.
It will then be read sooner or later by the user program.

For threads in a program anything time critical is out.
Many things will work though as for example i2c protocol does not care so much about
timing, I talk to SPI and i2c chips all the time from threads.
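
As an aside, that kind of non-time-critical I2C access can be done from an ordinary Linux thread through the i2c-dev interface; the /dev/i2c-1 bus node and the 0x48 slave address below are placeholders for whatever chip you actually have, so treat this as a sketch only.

// Read one register from an I2C device via the Linux i2c-dev interface.
#include <cstdio>
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/i2c-1", O_RDWR);      // placeholder bus
    if (fd < 0) { std::perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {     // placeholder slave address
        std::perror("I2C_SLAVE"); return 1;
    }
    unsigned char reg = 0x00;                 // register pointer
    unsigned char val = 0;
    if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
        std::printf("register 0x%02x = 0x%02x\n", reg, val);
    close(fd);
    return 0;
}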

The way I do 'real time' with Linux is add a PIC to do the real time stuff,
or add logic and a hardware FIFO, FPGA if needed.

All depends on your definition of 'real time' and requirements.

Here real time DVB-S encoding from a Raspberry Pi,
uses two 4k x 9 FIFOs to handle the task switch interrupt.
http://panteltje.com/panteltje/raspberry_pi_dvb-s_transmitter/
 
On 03/08/20 07:04, Jan Panteltje wrote:
On a sunny day (Sun, 2 Aug 2020 16:26:26 -0400) it happened Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote in
35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>:

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Assuming RT means 'real time'
No
First Unix / Linux (whatever version) is not a real time system.

It can be, for the reasons you note below: definition of terms.

There are many telecoms Unix/Linux and Java programs that
are realtime.


It is a multi-tasker and so sooner or later it will have to do other things than your code.
For a kernel module to some extent you can service interrupts and keep some data in memory.
It will then be read sooner or later by the user program.

For threads in a program anything time critical is out.
Many things will work though as for example i2c protocol does not care so much about
timing, I talk to SPI and i2c chips all the time from threads.

The way I do 'real time' with Linux is add a PIC to do the real time stuff,
or add logic and a hardware FIFO, FPGA if needed.

All depends on your definition of 'real time' and requirements.

Obviously real time != fast, but that's boringly obvious.

In the telecoms industry \"real time\" often means time
guarantees are statistical, e.g. connect a call with a
mean time less than 0.5s.

Personally as a customer and engineer I would prefer
95th percentile rather than mean since it is a better
indication of the performance limit, but as a vendor
mean is more convenient.

Anybody trying to use Linux as a fast hard realtime
system is going to have to use a specialised kernel.
Even then they can be screwed by caches and interrupts.
 
On 2020-08-03 02:04, Jan Panteltje wrote:
On a sunny day (Sun, 2 Aug 2020 16:26:26 -0400) it happened Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote in
35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>:

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Assuming RT means 'real time'
No
First Unix / Linux (whatever version) is not a real time system.
It is a multi-tasker and so sooner or later it will have to do other things than your code.
For a kernel module to some extent you can service interrupts and keep some data in memory.
It will then be read sooner or later by the user program.

For threads in a program anything time critical is out.
Many things will work though as for example i2c protocol does not care so much about
timing, I talk to SPI and i2c chips all the time from threads.

The way I do 'real time' with Linux is add a PIC to do the real time stuff,
or add logic and a hardware FIFO, FPGA if needed.

All depends on your definition of 'real time' and requirements.

Here real time DVB-S encoding from a Raspberry Pi,
uses two 4k x 9 FIFOs to handle the task switch interrupt.
http://panteltje.com/panteltje/raspberry_pi_dvb-s_transmitter/

Perhaps I was unclear. I'm talking about thread classes, one of which is
officially called "real time", i.e. high priority, not about the design
of real time systems.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-08-03 01:17, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a
lot more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors
are typically fence post errors. Binary fence post errors being
about the most severe since you end up with the opposite of what
you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively
parallel hardware. If you have ever done any serious programming on
such kit you quickly realised that the process which ensures all
the other processes are kept busy doing useful things is by far the
most important.

I wrote a clusterized optimizing EM simulator that I still use--I
have a simulation gig just starting up now, in fact.  I learned a
lot of ugly things about the Linux thread scheduler in the process,
such as that the pthreads documents are full of lies about
scheduling and that you can't have a real-time thread in a user mode
program and vice versa.  This is an entirely arbitrary
thing--there's no such restriction in Windows or OS/2.  Dunno about
BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit
of a cadge. I've never really seen a good explanation of what that
means.

Does anybody here know if you can mix RT and user threads in a
single process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee
them to run. You may be able to get close if you remove unnecessary
services.

I don't think this does what you want.

In Linux if one thread is real time, all the threads in the process
have to be as well.  Any compute-bound thread in a realtime process
will bring the UI to its knees.

I'd be perfectly happy with being able to _reduce_ thread priority in
a user process, but noooooo.  They all have to have the same priority,
despite what the pthreads docs say.

That last bit makes me wonder. Priority settings have to conform to
"policies" but there have to be more options than "the same".

This link sends me to the "sched(7)" man page.

https://man7.org/linux/man-pages/man2/sched_setscheduler.2.html

Might have to use sudo to start the thing, which is bad form in
some domains these days.

So in Linux there is no way to express the idea that some threads in a
process are more important than others.  That destroys the
otherwise-excellent scaling of my simulation code.


There's a lot of info on the web now that seems to indicate you can
probably do what you need done.

FWIW:
http://www.yonch.com/tech/82-linux-thread-priority

Oh, I know what the docs say. What they don't tell you is that (a) None
of that scheduler stuff applies to user processes, just realtime ones;
(b) You can't mix real time and user threads in the same process; and
(c) You can't adjust the relative priority of threads in a user process,
no way, no how. You can turn the niceness of the whole process up (or
down, if you're running as root), but you can't do it thread-by-thread.

That means that when I need a communications thread to preempt other
threads unconditionally in a compute-bound process such as my simulator,
I have no way to express that in Linux. If I put a compute-bound thread
in the realtime class, it brings the UI to its knees and the box
eventually crashes.

For my purposes, I'd be perfectly happy if I could _reduce_ the priority
of the compute threads and leave the comms threads' priority alone, but
nooooooo.(*)

This limitation destroys the scaling of my simulator on Linux--it's
about a 30% performance hit on many-host clusters. Works fine in Windows
and OS/2, but the Linux Kernel Gods don't permit it, hence my question
about BSD.

The only way to do something like that in Linux appears to be to put all
the comms threads in a separate process, which involves all sorts of
shared memory and synchronization hackery too hideous to contemplate.
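
For the record, the sort of hackery being declined looks roughly like this (names and sizes are mine, purely illustrative; link with -pthread, plus -lrt on older glibc): a shared-memory segment plus a process-shared semaphore so a separate comms process could hand buffers to the compute process.

// Scratch shared-memory segment with a process-shared POSIX semaphore.
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct Shared {
    sem_t ready;                // posted when data[] is valid
    char  data[4096];
};

int main() {
    int fd = shm_open("/sim_comms_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    if (ftruncate(fd, sizeof(Shared)) != 0) return 1;
    auto *sh = static_cast<Shared *>(
        mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    sem_init(&sh->ready, 1 /* shared between processes */, 0);

    if (fork() == 0) {                       // "comms" child
        std::strcpy(sh->data, "one block of field data");
        sem_post(&sh->ready);
        _exit(0);
    }
    sem_wait(&sh->ready);                    // "compute" parent
    std::printf("got: %s\n", sh->data);
    wait(nullptr);
    sem_destroy(&sh->ready);
    munmap(sh, sizeof(Shared));
    shm_unlink("/sim_comms_demo");
    return 0;
}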

Cheers

Phil Hobbs

(*) When I talk about this, some fanboi always accuses me of trying to
hog the machine by jacking up the priority of my process, so let's be
clear about it.

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can't have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there's no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don't think this does what you want.

"Cons: Tasks that runs in the real-time context does not have access to
all of the resources (drivers, services, etc.) of the Linux system."

https://github.com/MarineChap/Real-time-system-course

I haven't set any of this up in the past. Again - if we had an FPGA,
there was a device driver for it and the device driver kept enough FIFO
to prevent misses.




I'm talking about programmer productivity, not MIPS.

There is still scope for some improvement but most of the ways it
might happen have singularly failed to deliver. There are plenty of
very high quality code libraries in existence already but people
still roll their own :( An unwillingness of businesses to pay for
licensed working code.

The big snag is that way too many programmers do the coding
equivalent in mechanical engineering terms of manually cutting their
own non standard pitch and diameter bolts - sometimes they make very
predictable mistakes too. The latest compilers and tools are better
at spotting human errors using dataflow analysis but they are far
from perfect.

Cheers

Phil Hobbs
There's an interesting 2016 paper about serious performance bugs in the
Linux scheduler here:

http://www.ece.ubc.ca/%7Esasha/papers/eurosys16-final29.pdf

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be a
correct-by-design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

--
Regards,
Martin Brown
 
Phil Hobbs wrote:
On 2020-08-03 01:17, Les Cargill wrote:
snip

Apologies for inspiring you to repeat yourself.

The only way to do something like that in Linux appears to be to put all
the comms threads in a separate process, which involves all sorts of
shared memory and synchronization hackery too hideous to contemplate.

Have you ruled out (nonblocking) sockets yet? They're quite
performant[1]. This would give you a mechanism to differentiate
priority. You can butch up an approximation of control flow, and it
should solve any synchronization problems - at least you won't need
semaphores.

[1] but perhaps not performant enough...

There is MSG_ZEROCOPY.
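
For concreteness, and purely as a sketch of the knobs being mentioned (the address and port are placeholders, and the MSG_ZEROCOPY completion notifications that arrive on the socket error queue are not handled here): a non-blocking TCP socket with Linux's SO_ZEROCOPY path enabled, available on kernel 4.14 and later.

// Non-blocking TCP socket with the Linux zero-copy send path enabled.
#include <arpa/inet.h>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60            // value from recent Linux headers
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof one);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   // non-blocking

    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5555);                  // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);
    connect(fd, reinterpret_cast<sockaddr *>(&peer), sizeof peer);

    char buf[8192];
    std::memset(buf, 0, sizeof buf);
    ssize_t n = send(fd, buf, sizeof buf, MSG_ZEROCOPY | MSG_DONTWAIT);
    std::printf("send returned %zd\n", n);        // -1/EAGAIN until connected
    close(fd);
    return 0;
}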

Cheers

Phil Hobbs

(*) When I talk about this, some fanboi always accuses me of trying to
hog the machine by jacking up the priority of my process, so let's be
clear about it.

There is always significant confusion about priority. Pushing it as a
make/break thing in a design is considered bad form :) But sometimes...

--
Les Cargill
 
On 11/08/20 10:02, Martin Brown wrote:
On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be a correct-by-design
CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that can find
the most commonly made mistakes as rapidly as possible. Various dataflow methods
can catch a whole host of classic bugs before the code is even run but industry
seems reluctant to invest so we have the status quo. C isn't a great language
for proof of correctness

It is worse than that. According to some that have sat
on the standards committees (e.g. WG14), and been at the
sharp end of user-compiler-machine "debates", even the
language designers and implementers have disagreements
and misunderstandings about what standards and
implementations do and say.

If they have those problems, automated systems and mere
users stand zero chance.


but the languages that tried to force good programmer
behaviour have never made any serious penetration into the commercial market. I
know this to my cost as I have in the past been involved with compilers.

Well, Java has been successful at preventing many
stupid mistakes seen with c/c++. That allows idiot
programmers to make new and more subtle mistakes :(

Rust and Go are showing significant promise in the
marketplace, and will remove many traditional "infelicities"
in multicore and distributed applications.

Ada/SPARK is the best commercial example of a language
for high reliability applications. That it is a niche
market is an illustration of how difficult the problems
are.


Ship it and be damned software development culture persists and it existed long
before there were online updates over the internet.

Yup. But now there are more under-educated programmers
and PHBs around :(
 
On 2020-08-04 09:25, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-03 01:17, Les Cargill wrote:
snip

Apologies for inspiring you to repeat yourself.

The only way to do something like that in Linux appears to be to put
all the comms threads in a separate process, which involves all sorts
of shared memory and synchronization hackery too hideous to contemplate.


Have you ruled out (nonblocking) sockets yet? They're quite
performant[1]. This would give you a mechanism to differentiate
priority. You can butch up an approximation of control flow, and it
should solve any synchronization problems - at least you won't need
semaphores.

[1] but perhaps not performant enough...

There is MSG_ZEROCOPY.


(*) When I talk about this, some fanboi always accuses me of trying to
hog the machine by jacking up the priority of my process, so let's be
clear about it.


There is always significant confusion about priority. Pushing it as a
make/break thing in a design is considered bad form :) But sometimes...

Did you look at the paper I posted upthread? Its title is "The Linux
Scheduler: A Decade of Wasted Cores." (2016) That's about it.

I'm using nonblocking sockets already. On Windows and OS/2, even on
multicore machines, the realtime threads get run very soon after
becoming runnable. They don't have that much to do, so they don't bog
down the UI or the other realtime services.

The design breaks the computational universe down into shoeboxes full of
sugar cubes. Each shoebox is a Chunk object, and communicates via six
Surface objects, each with its own high priority thread. Surface has
two subclasses, LocalSurface and NetSurface, which communicate via local
copying and sockets respectively, depending on whether the adjacent
Chunk is running on the same machine or not.

The Chunk data arrays are further broken down into Legs (like a journey,
not a millipede). A Leg is a 1-D row of adjacent cells that all have
the same updating equations and coefficients,(*) plus const references
to the four nearest neighbours and the functions that do the updating.
Generating the Legs is done once at the beginning of the run, and the
inner loop is a single while() that iterates over the list of Legs, once
on each half timestep (E -> H then H -> E).

This is a nice clean design that vectorizes pretty well even in C++ and
runs dramatically faster than the usual approach, which is to use a
triple loop wrapped around a switch statement that selects the updating
equation and coefficients for each cell on each half-step.
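
Just to make the shape of that concrete (all the names below are mine, a guess at the structure, not Phil's actual classes): a Leg bundles one run of cells with one coefficient set and one updater, so the inner loop is a flat walk over Legs rather than a triple loop around a switch.

// Illustrative sketch of the Leg idea for an FDTD-style update.
#include <vector>

struct Coeffs { double ca, cb; };            // placeholder update coefficients

struct Leg {
    double *cells;                           // this 1-D run of field values
    const double *nbr[4];                    // four nearest-neighbour rows
    int len;                                 // number of cells in the run
    const Coeffs *k;                         // coefficients shared by the run
    void (*update)(Leg &);                   // updater chosen once, at setup
};

// One possible updater: every cell in the Leg uses the same equation.
static void updateDielectric(Leg &L) {
    for (int i = 0; i < L.len; ++i)
        L.cells[i] = L.k->ca * L.cells[i]
                   + L.k->cb * (L.nbr[0][i] + L.nbr[1][i] + L.nbr[2][i] + L.nbr[3][i]);
}

// The inner loop: one flat pass per half time step (E -> H, then H -> E).
static void halfStep(std::vector<Leg> &legs) {
    for (auto &L : legs)
        L.update(L);
}

int main() {
    std::vector<double> row(8, 1.0), up(8, 0.5), dn(8, 0.5), lf(8, 0.5), rt(8, 0.5);
    Coeffs k{0.9, 0.025};
    Leg leg{ row.data(), { up.data(), dn.data(), lf.data(), rt.data() },
             8, &k, updateDielectric };
    std::vector<Leg> legs{ leg };
    halfStep(legs);                          // one half step over every Leg
    return 0;
}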

It runs fine on Linux as well, except that there's this unfortunate
tendency for the Surface threads to sit around sucking their thumbs when
they should be running, sometimes for seconds at a time. That really
hurts on highly-multicore boxes, where you want to run lots of small
Chunks to get the shortest run times.

It's an optimizing simulator, and it may need 100 or more complete runs
for the optimizer to converge, especially if you're using several
optimization parameters and have no idea what the optimal configuration
looks like. That can easily run to 10k-100k time steps altogether,
especially with metals, which require very fine grid meshes. (FDTD's
run time goes as N**4 because the time step has to be less than n/c
times the diagonal of the cells.)

Cheers

Phil Hobbs

(*) Dielectrics are pretty simple, but metals are another thing
altogether, especially at high frequency where the "normal metal"
approximation is approximately worthless. The free-electron metals
(copper, silver, and gold) exhibit large negative dielectric constants
through large ranges of the IR, which makes the usual FDTD propagator
unstable.



--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
<'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.
The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.
1) you need to know how to code and the various coding systems used.
2) you need to know 100% about what you are coding for.

What I see in the world of bloat we live in is
0) no clue
1) 1 week tinkering with C++ or snake languages.
2) Huh? that is easy ..

And then blame everything on the languages and compilers if it goes wrong.

And then there are hackers, and NO system is 100% secure.

Some open source code I wrote and published runs 20 years without problems.
I know it can be hacked...


We will see ever more bloat as cluelessness is built upon cluelessness,
problem here is that industry / capitalism likes that.
Sell more bloat, sell more hardware, obsolete things ever faster
keep spitting out new standards ever faster,
 
On 11/08/2020 12:42, Jan Panteltje wrote:
On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.

Although I tend to agree with you I think a part of the problem is that
the people who are any good at it discover pretty early on that for
typical university scale projects they can hack it out from the solid in
the last week before the assignment is due to be handed in.

This method does not scale well to large scale software projects.

The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.
1) you need to know how to code and the various coding systems used.
2) you need to know 100% about what you are coding for.

What I see in the world of bloat we live in is
0) no clue
1) 1 week tinkering with C++ or snake languages.
2) Huh? that is easy ..

Although I have an interest in computer architecture I would say that
today 0) is almost completely irrelevant to most programming problems
(unless it is on a massively parallel or Harvard architecture CPU)

Teaching of algorithms and complexity is where things have gone awry.
Programmers should not be reinventing the square (or, if you are very
lucky, hexagonal) wheel every time; they should know about round wheels and
where to find them. Knuth was on the right path but events overtook him.

And then blame everything on the languages and compilers if it goes wrong.

Compilers have improved a long way since the early days but they could
do a lot more to prevent compile time detectable errors being allowed
through into production code. Such tools are only present in the high
end compilers rather than the ones that students use at university.

> And then there are hackers, and NO system is 100% secure.

Again you can automate some of the most likely hacker tests and see if
you can break things that way. They are not called script kiddies for
nothing. Regression testing is powerful for preventing bugs from
reappearing in a large codebase.

Some open source code I wrote and published runs 20 years without problems.
I know it can be hacked...

I once ported a big mainframe package onto a Z80 for a bet. It needed an
ice pack for my head and a lot of overlays. It was code that we had
ported to everything from a Cray-XMP downwards. We always learned
something new from every port. The Cyber-76 was entertaining because our
unstated assumption of IBM FORTRAN 32 bit and 64 bit reals was violated.

The Z80 implementation of Fortran was rather less forgiving than the
mainframes (being a strict interpretation of the Fortran IV standard)

We will see ever more bloat as cluelessness is built upon cluelessness,
problem here is that industry / capitalism likes that.
Sell more bloat, sell more hardware, obsolete things ever faster
keep spitting out new standards ever faster,

I do think that software has become ever more over complicated in an
attempt to make it more user friendly. OTOH we now have almost fully
working voice communication with the likes of Alexa aka she who must not
be named (or she lights up and tries to interpret your commands).
(and there are no teleprinter noises off a la original Star Trek)

--
Regards,
Martin Brown
 
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:

We don't need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

a tool that can cut wood can cut your hand; the only way to totally prevent that
is to add safety features until it cannot cut anything anymore

Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.

It has been tried and it all ended in tears. Viper was supposed to be a
correct-by-design CPU but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.

Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.

Online code updates should of course be disallowed by default. It's an
invitation to ship crap code now and assume it will be fixed some day.
And that the users will find the bugs and the black-hats will find the
vulnerabilities.

Why is there no legal liability for bad code?





--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tue, 11 Aug 2020 11:42:10 GMT, Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote:

On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:

Humans make mistakes and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run but industry seems reluctant to invest so we have the status
quo. C isn't a great language for proof of correctness but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost
as I have in the past been involved with compilers.

Ship it and be damned software development culture persists and it
existed long before there were online updates over the internet.

I think it is not that hard to write code that simply works and does what it needs to do.
The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:

0) you need to understand the hardware your code runs on.

That's impossible. Not even Intel understands Intel processors, and
they keep a lot secret too.

>1) you need to know how to code and the various coding systems used.

There are not enough people who can do that.

>2) you need to know 100% about what you are coding for.

Generally impossible too.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
