Conical inductors--still $10!...

Tom Gardner wrote:
On 15/08/20 03:51, Les Cargill wrote:
Tom Gardner wrote:
On 14/08/20 04:13, Les Cargill wrote:
Tom Gardner wrote:
Rust and Go are showing significant promise in the
marketplace,

Mozilla seems to have dumped at least some of the Rust team:

https://www.reddit.com/r/rust/comments/i7stjy/how_do_mozilla_layoffs_affect_rust/


I doubt they will remain unemployed. Rust is gaining traction
in wider settings.


I dunno - I can't separate the messaging from the offering. I'm
fine with a C/C++ compiler so I have less than no incentive to
even become remotely literate about Rust.

The Rustaceans seem obsessed with stuff my cohort (read: old people)
learned six months into their first C project. But there may
well be benefits I don't know about.

Too many people /think/ they know C.

I first used C in ~81, and learned it from the two
available books, which I still have. The second book
was, of course, a book on traditional mistakes in C
"The C Puzzle Book".

It is horrifying that Boehm thought it worth writing this
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
http://hboehm.info/misc_slides/pldi05_threads.pdf
and that it surprised many C practitioners.
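
(For anyone who hasn't read it, a minimal C/pthreads sketch of one of the
problems Boehm describes, simplified and with invented names: two bit-fields
that share a storage unit. Compile with -pthread.)

#include <pthread.h>
#include <stdio.h>

/* Two "independent" flags that happen to share a storage unit. */
struct flags { unsigned a : 1; unsigned b : 1; } f;

static void *set_a(void *arg) { (void)arg; f.a = 1; return NULL; }
static void *set_b(void *arg) { (void)arg; f.b = 1; return NULL; }

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, set_a, NULL);
    pthread_create(&tb, NULL, set_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    /* Each store above is really a read-modify-write of the shared unit,
       so one update can overwrite the other: a data race the pre-C11
       language definition had no words for.  A library (pthreads) cannot
       fix that on its own; only a language-level memory model can. */
    printf("a=%d b=%d\n", f.a, f.b);
    return 0;
}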

Why did it surprise anybody?

Rust directly addresses some of the pain points.


It is not-not a thing; the CVE list shows that. I am just appalled
that these defects are released.

If you think C and C++ languages and implementations
are fault-free, I\'d like to visit your planet sometime :)

I don\'t think that at all. That\'s not necessarily a reasonable
standard to boot. You can *reliably* produce perfectly
functional work product with them, without knowing a whole lot about
what\'s under the hood (mostly ) and without a whole mass of pain.

Once you find the few rocks under the water...

You can start with the C++ FQA http://yosefk.com/c++fqa/

Watching (from a distance) the deliberations of the C/C++
committees in the early 90s was enlightening, in a bad way.
One simple debate (which lasted years) was whether it ought
to be possible or impossible to "cast away constness".
There are good reasons for both, and they cannot be
reconciled.
(Yes: to allow debuggers and similar tools to inspect memory.
No: to enable safe aggressive optimisations)

You\'ll get no argument here. But all things which have too
much light on them end up in Mandaranism.

And the best tools to inspect memory are built into the running
application itself.

Linus Torvalds is vociferously and famously opposed to having
C++ anywhere near the Linux kernel (good taste IMNSHO).

Don\'t take any cues from Linus Torvalds. He\'s why my deliverables
at one gig were patch files. I\'ve no objection to that but geez...

And C++ is Just Fine. Now. It took what, 20 years?

Worse: 30 years!

Yep; yer right.

I first used it in \'88, and thought it a regression
over other available languages.


The reasons for "no C++ in the kernel" are quite serious, valid and
worthy of our approval.

He
has given a big hint he wouldn\'t oppose Rust, by stating that
if it is there it should be enabled by default.

https://www.phoronix.com/scan.php?page=news_item&px=Torvalds-Rust-Kernel-K-Build



I\'ve seen this movie before. It\'s yet another This Time It\'s Different
approach.

Oh, we\'ve all seen too many examples of that, in hardware
and software! The trick is recognising which bring worthwhile
practical and novel capabilities to the party. Most don\'t,
a very few do.

The jury is out w.r.t. Rust and Go, but they are worth watching.

Agreed. I really expected better progress, but you know how we are...

--
Les Cargill
 
Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

snip

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.


It\'s not even accounted for, nor is it an actual cost in the usual sense
of the word - nobody\'s trying to make this actually less expensive.


C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don\'t allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn\'t hard to understand, or even
hard to implement.
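
(A minimal C sketch of the copy-until-NUL hazard being described; the
function names and buffer size are invented for illustration.)

#include <stdio.h>
#include <string.h>

/* Classic foot-gun: copies until it hits a NUL, however far away that is. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);              /* overflows buf if name is >= 16 chars */
    printf("hello, %s\n", buf);
}

/* Bounded version: the destination size is part of the contract. */
void greet_bounded(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);  /* truncates instead of smashing the stack */
    printf("hello, %s\n", buf);
}

int main(void)
{
    greet_unsafe("short name");     /* happens to be safe, by luck */
    greet_bounded("a deliberately overlong attacker-controlled string");
    return 0;
}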

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.


That\'s what \"managed languages\" like Java or C# do. It\'s all bytecode
in a VM.

Java is, C# isn\'t.

Close enough:
https://docs.microsoft.com/en-us/dotnet/standard/managed-code#:~:text=Managed%20code%20is%20written%20in,don\'t%20get%20machine%20code.

\"You get Intermediate Language code which the runtime then compiles and
executes.\"

I\'d call that an implementation detail; it does not load the image into
memory then jump to _main.


The point of my comment is that both Java and C# are considered \"managed
languages\", especially for security purposes. I suppose somebody,
somewhere is writing virii in C# but ...

During installation C# assemblies are compiled into code
optimised for the specific processor. Of course those
optimisations can only be based on what the compiler/installer
guesses the code will do at runtime.

I\'ve wondered (without conclusion) whether that is why
it takes so long to install Windows updates, compared
with Linux updates.

Caveat: I haven\'t followed C# since the designer (Anders
Hejlsberg) gave us a lecture at HPLabs, just as C# was
being released.

None of us were impressed, correctly regarding it as a
proprietary me-too Java knockoff with slightly different
implementation choices.

Exactly.

--
Les Cargill
 
On 19/08/20 10:28, Les Cargill wrote:
Tom Gardner wrote:
On 15/08/20 03:51, Les Cargill wrote:
Tom Gardner wrote:
On 14/08/20 04:13, Les Cargill wrote:
Tom Gardner wrote:
Rust and Go are showing significant promise in the
marketplace,

Mozilla seems to have dumped at least some of the Rust team:

https://www.reddit.com/r/rust/comments/i7stjy/how_do_mozilla_layoffs_affect_rust/


I doubt they will remain unemployed. Rust is gaining traction
in wider settings.


I dunno - I can't separate the messaging from the offering. I'm
fine with a C/C++ compiler so I have less than no incentive to
even become remotely literate about Rust.

The Rustaceans seem obsessed with stuff my cohort (read: old people)
learned six months into their first C project. But there may
well be benefits I don't know about.

Too many people /think/ they know C.

I first used C in ~81, and learned it from the two
available books, which I still have. The second book
was, of course, a book on traditional mistakes in C
"The C Puzzle Book".

It is horrifying that Boehm thought it worth writing this
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
http://hboehm.info/misc_slides/pldi05_threads.pdf
and that it surprised many C practitioners.


Why did it surprise anybody?

I can only guess they
- were young and inexperienced
- didn\'t think about the language\'s fundamentals
- believed vendor/supplier hype/statements

Sad :(


Rust directly addresses some of the pain points.


It is not-not a thing; the CVE list shows that. I am just appalled
that these defects are released.

If you think C and C++ languages and implementations
are fault-free, I\'d like to visit your planet sometime :)


I don\'t think that at all. That\'s not necessarily a reasonable
standard to boot. You can *reliably* produce perfectly
functional work product with them, without knowing a whole lot about what\'s
under the hood (mostly ) and without a whole mass of pain.

Once you find the few rocks under the water...

You can start with the C++ FQA http://yosefk.com/c++fqa/

Watching (from a distance) the deliberations of the C/C++
committees in the early 90s was enlightening, in a bad way.
One simple debate (which lasted years) was whether it ought
to be possible or impossible to "cast away constness".
There are good reasons for both, and they cannot be
reconciled.
(Yes: to allow debuggers and similar tools to inspect memory.
No: to enable safe aggressive optimisations)



You\'ll get no argument here. But all things which have too
much light on them end up in Mandaranism.

Mandaranism?


> And the best tools to inspect memory are built into the running application itself.

Yes, but they will inevitably rely on core language
and implementation guarantees and lack of guarantees.

A classic in this context is that many optimisations
can only be done if const declarations are present.
Without that, the possibility of aliasing precludes
optimisations.

Now generic inspection tools, e.g. those that you might
use to inspect what's happening in a library, have to
be able to access the data, which implies aliasing.
And that requires the ability to remove constness.
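
(For concreteness, a small C sketch of the two flavours of casting away
constness; the variable names are invented.)

#include <stdio.h>

int main(void)
{
    /* Case 1: the pointer is const but the object itself is not.
       Casting the const away and writing through it is well defined. */
    int x = 1;
    const int *cp = &x;
    *(int *)cp = 2;                 /* fine: x was never const */

    /* Case 2: the object itself is const.  The same cast compiles,
       but writing through it is undefined behaviour -- exactly the
       freedom an optimiser wants, and exactly what an inspection
       tool would like to do anyway. */
    const int y = 3;
    int *p = (int *)&y;
    (void)p;                        /* *p = 4; here would be UB */

    printf("x = %d, y = %d\n", x, y);
    return 0;
}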

The debate as to whether it should be possible/impossible
to "cast away constness" occupied the committees for
at least a year in the early 90s.

Different languages have taken different extreme
positions on that. Either extreme is OK, but fudging
the issue isn\'t.

Java and similar heavily use reflection.
Rust ensures data has only one owner at a time.
Both are good, effective and reliable.


Linus Torvalds is vociferously and famously opposed to having
C++ anywhere near the Linux kernel (good taste IMNSHO).

Don\'t take any cues from Linus Torvalds. He\'s why my deliverables
at one gig were patch files. I\'ve no objection to that but geez...

And C++ is Just Fine. Now. It took what, 20 years?

Worse: 30 years!


Yep; yer right.

I first used it in \'88, and thought it a regression
over other available languages.


The reasons for "no C++ in the kernel" are quite serious, valid and worthy of
our approval.

He
has given a big hint he wouldn\'t oppose Rust, by stating that
if it is there it should be enabled by default.

https://www.phoronix.com/scan.php?page=news_item&px=Torvalds-Rust-Kernel-K-Build



I\'ve seen this movie before. It\'s yet another This Time It\'s Different
approach.

Oh, we\'ve all seen too many examples of that, in hardware
and software! The trick is recognising which bring worthwhile
practical and novel capabilities to the party. Most don\'t,
a very few do.

The jury is out w.r.t. Rust and Go, but they are worth watching.

Agreed. I really expected better progress, but you know how we are...

I live in hope. The only thing that makes me despair is people
who think the status quo is good and acceptable :)
 
On 19/08/20 10:33, Les Cargill wrote:
Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

snip

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.


It\'s not even accounted for, nor is it an actual cost in the usual sense
of the word - nobody\'s trying to make this actually less expensive.


C invites certain dangerous practices that attackers ruthlessly exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don\'t allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn\'t hard to understand, or even
hard to implement.

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.


That\'s what \"managed languages\" like Java or C# do. It\'s all bytecode
in a VM.

Java is, C# isn\'t.


Close enough:
https://docs.microsoft.com/en-us/dotnet/standard/managed-code#:~:text=Managed%20code%20is%20written%20in,don\'t%20get%20machine%20code.


\"You get Intermediate Language code which the runtime then compiles and
executes.\"

I\'d call that an implementation detail; it does not load the image into memory
then jump to _main.

Yebbut :)


The point of my comment is that both Java and C# are considered \"managed
languages\", especially for security purposes. I suppose somebody, somewhere is
writing virii in C# but ...

Except that C# has a gaping chasm in that security: "unsafe".
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/unsafe

I imagine too many programs make use of that "convenience".


During installation C# assemblies are compiled into code
optimised for the specific processor. Of course those
optimisations can only be based on what the compiler/installer
guesses the code will do at runtime.

I\'ve wondered (without conclusion) whether that is why
it takes so long to install Windows updates, compared
with Linux updates.

Caveat: I haven\'t followed C# since the designer (Anders
Hejlsberg) gave us a lecture at HPLabs, just as C# was
being released.

None of us were impressed, correctly regarding it as a
proprietary me-too Java knockoff with slightly different
implementation choices.

Exactly.
 
Tom Gardner wrote:
On 19/08/20 10:28, Les Cargill wrote:
Tom Gardner wrote:
snip

Why did it surprise anybody?

I can only guess they
 - were young and inexperienced
 - didn\'t think about the language\'s fundamentals
 - believed vendor/supplier hype/statements

Sad :(

I just mean anyone who\'s had a passing glance at an operating
systems course would at least most likely understand.


<snip>
You\'ll get no argument here. But all things which have too
much light on them end up in Mandaranism.

Mandaranism?

"That which is not required is forbidden. That which is not forbidden is
required."

And the best tools to inspect memory are built into the running
application itself.

Yes, but they will inevitably rely on core language
and implementation guarantees and lack of guarantees.

This is not a problem. Really. Now, if you're debugging whether *memory*
works or not, that's a different sorta bear hunt.

But it sorta shocks me how few developers seem to
reach for an "instrumentation first" approach.

I guess they guess. :)


A classic in this context is that many optimisations
can only be done if const declarations are present.
Without that, the possibility of aliasing precludes
optimisations.

Now generic inspection tools, e.g. those that you might
use to inspect what\'s happening in a library, have to
be able to access the data, which implies aliasing.

Mmmmm.... okay. Good instrumentation may invoke
aliasing, but it's not necessary.

I should clarify, at this point: by "instrumentation", I
mean anything from free transmission of state to RFC1213
style "pull" approaches to $DEITY only knows. Basically,
what used to be known as "telemetry".

And that requires the ability to remove constness.

Ah, baloney :) (that bit was intended to be entertaining).

I just mean that this whole exercise in pilpul doesn't
amount to too much.

The debate as to whether it should be possible/impossible
to "cast away constness" occupied the committees for
at least a year in the early 90s.

Turns out you can, and it's fine.

Different languages have taken different extreme
positions on that. Either extreme is OK, but fudging
the issue isn\'t.

Extremes are boring.

> Java and similar heavily use reflection.

As should good C programs. Not the same thing as in Java but
still. How can you measure something without... measuring something?
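
(A sketch of what such built-in, pull-style instrumentation can look like in
C; the counters are invented, and a real system would publish them via SNMP,
a socket or a debug command rather than stdout.)

#include <stdio.h>

/* Counters the application updates as it runs; one global block,
   so an external "pull" can read a snapshot of program state. */
struct telemetry {
    unsigned long rx_frames;
    unsigned long rx_crc_errors;
    unsigned long tx_retries;
};

static struct telemetry stats;

/* Called from the normal code paths. */
static void on_frame_received(int crc_ok)
{
    stats.rx_frames++;
    if (!crc_ok)
        stats.rx_crc_errors++;
}

/* The "pull" side: dump the counters on demand. */
static void telemetry_dump(void)
{
    printf("rx_frames=%lu rx_crc_errors=%lu tx_retries=%lu\n",
           stats.rx_frames, stats.rx_crc_errors, stats.tx_retries);
}

int main(void)
{
    on_frame_received(1);
    on_frame_received(0);
    telemetry_dump();
    return 0;
}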

<snip>
The jury is out w.r.t. Rust and Go, but they are worth watching.

Agreed. I really expected better progress, but you know how we are...

I live in hope. The only thing that makes me despair is people
who think the status quo is good and acceptable :)

Overall, it's not that bad now. It wasn't that bad before. We
have continents of embarrassments of options now.

Getting people to pay you for it is another story. Functional
correctness is for lawyers now.

--
Les Cargill
 
Tom Gardner wrote:
On 19/08/20 10:33, Les Cargill wrote:
Tom Gardner wrote:
On 14/08/20 04:35, Les Cargill wrote:
jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

snip

The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.


It\'s not even accounted for, nor is it an actual cost in the usual
sense
of the word - nobody\'s trying to make this actually less expensive.


C invites certain dangerous practices that attackers ruthlessly
exploit
like loops copying until they hit a null byte.

Let bad programs malfunction or crash. But don\'t allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn\'t hard to understand, or even
hard to implement.

We probably need to go to pseudocode-only programs. The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.


That\'s what \"managed languages\" like Java or C# do. It\'s all bytecode
in a VM.

Java is, C# isn\'t.


Close enough:
https://docs.microsoft.com/en-us/dotnet/standard/managed-code#:~:text=Managed%20code%20is%20written%20in,don\'t%20get%20machine%20code.


\"You get Intermediate Language code which the runtime then compiles and
executes.\"

I\'d call that an implementation detail; it does not load the image
into memory then jump to _main.

Yebbut :)

We knew that was coming. :)

The point of my comment is that both Java and C# are considered \"managed
languages\", especially for security purposes. I suppose somebody,
somewhere is writing virii in C# but ...

Except that C# has a gaping chasm in that security: "unsafe".
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/unsafe

Oh my. Words fail.

I imagine too many programs make use of that "convenience".

Of course they do. Without the proper training, we're all
some shade of destructive primate anyway.

--
Les Cargill
 
On 30/11/2020 22:26, Phil Hobbs wrote:
On 11/30/20 12:07 PM, albert wrote:
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs  <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can\'t have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there\'s no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that
out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can\'t guarantee
them
to run. You may be able to get close if you remove unnecessary
services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well.  Any compute-bound thread in a realtime process will
bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo.  They all have to have the same priority,
despite what the pthreads docs say.  So in Linux there is no way to
express the idea that some threads in a process are more important than
others.  That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread has been
started?

As I said way back in the summer, you can\'t change the priority or
scheduling of an individual user thread, and you can\'t mix user and
real-time threads in one process.

Thus you can have the default scheduler, which doesn\'t scale at all well
for this sort of job, or you can run the whole process as real-time and
make the box\'s user interface completely unresponsive any time the
real-time job is running.
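
(For reference, a minimal Linux/pthreads sketch that bears this out: under
SCHED_OTHER the only legal priority is 0, and SCHED_FIFO needs root or
CAP_SYS_NICE. The worker is just a stand-in for a compute thread; compile
with -pthread.)

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    for (;;)
        sleep(1);                   /* stand-in for compute work */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    struct sched_param sp;
    memset(&sp, 0, sizeof sp);

    /* Under SCHED_OTHER the only legal sched_priority is 0, so there is
       no way to say "this thread matters less" through this interface. */
    int rc = pthread_setschedparam(t, SCHED_OTHER, &sp);
    printf("SCHED_OTHER, prio 0:  %s\n", rc ? strerror(rc) : "ok");

    /* SCHED_FIFO has a real priority range, but setting it needs
       privilege, and then the thread can starve the rest of the box. */
    sp.sched_priority = 10;
    rc = pthread_setschedparam(t, SCHED_FIFO, &sp);
    printf("SCHED_FIFO, prio 10: %s\n", rc ? strerror(rc) : "ok");

    pthread_cancel(t);
    pthread_join(t, NULL);
    return 0;
}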

As a work around can you not put a few ms wait in one of the outer loops
so that the core OS gets a turn for the UI updates every second or so?

VBA in Excel fails that way if you don\'t take countermeasures.
DoEvents is the magic incantation that sorts it out there.

It does seem odd that you cannot be user real-time and still at a lower
priority than system level real-time. A cooperative approach to letting
the OS have a very short timeslice now and then might work...
In Windows and OS/2, you can set thread priorities any way you like.
Linux is notionally a multiuser system, so it\'s no surprise there are
limits to how much non-root users can increase priority.  I\'d be very
happy being able to _reduce_ the priority of my compute threads so that
the comms threads would preempt them, but nooooooo.

OS/2 was so good at it that someone wrote a system level driver to
implement the 16550 buffered serial chip as a device driver for the
original stock Intel 8250 part.

On the plus side, there\'s been some rumor that the thread scheduler was
going to be replaced.  None too soon, if so.

--
Regards,
Martin Brown
 
On 30/11/2020 18:17, albert wrote:
In article <rgudq8$3oc$1@dont-email.me>,
Jan Panteltje <pNaOnStPeAlMtje@yahoo.com> wrote:
For the rest, most of what I have written does a clean compile, unlike the
endless warning listings I have seen from others.

+1
I hate that mentality. The code should be as good as possible.
1).
I will not add superfluous casts to make a pedantic compiler
shut up. Those superfluous casts are a liability and may
hide future bugs. A warning may set me thinking whether the
code can be improved.

Production code should compile without any errors and warnings. That way
if a change breaks something it stands out clearly.

I always used to hate IBM\'s FORTRAN G for its uncertain message at the
end of an entirely successful compile:

NO DIAGNOSTICS GENERATED?

The "?" was a trailing null being translated to "?" on the chain printer
and their change procedure made it "too painful" to fix it.

We have generally always learned something when porting code to pedantic
compilers on new machines. ISTR the Z80 FORTRAN compiler was one of the
pickiest we encountered with none of the common extensions that were
tolerated on most of the mainframes at the time.

2).
Some warnings are proper but do not change my mind about the code.
It falls in the general concept of tests. Compare an expected
outcome with an actual outcome.
Some warnings just are part of the expected outcome.
(It is known that the double result will fit in a single,
so I ignore "possible truncation error". This will require
putting a paragraph in the design/spec on why the computation
results in a single. Beats adding a cast 1000 to 1.)
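
(As an invented example of that kind of warning: with -Wconversion or similar
the compiler flags the implicit double-to-float narrowing below; the cast
would silence it but documents nothing that the spec paragraph would not say
better.)

#include <stdio.h>

/* gcc -Wconversion warns on the implicit double -> float narrowing below. */
static float scale_reading(float raw)
{
    double gain = 0.0572957795;     /* invented calibration constant */
    return raw * gain;              /* warning: conversion may lose precision */
    /* return (float)(raw * gain);     the cast merely silences the warning */
}

int main(void)
{
    printf("%f\n", scale_reading(100.0f));
    return 0;
}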

Those managers requiring a "clean build" instead of quality
deserve to be shot. If you can't convince them by documenting
what the warnings mean, and why they result from your code,
find a better job.

Hardly. It makes it much easier to spot something that has gone awry if
the compiler generates a warning on the latest compile.

I knew one slightly mischievous Modula2 compiler where the default
behaviour for an attempt to use an uninitialised variable on the RHS of
an equation was to issue a compiler warning and code a hard execution
trap. I promoted all such "soft warnings" to hard compile time errors.

Compilers are not perfect, I like asm because there is no
misunderstanding about what I want to do.
And really, asm is NOT harder but in fact much simpler than C++.
C++ is, in my view, a crime against humanity.
To look at computing as objects is basically wrong.

In my experience objects are all but necessary in order to
get complexity under control.

They are good for some things but not for everything.

These days compilers do better optimising than most humans can ever hope
to since they understand the pipelines better as to what can execute in
parallel, speculative execution and how branch prediction interact.
However, they are not perfect at it and sometimes do really dumb things.

It is only worth hand tweaking code that has to be either insanely
accurate or very very fast (usually only a few lines of source worth). I
have found profile directed optimisation rather lacklustre YMMV.

Keeping intermediate results on the FPU stack can give a worthwhile gain
in precision in certain otherwise tricky problems. MS C withdrew
convenient access to the hardware-supported 80-bit reals around v6.0.
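
(A small C illustration: with gcc/clang on x86, long double is the 80-bit x87
format, while MSVC maps it to plain 64-bit double -- the withdrawn access
being lamented. The figures below are invented.)

#include <stdio.h>

int main(void)
{
    const long n = 10000000;        /* ten million terms of 0.1 */
    float       sf = 0.0f;
    double      sd = 0.0;
    long double sl = 0.0L;          /* 80-bit x87 format with gcc on x86 */

    for (long i = 0; i < n; i++) {
        sf += 0.1f;
        sd += 0.1;
        sl += 0.1L;
    }

    /* The float accumulator drifts badly; the wider intermediates stay close. */
    printf("float       sum = %.7f\n",  (double)sf);
    printf("double      sum = %.7f\n",  sd);
    printf("long double sum = %.7Lf\n", sl);
    printf("exact           = %.7f\n",  n * 0.1);
    return 0;
}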

SNIP
C is simple, you can define structures and call those objects if you are
so inclined.
I take a designer's stand. Objects or modules do not become fundamentally
different by the way they are implemented.

You can choose your bit width and specify basically everything.
Anyways, I am getting carried away.
Would it not be nice if those newbie programmers started with some
embedded asm, just to know
what is happening under the hood so to speak.

I've seen colleagues being raised on Lisp. That is fine, but recursive coding
can get you in real trouble in real life. A mental picture of
real computers is indispensable. An interesting example is generating
a new error flag each time you communicate. You will never find out that
the communication went wrong for the second time and that the internet
may be down.

Recursion is sometimes a useful tool though.

--
Regards,
Martin Brown
 
On Mon, 30 Nov 2020 20:59:50 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 11/30/20 8:26 PM, Joe Gwinn wrote:
On Mon, 30 Nov 2020 17:26:03 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 11/30/20 12:07 PM, albert wrote:
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can\'t have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there\'s no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can\'t guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say. So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread has been
started?

As I said way back in the summer, you can\'t change the priority or
scheduling of an individual user thread, and you can\'t mix user and
real-time threads in one process.

Thus you can have the default scheduler, which doesn\'t scale at all well
for this sort of job, or you can run the whole process as real-time and
make the box\'s user interface completely unresponsive any time the
real-time job is running.

In Windows and OS/2, you can set thread priorities any way you like.
Linux is notionally a multiuser system, so it\'s no surprise there are
limits to how much non-root users can increase priority. I\'d be very
happy being able to _reduce_ the priority of my compute threads so that
the comms threads would preempt them, but nooooooo.

I don\'t think that this is true for Red Hat Linux Enterprise Edition.
I don\'t recall if the realtime stuff is added, or built in. People
also use MRG with this.

More generally, not all Linux distributions care about such things.

Joe Gwinn

It\'s in the kernel, so they mostly don\'t get to care or not care. I\'ve
recently tried again with CentOS, which is RHEL EE without the
support--no joy.

Let me add that while it does take root permission to turn realtime
on, it does not follow that the application must also run as root.

Running a big application as root is a real bad idea, for both
robustness and security reasons.

What I\'ve seen done is that during startup, the application startup
process or thread is given permission to run sudo, which it uses to
set scheduling policies (FIFO) and numerical priority (real urgent)
for the main processes and threads before normal operation commences.

The other thing that is normally set up during startup is the
establishment of shared memory windows between processes. For many
applications, passing data blocks around by pointer passing is the
only practical approach.
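
(A minimal POSIX sketch of setting up such a shared-memory window; the name
and size are invented. A cooperating process that opens the same name maps
the same bytes, so passing a block reduces to handing over an offset. Link
with -lrt on older glibc.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/simdata"         /* hypothetical window name */
#define SHM_SIZE (1 << 20)          /* 1 MiB window */

int main(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    void *win = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any process that maps the same name sees the same bytes, so
       "passing a data block" is just handing over an offset into it. */
    strcpy(win, "field block 0");

    munmap(win, SHM_SIZE);
    close(fd);
    shm_unlink(SHM_NAME);
    return 0;
}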

Joe Gwinn
 
On 12/1/20 12:48 PM, Joe Gwinn wrote:
On Mon, 30 Nov 2020 20:59:50 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 11/30/20 8:26 PM, Joe Gwinn wrote:
On Mon, 30 Nov 2020 17:26:03 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 11/30/20 12:07 PM, albert wrote:
In article
c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a
computer is, and what an OS and languages
are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet",
Brooks points out that the factors-of-10
productivity improvements of the early days
were gained by getting rid of extrinsic
complexity--crude tools, limited hardware, and
so forth.

Now the issues are mostly intrinsic to an
artifact built of thought. So apart from more
and more Python libraries, I doubt that there
are a lot more orders of magnitude available.

It is ironic that a lot of the potentially avoidable
human errors are typically fence post errors. Binary
fence post errors being about the most severe since
you end up with the opposite of what you intended.

Not in a single processor (except perhaps the
Mill).

But with multiple processors there can be
significant improvement - provided we are
prepared to think in different ways, and the
tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on
massively parallel hardware. If you have ever done
any serious programming on such kit you quickly
realised that the process which ensures all the
other processes are kept busy doing useful things is
by far the most important.

I wrote a clusterized optimizing EM simulator that I
still use--I have a simulation gig just starting up
now, in fact. I learned a lot of ugly things about the
Linux thread scheduler in the process, such as that the
pthreads documents are full of lies about scheduling
and that you can\'t have a real-time thread in a user
mode program and vice versa. This is an entirely
arbitrary thing--there\'s no such restriction in Windows
or OS/2. Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context".
It's a bit of a cadge. I've never really seen a good
explanation of what that means.

Does anybody here know if you can mix RT and user
threads in a single process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of
priorities. You can install kernel loadable modules ( aka
device drivers ) to provide a timebase that will make
them eligible. SFAIK, you can\'t guarantee them to run.
You may be able to get close if you remove unnecessary
services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the
process have to be as well. Any compute-bound thread in a
realtime process will bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread
priority in a user process, but noooooo. They all have to
have the same priority, despite what the pthreads docs say.
So in Linux there is no way to express the idea that some
threads in a process are more important than others. That
destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread
has been started?

As I said way back in the summer, you can\'t change the priority
or scheduling of an individual user thread, and you can\'t mix
user and real-time threads in one process.

Thus you can have the default scheduler, which doesn\'t scale at
all well for this sort of job, or you can run the whole process
as real-time and make the box\'s user interface completely
unresponsive any time the real-time job is running.

In Windows and OS/2, you can set thread priorities any way you
like. Linux is notionally a multiuser system, so it\'s no
surprise there are limits to how much non-root users can
increase priority. I\'d be very happy being able to _reduce_
the priority of my compute threads so that the comms threads
would preempt them, but nooooooo.

I don\'t think that this is true for Red Hat Linux Enterprise
Edition. I don\'t recall if the realtime stuff is added, or built
in. People also use MRG with this.

More generally, not all Linux distributions care about such
things.

Joe Gwinn

It\'s in the kernel, so they mostly don\'t get to care or not care.
I\'ve recently tried again with CentOS, which is RHEL EE without
the support--no joy.

Let me add that while it does take root permission to turn realtime
on, it does not follow that the application must also run as root.

Running a big application as root is a real bad idea, for both
robustness and security reasons.

They\'re my boxes, and when doing cluster sims they usually run a special
OS for the purpose: Rocks 7. It has one head-node and N compute nodes
that get network-booted off the head, so I get a clean install every
time they boot up. So no worries there.

There\'s no point in running as root, though, because you can\'t change
the priority of user threads even then--only realtime ones.

The Linux thread scheduler is a really-o truly-o Charlie Foxtrot.

What I\'ve seen done is that during startup, the application startup
process or thread is given permission to run sudo, which it uses to
set scheduling policies (FIFO) and numerical priority (real urgent)
for the main processes and threads before normal operation
commences.

The other thing that is normally set up during startup is the
establishment of shared memory windows between processes. For many
applications, passing data blocks around by pointer passing is the
only practical approach.
Passing pointers only works on unis or shared-memory SMP boxes. My
pre-cluster versions did it that way, but by about 2006 I needed 20+
cores to do the job in a reasonable time.

I was building nanoantennas with travelling wave plasmonic waveguides,
coupled to SOI optical waveguides. (They eventually worked really well,
but it was a long slog, mostly by myself.)

The plasmonic section was intended to avoid the ~30 THz RC bandwidth of
my tunnel junction detectors--the light wave went transverse to the
photocurrent, so the light didn\'t have to drive the capacitance all at
once. That saved me over 30 dB of rolloff right there.

The issue was that metals that exhibit plasmons (copper, silver, and
gold) exhibit free-electron behaviour in the infrared, i.e. their
epsilons are very nearly pure, negative real numbers. That makes the
real part of their refractive indices very very small, so you have to
take very small time steps or the simulation becomes unstable due to
superluminal propagation.

Their imaginary epsilons are very large, so you need very small voxels
to represent the fields well. Small voxels and short time steps make
for loooong run times, especially when it's wrapped in an optimization loop.
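
(For a feel for the numbers: the usual Yee/FDTD Courant bound that forces
those tiny steps. The voxel size and minimum Re(n) below are invented
placeholders, not the actual values from the job described. Compile with -lm.)

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c0 = 2.99792458e8;               /* vacuum light speed, m/s */
    const double dx = 5e-9, dy = 5e-9, dz = 5e-9; /* hypothetical 5 nm voxels */
    const double n_min = 0.1;                     /* hypothetical small Re(n) of the metal in the IR */

    /* The fastest phase velocity on the grid sets the Courant limit. */
    double c_max  = c0 / n_min;
    double dt_max = 1.0 / (c_max * sqrt(1.0/(dx*dx) + 1.0/(dy*dy) + 1.0/(dz*dz)));

    printf("Courant-limited dt = %.3e s\n", dt_max);
    return 0;
}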

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 12/1/20 5:38 AM, Martin Brown wrote:
On 30/11/2020 22:26, Phil Hobbs wrote:
On 11/30/20 12:07 PM, albert wrote:
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs  <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors
are
typically fence post errors. Binary fence post errors being about
the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively
parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I
have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and
that
you can\'t have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there\'s no such
restriction in Windows or OS/2.  Dunno about BSD--I should try
that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can\'t guarantee
them
to run. You may be able to get close if you remove unnecessary
services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the process
have
to be as well.  Any compute-bound thread in a realtime process will
bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo.  They all have to have the same priority,
despite what the pthreads docs say.  So in Linux there is no way to
express the idea that some threads in a process are more important than
others.  That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread has been
started?

As I said way back in the summer, you can\'t change the priority or
scheduling of an individual user thread, and you can\'t mix user and
real-time threads in one process.

Thus you can have the default scheduler, which doesn\'t scale at all
well for this sort of job, or you can run the whole process as
real-time and make the box\'s user interface completely unresponsive
any time the real-time job is running.

As a work around can you not put a few ms wait in one of the outer loops
so that the core OS gets a turn for the UI updates every second or so?

Yeah, probably, but stuff like the disk driver and network stack
eventually get hosed too.

VBA in Excel fails that way if you don\'t take countermeasures.
DoEvents is the magic incantation that sorts it out there.

It does seem odd that you cannot be user real-time and still at a lower
priority than system level real-time. A cooperative approach to letting
the OS have a very short timeslice now and then might work...

In Windows and OS/2, you can set thread priorities any way you like.
Linux is notionally a multiuser system, so it\'s no surprise there are
limits to how much non-root users can increase priority.  I\'d be very
happy being able to _reduce_ the priority of my compute threads so
that the comms threads would preempt them, but nooooooo.

OS/2 was so good at it that someone wrote a system level driver to
implement the 16550 buffered serial chip as a device driver for the
original stock Intel 8250 part.

On the plus side, there\'s been some rumor that the thread scheduler
was going to be replaced.  None too soon, if so.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can\'t have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there\'s no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can\'t guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say. So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread has been
started?

Cheers

Phil Hobbs
--
This is the first day of the end of your life.
It may not kill you, but it does make your weaker.
If you can\'t beat them, too bad.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
 
In article <35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
I wrote a clusterized optimizing EM simulator that I still use--I have a
simulation gig just starting up now, in fact. I learned a lot of ugly
things about the Linux thread scheduler in the process, such as that the
pthreads documents are full of lies about scheduling and that you can\'t
have a real-time thread in a user mode program and vice versa. This is
an entirely arbitrary thing--there\'s no such restriction in Windows or
OS/2. Dunno about BSD--I should try that out.

I use the clone system call directly in my Forth compiler, resulting
in real tasks with shared memory.
https://github.com/albertvanderhorst/ciforth
My pre-emptive multitasker takes one screen plus one screen it shares
with the cooperative multitasker. (A screen is a unit of program size,
16 lines by 64 characters.)
The register preservation for system calls is documented the wrong way round.
Nobody is interested in which registers those system calls use; it is
important which registers will not be altered. That is particularly important
to know for the clone system call, because registers are the only
things that are possibly preserved.
Bottom line: in 64-bit Linux I use the undocumented feature that
R15 is preserved across a clone call.
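
(For readers who have not called it from C: a minimal, Linux-specific sketch
of clone() giving a task that shares memory with its parent, in the sense
described above. Error handling and stack sizing are pared to the bone.)

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared_counter;              /* visible to both tasks via CLONE_VM */

static int task(void *arg)
{
    (void)arg;
    shared_counter += 42;               /* writes land in the parent's memory */
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack)
        return 1;

    /* CLONE_VM shares the address space; SIGCHLD lets waitpid() reap it.
       The stack grows down on x86, hence stack + stack_size. */
    pid_t pid = clone(task, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared_counter = %d\n", shared_counter);   /* prints 42 */
    free(stack);
    return 0;
}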

A similar screen on behalf of Windows 32 makes this feature portable
(except for the library used, of course).
My experience with Windows 64 is bad. The multitasker of Windows contains
no 32/64 dependency (apart from the word size, obviously), but it just
doesn't work. Microsoft follows the bad habit of the Linux folks of not
properly documenting the system calls (DLLs), instead forcing
everybody to use "standard" C libraries or even C++ classes.

One example I use it on is a prime counting program. It speeds up approximately
with the number of cores. It has been published on comp.lang.forth.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?
32-bit ciforth works in BSD the same as in Linux, as far as I know.
I see no reason why you couldn\'t use a cooperative scheduler within a
pre-emptive task under my linux compiler. However that could be messy.
So in principle yes.

I guess the real question is whether it can be done with any amount
of ease by you, using your undisclosed favourite programming
language.

<SNIP>

Cheers

Phil Hobbs
--
This is the first day of the end of your life.
It may not kill you, but it does make your weaker.
If you can\'t beat them, too bad.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
 
In article <e7e34b09-d6ae-4003-98b7-6104e8ff84d6o@googlegroups.com>,
<pcdhobbs@gmail.com> wrote:
Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out that
the factors-of-10 productivity improvements of the early days were
gained by getting rid of extrinsic complexity--crude tools, limited
hardware, and so forth.

Now the issues are mostly intrinsic to an artifact built of thought. So
apart from more and more Python libraries, I doubt that there are a lot
more orders of magnitude available.

Of course there are. The very minimal thing to do in order to
have a computer play chess is to specify the rules.
The Google AlphaZero machine did exactly that, being "programmed"
by just specifying the outcome.
It beats the best chess engines after chewing on those rules for a
couple of days. (And no grandmaster is a match for the best chess
engines today.)

Cheers

Phil Hobbs
--
This is the first day of the end of your life.
It may not kill you, but it does make your weaker.
If you can\'t beat them, too bad.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
 
In article <rgudq8$3oc$1@dont-email.me>,
Jan Panteltje <pNaOnStPeAlMtje@yahoo.com> wrote:
For the rest, most of what I have written does a clean compile, unlike the
endless warning listings I have seen from others.

I hate that mentality. The code should be as good as possible.
1).
I will not add superfluous casts to make a pedantic compiler
shut up. Those superfluous casts are a liability and may
hide future bugs. A warning may set me thinking whether the
code can be improved.
2).
Some warnings are proper but do not change my mind about the code.
It falls in the general concept of tests. Compare an expected
outcome with an actual outcome.
Some warnings just are part of the expected outcome.
(It is known that the double result will fit in a single,
so I ignore "possible truncation error". This will require
putting a paragraph in the design/spec on why the computation
results in a single. Beats adding a cast 1000 to 1.)

Those managers requiring a "clean build" instead of quality
deserve to be shot. If you can't convince them by documenting
what the warnings mean, and why they result from your code,
find a better job.

Compilers are not perfect, I like asm because there is no
misunderstanding about what I want to do.
And really, asm is NOT harder but in fact much simpler than C++.
C++ is, in my view, a crime against humanity.
To look at computing as objects is basically wrong.

In my experience objects are all but necessary in order to
get complexity under control.

<SNIP>
C is simple, you can define structures and call those objects if you are
so inclined.
I take a designer's stand. Objects or modules do not become fundamentally
different by the way they are implemented.

You can choose your bit width and specify basically everything.
Anyways, I am getting carried away.
Would it not be nice if those newbie programmers started with some
embedded asm, just to know
what is happening under the hood so to speak.

I've seen colleagues being raised on Lisp. That is fine, but recursive coding
can get you in real trouble in real life. A mental picture of
real computers is indispensable. An interesting example is generating
a new error flag each time you communicate. You will never find out that
the communication went wrong for the second time and that the internet
may be down.

Groetjes Albert
--
This is the first day of the end of your life.
It may not kill you, but it does make your weaker.
If you can\'t beat them, too bad.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
 
On 11/30/20 12:07 PM, albert wrote:
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit,
you quickly realise that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can't have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there's no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in "the realtime context". It's a bit of
a cadge. I've never really seen a good explanation of what that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply in a different group of priorities. You
can install kernel loadable modules (aka device drivers) to provide
a timebase that will make them eligible. SFAIK, you can't guarantee
that they will run. You may be able to get close if you remove
unnecessary services.

I don't think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.

I'd be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say. So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn't the system call setpriority() work, after a thread has been
started?

As I said way back in the summer, you can't change the priority or
scheduling of an individual user thread, and you can't mix user and
real-time threads in one process.

Thus you can have the default scheduler, which doesn't scale at all well
for this sort of job, or you can run the whole process as real-time and
make the box's user interface completely unresponsive any time the
real-time job is running.

In Windows and OS/2, you can set thread priorities any way you like.
Linux is notionally a multiuser system, so it's no surprise there are
limits to how much non-root users can increase priority. I'd be very
happy being able to _reduce_ the priority of my compute threads so that
the comms threads would preempt them, but nooooooo.
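
To make the constraint concrete, here is a small stand-alone C sketch
(mine, not Phil's code) of what the POSIX knobs offer an ordinary,
non-realtime thread on Linux: under SCHED_OTHER the only legal static
priority is 0, so pthread_setschedparam() cannot be used to make one
thread in the process more important than another; the SCHED_FIFO and
SCHED_RR policies are the ones gated behind root or CAP_SYS_NICE.

/* Sketch only: what the POSIX priority interface does for a normal
 * (SCHED_OTHER) thread on Linux.  Build with: gcc -pthread demo.c */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void)arg;                      /* compute-bound work would go here */
    return NULL;
}

int main(void)
{
    pthread_t t;
    struct sched_param sp;
    pthread_create(&t, NULL, worker, NULL);

    /* For SCHED_OTHER the allowed static priority range is 0..0 ... */
    printf("SCHED_OTHER priority range: %d..%d\n",
           sched_get_priority_min(SCHED_OTHER),
           sched_get_priority_max(SCHED_OTHER));

    /* ... so asking for anything else is rejected. */
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = 1;
    int err = pthread_setschedparam(t, SCHED_OTHER, &sp);
    printf("pthread_setschedparam: %s\n", strerror(err));  /* EINVAL expected */

    pthread_join(t, NULL);
    return 0;
}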

On the plus side, there's been some rumor that the thread scheduler was
going to be replaced. None too soon, if so.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 11/30/20 12:01 PM, albert wrote:
In article <35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
I wrote a clusterized optimizing EM simulator that I still use--I have a
simulation gig just starting up now, in fact. I learned a lot of ugly
things about the Linux thread scheduler in the process, such as that the
pthreads documents are full of lies about scheduling and that you can't
have a real-time thread in a user mode program and vice versa. This is
an entirely arbitrary thing--there's no such restriction in Windows or
OS/2. Dunno about BSD--I should try that out.

I use the clone system call directly in my Forth compiler, resulting
in real tasks with shared memory.
https://github.com/albertvanderhorst/ciforth
My pre-emptive multitasker takes one screen plus one screen it shares
with the cooperative multitasker. (A screen is a unit of program size,
16 lines by 64 characters.)
The register preservation for system calls is documented the wrong way
around. Nobody is interested in which registers those system calls
use; what matters is which registers will not be altered. That is
particularly important to know for the clone system call, because
registers are the only things that are possibly preserved across it.
Bottom line: in 64-bit Linux I use the undocumented feature that
R15 is preserved across a clone call.
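
For readers who have not used it, here is a bare-bones C sketch of the
same mechanism through the glibc clone() wrapper, creating a task that
shares the parent's address space. This is only the general idea, not
ciforth's code, which issues the raw system call itself.

/* Minimal clone() example with a shared address space (glibc wrapper). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static volatile int shared_counter;     /* visible to both tasks via CLONE_VM */

static int child_fn(void *arg)
{
    (void)arg;
    shared_counter = 42;
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack)
        return 1;

    /* Share memory, filesystem info, file descriptors and signal handlers. */
    pid_t pid = clone(child_fn, stack + stack_size,   /* stack grows down on x86 */
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }

    waitpid(pid, NULL, 0);
    printf("shared_counter = %d\n", shared_counter);  /* prints 42 */
    free(stack);
    return 0;
}

At the raw-syscall level, where ciforth operates, none of this wrapper
exists, which is exactly where the question of which registers survive
the call comes in.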

A similar screen on behalf of 32-bit Windows makes this feature
portable (except for the library used, of course).
My experience with 64-bit Windows is bad. The multitasker for Windows
contains no 32/64 dependency (apart from the word size, obviously),
but it just doesn't work. Microsoft follows the bad habit of the Linux
folks of not properly documenting the system calls (DLLs), instead
forcing everybody to use "standard" C libraries or even C++ classes.

One example I use it on is a prime counting program. It speeds up
roughly in proportion to the number of cores. It has been published on
comp.lang.forth.


Does anybody here know if you can mix RT and user threads in a single
process in BSD?

32-bit ciforth works in BSD the same as in Linux, as far as I know.
I see no reason why you couldn't use a cooperative scheduler within a
pre-emptive task under my Linux compiler, though that could be messy.
So in principle, yes.

Oh, I could code around it, sure, but to fix the problem I'd have to
reproduce some major fraction of the OS's thread scheduling. For
performance it's very important to balance the workload between cores
very accurately, because everything moves at the speed of the slowest one.

It would wind up being some hideous kluge with an N-thread userspace
compute process and a 3N-thread real-time comms process sharing memory,
with all sorts of mutexes and stuff.

The Linux version is a port of the original Windows/OS2 version, so
there's no way I'm going to spend months recoding something that already
works. It's way way cheaper to build a bigger cluster. (Better for my
soul too--I'd be much less likely to start dreaming about sticking pins
in a Linus Torvalds voodoo doll.) ;)

I guess the real question is whether it can be done with any amount
of ease by you, using your undisclosed favourite programming
language.

C++.

Cheers

Phil Hobbs



--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Mon, 30 Nov 2020 17:26:03 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 11/30/20 12:07 PM, albert wrote:
In article <c8c0cf25-aa21-8b1d-b6f6-518624c35183@electrooptical.net>,
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-08-02 20:56, Les Cargill wrote:
Phil Hobbs wrote:
On 2020-08-02 08:46, Martin Brown wrote:
On 23/07/2020 19:10, Phil Hobbs wrote:
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and
what an OS
and languages are, overdue for the next revolution?

In his other famous essay, \"No Silver Bullet\", Brooks points out
that the
factors-of-10 productivity improvements of the early days were
gained by
getting rid of extrinsic complexity--crude tools, limited
hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of
thought. So apart
from more and more Python libraries, I doubt that there are a lot
more orders
of magnitude available.

It is ironic that a lot of the potentially avoidable human errors are
typically fence post errors. Binary fence post errors being about the
most severe since you end up with the opposite of what you intended.

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

The average practitioner today really struggles on massively parallel
hardware. If you have ever done any serious programming on such kit
you quickly realised that the process which ensures all the other
processes are kept busy doing useful things is by far the most
important.

I wrote a clusterized optimizing EM simulator that I still use--I have
a simulation gig just starting up now, in fact.  I learned a lot of
ugly things about the Linux thread scheduler in the process, such as
that the pthreads documents are full of lies about scheduling and that
you can\'t have a real-time thread in a user mode program and vice
versa.  This is an entirely arbitrary thing--there\'s no such
restriction in Windows or OS/2.  Dunno about BSD--I should try that out.


In Linux, realtime threads are in \"the realtime context\". It\'s a bit of
a cadge. I\'ve never really seen a good explanation of that that means.

Does anybody here know if you can mix RT and user threads in a single
process in BSD?

Sorry; never used BSD.

Realtime threads are simply of a different group of priorities. You
can install kernel loadable modules ( aka device drivers ) to provide
a timebase that will make them eligible. SFAIK, you can\'t guarantee them
to run. You may be able to get close if you remove unnecessary services.

I don\'t think this does what you want.

In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.

I\'d be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say. So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.

Wouldn\'t the system call setpriority() work, after a thread has bee
started?

As I said way back in the summer, you can\'t change the priority or
scheduling of an individual user thread, and you can\'t mix user and
real-time threads in one process.

Thus you can have the default scheduler, which doesn\'t scale at all well
for this sort of job, or you can run the whole process as real-time and
make the box\'s user interface completely unresponsive any time the
real-time job is running.

In Windows and OS/2, you can set thread priorities any way you like.
Linux is notionally a multiuser system, so it's no surprise there are
limits to how much non-root users can increase priority. I'd be very
happy being able to _reduce_ the priority of my compute threads so that
the comms threads would preempt them, but nooooooo.

I don't think that this is true for Red Hat Linux Enterprise Edition.
I don't recall if the realtime stuff is added, or built in. People
also use MRG with this.

More generally, not all Linux distributions care about such things.

Joe Gwinn


 
On 11/30/20 8:26 PM, Joe Gwinn wrote:
On Mon, 30 Nov 2020 17:26:03 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

<SNIP>

I don't think that this is true for Red Hat Linux Enterprise Edition.
I don't recall if the realtime stuff is added, or built in. People
also use MRG with this.

More generally, not all Linux distributions care about such things.

Joe Gwinn

It's in the kernel, so they mostly don't get to care or not care. I've
recently tried again with CentOS, which is RHEL EE without the
support--no joy.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Tue, 1 Dec 2020 13:46:29 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

<SNIP>

It's in the kernel, so they mostly don't get to care or not care.
I've recently tried again with CentOS, which is RHEL EE without
the support--no joy.

Let me add that while it does take root permission to turn realtime
on, it does not follow that the application must also run as root.

Running a big application as root is a real bad idea, for both
robustness and security reasons.

They're my boxes, and when doing cluster sims they usually run a special
OS for the purpose: Rocks 7. It has one head-node and N compute nodes
that get network-booted off the head, so I get a clean install every
time they boot up. So no worries there.

I hadn't heard of Rocks 7. I'll look into it.


There's no point in running as root, though, because you can't change
the priority of user threads even then--only realtime ones.

So, why not use realtime ones? This is done all the time. The
application code works the same.


The Linux thread scheduler is a really-o truly-o Charlie Foxtrot.

The legacy UNIX \"nice\" scheduler, retained for backward compatibility,
it unsuited for realtime for sure. And for what you are doing it
would seem.


What I've seen done is that during startup, the application startup
process or thread is given permission to run sudo, which it uses to
set scheduling policies (FIFO) and numerical priority (real urgent)
for the main processes and threads before normal operation
commences.
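
A minimal C sketch of that privileged step (names invented for the
example; the call that needs root, sudo, or CAP_SYS_NICE is the
pthread_setschedparam() request for SCHED_FIFO):

/* Sketch: promote the calling thread to SCHED_FIFO at a chosen priority.
 * Needs CAP_SYS_NICE or an rtprio rlimit, hence the sudo dance at startup.
 * Invented example, not anyone's production code. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static int make_realtime(int prio)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = prio;            /* 1..99 on Linux for SCHED_FIFO */

    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    if (err != 0)
        fprintf(stderr, "SCHED_FIFO failed: %s\n", strerror(err));
    return err;
}

int main(void)
{
    /* e.g. a comms thread gets a higher priority than the compute threads */
    if (make_realtime(50) == 0)
        puts("running under SCHED_FIFO");
    return 0;
}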

The other thing that is normally set up during startup is the
establishment of shared memory windows between processes. For many
applications, passing data blocks around by pointer passing is the
only practical approach.
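
In POSIX terms such a window is typically a shm_open()/mmap() pair; a
minimal sketch with invented names follows. A second process would
open and map the same name, and "pointer passing" then really means
passing offsets into the window, since the mapping address can differ
per process.

/* Sketch: create a named shared-memory window and map it.
 * Link with -lrt on older glibc.  Invented example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define WIN_NAME "/sim_window"        /* hypothetical name */
#define WIN_SIZE (1 << 20)            /* 1 MiB */

int main(void)
{
    int fd = shm_open(WIN_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, WIN_SIZE) < 0) { perror("ftruncate"); return 1; }

    void *win = mmap(NULL, WIN_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    /* Data blocks live inside this window; peers exchange offsets. */
    strcpy((char *)win, "block 0");

    munmap(win, WIN_SIZE);
    close(fd);
    shm_unlink(WIN_NAME);             /* remove the name when done */
    return 0;
}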

Passing pointers only works on unis or shared-memory SMP boxes. My
pre-cluster versions did it that way, but by about 2006 I needed 20+
cores to do the job in a reasonable time.

The present-day large-scale solution is to use shared memory via
Infiniband. Data is transferred between IB and local memory
using DMA (Direct Memory Access) hardware.

This is done with a fleet of identical enterprise-class PCs, often
from HP or Dell or the like.


I was building nanoantennas with travelling wave plasmonic waveguides,
coupled to SOI optical waveguides. (They eventually worked really well,
but it was a long slog, mostly by myself.)

The plasmonic section was intended to avoid the ~30 THz RC bandwidth of
my tunnel junction detectors--the light wave went transverse to the
photocurrent, so the light didn't have to drive the capacitance all at
once. That saved me over 30 dB of rolloff right there.

Wow! That's worth a bunch of trouble for sure.


The issue was that metals that exhibit plasmons (copper, silver, and
gold) exhibit free-electron behaviour in the infrared, i.e. their
epsilons are very nearly pure, negative real numbers. That makes the
real part of their refractive indices very very small, so you have to
take very small time steps or the simulation becomes unstable due to
superluminal propagation.

Are there any metals that don't exhibit plasmons?


Their imaginary epsilons are very large, so you need very small voxels
to represent the fields well. Small voxels and short time steps make
for loooong run times, especially when it's wrapped in an optimization loop.
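
For readers who have not met it, the constraint being described is the
usual explicit-FDTD (Courant) stability limit; in standard textbook
form (not Phil's exact scheme), with v_max the largest phase velocity
anywhere in the grid:

\Delta t \;\le\; \left[\, v_{\max}\sqrt{\frac{1}{\Delta x^{2}}
  + \frac{1}{\Delta y^{2}} + \frac{1}{\Delta z^{2}}}\,\right]^{-1}
\qquad\text{(cubic cells: } \Delta t \le \Delta x/(v_{\max}\sqrt{3})\text{)},
\qquad v_{\max}\sim \frac{c}{\min\,\mathrm{Re}\,n}.

A nearly free-electron metal makes Re(n) very small, so v_max is many
times c and the allowed time step shrinks by the same factor, on top of
the small voxels needed to resolve the fields inside the metal; that is
where the run time goes.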

If I recall/understand, the imaginary epsilons are the loss factors,
so this is quite lossy.

This sounds like a problem that would benefit from parallelism, and
shared memory. And lots of physical memory.

Basically, divide the propagation medium into independent squares or
boxes that overlap somewhat, with inter-square/box traffic needed only
for the overlap regions. So there will be an optimum square/box area
or volume.
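
That is the classic overlapping-domain (ghost- or halo-cell)
decomposition. A toy 1-D C sketch of the bookkeeping, with made-up
sizes and a trivial smoothing update standing in for the real field
update (in a cluster code the halo exchange would be MPI or RDMA
traffic rather than a loop):

/* Toy 1-D domain decomposition with one-cell halos.  Invented example:
 * each box owns N interior cells plus a ghost cell at each end that
 * mirrors its neighbour's edge value (periodic here, for simplicity);
 * only the halos need inter-box traffic. */
#include <stdio.h>
#include <string.h>

#define NBOX 4
#define N    8                      /* interior cells per box */

static double box[NBOX][N + 2];     /* [0] and [N+1] are ghost cells */

static void exchange_halos(void)
{
    for (int b = 0; b < NBOX; b++) {
        int left  = (b + NBOX - 1) % NBOX;
        int right = (b + 1) % NBOX;
        box[b][0]     = box[left][N];    /* neighbour's last interior cell */
        box[b][N + 1] = box[right][1];   /* neighbour's first interior cell */
    }
}

static void smooth_step(void)
{
    double next[NBOX][N + 2];
    for (int b = 0; b < NBOX; b++)
        for (int i = 1; i <= N; i++)     /* ghosts used only at the edges */
            next[b][i] = 0.5 * box[b][i] + 0.25 * (box[b][i - 1] + box[b][i + 1]);
    for (int b = 0; b < NBOX; b++)
        memcpy(&box[b][1], &next[b][1], N * sizeof(double));
}

int main(void)
{
    box[2][4] = 1.0;                     /* a single bump to diffuse */
    for (int step = 0; step < 10; step++) {
        exchange_halos();                /* the only inter-box communication */
        smooth_step();
    }
    printf("box[2][4] after smoothing: %g\n", box[2][4]);
    return 0;
}

The smaller the boxes relative to the overlap, the larger the
communication-to-compute ratio, which is where the optimum box size
comes from.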

Joe Gwinn
 
