not OT : fear...

Don Y wrote:
On 8/13/2022 4:32 PM, Les Cargill wrote:
snip

Performance of a kernel affects every job running on the machine.
You write a sloppy app?  <shrug>  YOUR app sucks -- but no one
else's.

If you have a simple kernel, then it need not be efficient as it
doesn't "do much".

A kernel is "swap()" ( register exchange ) plus libraries. swap()
is irreducible; libraries less so.

Looks like big voodoo but it's not. Now, throw in a MMU and life
gets interesting...
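
[A minimal sketch of that irreducible swap() -- illustrative only.
POSIX ucontext is obsolescent, but swapcontext() is literally the
register exchange being described; names and sizes here are arbitrary:]

    /* Two "tasks" sharing one CPU via swapcontext(), the
       register-exchange primitive.  Compiles on Linux/glibc. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[16384];

    static void task(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("task: tick %d\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* swap(): yield */
        }
    }

    int main(void)
    {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;            /* where task exits to */
        makecontext(&task_ctx, task, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: dispatching\n");
            swapcontext(&main_ctx, &task_ctx);   /* swap(): run task */
        }
        return 0;
    }

Everything beyond that exchange -- scheduling policy, queues, drivers --
is "the libraries".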

But, the more you do, the more your implementation
affects overall performance.  If faulting in a page is expensive,
then apps won't want to incur that cost and will try to wire-down
everything at the start.  Of course, not possible for everyone to
do so without running out of resources.  And, the poor folks who
either didn't know they *could* do that (or, felt responsible enough
NOT to impose their greed on others) end up taking the bigger hits.

When you look at big projects, productivity falls dramatically
as the complexity increases (app->driver->os, etc.)

But that's because human communication only goes so far.

Communication also happens *in* the "system".  What happens if THIS
ftn invocation doesn't happen (client/server is offline, crashed,
busy, etc.)?  How do we handle that case locally?  Is there a way
to recover?  Or, do we just abend?

But people like things they can see, even if that's an illusion.
The best computer is the one you don't even know is there until it
stops working.

And, why so many folks sit down and write code without having any formal
documents to describe WHAT the code must do and the criteria against
which it will be tested/qualified!  <rolls eyes>

If you start with the test harness you get more done. Once you have
the prototype up, then write the documents. You'll simply know more
that way.

I approach it from the top, down.  Figure out what the *requirements*
are (how can you design a test harness if you don't know what you'll be
testing or the criteria that will be important?).

"I'm making an <x>. It uses interfaces <y,z...>." That's how you know.

How do you know it will use those i/fs?  What if there are no existing
APIs to draw upon (you're making a motor controller and have never made one
before; you're measuring positions of an LVDT instrumented actuator;
you're...)

For a motor controller, chances are really good it'll be PWM. Etc., etc.

The rest is serialization.

When you're making *things*, you are often dealing with sensors and
mechanisms that are novel to a particular application...

But you can usually sketch that out in one paragraph. Not always. IMO,
when you use "big process", chances are you'll overdo it because your
risk perception is not well calibrated. Plus you have to break
things up...


Maybe you need message sequence charts for use cases. No problem.

<snip>
That's a dangerous place.

It's a THRILLING place!  It forces you to really put forth your best
effort.  And, causes you to think of what you *really* want to do -- instead
of just hammering out a schematic and a bunch of code.

It is fun but not the good sort of fun :)

<snip>
Nicely put - at least you could quantify the costs.

Don't you quantify the load you're going to put on a power supply before
you design the power supply?  It's called *design*, not "hacking".

Well of course. Might be off but you do what you can. Or you use a
bench supply and measure it.

And hacking can be quite legit.

[of *course* I had already done the math, that's called engineering!
a "programmer" would have just written the code and wondered why
the system couldn't meet its performance goals]

The final product often ran at 100% of real-time as it had to react
to the actions of the user; you can't limit how quickly a user drags a
barcode label across a photodiode (no, you can't put a "barcode reader"
in the design as that costs recurring dollars!)

I'm familiar.

It got to be a favorite exercise to see how quickly and *continuously* you
could swipe a barcode label across the sensor to see if the code would
crash, misread, etc.  The specification was for 100 inches per second
(which really isn't that fast if you are deliberately trying to *be*
fast) which would generate edges/events at ~15 kHz.  When an opcode fetch
takes O(1us), you quickly run out of instructions between events!
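
[The arithmetic: ~15 kHz means an edge every ~67 us.  At ~1 us per
opcode fetch, that's a budget of roughly 60-70 instructions to capture
the edge, timestamp it, update the decode state and get back before
the next one arrives.]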

Seems like a peripheral might have been in order. Of course, if you're
the peripheral...

[Of course, everything ground to a halt during such abuse -- but, picked up
where it left off as soon as your arm got tired!  :> ]

--
Les Cargill
 
On 8/13/2022 6:38 PM, Les Cargill wrote:
Don Y wrote:
On 8/13/2022 4:32 PM, Les Cargill wrote:
snip

Performance of a kernel affects every job running on the machine.
You write a sloppy app? <shrug> YOUR app sucks -- but no one
else's.

If you have a simple kernel, then it need not be efficient as it
doesn't "do much".

A kernel is "swap()" ( register exchange ) plus libraries. swap()
is irreducible; libraries less so.

Looks like big voodoo but it's not. Now, throw in a MMU and life
gets interesting...

A kernel (OS) is responsible for managing the hardware resources of
a machine/system for its users (applications). The extent that this
is done -- and the number of "users" (which may be *one* -- think DOS)
involved is part of the requirements document.

[FWIW, the Linux *kernel* is now ~30MSLOC -- a wee bit more than
"swapping registers"]

In the case of multiple "users" (apps/clients), it has to act as a referee
doling out resources (memory, MIPS, time) between those competing users
as well as isolating them from each other and facilitating communication
between them in an orderly fashion.

It also strives to maintain the illusion (to the clients) that each is
the sole user of the machine -- that each new user effectively has their
own machine. And, provide abstractions for the resources that hide
implementation details, reliability, etc.

For toy kernels, this can be little more than "programmer discipline" -- the
developer imposes these policies by his coding style. An ideal programmer
(in a non-malevolent environment) can provide these features at compile
time and, if the binary never changes, forever thereafter.

As the environment becomes less hospitable -- either malicious actors or
non-ideal developers -- the OS has to take deliberate steps to *impose*
these constraints. You'd likely *not* want a rogue/buggy task to be
able to twiddle some I/O that it has no purpose accessing just as you'd
not want it to be able to twiddle data that doesn't "belong" to it. Or,
fork an unlimited number of copies of itself, each consuming "modest"
resources but exhausting all available resources by their sheer numbers.

[We're not even addressing the non-malevolent application that just tries to
game the system to access more resources than it would otherwise be allowed.]

For this reason, *monolithic* kernels (even multithreaded) get very large
and complex; damn near *everything* has to reside under the control of the
kernel -- device drivers, communication systems, memory management and
allocation, timing services, etc. And, all of the APIs to access and
manipulate them.

And, because these components have to cooperate with each other, they
have implicit trust between themselves -- implicit EXPOSURE to each
other's shortcomings (bugs).

If you decompose the kernel into smaller cooperating ISOLATED services, then
the efficiency of communication (and data sharing) between them becomes
important. X can't just peek at something of interest in Y, even though both
are conceptually part of the same kernel.
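
[A sketch of what that communication costs -- illustrative only, with
socketpair() standing in for whatever message-port primitive a real
microkernel provides.  Where a monolithic kernel's X would simply load
Y's variable, an isolated X must do a request/reply round trip:]

    /* X asks Y for a value over an IPC channel instead of just
       reading Y's memory. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int ch[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, ch);

        if (fork() == 0) {                /* service "Y" */
            char req;
            long state_of_interest = 42;  /* X cannot peek at this */
            read(ch[1], &req, 1);         /* wait for a request */
            write(ch[1], &state_of_interest, sizeof state_of_interest);
            _exit(0);
        }

        char req = '?';                    /* client "X" */
        long reply;
        write(ch[0], &req, 1);             /* one load becomes... */
        read(ch[0], &reply, sizeof reply); /* ...a full round trip */
        printf("X received %ld from Y\n", reply);
        return 0;
    }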

But, the more you do, the more your implementation
affects overall performance. If faulting in a page is expensive,
then apps won't want to incur that cost and will try to wire-down
everything at the start. Of course, not possible for everyone to
do so without running out of resources. And, the poor folks who
either didn't know they *could* do that (or, felt responsible enough
NOT to impose their greed on others) end up taking the bigger hits.

When you look at big projects, productivity falls dramatically
as the complexity increases (app->driver->os, etc.)

But that's because human communication only goes so far.

Communication also happens *in* the "system". What happens if THIS
ftn invocation doesn't happen (client/server is offline, crashed,
busy, etc.)? How do we handle that case locally? Is there a way
to recover? Or, do we just abend?

But people like things they can see, even if that's an illusion.
The best computer is the one you don't even know is there until it
stops working.

And, why so many folks sit down and write code without having any formal
documents to describe WHAT the code must do and the criteria against
which it will be tested/qualified! <rolls eyes>

If you start with the test harness you get more done. Once you have the
prototype up, then write the documents. You'll simply know more that
way.

I approach it from the top, down. Figure out what the *requirements*
are (how can you design a test harness if you don't know what you'll be
testing or the criteria that will be important?).

"I'm making an <x>. It uses interfaces <y,z...>." That's how you know.

How do you know it will use those i/fs? What if there are no existing
APIs to draw upon (you're making a motor controller and have never made one
before; you're measuring positions of an LVDT instrumented actuator; you're...)

For a motor controller, chances are really good it'll be PWM. Etc., etc.

It may be a DC servo motor driven with DC or PWM. The position of the
rotor may be monitored and controlled. Or, just its angular velocity.
A stepper motor may be driven with "DC" or PWM. The position of the rotor
may be deduced with an encoder, monitoring back EMF, etc. Or, this may be
indicated indirectly as thru a gearbox (in which case, you'll have to
understand backlash in the gears). As you are directly commutating the
stator field, you have to be aware of the maximum acceleration profile
that the motor and mechanism can support, etc.

Each of these approaches has different hardware requirements and control
algorithms.

The rest is serialization.

When you're making *things*, you are often dealing with sensors and mechanisms
that are novel to a particular application...

But you can usually sketch that out in one paragraph. Not always. IMO,
when you use "big process", chances are you'll overdo it because your
risk perception is not well calibrated. Plus you have to break
things up...

Many problems aren't simple. Many actuators control multiple parameters and
sensors respond to multiple factors. Yet, your requirements document won't
(likely) speak in terms of composites but, rather, independent variables.

For a given type of granulation ("powdered pills"), the mass of the resulting
compressed tablet is proportional to the force experienced at a given
dimension (thickness).

But, so is the hardness.

And, so is the likeliness of "capping" (the top of the tablet "popping off"
because air was trapped in the granulation as it was compressed too quickly
or too deep in the die).

How to fix? We can change the rate of tablet production to give more
time for entrapped air to escape. Or, longer dwell times to get a better
"weld". Or, move the position in the die at which the tablet is
formed *up* (as long as we don't constrain the amount of "fill" that
this new position can support). Or, change the speed of the feeder.
Or, indicate that the granulation should be reformulated. Or, ...

What's the optimal strategy given that the manufacturer wants to:
- maximize profit
- minimize risk/exposure/loss

Maybe you need message sequence charts for use cases. No problem.

snip

That's a dangerous place.

It's a THRILLING place! It forces you to really put forth your best effort.
And, causes you to think of what you *really* want to do -- instead of just
hammering out a schematic and a bunch of code.

It is fun but not the good sort of fun :)

I disagree. Some of the most exciting projects I've worked on are those
that have had the most insane constraints.

Too often, people are lazy thinkers and don't push themselves to find
better/smarter solutions -- unless they MUST.

Given: Three soda bottles. Three chopsticks.

Problem: Place the chopsticks on the bottles in such a way that they
don't touch the ground.

There are a variety of trivial solutions -- the most obvious is to
arrange the bottles in an equilateral triangle formation and span
each pair of bottles with a chopstick.

Problem 2: Same as above but with just *two* bottles.

Again, a variety of trivial solutions -- span the two with a chopstick and
balance the remaining two on this one (may be tricky if the chopsticks are
round instead of square in cross section).

Problem 3: Same as above but with just *one* bottle.

...

Note that solutions 2 and 3 apply equally well to problem 1 -- yet likely
weren't presented because there were simpler solutions (were they really
any simpler?)

Why not lay the bottle(s) on their sides and balance the chopsticks
on them THAT way? Or, try to stuff all of them in the necks of the
bottle(s)?

There are lots of similar examples but each goes to point out how
"lazy" most solvers are.


snip
Nicely put - at least you could quantify the costs.

Don't you quantify the load you're going to put on a power supply before
you design the power supply? It's called *design*, not "hacking".

Well of course. Might be off but you do what you can. Or you use a
bench supply and measure it.

And hacking can be quite legit.

[of *course* I had already done the math, that's called engineering!
a "programmer" would have just written the code and wondered why
the system couldn't meet its performance goals]

The final product often ran at 100% of real-time as it had to react to the
actions of the user; you can't limit how quickly a user drags a barcode
label across a photodiode (no, you can't put a "barcode reader" in the
design as that costs recurring dollars!)

I'm familiar.

It got to be a favorite exercise to see how quickly and *continuously* you
could swipe a barcode label across the sensor to see if the code would crash,
misread, etc. The specification was for 100 inches per second (which really
isn't that fast if you are deliberately trying to *be* fast) which would
generate edges/events at ~15 kHz. When an opcode fetch takes O(1us), you
quickly run out of instructions between events!

Seems like a peripheral might have been in order. Of course, if you're
the peripheral...

Peripheral adds cost. Why not a peripheral to manage the battery charging?
And another for the serial comms? And another to run the display? Scan the
keypad?

I.e., what role should be "left" for THE processor?

If a user (not something that you can control) engages in a behavior
that could result in a fault, data corruption, etc. then all you
can reasonably be expected to do is NOT fault! You can't reach out and
slap him upside the head! (and *crashing* is always in bad form!)

[Of course, everything ground to a halt during such abuse -- but, picked up
where it left off as soon as your arm got tired! :> ]
 
On 08/13/2022 12:05 PM, Don Y wrote:
Were I charged with such a task, I'd define a virtual middleware to host
the app. Then, have a second team responsible for keeping that middleware
supported on newer browsers. This eliminates the problem of having the
application developers constantly trying to adjust to shifting sand.

That tends to be part of the framework. Life is getting simpler since
most browsers with the exception of Firefox and its derivatives are
Chromium based. Even Microsoft finally realized they can't write
browsers for sour owl shit. The Chromium based Edge is decent.

Then there's the Apple world. Even if you shoehorn something other than
Safari on an Apple device it will use Apple's WebKit, which they don't
seem to care much about.
 
On 08/13/2022 12:12 PM, Don Y wrote:
I built a CDI for my family's vehicle but, other than testing it, dad
wasn't keen on having something between the points and coil that he
didn't grok. (a shame as it was really well executed and packaged in
an EMI/RFI-sealed box).

I built one, which led to an interesting search for ferrite cores. I
finally tracked down one of IBM's many spinoffs in Kingston. The
paperwork was too much of a hassle so he gifted me with a bag of 'samples'.
 
On 08/13/2022 04:39 PM, Les Cargill wrote:
A blower means you'd better do something about fuel delivery too.

That doesn't get mentioned much but yeah.

Always the problem with shade tree engineering. 'That shiny new Carter
4-barrel is impressive. Is that tired old mechanical fuel pump going to
be able to feed it?'

Many variations. 'That high lift Edelbrock cam is really something. Now
about the rest of the gear.'

Dyno runs get pricey.
 
On 08/13/2022 07:03 PM, Don Y wrote:
If you are pregnant, then you are a female. (implication)
If you are a female, then you are pregnant. (converse)
If you are not pregnant, then you are not a female. (inverse)
If you are not female, then you are not pregnant. (contrapositive)

https://en.wikipedia.org/wiki/Tetralemma

Old Nagarjuna tends to gum up the works. The Mūlamadhyamakakārikā isn't
very light reading.
 
On 8/13/2022 10:44 PM, rbowman wrote:
On 08/13/2022 12:05 PM, Don Y wrote:
Were I charged with such a task, I'd define a virtual middleware to host
the app. Then, have a second team responsible for keeping that middleware
supported on newer browsers. This eliminates the problem of having the
application developers constantly trying to adjust to shifting sand.

That tends to be part of the framework. Life is getting simpler since most
browsers with the exception of Firefox and its derivatives are Chromium based.
Even Microsoft finally realized they can't write browsers for sour owl shit.
The Chromium based Edge is decent.

Then there's the Apple world. Even if you shoehorn something other than Safari
on an Apple device it will use Apple's WebKit, which they don't seem to care
much about.

But you're still dependent on someone else's "vision" for *their* product
(the browser).

We developed a pricey piece of kit (~megadollar) and one of the vendors
of a software package that we were using opted to make significant
changes to it.

And, NOT let us purchase more licenses for the "old version".

So, the codebase was essentially trashed and we had to start over.
And Manglement still didn't learn from the experience (too much
of a panic to get a new version of the product to market based on
the new licensed package to worry about what we'll do *next*!)

I'm always at the mercy of the hardware (components) I select.
But, I can be careful in minimizing my exposure needlessly.
And, choose sole-source components for which there are "similar"
offerings available -- even if not "compatible".

E.g., I check my code against three different hardware platforms:
SPARC, x86 and ARM -- just to try to catch any non-obvious dependencies
that I may have baked into an implementation. There's no guarantee that
I'll catch all of those. But, it's far better than picking *one*
platform and, later, discovering that you've got a boatload of
ties to that of which you were unaware.
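
[Byte order is the classic one: SPARC is big-endian, x86 little-endian,
ARM typically little-endian, so a latent assumption like this gives
different answers across those three builds:]

    /* A platform dependency hiding in plain sight: reinterpreting
       a word as bytes.  Prints 0x12 on SPARC, 0x78 on x86/ARM. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t word = 0x12345678;
        uint8_t first = *(uint8_t *)&word;  /* which byte is first? */
        printf("first byte in memory: 0x%02x\n", first);
        return 0;
    }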

[FWIW, when MULTICS was decommissioned, there were estimates that
it would cost upwards of 30 man-years to port the code -- for just
the OS -- to a modern architecture. What fool would have *relied*
on a 36b architecture? Where's the abstraction??]
 
On 8/13/2022 10:49 PM, rbowman wrote:
On 08/13/2022 12:12 PM, Don Y wrote:
I built a CDI for my family's vehicle but, other than testing it, dad
wasn't keen on having something between the points and coil that he
didn't grok. (a shame as it was really well executed and packaged in
an EMI/RFI-sealed box).

I built one, which led to an interesting search for ferrite cores. I finally
tracked down one of IBM's many spinoffs in Kingston. The paperwork was too much
of a hassle so he gifted me with a bag of 'samples'.

At the time, I was working for a company that made navigation equipment
for boats. So, the core was easy to come by -- as was a nice shielded box
to put everything in! (cuz you can't have a noisy power supply in the same
box as a sensitive receiver!)
 
On 08/14/2022 12:07 AM, Don Y wrote:
On 8/13/2022 10:49 PM, rbowman wrote:
On 08/13/2022 12:12 PM, Don Y wrote:
I built a CDI for my family's vehicle but, other than testing it, dad
wasn't keen on having something between the points and coil that he
didn't grok. (a shame as it was really well executed and packaged in
an EMI/RFI-sealed box).

I built one, which led to an interesting search for ferrite cores. I
finally tracked down one of IBM's many spinoffs in Kingston. The
paperwork was too much of a hassle so he gifted me with a bag of
'samples'.

At the time, I was working for a company that made navigation equipment
for boats. So, the core was easy to come by -- as was a nice shielded box
to put everything in! (cuz you can't have a noisy power supply in the
same box as a sensitive receiver!)

Ah, for the days of Packard 440 ignition wire and non-resistor plugs.
The neighbors didn't have to wonder when you got home as the 11 o'clock
news on the TV went to hell.

It kept the neighborhood hams riled up too.
 
On 08/14/2022 12:05 AM, Don Y wrote:
So, the codebase was essentially trashed and we had to start over.
And Manglement still didn't learn from the experience (too much
of a panic to get a new version of the product to market based on
the new licensed package to worry about what we'll do *next*!)

Our legacy codebase was developed on AIX. What used to be Mortice Kern
Systems (since bought by PTC) offered a cross-platform solution similar
to the open source Cygwin. PTC also improved the Windows X server and
has been keeping it current. The legacy products run fine on Windows 11,
albeit with a dated appearance in the Motif GUIs. We shut down the last
RS6000 box years ago and I'm not sure it would still boot, but the
codebase builds and runs on Linux as well.


I'm always at the mercy of the hardware (components) I select.
But, I can be careful in minimizing my exposure needlessly.
And, choose sole-source components for which there are "similar"
offerings available -- even if not "compatible".

When you are a software vendor you don't get to select the hardware. In
the public safety field it's increasingly difficult to select a browser
or install third party packages, hence the 'zero footprint' requirement.
You play with the cards you're dealt.

Browsers can present challenges but nothing to the extent of developing
an app to run on an iPad, an Android tablet, and a Windows desktop.
 
Don Y wrote:
On 8/13/2022 6:38 PM, Les Cargill wrote:
Don Y wrote:
On 8/13/2022 4:32 PM, Les Cargill wrote:
snip

Performance of a kernel affects every job running on the machine.
You write a sloppy app?  <shrug>  YOUR app sucks -- but no one
else's.

If you have a simple kernel, then it need not be efficient as it
doesn't "do much".

A kernel is "swap()" ( register exchange ) plus libraries. swap()
is irreducible; libraries less so.

Looks like big voodoo but it's not. Now, throw in a MMU and life
gets interesting...

A kernel (OS) is responsible for managing the hardware resources of
a machine/system for its users (applications).  The extent that this
is done -- and the number of "users" (which may be *one* -- think DOS)
involved is part of the requirements document.

[FWIW, the Linux *kernel* is now ~30MSLOC -- a wee bit more than
"swapping registers"]

That's just mission creep. Those, by the way, are the libraries to which
I refer. BTW, the scheduler and that sort of thing goes along with swap()
handily.

You're conflating the deployment case with the thing itself. To be fair,
the Linux community encourages this - with "kernel loadable
modules are part of the OS" and being completely driven by the bundling
process.

It didn't used to be that way - pSOS could be quite minimal and used
the much more flexible microkernel approach. It's just territorial
imperative and conforming to the Unix method - eminently reasonable but
clearly less manageable.

<snip>
For this reason, *monolithic* kernels (even multithreaded) get very large
and complex; damn near *everything* has to reside under the control of the
kernel -- device drivers, communication systems, memory management and
allocation, timing services, etc.  And, all of the APIs to access and
manipulate them.

That's been shown to be a tactical error for decades.

<snip>
For a motor controller, chances are really good it'll be PWM. Etc., etc.

It may be a DC servo motor driven with DC or PWM.  The position of the
rotor may be monitored and controlled.  Or, just its angular velocity.
A stepper motor may be driven with "DC" or PWM.  The position of the rotor
may be deduced with an encoder, monitoring back EMF, etc.  Or, this may be
indicated indirectly as thru a gearbox (in which case, you'll have to
understand backlash in the gears).  As you are directly commutating the
stator field, you have to be aware of the maximum acceleration profile
that the motor and mechanism can support, etc.

Each of these approaches has different hardware requirements and control
algorithms.

Each can also have an equivalent API, and the decision about what's where
is a deployment decision. You write a PWM "driver", a DC "driver", use
the same ioctl() or reasonable facsimile thereof, and go.

There should be separation of concern w.r.t. peripherals and the
actual system operation.
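
[A minimal sketch of that "same API, different back ends" idea -- all
names hypothetical, no particular RTOS assumed.  The control loop binds
to an ops table and never learns which kind of motor is behind it:]

    #include <stdio.h>

    struct motor_ops {
        int (*set_output)(double cmd);    /* PWM duty, DC level, step rate */
        int (*get_position)(double *pos); /* encoder, back-EMF estimate... */
    };

    static int pwm_set(double cmd)  { printf("PWM duty %.2f\n", cmd); return 0; }
    static int pwm_pos(double *pos) { *pos = 0.0; return 0; }

    static const struct motor_ops pwm_motor = { pwm_set, pwm_pos };

    /* trivial P controller; which driver gets plugged in is a
       deployment decision, not a control-law decision */
    static void control_step(const struct motor_ops *m, double setpoint)
    {
        double pos;
        m->get_position(&pos);
        m->set_output(setpoint - pos);
    }

    int main(void)
    {
        control_step(&pwm_motor, 0.5);
        return 0;
    }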

The rest is serialization.

When you're making *things*, you are often dealing with sensors and
mechanisms that are novel to a particular application...

But you can usually sketch that out in one paragraph. Not always. IMO,
when you use "big process", chances are you'll overdo it because your
risk perception is not well calibrated. Plus you have to break
things up...

Many problems aren't simple.  Many actuators control multiple parameters
and sensors respond to multiple factors.  Yet, your requirements document
won't (likely) speak in terms of composites but, rather, independent
variables.

Maybe. See also "separation of concern".

<snip>


Seems like a peripheral might have been in order. Of course, if you're
the peripheral...

Peripheral adds cost.  Why not a peripheral to manage the battery charging?
And another for the serial comms?  And another to run the display?  Scan
the keypad?

I.e., what role should be "left" for THE processor?

The role that retires the most risk. How much NRE is your
organization up for? There's certainly a place in the world for
obsessively shaving cents, or hundredths of cents, off cost but I
prefer a better class of customer.

Cycles are extremely cheap and getting cheaper.

<snip>

--
Les Cargill
 
On 8/14/2022 11:46 AM, rbowman wrote:
On 08/14/2022 12:05 AM, Don Y wrote:
So, the codebase was essentially trashed and we had to start over.
And Manglement still didn't learn from the experience (too much
of a panic to get a new version of the product to market based on
the new licensed package to worry about what we'll do *next*!)

Our legacy codebase was developed on AIX. What used to be Mortice Kern Systems
(since bought by PTC) offered a cross-platform solution similar to the open
source Cygwin. PTC also improved the Windows X server and has been keeping it
current. The legacy products run fine on Windows 11, albeit with a dated
appearance in the Motif GUIs. We shut down the last RS6000 box years ago and
I'm not sure it would still boot, but the codebase builds and runs on Linux as
well.

Imagine they opted not to offer additional licenses for the product line
they bought out. Instead, they coerce you to adopt some OTHER product offering
that they want to "push", going forward. I.e., you no longer can purchase
the components (platform/library) that your codebase requires to operate.

I'm always at the mercy of the hardware (components) I select.
But, I can be careful in minimizing my exposure needlessly.
And, choose sole-source components for which there are "similar"
offerings available -- even if not "compatible".

When you are a software vendor you don't get to select the hardware. In the
public safety field it's increasingly difficult to select a browser or install
third party packages, hence the 'zero footprint' requirement. You play with the
cards you're dealt.

But you can adopt a design approach that isolates much of your effort
from changes to that platform. And, gives you increased "portability"
(platform independence).

Of course, the rub is that you have to invest to create a portable platform on
which to build (like a Hardware Abstraction Layer).

Browsers can present challenges but nothing to the extent of developing an app
to run on an iPad, an Android tablet, and a Windows desktop.

In each case, someone else is controlling the underlying technology and
can do so to suit *their* needs without concern for yours. If they are
an 800 pound gorilla, even moreso -- you're stuck fighting to convince
your customers to stick with an existing, working implementation when
the gorilla is coercing them to "move forward" (AWAY from you).

This is why apps keep getting rewritten, rebugged, etc.

If you can define a VM on which to operate, then everything above
that level can mature and reap the benefits of evolution, bug fixes, etc.
regardless of the changes going on beneath.

You then have a separate problem of building that platform atop the
(changing) underlying system that others are controlling.

E.g., my current design *requires* a PMMU -- or something that can
be made to LOOK like one. But, it doesn't care what the page size
is or how *many* different page sizes are supported. Those become
efficiency issues (e.g., if page sizes were 1TB, then the value
of the PMMU vanishes and the platform is effectively disqualified).

But, an application needn't care; virtual memory is allocated in
page-size multiples so if your request takes 1 page or 500, your
code doesn't change! The API to the underlying system insulates
you from all this.
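
[A POSIX-flavored sketch of that insulation -- illustrative only, not
the API of the system being described.  The application asks for bytes;
the runtime rounds to whatever the page size happens to be:]

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);  /* 4KB? 64KB? don't care */
        size_t want = 100000;               /* bytes the app needs */

        /* MAP_ANONYMOUS is a Linux/BSD extension */
        void *p = mmap(NULL, want, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        printf("page size %ld: %zu bytes cost %ld pages\n",
               page, want, (long)((want + page - 1) / page));
        munmap(p, want);
        return 0;
    }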
 
On 8/14/2022 12:00 PM, Les Cargill wrote:
Don Y wrote:
On 8/13/2022 6:38 PM, Les Cargill wrote:
Don Y wrote:
On 8/13/2022 4:32 PM, Les Cargill wrote:
snip

Performance of a kernel affects every job running on the machine.
You write a sloppy app? <shrug> YOUR app sucks -- but no one
else's.

If you have a simple kernel, then it need not be efficient as it
doesn't "do much".

A kernel is "swap()" ( register exchange ) plus libraries. swap()
is irreducible; libraries less so.

Looks like big voodoo but it's not. Now, throw in a MMU and life
gets interesting...

A kernel (OS) is responsible for managing the hardware resources of
a machine/system for its users (applications). The extent that this
is done -- and the number of "users" (which may be *one* -- think DOS)
involved is part of the requirements document.

[FWIW, the Linux *kernel* is now ~30MSLOC -- a wee bit more than
"swapping registers"]

That's just mission creep. Those, by the way, are the libraries to which I
refer. BTW, the scheduler and that sort of thing goes along with swap()
handily.

Everything is arguably "mission creep". Why doesn't the application do
its own virtual memory management? Or, elect when to relinquish control of
the processor to another task? As well as choosing WHICH task? Why not
make all data global and let tasks pick and choose what they want to
access -- and know what NOT to access? Why not decode your own network
packets?

You put services in the kernel to improve reliability, fairness, structure,
ensure correctness, protect from abuse/misuse, etc.

I'm sure I could expose the registers of a disk controller to the application
and let *it* read bytes off the disk. And, come up with a CONVENTION by
which it shared that hardware resource with other tasks that might have
similar needs -- at the volume and file levels. Likewise, cooperate
to ensure your messages out the serial port aren't intermixed with the
characters from another task's messages for that same device.

You're conflating the deployment case with the thing itself. To be fair,
the Linux community encourages this - with "kernel loadable
modules are part of the OS" and being completely driven by the bundling
process.

I don't run Linux and don't believe it to be a realistic solution to most
of the designs I've created over the years. Can you spell "bloat"?

Pick some set of kernel modules and statically link them to the kernel.
Call that your kernel. It's the set of services you rely on to provide
abstraction, sharing, protection, etc. so your application doesn't need
to manage its own memory space, arbitrate for use of peripherals,
protect data structures, SHARE data, etc.

It's overhead. So, you only include the features that "add value"
to your design. That value may take the form of reducing development
time/cost or improving product reliability, troubleshooting, etc.
You don't see many 1N914's with heat sinks! :>
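
[A tiny illustration of "only include the features that add value" --
pattern only, not any particular kernel's configuration system.
Features are selected at build time so the unused ones never ship:]

    /* Compile with -DKERNEL_HAS_VM to pay for virtual memory
       support; omit it and the overhead disappears entirely. */
    #include <stdio.h>

    #ifdef KERNEL_HAS_VM
    static void vm_init(void) { puts("VM subsystem initialized"); }
    #else
    static void vm_init(void) { /* not configured: zero cost */ }
    #endif

    int main(void)
    {
        vm_init();
        puts("kernel up");
        return 0;
    }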

It didn't used to be that way - pSOS could be quite minimal and used
the much more flexible microkernel approach. It's just territorial imperative
and conforming to the Unix method - eminently reasonable but
clearly less manageable.

As above, you don't *need* any kernel/OS. My first product was implemented
super-loop, foreground-background and worked reliably. But, it was a closed
appliance and was brittle; modifications had to be aware of everything that
MIGHT run in the device to ensure there were no conflicts (in time, space,
etc.)

I moved on to designing multitasking devices with "zero overhead" scheduling
(literally a few instruction fetches to switch tasks, no scheduler, no
synchronization primitives, a single stack, etc.). Lots of bang-for-the-buck
(because there was so little "buck"). But, essentially just as brittle as
a non-multitasking solution; you still needed to know a lot about the
product in order to make modifications -- even small ones.

From there, truly preemptive multitasking, separate stacks, arbitrated access
to physical devices, etc. E.g., I could say:
    DEBUG("Now starting task foo at t = %d\n", time)
in each of N tasks and be assured that complete messages would appear
on the debug console; not parts of message 1 interleaved with message 3
or message 8.
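
[A sketch of what backs that guarantee -- pthreads used purely for
illustration; the actual primitives were obviously different.  The
console is an arbitrated device, so the lock spans the whole message
rather than each character:]

    #include <stdio.h>
    #include <stdarg.h>
    #include <pthread.h>

    static pthread_mutex_t console = PTHREAD_MUTEX_INITIALIZER;

    static void DEBUG(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        pthread_mutex_lock(&console);    /* claim the device... */
        vprintf(fmt, ap);                /* ...emit the whole message... */
        pthread_mutex_unlock(&console);  /* ...then release it */
        va_end(ap);
    }

    static void *task(void *arg)
    {
        DEBUG("Now starting task %s at t = %d\n", (char *)arg, 0);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, "foo");
        pthread_create(&t2, NULL, task, "bar");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }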

And physical address spaces that exceeded the logical space seamlessly
managed by the "runtime" (with compiler assist).

In each case, the developer is freed from dealing with detail that is
immaterial to solving the problem at hand. Imagine having to grow your own
silicon before you could put a rectifier in a circuit!

snip

For this reason, *monolithic* kernels (even multithreaded) get very large
and complex; damn near *everything* has to reside under the control of the
kernel -- device drivers, communication systems, memory management and
allocation, timing services, etc. And, all of the APIs to access and
manipulate them.

That's been shown to be a tactical error for decades.

Yet that's still the way most are designed. It's a lot easier to design
in a big, fat playground where you can grab anything that you want
without thinking about *partitioning* ahead of time! (Of course, just
as easily screw with things that haven't been partitioned!)

My latest RTOS is "decomposed". Everything is a service (even the
scheduler is separate from the "kernel"). Easier to design (reliably).
But much harder to get lots of performance -- all those protection domains
being crossed.

But, the advantages outweigh the costs. And, the balance will be even more
pronounced as hardware gets cheaper and faster (driving the costs down
further).

snip
For a motor controller, chances are really good it'll be PWM. Etc., etc.

It may be a DC servo motor driven with DC or PWM. The position of the
rotor may be monitored and controlled. Or, just its angular velocity.
A stepper motor may be driven with "DC" or PWM. The position of the rotor
may be deduced with an encoder, monitoring back EMF, etc. Or, this may be
indicated indirectly as thru a gearbox (in which case, you'll have to
understand backlash in the gears). As you are directly commutating the
stator field, you have to be aware of the maximum acceleration profile
that the motor and mechanism can support, etc.

Each of these approaches has different hardware requirements and control
algorithms.

Each can also have an equivalent API, and the decision about what's where
is a deployment decision. You write a PWM "driver", a DC "driver", use
the same ioctl() or reasonable facsimile thereof, and go.

So you solve ALL of the problems before solving the first? Yeah, that's
going to be an easy sell to your client/manglement. :>

There should be separation of concern w.r.t. peripherals and the
actual system operation.

The rest is serialization.

When you're making *things*, you are often dealing with sensors and
mechanisms that are novel to a particular application...

But you can usually sketch that out in one paragraph. Not always. IMO,
when you use "big process", chances are you'll overdo it because your
risk perception is not well calibrated. Plus you have to break
things up...

Many problems aren't simple. Many actuators control multiple parameters and
sensors respond to multiple factors. Yet, your requirements document won't
(likely) speak in terms of composites but, rather, independent variables.

Maybe. See also "separation of concern".

snip

Seems like a peripheral might have been in order. Of course, if you're
the peripheral...

Peripheral adds cost. Why not a peripheral to manage the battery charging?
And another for the serial comms? And another to run the display? Scan the
keypad?

I.e., what role should be "left" for THE processor?

The role that retires the most risk. How much NRE is your
organization up for? There's certainly a place in the world for
obsessively shaving cents, or hundredths of cents, off cost but I
prefer a better class of customer.

If your business model allows for and encourages design reuse, then
having spent the time/money to design/develop X means you now have
X available for use in other products "for free".

If you purchase a peripheral that implements X, then you are forever
paying for that vendor's development/maintenance -- which may address
needs that aren't important to *you*! Do you really care about
3-of-9 barcodes? Or, UPC? Or... (but the vendor has bundled all of
that into his product X offering)

Cycles are extremely cheap and getting cheaper.

And problems are large and getting larger!

In the 80's a video game was ~40KB of code and ran on an 8b processor
at less than 1 VAX MIPS. Now, they are gigabytes and need gigahertz,
multicore processors and video adapters that make many *processors* pale
in comparison.

Is the game that much more complex? Or, is it just "dressed up"
considerably?

For 100 (?) years, a doorbell was a simple power-button-annunciator circuit.
Now, it's a camera that recognizes *who* is at the door and notifies you
of their presence -- even if you are thousands of miles away -- in real time
and provides a bidirectional audio link to allow you to interact with that
visitor as he approaches your door!

Surveillance (CCTV) cameras used to just record to 1/2" tape or display
live video on a monitor. Now, they analyze the scene to identify active
entities in the scene, ascertain threat level, do real-time
reporting/notification, etc.

Dumb little "islands" (appliances) are soon going to be a thing of the past.
If you don't/can't talk to and integrate with other devices (likely made
by other manufacturers), you'll find yourself with dwindling market share.
 
On 08/14/2022 01:06 PM, Don Y wrote:
But you can adopt a design approach that isolates much of your effort
from changes to that platform. And, gives you increased "portability"
(platform independence).

Of course, the rub is that you have to invest to create a portable
platform on which to build (like a Hardware Abstraction Layer).

Yes, 20 years ago we could have invested in a portable platform to solve
a problem that never happened. The code base already contains some
unnecessarily complex code that anticipated needs that were never
needed. As Don Schlitz wrote:

"You got to know when to hold 'em,
Know when to fold 'em,
Know when to walk away,
And know when to run."

Modern operating systems are Hardware Abstraction Layers. 40 years ago I
was familiar with the nuances of the WD1793. Today I don't have a clue
how data winds up on an M.2 SSD or if the box even has an M.2 for that
matter.
 
On 8/14/2022 9:32 PM, rbowman wrote:
Modern operating systems are Hardware Abstraction Layers.

That's *one* aspect. But, they often provide mechanisms that have no direct
bearing on the underlying hardware. E.g., my RTOS is an object-based,
capability-driven system. Ideally, support for protection domains lets
me ensure the security of the capabilities but one could design similar
with less robust guarantees in the absence of protection domains.
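
[For readers unfamiliar with the model, a toy illustration -- not the
actual API of the RTOS being described.  A capability bundles an object
reference with the rights its holder was granted; every operation is
validated against those rights before the object is touched.  In a real
system the handle lives kernel-side, so it can't be forged:]

    #include <stdio.h>

    enum rights { CAP_READ = 1, CAP_WRITE = 2 };

    struct capability {
        int object_id;    /* which object */
        unsigned rights;  /* what the holder may do with it */
    };

    static int kernel_write(struct capability cap, int value)
    {
        if (!(cap.rights & CAP_WRITE))
            return -1;    /* holder was never granted write access */
        printf("object %d <- %d\n", cap.object_id, value);
        return 0;
    }

    int main(void)
    {
        struct capability read_only = { 7, CAP_READ };
        if (kernel_write(read_only, 42) < 0)
            printf("write denied: capability lacks CAP_WRITE\n");
        return 0;
    }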

40 years ago I was
familiar with the nuances of the WD1793. Today I don't have a clue how data
winds up on an M.2 SSD or if the box even has an M.2 for that matter.

Designing deeply embedded devices means I'm often "down in the weeds" trying
to make sense of some bit of hardware -- in *or* out of the processor.

Processors, nowadays, are far more complex than even the most complex
peripherals/support chips from years past. 1000+ pp datasheets are
not uncommon.

Plus, a lot of devices aren't quite as "esoterically" designed as they would
have been in the past; it's more like an "80% solution" to most problems
(counters/timers are a prime example) where the i's aren't always dotted
nor the t's crossed.

And, if you're designing a non-toy OS (anything more than a simple "task
switcher"), you need to survey the range of potential devices before settling
on the abstractions that you want to implement, lest you make choices that
aren't particularly portable.

OTOH, what you can get for a few dollars is amazing! Definitely worth any
hassles or blemishes in implementation!
 
On 2022-08-13, Don Y <blockedofcourse@foo.invalid> wrote:
On 8/13/2022 8:39 AM, rbowman wrote:
On 08/12/2022 09:39 PM, Don Y wrote:
On 8/12/2022 8:34 PM, rbowman wrote:
On 08/12/2022 08:24 PM, Don Y wrote:
I've never written a JS applet but can't imagine it would be
much more than an afternoon exercise (for most "pages"). Esp
given the highly developed frameworks that make it a cut and
paste sort of ordeal.

mmm-hmmm. The COVID gap has messed up my time sense but I think we're
close to 3 years into developing what amounts to an Angular SPA.
That's 3 to 4 1/2 full time programmers and 3 to 4 testers. I count
myself as 1/2 since most of what I've done is integrating the ESRI
4.20 Javascript API into one of the panels and other odd tasks while
doing enhancements and fixes in the legacy code.

That's not an "appLET". You're developing a real application suite/system
*under* a Java framework. I'd wager most JS "coders" are busy knocking
out what could reasonably be called cookie-cutter "web sites". Not much
"engineering", there! (hence the use of *coders*)

NOT Java!!! That was a very unfortunate naming. This is a replacement for a
legacy Java app that was the original attempt at a cross platform solution.
That became an albatross when the browsers dropped support for Java applets
(and ActiveX) because of the security holes.

It is a challenge to replace a suite of legacy programs with a browser based
application but many of the RFPs have been specifying zero footprint. It
certainly makes updates a lot easier, particularly for mobile resources.

I don't believe you can write a portable, browser-based application with
any guarantee of long-term (10 years?) support. Browsers are continually
evolving; in three years we may be on *U*HTML 37.

https://www.elizium.nu/scripts/lemmings/

Possibly no guarantee.... but that's 16 years.
That would run on IE4 and whatever version Netscape was back then.
(and still works)

Were I charged with such a task, I'd define a virtual middleware to host
the app. Then, have a second team responsible for keeping that middleware
supported on newer browsers. This eliminates the problem of having the
application developers constantly trying to adjust to shifting sand.

So far as I know, if you read the standards and exploit them in as
much as they are consistently implemented by browser suppliers, you
end up with high-reliability scripts. If you go chasing latest browser
features, not so much.


--
Jasen.
 
On 08/16/2022 04:27 AM, Jasen Betts wrote:
So far as I know, if you read the standards and exploit them in as
much as they are consistently implemented by browser suppliers, you
end up with high-reliability scripts. If you go chasing latest browser
features, not so much.

Counter examples are browsers removing NPAPI or Flash support for
security reasons. 20 years ago a Java applet was a reasonable approach
to developing a responsive web page.
 
On 8/16/2022 3:27 AM, Jasen Betts wrote:
On 2022-08-13, Don Y <blockedofcourse@foo.invalid> wrote:
On 8/13/2022 8:39 AM, rbowman wrote:
On 08/12/2022 09:39 PM, Don Y wrote:
On 8/12/2022 8:34 PM, rbowman wrote:
On 08/12/2022 08:24 PM, Don Y wrote:
I've never written a JS applet but can't imagine it would be
much more than an afternoon exercise (for most "pages"). Esp
given the highly developed frameworks that make it a cut and
paste sort of ordeal.

mmm-hmmm. The COVID gap has messed up my time sense but I think we're
close to 3 years into developing what amounts to an Angular SPA.
That's 3 to 4 1/2 full time programmers and 3 to 4 testers. I count
myself as 1/2 since most of what I've done is integrating the ESRI
4.20 Javascript API into one of the panels and other odd tasks while
doing enhancements and fixes in the legacy code.

That's not an "appLET". You're developing a real application suite/system
*under* a Java framework. I'd wager most JS "coders" are busy knocking
out what could reasonably be called cookie-cutter "web sites". Not much
"engineering", there! (hence the use of *coders*)

NOT Java!!! That was a very unfortunate naming. This is a replacement for a
legacy Java app that was the original attempt at a cross platform solution.
That became an albatross when the browsers dropped support for Java applets
(and ActiveX) because of the security holes.

It is a challenge to replace a suite of legacy programs with a browser based
application but many of the RFPs have been specifying zero footprint. It
certainly makes updates a lot easier, particularly for mobile resources.

I don't believe you can write a portable, browser-based application with
any guarantee of long-term (10 years?) support. Browsers are continually
evolving; in three years we may be on *U*HTML 37.

https://www.elizium.nu/scripts/lemmings/

Possibly no guarantee.... but that's 16 years.
That would run on IE4 and whatever version Netscape was back then.
(and still works)

Were I charged with such a task, I'd define a virtual middleware to host
the app. Then, have a second team responsible for keeping that middleware
supported on newer browsers. This eliminates the problem of having the
application developers constantly trying to adjust to shifting sand.

So far as I know, if you read the standards and exploit them in as
much as they are consistently implemented by browser suppliers, you
end up with high-reliability scripts. If you go chasing latest browser
features, not so much.

If it "still works" then it has deliberately kept its implementation tied
to the past. Browsers evolve because of perceived needs. Deliberately
avoiding new features limits what you can do in an application to whatever
was possible "in ages gone by".

Once you drink the kool-aid, you're forever stuck with the possibility
of customer A running NewBrowser while customer B clings to OldBrowser.

My current design has to accommodate hardware and software "additions"
incrementally added to an installation (deployment) WITHOUT requiring
all existing hardware/software to be "upgraded" (to the latest release).
It does this by supporting multiple interface versions for each piece
of software. So, an application/device can (late-) bind to whatever
version of an interface it needs.
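
[A sketch of that late binding -- all names hypothetical, the real
mechanism isn't described here.  Old clients keep asking for, and
getting, v1 of an interface while new clients bind to v2:]

    #include <stdio.h>
    #include <string.h>

    struct iface {
        const char *name;
        int version;
        void (*invoke)(void);
    };

    static void sensor_v1(void) { puts("sensor v1 semantics"); }
    static void sensor_v2(void) { puts("sensor v2 semantics"); }

    static struct iface registry[] = {
        { "sensor", 1, sensor_v1 },
        { "sensor", 2, sensor_v2 },   /* both versions coexist */
    };

    static struct iface *bind_iface(const char *name, int version)
    {
        for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
            if (!strcmp(registry[i].name, name) &&
                registry[i].version == version)
                return &registry[i];
        return NULL;   /* version not offered */
    }

    int main(void)
    {
        bind_iface("sensor", 1)->invoke();  /* legacy device, old binding */
        bind_iface("sensor", 2)->invoke();  /* new software, new binding */
        return 0;
    }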
 
On 8/16/2022 9:57 AM, rbowman wrote:
On 08/16/2022 04:27 AM, Jasen Betts wrote:
So far as I know, if you read the standards and exploit them in as
much as they are consistently implemented by browser suppliers, you
end up with high-reliability scripts. If you go chasing latest browser
features, not so much.

Counter examples are browsers removing NPAPI or Flash support for security
reasons. 20 years ago a Java applet was a reasonable approach to developing a
responsive web page.

It's not just pushing client side processing to increase responsiveness.
How would you, for example, run a virtual machine of your own design (e.g.,
to host an application) *in* a browser? Each "tab" is an independent
(isolated) VM.

So, to build a tabbed interface, you'd have to do it *within* a single
session -- *or*, require server-side coordination of those multiple
sessions. Anything that "tab #1" needed to know about "tab #2" would
require a round-trip to the server, each time tab #2 dicked with something.

[To be truly amusing, imagine opening two different browsers on the same
machine. Should they *ever* be permitted to "co-operate"?]
 
On Tuesday, August 16, 2022 at 10:28:26 AM UTC-7, Don Y wrote:

[To be truly amusing, imagine opening two different browsers on the same
machine. Should they *ever* be permitted to "co-operate"?]

No need to imagine, I'm doing it. The main cooperation required is just that
I can drag an address from the browser that mangles the page to the
icon in the dock of the other browser, and see if that helps. Often, it does.

Latest issue: light-grey font on white. Gotta cut-and-paste into a text editor
to see it; multiple browsers don't help. I certainly DO miss the old 'show source'
feature, which allowed some disentanglements in the past.
 
