Hi Phil,
On 9/4/2014 12:03 PM, Phil Hobbs wrote:
> On 9/4/2014 2:35 PM, Don Y wrote:
[attrs elided]
Or, an OS that only supports a single execution context (vs. one
that supports multiples).
Like DOS? Everybody's had multiple threads for the last 20 years. Unix
and OS/2 had it further back than that.
How long a feature has been available wasn't the point of my comment.
Rather, to *illustrate the types of issues* that make an OS sufficiently
different from another -- at a technological level -- to complicate
(or, alternatively, enhance) an application's development on one
platform over another.
I understand, but even the Raspberry Pi has a pthreads library. What are
you writing for, Voyager 2?
Please read what I wrote.
I did read it. I just think that it's an irrelevant counterfactual
digression, because there isn't a lot of opportunity to suffer from any
of these constraints on modern hardware.
WHICH IS EXACTLY THE POINT I WAS MAKING TO LASSE! I.e., there is no
inherent difference between Windows and OSX (PC vs Mac) -- in terms of
their ability to support "engineering programs".
E.g., an app intended to run on a 64-bit OS trying to be hosted on a
32b one.
Or, an app that expects gobs of memory to be available (real or virtual)
instead of what the current OS supports.
That's a bug, not a porting issue.
No, it isn't. How is "myarray[65535][65535]" a bug? It's syntactically
correct. It will run on an OS that supports big memory spaces (virtual
or real) but not on one that doesn't.
It's a bug because it assumes that the program will have at least 4 gig
of stack space, which it won't on any machine that I'm familiar with.
Where does it assume this is an auto variable and not a static?
Or, that it's an array of "bytes"? The amount of memory that
an app requires doesn't classify certain apps as "bugs" and others
as "non-bugs". How many bytes of store *can* an application have?
As long as you don't exceed the syntactic constraints of the language
(i.e. run out of identifiers, etc.), the limit is determined by the
OS under which the app runs.
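To make the storage-class point concrete, a minimal sketch (mine, purely
for illustration -- assumes C on a 64-bit hosted OS; gcc on x86-64 may
also want -mcmodel=medium to link an object this big):

    #include <stdio.h>

    /* ~4GB, but *static*: it lands in BSS and is demand-paged in,
       not carved out of anybody's stack. */
    static unsigned char myarray[65535][65535];

    int main(void)
    {
    #if 0
        /* THIS would be the "4 gig of stack" bug -- an automatic
           (auto) array of the same size: */
        unsigned char myarray[65535][65535];
    #endif
        myarray[65534][65534] = 1;      /* touches a single page */
        printf("%u\n", (unsigned)myarray[65534][65534]);
        return 0;
    }

Same declaration, same size; only the storage class (and the OS's
willingness to back that much virtual memory) decides whether it flies.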
In the early 80's, I had to "crack" a "security device" implemented
in hardware. Basically an "undocumented FSM". My approach was
straightforward: build an array of next_state[current_state][stimulus].
Initialize it with "unknown".
Then, iteratively query the hardware to determine the "current_state".
For that state, find a "stimulus" for which the "next_state" was, as
yet, unknown. Apply that stimulus, driving the machine into a new (?)
next_state.
Lather, rinse, repeat.
When all entries in the array are known, you have a complete map of
the device -- which you can now reduce to "next state logic".
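The mapping loop amounted to something like the sketch below (just an
illustration, not the original code; read_state() and apply_stimulus()
are hypothetical stand-ins for whatever actually pokes the hardware):

    #include <string.h>

    #define NSTATES 256          /* 8-bit state vector          */
    #define NSTIMS  256          /* 8-bit stimulus              */
    #define UNKNOWN 0xFFFFu      /* no legal state exceeds 0xFF */

    extern unsigned read_state(void);            /* query current_state */
    extern void     apply_stimulus(unsigned s);  /* drive the device    */

    /* 16-bit entries -- the "double it to make life easier" version */
    static unsigned short next_state[NSTATES][NSTIMS];

    void map_device(void)
    {
        unsigned unknowns = NSTATES * NSTIMS;
        unsigned s, stim;

        memset(next_state, 0xFF, sizeof next_state);   /* all UNKNOWN */

        /* assumes every state/transition is eventually reachable */
        while (unknowns > 0) {
            s = read_state();                     /* where are we now?  */

            for (stim = 0; stim < NSTIMS; stim++) /* pick an unexplored */
                if (next_state[s][stim] == UNKNOWN)
                    break;                        /* ... exit, if any   */
            if (stim == NSTIMS)
                stim = 0;           /* all known here: take any edge and
                                       keep walking toward unknowns     */

            apply_stimulus(stim);
            if (next_state[s][stim] == UNKNOWN) {
                next_state[s][stim] = (unsigned short)read_state();
                unknowns--;
            }
        }
        /* next_state[][] is now the complete map: reduce to logic */
    }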
In the early 80's getting 64KB of "data" in an application was
tedious. Two or three times that was even more so! (E.g., for
an 8-bit state vector, you need at least (256+1)(256) bytes just
to encode the next_state[][] -- double that if you want to make
it easier on yourself!)
Rather than screw around with a MS compiler (which wouldn't support
such large objects) *and* an MS OS (which was still dealing with
silly "segment registers" and tiny address spaces), I wrote the
algorithm to run on a UN*X box (knowing it would still "thrash")
without having to cope with the (MS) OS's silly limitations.
OTOH, "speed = distance / time" is a *bug* -- because "time" can
conceivably be zero. (regardless of the OS)
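That kind of bug has to be handled in the code itself, wherever it
runs -- a trivial sketch:

    /* the divisor check belongs to the program, not to any particular OS */
    double speed(double distance, double time)
    {
        if (time == 0.0)
            return 0.0;     /* or flag the error however the app prefers */
        return distance / time;
    }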
There are lots of different kinds of bugs.
Or, an app that expects real-time guarantees trying to run on an
OS that has no concept of such.
Etc.
Or, an OS that supports true parallel processing (vs. one that only
supports the illusion of same).
That makes almost no difference at all as far as the coder is
concerned.
SIMD is a bit different, but usually the compiler looks after
vectorizing stuff. But multithreaded apps on one core vs multiple cores
is no problem--usually you just ignore the issue, because the
serialization problems are just the same.
Don't limit your thinking to SMP. :> (remember, apps can be written
BEFORE "current" features are available or in widespread use! I've been
writing multitasking and RT apps for over 30 years -- the latter still
isn't "widely embraced" in the programming community)
I'm not limiting myself to SMP, I'm saying that there's no difference
between the programming models of single-core multithread and SMP, or
various flavours of NUMA, as long as cache-coherency is maintained.
So, NORMA systems aren't considered "parallel processing"?
As in Norma Jean Brady? I don't know of any actual NORMA boxes, do you?
You can make something similar with a cluster, but we were talking about
multithread vs multicore.
*I* was talking about "parallel processing". You assumed multicore
approaches to parallel algorithms. E.g., the SETI example makes no
reliance on number of cores (*or* processes) on an individual
node. The "system" exists above/outside the individual nodes.
*It* has provisions -- exported to the constituent nodes -- that
allow them to operate in the collective.
I am unaware of any "Consumer" kit that is NORMA. But, that doesn't
alter the validity of my statement. *You* keep trying to make these
issues "specific". *I* stated them as "general constraints".
[My home automation system relies extensively on NORMA to get the
bandwidth and network transparency that it exploits (multimedia
and fault tolerance). Doing so in more conventional approaches
pushes all the work into the application (a bad idea, IME)]
An app written expecting a *particular* parallel processing model (e.g.,
before multiple core machines were en vogue) isn't easily ported to
an environment where you are limited to a single active thread.
Okay, so a Cray program won't run on a toaster, even in 2014. So what?
I thought we were talking about running engineering programs on
desktops?
I made a general statement. You chose to ignore its generality and
apply it to specifics:
If you're just writing the Great American Mythological Computer Novel, OK.
I made a point in presenting an argument. Your counterpoints suggest
a failure to see those abstractions. If you want to live in *exactly*
2014 -- with no concept of 2015 or beyond (or awareness of what has
preceded), so be it.
"If there are no underlying technological differences/advantages to
one OS over another, then it is conceivable and practical to use
an emulation library or other middleware to RELATIVELY easily
port an application from one to another."
NOT having an RPC/IPC facility is a technological difference.
NOT having a network stack (or even the NOTION of a "remote" host)
is a technological difference.
Right, that's the toaster controller again. Are you running SPICE on a
toaster?
NOT having a capabilities based security model is a technological
difference (i.e., it impacts the code you write!).
Right, I said that at first. Security APIs and GUIs are the things that
make porting a pain.
NOT having support for big/virtual memory is a technological difference
(want to REFACTOR your code as OVERLAYS???).
NOT having support for the concept of "timeliness" in an OS is a
technological difference.
Real-time apps are generally tightly tied to the hardware, because
otherwise you wouldn't care about guaranteed latency bounds. So that's a
whole other issue, not just OS and processor.
No. RT doesn't preclude hardware independence. It's *easier* if you
can see the actual hardware and its performance. But, unless your
RT system is completely closed (i.e., boring and naive), the workload
can change over time. Effectively, such changes look like a constant
load with changing *hardware* (capabilities).
NOT having per-task namespaces is a technological difference.
What? A namespace is a source code concept. You can use any language you
like, even on toasters.
Isn't all of this about porting *code* to other OS's??
A namespace is a security concept. Write a piece of code that
generates and attempts to resolve every conceivable name. Run that in
a single, unified namespace system and you will get different results
than in a system that has per-task namespaces. I.e., it will
"stumble" on names that it shouldn't be accessing!
Some OS's try to give the illusion of per-task (process) namespaces
for certain devices. E.g., /dev/tty being the controlling [pt]ty for
a process -- regardless of the actual device that is wired to it.
With true independent namespaces, the OS generalizes this and allows
parents to fabricate (and, thus, CONSTRAIN) their offspring (a child
can't access <something> if it can't *name* it!)
E.g., I can bind "/input" to a particular hardware device, file,
etc. Then, "/foo/bar/whatever/output" to another. And, spawning a
child with the intent of copying /input to /foo/bar/whatever/output
will *ensure* that it doesn't "accidentally" access "/secret" or
"/kernel" (because "/secret" doesn't *exist* in the child's namespace).
How you code and how the system enforces these mechanisms varies,
as does the reliability and robustness of the system. E.g., it's
as if every process is executing in an explicit jail created
*exclusively* for it! Yet, all existing in a larger namespace
(known, perhaps, only to its parent, grandparent, great-grandparent,
etc.). So, A's "/output" may, in fact, correspond to B's "/input".
Yet, neither is aware of the coexistence of the other -- nor the
fact that the same object has different names in each of their
respective namespaces.
If you don't have this ability in the OS, *adding* it in an emulation
library is difficult, at best, and probably unreliable in most cases!
[E.g., I don't need to worry about a user typing:
COPY SOMETHINGHESHOULDNOTACCESS SOMEPLACEPRIVATE
because SOMETHINGHESHOULDNOTACCESS isn't *visible* to him -- EVEN IF
HE MODIFIES THE SOURCE CODE of COPY.EXE]
I see no technological differences between OSX and Windows (or Linux)
that preclude using "an emulation layer or other middleware" to port
an application -- especially an "engineering program" -- from one to
the other (I would have a different opinion if we were discussing a
process control system, etc.)
The fact (?) that fewer "engineering programs" run on Macs than on PCs
simply suggests that the markets that Macs and PCs address have an
imbalance in terms of the types of users AND APPLICATIONS THEY DEMAND.
But, that doesn't mean EngineeringApplicationX couldn't be ported
from, e.g., Windows to OSX (or Linux) *if* the vendor/author decided
there was sufficient market for it, there. (Or, if a third party
saw a large enough market to produce a complete emulation ENVIRONMENT
of systemA under systemB so one acts as the other.) This is the point I
was making to Lasse -- just because EngineeringApplicationX (or Y or Z)
isn't available TODAY on a Mac doesn't mean it won't be, tomorrow.
(unless you can point to some technological capability that is
MISSING in OSX -- but present in Windows -- that those apps rely upon).
'Twasn't my point at all. I realize that you're a Mac fanboy, which is
fine with me--I was too, at least for a brief while in 1984. Then
No, I don't own a Mac. If you read upthread, you'll note that
I haven't used a Mac since MacOS 7-ish (68040). I'd like to
buy one to play with ProTools. But, that hasn't risen to a level
of interest to warrant my direct attention.
For the record, I don't own an iPhone, iPad, iWatch, iTV, etc.
(though I *have* rescued a few iPods over the years).
My machines are PC's and Sun workstations. Windows/Solaris/*BSD
instead of OSX.
But, that doesn't change my assertion that there is nothing preventing
"Windows" apps from running on a Mac. Even *before* Mac's went to
"x86" architecture. (e.g., I have Windows XP on an UltraSPARC!).
What I was pointing out to Lasse -- who was standing by the "but the
apps aren't available on Macs" opinion -- was that this could change
tomorrow (though unlikely). I.e., Macs went from 68K's to x86's
so stranger things *have* happened!
Similarly, I can't get FrameMaker, AutoCAD, etc. on a Linux/*BSD box
*today*. But, that *may* change tomorrow -- *if* those vendors
perceive a real market there (users who will NOT buy their product
UNLESS it runs under those other OS's). As long as folks (like me)
will keep a Windows PC running to avail themselves of their product,
why undertake the cost of porting and supporting yet another
OS (and having to "jump" every time MS *or* Apple *or* Linux ... makes
some change to their OS)?
Apple screwed over its original Mac customers by charging $1000 for a
$75 memory upgrade, and has never looked back. (Neither have I.)
I don't really sympathize. I paid $1500 for my first 12MB of RAM.
Newer Macs are basically closed-hardware PCs running Linux with eye
candy on top of it, so there's no reason in principle that you couldn't
run anything you like.
Or, make a PC *into* a Mac (Hackintosh). You just won't end up
with the same STARK white appearance! :-/
[AFAICT, OSX is more NeXT/FreeBSD derived than Linux.]