PC or Mac for at-home engineering?

Hi Phil,

On 9/4/2014 12:03 PM, Phil Hobbs wrote:
> On 9/4/2014 2:35 PM, Don Y wrote:

[attrs elided]

Or, an OS that only supports a single execution context (vs. one
that supports multiples).

Like DOS? Everybody's had multiple threads for the last 20 years. Unix
and OS/2 had it further back than that.
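
[A minimal POSIX-threads sketch of what "multiple execution contexts"
looks like at the source level -- nothing here is specific to any one
OS beyond pthreads itself, and worker() is just an illustrative name:]

    /* Two execution contexts in one process, via POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("hello from the second context (%s)\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        if (pthread_create(&t, NULL, worker, "worker") != 0)
            return 1;              /* no second context to be had */
        pthread_join(t, NULL);     /* wait for it to finish */
        return 0;
    }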

How long a feature has been available wasn't the point of my comment.
Rather, to illustrate the types of issues that make an OS sufficiently
-----------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
different from another -- at a technological level -- to complicate
(or, alternatively, enhance) an application's development on one
platform over another.

I understand, but even the Raspberry Pi has a pthreads library. What are
you writing for, Voyager 2?

Please read what I wrote.

I did read it. I just think that it's an irrelevant counterfactual
digression, because there isn't a lot of opportunity to suffer from any
of these constraints on modern hardware.

WHICH IS EXACTLY THE POINT I WAS MAKING TO LASSE! I.e., there is no
inherent difference between Windows and OSX (PC vs Mac) -- in terms of
its ability to support "engineering programs".

E.g., an app intended to run on a 64-bit OS trying to be hosted on a
32-bit one.

Or, an app that expects gobs of memory to be available (real or
virtual) instead of what the current OS supports.

That's a bug, not a porting issue.

No, it isn't. How is "myarray[65535][65535]" a bug? It's syntactically
correct. It will run on an OS that supports big memory spaces (virtual
or real) but not on one that doesn't.

It's a bug because it assumes that the program will have at least 4 gig
of stack space, which it won't on any machine that I'm familiar with.

Where does it assume this is an auto variable and not a static?
Or, that it's an array of "bytes"? The amount of memory that
an app requires doesn't classify certain apps as "bugs" and others
as "non-bugs". How many bytes of store *can* an application have?
As long as you don't exceed the syntactic constraints of the language
(i.e. run out of identifiers, etc.), the limit is determined by the
OS under which the app runs.
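
[A hedged sketch of the distinction at issue: the same ~4 GiB object
as a static vs. as an auto (stack) variable. Neither is a syntax
error; whether it links and runs depends on the address space the OS
offers:]

    #include <stdint.h>

    static uint8_t myarray[65535][65535];  /* ~4 GiB in .bss -- makes no
                                              assumption about the stack */

    int main(void)
    {
        /* uint8_t local[65535][65535];    the auto version *would* assume
                                           ~4 GiB of stack space */
        myarray[0][0] = 1;                 /* touch it so it's really used */
        return (int)myarray[0][0];
    }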

In the early 80's, I had to "crack" a "security device" implemented
in hardware. Basically an "undocumented FSM". My approach was
straightforward: build an array of next_state[current_state][stimulus].
Initialize it with "unknown".

Then, iteratively query the hardware to determine the "current_state".
For that state, find a "stimulus" for which the "next_state" is, as
yet, unknown. Apply that stimulus, driving the machine into a new (?)
next_state.

Lather, rinse, repeat.

When all entries in the array are known, you have a complete map of
the device -- which you can now reduce to "next state logic".
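
[A rough C sketch of the loop just described. read_state() and
apply_stimulus() are hypothetical stand-ins for the actual hardware
access, and termination assumes every state is reachable:]

    #include <stddef.h>

    #define STATES   256        /* 8-bit state vector */
    #define STIMULI  256
    #define UNKNOWN  0xFFFFu    /* wider than any real state value */

    extern unsigned read_state(void);          /* query current_state */
    extern void     apply_stimulus(unsigned);  /* drive one input */

    static unsigned next_state[STATES][STIMULI];

    void map_fsm(void)
    {
        size_t remaining = (size_t)STATES * STIMULI;

        /* Initialize it with "unknown". */
        for (int s = 0; s < STATES; s++)
            for (int i = 0; i < STIMULI; i++)
                next_state[s][i] = UNKNOWN;

        /* Lather, rinse, repeat. */
        while (remaining > 0) {
            unsigned cur  = read_state();   /* assumed to fit in 8 bits */
            unsigned stim = 0;

            /* Find a stimulus whose outcome is, as yet, unknown. */
            while (stim < STIMULI && next_state[cur][stim] != UNKNOWN)
                stim++;
            if (stim == STIMULI)
                stim = 0;       /* all known here: take any edge onward */

            apply_stimulus(stim);

            if (next_state[cur][stim] == UNKNOWN) {
                next_state[cur][stim] = read_state();
                remaining--;
            }
        }
    }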

In the early 80's, getting 64KB of "data" in an application was
tedious. Two or three times that was even more so! (E.g., for
an 8-bit state vector, you need at least (256+1)(256) bytes just
to encode the next_state[][] -- double that if you want to make
it easier on yourself!)

Rather than screw around with a MS compiler (which wouldn't support
such large objects) *and* an MS OS (which was still dealing with
silly "segment registers" and tiny address spaces), I wrote the
algorithm to run on a UN*X box (knowing it would still "thrash")
without having to cope with the (MS) OS's silly limitations.

OTOH, "speed = distance / time" is a *bug* -- because "time" can
conceivably be zero. (regardless of the OS)
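
[In miniature, with one hedged fix -- what to do when time == 0 is a
design decision, not something any OS can make for you:]

    /* The bug: */
    double speed(double distance, double time)
    {
        return distance / time;    /* undefined (or Inf) when time == 0 */
    }

    /* One possible repair -- the caller must handle the error: */
    int safe_speed(double distance, double time, double *out)
    {
        if (time == 0.0)
            return -1;
        *out = distance / time;
        return 0;
    }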

There are lots of different kinds of bugs.

Or, an app that expects real-time guarantees trying to run on an
OS that has no concept of such.

Etc.

Or, an OS that supports true parallel processing (vs. one that only
supports the illusion of same).

That makes almost no difference at all as far as the coder is
concerned. SIMD is a bit different, but usually the compiler looks
after vectorizing stuff. But multithreaded apps on one core vs
multiple cores is no problem--usually you just ignore the issue,
because the serialization problems are just the same.

Don't limit your thinking to SMP. :> (remember, apps can be written
BEFORE "current" features are available or in widespread use! I've been
writing multitasking and RT apps for over 30 years -- the latter still
isn't "widely embraced" in the programming community)

I'm not limiting myself to SMP, I'm saying that there's no difference
between the programming models of single-core multithread and SMP, or
various flavours of NUMA, as long as cache-coherency is maintained.

So, NORMA systems aren't considered "parallel processing"?

As in Norma Jean Brady? I don't know of any actual NORMA boxes, do you?
You can make something similar with a cluster, but we were talking about
multithread vs multicore.

*I* was talking about "parallel processing". You assumed multicore
approaches to parallel algorithms. E.g., the SETI example makes no
reliance on number of cores (*or* processes) on an individual
node. The "system" exists above/outside the individual nodes.
*It* has provisions -- exported to the constituent nodes -- that
allow them to operate in the collective.

I am unaware of any "Consumer" kit that is NORMA. But, that doesn't
alter the validity of my statement. *You* keep trying to make these
issues "specific". *I* stated them as "general constraints".

[My home automation system relies extensively on NORMA to get the
bandwidth and network transparency that it exploits (multimedia
and fault tolerance). Doing so in more conventional approaches
pushes all the work into the application (a bad idea, IME)]

An app written expecting a *particular* parallel processing model
(e.g., before multiple core machines were en vogue) isn't easily
ported to an environment where you are limited to a single active
thread.

Okay, so a Cray program won't run on a toaster, even in 2014. So what?

I thought we were talking about running engineering programs on
desktops?

I made a general statement. You chose to ignore its generality and
apply it to specifics:

If you're just writing the Great American Mythological Computer Novel, OK.

I made a point in presenting an argument. Your counterpoints suggest
a failure to see those abstractions. If you want to live in *exactly*
2014 -- with no concept of 2015 or beyond (or awareness of what has
preceded), so be it.

"If there are no underlying technological differences/advantages to
one OS over another, then it is conceivable and practical to use
an emulation library or other middleware to RELATIVELY easily
port an application from one to another."

NOT having an RPC/IPC facility is a technological difference.
NOT having a network stack (or even the NOTION of a "remote" host)
is a technological difference.

Right, that's the toaster controller again. Are you running SPICE on a
toaster?

NOT having a capabilities based security model is a technological
difference (i.e., it impacts the code you write!).

Right, I said that at first. Security APIs and GUIs are the things that
make porting a pain.

NOT having support for big/virtual memory is a technological difference
(want to REFACTOR your code as OVERLAYS???).
NOT having support for the concept of "timeliness" in an OS is a
technological difference.

Real-time apps are generally tightly tied to the hardware, because
otherwise you wouldn't care about guaranteed latency bounds. So that's a
whole other issue, not just OS and processor.

No. RT doesn't preclude hardware independence. It's *easier* if you
can see the actual hardware and its performance. But, unless your
RT system is completely closed (i.e., boring and naive), the workload
can change over time. Effectively, such changes look like a constant
load with changing *hardware* (capabilities).

NOT having per-task namespaces is a technological difference.

What? A namespace is a source code concept. You can use any language you
like, even on toasters.

Isn't all of this about porting *code* to other OS's??

A namespace is a security concept. Write a piece of code that
generates and attempts to resolve every conceivable name. Run that in
a single, unified namespace system and you will get different results
than in a system that has per-task namespaces. I.e., it will
"stumble" on names that it shouldn't be accessing!

Some OS's try to give the illusion of per-task (process) namespaces
for certain devices. E.g., /dev/tty being the controlling [pt]ty for
a process -- regardless of the actual device that is wired to it.
With true independent namespaces, the OS generalizes this and allows
parents to fabricate (and, thus, CONSTRAIN) their offspring's
namespaces (a child can't access <something> if it can't *name* it!)

E.g., I can bind "/input" to a particular hardware device, file,
etc. Then, "/foo/bar/whatever/output" to another. And, spawning a
child with the intent of copying /input to /foo/bar/whatever/output
will *ensure* that it doesn't "accidentally" access "/secret" or
"/kernel" (because "/secret" doesn't *exist* in the child's namespace).

How you code and how the system enforces these mechanisms varies
as does the reliability and robustness of the system. E.g., it's
as if every process is executing in an explicit jail created
*exclusively* for it! Yet, all existing in a larger namespace
(known, perhaps, only to its parent, grandparent, great-grandparent,
etc.). So, A's "/output" may, in fact, correspond to B's "/input".
Yet, neither is aware of the coexistence of the other -- nor the
fact that the same object has different names in each of their
respective namespaces.
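
[A toy illustration -- not any real OS's API -- of a per-task
namespace as a private binding table. The child can only resolve
names its parent bound for it; "/secret" simply doesn't exist from
its point of view:]

    #include <stdio.h>
    #include <string.h>

    struct binding { const char *name; const char *object; };
    struct namespc { struct binding map[8]; int count; };

    static void bind_name(struct namespc *ns, const char *name,
                          const char *obj)
    {
        if (ns->count < 8) {
            ns->map[ns->count].name   = name;
            ns->map[ns->count].object = obj;
            ns->count++;
        }
    }

    static const char *resolve(const struct namespc *ns, const char *name)
    {
        for (int i = 0; i < ns->count; i++)
            if (strcmp(ns->map[i].name, name) == 0)
                return ns->map[i].object;
        return NULL;    /* the name doesn't exist *for this task* */
    }

    int main(void)
    {
        struct namespc child = { .count = 0 };

        /* The parent decides what the child may see (and thus name): */
        bind_name(&child, "/input", "uart0");
        bind_name(&child, "/foo/bar/whatever/output", "logfile");

        printf("/input  -> %s\n", resolve(&child, "/input"));
        printf("/secret -> %s\n",
               resolve(&child, "/secret") ? "VISIBLE" : "(no such name)");
        return 0;
    }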

If you don't have this ability in the OS, *adding* it in an emulation
library is difficult, at best, and probably unreliable in most cases!

[E.g., I don't need to worry about a user typing:
COPY SOMETHINGHESHOULDNOTACCESS SOMEPLACEPRIVATE
because SOMETHINGHESHOULDNOTACCESS isn't *visible* to him -- EVEN IF
HE MODIFIES THE SOURCE CODE of COPY.EXE]

I see no technological differences between OSX and Windows (or Linux)
that preclude using "an emulation layer or other middleware" to port
an application -- especially an "engineering program" -- from one to
the other (I would have a different opinion if we were discussing a
process control system, etc.)

The fact (?) that fewer "engineering programs" run on Macs than PC's
simply suggests that the markets that Macs and PCs address have an
imbalance in terms of the types of users AND APPLICATIONS THEY DEMAND.

But, that doesn't mean EngineeringApplicationX couldn't be ported
from, e.g., Windows to OSX (or Linux) *if* the vendor/author decided
there was sufficient market for it, there. (Or, if a third party
saw a large enough market to produce a complete emulation ENVIRONMENT
of systemA under systemB so one acts as the other) This is the point I
was making to Lasse -- just because EngineeringApplicationX (or Y or Z)
isn't available TODAY on a Mac doesn't mean it won't be, tomorrow.
(unless you can point to some technological capability that is
MISSING in OSX -- but present in Windows -- that those apps rely upon).

'Twasn't my point at all. I realize that you're a Mac fanboy, which is
fine with me--I was too, at least for a brief while in 1984. Then

No, I don't own a Mac. If you read upthread, you'll note that
I haven't used a Mac since MacOS 7-ish (68040). I'd like to
buy one to play with ProTools. But, that hasn't risen to a level
of interest to warrant my direct attention.

For the record, I don't own an iPhone, iPad, iWatch, iTV, etc.
(though I *have* rescued a few iPod's over the years).

My machines are PC's and Sun workstations. Windows/Solaris/*BSD
instead of OSX.

But, that doesn't change my assertion that there is nothing preventing
"Windows" apps from running on a Mac. Even *before* Mac's went to
"x86" architecture. (e.g., I have Windows XP on an UltraSPARC!).

What I was pointing out to Lasse -- who was standing by the "but the
apps aren't available on Macs" opinion -- was that this could change
tomorrow (though unlikely). I.e., Macs went from 68K's to x86's
so stranger things *have* happened!

Similarly, I can't get FrameMaker, AutoCAD, etc. on a Linux/*BSD box
*today*. But, that *may* change tomorrow. *If* those vendors
perceive a real market, there (users who will NOT buy their product
UNLESS it runs under those other OS's). As long as (folks like me)
will keep a Windows PC running to avail themselves of their product,
then why undertake the cost of porting and supporting yet another
OS (and, having to "jump" every time MS *or* Apple *or* Linux ... makes
some change to their OS).

Apple screwed over its original Mac customers by charging $1000 for a
$75 memory upgrade, and has never looked back. (Neither have I.)

I don't really sympathize. I paid $1500 for my first 12MB of RAM.

Newer Macs are basically closed-hardware PCs running Linux with eye
candy on top of it, so there's no reason in principle that you couldn't
run anything you like.

Or, make a PC *into* a Mac (Hackintosh). You just won't end up
with the same STARK white appearance! :-/

[AFAICT, OSX is more NeXT/FreeBSD derived than Linux.]
 
On 9/4/2014 12:53 PM, whit3rd wrote:
On Thursday, September 4, 2014 12:39:00 PM UTC-4, Don Y wrote:

At the limit, a *simulator* can run under "foreign OS, foreign iron"
that mimics the behavior of "native OS, native iron" -- typically at
a *huge* performance penalty! For example, (MULTICS, GE645) running
on (Windows, IA64)?

Actually, ARM and MIPS and Alpha (all RISC type machines) have little
difficulty doing emulations.

Yes, even Intel is addressing this "in silicon".

Also, 'modern' software works with runtime library
functions so that compiling up a few native libraries is a more important
speed consideration than whether the main application is native or
interpreted.

Some OS's allow complete OS API patches. In effect, pulling the
emulation library *into* the OS instead of layering it atop
(think: protection domain cost penalties -- much more efficient
to move the emulation library into the kernel's protection domain
than to require a layered emulation library to make multiple
traps *into* it with those performance hits)

ARM cpus ran the 68000 code of earlier Palm devices just fine.
Windows NT was fully supported on DEC Alpha workstations.
A trio of generations of Macintosh CPUs ran (until Apple dropped
the software compatibility functions) 68000 code on PowerPC, and PowerPC
code on Intel.

Each comes with a performance penalty. I.e., you *could* emulate
a GE645 on a Windows machine -- even with its funky 36-bit data.
But, at some considerable cost.

Of course, you can argue that process improvements have sped up
modern processors enough that a *simulation* of such an old
architecture could still happen in "near real time" (i.e., if
your performance expectations of those apps were that of 1976! :> )

[Look, for example, at MAME's emulation! Of course, those are
typically 1-3MIPs machines... If the app isn't *truly* "real-time",
then a lot of fudging is possible (i.e., not cycle-count accurate
emulation)]
 
Hi Miso, (?)

On 9/1/2014 11:51 PM, miso wrote:
Ask those Hollywood starlets that have their naked photos on the
internet if "Apple just works."

Apple can market, but they can't code. They suck just as bad at
security as microsoft.

Security is *always* a problem -- because people (users/customers)
don't want to be INCONVENIENCED. (keeping ANYTHING in The Cloud
is just a problem waiting to happen -- I am surprised we haven't
heard of some CORPORATE entity's cloud storage being violated)

Add to that the fact that there is little incentive for firms
to *impose* security (despite the "objections" of "inconvenienced"
users) and it's surprising we don't see MORE/bigger problems
(no doubt because there are enough BIG TARGETS to engage hackers).

E.g., none of my work/business machines talk to the outside world.
It isn't possible for them to do so even if there was a GAPING HOLE
in the OS -- there are no cables connecting them to the outside
world (and no wireless enabled!). So, yeah, a buffer overflow
problem may cause one of my apps to crash if I type AReallyLongName.
But, nothing *leaks* in the process!

But, this is "inconvenient" -- it means I have to SneakerNet anything
that I want to import/export. P'feh. So what? I'll gladly take
the peace of mind *and* added performance (from not having antivirus
crap running all the time) over that minor inconvenience!

[Did we *really* have to TELL people "not to run as 'Administrator'"?
Whose idea was it to give that level of privilege to the default user??]

"Don't reuse passwords"

Yeah. And how many of these "mixed upper/lowercase, some numerics
and at least one 'special'" should I commit to memory? Note you've
told me NOT to write them down anywhere (so "memory" it will have to
be!).

Designing for a *secure* environment takes an entirely different
mindset. E.g., my automation system only allows certain network
devices (MAC/IP) at specific network drops to send certain traffic
on certain ports using certain protocols to certain other hosts,
etc. I.e., you can't just "plug something in" and expect it to
talk to *anything* you want to!
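
[A toy sketch of that style of default-deny policy check -- every
name and field here is hypothetical:]

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct rule {
        uint8_t  mac[6];           /* permitted source device */
        uint32_t src_ip, dst_ip;
        uint16_t port;
        uint8_t  proto;            /* e.g., 6 = TCP, 17 = UDP */
        int      drop_id;          /* physical network drop */
    };

    static bool allowed(const struct rule *rules, size_t n,
                        const uint8_t mac[6], uint32_t src, uint32_t dst,
                        uint16_t port, uint8_t proto, int drop)
    {
        for (size_t i = 0; i < n; i++) {
            const struct rule *r = &rules[i];
            if (memcmp(r->mac, mac, 6) == 0 &&
                r->src_ip == src && r->dst_ip == dst &&
                r->port == port && r->proto == proto &&
                r->drop_id == drop)
                return true;       /* explicitly whitelisted */
        }
        return false;              /* default deny: you can't just
                                      "plug something in" */
    }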
 
On Sun, 31 Aug 2014 07:35:31 -0700 (PDT), radams2000@gmail.com Gave us:

Which is better for this , PC or Mac ?

PC

Under Linux

Using free dev tools.

OR

Simple design stuff? A PC with DOS and Tango PCB (it will run under
DOSBox or VMWare as well). Layout? The old legacy OrCAD is out there.

Both of the latter routes involve not-quite-free, out-of-use legacy
software (DOS, Tango, OrCAD), but nobody likely cares.

The free Linux route is truly on the up-and-up as far as paying for
what you have goes.

The newer CAD packages generally ARE all protected still (to the
greater degree) and most even "phone home".

For anything complex or signal and timing related, you need the
Linux/free solution, unless you really want to pay for a real package.
 
On Sun, 31 Aug 2014 12:36:27 -0700, Don Y <this@is.not.me.com> Gave us:

Apple is considerably more expensive than a PC. OTOH, I rarely hear
Mac users complaining as LOUDLY and as OFTEN as PC users!

As I was alluding to in my post on this subject, a lot comes from your
personal attitude towards the tools. Do you want them to "just work"?
Or, are you willing to put up with some "grief" to get where you
want to be?

Absolute horseshit.

My PC will outrun ANY Apple machine EVER released.

But for this task, a simple Atom machine, or even my $149 ARM-based
Cubox-i4 Pro, would work.

Overkill is overkill. Apple has always been overkill, and that mainly
in the price arena, and the hdw has NEVER been so much better at ANY
point along the way, as to justify the differential.

And it continues to this day, and you idiots even pay too much for
music, and you suck up to their retarded "synching" schema as well.

Now, it costs a fucking dollar to play a song on the juke box at the
bar, and the band doesn't EVER see a dime of it. Thanks a lot, Apple,
you economy manipulating, landlord mentality jackasses!
 
On Sun, 31 Aug 2014 17:59:29 -0700, Don Y <this@is.not.me.com> Gave us:

(e.g., a disk crash doesn't cost you *all*
your "environments")

That is what backups are for. Doh!
 
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?
 
On Thu, 4 Sep 2014 12:53:05 -0700 (PDT), whit3rd <whit3rd@gmail.com>
Gave us:

On Thursday, September 4, 2014 12:39:00 PM UTC-4, Don Y wrote:

At the limit, a *simulator* can run under "foreign OS, foreign iron"
that mimics the behavior of "native OS, native iron" -- typically at
a *huge* performance penalty! For example, (MULTICS, GE645) running
on (Windows, IA64)?

Actually, ARM and MIPS and Alpha (all RISC type machines) have little
difficulty doing emulations. Also, 'modern' software works with runtime library
functions so that compiling up a few native libraries is a more important
speed consideration than whether the main application is native or
interpreted.

ARM cpus ran the 68000 code of earlier Palm devices just fine.
Windows NT was fully supported on DEC Alpha workstations.
A trio of generations of Macintosh CPUs ran (until Apple dropped
the software compatibility functions) 68000 code on PowerPC, and PowerPC
code on Intel.

The MAME emulator runs on x86 under linux or windows, but emulates
nearly every CPU used in the last 40 years. It even runs on my ARM
machine (MAME).

All their emulator code is open. http://www.mamedev.org/
 
On Wed, 3 Sep 2014 17:36:51 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

On Thursday, 4 September 2014 09:11:30 UTC+10, k...@attt.bizz wrote:
On Wed, 03 Sep 2014 17:59:13 -0500, John Fields
<jfields@austininstruments.com> wrote:
On Wed, 03 Sep 2014 18:43:24 -0400, krw@attt.bizz wrote:

snip

Sure there is. Cost.

Cost is an economic advantage, not a technological one.

Which just shows, once again, that you're a has-been wannabe.

Not really. "Cost" really is an economic advantage, rather
than a technological advantage. You may find it irritating
that John Fields pointed this out, but the fact that he pointed
it out would make him - at worst - a pedant, rather than a retiree
trying to raise his status (which does happen to be higher than
yours around here, because he does sometimes post technical answers
to technical questions, even if most of them do involve the NE555).

It doesn't get any more stupid than "slow man" Bill Sloman.
 
On Sun, 31 Aug 2014 15:24:52 -0400, rickman <gnuarm@gmail.com> Gave us:

On 8/31/2014 12:44 PM, ChesterW wrote:

I updated the HDD to a 1 TB SSD about a year ago and updated the memory
from 4 to 8 Gig about two years ago. No problem with the hardware
upgrades, although on the newer models I think they may have switched to
soldered-in memory.

1 TB SSD? I expect that alone cost more than my entire laptop?

2.5" form factor is lame.

Look at M.2

http://en.wikipedia.org/wiki/M.2
 
On Saturday, 6 September 2014 02:47:49 UTC+10, DecadentLinuxUserNumeroUno wrote:
On Wed, 3 Sep 2014 17:36:51 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

On Thursday, 4 September 2014 09:11:30 UTC+10, k...@attt.bizz wrote:
On Wed, 03 Sep 2014 17:59:13 -0500, John Fields
<jfields@austininstruments.com> wrote:
On Wed, 03 Sep 2014 18:43:24 -0400, krw@attt.bizz wrote:

snip

Sure there is. Cost.

Cost is an economic advantage, not a technological one.

Which just shows, once again, that you're a has-been wannabe.

Not really. "Cost" really is an economic advantage, rather than a
technological advantage. You may find it irritating that John Fields
pointed this out, but the fact that he pointed it out would make him -
at worst - a pedant, rather than a retiree trying to raise his status
(which does happen to be higher than yours around here, because he
does sometimes post technical answers to technical questions, even if
most of them do involve the NE555).

It doesn't get any more stupid than "slow man" Bill Sloman.

Pity about your judgement. Jamie and krw are much dumber than I am -
which may be like a one-legged man claiming to be more mobile than a
double amputee, but the difference - to those with even a minimum of
sense - is equally obvious. You may not be able to see it.

--
Bill Sloman, Sydney
 
On Saturday, 6 September 2014 10:04:17 UTC+10, DecadentLinuxUserNumeroUno wrote:
On Fri, 5 Sep 2014 16:27:28 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

Pity about your judgement. Jamie and krw are much dumber than I am

Hardly. You cannot even get page formatting down correctly, you
uncivil dumbshit.

If I know that it's for you and your antique hardware, I do make the
effort, but for most readers, it doesn't matter, and in fact messes up
the formatting that their programs do. Yours is the stupid response.

--
Bill Sloman, Sydney
 
On 2014-09-05, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?

he means 8 per channel or 32 per pixel.


--
umop apisdn


 
On Saturday, 6 September 2014 10:45:29 UTC+10, DecadentLinuxUserNumeroUno wrote:
On Fri, 5 Sep 2014 17:35:09 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

On Saturday, 6 September 2014 10:04:17 UTC+10, DecadentLinuxUserNumeroUno wrote:
On Fri, 5 Sep 2014 16:27:28 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

Pity about your judgement. Jamie and krw are much dumber than I am

Hardly. You cannot even get page formatting down correctly, you
uncivil dumbshit.

If I know that it's for you and your antique hardware, I do make the
effort, but for most readers, it doesn't matter, and in fact messes up
the formatting that their programs do. Yours is the stupid response.

Absolutely not. Usenet is a TEXT forum. There is no evolution
departing from that, so *you* are decidedly one of many, many
thousands of odd men out.

Whereas you are one of the few usenet participants who still care how many
characters end up between carriage returns and line feeds. That makes you
an even odder man, as if that wasn't already obvious.

--
Bill Sloman, Sydney
 
On 5 Sep 2014 22:06:00 GMT, Jasen Betts <jasen@xnet.co.nz> Gave us:

On 2014-09-05, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?

he means 8 per channel or 32 per pixel.

They (it) do(es) that as well.
 
On Fri, 5 Sep 2014 16:27:28 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

Pity about your judgement. Jamie and krw are much dumber than I am

Hardly. You cannot even get page formatting down correctly, you
uncivil dumbshit.
 
On Fri, 5 Sep 2014 17:35:09 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

On Saturday, 6 September 2014 10:04:17 UTC+10, DecadentLinuxUserNumeroUno wrote:
On Fri, 5 Sep 2014 16:27:28 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

Pity about your judgement. Jamie and krw are much dumber than I am

Hardly. You cannot even get page formatting down correctly, you
uncivil dumbshit.

If I know that it's for you and your antique hardware, I do make the
effort, but for most readers, it doesn't matter, and in fact messes up
the formatting that their programs do. Yours is the stupid response.

Absolutely not. Usenet is a TEXT forum. There is no evolution
departing from that, so *you* are decidedly one of many, many
thousands of odd men out.
 
On 5 Sep 2014 22:06:00 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-09-05, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?

he means 8 per channel or 32 per pixel.

Four color?
 
On 2014-09-06, krw@attt.bizz <krw@attt.bizz> wrote:
On 5 Sep 2014 22:06:00 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-09-05, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?

he means 8 per channel or 32 per pixel.

Four color?

transparency, AKA RGBA

I don't know if it does CMYK; I'm using an older version here.


--
umop apisdn


 
On 6 Sep 2014 09:26:33 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-09-06, krw@attt.bizz <krw@attt.bizz> wrote:
On 5 Sep 2014 22:06:00 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-09-05, DecadentLinuxUserNumeroUno <DLU1@DecadentLinuxUser.org> wrote:
On Mon, 01 Sep 2014 23:56:27 -0700, miso <miso@sushi.com> Gave us:

Gimp is still 8 bits per pixel, and the 16 bit
version never seems to get released.

You sure about that?

he means 8 per channel or 32 per pixel.

Four color?

transparency, AKA RGBA

Gotcha. Thanks!

>I don't know if it does CMYK; I'm using an older version here.

It would make more sense, though processing power is free.
 
