Conical inductors--still $10!...

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
\"How does a software project go a year late? One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

A tool that can cut wood can cut your hand; the only way to totally prevent
that is to add safety features until it cannot cut anything anymore.
 
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn\'t our ancient and settled idea of what a computer is, and what an OS
and languages are, overdue for the next revolution?

In his other famous essay, "No Silver Bullet", Brooks points out that the
factors-of-10 productivity improvements of the early days were gained by
getting rid of extrinsic complexity--crude tools, limited hardware, and so
forth.

Now the issues are mostly intrinsic to an artifact built of thought. So apart
from more and more Python libraries, I doubt that there are a lot more orders
of magnitude available.
Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

I\'m talking about programmer productivity, not MIPS.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
Per capita comparisons are interesting.

The Wuhan event resulted in virus detections per capita
in the 54ppm range at the end of its containment exercises,
in which the resources of the whole country were applied.

In the US, Sweden and Peru, detections are over 5000ppm, at
present, without containment.

China would have had to experience 100 Wuhans, to match this,
and detections indicate a 2x Wuhan-level event for every day
that records are kept in those nations.

It\'s not unreasonable to question whether containment is
possible, at this late stage.

A 100ppm daily detection rate would, in one year, result in
an approximately 4% penetration within the population
(100ppm/day x 365 days = 36,500ppm, or about 3.7%).

Recently Chile has reported virus penetration of 1.5% after
four months. Fatality rates among covid sufferers are reported
to be below 3%.

Median age in US and Russia is 38.7 and 37.6yrs respectively.
Median age in Canada, UK and Sweden is 40-41yrs.
Median age in Germany and Italy is 45-46yrs.
Median age in Peru and Mexico is 27.5yrs.
Median age in Chile and Brazil is 33.7 and 31.3yrs respectively.

As of Jul23:

US detections are at 11997ppm.(1.2%)
Detection rate is 200ppm/day (possibly steadying around 200ppm)
Fatalities at 433ppm (2ppm /day)
FR 4.1% from a 10572ppm detection recorded 7 days previously.
US tests performed total 142670ppm (14.3%), at 2350ppm/day.
8.4% of tests are positive - currently 8.5%.

Canada detections at 2974ppm (0.3%) ~ 86 days behind the US.
Detection rate at 14ppm/day ( and rising ).
Fatalities at 235ppm (0.3ppm/day)
FR 8.2% from a 2883ppm detection recorded 7 days previously.
Canadian tests performed total 95830ppm (9.6%), at 1190ppm/day .
3.1% of tests are positive - currently 1.2%.

Italian detections at 4053ppm. (0.4%)
Detection rate is 3ppm/day (possibly steadying around 3ppm)
Fatalities at 580ppm (0.2ppm/day)
FR 14.4% from a 4027ppm detection recorded 7 days previously.
Italian tests performed total 105100ppm (10.5%), at 710ppm/day.
3.9% of tests are positive - currently 0.4%.

Sweden detections at 7773ppm. (0.8%)
Detection rate is 33ppm/day. (possibly steady around 40ppm/day)
Fatalities at 561ppm (1.2ppm/day)
FR is 7.4% from a 7574ppm detection recorded 7 days previously.
Sweden tests performed total is unreported; rate is 980ppm/day.
Percentage of tests positive is unknown - currently 3.4%.

UK detections at 4366ppm.(0.4%)
Detection rate is 8ppm/day (possibly steady around 11ppm)
Fatalities at 670ppm (1.2ppm/day)
FR is 15.6% from a 4300ppm detection recorded 7 days previously.
UK tests performed total 121630ppm (12.2%), at 1850ppm/day.
3.6% of tests are positive - currently 0.4%

Brazil detections at 10480ppm.(1%)
Detection rate is 319ppm/day and unstable
Fatalities at 389ppm (5ppm/day)
FR is 4.2% from a 9253ppm detection recorded 7 days previously.
Brazilian tests performed total 4930ppm (0.5%), at 240ppm/day.
No test detection rate is possible while detections exceed tests.

Peru detections at 11117ppm (1.1%)
Detection rate is 135ppm/day (possibly steady around 120ppm)
Fatalities at 418ppm (6ppm/day)
FR is 4.0% from a 10243ppm detection recorded 7 days previously.
Peru tests performed were 10140ppm (1.0%) at 180ppm/day.
No test detection rate is possible while detections exceed tests.

Mexico detections at 2810ppm (0.3%)
Detection rate is 47ppm/day and rising.
Fatalities at 320ppm (6ppm/day)
FR is 13.0% from a 2464ppm detection recorded 7 days previously.
Mexico tests performed were 5880ppm (0.6%), at 70ppm/day
48% of tests are positive - currently 67%.

Chile detections at 17598ppm (1.8%)
Detection rate is 89ppm/day and unstable.
Fatalities at 456ppm (4ppm/day)
FR is 2.9% from 15855ppm detection recorded 14 days previously.
Chile tests performed total 75630ppm (7.6%), at 830ppm/day.
23% of tests are positive - currently 11%.

South Africa detections at 6660ppm (0.7%)
Detection rate is 222ppm/day and unstable.
Fatalities at 100ppm (4ppm/day)
FR is 1.9% from a 5245ppm detection recorded 7 days previously.
SA tests performed were 42770ppm (4.3%), at 730ppm/day.
15.6% of tests are positive, currently 13.7%.

RL
 
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
\"How does a software project go a year late? One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

A tool that can cut wood can cut your hand; the only way to totally prevent
that is to add safety features until it cannot cut anything anymore.

Why not design a compute architecture that is fundamentally safe,
instead of endlessly creating and patching bugs?
 
On 23/07/2020 21:40, Tom Gardner wrote:
On 23/07/20 18:06, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced,
for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic
cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at
the
time by far the largest programming project in the world.  As he says,
\"How does a software project go a year late?  One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

Yes indeed. C and C++ are an *appalling*[1] starting point!

But better alternatives are appearing...

xC has some C syntax but removes the dangerous bits and
adds parallel constructs based on CSP; effectively the
hard real time RTOS is in the language and the xCORE
processor hardware.

Rust is gaining ground; although Torvalds hates and
prohibits C++ in the Linux kernel, he has hinted he won\'t
oppose seeing Rust in the Linux kernel.

Go is gaining ground at the application and server level;
it too has CSP constructs to enable parallelism.

Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)

[1] cue comments from David Brown ;}

Since you insist...

Python can make use of multicore if you use the language effectively.
It can do it in two ways:

1. Lots of time-consuming work in Python is done using underlying
libraries in C, which should release the GIL if they are doing a lot of
work. So if you are using numpy for numeric processing, for example,
then the hard work (like matrix operations) is done in C libraries with
the GIL released, and you can have multiple threads running in parallel.
(And if you are doing a lot of time-consuming calculations in pure
Python, you\'re doing something wrong!)

2. Python has a nice "multiprocessing" library as standard, which makes
it very simple to start multiple Python processes and communicate using
queues and shared data. That way you get parallel processing on
multiple cores, since each process has its own GIL.
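
For concreteness, two minimal sketches of those two points (the first
assumes numpy is installed; both are illustrations, not recommendations):

# Point 1: numpy's C routines release the GIL, so ordinary threads
# can overlap the heavy numeric work on several cores.
import threading
import numpy as np

def heavy_work(seed):
    rng = np.random.default_rng(seed)
    a = rng.random((1000, 1000))
    b = rng.random((1000, 1000))
    return a @ b            # matrix product runs in C, GIL released

threads = [threading.Thread(target=heavy_work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Point 2: the standard multiprocessing module; each worker is a
# separate OS process with its own interpreter and its own GIL.
from multiprocessing import Pool

def count_primes(limit):
    # deliberately naive pure-Python work, just to keep a core busy
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(count_primes, [50000, 60000, 70000, 80000]))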

But Python is not really a great choice for heavy parallelism. The
others you listed are better choices.
 
On 24/07/2020 00:35, Tom Gardner wrote:
On 23/07/20 21:50, Dennis wrote:
On 7/23/20 2:40 PM, Tom Gardner wrote:


Python, on the other hand, cannot make use of multicore
parallelism due to its global interpreter lock :)


For multicore use see
https://docs.python.org/3/library/multiprocessing.html

How can I put this... Maybe an analogy (in the full
realisation that analogies are dangerously misleading)...

Just because I can run several compilation processes
(e.g. cc, ld) at the same time doesn\'t mean the cc compiler
or ld linker is meaningfully parallel.

cc and ld are part of the build process, and the build process is
meaningfully parallel. (Actually, modern ld /is/ meaningfully parallel,
especially for link-time optimisation.) How \"meaningfully parallel\" a
task is often depends on how you choose to view it.

That Python library is a thin veneer over the operating
system calls. It adds no parallelism that is not present
in the operating system; essentially it avoids all the
interesting problems and punts them to the operating system.

Yes. So do most (all?) parallel programming languages. When Go runs
tasks in parallel, it runs them on different OS threads - a language and
its run-time libraries don\'t get to run on more than one core without OS
support.

Languages and their run-times or VMs can have a kind of cooperative
multitasking within one thread, where there are separate logical
executions but only one is running at a time. These can be useful, of
course, and are often supported (with names like coroutines, async,
generators, greenlets, fibres - details and names vary). But they are
not \"meaningful parallelism\" in that they don\'t do more work in the same
time, they simply give you other choices of how to structure your code.
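
A small Python illustration of that point: the two coroutines below
interleave within a single thread, which is fine for overlapping waits
on I/O but does nothing to make CPU-bound work finish sooner.

# Coroutines in one thread: concurrency, not parallelism.
import asyncio

async def worker(name, delay):
    for i in range(3):
        await asyncio.sleep(delay)    # yields control while "waiting"
        print(name, "step", i)

async def main():
    # Both workers make progress, but only one runs at any instant.
    await asyncio.gather(worker("A", 0.1), worker("B", 0.15))

asyncio.run(main())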

Hence it is only a coarse grain parallelism, and is not
sufficiently novel to be able to advance the ability to
create and control parallel computation.

In order to be interesting in this regard, I would want
to see either a much higher level very coarse-grain
abstraction (e.g. mapreduce), or finer grain abstractions
as found in, say, CSP derived languages/libraries, or Java,
or Erlang.

The details vary, but all parallelism in languages is done by using OS
processes or threads, with some kind of inter-process or inter-thread
communication (queues, pipes, locks, shared memory, etc.).
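
For example, a minimal Python sketch of the queue-based inter-process
communication mentioned above - message passing between OS processes
rather than shared mutable state:

# Two OS processes communicating over a queue.
from multiprocessing import Process, Queue

def producer(q):
    for i in range(5):
        q.put(i * i)        # send a message
    q.put(None)             # sentinel: no more work

def consumer(q):
    while True:
        item = q.get()      # blocks until a message arrives
        if item is None:
            break
        print("got", item)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    c = Process(target=consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()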
 
On 24/7/20 3:06 am, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
\"How does a software project go a year late? One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

We still have \"an unknown error occurred\". No programming language can
prevent laziness... However...

In the noughties, I introduced our team (about 40 programmers, mostly
using C++) to the use of an error management data type that meant you
could not write a function that returned an error condition without the
required ceremony: defining the error condition in a specialised error
definition language, including the name, context variables, message
template (text with slots for variable expansion), and language context
(for translators), with encouragement to construct the message in three
parts: problem/reason/solution. Only after this had been done would the
build-time code generators allow the error condition to be signalled -
with appropriate values for any variables.
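
Roughly the flavour of the idea, sketched here in Python rather than
the C++ and code-generator machinery described above; the ErrorDef
class and its fields are hypothetical, just to show the "no message
without the ceremony" shape:

# Hypothetical sketch: an error condition must be fully declared
# (name, message template, problem/reason/solution) before it can
# be signalled with context values.
from dataclasses import dataclass

@dataclass(frozen=True)
class ErrorDef:
    name: str
    template: str       # message text with {slots} for context variables
    problem: str
    reason: str
    solution: str
    language: str = "en"

    def signal(self, **context):
        return RuntimeError(self.template.format(**context))

DISK_FULL = ErrorDef(
    name="DISK_FULL",
    template="Cannot write {path}: only {free_mb} MB free on the volume.",
    problem="The output file could not be written.",
    reason="The target volume is out of space.",
    solution="Free some space or choose another volume, then retry.",
)

print(DISK_FULL.signal(path="/var/log/app.log", free_mb=3))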

The cost was low, but the pay-off was immense; we could automatically
determine which messages had been translated into which languages and
give translators all they needed for other languages. We could signal an
error on a server, but display the message in a different client
language context. We could print a PDF error message manual that
contained additional context and help. Etc, etc... it was a powerful
system...

The need for good error reporting is a problem in *all* software and for
*all* users, yet there is almost no support for it in existing languages
or frameworks.

Poor error reporting is responsible for more than 50% of user
frustration with information technology.

Clifford Heath.
 
On Friday, July 24, 2020 at 4:34:25 AM UTC+10, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-07-22 20:14, John Larkin wrote:

I actually designed a CPU with all TTL logic. It had three
instructions and a 20 KHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.

I have a book, IBM\'s Early Computers. In early days, nobody was
entirely sure what a computer was.


It\'s a fun book, and does a lot to deflate the Harvard spin, which is
always good.

The sequel on the 360 and early 370s is a good read too, as is \"The
Mythical Man Month\" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
\"How does a software project go a year late? One day at a time.\"

Obligatory Real Programmer reference:

http://www.cs.utah.edu/~elb/folklore/mel.html

Cheers

Phil Hobbs

Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.

Isn\'t our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?

The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP processor parallelism when executing
standard languages such as C. It appears that there\'s an
order of magnitude to be gained.

Incidentally, Godard\'s background is the Burroughs/Unisys
Algol machines, plus /much/ more.


2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allows
programmers to express it cleanly. They merge Hoare\'s CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGA.

I\'ve used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)

We don\'t need more compute power. We need reliability and user
friendliness.

Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete\'s sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.

A tool that can cut wood can cut your hand; the only way to totally prevent
that is to add safety features until it cannot cut anything anymore.

Why not design a compute architecture that is fundamentally safe,
instead of endlessly creating and patching bugs?

https://en.wikipedia.org/wiki/Z_notation

is an example of that approach. It doesn\'t seem to be ruling the world at the moment.

https://en.wikipedia.org/wiki/Hoare_logic

is a bit older.
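
The core of Hoare logic is the triple {P} C {Q}: if precondition P
holds before command C runs, postcondition Q holds afterwards. A toy
rendering in Python, with asserts standing in for P and Q (purely
illustrative - nothing like the machine-checked proofs used for Viper):

# Hoare triple: {x >= 0}  y = int_sqrt(x)  {y*y <= x < (y+1)*(y+1)}
def int_sqrt(x):
    assert x >= 0                             # precondition P
    y = 0
    while (y + 1) * (y + 1) <= x:             # invariant: y*y <= x
        y += 1
    assert y * y <= x < (y + 1) * (y + 1)     # postcondition Q
    return y

print(int_sqrt(10))   # -> 3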

The Viper provably correct computer is more recent - 1987.

https://www.cl.cam.ac.uk/archive/mjcg/papers/cohn1987.pdf

It doesn't seem to have got anywhere either. I heard a bit about it before we left Cambridge (UK) in 1993.

--
Bill Sloman, Sydney
 
On 24/07/20 03:33, Bill Sloman wrote:
On Friday, July 24, 2020 at 4:34:25 AM UTC+10, John Larkin wrote:
[...]
Why not design a compute architecture that is fundamentally safe,
instead of endlessly creating and patching bugs?

https://en.wikipedia.org/wiki/Z_notation

is an example of that approach. It doesn't seem to be ruling the world at the moment.

https://en.wikipedia.org/wiki/Hoare_logic

is a bit older.

The Viper provably correct computer is more recent - 1987.

https://www.cl.cam.ac.uk/archive/mjcg/papers/cohn1987.pdf

It doesn't seem to have got anywhere either. I heard a bit about it before we left Cambridge (UK) in 1993.

Viper was interesting - a processor with a formal
mathematical proof of correctness. RSRE absolutely did
not want people to think it might be used in missile
fire control systems, oh no, never.

IIRC they flogged it to the Australians; the Australians then
noted there was a missing step between the top-level spec
and the implementation. They sued and won.

There are three problems with any component that is
mathematically proven:
- most of the system isn't mathematically proven
- there is no proof that the initial spec itself is "correct"
- it is too difficult to do in practice
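
To illustrate the second point with a toy example (hypothetical, in C):
a routine can be proven against its written spec and still be useless,
because the spec fails to capture the intent.

    /* Intent:       return the larger of a and b.
       Written spec: result >= a.
       The implementation below provably meets the written spec
       and is obviously not what anyone wanted.                  */
    int max_of(int a, int b)
    {
        (void)b;      /* b never consulted - the spec doesn't require it */
        return a;     /* result >= a holds trivially                     */
    }

Proof just pushes the problem up a level: now the spec has to be right.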

I don't remember NewSpeak, the associated programming
language, ever becoming practical.
 
On a sunny day (Thu, 23 Jul 2020 21:40:02 +0200) it happened Jeroen Belleman
<jeroen@nospam.please> wrote in <rfcp2h$edp$1@gioia.aioe.org>:

We don't want productivity, as in more new versions. We
want quality, robustness and durability.

Jeroen Belleman

Yes, which solutions and programming languages are suitable
depends on the application hardware.
For example, a firing solution for a micro-sized drone will
have to have the math written for a very simple embedded system, maybe even in asm.
The same firing solution for, say, a jet can be done in whatever high-level language makes you drool.
I like Phil Hobbs' link to the story about that programmer and his use of the drum revolution time.
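
By way of illustration only (the names and scaling are made up), "math
written for a very simple embedded system" often ends up as fixed point
rather than floating point:

    #include <stdint.h>

    /* Q15 fixed point: values scaled by 2^15, so no FPU is needed.
       Sketch only - no rounding or saturation handling.           */
    static int16_t q15_mul(int16_t a, int16_t b)
    {
        int32_t p = (int32_t)a * (int32_t)b;   /* 30-bit product fits in 32 */
        return (int16_t)(p >> 15);             /* rescale back to Q15       */
    }

On an FPU-less micro that is one multiply and a shift, where the float
equivalent would drag in a soft-float library.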

https://www.unisa.edu.au/Media-Centre/Releases/2020/is-it-a-bird-a-plane-not-superman-but-a-flapping-wing-drone/
For better pictures:
https://robotics.sciencemag.org/

And apart from the number of bugs in the higher-level version,
the failure rate also goes up with the number of components and the chip size in a system,
especially in a radiation environment.
So from a robustness POV my choice would be the simple embedded version; it is not as easy to hack as most (Windows?) PCs either, and it uses less power, so it is
greener.
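
A rough worked example of that scaling, assuming independent parts in
series so that failure rates simply add (lambda_system = lambda_1 + ... + lambda_N,
with 1 FIT = one failure per 10^9 hours): 100 parts at 100 FIT each give
10,000 FIT, i.e. an MTBF of about 10^5 hours (roughly 11 years), while 10
parts at the same rate give ten times that. The numbers are made up; the
point is only that part count enters linearly.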
 
