the hot new programming language

On Sat, 04 Jul 2015 11:42:43 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. But then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course, but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

Some of the power dissipation issues could be solved if new materials
tolerating higher temperatures could be found. This would increase the
junction-to-ambient temperature difference, enhancing heat transfer.

For supercomputers it might even make sense to immerse the whole
system in distilled, deionized water. The component cases would run
slightly above 100 °C, with a lot of the heat removed by vaporization.
Of course, new cold water needs to be added to replace the vaporized
water. I have not heard of vapor-cooled electronics for decades, but
some big transmitting tubes used vapor cooling.

Of course, if you use a single synchronous clock to drive everything in
a system, there are going to be a lot of time-of-flight problems.
However, splitting the system into multiple "islands" (such as
processor+local memory) and running every island on its own private
clock should help a lot, exploiting the close proximity within an
island. Of course, the communication between islands needs to be
asynchronous, e.g. using self-clocked messages (such as cache lines).

If a full wafer is used, high-density locally clocked islands with
plenty of space between them would help heat removal as well as allow
space for running lines between the islands.

While thermal design must assume 100 % computing load on each island,
the average power consumption would be lower, since every processor not
doing useful work at the moment can be stopped, saving a lot of power.
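The islands idea above is what chip designers call GALS (globally asynchronous, locally synchronous). The same pattern can be sketched in software: each "island" runs on its own private timebase, and the only coupling between them is self-contained messages through a queue. The names and tick rates below are invented for illustration:

```python
import threading
import queue
import time

def island(name, period_s, outbox, n_msgs):
    """One locally synchronous 'island': it runs on its own private
    clock (period_s) and talks to the rest of the system only through
    asynchronous, self-contained messages."""
    for i in range(n_msgs):
        outbox.put(f"{name}:{i}")   # message carries everything needed
        time.sleep(period_s)        # this island's private clock tick
    outbox.put(None)                # sentinel: island finished

link = queue.Queue()                # the asynchronous inter-island channel
received = []

producer = threading.Thread(target=island, args=("cpu0", 0.001, link, 5))
producer.start()
while True:                         # the consuming 'island' runs on its
    msg = link.get()                # own schedule; no shared clock
    if msg is None:
        break
    received.append(msg)
producer.join()
print(received)
```

The point of the sketch is that neither side ever needs to know the other's clock; ordering and framing travel with the messages themselves.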
 
On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
<jlarkin@highlandtechnology.com> wrote:

http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be next, since it is widely used in health care
and is the core of new systems. It is also used in some banking
applications. The youngest MUMPS programmers I know of are about 55
years old, so the supply of programmers won't last very long.

One of the strangest job advertisements that I have seen was for a
maintenance programmer fluent in MACRO-11 (PDP-11) assembler. The job
was for maintaining a control system into year 2050 on a spent fuel
rod cooling pool in a nuclear power plant in Canada.
 
On 05/07/15 07:32, upsidedown@downunder.com wrote:

One of the strangest job advertisements that I have seen was for a
maintenance programmer fluent in MACRO-11 (PDP-11) assembler. The job
was for maintaining a control system into year 2050 on a spent fuel
rod cooling pool in a nuclear power plant in Canada.

Blimey - I thought I was a dinosaur because I dabbled with Macro-32 (the
VAX variant) a few times!

I wonder if they are still maintaining actual PDP-11 hardware or whether
it's all running on an emulator?
 
On 2015-07-05, krw <krw@nowhere.com> wrote:
On 4 Jul 2015 21:47:03 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

The tax could be inverse on holding time, tapering to zero after, say,
5 years. That would change a lot.

such complexity is unneeded, inflation already does that, and faster.

Inflation does the opposite.

No. It does the opposite of the opposite: it devalues cash, in effect
increasing the cash value of investments.

It's a tax that is proportional to the holding time.

only if you're holding cash.

--
umop apisdn
 
On 5.7.15 09:32, upsidedown@downunder.com wrote:
On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:


http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be next, since it is widely used in health care
and is the core of new systems. It is also used in some banking
applications. The youngest MUMPS programmers I know of are about 55
years old, so the supply of programmers won't last very long.

I'm just waiting to see if Fortran-II comes next (or IBM 1620 SPS
assembler).

One of the strangest job advertisements that I have seen was for a
maintenance programmer fluent in MACRO-11 (PDP-11) assembler. The job
was for maintaining a control system into year 2050 on a spent fuel
rod cooling pool in a nuclear power plant in Canada.

I might be competent, but I'll be 104 years old in 2050.

--

-TV
 
On Sun, 05 Jul 2015 09:26:01 +0100, Tim Watts <tw_usenet@dionic.net>
wrote:

On 05/07/15 07:32, upsidedown@downunder.com wrote:

One of the strangest job advertisements that I have seen was for a
maintenance programmer fluent in MACRO-11 (PDP-11) assembler. The job
was for maintaining a control system into year 2050 on a spent fuel
rod cooling pool in a nuclear power plant in Canada.


Blimey - I thought I was a dinosaur because I dabbled with Macro-32 (the
VAX variant) a few times!

I wonder if they are still maintaining actual PDP-11 hardware or whether
it's all running on an emulator?

I would not at all be surprised if they were running a real
PDP-11/34 or 11/70, which use 74 and 74S series TTL chips
respectively; finding replacement chips should not be too hard for a
few decades. Anyway, the strict certification requirements in the
nuclear industry might have made it cheaper to run the old system than
to certify some PDP emulator.

In industrial applications that might run decades or even a century,
it is common to replace the control system every 20-30 years. For many
systems they talk about mid-life updates.

When a fuel rod has been in the reactor for 1-3 years, it is removed
and moved to a cooling pool for at least a decade, or until a final
disposal site can be found. The fuel rods in the pool will generate
heat for at least a decade after the plant has finally been closed
down. This might explain the year 2050 requirement, even if the
reactor is planned to be closed much earlier. IMHO, the company should
have certified a new replacement system for the next 35 years.
 
On Sun, 05 Jul 2015 09:32:36 +0300, upsidedown@downunder.com wrote:

On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:


http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be the next, since it is widely used in health care
and is the core for new systems. It is also used in some banking
applications. The youngest MUMPS programmer I know of are about 55
years old, so the supply of programmers doesn't last very long.

Is MUMPS still in use? It was a spinoff of the FOCAL interpreter.

FOCAL was great when all you had was paper tape and a few kilobytes of
memory. Barbaric in absolute terms.
 
In article <eaiipapvp55cseiv3c98uo01ee8546l9b3@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Sun, 05 Jul 2015 09:32:36 +0300, upsidedown@downunder.com wrote:

On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:


http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be the next, since it is widely used in health care
and is the core for new systems. It is also used in some banking
applications. The youngest MUMPS programmer I know of are about 55
years old, so the supply of programmers doesn't last very long.


Is MUMPS still in use? It was a spinoff of the FOCAL interpreter.

FOCAL was great when all you had was paper tape and a few kilobytes of
memory. Barbaric in absolute terms.

We used paper tape back when I was in trade school, going through
exploratory in machine shop for the CNCs...

Yes, it was barbaric but it was the thing back then :)

Jamie
 
On 5 Jul 2015 08:51:20 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-05, krw <krw@nowhere.com> wrote:
On 4 Jul 2015 21:47:03 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

The tax could be inverse on holding time, tapering to zero after, say,
5 years. That would change a lot.

such complexity is unneeded, inflation already does that, and faster.

Inflation does the opposite.

No. It does the opposite of the opposite: it devalues cash, in effect
increasing the cash value of investments.

But it does nothing to dampen the trading feedback loop. It doesn't
matter what the frequency is, the tax is the same.

It's a tax that is proportional to the holding time.

only if you're holding cash.

No, investments are the same. John's proposal was to dampen
oscillations caused by automated trading programs. A tax proportional
to time does nothing here. It has to penalize such transactions to
have any effect.
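For what it's worth, the "inverse on holding time" scheme from upthread is simple arithmetic: a transaction-tax rate that starts high and tapers to zero at five years. A hypothetical sketch (the 1 % starting rate and the linear taper are invented for illustration; the proposal upthread specified neither):

```python
def holding_tax_rate(days_held, start_rate=0.01, taper_days=5 * 365):
    """Transaction-tax rate that tapers linearly from start_rate at
    t = 0 down to zero once a position has been held taper_days."""
    if days_held >= taper_days:
        return 0.0
    return start_rate * (1 - days_held / taper_days)

print(holding_tax_rate(0))        # an instant flip pays the full rate
print(holding_tax_rate(365))      # one year in: 80 % of the full rate
print(holding_tax_rate(6 * 365))  # past five years: tax-free
```

The shape of the taper (linear, exponential, stepped) is a policy knob; the point is only that the rate is a decreasing function of holding time, so high-frequency flips bear almost the whole burden.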
 
On 07/05/2015 01:49 AM, upsidedown@downunder.com wrote:
On Sat, 04 Jul 2015 11:42:43 -0400, Phil Hobbs
hobbs@electrooptical.net> wrote:

On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. But then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course, but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

Some of the power dissipation issues could be solved if new materials
tolerating higher temperatures could be found. This would increase the
junction-to-ambient temperature difference, enhancing heat transfer.

It isn't just the maximum temperature that's the problem, it's the CTE
mismatch, stress due to temperature gradients (aka the Hot Dog Effect),
and thermal cycling. Chip sizes are already limited by
thermally-induced stress on the corner balls, despite very stiff
underfill and a lot of work on the pad metal.
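For a rough feel for the numbers behind that corner-ball problem: the fully constrained thermal-mismatch stress is about sigma = E × Δalpha × ΔT. A back-of-the-envelope sketch with assumed, typical-textbook values (not taken from any particular package):

```python
def cte_stress(modulus_pa, cte_a, cte_b, delta_t):
    """Fully constrained thermal-mismatch stress: sigma = E * d_alpha * dT."""
    return modulus_pa * abs(cte_a - cte_b) * delta_t

# Assumed numbers: a solder joint (E ~ 30 GPa) between silicon
# (~2.6 ppm/K) and an organic substrate (~17 ppm/K), over an 80 K swing.
sigma = cte_stress(30e9, 2.6e-6, 17e-6, 80)
print(sigma / 1e6)  # tens of MPa, in the neighborhood of solder strength
```

Real joints are nowhere near fully constrained, which is exactly what underfill and compliant pad metallurgy are for; but the estimate shows why the stress scales with die size and temperature swing.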

IBM glass-ceramic MCMs were actually processed so as to deliberately
crack the copper lands free from the top layers of glass-ceramic, for
this reason. (The old alumina/refractory-metal MCMs never had a single
field failure throughout >20 years of manufacture, because the metal
was always in compression. In glass-ceramic bricks, the copper is in
tension, which caused no end of trouble. They stuck with it largely for
political reasons, but saving somebody's job cost the company a _lot_ of
money.)

For supercomputers it might even make sense to immerse the whole
system in distilled, deionized water. The component cases would run
slightly above 100 °C, with a lot of the heat removed by vaporization.
Of course, new cold water needs to be added to replace the vaporized
water. I have not heard of vapor-cooled electronics for decades, but
some big transmitting tubes used vapor cooling.

IBM Z-series are water cooled, or were a few years ago when I saw my
last one. But that's water-cooled heat sinks, maybe with silicon
microgrooves. You don't want to just immerse them in water, because you
can get vapour locks and melt things. (That's the classical way for
overheated boilers to blow up.) Gotta keep pressure and flow on.

Of course, if you use a single synchronous clock to drive everything in
a system, there are going to be a lot of time-of-flight problems.
However, splitting the system into multiple "islands" (such as
processor+local memory) and running every island on its own private
clock should help a lot, exploiting the close proximity within an
island. Of course, the communication between islands needs to be
asynchronous, e.g. using self-clocked messages (such as cache lines).

That's been done for years and years--since the '80s that I know about.
Big chips haven't been globally synchronous for quite a while now.

If a full wafer is used, high-density locally clocked islands with
plenty of space between them would help heat removal as well as allow
space for running lines between the islands.

No. It's a huge waste of very expensive silicon, and the yield problems
would be horrible. That was what killed Trilogy back in the day.


While thermal design must assume 100 % computing load on each island,
the average power consumption would be lower, since every processor not
doing useful work at the moment can be stopped, saving a lot of power.

Again, that's already done, and has been for a long time.

Packaging is hard.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Sun, 05 Jul 2015 12:46:43 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 07/05/2015 01:49 AM, upsidedown@downunder.com wrote:
On Sat, 04 Jul 2015 11:42:43 -0400, Phil Hobbs
hobbs@electrooptical.net> wrote:

On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. But then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course, but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

Some of the power dissipation issues could be solved if new materials
tolerating higher temperatures could be found. This would increase the
junction-to-ambient temperature difference, enhancing heat transfer.

It isn't just the maximum temperature that's the problem, it's the CTE
mismatch, stress due to temperature gradients (aka the Hot Dog Effect),
and thermal cycling. Chip sizes are already limited by
thermally-induced stress on the corner balls, despite very stiff
underfill and a lot of work on the pad metal.

IBM glass-ceramic MCMs were actually processed so as to deliberately
crack the copper lands free from the top layers of glass-ceramic, for
this reason. (The old alumina/refractory-metal MCMs never had a single
field failure throughout >20 years of manufacture, because the metal
was always in compression. In glass-ceramic bricks, the copper is in
tension, which caused no end of trouble. They stuck with it largely for
political reasons, but saving somebody's job cost the company a _lot_ of
money.)


For supercomputers it might even make sense to immerse the whole
system in distilled, deionized water. The component cases would run
slightly above 100 °C, with a lot of the heat removed by vaporization.
Of course, new cold water needs to be added to replace the vaporized
water. I have not heard of vapor-cooled electronics for decades, but
some big transmitting tubes used vapor cooling.

IBM Z-series are water cooled, or were a few years ago when I saw my
last one. But that's water-cooled heat sinks, maybe with silicon
microgrooves. You don't want to just immerse them in water, because you
can get vapour locks and melt things. (That's the classical way for
overheated boilers to blow up.) Gotta keep pressure and flow on.


Of course, if you use a single synchronous clock to drive everything in
a system, there are going to be a lot of time-of-flight problems.
However, splitting the system into multiple "islands" (such as
processor+local memory) and running every island on its own private
clock should help a lot, exploiting the close proximity within an
island. Of course, the communication between islands needs to be
asynchronous, e.g. using self-clocked messages (such as cache lines).

That's been done for years and years--since the '80s that I know about.
Big chips haven't been globally synchronous for quite a while now.

At *least* the early '70s. The islands were fairly large by today's
standards, though. It's all a matter of speed and size.
If a full wafer is used, high-density locally clocked islands with
plenty of space between them would help heat removal as well as allow
space for running lines between the islands.

No. It's a huge waste of very expensive silicon, and the yield problems
would be horrible. That was what killed Trilogy back in the day.


While thermal design must assume 100 % computing load on each island,
the average power consumption would be lower, since every processor not
doing useful work at the moment can be stopped, saving a lot of power.


Again, that's already done, and has been for a long time.

Along with scaling the voltage and clocks on islands with lighter
load.
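The payoff of scaling voltage and clock together follows from the classic CMOS switching-power estimate P ≈ C·V²·f: because V and f drop together, power falls much faster than throughput. A quick sketch with invented island numbers:

```python
def dynamic_power(c_eff, v, f):
    """Classic CMOS switching-power estimate: P = C_eff * V^2 * f."""
    return c_eff * v**2 * f

# Hypothetical island: 1 nF switched capacitance, 1.2 V rail, 2 GHz clock.
p_full = dynamic_power(1e-9, 1.2, 2e9)
# Lightly loaded: halve the clock and drop the rail to 0.9 V.
p_scaled = dynamic_power(1e-9, 0.9, 1e9)
print(p_full, p_scaled, p_scaled / p_full)  # power falls to ~28 % of full
```

Halving the clock alone would halve the power; the V² term is what buys the rest, which is why DVFS governors chase the lowest voltage the target frequency will tolerate.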

Packaging is hard.

The bleeding edge is hard.
 
On Sun, 05 Jul 2015 08:14:25 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

On Sun, 05 Jul 2015 09:32:36 +0300, upsidedown@downunder.com wrote:

On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:


http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be next, since it is widely used in health care
and is the core of new systems. It is also used in some banking
applications. The youngest MUMPS programmers I know of are about 55
years old, so the supply of programmers won't last very long.


Is MUMPS still in use? It was a spinoff of the FOCAL interpreter.

FOCAL was great when all you had was paper tape and a few kilobytes of
memory. Barbaric in absolute terms.

MUMPS is still actively marketed as a part of EPIC
https://en.wikipedia.org/wiki/Epic_Systems for health care
applications.
 
On Sun, 05 Jul 2015 08:14:25 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

On Sun, 05 Jul 2015 09:32:36 +0300, upsidedown@downunder.com wrote:

On Wed, 01 Jul 2015 12:13:44 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:


http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

The revival of Basic is next.

MUMPS (or M) might be next, since it is widely used in health care
and is the core of new systems. It is also used in some banking
applications. The youngest MUMPS programmers I know of are about 55
years old, so the supply of programmers won't last very long.


Is MUMPS still in use? It was a spinoff of the FOCAL interpreter.

FOCAL was great when all you had was paper tape and a few kilobytes of
memory. Barbaric in absolute terms.

The original MUMPS on early PDPs was nice, since it handled local
variables and disk-stored variables in nearly the same way.

These days (Open)VMS/Linux/WindowsNT use memory-mapped files for the
same purpose. The programmer sees a multi-petabyte array (on 64-bit
processors), can read or write any byte, and the OS uses the very
effective page-fault mechanism to load pages from physical disk as
well as to save updated memory locations back to disk.
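That page-fault mechanism is directly visible from user space on all of those systems. A minimal sketch in Python (using a throwaway temp file): the writes and reads are ordinary memory accesses, and the OS does the disk I/O behind them.

```python
import mmap
import os
import tempfile

# Create a file-backed region; reads and writes go through the OS page
# cache, with page faults pulling pages in from disk on demand.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)              # reserve one page of backing store
with mmap.mmap(fd, 4096) as m:
    m[0:5] = b"hello"               # write: just a memory store
    data = bytes(m[0:5])            # read: just a memory load
os.close(fd)

with open(path, "rb") as f:         # the OS carried the page to the file
    on_disk = f.read(5)
os.remove(path)
print(data, on_disk)
```

No explicit read() or write() call ever touches the mapped bytes; the page-fault handler and the writeback machinery do all the work, which is exactly the trick the post describes.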
 
On Sun, 05 Jul 2015 21:57:41 +0300, upsidedown@downunder.com Gave us:

MUMPS is still actively marketed as a part of EPIC
https://en.wikipedia.org/wiki/Epic_Systems for health care
applications.

Imagine that.... MUMPS never got eradicated. :)
 
On 2015-07-05, krw <krw@nowhere.com> wrote:
On 5 Jul 2015 08:51:20 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-05, krw <krw@nowhere.com> wrote:
On 4 Jul 2015 21:47:03 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

The tax could be inverse on holding time, tapering to zero after, say,
5 years. That would change a lot.

such complexity is unneeded, inflation already does that, and faster.

Inflation does the opposite.

No. It does the opposite 0f the opposite, it devalues cash. In effect
increasing the cash value of inverstments.

But it does nothing to dampen the trading feedback loop. It doesn't
matter what the frequency is, the tax is the same.

it reduces the gain, once the gain is below 1.0 short term trading is
pointless.

It's a tax that is proportional to the holding time.

only if you're holding cash.

No, investments are the same.

bank deposits count as "cash". Invest in something better-performing.

--
umop apisdn
 
DecadentLinuxUserNumeroUno wrote:
On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

Slow light - is that found in the slow glass of SF fame?
 
On Mon, 06 Jul 2015 01:59:46 -0700, Robert Baer
<robertbaer@localnet.com> Gave us:

DecadentLinuxUserNumeroUno wrote:
On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

Slow light - is that found in the slow glass of SF fame?

https://en.wikipedia.org/wiki/Slow_light
 
On Sat, 4 Jul 2015 13:16:53 -0400 M Philbrook <jamie_ka1lpa@charter.net>
wrote in Message id:
<MPG.3001df0e874eff80989c63@news.eternal-september.org>:

In article <usqfpa517onoc09j0ff9jimdq6s7t3cviu@4ax.com>,
jlarkin@highlandtechnology.com says...

On Sat, 04 Jul 2015 06:33:25 -0400, DecadentLinuxUserNumeroUno
DLU1@DecadentLinuxUser.org> wrote:

On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

The 4004 was slow enough!

I got some!

One day I may need to melt them down for the gold :)

Jamie

Ceramic with gold pins? Don't do that!!
http://www.ebay.com/itm/Vintage-INTEL-C4004-White-Golden-Plate-Extremly-Rare-Ceramic-Processor-Cpu-Good-/231608504079?pt=LH_DefaultDomain_0&hash=item35ecf14f0f
 
On Sat, 4 Jul 2015 13:16:53 -0400 M Philbrook <jamie_ka1lpa@charter.net>
wrote in Message id:
<MPG.3001df0e874eff80989c63@news.eternal-september.org>:

In article <usqfpa517onoc09j0ff9jimdq6s7t3cviu@4ax.com>,
jlarkin@highlandtechnology.com says...

On Sat, 04 Jul 2015 06:33:25 -0400, DecadentLinuxUserNumeroUno
DLU1@DecadentLinuxUser.org> wrote:

On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

The 4004 was slow enough!

I got some!

One day I may need to melt them down for the gold :)

Jamie

http://www.ebay.com/itm/Vintage-INTEL-C4004-White-Golden-Plate-Extremly-Rare-Ceramic-Processor-Cpu-Good-/231608504079?pt=LH_DefaultDomain_0&hash=item35ecf14f0f

Unbelievable!
 
On 07/07/15 09:07, Phil Hobbs wrote:
This was a philosophical commonplace up until our culture lost its mind,
sometime around 1955.
NB this has nothing to do with any particular religion--Plato and
Aristotle knew all about it.

Plato is identified (probably wrongly) as the source of this bizarre
material/spiritual dualism which extends all the way to more modern
Cartesian thinking. It's purest mysticism, and doesn't answer the main
question that it purports to: how can our lives have significance if we
are machines with no "truly" free will? It does not - and cannot -
answer this question because it just relegates "significance" to another
realm - which has the same problems. If that realm has no rules, it's
chaos, and if it does, how is it not a machine? It's turtles all the way
down, folk.

The populist rubbish in which Larkin suggests the two realms are joined
by quantum uncertainties (without breaking the idea of the physical
world as mechanistic) has never been demonstrated to be plausible, and
represents just another attempt to clutch at straws in the search for a
meaning beyond our individual finite lives. Why should the "spiritual"
realm be any more capable of carrying meaning than the one world we
*can* observe? Turtles again, folk.

You reckon it was around 1955 that people started to call bullshit on
this nonsense? Do you associate any particular event or person with
that shift?

Brain structure and operation is quite unlike any existing logic
machine, but it's still a logic machine. The recent adoption by the IBM
Cortical Learning Center of Jeff Hawkins' (of Numenta) approach called
Hierarchical Temporal Memory (HTM) will show that our brains are not
magical. The resources that IBM are bringing to bear on realising the
incredible recent achievements of small-scale HTM - like, they're now
starting to build wafer-scale devices consisting of eventually up to
half a dozen stacked full wafers - will tell the truth, and this
mystical nonsense will finally be seen for what it is - a failed attempt
to dream that humans have some significance beyond just the arrangement
of molecules that make us.

One life, then it ends. Make a difference while you can, don't spend it
preparing for a future life where eternity requires that no difference
can ever be made.

Clifford Heath.
 
