Software revolution?...

Dean Hoffman

This article claims there are dramatic changes on the horizon.
Supposedly Google, Facebook, etc are dinosaurs.
<https://www.americanthinker.com/articles/2020/12/how_google_falls.html>
 
On 8/12/20 2:38 am, Jeroen Belleman wrote:
Dean Hoffman wrote:
    This article claims there are dramatic changes on the horizon.
Supposedly  Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

But there is *nothing* in that article. It's mindless
prose. No references, no motivation, no examples. Why
should we believe that? It's nonsense.

Face it, software is complicated and getting more so
by the day. There are no miracles.

If you look at the author's blog, you can see what he's claiming.
<https://jayvalentine.com/as-400-ideal-landing-zone-for-system-oriented-programming/>

It's bullshit, of course. The reason the mobile-phone billing system
was so bad is that it's incredibly bloated, like so many enterprise
systems. Any green-field solution is going to be 1000x better.

It is no surprise (and requires no special majick) that a single cheap
computer can bill 5E7 customers in hours, when the current system
requires a whole data centre.

CH
 
In article <rqlinl$1npm$3@gioia.aioe.org>,
Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 07/12/2020 14:46, Dean Hoffman wrote:
    This article claims there are dramatic changes on the horizon.
Supposedly  Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

People have been advertising the next software silver bullet that will
solve all the world's problems since the first programmable hardware ever
existed. Some of them were genuine improvements over what went before.

Mostly they quickly prove to be base metal or even worse - pure hype.

Yup. I've got a collection of books by Robert Glass, collecting some
of his columns originally published in Datamation back in the 1970s,
in which he discusses some of those Grand Innovations of the era.
Some were incrementally useful, some very useful, and many were snake
oil.

Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.
 
On Monday, December 7, 2020 at 10:39:49 AM UTC-5, Don Y wrote:
The biggest challenge, IMO, still lies in the meatware. Folks
just aren't comfortable thinking RELIABLY about concurrent actions.
Too often, they internalize them as serial ones that happen to
be acting in parallel (which isn't really the case). People
have a hard time dealing with PSEUDO-concurrency, despite the
fact that it gives you lots of implicit guarantees that TRUE
concurrency can't claim.

Man, it must be a bitch designing hardware where everything happens in parallel.


> Finally, the sloppy mentality of most "coders" will have to change.

Or they can try designing hardware where there is real parallelism. :)

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
 
On 12/7/2020 5:48 PM, Dave Platt wrote:
Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.

What I find amusing is the alternation of "big mainframe" with
"individual workstations" that keeps cycling through.
 
On 08/12/20 04:55, Don Y wrote:
On 12/7/2020 5:48 PM, Dave Platt wrote:
Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.

What I find amusing is the alternation of "big mainframe" with
"individual workstations" that keeps cycling through.

Timesharing bureaux -> personal computer -> cloud computing.
 
On 07/12/20 15:45, Martin Brown wrote:
On 07/12/2020 14:46, Dean Hoffman wrote:
     This article claims there are dramatic changes on the horizon.
Supposedly  Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

People have been advertising the next software silver bullet that will solve all
the world's problems since the first programmable hardware ever existed. Some of
them were genuine improvements over what went before.

Mostly they quickly prove to be base metal or even worse - pure hype.

Just so.

I've been successfully dodging the hype since the early 80s,
without too much difficulty.

A more interesting decision is whether something /isn't/ hype
but has something beneficial. I've dabbled with a couple of
blind alleys there, but I'm satisfied with what I've latched
onto.


He is right about one thing though. CPU speed has now maxed out and feature size
cannot go down much more so parallel algorithms are going to be the future.
Usually this comes with much pain and suffering unless you happen to be lucky
and have a problem that parallelises cleanly.

There are several aspects to this.

Obviously the algorithms are important, and Amdahl's "law"
still applies.
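
To put a rough number on that: with S = 1 / ((1 - p) + p/n), even a small
serial fraction caps the achievable speedup no matter how many cores you
add. A minimal sketch in Go (chosen only because the language comes up
below; the 5% serial figure is illustrative, not from the article):

package main

import "fmt"

// amdahl returns the ideal speedup for a workload whose parallelisable
// fraction is p, run on n processors: S = 1 / ((1-p) + p/n).
func amdahl(p float64, n int) float64 {
	return 1.0 / ((1.0 - p) + p/float64(n))
}

func main() {
	// With only 5% strictly serial work, speedup saturates near 20x
	// regardless of core count.
	for _, n := range []int{2, 8, 64, 1024} {
		fmt.Printf("p=0.95, n=%4d -> speedup %.1fx\n", n, amdahl(0.95, n))
	}
}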

Then there's the ability to recognise and understand architectural
strategies and design patterns which are beneficial. There's a lot
of good stuff available from the hardware and real-time worlds that
just isn't understood by the average softie.

And, of course, there are the language features which guarantee
multithreaded programs can work predictably. Or, more often, the
absence of such features and guarantees.

At a low/medium level I would like to see technologies that offer
more than the current best: Hoare's CSP. You find that in xC and
echoes of it in Go and Rust.
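
For anyone who hasn't met the CSP style: a minimal sketch of the idea in
Go (the pipeline and names are invented for illustration). Each stage is
an independent process that shares no state and interacts with its
neighbours only by passing messages over channels:

package main

import "fmt"

// A tiny CSP-flavoured pipeline: each stage is an independent process
// (goroutine) owning its own state, talking to its neighbour only over
// channels; no shared memory, no locks.
func squarer(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

func main() {
	nums := make(chan int)
	squares := make(chan int)

	go squarer(nums, squares)

	go func() {
		for i := 1; i <= 5; i++ {
			nums <- i
		}
		close(nums)
	}()

	for s := range squares {
		fmt.Println(s) // 1 4 9 16 25
	}
}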


> Most non-trivial problems cannot, which is where life gets interesting.

Yup. But too often it /looks/ like there are solutions, until
you understand the corner cases.
 
On a sunny day (Tue, 8 Dec 2020 07:37:28 +0000) it happened Tom Gardner
<spamjunk@blueyonder.co.uk> wrote in <Y2GzH.274722$kpV8.39994@fx36.ams4>:

On 08/12/20 04:55, Don Y wrote:
On 12/7/2020 5:48 PM, Dave Platt wrote:
Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.

What I find amusing is the alternation of "big mainframe" with
"individual workstations" that keeps cycling through.

Timesharing bureaux -> personal computer -> cloud computing.

What worries me in all this is not the computah power, but the vulnerability.
An enemy can just target data centers, either with bombers or missiles
or using local agents.
The result: imagine nobody can pay with their card,
no [cellphone] communications, no data preserved of past and present.
No food, hunger...
And such an attack is so easy.
Plus some extra powerful solar cycles could be coming:
https://www.sciencedaily.com/releases/2020/12/201207142308.htm
and knock out some satellites..

We have become way too dependent on computahs and electronics.

On top of that, the ever-growing US deficit may be 'fixed' by inflation
after all: the US dollar is falling a lot relative to the Euro, for example.
The standard solution in the past was to start wars, enlist all the jobless
and burn them in combat, and have some of them building infrastructure.
It starts with: "Do not think what your country can do for you,
but think what you can do for your country".
History has a lesson there.

From my perspective the new Biden government may have a tendency to move that way.

I hope I see this wrong, but logic says otherwise.
As to the OP, much is afoot about AI and modified optimized AI systems.
At best, the AI is as smart as the best of us;
at worst, stay clear of it!

There is a lot of fuss about data centers in my country; for example, Microsoft is building
one not far from here, together with Google (a second one).

Article is in Dutch, but scroll down for the chart with the location of the data centers:
https://nos.nl/artikel/2359419-onrust-in-lokale-politiek-noord-holland-door-bouw-twee-mega-datacenters.html
Most electrickety from the wind-farms goes not to the people but to those data centers..

In short: We are way too vulnerable with all them computahs.
 
On 08/12/2020 04:55, Don Y wrote:
On 12/7/2020 5:48 PM, Dave Platt wrote:
Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.

What I find amusing is the alternation of "big mainframe" with
"individual workstations" that keeps cycling through.

The "big" IBM mainframe of my day had a whopping 4MB of core memory and
you had to have a special ticket to use more than 1MB of that at once.

--
Regards,
Martin Brown
 
On Tue, 8 Dec 2020 11:29:38 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 08/12/2020 04:55, Don Y wrote:
On 12/7/2020 5:48 PM, Dave Platt wrote:
Although the details have changed in the last 50 years, the overall
war between How We Do It Today, and How Someone Says We Really Ought
To Be Doing it, hasn't been much altered. The names are different but
the trends are surprisingly similar.

What I find amusing is the alternation of "big mainframe" with
"individual workstations" that keeps cycling through.

The "big" IBM mainframe of my day had a whopping 4MB of core memory and
you had to have a special ticket to use more than 1MB of that at once.

That is quite a lot of core memory, occupying at least 4 tall cabinets
just for the core :).
 
Dean Hoffman wrote:
This article claims there are dramatic changes on the horizon.
Supposedly Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

But there is *nothing* in that article. It's mindless
prose. No references, no motivation, no examples. Why
should we believe that? It's nonsense.

Face it, software is complicated and getting more so
by the day. There are no miracles.

Jeroen Belleman
 
On 12/7/2020 7:46 AM, Dean Hoffman wrote:
This article claims there are dramatic changes on the horizon.
Supposedly Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

It's not particularly hard to refactor an *old* design to
exploit more modern technologies (both in hardware as well
as software design practices).

But, many businesses are loath to rip up what they've got in the
"hope" (promise?) that they'll have "better, cheaper".

[I've a friend who is continually scrounging used equipment
markets for Sun big iron as their enterprise is built on that
kit. "Um, maybe time to move on to something newer? There
isn't even a 'Sun' any longer!"]

An old/EXISTING system has already embraced all of the specification
creep that has contributed to its high maintenance costs. A
refactoring, thus, knows exactly what to implement and can
benefit from hindsight examination of all of the "warts" in
the evolved system.

The biggest challenge, IMO, still lies in the meatware. Folks
just aren't comfortable thinking RELIABLY about concurrent actions.
Too often, they internalize them as serial ones that happen to
be acting in parallel (which isn't really the case). People
have a hard time dealing with PSEUDO-concurrency, despite the
fact that it gives you lots of implicit guarantees that TRUE
concurrency can't claim.
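
As a deliberately racy illustration (a Go sketch; the counts are invented):
the bare increment below looks safe if you imagine the workers politely
taking turns, but with true concurrency the interleavings are real and
updates get lost unless the synchronisation is made explicit:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var racyCount, lockedCount int
	var mu sync.Mutex
	var wg sync.WaitGroup

	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				racyCount++ // unguarded read-modify-write: updates get lost

				mu.Lock()
				lockedCount++ // the interleaving is made explicit and safe
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	// racyCount typically comes up short of 400000; lockedCount never does.
	fmt.Println("racy:", racyCount, "locked:", lockedCount)
}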

And, newer solutions will rely increasingly on more cores/processors
which will replace "free" procedure calls with "more expensive,
prone to failure" RPCs. Getting accustomed to the fact that ANY
particular "call" can fail because the target is not available
is just not something that has been the case, historically.
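
To see what that change of mindset looks like in code, a sketch using Go's
stock net/rpc package (the "worker-node:1234" address, the Args type and
the "Arith.Multiply" service name are made-up placeholders): the same
logical call now has to cope with a failed dial, an error coming back, or
no reply at all, none of which a local procedure call ever forced on you:

package main

import (
	"fmt"
	"net/rpc"
	"time"
)

// Args and the "Arith.Multiply" service are hypothetical placeholders.
type Args struct{ A, B int }

func main() {
	// Failure mode 1: the remote end may simply not be there.
	client, err := rpc.DialHTTP("tcp", "worker-node:1234")
	if err != nil {
		fmt.Println("dial failed, degrade gracefully:", err)
		return
	}
	defer client.Close()

	// Failure modes 2 and 3: the call may error out, or never come back.
	var reply int
	call := client.Go("Arith.Multiply", &Args{7, 8}, &reply, nil)
	select {
	case <-call.Done:
		if call.Error != nil {
			fmt.Println("remote call failed:", call.Error)
			return
		}
		fmt.Println("result:", reply)
	case <-time.After(2 * time.Second):
		fmt.Println("remote call timed out; the caller must decide what to do")
	}
}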

Finally, the sloppy mentality of most "coders" will have to change.
 
On 07/12/2020 14:46, Dean Hoffman wrote:
    This article claims there are dramatic changes on the horizon.
Supposedly  Google, Facebook, etc are dinosaurs.
https://www.americanthinker.com/articles/2020/12/how_google_falls.html

People have been advertising the next software silver bullet that will
solve all the world's problems since the first programmable hardware ever
existed. Some of them were genuine improvements over what went before.

Mostly they quickly prove to be base metal or even worse - pure hype.

He is right about one thing though. CPU speed has now maxed out and
feature size cannot go down much more so parallel algorithms are going
to be the future. Usually this comes with much pain and suffering unless
you happen to be lucky and have a problem that parallelises cleanly.

Most non-trivial problems cannot, which is where life gets interesting.

--
Regards,
Martin Brown
 
On 12/7/2020 8:45 AM, Martin Brown wrote:
He is right about one thing though. CPU speed has now maxed out and feature
size cannot go down much more so parallel algorithms are going to be the
future. Usually this comes with much pain and suffering unless you happen to be
lucky and have a problem that parallelises cleanly.

Memory bandwidth is the bigger problem. There's only so much cache you can
put on the die before the CPU starts to *become* memory!

> Most non-trivial problems cannot, which is where life gets interesting.

You need tools that can recognize opportunities in algorithms and
massage the STATED algorithm into an equivalent CONCURRENT algorithm.
Too often, developers THINK serially; it's human nature.

I've found dicing "jobs" into tiny little snippets (job-ettes?) is
an effective way of visualizing the dependencies that you are
baking into your implementation that aren't inherently part of the
problem.
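
A toy version of that idea, sketched in Go (the job names are invented):
write each job-ette down with exactly the results it needs, and let
anything whose declared inputs are ready run at once. Any input you find
yourself listing that the job doesn't genuinely need is an ordering you
baked in yourself:

package main

import (
	"fmt"
	"sync"
)

// Each "job-ette" declares exactly the results it needs. Everything whose
// declared inputs are ready runs concurrently; a dependency that is only
// there "because the old code happened to do it in that order" stands out.
type job struct {
	name string
	deps []string
}

func main() {
	jobs := []job{
		{name: "readConfig"},
		{name: "loadData"},
		{name: "validate", deps: []string{"loadData"}},
		{name: "report", deps: []string{"readConfig", "validate"}},
	}

	done := make(map[string]chan struct{})
	for _, j := range jobs {
		done[j.name] = make(chan struct{})
	}

	var wg sync.WaitGroup
	for _, j := range jobs {
		wg.Add(1)
		go func(j job) {
			defer wg.Done()
			for _, d := range j.deps {
				<-done[d] // block only on declared inputs
			}
			fmt.Println("running", j.name)
			close(done[j.name])
		}(j)
	}
	wg.Wait()
}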

[As formal techniques seem to have fallen out of favor, these issues
are no longer as visible as they once were. Petri nets, anyone?]
 
