bit about transistor cost...

On 07/12/2021 13:47, Dimiter_Popoff wrote:
On 12/7/2021 3:40, Sylvia Else wrote:
On 07-Dec-21 12:14 pm, Anthony William Sloman wrote:

The rest of us exploit those chips to serve different - smaller, but
more numerous - markets. We don't need few-nm chips to do that, but
if we can buy one more or less suitable for our application, we will
do it, because it's going to be faster and use less current than its
predecessor. The interface to produce the outputs we can sell is
always a mess, but it's been like that forever.


A few nm is not many silicon atoms, so I have to wonder about the
longevity of these chips.

People generally may recycle their phones every couple of years
(though I don't), and manufacturers may be willing just to replace
those that die during the warranty period, but for most things one
wants the electronics to work for a reasonable time.

I think it depends a lot on how warm you run them. It may ultimately put
a hard limit on just how small the features can go before chip lifetime
becomes a serious problem for fast machines doing heavy computation.

It is astonishing how fine the features have become on modern chips.
I think I saw something somewhere about longevity; the figures were
not great (if my memory serves, though I'm far from sure).

My instinct is that the OLEDs in the displays will quite likely be the
weakest link in the chain rather than the silicon CPU itself. The
chemistry of light emission and life in direct sunlight both take
their toll.

My older phone lasted for 5 years; its micro-USB connector broke, so
I replaced the phone.
The current one is 4 years old and still works, though about a year
(or was it 18 months?) ago its battery swelled (a bad micro-USB
connector again; the perpetual power cycling probably damaged the
battery). But I managed to buy both the connector and a new battery
locally at negligible cost and replaced them, so it still works.

Batteries again are complex chemistry and so prone to premature
failure, especially if you don't look after them quite right. My
laptops tend to kill their batteries by spending most of their time
as portable desktops, left on mains power crunching numbers. Speed is
maximised on mains power, but that slowly damages the battery.

--
Regards,
Martin Brown
 
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:
Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.
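The trade-off can be sketched with a toy throughput model. The stall rates, penalties, and clock speeds below are made-up illustrative numbers, not measured Pentium 3/4 figures:

```python
# Toy model: a deeper pipeline buys clock speed but pays a longer
# refill after every stall (branch mispredict, cache miss, etc.).

def throughput(clock_ghz, stall_rate, stall_penalty_cycles):
    """Instructions per nanosecond for a scalar pipeline.

    Effective cycles per instruction = 1 + expected stall cost.
    """
    cpi = 1.0 + stall_rate * stall_penalty_cycles
    return clock_ghz / cpi

# Shallow pipeline: modest clock, cheap stall recovery.
shallow = throughput(clock_ghz=1.4, stall_rate=0.1, stall_penalty_cycles=10)
# Deep pipeline: more than twice the clock, but a 4x refill penalty.
deep = throughput(clock_ghz=3.0, stall_rate=0.1, stall_penalty_cycles=40)

print(f"shallow: {shallow:.2f} instr/ns, deep: {deep:.2f} instr/ns")
```

With these numbers the 3 GHz deep pipeline actually loses (0.60 vs 0.70 instructions/ns), which is exactly the effect described above.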


It may yet swing the other way when simulations are so good that the
conversion to masks is essentially error free. Where it gets tricky is
when the AI is designing new chips for us that no-one understands.

This year's BBC Reith Lectures are about the rise of AI and the
future, by Stuart Russell of Berkeley (starts this Wednesday).

https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence

It is still at least partially holding for number density of transistors
if not for actual computing performance.

It was never about performance, it was just the number of transistors doubling every 18 to 24 months.


We must be very close to the
limits where quantum effects mess things up (but 3D stacks allow some
alternative ways of gaining number density on a chip).

We keep hearing that the limit is just ahead, and they keep finding ways of working around it. I'm flabbergasted they have reached single-digit nm. How big are silicon atoms? The coronavirus is 70 or more nm across; we could build a whole bunch of transistors on one virus. I seem to recall a Ball Semiconductor who wanted to print ICs on balls. I don't recall the advantages. They ended up providing some services from the processing technologies they developed.
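For scale, standard crystallographic values for silicon (an addition here, not from the thread: lattice constant about 0.543 nm, Si-Si bond length about 0.235 nm) put a literal 5 nm feature at only a couple of dozen atoms across:

```python
# How many silicon atoms span a literal 5 nm feature?
# Standard values for crystalline silicon.
SI_LATTICE_NM = 0.543   # unit-cell edge of the silicon lattice
SI_BOND_NM = 0.235      # Si-Si bond length

feature_nm = 5.0
unit_cells = feature_nm / SI_LATTICE_NM     # ~9 unit cells
atoms_across = feature_nm / SI_BOND_NM      # ~21 bond lengths

print(f"~{unit_cells:.0f} unit cells, ~{atoms_across:.0f} atoms across")
```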


> It was originally specified in terms of transistors per chip.

Yeah, it was just an observation back when the number was still in the thousands. It is interesting that the advancement wasn't a lot faster in the earlier stages, but that may have had to do with finances, since the semiconductor market was so much smaller then - less R&D money available. I think the physics was developed along with the miniaturization push. They couldn't go faster because they didn't have the science to design smaller transistors at any given point. They needed to build smaller transistors to study before they could put them into production. I took one semiconductor course and that's basically what the lecturer said: they first had heuristics which let them build devices, and the understanding came as they worked with them.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
 
On Tuesday, 7 December 2021 at 23.32.09 UTC+1, gnuarm.del...@gmail.com wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.

https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079
 
On Mon, 06 Dec 2021 09:42:29 -0800, jlarkin@highlandsniptechnology.com
wrote:

https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor

I've also heard that the cost of one next-gen EUV scanner is well over
$200M, and that the design and mask set for a high-end chip costs a
billion dollars.

We just don't need few-nm chips.

Aside from smart phones and maybe some IoT stuff in the future I think
you're right. The penalty is a bit of power and speed.

Of course it's possible some amazing new market will come along out of
left field that will generate demand, but it's hard to imagine
something brand new on the global scale of smart phones (~1.5bn
units/year) that is also power-sensitive.

I think the mask costs are more of the order of $1m or $2m, not
including design, obviously.

--
Best regards,
Spehro Pefhany
 
Rick C wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.


It may yet swing the other way when simulations are so good that the
conversion to masks is essentially error free. Where it gets tricky is
when the AI is designing new chips for us that no-one understands.

This year's BBC Reith Lectures are about the rise of AI and the
future, by Stuart Russell of Berkeley (starts this Wednesday).

https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence

It is still at least partially holding for number density of transistors
if not for actual computing performance.

It was never about performance, it was just the number of transistors doubling every 18 to 24 months.

Not so. Back in the Mead-Conway-Dennard days (late 1970s to early
2000s), everything was driven by litho. Once the litho folks figured
out how to make the next node with decent yield, you were golden--the
materials system was almost unchanged (apart from going to copper
wiring), and the speed and power consumption per transistor improved
automatically as the feature sizes got smaller, so the power density
stayed pretty well constant and the performance went up roughly
quadratically.
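The constant-power-density claim above is the classic Dennard (constant-field) scaling table. A quick numeric sketch of the idealized exponents, where k > 1 is the linear shrink factor:

```python
# Idealized Dennard (constant-field) scaling: every linear dimension
# and the supply voltage shrink by the same factor k > 1.

def dennard_scaling(k):
    """Return (frequency, density, power_density) scaling factors."""
    frequency = k                 # gate delay shrinks ~1/k, so clocks rise by k
    density = k ** 2              # transistors per unit area
    power_per_gate = 1 / k ** 2   # C*V^2*f = (1/k) * (1/k**2) * k
    power_density = density * power_per_gate  # the famous invariant
    return frequency, density, power_density

f, d, pd = dennard_scaling(2.0)
print(f"clock x{f}, density x{d}, power density x{pd}")
```

Power density stays at 1.0 for any k, which is why whole chips could once run flat out; once supply voltage stopped scaling down, that invariant broke.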

The 65-nm node was about where that ended, and of course analogue
performance peaked around 0.5 um to 0.18 um depending on what you care
about.

Since then, transistors have been getting slower, leakier, and noisier
with each node. BITD you could leave all of the logic on your processor
running all the time. Now if you did that, it'd overheat rapidly. We
live in the Dark Silicon era--lots of huge chips, most parts of which
are powered down most of the time.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Tuesday, December 7, 2021 at 6:21:37 PM UTC-5, lang...@fonz.dk wrote:
On Tuesday, 7 December 2021 at 23.32.09 UTC+1, gnuarm.del...@gmail.com wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.

https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079

Not sure if you wanted to say something about this page?

                      i7-3770 @ 3.40GHz   i9-12900KF   Pentium 4 3.60GHz
First seen on chart   Q1 2012             Q4 2021      Q4 2008
Single-thread rating  2074                4229         605

From 2008 to 2012 the single-thread rating increased more than 3-fold. From 2012 to 2021 it increased barely 2-fold. The last 9 years provided less improvement than the previous 4.
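Plugging in the ratings quoted above:

```python
# PassMark single-thread ratings quoted in the thread.
p4_2008 = 605     # Pentium 4 3.60GHz, first seen Q4 2008
i7_2012 = 2074    # Core i7-3770, Q1 2012
i9_2021 = 4229    # Core i9-12900KF, Q4 2021

early = i7_2012 / p4_2008   # four years: 2008 -> 2012
late = i9_2021 / i7_2012    # nine years: 2012 -> 2021

print(f"2008-2012: {early:.2f}x, 2012-2021: {late:.2f}x")
```

That works out to roughly 3.4x over the first four years against about 2.0x over the following nine.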

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
 
On Tuesday, December 7, 2021 at 8:42:18 PM UTC-5, Phil Hobbs wrote:
Rick C wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.


It may yet swing the other way when simulations are so good that the
conversion to masks is essentially error free. Where it gets tricky is
when the AI is designing new chips for us that no-one understands.

This year's BBC Reith Lectures are about the rise of AI and the
future, by Stuart Russell of Berkeley (starts this Wednesday).

https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence

It is still at least partially holding for number density of transistors
if not for actual computing performance.

It was never about performance, it was just the number of transistors doubling every 18 to 24 months.
Not so.

You don't understand what I said.


Back in the Mead-Conway-Dennard days (late 1970s to early
2000s), everything was driven by litho. Once the litho folks figured
out how to make the next node with decent yield, you were golden--the
materials system was almost unchanged (apart from going to copper
wiring), and the speed and power consumption per transistor improved
automatically as the feature sizes got smaller, so the power density
stayed pretty well constant and the performance went up roughly
quadratically.

The 65-nm node was about where that ended, and of course analogue
performance peaked around 0.5 um to 0.18 um depending on what you care
about.

Since then, transistors have been getting slower, leakier, and noisier
with each node. BITD you could leave all of the logic on your processor
running all the time. Now if you did that, it'd overheat rapidly. We
live in the Dark Silicon era--lots of huge chips, most parts of which
are powered down most of the time.

All of that may be true, but Moore's observation was simply about the trend in the number of transistors on a die. That's all.

None of it matters. Larkin has deemed improvements in lithography to be pointless. So I expect the industry will stop advancing immediately. They just needed someone to explain things to them.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
 
On Wednesday, December 8, 2021 at 12:39:28 AM UTC+11, Dimiter Popoff wrote:
On 12/7/2021 3:25, Anthony William Sloman wrote:
On Tuesday, December 7, 2021 at 12:14:43 PM UTC+11, Martin Brown wrote:
On 06/12/2021 19:04, Dimiter_Popoff wrote:
On 12/6/2021 20:47, John Larkin wrote:
On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <d...@tgi-sci.com> wrote:
On 12/6/2021 19:42, jla...@highlandsniptechnology.com wrote:

snip

It may yet swing the other way when simulations are so good that the conversion to masks is essentially error free.

That happened around 1990. The electron beam tester I was working on back then was the next generation of a unit which famously trimmed three months off the development time of the Motorola 68k processor chip set.

The project wasn't canned because our machine didn't work - we did get it working well enough to demonstrate that it was an order of magnitude faster than its predecessor - but because simulation had got good enough that most mask sets produced chips that worked.

The older, slower machines were quite fast enough to verify that the simulation software predicted what actually happened on the chip, and that killed our market.

It is a shame such advanced machinery has been lost (or did it
survive for some niche applications?).

It didn't, or at least not that I know of. We had a couple of working prototypes, but they depended on Gigabit Logic's GaAs integrated circuits, and Gigabit got merged with a couple of other GaAs suppliers at the same time, partly because they couldn't produce the logic with a decent yield. If the machine had gone into production it would probably have had to be re-worked within a year or so to use Motorola/ON-Semiconductor ECLinPS parts for the quick bits (which would probably have performed better in consequence).

--
Bill Sloman, Sydney
 
John Larkin wrote:
On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

On 12/6/2021 19:42, jlarkin@highlandsniptechnology.com wrote:
https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor

I've also heard that the cost of one next-gen EUV scanner is well
over $200M, and that the design and mask set for a high-end chip
costs a billion dollars.

We just don't need few-nm chips.




Gradually, electronics design without access to a silicon factory
becomes useless; hopefully the process is slow enough that we won't
see that in full.
It is sort of like how nowadays you can somehow master an internal
combustion engine if you have a lathe and a milling machine, but you
have no chance of making it comparable to those the car makers make,
to say nothing of cost.

Some things have got good enough. Hammers, spoons, beds, LED lights,
microwave ovens. Moore's Law can't go on forever, and is probably at
or in some cases past its practical limit.

Emissions requirements used to get tighter every year, until it served
no purpose to make 1000 cars emit less than one BBQ. Then they started
subsidizing EVs. When everyone has an EV - or actually when no one has
an ICE and a few have EVs and most have to take a bus - then they'll
force us to change to something else.


We don't need 3 nm chips to text and twitter. I can't imagine my cell
phone needing to be better hardware.

First there was a century of advances in transportation, then it was
communication. When something is good enough we have to find something
else to make better and nobody has found it yet.


--
Defund the Thought Police
Andiamo Brandon!
 
On Tue, 7 Dec 2021 22:56:47 -0500, "Tom Del Rosso"
<fizzbintuesday@that-google-mail-domain.com> wrote:

John Larkin wrote:
On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

On 12/6/2021 19:42, jlarkin@highlandsniptechnology.com wrote:
https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor

I've also heard that the cost of one next-gen EUV scanner is well
over $200M, and that the design and mask set for a high-end chip
costs a billion dollars.

We just don't need few-nm chips.




Gradually, electronics design without access to a silicon factory
becomes useless; hopefully the process is slow enough that we won't
see that in full.
It is sort of like how nowadays you can somehow master an internal
combustion engine if you have a lathe and a milling machine, but you
have no chance of making it comparable to those the car makers make,
to say nothing of cost.

Some things have got good enough. Hammers, spoons, beds, LED lights,
microwave ovens. Moore's Law can't go on forever, and is probably at
or in some cases past its practical limit.

Emissions requirements used to get tighter every year, until it served
no purpose to make 1000 cars emit less than one BBQ. Then they started
subsidizing EVs. When everyone has an EV - or actually when no one has
an ICE and a few have EVs and most have to take a bus - then they'll
force us to change to something else.


We don't need 3 nm chips to text and twitter. I can't imagine my cell
phone needing to be better hardware.

First there was a century of advances in transportation, then it was
communication. When something is good enough we have to find something
else to make better and nobody has found it yet.

The giant advances will be in biology.



--

Father Brown\'s figure remained quite dark and still;
but in that instant he had lost his head. His head was
always most valuable when he had lost it.
 
On 07/12/2021 23:21, Lasse Langwadt Christensen wrote:
On Tuesday, 7 December 2021 at 23.32.09 UTC+1, gnuarm.del...@gmail.com wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficient cores are the new
selling point. It looks on paper like the i5-12600K might just pass
this test.

I bought an i5 machine and it was a real dog. I said something to the effect that they ran out of ways to add transistors to improve the speed of CPUs a few years ago and someone listed a number of architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed improvements just by adding transistors or raising clock speeds. I think it was the Pentium 4 where the clock rate peaked at about 3 GHz by adding pipeline stages for shorter gate delays. But the cost of pipeline stalls pretty much negated that advantage. I believe people could overclock the Pentium 3 to run faster than the 4.

It would have needed serious water cooling to overclock a Pentium 3. My
P3 portable actually damaged the surface finish of a table when left on
power running a particularly heavy simulation overnight. Used on a lap
at full speed it would almost certainly have resulted in serious burns!

> https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079

Therein lies the problem. The stuff I am developing only cares about
single-thread performance, so by moving from the i7-3770 to the latest
and greatest i9-12900 I get just twice the speed for 4x the power.
It would be a lot more cost effective to buy another 3770 or 4770
(they are practically giving them away now that desktops have fallen
out of fashion).

Curiously, I can see what turns out to be a step backwards from my
portable's i7-2670QM to the i7-3770. The former can handle simultaneous
sincos evaluation in my algorithm without a pipeline stall, but the
go-faster 3770 cannot. Until I looked it up just now, I had assumed it
was a later chip with a lower number.

It seems some of the tricks used in the slower-clocked low-power
portable CPUs either don't make it into the desktop CPUs or are
inapplicable.

The i5-12600K looks like it might just be good enough. Improvements in
the pipelining, sincos simultaneous evaluation and SSE extensions for
tough floating point problems might just be enough to push it over 3x.
On paper its floating point performance looks OK.

--
Regards,
Martin Brown
 
On Tuesday, December 7, 2021 at 10:56:54 PM UTC-5, Tom Del Rosso wrote:
John Larkin wrote:
On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <d...@tgi-sci.com>
wrote:

On 12/6/2021 19:42, jla...@highlandsniptechnology.com wrote:
https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor

I've also heard that the cost of one next-gen EUV scanner is well
over $200M, and that the design and mask set for a high-end chip
costs a billion dollars.

We just don't need few-nm chips.




Gradually, electronics design without access to a silicon factory
becomes useless; hopefully the process is slow enough that we won't
see that in full.
It is sort of like how nowadays you can somehow master an internal
combustion engine if you have a lathe and a milling machine, but you
have no chance of making it comparable to those the car makers make,
to say nothing of cost.

Some things have got good enough. Hammers, spoons, beds, LED lights,
microwave ovens. Moore's Law can't go on forever, and is probably at
or in some cases past its practical limit.
Emissions requirements used to get tighter every year, until it served
no purpose to make 1000 cars emit less than one BBQ. Then they started
subsidizing EVs. When everyone has an EV - or actually when no one has
an ICE and a few have EVs and most have to take a bus - then they'll
force us to change to something else.
We don't need 3 nm chips to text and twitter. I can't imagine my cell
phone needing to be better hardware.
First there was a century of advances in transportation, then it was
communication. When something is good enough we have to find something
else to make better and nobody has found it yet.

Wow. You live in such a defeatist world. Life must suck when you think it's pointless to try to improve it. I suppose you don't bother to vote, or is there a "surrender" party where you are?

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209
 
On Tuesday, December 7, 2021 at 11:31:03 PM UTC-5, jla...@highlandsniptechnology.com wrote:
The giant advances will be in biology.

How about medicine? Certainly there's room there for advances. I would much prefer to die quietly in my sleep at 95 than to die at 70 of pancreatic cancer.

So if we froze advancement of semiconductors where they are today and put all that money saved into medical research... Try to use what little imagination you have to picture the advances we could achieve! Maybe even Father Brown could use his head to help. Maybe we could find a vaccine for Covid! Oh, wait, we have those! The problem is we can't get people to take the vaccine because they don't believe in medical science.

Here's a good page showing how close we are to reaching herd immunity worldwide.

https://www.bloomberg.com/graphics/covid-vaccine-tracker-global-distribution/

It says that in another 5 months 75% of the world will have received at least one shot. Even one shot helps to keep the hospitals from getting overcrowded with patients.

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
 
Rick C wrote:
On Tuesday, December 7, 2021 at 10:56:54 PM UTC-5, Tom Del Rosso
wrote:
John Larkin wrote:
We don't need 3 nm chips to text and twitter. I can't imagine my
cell phone needing to be better hardware.
First there was a century of advances in transportation, then it was
communication. When something is good enough we have to find
something else to make better and nobody has found it yet.

Wow. You live in such a defeatist world. Life must suck when you
think it's pointless to try to improve it. I suppose you don't
bother to vote, or is there a "surrender" party where you are?

I didn't say anything or anyone was defeated. To say that significant
advances will happen in an unexpected area is obviously not defeatist.
John's reply suggested that he understood what I meant.


--
Defund the Thought Police
Andiamo Brandon!
 
Rick C wrote:
It was never about performance, it was just the number of
transistors doubling every 18 to 24 months.

All of that may be true, but Moore's observation was simply about
the trend in the number of transistors on a die. That's all.

Yes, but his observation was that it doubled every year, or else the
8080 would have had only 250 transistors.


--
Defund the Thought Police
Andiamo Brandon!
 
On Thursday, December 9, 2021 at 1:20:50 PM UTC+11, Tom Del Rosso wrote:
Rick C wrote:
It was never about performance, it was just the number of
transistors doubling every 18 to 24 months.

All of that may be true, but Moore's observation was simply about
the trend in the number of transistors on a die. That's all.

Yes, but his observation was that it doubled every year, or else the
8080 would have had only 250 transistors.

It appeared in 1974, and had 4500 transistors. Moore made his observation in 1965.

https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moore%27s_Law_Transistor_Count_1970-2020.png

puts the 8080 bang on the trend line. Tom Del Rosso clearly doesn't understand that it wasn't a particularly precise "law" - more an observation about the way economics was driving the semiconductor industry, because everything that let you squeeze more transistors onto a die cost a lot of capital investment. Curiously, most of those investments paid off.
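Taking roughly 64 components per chip as the 1965 starting point (the figure on the graph in Moore's 1965 article; an assumption here, not a number from the thread) and the 8080's 4500 transistors in 1974, the implied doubling time works out to about 18 months, i.e. bang on the trend:

```python
import math

# Implied doubling time between Moore's 1965 data point and the 8080.
count_1965 = 64      # assumed: ~64 components/chip on Moore's 1965 graph
count_1974 = 4500    # 8080 transistor count quoted above

doublings = math.log2(count_1974 / count_1965)    # ~6.1 doublings
years_per_doubling = (1974 - 1965) / doublings    # ~1.5 years

print(f"{years_per_doubling:.2f} years per doubling")
```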

--
Bill Sloman, Sydney
 
On 08/12/21 03:56, Tom Del Rosso wrote:
First there was a century of advances in transportation, then it was
communication. When something is good enough we have to find something
else to make better and nobody has found it yet.

Twenty years ago I told my daughter that if I were in
the same position as a kid, I would choose life sciences
as a career rather than electronics and computing.

Even back then it was clear as day that new techniques
meant bioscience was at the same stage of evolution as
hardware was in the 60s.

That's still true, as is the concern that script
kiddies will create biological viruses just as
they created computer viruses.
 
On 12/6/21 1:47 PM, John Larkin wrote:
On Mon, 6 Dec 2021 20:36:17 +0200, Dimiter_Popoff <dp@tgi-sci.com>
wrote:

On 12/6/2021 19:42, jlarkin@highlandsniptechnology.com wrote:
https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor

I've also heard that the cost of one next-gen EUV scanner is well over
$200M, and that the design and mask set for a high-end chip costs a
billion dollars.

We just don't need few-nm chips.




Gradually, electronics design without access to a silicon factory
becomes useless; hopefully the process is slow enough that we won't
see that in full.
It is sort of like how nowadays you can somehow master an internal
combustion engine if you have a lathe and a milling machine, but you
have no chance of making it comparable to those the car makers make,
to say nothing of cost.

Some things have got good enough. Hammers, spoons, beds, LED lights,
microwave ovens. Moore's Law can't go on forever, and is probably at
or in some cases past its practical limit.

We don't need 3 nm chips to text and twitter. I can't imagine my cell
phone needing to be better hardware.

I need a dumb, last-gen FPGA that is less fancy inside but fast
pin-to-pin. The trend is in the opposite direction.

Maybe Moore's law is running on psychological momentum, fear of
falling behind. I think I can see that happening.

There are two main forces driving consumer software development:
reduced time-to-market through higher levels of abstraction, trading
performance for shorter development time, versus user irritation
pushing back on those performance hits.

The underlying hardware is mostly irrelevant for the bulk of consumer
applications because they don't really leverage the full capability of
the hardware to begin with.
 
On 12/8/21 6:21 AM, Martin Brown wrote:
On 07/12/2021 23:21, Lasse Langwadt Christensen wrote:
On Tuesday, 7 December 2021 at 23.32.09 UTC+1,
gnuarm.del...@gmail.com wrote:
On Monday, December 6, 2021 at 8:14:43 PM UTC-5, Martin Brown wrote:

Likewise with PCs. I'm in the market for a new one right now, but I'm
not convinced that any of them offer single-threaded performance that
is 3x better than the ancient i7-3770 I have now. That has always been
my upgrade heuristic (it used to be every 3 years). Clock speeds have
maxed out and now they are adding more cores (many of which are idle
most of the time). Performance cores and efficiency cores are the new
selling point. On paper, the i5-12600K might just pass this test.
I bought an i5 machine and it was a real dog. I said something to the
effect that they ran out of ways to add transistors to improve the
speed of CPUs a few years ago, and someone listed a number of
architectural improvements they've added for a 20-30% boost.

It has been quite some time since you could expect significant speed
improvements by adding transistors or raising clock speeds. I think it
was the Pentium 4 where the clock rate peaked at about 3 GHz, by
adding pipeline stages for shorter gate delays. But the cost of
pipeline stalls pretty much negated that advantage. I believe
people could overclock the Pentium 3 to run faster than the 4.
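
The pipeline trade-off described above can be sketched with a toy
latency model; all the numbers below are illustrative figures of my
own, not measured Pentium data:

```python
# Toy model of why ever-deeper pipelines stopped paying off.
# All numbers are illustrative, not measured Pentium figures.
def ns_per_instruction(cycle_ns, stall_rate, penalty_cycles):
    """Average time per instruction: base cycle plus expected stall cost."""
    return cycle_ns * (1 + stall_rate * penalty_cycles)

# Shallower pipeline: slower clock, but a stall flushes few stages.
shallow = ns_per_instruction(cycle_ns=0.70, stall_rate=0.08, penalty_cycles=10)
# Deep P4-style pipeline: faster clock, but a mispredict flushes many stages.
deep = ns_per_instruction(cycle_ns=0.33, stall_rate=0.08, penalty_cycles=40)

print(f"shallow: {shallow:.2f} ns/instr, deep: {deep:.2f} ns/instr")
```

Shortening the cycle by adding stages only wins while the expected
flush cost stays small; past that point the shallower design is
faster per instruction, despite the lower clock.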

It would have needed serious water cooling to overclock a Pentium 3. My
P3 portable actually damaged the surface finish of a table when left on
power running a particularly heavy simulation overnight. Used on a lap
at full speed it would almost certainly have resulted in serious burns!

https://www.cpubenchmark.net/compare/Intel-i7-3770-vs-Intel-i9-12900KF-vs-Intel-Pentium-4-3.60GHz/896vs4611vs1079


Therein lies the problem. The stuff I am developing only cares about
single-thread performance, so by moving from the i7-3770 to the latest
and greatest i9-12900 I get just twice the speed for 4x the power used.
It would be a lot more cost-effective to buy another 3770 or 4770 (they
are practically giving them away now that desktops have fallen out of
fashion).
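
The 3x-or-don't-bother heuristic above is easy to put in code; the
benchmark scores and TDP figures below are rough illustrative numbers
of my own, not measurements:

```python
# Sketch of the "3x single-thread speedup or don't upgrade" heuristic.
# Scores and TDPs are rough illustrative figures, not measurements.
def worth_upgrading(old_score, new_score, threshold=3.0):
    """True if the new part meets the single-thread speedup threshold."""
    return new_score / old_score >= threshold

old = {"name": "i7-3770", "single_thread": 2070, "tdp_w": 77}
new = {"name": "i9-12900", "single_thread": 4200, "tdp_w": 241}

speedup = new["single_thread"] / old["single_thread"]
perf_per_watt_ratio = (new["single_thread"] / new["tdp_w"]) / (
    old["single_thread"] / old["tdp_w"]
)

print(f"speedup: {speedup:.1f}x, perf/W vs old: {perf_per_watt_ratio:.2f}")
print("passes 3x heuristic:", worth_upgrading(2070, 4200))
```

With figures like these the new chip is roughly 2x faster for 3-4x
the power, so it fails the heuristic and actually loses on
performance per watt, in line with the complaint above.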

Curiously, I can see what turns out to be a step backwards in the
i7-3770 relative to my portable, which has an i7-2670QM. The latter can
correctly handle simultaneous sincos evaluation without a pipeline stall
in my algorithm, but the go-faster 3770 cannot. Until I just looked it
up, I had assumed the 2670QM was a later chip with a lower number.

It seems some of the tricks used in the slower-clocked low-power portable
CPUs either don't make it into the desktop CPUs or are inapplicable.

The i5-12600K looks like it might just be good enough. Improvements in
the pipelining, sincos simultaneous evaluation and SSE extensions for
tough floating point problems might just be enough to push it over 3x.
On paper its floating point performance looks OK.
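
What "sincos simultaneous evaluation" buys is both values for the same
argument at roughly the cost of one (the x87 FSINCOS instruction does
this in hardware). A minimal Python sketch of the same idea via Euler's
formula; the helper name is mine:

```python
import cmath
import math

def sincos(theta):
    """Return (sin(theta), cos(theta)) from one evaluation, using
    Euler's formula: e**(1j*theta) == cos(theta) + 1j*sin(theta)."""
    z = cmath.exp(1j * theta)
    return z.imag, z.real

s, c = sincos(math.pi / 6)
# Agrees with separate evaluations to floating-point accuracy.
assert abs(s - math.sin(math.pi / 6)) < 1e-12
assert abs(c - math.cos(math.pi / 6)) < 1e-12
```

Whether a given CPU can overlap the two evaluations without stalling
is exactly the microarchitectural detail being complained about above.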

There are web pages that, all by themselves with no other tabs open,
can grind to a halt a ~5-year-old Celeron N3060 (2.4 GHz boost clock)
in a netbook with 4 GB of RAM, an SSD, and a 100-megabit internet
connection. Example:

<https://owlcation.com/stem/I-Found-A-Pretty-Rock-On-The-Beach-And-Wondered-II>
 
