the hot new programming language

On Sat, 4 Jul 2015 16:25:10 +0300, "E. Kappos" <mike@net.gr> wrote:

"John Larkin" <jlarkin@highlandtechnology.com> wrote in message
news:egcdpal3j5j8en0ead82236nm8g28pim5b@4ax.com...
On Fri, 3 Jul 2015 07:29:00 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

On Friday, July 3, 2015 at 12:52:52 PM UTC+2, Martin Brown wrote:
If you want to sell your soul for maximum financial gain then
destabilising the global stock trading systems with sophisticated high
frequency trading algorithms is definitely the way to go.

One guy in the UK, in his parents' bedroom, can allegedly do this:

http://www.bbc.co.uk/news/business-32415664

People seem to get upset if you are too good at it!


yeh, you have to be a member of "the money sucking parasite club" to
manipulate prices and steal money like that


-Lasse

A small tax on transactions, like 0.1%, would have a remarkable
damping effect. Maybe we can get that soon, after the next monster
worldwide crash.


but most likely would destabilize the ...government!

....or at least the politician proposing it.
 
On Sat, 4 Jul 2015 16:25:10 +0300, "E. Kappos" <mike@net.gr> wrote:

"John Larkin" <jlarkin@highlandtechnology.com> wrote in message
news:egcdpal3j5j8en0ead82236nm8g28pim5b@4ax.com...
On Fri, 3 Jul 2015 07:29:00 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

[snip]

A small tax on transactions, like 0.1%, would have a remarkable
damping effect. Maybe we can get that soon, after the next monster
worldwide crash.


but most likely would destabilize the ...government!

Even better.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Sat, 04 Jul 2015 06:33:25 -0400, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

The 4004 was slow enough!


 
On Sat, 4 Jul 2015 09:18:18 -0400, "Tom Del Rosso"
<fizzbintuesday@that.google.email.domain.com> wrote:

John Larkin wrote:

What I want is a 1000x speed improvement, so I can move sliders and
see waveforms change instantly, just like a breadboard with pots and a
scope. N-dimensional iteration at 5 minutes per trial is not
intuitive, but then 1 minute isn't either.

Build an analog computer.

I do, often, and call them "breadboards." But Spice has advantages of
its own.


 
On 4 Jul 2015 09:58:10 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-03, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

I agree, but that's a separate issue. Outlawing C strings, for
instance, outlaws good code as well as bad. Those of us who know that,
for instance, strncpy() doesn't append a null if it runs out of space,
know enough to unconditionally put the null in there. (Yes, it's a
stupid design, but alternatives are available. The auto industry's
standards are actually pretty useful.)

it's using the wrong function: strncpy() is for writing to null-padded
records, where maximal-length strings are unterminated.

if you want a length limit and a NUL at the end, use sprintf():

sprintf(dest, "%.*s", len - 1, src);

Powerbasic:

A$ = B$ + C$

is safe, or, if you need it,

A$ = LEFT$(B$, 16)

works too. And there are tons of other crashproof string functions.



 
On Sat, 04 Jul 2015 07:20:38 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

>The 4004 was slow enough!

It was also slow, error-prone TTL, prone to false highs and false lows.

ANY optical computer at ANY word length will beat ANY silicon machine.
Interesting that you missed that aspect.
 
On Sat, 04 Jul 2015 07:22:22 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

On Sat, 4 Jul 2015 09:18:18 -0400, "Tom Del Rosso"
<fizzbintuesday@that.google.email.domain.com> wrote:


John Larkin wrote:

What I want is a 1000x speed improvement, so I can move sliders and
see waveforms change instantly, just like a breadboard with pots and a
scope. N-dimensional iteration at 5 minutes per trial is not
intuitive, but then 1 minute isn't either.

Build an analog computer.


I do, often, and call them "breadboards." But Spice has advantages of
its own.

A breadboard is not an analog computer. The operator of the
breadboard who characterizes its behavior is.
 
On 7/4/2015 11:08 AM, DecadentLinuxUserNumeroUno wrote:
On Sat, 04 Jul 2015 07:20:38 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

The 4004 was slow enough!

It was also slow, error-prone TTL, prone to false highs and false lows.

ANY optical computer at ANY word length will beat ANY silicon machine.
Interesting that you missed that aspect.

Interestingly, that isn't the case. A colleague of mine summed up the
main reason: "You can control many electrons with one electron, but you
can't control many photons with one photon."

IOW optical logic devices don't have gain. They also generally aren't
as fast as transistors--SiGe and InP transistors work up to over 100 GHz.

The switches have to be at least a few wavelengths in size in order to
be able to confine the fields, and you can pack a _lot_ of transistors
into that space.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 7/4/2015 5:58 AM, Jasen Betts wrote:
On 2015-07-03, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

I agree, but that's a separate issue. Outlawing C strings, for
instance, outlaws good code as well as bad. Those of us who know that,
for instance, strncpy() doesn't append a null if it runs out of space,
know enough to unconditionally put the null in there. (Yes, it's a
stupid design, but alternatives are available. The auto industry's
standards are actually pretty useful.)

it's using the wrong function: strncpy() is for writing to null-padded
records, where maximal-length strings are unterminated.

if you want a length limit and a NUL at the end, use sprintf():

sprintf(dest, "%.*s", len - 1, src);

Ssschhhllllooooowww and opaque by comparison to

strncpy(dest, src, bufsize-1);
dest[bufsize-1] = '\0';

But there's no defending the design of strncpy(). Breaking the
null-termination rule is just plain stupid.

Cheers

Phil Hobbs

 
On 7/4/2015 11:05 AM, John Larkin wrote:
On 4 Jul 2015 09:58:10 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-03, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote:

I agree, but that's a separate issue. Outlawing C strings, for
instance, outlaws good code as well as bad. Those of us who know that,
for instance, strncpy() doesn't append a null if it runs out of space,
know enough to unconditionally put the null in there. (Yes, it's a
stupid design, but alternatives are available. The auto industry's
standards are actually pretty useful.)

it's using the wrong function: strncpy() is for writing to null-padded
records, where maximal-length strings are unterminated.

if you want a length limit and a NUL at the end, use sprintf():

sprintf(dest, "%.*s", len - 1, src);

Powerbasic:

A$ = B$ + C$

is safe, or, if you need it,

A$ = LEFT$(B$, 16)

works too. And there are tons of other crashproof string functions.

C++ has crashproof strings too: std::string, and they have semantics
like that too. They just aren't stdio. The null-termination thing is
a minor wart. There are a few gotchas like the strncpy() fail and the
fact that you have to remember to free() things created with strdup(),
but the output functions of stdio are easy to use.

The parsing functions, especially the scanf() family, are a mess, but as
I say I have my own stdio-compatible parsing library that doesn't share
the same problems.

Cheers

Phil Hobbs

 
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.

There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. But then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

Cheers

Phil Hobbs

 
On 7/4/2015 11:24 AM, Phil Hobbs wrote:
On 7/4/2015 11:08 AM, DecadentLinuxUserNumeroUno wrote:
On Sat, 04 Jul 2015 07:20:38 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

The 4004 was slow enough!

It was also slow, error-prone TTL, prone to false highs and false lows.

ANY optical computer at ANY word length will beat ANY silicon machine.
Interesting that you missed that aspect.


Interestingly, that isn't the case. A colleague of mine summed up the
main reason: "You can control many electrons with one electron, but you
can't control many photons with one photon."

IOW optical logic devices don't have gain. They also generally aren't
as fast as transistors--SiGe and InP transistors work up to over 100 GHz.

The switches have to be at least a few wavelengths in size in order to
be able to confine the fields, and you can pack a _lot_ of transistors
into that space.

You are mistakenly thinking one photon = one transistor.

--

Rick
 
On 7/4/2015 11:59 AM, rickman wrote:
On 7/4/2015 11:24 AM, Phil Hobbs wrote:
On 7/4/2015 11:08 AM, DecadentLinuxUserNumeroUno wrote:
On Sat, 04 Jul 2015 07:20:38 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

The 4004 was slow enough!

It was also slow, error-prone TTL, prone to false highs and false lows.

ANY optical computer at ANY word length will beat ANY silicon
machine.
Interesting that you missed that aspect.


Interestingly, that isn't the case. A colleague of mine summed up the
main reason: "You can control many electrons with one electron, but you
can't control many photons with one photon."

IOW optical logic devices don't have gain. They also generally aren't
as fast as transistors--SiGe and InP transistors work up to over 100 GHz.

The switches have to be at least a few wavelengths in size in order to
be able to confine the fields, and you can pack a _lot_ of transistors
into that space.

You are mistakenly thinking one photon = one transistor.

No, I'm not. Electrons stay where they're put, in general, steering
drain current until you put them someplace else. A photon passes by in
a picosecond and is lost.

Trust me, I spent seven years in silicon photonics trying to do stuff
like that.

Cheers

Phil Hobbs

 
On 7/4/2015 11:42 AM, Phil Hobbs wrote:
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a
chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited
to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. but then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

Aren't the human brain and body 3-D structures? I wonder how they can
do what they do?


If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

That assumes process technology stands still, when we all know it
advances steadily, even if no longer at the exponential rates it has
been achieving.

 
On 7/4/2015 12:10 PM, rickman wrote:
On 7/4/2015 11:42 AM, Phil Hobbs wrote:
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a
chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of
the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited
to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. but then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

Aren't the human brain and body 3-D structures? I wonder how they can
do what they do?

You should have studied biology. ;)

If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

That assumes process technology stands still, when we all know it
advances steadily, even if no longer at the exponential rates it has
been achieving.

No, it doesn't. There are fundamental physical limits involved, like
the size of atoms and the conductivity of pure copper. Just the random
variations in the local density of dopant atoms cause huge
threshold-voltage shifts. Process improvements can help lots of things,
but you can't make smaller atoms.

And the brain hardly has the same speed-of-light limits as fast silicon.

Cheers

Phil Hobbs


 
On 7/4/2015 9:13 AM, Tom Del Rosso wrote:
Martin Brown wrote:

TBH I am amazed that Moore's law has held good for as long as it has.

Moore's law will last forever if people keep re-defining it. The number of
transistors doubled every year, like Moore said it would, from 1959 to about
1982. Then we hit the first wall. Carver Mead re-wrote the book on VLSI
and it continued to double every 1.5 years until about 2000. Since then it
has doubled every 2 years. When it doubles every 20 years people will still
call it Moore's law. It is continued progress, but it isn't Moore's law.

In 1965 Moore observed that the density of integrated circuits was
doubling every year with the most recent observation at a density of 2^6
or about 60 devices, so early days. Then before the term "Moore's Law"
was in use, in 1975 he revised his observation to a doubling every two
years. The "law" has remained in that form since.

Here is a Moore paper where he discusses this and various aspects of the
observation. An interesting read.

http://www.chemheritage.org/Downloads/Publications/Books/Understanding-Moores-Law/Understanding-Moores-Law_Chapter-07.pdf

 
On 7/4/2015 12:17 PM, Phil Hobbs wrote:
On 7/4/2015 12:10 PM, rickman wrote:
On 7/4/2015 11:42 AM, Phil Hobbs wrote:
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a
chip"
in the same sentence. He stated his law when a chip had 4 transistors.
Since you can't make a computer with 4, it makes no sense to speak of
the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited
to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. but then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

Aren't the human brain and body 3-D structures? I wonder how they can
do what they do?

You should have studied biology. ;)



If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

That assumes process technology stands still, when we all know it
advances steadily, even if no longer at the exponential rates it has
been achieving.

No, it doesn't. There are fundamental physical limits involved, like
the size of atoms and the conductivity of pure copper. Just the random
variations in the local density of dopant atoms causes huge
threshold-voltage shifts. Process improvements can help lots of things,
but you can't make smaller atoms.

None of which is relevant as we are nowhere near those limitations and
that is not what we are discussing.


> And the brain hardly has the same speed-of-light limits as fast silicon.

How is that relevant?

 
On 7/3/2015 2:44 PM, Lasse Langwadt Christensen wrote:
On Friday, July 3, 2015 at 8:35:39 PM UTC+2, DecadentLinuxUserNumeroUno wrote:
On Fri, 03 Jul 2015 10:45:15 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:


The real advantage, to me, of using Spice here, is evaluating the
magnetics. Magnetics tend to be a pain. Spice computes the peak and
RMS coil currents, something I prefer not to do any other way.

Their problem is that there are few, if any, models of xformers with
a single-turn feedback winding in their libraries, and adding one
doesn't work, because it needs to be magnetically coupled. So even
their modeling structure needs work.

That is one of the main reasons I do not model my supplies with it, as
I *do* use a feedback winding.

you just add the inductors and tell it what the coupling between them is:

http://cds.linear.com/docs/en/lt-journal/LTMag-V16N3-23-LTspice_Transformers-MikeEngelhardt.pdf

I used this to model an antenna system once.
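The Engelhardt article linked above covers exactly this: in LTspice each winding is an ordinary inductor, and a single K statement couples them all, feedback winding included. A minimal sketch (all component values and node names here are illustrative, with turns ratios implied by the inductance ratios):

```
* Two-winding transformer plus a single-turn feedback winding
Lp  in   0  100u   ; primary
Ls  out  0  10u    ; secondary
Lfb fb   0  25n    ; single-turn feedback winding
K1  Lp Ls Lfb 0.99 ; one K statement couples all three windings
```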

 
On 7/4/2015 12:40 PM, rickman wrote:
On 7/4/2015 12:17 PM, Phil Hobbs wrote:
On 7/4/2015 12:10 PM, rickman wrote:
On 7/4/2015 11:42 AM, Phil Hobbs wrote:
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a
chip"
in the same sentence. He stated his law when a chip had 4
transistors.
Since you can't make a computer with 4, it makes no sense to speak of
the
computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited
to a
few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them.
For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. but then all its I/O has
to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

Aren't the human brain and body 3-D structures? I wonder how they can
do what they do?

You should have studied biology. ;)



If you put it the other way up, you have to throttle back the CPU to
the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.

That assumes process technology stands still, when we all know it
advances steadily, even if no longer at the exponential rates it has
been achieving.

No, it doesn't. There are fundamental physical limits involved, like
the size of atoms and the conductivity of pure copper. Just the random
variations in the local density of dopant atoms causes huge
threshold-voltage shifts. Process improvements can help lots of things,
but you can't make smaller atoms.

None of which is relevant as we are nowhere near those limitations and
that is not what we are discussing.

You haven't been keeping up. Copper conductivity has been the main
limiting factor in interconnect speed and density for a decade or
more--that's why there was all that work in low-K dielectrics, latterly air.

The electric permittivity of vacuum is another of those limits.

And threshold voltage shifts due to fluctuations in local dopant density
have been known to be a problem for about that long.

And the brain hardly has the same speed-of-light limits as fast silicon.

How is that relevant?

You brought up the brain, not me.

Cheers

Phil Hobbs

 
In article <usqfpa517onoc09j0ff9jimdq6s7t3cviu@4ax.com>,
jlarkin@highlandtechnology.com says...
On Sat, 04 Jul 2015 06:33:25 -0400, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

On Sat, 04 Jul 2015 09:19:35 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> Gave us:

The singularity is coming - beware. We are already letting computers
design new bigger chips that no individual human can fully comprehend.


Slow light technology will usher in a simple 4 bit optical computer
that puts them all to shame.

The 4004 was slow enough!

I got some!

One day I may need to melt them down for the gold :)

Jamie
 
