the hot new programming language

On 7 Jul 2015 11:50:19 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-06, krw <krw@nowhere.com> wrote:
On 6 Jul 2015 06:44:59 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-05, krw <krw@nowhere.com> wrote:
On 5 Jul 2015 08:51:20 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2015-07-05, krw <krw@nowhere.com> wrote:
On 4 Jul 2015 21:47:03 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

The tax could be inverse on holding time, tapering to zero after, say,
5 years. That would change a lot.

such complexity is unneeded, inflation already does that, and faster.

Inflation does the opposite.

No. It does the opposite of the opposite: it devalues cash, in effect
increasing the cash value of investments.

But it does nothing to dampen the trading feedback loop. It doesn't
matter what the frequency is, the tax is the same.

it reduces the gain; once the gain is below 1.0, short-term trading is
pointless.

No, it doesn't reduce the gain at all, at least not the differential
between insta-tradez/speculation and long-term investments.

It's a tax that is proportional to the holding time.
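
For a rough sense of the numbers, here is a toy comparison (the rates
below are made-up illustrative assumptions, not anyone's actual
proposal). Inflation charges the same percentage per year no matter how
often you trade; a flat per-trade levy charges more per year the
shorter the holding period.

# Toy comparison of annualized drag from inflation vs a flat
# per-trade levy, for different holding periods.  Both rates are
# illustrative assumptions.
INFLATION = 0.03   # 3% per year, on everything held, regardless of trading
TRADE_TAX = 0.005  # 0.5% of the position per round trip (assumed)

for holding_days in (1, 30, 365, 5 * 365):
    years = holding_days / 365
    round_trips_per_year = 1 / years
    inflation_drag = INFLATION                         # flat per year
    trade_tax_drag = TRADE_TAX * round_trips_per_year  # grows for short holds
    print(f"hold {holding_days:5d} days: inflation {inflation_drag:7.1%}/yr,"
          f"  per-trade levy {trade_tax_drag:7.1%}/yr")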

only if you're holding cash.

No, investments are the same.

bank deposits count as "cash". invest in something better performing.

Irrelevant.

I can make neither head nor tail of your claims; they appear to be
based on tautologies, and thus I choose to leave this matter unresolved.

If you can't read, I can't help you understand your error.
 
On 07/07/2015 02:30 AM, Clifford Heath wrote:
On 08/07/15 00:21, Phil Hobbs wrote:
On 07/06/2015 02:35 PM, Clifford Heath wrote:
No argument is actually needed until you present evidence of the
existence of this meta-magical woo.

What magic? I'm merely pointing out (along with the whole Western
philosophical tradition) that simple mechanistic materialism is
self-contradictory, because it logically entails the consequence that
logical thought cannot exist.

You define logical thought as requiring logical causation, in other
words, as "why" the thought occurred, the reason (logic) behind the
thought.

Has it ever occurred to you that thought might be *reasonable* in and of
itself without needing such an a-priori assumption that it originates
elsewhere, somewhere other than "in the machine"?

You define a kind of thought that cannot exist in a machine (because it
originates outside a machine) and then you "prove" that such thought
cannot exist in the machine. It's begging the question.

The madman's brain has the same random noise in it as Einstein's does,
but only Einstein has the appropriately trained resonant filters to
choose noise that fits training that others recognise as "reasonable".

No one has ever refuted that AFAICT.

It's not possible to refute a tautological argument, and that's what
this is.

Expound.

A tautology would be "thought is impossible, therefore no thought can be
taking place." This argument is a positive one, just a bit too
conclusive for the comfort of mechanists (of whom I used to be one).


I'm not the one claiming it exists,
you are,

Claiming what exists? You're the one making the metaphysical claims
here, Cliff. You seem to think that your metaphysics is somehow
privileged, exempt from elementary philosophical investigation. Yet you
have no problems putting words in my mouth and attributing all sorts of
views to me that I'm not advancing.

I'm not arguing for ghosts in machines, or the nonexistence of matter,
or anything else of the kind. I'm merely claiming to demonstrate that
your position is too simple, in that it is self contradictory.

Aristotle knew that, and he's nobody's idea of a fundy. Kant knew it
too. So did every thinking person until about 1955ish, when the fashion
for computers and their programs seems to have made everybody forget how
to add and subtract.

and I'm just pointing out the psychological motivations for
such delusions. Prove it. Or are you limited to merely your "faith, the
evidence of things unseen"?

I'd say that ignoring a straightforward logical antinomy in your
position in order to support your prior commitment to a strictly
mechanical universe is the faith position, not mine. I think that
there's only one kind of truth.

No. You're claiming that for thought to be "logical", it has to
originate outside the brain - you presume that such an "outside" exists.

I said nothing of the sort. I just said that the mind wasn't merely
software. I don't know what it is, but it logically cannot be merely an
epiphenomenon of the brain.

That's the "magic" I was referring to. It's the same error that has
existed since Plato tried to divide form and substance.

I doubt you could defend that statement.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Wed, 08 Jul 2015 08:55:32 -0700, John Larkin wrote:

[snip]

Why wouldn't neurons use quantum computing principles inside? If it's
possible, evolution would have taken advantage of it. So you are
saying that it's not merely a weird idea, but it's impossible. Pretty
strong statement.

I work every day with neurophysiologists who are trying to understand how
memory, decision-making, visual recognition, and other amazing "ordinary"
brain activities occur. I have yet to hear of any talk of quantum mechanics
being necessary for any of these.

Looking at single neurons or small sets of neurons (and perhaps glia), there's
been no need to drag quantum mechanics into analytical descriptions of cell
behavior. Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.
 
On 07/07/2015 15:21, Phil Hobbs wrote:
On 07/06/2015 02:35 PM, Clifford Heath wrote:
On 07/07/15 11:53, Phil Hobbs wrote:
On 7/6/2015 12:00 PM, Clifford Heath wrote:
On 07/07/15 09:07, Phil Hobbs wrote:
This was a philosophical commonplace up until our culture lost its
mind,
sometime around 1955.
NB this has nothing to do with any particular religion--Plato and
Aristotle knew all about it.

Plato is identified (probably wrongly) as the source of this bizarre
material/spiritual dualism which extends all the way to more modern
Cartesian thinking. It's purest mysticism, and doesn't answer the main
question that it purports to: how can our lives have significance if we
are machines with no "truly" free will? It does not - and cannot -
answer this question because it just relegates "significance" to another
realm - which has the same problems. If that realm has no rules, it's
chaos, and if it does, how is it not a machine? It's turtles all the
way
down, folk.

The populist rubbish in which Larkin suggests the two realms are joined
by quantum uncertainties (without breaking the idea of the physical
world as mechanistic) has never been demonstrated to be plausible, and
represents just another attempt to clutch at straws in the search for a
meaning beyond our individual finite lives. Why should the "spiritual"
realm be any more capable of carrying meaning than the one world we
*can* observe? Turtles again, folk.

You reckon it was around 1955 that people started to call bullshit on
this nonsense? Do you associate any particular event or person with
that change?

Brain structure and operation is quite unlike any existing logic
machine, but it's still a logic machine. The recent adoption by the IBM
Cortical Learning Center of Jeff Hawkins' (of Numenta) approach called
Hierarchical Temporal Memory (HTM) will show that our brains are not
magical. The resources that IBM are bringing to bear on realising the
incredible recent achievements of small-scale HTM - like, they're now
starting to build wafer-scale devices consisting of eventually up to
half a dozen stacked full wafers - will tell the truth, and this
mystical nonsense will finally be seen for what it is - a failed attempt
to dream that humans have some significance beyond just the arrangement
of molecules that make us.

One life, then it ends. Make a difference while you can, don't spend it
preparing for a future life where eternity requires that no difference
can ever be made.

Clifford Heath.

So where's your actual argument?

No argument is actually needed until you present evidence of the
existence of this meta-magical woo.

What magic? I'm merely pointing out (along with the whole Western
philosophical tradition) that simple mechanistic materialism is
self-contradictory, because it logically entails the consequence that
logical thought cannot exist.

I think you will have to elaborate on why you think that a simple
mechanistic materialism logically entails the consequence that logical
thought cannot exist. Our individual reasoning may be flawed but there
is always peer review to find errors in any mathematical proofs.

What we perceive as consciousness and thought is likely to be an
emergent behaviour on a sufficiently large network of interconnected
simple computing nodes. It looks complex from the outside but the
individual elements are nothing more than a pattern of clicks on neurons.

We are not all that far off being able to build dedicated hardware with
comparable complexity to the human brain now, so it will soon be a
viable experiment. Though a cat brain seems to be the first choice.

In much the same way as the simple rules for Conway's life by sheer luck
happen to produce a computationally complete Turing machine.
No one has ever refuted that AFAICT.

If you thought I was defending some specific model of how the mind
coexists with the brain, you're mistaken. There is clearly a deep
connection between the two, but it equally clearly isn't that the mind
is a program running on the brain's hardware.

Not a program as we would understand it, but a pattern of interconnects
and signals that represent things we have learnt in a physical medium.

I'm not the one claiming it exists,
you are, and I'm just pointing out the psychological motivations for
such delusions. Prove it. Or are you limited to merely your "faith, the
evidence of things unseen"?

I'd say that ignoring a straightforward logical antinomy in your
position in order to support your prior commitment to a strictly
mechanical universe is the faith position, not mine. I think that
there's only one kind of truth.

I don't see any compelling reason to invoke magic of any kind.

Cheers

Phil "former mechanist" Hobbs

When we can do the full computer simulations on suitable hardware then
it will be possible to decide this question by experiments.

--
Regards,
Martin Brown
 
On 07/07/2015 00:19, John Larkin wrote:
On Mon, 06 Jul 2015 19:07:12 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 7/6/2015 6:52 PM, John Larkin wrote:

A human, with a wet-chemistry brain, using millisecond logic gates,
can whup a teraflop-CPU robot at tennis.

At the moment because we humans can use pattern matching tricks and
parallelism that are beyond our ability to program into robots.

The robots are getting better though. I have seen a few now that can do
tricky variable jobs that previously only humans could manage. Sorting
Smarties (M&Ms) from a tray being one of the more impressive examples.

The whole singularity thing is a red herring. It's a symptom of the
decline of thinking in our age that most folks assume without proof that
'modern science' has proven that the human mind is just the pure
physical operation of the human brain under physical causation and
nothing else.

It is likely that the computational singularity exists somewhere in the
future of computer hardware. When I was an undergraduate the idea that a
computer could beat me at chess was risible. Now any one of a dozen
chess engines is stronger than our best human world chess champion.

David Levy only just won his bet. Turns out though that computer chess
was a false dawn and is an easier problem than we first thought.

Go requires much deeper pattern matching skills to play at any kind of
serious competitive level and computers are way behind human masters.

More impressive still are the self driving cars which are just into road
trials now. Even with some bugs they may be safer than humans.

That argument isn't modern at all, and in fact was debunked quite
conclusively about 400 BC. We can't think without our brains, but we
don't altogether think *with* them. If pure physical causation (what
Aristotle called 'efficient causation') is all that's going on when our
brains change from one state to the next, there's no room for *logical*
causation at all. A madman's brain is just as good as Einstein's, on
that view, and if (per impossibile) the state of a brain happened to
correspond to one step of a logical argument, there would be no way to
even notice that the next step went wrong, let alone correct it.

The madman's brain *is* almost as good as Einstein's assuming that the
madman can see, walk and run. It takes insane processing power to handle
the visual inputs from the eyes and run a 3D world model.

The amount of the brain dedicated to abstract thought, reasoning and
imagination is tiny by comparison.

This was a philosophical commonplace up until our culture lost its mind,
sometime around 1955.

NB this has nothing to do with any particular religion--Plato and
Aristotle knew all about it.

Cheers

Phil Hobbs

The model of a brain as a bunch of threshold logic gates (the Neural
Network approach) is silly. Prop delay alone makes the idea absurd.
Single-celled critters can do pretty cool adaptive stuff.

It is only silly if you choose not to understand it.

The brain must be quantum mechanical at the cellular level, with all
the mysticism and noncausal behavior of quantum mechanics.

New age weirdo thinking. Presently advocated by Penrose in his various
popular science books. I remain unconvinced. The crux of the complexity
of a human brain is a huge number of tiny simple computing elements and
the insanely large number of permutations of possible interconnects.

--
Regards,
Martin Brown
 
On Wed, 08 Jul 2015 15:19:39 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 07/07/2015 00:19, John Larkin wrote:
On Mon, 06 Jul 2015 19:07:12 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 7/6/2015 6:52 PM, John Larkin wrote:

A human, with a wet-chemistry brain, using millisecond logic gates,
can whup a teraflop-CPU robot at tennis.

At the moment because we humans can use pattern matching tricks and
parallelism that are beyond our ability to program into robots.

The robots are getting better though. I have seen a few now that can do
tricky variable jobs that previously only humans could manage. Sorting
Smarties (M&Ms) from a tray being one of the more impressive examples.

The whole singularity thing is a red herring. It's a symptom of the
decline of thinking in our age that most folks assume without proof that
'modern science' has proven that the human mind is just the pure
physical operation of the human brain under physical causation and
nothing else.

It is likely that the computational singularity exists somewhere in the
future of computer hardware. When I was an undergraduate the idea that a
computer could beat me at chess was risible. Now any one of a dozen
chess engines is stronger than our best human world chess champion.

David Levy only just won his bet. Turns out though that computer chess
was a false dawn and is an easier problem than we first thought.

Go requires much deeper pattern matching skills to play at any kind of
serious competitive level and computers are way behind human masters.

More impressive still are the self driving cars which are just into road
trials now. Even with some bugs they may be safer than humans.

A few things like highway collision avoidance, maybe warnings and
applied braking, might be good. I can't imagine a self-driving car
being feasible in a dense city.


That argument isn't modern at all, and in fact was debunked quite
conclusively about 400 BC. We can't think without our brains, but we
don't altogether think *with* them. If pure physical causation (what
Aristotle called 'efficient causation') is all that's going on when our
brains change from one state to the next, there's no room for *logical*
causation at all. A madman's brain is just as good as Einstein's, on
that view, and if (per impossibile) the state of a brain happened to
correspond to one step of a logical argument, there would be no way to
even notice that the next step went wrong, let alone correct it.

The madman's brain *is* almost as good as Einstein's assuming that the
madman can see, walk and run. It takes insane processing power to handle
the visual inputs from the eyes and run a 3D world model.

The amount of the brain dedicated to abstract thought, reasoning and
imagination is tiny by comparison.

This was a philosophical commonplace up until our culture lost its mind,
sometime around 1955.

NB this has nothing to do with any particular religion--Plato and
Aristotle knew all about it.

Cheers

Phil Hobbs

The model of a brain as a bunch of threshold logic gates (the Neural
Network approach) is silly. Prop delay alone makes the idea absurd.
Single-celled critters can do pretty cool adaptive stuff.

It is only silly if you choose not to understand it.

The brain must be quantum mechanical at the cellular level, with all
the mysticism and noncausal behavior of quantum mechanics.

New age weirdo thinking. Presently advocated by Penrose in his various
popular science books. I remain unconvinced. The crux of the complexity
of a human brain is a huge number of tiny simple computing elements and
the insanely large number of permutations of possible interconnects.

Single-cell and few-cell brainless critters do impressive things, like
hunting and hiding and defending themselves and finding mates. Why
would neurons be limited to acting like slow majority logic gates,
dumber than a bacterium? The Neural Network model is popular because
people don't understand how cells actually work; it's cargo cult
science. What might the image recognition processing time be for a
trillion element neural net computer with millisecond element prop
delay? It wouldn't win many tennis matches.
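
A back-of-envelope estimate makes that question concrete (the depth and
per-element delay below are assumed round numbers, not measurements):
with enough parallelism, end-to-end latency is set by the number of
serial stages times the per-stage delay, not by the element count.

# Latency of a massively parallel net of slow elements (toy numbers).
element_delay_s = 1e-3   # assumed ~1 ms per threshold element
serial_stages   = 20     # assumed depth of the recognition pipeline
elements_total  = 1e12   # a trillion elements, mostly working in parallel

latency_ms = serial_stages * element_delay_s * 1e3
print(f"{elements_total:.0e} elements, {serial_stages} serial stages"
      f" -> ~{latency_ms:.0f} ms per recognition")
# ~20 ms here; measured human rapid visual categorization is roughly
# 100-150 ms, so millisecond elements are not an obvious show-stopper
# as long as the serial depth stays small.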

Why wouldn't neurons use quantum computing principles inside? If it's
possible, evolution would have taken advantage of it. So you are
saying that it's not merely a weird idea, but it's impossible. Pretty
strong statement.



--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine. The bionic
eye project is active and will succeed within a decade. Will you then
move the goalposts again, to defend your magical thinking?

We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.

Since nobody knows how nerves or brains work, why exclude any
possibility? And why would evolution discard any useful phenomena?

Occam's razor. There's no need to consider it, because nothing appears
to be going on that actually needs it. Contrary to your assertion,
no-one claimed it was impossible that quantum effects play a part. Just
that it seems unnecessary (so far).

Neural nets - the kind you refer to - are not remotely structured on the
actual neural structure as it's currently understood. In particular,
they do not have temporal feedback (whereas all cognitive thinking
occurs in a mess of massively-interconnected *oscillators*). They also
lack the hierarchical layering and reinforcement structure of the neural
cortex. They are constructed in modules whose learning time grows
exponentially with size, so are completely unworkable at scale.

All these things have been fixed in the HTM research I referred to. The
breakthrough demonstrations occurred around 2010, and have grown since
then, and as I said, it was recently adopted by IBM's Cortical Learning
Center. They would not be committing resources to wafer-scale
fabrication without some pretty compelling demonstrations.

But hey, why not just continue spouting old criticisms of some earlier
technology instead of doing some reading. That's much easier, right?

Clifford Heath, CTO, Infinuendo.
 
On 09/07/15 07:17, Bill Sloman wrote:
On Tuesday, July 7, 2015 at 1:19:17 AM UTC+2, John Larkin wrote:
The brain must be quantum mechanical at the cellular level, with all
the mysticism and noncausal behavior of quantum mechanics.

Penrose made much the same claim in "The Emperor's New Mind"

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind

Nobody took him seriously, and he's got a rather better track record than you have.

Paul Davies is a proponent of this magical thinking too. It's all an
unnecessary attempt to salvage free will (and hence, significance, and
the ancestral spirit world) from the machine.

David Pennington (who's no slouch himself; studied under Dirac,
world-class mathematician and also an Anglican priest!) unwittingly
showed me why that salvage attempt is unnecessary, and put the clincher
on my personal non-theism.

He said, more or less:

"Consider the molecules of air in this room, which collide elastically
(like billiard balls, except for the shape), such that in a tenth of a
nanosecond each molecule will experience an average of 50 collisions.
How much do we need to know about a molecule and its environment
in order to be able to predict its direction and momentum after those
50 collisions?

The astounding answer is that if you omit from your calculations the
gravitational attraction (the weakest of the known physical forces) of
a single electron (the smallest known stable particle) at the opposite
end of the universe (the most distant known location), then after 50
collisions, a tenth of a nanosecond, you know *nothing*, you cannot
predict *at all*, the direction or momentum of that molecule."
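
A rough order-of-magnitude check of that claim, using textbook values
for air at room conditions (all of the numbers below are assumptions of
this sketch, not part of the original quote):

import math

G      = 6.67e-11   # gravitational constant, m^3 kg^-1 s^-2
m_e    = 9.11e-31   # electron mass, kg
R      = 4.4e26     # radius of the observable universe, m
v      = 500.0      # thermal speed of an air molecule, m/s
mfp    = 6.8e-8     # mean free path in air, m
radius = 1.8e-10    # molecular radius, m

tau   = mfp / v             # time between collisions
accel = G * m_e / R**2      # pull of one distant electron
seed  = accel * tau / v     # initial angular error, radians
gain  = mfp / radius        # error amplification per collision

n = math.log(1.0 / seed) / math.log(gain)
print(f"seed error ~{seed:.1e} rad, x{gain:.0f} per collision"
      f" -> unpredictable after ~{n:.0f} collisions")
# With these numbers the error reaches order 1 radian after roughly
# 40 collisions - the same ballpark as the quoted figure of 50.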

In other words, without even invoking any quantum effects, everything in
the universe is inextricably linked to everything else. Just because
there is some non-local causation happening is no reason to disclaim
personal responsibility, or to regard your actions as non-free. They're
just free in a different context, and that's all that matters.

Clifford Heath, CTO, Infinuendo.
 
On 08/07/15 04:51, Phil Hobbs wrote:
On 07/07/2015 02:30 AM, Clifford Heath wrote:
On 08/07/15 00:21, Phil Hobbs wrote:
On 07/06/2015 02:35 PM, Clifford Heath wrote:
No argument is actually needed until you present evidence of the
existence of this meta-magical woo.

What magic? I'm merely pointing out (along with the whole Western
philosophical tradition) that simple mechanistic materialism is
self-contradictory, because it logically entails the consequence that
logical thought cannot exist.

You define logical thought as requiring logical causation, in other
words, as "why" the thought occurred, the reason (logic) behind the
thought.

Has it ever occurred to you that thought might be *reasonable* in and of
itself without needing such an a-priori assumption that it originates
elsewhere, somewhere other than "in the machine"?

You define a kind of thought that cannot exist in a machine (because it
originates outside a machine) and then you "prove" that such thought
cannot exist in the machine. It's begging the question.

The madman's brain has the same random noise in it as Einstein's does,
but only Einstein has the appropriately trained resonant filters to
choose noise that fits training that others recognise as "reasonable".

No one has ever refuted that AFAICT.

It's not possible to refute a tautological argument, and that's what
this is.

Expound.

A tautology would be "thought is impossible, therefore no thought can be
taking place." This argument is a positive one, just a bit too
conclusive for the comfort of mechanists (of whom I used to be one).

Your argument is barely any better. You're firstly defining thought as
something which cannot exist within a machine, and then showing that it
cannot exist within a machine. Well duh.

The idea you attacked in your first post on this subject is that
"'modern science' has proven that the human mind is just the pure
physical operation of the human brain under physical causation and
nothing else."

So either you're attacking the "physical operation... under causation"
or you're saying that the brain is linked to something more than itself.
Furthermore, you're saying that Aristotle proved your point. You say
"there would be no way to even notice that the next step went wrong",
but there *is*: the real experiential world allows us to discount
flights of fancy and determine which things are real.

I'm not the one claiming it exists, you are,
Claiming what exists?

Claiming that thought exists, and requires something outside the machine
that's "not just software".

I'm merely claiming to demonstrate that
your position is too simple, in that it is self contradictory.

Your demonstration of self-contradiction was a failure.

Kant himself departed from Plato and aligned himself with Hume in
claiming that thought alone (without empirical observation) cannot lead
to truth.

No. You're claiming that for thought to be "logical", it has to
originate outside the brain - you presume that such an "outside" exists.

I said nothing of the sort. I just said that the mind wasn't merely
software. I don't know what it is, but it logically cannot be merely an
epiphenomenon of the brain.

Even if we believe that the brain is somehow linked to the rest of the
universe in a cosmic consciousness (via quantum gravity or any other
kind of woo), and that therefore the entire universe is a "thinking
machine", your argument would still claim that it cannot think. To make
that argument requires redefining thought as "that which cannot exist
inside a machine". That's just folly and sophistry.

Clifford Heath.
 
On Tuesday, July 7, 2015 at 1:19:17 AM UTC+2, John Larkin wrote:
On Mon, 06 Jul 2015 19:07:12 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 7/6/2015 6:52 PM, John Larkin wrote:
On Sat, 04 Jul 2015 12:17:19 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 7/4/2015 12:10 PM, rickman wrote:
On 7/4/2015 11:42 AM, Phil Hobbs wrote:
On 7/4/2015 9:32 AM, Tom Del Rosso wrote:
Martin Brown wrote:

Moore's law never made any claims about speed. It was specifically
about the number density of transistors on a given area of silicon.

I roll my eyes when I hear "Moore's law" and the "computing power of a
chip" in the same sentence. He stated his law when a chip had 4
transistors. Since you can't make a computer with 4, it makes no sense
to speak of the computing power of a chip.

What we need is a breakthrough in 3D structures. In 2D we're limited to
a few connections per transistor, and a few per gate. It's
connections-per-element that will make HAL possible.


There are all kinds of 3D structures. The problem is cooling them. For
instance, say you're stacking a processor and several planes of memory.
The processor generates a lot of heat, so it has to go next to the
heat sink, i.e. at the top of the stack. But then all its I/O has to go
through the memory chips, so you lose all your area to through-silicon
vias (TSVs). Same problem as very tall buildings.

Aren't the human brain and body 3-D structures? I wonder how they can
do what they do?

You should have studied biology. ;)



If you put it the other way up, you have to throttle back the CPU to the
point that you don't gain anything. Computer speed has been a tradeoff
between clock rate and cooling since the 1980s. I remember going to a
talk by a system architect in about 1988, where he put up a plot of
delay vs. power consumption per gate. It dropped steeply at first, of
course but then gradually rose again at high powers, because the chips
had to be spaced out in order to cool them, which added time-of-flight
delay.
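
A toy model with made-up constants reproduces the shape of that plot:
switching delay falling as 1/P, plus a time-of-flight term that grows
because cooling forces the gates further apart at high power.

import numpy as np

P = np.logspace(-4, 0, 200)   # power per gate, W
k_switch = 1e-13              # assumed switching energy, J: t_switch = k/P
q_max    = 5e5                # assumed coolable heat flux, W/m^2 (~50 W/cm^2)
v_signal = 1.5e8              # assumed signal speed on interconnect, m/s

t_switch = k_switch / P           # faster gates need more power
pitch    = np.sqrt(P / q_max)     # spacing needed to stay coolable
t_flight = pitch / v_signal       # time of flight across that spacing

t_total = t_switch + t_flight
i = int(np.argmin(t_total))
print(f"minimum total delay ~{t_total[i]*1e12:.1f} ps"
      f" at ~{P[i]*1e3:.0f} mW per gate")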

That assumes constant process technology, which we all know keeps
advancing steadily, even if not at the same exponential rates it has
been achieving.

No, it doesn't. There are fundamental physical limits involved, like
the size of atoms and the conductivity of pure copper. Just the random
variations in the local density of dopant atoms cause huge
threshold-voltage shifts. Process improvements can help lots of things,
but you can't make smaller atoms.

And the brain hardly has the same speed-of-light limits as fast silicon.

Cheers

Phil Hobbs

A human, with a wet-chemistry brain, using millisecond logic gates,
can whup a teraflop-CPU robot at tennis.

The whole singularity thing is a red herring. It's a symptom of the
decline of thinking in our age that most folks assume without proof that
'modern science' has proven that the human mind is just the pure
physical operation of the human brain under physical causation and
nothing else.

That argument isn't modern at all, and in fact was debunked quite
conclusively about 400 BC. We can't think without our brains, but we
don't altogether think *with* them. If pure physical causation (what
Aristotle called 'efficient causation') is all that's going on when our
brains change from one state to the next, there's no room for *logical*
causation at all. A madman's brain is just as good as Einstein's, on
that view, and if (per impossibile) the state of a brain happened to
correspond to one step of a logical argument, there would be no way to
even notice that the next step went wrong, let alone correct it.

This was a philosophical commonplace up until our culture lost its mind,
sometime around 1955.

NB this has nothing to do with any particular religion--Plato and
Aristotle knew all about it.

Cheers

Phil Hobbs

The model of a brain as a bunch of threshold logic gates (the Neural
Network approach) is silly. Prop delay alone makes the idea absurd.
Single-celled critters can do pretty cool adaptive stuff.

The brain must be quantum mechanical at the cellular level, with all
the mysticism and noncausal behavior of quantum mechanics.

Penrose made much the same claim in "The Emperor's New Mind"

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind

Nobody took him seriously, and he's got a rather better track record than you have.

--
Bill Sloman, Sydney
 
On 09/07/15 12:20, John Larkin wrote:
On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.
For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.
And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.
Have they? Cochlear implants just stimulate local sections of the
cochlea;

It's only a question of degree, not fundamental differences. No quantum
wooo is required here.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding

On the other hand, quite a lot is known about how to interpret the
output of the cochlea. They've tapped into individual nerves and can
figure out what the ear is hearing. How else would they have known what
stimuli to send? The brain is adaptable, learning to understand
messages that are off-target, but being on-target reduces the training
needs.

If you want more detail, I can put you in touch with a buddy who's a
senior communications engineer at Cochlear Inc, just across town from
here. But I suspect that blind assertion is more palatable to you than
research.

Clifford Heath, CTO, Infinuendo.
 
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles
<fpm@u.washington.edu> wrote:

On Wed, 08 Jul 2015 08:55:32 -0700, John Larkin wrote:

[snip]

Why wouldn't neurons use quantum computing principles inside? If it's
possible, evolution would have taken advantage of it. So you are
saying that it's not merely a weird idea, but it's impossible. Pretty
strong statement.

I work every day with neurophysiologists who are trying to understand how
memory, decision-making, visual recognition, and other amazing "ordinary"
brain activities occur. I have yet to hear of any talk of quantum mechanics
being necessary for any of these.

OK, explain how those things actually work.

Looking at single neurons or small sets of neurons (and perhaps glia), there's
been no need to drag quantum mechanics into analytical descriptions of cell
behavior. Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.

Since nobody knows how nerves or brains work, why exclude any
possibility? And why would evolution discard any useful phenomena?

So many people don't actually believe in evolution.



--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Wed, 8 Jul 2015 14:16:56 -0700 (PDT), Bill Sloman
<bill.sloman@gmail.com> Gave us:

Penrose made much the same claim in "The Emperor's New Mind"

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind

Nobody took him seriously, and he's got a rather better track record than you have.

Folks tried to ridicule Tiny Tim too, but the guy was a genius.
 
On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.

Have they? Cochlear implants just stimulate local sections of the
cochlea; they don't drive the auditory nerves and are hardly a bionic
ear. Similarly, people can tickle spots on the retina and produce
crude images, sensations of light, but nobody can decode or encode
data in the optic nerve.


The bionic
eye project is active and will succeed within a decade. Will you then
move the goalposts again, to defend your magical thinking?

Tapping into the optic nerve would demonstrate serious understanding
of how the nervous system works. Zapping spots on the retina is
relatively easy, really just a mechanical problem. A real bionic eye
would *replace* the eye, not just poke it.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding, or how images are processed on
the other end. But how does a finger transmit its various sensations
up into the brain?


We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.

Since nobody knows how nerves or brains work, why exclude any
possibility? And why would evolution discard any useful phenomena?

Occam's razor. There's no need to consider it, because nothing appears
to be going on that actually needs it. Contrary to your assertion,
no-one claimed it was impossible that quantum effects play a part. Just
that it seems unnecessary (so far).

Neural nets - the kind you refer to - are not remotely structured on the
actual neural structure as it's currently understood. In particular,
they do not have temporal feedback (whereas all cognitive thinking
occurs in a mess of massively-interconnected *oscillators*). They also
lack the hierarchical layering and reinforcement structure of the neural
cortex. They are constructed in modules whose learning time grows
exponentially with size, so are completely unworkable at scale.

All these things have been fixed in the HTM research I referred to. The
breakthrough demonstrations occurred around 2010, and have grown since
then, and as I said, it was recently adopted by IBM's Cortical Learning
Center. They would not be committing resources to wafer-scale
fabrication without some pretty compelling demonstrations.

But hey, why not just continue spouting old criticisms of some earlier
technology instead of doing some reading. That's much easier, right?

Clifford Heath, CTO, Infinuendo.

--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 09/07/2015 03:20, John Larkin wrote:
On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.

Have they? Cochlear implants just stimulate local sections of the
cochlea; they don't drive the auditory nerves and are hardly a bionic
ear. Similarly, people can tickle spots on the retina and produce
crude images, sensations of light, but nobody can decode or encode
data in the optic nerve.

Actually they are getting close to being able to fake signals from
various external sensors at modest resolutions and the brain network
adjusts to use the crude data presented to it. They have also got a lot
better at decoding signals sent to muscle groups for bionic limbs.

That we don't fully understand everything about the brain is a natural
part of bleeding edge scientific research and in no way invalidates what
has been done so far. Current understanding is incomplete, and it might
even be wrong, but until there is a better predictive scientific theory
there is no point in replacing it with new age handwaving magyck.

If ever there was a proponent of cargo cult science, it is John Larkin.

The bionic
eye project is active and will succeed within a decade. Will you then
move the goalposts again, to defend your magical thinking?

Tapping into the optic nerve would demonstrate serious understanding
of how the nervous system works. Zapping spots on the retina is
relatively easy, really just a mechanical problem. A real bionic eye
would *replace* the eye, not just poke it.

You keep moving the goalposts so as to invoke magyck quantum weirdness
for everything that *you* don't understand without bothering to look at
what is already being done in the field.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding, or how images are processed on
the other end. But how does a finger transmit its various sensations
up into the brain?

As a series of nerve impulses - faking them is a serious research
project for providing feedback from prosthetic limbs. BBC take on it:

http://www.bbc.co.uk/news/health-17183888

We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.

Since nobody knows how nerves or brains work, why exclude any
possibility? And why would evolution discard any useful phenomena?

Quantum effects are typically limited to relatively small scales at the
molecular level. Cells do use every quantum trick available to them to
convert light or food into energy available for electron transfer. But
at the higher level of the cellular scale differential equations suffice
to describe the system.

It is much the same as with thermodynamics - you don't need to know the
exact details of every particle motion to work out the bulk properties.

Occam's razor. There's no need to consider it, because nothing appears
to be going on that actually needs it. Contrary to your assertion,
no-one claimed it was impossible that quantum effects play a part. Just
that it seems unnecessary (so far).

Neural nets - the kind you refer to - are not remotely structured on the
actual neural structure as it's currently understood. In particular,
they do not have temporal feedback (whereas all cognitive thinking
occurs in a mess of massively-interconnected *oscillators*). They also
lack the hierarchical layering and reinforcement structure of the neural
cortex. They are constructed in modules whose learning time grows
exponentially with size, so are completely unworkable at scale.

Neural nets are a very crude approximation to how the brain works. That
they work as well as they do in practice is somewhat surprising.
All these things have been fixed in the HTM research I referred to. The
breakthrough demonstrations occurred around 2010, and have grown since
then, and as I said, it was recently adopted by IBM's Cortical Learning
Center. They would not be committing resources to wafer-scale
fabrication without some pretty compelling demonstrations.

But hey, why not just continue spouting old criticisms of some earlier
technology instead of doing some reading. That's much easier, right?

Clifford Heath, CTO, Infinuendo.

That is the problem. John assumes that anything he can imagine is
reality and that experienced researchers in the field and all the
peer-reviewed literature are wrong just because he says so.

--
Regards,
Martin Brown
 
On 08/07/2015 16:55, John Larkin wrote:
On Wed, 08 Jul 2015 15:19:39 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 07/07/2015 00:19, John Larkin wrote:
On Mon, 06 Jul 2015 19:07:12 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 7/6/2015 6:52 PM, John Larkin wrote:

The whole singularity thing is a red herring. It's a symptom of the
decline of thinking in our age that most folks assume without proof that
'modern science' has proven that the human mind is just the pure
physical operation of the human brain under physical causation and
nothing else.

It is likely that the computational singularity exists somewhere in the
future of computer hardware. When I was an undergraduate the idea that a
computer could beat me at chess was risible. Now any one of a dozen
chess engines is stronger than our best human world chess champion.

David Levy only just won his bet. Turns out though that computer chess
was a false dawn and is an easier problem than we first thought.

Go requires much deeper pattern matching skills to play at any kind of
serious competitive level and computers are way behind human masters.

More impressive still are the self driving cars which are just into road
trials now. Even with some bugs they may be safer than humans.

A few things like highway collision avoidance, maybe warnings and
applied braking, might be good. I can't imagine a self-driving car
being feasible in a dense city.

It might well work best there, since the average speeds are low and
predictable; vehicles can convoy much closer together under machine
control than with human reaction times of 100-1000 ms.

Too many drivers these days are using cell phones or, worse, texting.
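
A quick back-of-envelope illustration of the reaction-time point (the
speed and reaction times below are assumed round numbers):

speed_mps = 14.0   # ~50 km/h city traffic (assumed)
for label, reaction_s in (("human, distracted", 1.0),
                          ("human, alert     ", 0.3),
                          ("machine          ", 0.02)):
    gap_m = speed_mps * reaction_s   # gap consumed before braking even starts
    print(f"{label}: {reaction_s*1000:4.0f} ms reaction"
          f" -> {gap_m:4.1f} m of following gap just to react")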

The model of a brain as a bunch of threshold logic gates (the Neural
Network approach) is silly. Prop delay alone makes the idea absurd.
Single-celled critters can do pretty cool adaptive stuff.

It is only silly if you choose not to understand it.

The brain must be quantum mechanical at the cellular level, with all
the mysticism and noncausal behavior of quantum mechanics.

New age weirdo thinking. Presently advocated by Penrose in his various
popular science books. I remain unconvinced. The crux of the complexity
of a human brain is a huge number of tiny simple computing elements and
the insanely large number of permutations of possible interconnects.

Single-cell and few-cell brainless critters do impressive things, like
hunting and hiding and defending themselves and finding mates. Why

Complex apparent behaviour can emerge from the interaction of a few very
simple rules. Conway's simple 2D automaton Life is Turing complete.
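
A minimal sketch of the Life update rule (the glider below is the
standard one; the Turing-completeness result builds gliders, guns and
logic gates out of nothing more than this rule):

from collections import Counter

def step(live):
    """live: set of (x, y) cells that are alive; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step iff it has 3 live neighbours,
    # or is currently alive and has 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(sorted(g))   # the same shape as `glider`, translated by (1, 1)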

would neurons be limited to acting like slow majority logic gates,
dumber than a bacterium? The Neural Network model is popular because
people don't understand how cells actually work; it's cargo cult
science. What might the image recognition processing time be for a
trillion element neural net computer with millisecond element prop
delay? It wouldn't win many tennis matches.

Yours is the cargo cult science. Anything you presently don't understand
you put down to handwaving quantum mysticism.

There is no compelling reason to invoke anything more sophisticated than
a lot of non-linear differential equations to model neurons.
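
One standard example of such a model is the leaky integrate-and-fire
neuron, dV/dt = (-(V - V_rest) + R*I)/tau with a reset at threshold
(the parameter values below are typical textbook numbers, used purely
as an illustration):

dt, T = 1e-4, 0.3      # time step and duration, s
tau, R = 20e-3, 1e7    # membrane time constant (s), resistance (ohm)
V_rest, V_thresh, V_reset = -70e-3, -54e-3, -70e-3
I = 2.0e-9             # constant input current, A

V = V_rest
spikes = []
for i in range(int(T / dt)):
    V += dt * (-(V - V_rest) + R * I) / tau   # leaky integration
    if V >= V_thresh:                          # threshold crossed
        spikes.append(i * dt)                  # record a spike
        V = V_reset                            # and reset
print(f"{len(spikes)} spikes in {T} s -> ~{len(spikes)/T:.0f} Hz")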

Why wouldn't neurons use quantum computing principles inside? If it's
possible, evolution would have taken advantage of it. So you are
saying that it's not merely a weird idea, but it's impossible. Pretty
strong statement.

I am saying that on the scale of a typical cell quantum effects are
largely limited to the individual molecules. There is no cellular aura
of new age quantum mysticism needed to explain what is observed.

This could change but at the moment it looks exceedingly unlikely that
anything other than the huge network combinatorial factors are relevant.

--
Regards,
Martin Brown
 
On Wed, 08 Jul 2015 19:20:02 -0700, John Larkin wrote:

On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.

Have they? Cochlear implants just stimulate local sections of the
cochlea; they don't drive the auditory nerves and are hardly a bionic
ear. Similarly, people can tickle spots on the retina and produce
crude images, sensations of light, but nobody can decode or encode
data in the optic nerve.

You're wrong - cochlear implants _do_ drive the auditory nerve. The
most common maladies that lead to these implants involve a loss of the
inner hair cells, which ordinarily transduce the vibration at the
basilar membrane into vesicular releases which trigger the auditory
nerve. It is these hair cells that are so commonly damaged by drugs,
loud sounds, certain diseases, and other physiological insults.

No, we don't _completely_ understand the coding of sound OR light in the
auditory or optic nerves. Some researchers have only recently discovered
some new cell types in the retina (which is turning out to be more
complex than most of us thought). We have learned a lot, enough to have
useful cochlear prostheses, and perhaps soon a vestibular prosthetic
device (i.e. for balance e.g. for Meniere's patients) :)


The bionic
eye project is active and will succeed within a decade. Will you then
move the goalposts again, to defend your magical thinking?

Tapping into the optic nerve would demonstrate serious understanding
of how the nervous system works. Zapping spots on the retina is
relatively easy, really just a mechanical problem. A real bionic eye
would *replace* the eye, not just poke it.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding, or how images are processed on
the other end.

It's a mixture. There's greater understanding of how and where a variety
of visual features are handled - color, shape, continuity of visual forms,
movement, and many other features. It's not a complete understanding,
but it's exciting and new things are being discovered all the time!

But how does a finger transmit its various sensations
up into the brain?

There are a variety of coding strategies used in peripheral nerves. Since
we don't have an absolutely complete understanding of the whole system,
we can't assume we completely understand the encoding.


[snip]
 
On Thu, 09 Jul 2015 04:41:30 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 12:20, John Larkin wrote:
On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.
For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.
And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.
Have they? Cochlear implants just stimulate local sections of the
cochlea;

It's only a question of degree, not fundamental differences. No quantum
wooo is required here.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding

On the other hand, quite a lot is known about how to interpret the
output of the cochlea. They've tapped into individual nerves and can
figure out what the ear is hearing. How else would they have known what
stimuli to send?

By fooling around.

It was observed that the cochlea senses different frequencies at
different points along its spiral. It's trivial to stimulate single
points electrically and have subjects report what they hear. Then it's
mechanically difficult but logically trivial to insert multiple
electrodes, and fiddle with algorithms until the subject reports sorta
intelligible sounds. No deep understanding is required to do any of
that. Fiddle until it works.

The brain is adaptable, learning to understand
messages that are off-target, but being on-target reduces the training
needs.

If you want more detail, I can put you in touch with a buddy who's a
senior communications engineer at Cochlear Inc, just across town from
here. But I suspect that blind assertion is more palatable to you than
research.

I bet that buddy couldn't cut out the cochlea and stimulate the
auditory nerve to any useful effect.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Thu, 09 Jul 2015 09:10:34 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 09/07/2015 03:20, John Larkin wrote:
On Thu, 09 Jul 2015 02:39:27 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 09/07/15 07:43, John Larkin wrote:
On Wed, 8 Jul 2015 16:57:48 +0000 (UTC), Frank Miles wrote:
Moderately complicated computational tasks involving sensory and
motor tasks are moderately well understood relatively near the periphery based
on these models.

For very small values of "moderately." When you can tap into an optic
nerve and project the image on a computer screen, or go the other way
and fix blindness, I'll be impressed that you understand how this
stuff works.

And yet we have done both those things already with hearing, and the
operation to implant the bionic ear is approaching routine.

Have they? Cochlear implants just stimulate local sections of the
cochlea; they don't drive the auditory nerves and are hardly a bionic
ear. Similarly, people can tickle spots on the retina and produce
crude images, sensations of light, but nobody can decode or encode
data in the optic nerve.

Actually they are getting close to being able to fake signals from
various external sensors at modest resolutions and the brain network
adjusts to use the crude data presented to it. They have also got a lot
better at decoding signals sent to muscle groups for bionic limbs.

That we don't fully understand everything about the brain is a natural
part of bleeding edge scientific research and in no way invalidates what
has been done so far. Current understanding is incomplete, and it might
even be wrong, but until there is a better predictive scientific theory
there is no point in replacing it with new age handwaving magyck.

Quantum mechanics is new age handwaving? I suppose it is, being
formalized less than 100 years ago.

If ever there was a proponent of cargo cult science, it is John Larkin.

You define yourself by the things you declare to be impossible. The
history of science is punctuated by amazing, shocking discoveries,
which often have to wait for acceptance until the existing scientific
establishment dies off.


The bionic
eye project is active and will succeed within a decade. Will you then
move the goalposts again, to defend your magical thinking?

Tapping into the optic nerve would demonstrate serious understanding
of how the nervous system works. Zapping spots on the retina is
relatively easy, really just a mechanical problem. A real bionic eye
would *replace* the eye, not just poke it.

You keep moving the goalposts so as to invoke magyck quantum weirdness
for everything that *you* don't understand without bothering to look at
what is already being done in the field.

The retina is arguably part of the brain - it does a ton of
preprocessing - so the optic nerve is internal to the brain. I don't
think anybody understands the encoding, or how images are processed on
the other end. But how does a finger transmit its various sensations
up into the brain?

As a series of nerve impulses - faking them is a serious research
project for providing feedback from prosthetic limbs. BBC take on it:

http://www.bbc.co.uk/news/health-17183888

That's admirable, but it is experimental tinkering with the mechanism.


We certainly can't categorically exclude quantum mechanical effects in brain
activity at this point, but there is no need to include it, either. Such a
suggestion adds a certain fanciful noise to the discussion without really
helping to understand or predict anything.

Since nobody knows how nerves or brains work, why exclude any
possibility? And why would evolution discard any useful phenomena?

Quantum effects are typically limited to relatively small scales at the
molecular level. Cells do use every quantum trick available to them to
convert light or food into energy available for electron transfer. But
at the higher level of the cellular scale differential equations suffice
to describe the system.

You have a differential equation that explains visual memory? Show us!


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
