Is it AI or not...

On 8/10/2023 6:50 PM, rbowman wrote:
On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


Personally, I'm sick of this AI crap which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us common
dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian's
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat's
weight and important litterbox habits to help owners be proactive about
their pet's health."

How long will we have to wait for the human size version?
 
On Thu, 10 Aug 2023 21:08:29 GMT, Scott Lurndal wrote:

The term "AI" has been misused by the media and most non-computer
scientists. The current crop of "AI" tools (e.g. ChatGPT) are not
artificial intelligence, but rather simple statistical algorithms based
on a huge volume of pre-processed data.

Not quite...

https://blog.dataiku.com/large-language-model-chatgpt

I played around with neural networks in the '80s. It was going to be the
Next Big Thing. The approach was an attempt to quantify the biological
neuron model and the relationship of axons and dendrites.

https://en.wikipedia.org/wiki/Biological_neuron_model

There was one major problem: the computing power wasn't there. Fast
forward 40 years to the availability of GPUs. Google calls their
proprietary units TPUs, or tensor processing units, which is more
accurate. That's the linear algebra tensor, not the physics tensor. While
they are certainly related, the terminology changes a bit between
disciplines.

These aren\'t quite the GPUs in your gaming PC:

https://beincrypto.com/chatgpt-spurs-nvidia-deep-learning-gpu-demand-post-crypto-mining-decline/

For training a GPT you need a lot of them -- and a lot of power. They make
the crypto miners look good.

The dirty little secret is that after you've trained your model with the
training dataset, validated it with the validation data, and tweaked the
parameters for minimal error, you don't really know what's going on under
the hood.

https://towardsdatascience.com/text-generation-with-markov-chains-an-introduction-to-using-markovify-742e6680dc33

Markov chains are relatively simple.
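Relatively simple indeed: a word-level Markov chain can be sketched in a few lines of stdlib Python. This is an illustrative toy (the corpus and order-1 chain are made up here, not taken from markovify), but it shows the whole idea: record which words follow which, then walk the table at random.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=42):
    """Walk the chain from a start word, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:          # dead end: this word never had a successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every generated word pair was seen in the corpus, which is exactly why the output is locally plausible and globally meaningless.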
 
On 11 Aug 2023 01:50:48 GMT, rbowman <bowman@montana.com> wrote:

On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


Personally, I'm sick of this AI crap which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us common
dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian's
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat's
weight and important litterbox habits to help owners be proactive about
their pet's health."

And why should I be worried about AI for litterboxes?

Let me know when it starts breaking TrueCrypt or PGP encryption and
devastating our security more than Giggle.com and Redmond are doing.

Until then - shut the *F* UP about *realistic* AI used for something
outside of bagel baking and litterboxes.
 
On Thu, 10 Aug 2023 19:02:40 -0700, Bob F <bobnospam@gmail.com> wrote:

On 8/10/2023 6:50 PM, rbowman wrote:
On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


Personally, I'm sick of this AI crap which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us common
dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian's
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat's
weight and important litterbox habits to help owners be proactive about
their pet's health."


How long will we have to wait for the human size version?

Exactly!

+5
 
On 11 Aug 2023 01:50:48 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


> Already there:

Like what? Your self-admiring big mouth? Yep, it's ALWAYS there in these
ngs, admiring itself. LOL

--
And yet another idiotic "cool" line, this time about the UK, from the
resident bigmouthed all-American superhero:
"You could dump the entire 93,628 square miles in eastern Montana and only
the prairie dogs would notice."
MID: <ka2vrlF6c5uU1@mid.individual.net>
 
On 11 Aug 2023 02:33:51 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


Not quite...

https://blog.dataiku.com/large-language-model-chatgpt

I played around with neural networks in the '80s. It was going to be the
Next Big Thing.

Oh, no! The self-admiring bigmouth is at it again!

<FLUSH rest of the usual senile crap unread again>

--
More of the resident senile gossip's absolutely idiotic endless blather
about herself:
"My family and I traveled cross country in '52, going out on the northern
route and returning mostly on Rt 66. We also traveled quite a bit as the
interstates were being built. It might have been slower but it was a lot
more interesting. Even now I prefer what William Least Heat-Moon called
the blue highways but it's difficult. Around here there are remnants of
the Mullan Road as frontage roads but I-90 was laid over most of it so
there is no continuous route. So far 93 hasn't been destroyed."
MID: <kae9ivF7suU1@mid.individual.net>
 
micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

A woman I met at a family event recently asked me what I thought of AI.
I started talking about Leibniz and his speculations, Charles Babbage's
Difference Engine, Isaac Asimov's robot books. Well, she let me have
the limelight; thank you.
Later I found out that what she had in mind was ChatGPT and OpenAI. It's
amazed millions, created a rush of books (including ChatGPT for
Dummies), and fuelled a whole new debate. "AI" has replaced
"algorithms", which replaced "apps", which replaced "programs".

Ed
 
On 8/11/2023 4:49 AM, Ed Cryer wrote:
micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

A woman I met at a family event recently asked me what I thought of AI. I started talking about Leibniz and his speculations, Charles Babbage's Difference Engine, Isaac Asimov's robot books. Well, she let me have the limelight; thank you.
Later I found out that what she had in mind was ChatGPT and OpenAI. It's amazed millions, created a rush of books (including ChatGPT for Dummies), and fuelled a whole new debate. "AI" has replaced "algorithms", which replaced "apps", which replaced "programs".

Ed

Every technology needs to be classified.

ChatGPT is about as useful as OCR. OCR is about 99% accurate.
You've just run 200 pages through the scanner. Now what...

Voluminous output, that must be scrupulously checked.

An \"advisor\", not a \"boss\".

A robust source of beer bong pictures.

Paul
 
micky wrote:
No one in popular news talked about AI 6 months ago and all of sudden
it\'s everywhere.

The cloud has been the trend for some years and has no more juice left; they had to take on something else.
 
On Fri, 11 Aug 2023 04:20:17 -0700 (PDT), Jeroni Paul wrote:
micky wrote:
No one in popular news talked about AI 6 months ago and all of sudden
it\'s everywhere.

The cloud has been the trend for some years and has no more juice left;
they had to take on something else.

I think you'll notice that anything new and shiny,
even if it only employs an 8-bit microprocessor,
will be granted the cloak of artificial intelligence.
 
MPffffff..... I have sat on my fingers long enough.

Apologies in advance to all blondes everywhere, including my wife!

A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"

That is about as seriously as I take AI as it affects my daily life. Sure, reading X-rays, doing certain types of surgery, and many other repetitive but exacting tasks are well suited to a process that does not get tired, does not get blurry vision, does not ignore things, and can even learn, as it repeats those tasks, ways to do them with fewer steps - but that is hardly "intelligence" - there is no self-awareness, just increasingly more complex responses to a defined problem.

But a robo-caller, as much as it/he/she may try to be 'real', is so obviously artificial as to make me very sad for those who might be fooled. And robotic phone-trees - such as many companies use to avoid having humans on the payroll - are the furthest possible thing from intelligent.

Peter Wieck
Melrose Park, PA
 
On Friday, August 11, 2023 at 9:23:59 AM UTC-4, Peter W. wrote:

A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"

Snort.. good one Peter.

If there was true AI there would have been a virtual rimshot accompanying that post.
 
On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONOmisc07@fmguy.com>
wrote:

No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

Designing machines for looking at images and finding specific items
like unexpected growths is quite simple. That's not AI. Simple 3-layer
neural networks can learn to do that already. When machines do things
they are not supposed to do, that may be by using a form of AI. When my
washing machine starts making meals for me, that could be described as
AI!
 
tracy@invalid.com writes:

On 11 Aug 2023 01:50:48 GMT, rbowman <bowman@montana.com> wrote:

On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


Personally, I'm sick of this AI crap which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us common
dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian's
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat's
weight and important litterbox habits to help owners be proactive about
their pet's health."

And why should I be worried about AI for litterboxes?

Let me know when it starts breaking TrueCrypt or PGP encryption and
devastating our security more than Giggle.com and Redmond are doing.

Until then - shut the *F* UP about *realistic* AI used for something
outside of bagel baking and litterboxes.

Unless there's an as-yet-unknown flaw in the design of RSA or other
public-key algorithms, there's no way in which an AI of any sort could
"break" encryption -- artificial intelligence cannot beat mathematics. I
guess some hypothetical super-AI could design a quantum computer which
could break non-quantum-resistant algorithms, but that's a pretty far
cry from the gussied-up chatbots most people are talking about.
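To put a number on "cannot beat mathematics": textbook RSA with toy primes is trivially breakable by factoring, but the same arithmetic with a 2048-bit modulus is not, and no amount of chatbot pattern-matching changes that. A stdlib sketch with deliberately tiny, illustrative primes (the classic p=61, q=53 example, nothing like real key sizes):

```python
# Textbook RSA with toy primes -- illustrative only.
p, q = 61, 53                 # tiny primes; real RSA uses ~1024-bit primes each
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient (3120)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)       # encrypt: msg^e mod n
plain = pow(cipher, d, n)     # decrypt: cipher^d mod n
print(plain)                  # recovers 42

# Breaking this means factoring n: trivial at 12 bits, infeasible at 2048 bits,
# where the modulus has candidate prime factors on the order of 2**1024.
```

The security argument is purely about the cost of factoring n; an LLM has no shortcut around that, which is the poster's point.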

Anyway, I'm pretty sure you've contributed at least 50% of the traffic on
this topic just by whining that people are talking about something you
don't care about. Learn to use your client's scoring system, killfile
the conversation, then take your own advice and shut the fuck up.

john
 
On Thu, 10 Aug 2023 14:43:42 -0400, micky wrote:
They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

An algorithm would be programmed by some (presumably) human
programmer as, essentially, a list of rules to follow.

AI (old name: neural networks) is trained by being given a huge stack
of photographs that radiologists have previously examined and
pronounced "yes" or "no". The neural net makes guesses about whether
each picture is a "yes" or a "no", and somehow learns from its
mistakes, so that over time its accuracy becomes better and better.

While the training is simple in principle -- pathways in the neural
network that led to a correct result are given a boost and those that
led to an incorrect response are depressed -- in practice the result
is so complex that nobody can determine what caused the NN to reach a
particular decision in a particular case.

Or, at least, that's how I understand it. A NN product was offered by
the company I used to work for, and the programmer explained it to me
that way. Nothing I've seen has told me it's different in principle
now, though I believe much bigger computers are being used, and with
most of the Internet as a training set. But does anybody know exactly
why an AI would answer questions like "Who holds the world speed
record for walking across the English Channel" with a specific name,
date, and time? I don't think so.

--
Stan Brown, Tehachapi, California, USA https://BrownMath.com/
Shikata ga nai...
 
On Thu, 10 Aug 2023 21:44:11 -0500, tracy@invalid.com wrote:
Let me know when it starts breaking TrueCrypt

If you really meant TrueCrypt, and you use it, you might want to
think about switching to the fork called VeraCrypt.

https://en.wikipedia.org/wiki/TrueCrypt#End_of_life_announcement

Nobody knows for sure what happened, but the stated reason -- that
TrueCrypt isn't needed because BitLocker exists -- was obviously
absurd. (BitLocker doesn't work on Windows Home.)

--
Stan Brown, Tehachapi, California, USA https://BrownMath.com/
Shikata ga nai...
 
On Thu, 10 Aug 2023 17:42:08 -0500, tracy@invalid.com wrote:
From Wikipedia:
"The Turing test, originally called the imitation game by Alan Turing
in 1950, is a test of a machine's ability to exhibit intelligent
behaviour equivalent to, or indistinguishable from, that of a human."

Of what good is AI if the product of it is dumber than a human?

Humans as a rule are pretty dumb. (Dave Barry famously remarked that
the three strongest forces in a human are stupidity, selfishness, and
horniness. I've seen nothing to prove him wrong.)

But as for the Turing test, I think that ship has sailed. Case in
point: A few months ago, at character.ai, I was chatting with
Napoleon. Not really, of course, but the responses were consistent
with what I knew of his actions and speeches from my reading of
multiple biographies of Talleyrand, his foreign minister and nemesis.
Seems to me that it passed the Turing test.

What laypeople often mean when they talk about artificial
intelligence is not the Turing test, or not _only_ the Turing test,
but rather something intangible like a sense of self. They don't want
to know if a machine can imitate a human; they want to know if it's
"alive", whatever that means.

Add to the mix:

* Daniel Dennett, in /Consciousness Explained/, concluded that our
sense of self is essentially a bunch of parallel processes that look
like a serial process. (Or maybe it was the other way around.
Fascinating book, but it's a long time since I read it. I do remember
that he wasn't just speculating in a vacuum: he used evidence in
standard medical journals with the results of actual experiments done
with human perception and cognition.)

* It may all be moot. If the speculation is correct that we're living
in a simulation, humans are all already AI.

--
Stan Brown, Tehachapi, California, USA https://BrownMath.com/
Shikata ga nai...
 
On Fri, 11 Aug 2023 10:09:23 -0700, Stan Brown wrote:

Or, at least, that's how I understand it. A NN product was offered by
the company I used to work for, and the programmer explained it to me
that way. Nothing I've seen has told me it's different in principle now,
though I believe much bigger computers are being used, and with most of
the Internet as a training set.

GPUs were the breakthrough. The "Graphics" part is now a misnomer, although
the original intent was to speed up the calculations involved in CG. That made
them ideal for crypto mining, to the point where a GPU shortage was caused
by the miners buying the high-end boards. They are also good at the vector
manipulations needed for training a neural net.

Using something like PyTorch you can experiment on a PC. Even then it's
much faster if you have a GPU that supports CUDA. CUDA is Nvidia's
proprietary platform, though, so only their chips support it.

When you get to something like ChatGPT you're talking many very expensive
GPUs, a lot of power, and millions of dollars. ChatGPT isn't aware of recent
events since it was frozen with the data available when the training
occurred, and retraining is very expensive.

Once the model is trained, inference, or using the model, is much less
intensive. That's the 'Pre-trained' in GPT.

For me the interesting part is pruning a model developed on a huge
system to run locally with limited resources. Cell phones are getting to
be powerful enough to do so. Many people didn't realize that for something
like speech recognition the audio was sent off to Google, processed, and
the text returned. The light dawned when they realized Alexa has very big
ears. The desired outcome is to have more functionality at a local level,
including on very small processors like the Arduino family that don't require
huge amounts of power to run.

It's a fascinating development, but like all disruptors the potential for
bad is just as high as for good.
 
On Fri, 11 Aug 2023 05:42:10 -0400, Paul wrote:

ChatGPT is about as useful as OCR. OCR is about 99% accurate.
You've just run 200 pages through the scanner. Now what...

'Stochastic' gets used a lot, so 100% accuracy isn't even a realistic goal.
'Good enough' is the criterion. If a simple model can tell a cat from a dog
97% of the time, that's pretty good. Humans aren't 100% either, so you
could say artificial intelligence is a lot like human intelligence.
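It's easy to underestimate what those last few percent cost in practice. A back-of-the-envelope calculation for the 200-page OCR example (the characters-per-page figure is an illustrative assumption, not from the thread):

```python
# Expected error counts at "pretty good" accuracy levels -- rough numbers.
chars_per_page = 1800          # assumed average for a typed page
pages = 200                    # the scanned stack from the OCR example

for accuracy in (0.99, 0.97):
    errors = pages * chars_per_page * (1 - accuracy)
    print(f"{accuracy:.0%} accurate: ~{errors:,.0f} character errors to find by hand")
```

At 99% that is still thousands of errors buried in clean-looking text, which is why "voluminous output that must be scrupulously checked" is the right verdict for both OCR and ChatGPT.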
 
Paul wrote:
On 8/11/2023 4:49 AM, Ed Cryer wrote:
micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

A woman I met at a family event recently asked me what I thought of AI. I started talking about Leibniz and his speculations, Charles Babbage's Difference Engine, Isaac Asimov's robot books. Well, she let me have the limelight; thank you.
Later I found out that what she had in mind was ChatGPT and OpenAI. It's amazed millions, created a rush of books (including ChatGPT for Dummies), and fuelled a whole new debate. "AI" has replaced "algorithms", which replaced "apps", which replaced "programs".

Ed

Every technology needs to be classified.

ChatGPT is about as useful as OCR. OCR is about 99% accurate.
You've just run 200 pages through the scanner. Now what...

Voluminous output, that must be scrupulously checked.

An \"advisor\", not a \"boss\".

A robust source of beer bong pictures.

Paul

ChatGPT stings even an old computer programmer. It beats the Turing Test
by miles.
But, as you've grasped, it's not got nearer the truth; it's got nearer
the norms of human interaction, in which fake news and camouflage and
outright ignorance play such a part.
But if we're ever going to set up Blade Runners to police humanity,
they'll look for those qualities rather than truthfulness.
This is what's so scary about it. It mimics us so damn closely.

Ed



 
