Is it AI or not...

On 11 Aug 2023 17:56:42 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


'Stochastic' gets used a lot so 100% accuracy isn't even a realistic goal.
'Good enough' is the criterion. If a simple model can tell a cat from a dog
97% of the time, that's pretty good. Humans aren't 100% either so you
could say artificial intelligence is a lot like human intelligence.

And the bullshit just keeps getting squeezed out of your abnormally big
mouth! <tsk>

--
And yet another idiotic "cool" line, this time about the UK, from the
resident bigmouthed all-American superhero:
"You could dump the entire 93,628 square miles in eastern Montana and only
the prairie dogs would notice."
MID: <ka2vrlF6c5uU1@mid.individual.net>
 
On 11 Aug 2023 17:51:26 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


It's a fascinating development but, like all disruptors, the potential for
bad is just as high as for good.

It's even more fascinating what amount of shit you are able to keep spouting
in these ngs, you grandiloquent self-admiring, useless senile bigmouth! <BG>

--
More of the resident senile gossip's absolutely idiotic endless blather
about herself:
"My family and I traveled cross country in '52, going out on the northern
route and returning mostly on Rt 66. We also traveled quite a bit as the
interstates were being built. It might have been slower but it was a lot
more interesting. Even now I prefer what William Least Heat-Moon called
the blue highways but it's difficult. Around here there are remnants of
the Mullan Road as frontage roads but I-90 was laid over most of it so
there is no continuous route. So far 93 hasn't been destroyed."
MID: <kae9ivF7suU1@mid.individual.net>
 
On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONOmisc07@fmguy.com> wrote:

No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

It's not generalized AI as per the Turing test.

That, and language changes. If marketers want to start calling Machine
Learning (ML) AI instead, well, we'll need a new term when we have one
that passes the Turing test. Probably "Overlord."

https://en.wikipedia.org/wiki/Machine_learning

In other words, nobody sane gives a rat's ass, excepting marketers
looking to copy a highly successful campaign. In the end, it's AI because
everyone is now going to call it that.

It is not generalized AI, which is what AI used to mean, though.

My main concern is not the semantics but the corporate need to release
unsafe stuff in a rush to be first, and the sort of people who buy that
unsafe stuff. There are a lot of unsafe people who like to play with
guns. Nothing against guns, just irresponsible owners.

...and I really don't think welding ChatGPT onto Bing is going to be the
success MS thinks it is, finally ushering in an age of Bing and
Edge supremacy over Google. Just no. I don't think people want it.

--
Zag

No one ever said on their deathbed, 'Gee, I wish I had
spent more time alone with my computer.' ~Dan(i) Bunten
 
On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dkane@mail.com> wrote:

When it devolves into the lives of us
common dummies, I'll worry about it then.

By that time, it may be too late.

It's already too late. Pandora has opened the box. There's no putting it
back in.

--
Zag

No one ever said on their deathbed, 'Gee, I wish I had
spent more time alone with my computer.' ~Dan(i) Bunten
 
On 8/11/2023 3:41 PM, Zaghadka wrote:
On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dkane@mail.com> wrote:


When it devolves into the lives of us
common dummies, I'll worry about it then.

By that time, it may be too late.

It's already too late. Pandora has opened the box. There's no putting it
back in.

Be afraid. Be very afraid.
 
On Fri, 11 Aug 2023 18:27:23 -0700, Dennis Kane <dkane@mail.com> wrote:

On 8/11/2023 3:41 PM, Zaghadka wrote:
On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dkane@mail.com> wrote:


When it devolves into the lives of us
common dummies, I'll worry about it then.

By that time, it may be too late.

It's already too late. Pandora has opened the box. There's no putting it
back in.

Be afraid. Be very afraid.

I wouldn't go that far, but standard human stupidity is gonna make this
veeery interesting.

--
Zag

No one ever said on their deathbed, 'Gee, I wish I had
spent more time alone with my computer.' ~Dan(i) Bunten
 
On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

Being a person with a poli sci major and Army ROTC background in college (along with union electrical construction school), my understanding of AI, however, came from talk radio (both politically leftist and rightist).

I heard rightist Hugh Hewitt say something about teaching law school during the day, and how someone put a reading-and-writing version of AI in front of a California state bar exam lawyer certification test and it did much better than several of its human counterparts. I guess that's around the time when AI became all the talk.
 
Bob F <bobnospam@gmail.com> wrote:
On 8/10/2023 6:50 PM, rbowman wrote:
On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


Personally, I'm sick of this AI crap which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us common
dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

\"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian\'s
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat\'s
weight and important litterbox habits to help owners be proactive about
their pet\'s health.\"


How long will we have to wait for the human size version?

Have to get people to poop in a litterbox first ;)
 
micky <NONONOmisc07@fmguy.com> wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

The Turing test was passed quite a while ago. Plus the test is flawed.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

AI has never been properly defined and so people are using it to describe
all sorts of things.
 
Paul <nospam@needed.invalid> wrote:
On 8/10/2023 2:43 PM, micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?


A radiologist assistant is not a Large Language Model.

I would expect that, to some extent, image analysis would be a
"module" on an LLM, and not a part of the main bit.

At bare minimum, it's a neural network, trained on images
one at a time, which slosh around and train the neurons.

For example, something like YOLOv5 (You Only Look Once) can
be trained to identify animals in photos. It draws a box around
the presumed animal and names it (or whatever). That uses a lot
less hardware than a Large Language Model, and less storage.
The article had a picture with a bear in it, and indeed, the
bear had a square drawn around it.

But as for whether the "quality" is there, that is another
issue entirely. In my opinion, no radiologist would ever trust
something as sketchy as YOLO. Radiologists are very particular
about their jobs, as they hate getting sued.

It's a sad reflection of priorities where the primary concern is about
being sued rather than making sure patients get the best treatment.

And I can imagine
the look on the judge's face when you tell him "yer honor, I didn't
even bother to look at that film, the computer told me there was
nothing there". Some lawyers recently learned what happens
when you "phone it in".

Some lawyers getting caught being dumb is not the same as using a
clinically approved tool. A clinician isn't ever going to diagnose a
patient via ChatGPT. If they did, they'd deserve to get the book thrown
at them.

Professionals are still on the hook for the
whole bolt of goods. The computer isn't going to get sued for
"being stupid", because it is stupid.

It would take a *lot* of films to train a radiologist assistant.
Who would have a collection large enough for the job?

Er, hospitals.

It would be
a violation of privacy law for a bunch of hospitals to throw all
their films into a big vat for NN training.

No, it isn't.

It's not like crawling
the web and getting access to content that way.

Which is likely illegal. Hence all the suits against Google et al.

While a lot of individuals and their jobs can be replaced,
the radiologist will be "the last to go".

The risk to jobs from AI is massively overblown. People aren't going to
lose jobs to AI; they will lose jobs to other people who use AI.

Radiologists' jobs have the potential to improve with AI.
https://www.theguardian.com/society/2023/aug/02/ai-use-breast-cancer-screening-study-preliminary-results

There's a way to go yet before this gets into routine practice but it will
get there in one form or another. The pressures on health services are only
growing and efficiencies need to improve to keep up. Money isn't enough.
 
rbowman <bowman@montana.com> wrote:
On Fri, 11 Aug 2023 05:42:10 -0400, Paul wrote:

ChatGPT is about as useful as OCR. OCR is about 99% accurate.
You've just run 200 pages through the scanner. Now what...

'Stochastic' gets used a lot so 100% accuracy isn't even a realistic goal.
'Good enough' is the criterion. If a simple model can tell a cat from a dog
97% of the time, that's pretty good. Humans aren't 100% either so you
could say artificial intelligence is a lot like human intelligence.

Exactly. Perfection is not the target. At a minimum, an AI that performs as
well as a human but never gets tired, grumpy or distracted, and is twice as
fast, would be enough of an improvement.
 
On 8/13/2023 5:17 PM, Chris wrote:
micky <NONONOmisc07@fmguy.com> wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

The Turing test was passed quite a while ago. Plus the test is flawed.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

AI has never been properly defined and so people are using it to describe
all sorts of things.

https://en.wikipedia.org/wiki/Artificial_intelligence

\"AI founder John McCarthy agreed, writing that \"Artificial intelligence is not,
by definition, simulation of human intelligence\".

This is the interesting stuff.

https://arstechnica.com/information-technology/2023/03/embodied-ai-googles-palm-e-allows-robot-control-with-natural-commands/

The problem with LLMs that write software is that you cannot easily
evaluate whether the output meets the specification.

It's possible the robot ideas will fall prey to the same issues. If you
define a complex enough task, maybe the machine will fail to plan what
it has to do properly. And trying the tiny tests with a bag of crisps
isn't really all that much of a challenge, from a complexity perspective.

\"Take this 40 foot container of ASML Lithography machine parts,
assemble and calibrate the machine.\"

The motion on the Google robot isn't all that smooth. Boston Dynamics
has some software they use for "choreography" that smooths out some of that.
Maybe telling the Google robot "how to be smooth" would automatically
result in it choreographing what it is doing, instead of "bump-bump-bumping"
the drawer shut. But the bumping behavior likely results from the computer
re-evaluating the problem every 0.2 seconds and working out a new set
of actions (which includes another tiny "bump"). So perhaps that's
an artifact of the method.

Paul
 
Paul <nospam@needed.invalid> wrote:
On 8/13/2023 5:17 PM, Chris wrote:
micky <NONONOmisc07@fmguy.com> wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

The Turing test was passed quite a while ago. Plus the test is flawed.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

AI has never been properly defined and so people are using it to describe
all sorts of things.

https://en.wikipedia.org/wiki/Artificial_intelligence

\"AI founder John McCarthy agreed, writing that \"Artificial intelligence is not,
by definition, simulation of human intelligence\".

This is the interesting stuff.

https://arstechnica.com/information-technology/2023/03/embodied-ai-googles-palm-e-allows-robot-control-with-natural-commands/

The problem with LLMs that write software is that you cannot easily
evaluate whether the output meets the specification.

Sure you can. Write tests, as is normal practice.
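The reply's point, that you evaluate generated code by writing the tests yourself, can be sketched as follows. Here `llm_sort` is a hypothetical stand-in for whatever body a code-writing model produced; the spec suite is the human-authored part:

```python
# Treat a hand-written test suite as the executable specification,
# then run any machine-generated candidate against it.

def llm_sort(xs):
    # Stand-in for code an LLM might have produced; only the spec
    # below decides whether it is acceptable.
    return sorted(xs)

def spec_tests(candidate):
    """Human-authored spec: what any acceptable sort must do."""
    assert candidate([]) == []
    assert candidate([3, 1, 2]) == [1, 2, 3]
    assert candidate([5, 5, 1]) == [1, 5, 5]          # keeps duplicates
    original = [2, 1]
    candidate(original)
    assert original == [2, 1]                          # must not mutate input
    return True

print("spec passed:", spec_tests(llm_sort))
```

The point of the design is that the spec suite never changes when the candidate is regenerated, so the model's output is judged against a fixed, human-owned definition of correct.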
 
On Sunday, August 13, 2023 at 5:39:38 PM UTC-4, Chris wrote:
Paul <nos...@needed.invalid> wrote:
On 8/10/2023 2:43 PM, micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?


A radiologist assistant is not a Large Language Model.

I would expect that, to some extent, image analysis would be a
"module" on an LLM, and not a part of the main bit.

At bare minimum, it's a neural network, trained on images
one at a time, which slosh around and train the neurons.

For example, something like YOLOv5 (You Only Look Once) can
be trained to identify animals in photos. It draws a box around
the presumed animal and names it (or whatever). That uses a lot
less hardware than a Large Language Model, and less storage.
The article had a picture with a bear in it, and indeed, the
bear had a square drawn around it.

But as for whether the "quality" is there, that is another
issue entirely. In my opinion, no radiologist would ever trust
something as sketchy as YOLO. Radiologists are very particular
about their jobs, as they hate getting sued.

It's a sad reflection of priorities where the primary concern is about
being sued rather than making sure patients get the best treatment.

In the UK and Europe, the plaintiff must repay the expenses of those involved if their side loses.
 
I used \"AI\" since 1978. But every few years, what used to be AI becomes
commonplace and stops being alled AI. Character and voice recognition, file
completion, symbolic manipulation (Macsyma, Wolfram), ELIZA psychanalyser,
were once called AI. Then again, I\'m a 62yo (familially) third generation
computer user as well as a third generation engineer. If you used Marvin
Minsky\'s fifty year old psychanalysis program ELIZA (available in emacs as
doctor; I use it for night time OCD panic attacks), ChatGPT looks awfully
boring. I\'ve used fractals (EXCEL:LOGEST) for fraud detection for four
decades.
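ELIZA's whole trick, which makes the comparison above so pointed, was keyword pattern matching with canned templates that reflect the user's own words back. A minimal sketch in that spirit follows; the patterns here are invented for illustration, not Weizenbaum's original script:

```python
import re

# A few ELIZA-style rules: a regex keyword pattern paired with a
# response template that echoes part of the user's sentence back.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(line):
    """Return the first matching rule's reflected response."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am anxious at night"))  # Why do you say you are anxious at night?
print(respond("The weather is fine"))    # Please tell me more.
```

No model, no training data: just a rule table, which is why a transcript that once passed for therapy looks so thin next to even a small LLM.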


Gregory Nazianzen, the Great, tells us all creativity is divine (28:6; 1
Cor 3:5-9) and denounced anti-science at Basil's funeral (42:11) as ignorant,
lazy and stupid. (My namesake, Basil of Caesarea, was a physician who
invented the concept of a hospital.) This may be found on p151 of the 1977
OEDB Patristics textbook used in high schools in Greece (Evangelos Theodorou,
Anthology of Holy Fathers.) More completely, from Florovsky v7 p109: "We
derive something useful for our orthodoxy even from the worldly
science... Everyone who has a mind will recognize that learning is our highest
good... also worldly learning, which many Christians incorrectly abhor... those
who hold such an opinion are stupid and ignorant. They want everyone to be
just like themselves, so that the general failing will hide their own"


--
Vasos Panagiotopoulos panix.com/~vjp2/vasos.htm
---{Nothing herein constitutes advice. Everything fully disclaimed.}---
 
vjp2.at@at.BioStrategist.dot.dot.com wrote:
I used \"AI\" since 1978. But every few years, what used to be AI becomes
commonplace and stops being alled AI. Character and voice recognition, file
completion, symbolic manipulation (Macsyma, Wolfram), ELIZA psychanalyser,
were once called AI. Then again, I\'m a 62yo (familially) third generation
computer user as well as a third generation engineer. If you used Marvin
Minsky\'s fifty year old psychanalysis program ELIZA (available in emacs as
doctor; I use it for night time OCD panic attacks), ChatGPT looks awfully
boring. I\'ve used fractals (EXCEL:LOGEST) for fraud detection for four
decades.


Gregory Nazianzen, the Great, tells us all creativity is divine (28:6; 1
cor 3:5-9) and denounced anti-science at Basil\'s funeral (42:11) as ignorant,
lazy and stupid. (My namesake, Basil of Caereria, was a physician, who
invented the concept of a hospital.) This may be found on p151 of the 1977
OEDB Patrsitics textbook used in high schools in Greece (Evagelos Theodorou,
Anthology of Holy Fathers.) More completely from Florovsky v7 p109 \"We
derive something useful for our orthodoxy even from the worldly
science.. Everyone who has a mind will recognize that learning is our highest
good.. also worldly learning, which many Christians incorrectly abhor.. those
who hold such an opinion are stupid and ignorant. They want everyone to be
just like themselves, so that the general failing will hide their own\"

Google Books Ngram Viewer traces "AI" way back to beyond 1800, although
it seems to have increased rapidly from the mid 1950s.
https://tinyurl.com/22hru257
I can't help but wonder if many of the earlier citations are initials for
things like "Andalusian Insurance" or "American Independence" or
"African Iratedness" etc.

Ed
 
In alt.home.repair, on Tue, 15 Aug 2023 18:36:54 +0100, Ed Cryer
<ed@somewhere.in.the.uk> wrote:

vjp2.at@at.BioStrategist.dot.dot.com wrote:
I used \"AI\" since 1978. But every few years, what used to be AI becomes
commonplace and stops being alled AI. Character and voice recognition, file
completion, symbolic manipulation (Macsyma, Wolfram), ELIZA psychanalyser,
were once called AI. Then again, I\'m a 62yo (familially) third generation
computer user as well as a third generation engineer. If you used Marvin
Minsky\'s fifty year old psychanalysis program ELIZA (available in emacs as
doctor; I use it for night time OCD panic attacks), ChatGPT looks awfully
boring. I\'ve used fractals (EXCEL:LOGEST) for fraud detection for four
decades.


Gregory Nazianzen, the Great, tells us all creativity is divine (28:6; 1
cor 3:5-9) and denounced anti-science at Basil\'s funeral (42:11) as ignorant,
lazy and stupid. (My namesake, Basil of Caereria, was a physician, who
invented the concept of a hospital.) This may be found on p151 of the 1977
OEDB Patrsitics textbook used in high schools in Greece (Evagelos Theodorou,
Anthology of Holy Fathers.) More completely from Florovsky v7 p109 \"We
derive something useful for our orthodoxy even from the worldly
science.. Everyone who has a mind will recognize that learning is our highest
good.. also worldly learning, which many Christians incorrectly abhor.. those
who hold such an opinion are stupid and ignorant. They want everyone to be
just like themselves, so that the general failing will hide their own\"



Google Books Ngram Viewer traces "AI" way back to beyond 1800, although
it seems to have increased rapidly from the mid 1950s.
https://tinyurl.com/22hru257
I can't help but wonder if many of the earlier citations are initials for
things like "Andalusian Insurance" or "American Independence" or
"African Iratedness" etc.

Ed

Maybe so. Very interesting comments from all of you.
 
On 8/15/2023 8:01 PM, micky wrote:

Maybe so. Very interesting comments from all of you.

When you've got a moment (after you've fixed your clock),
could you zip over and look at this question?

There's a guy who owns more Internet Radios than he knows
what to do with, who is getting "disconnected" randomly.

<ubinjc$3b5dd$1@dont-email.me>

http://al.howardknight.net/?STYPE=msgid&MSGI=%3Cubinjc%243b5dd%241%40dont-email.me%3E

In a previous message, he identifies the brands of the units:
"sanstrom" brand and "majority" brand. The UK seems to have had
its share of these things over the years.

<uasvso$3b2gu$1@dont-email.me>

http://al.howardknight.net/?STYPE=msgid&MSGI=%3Cuasvso%243b2gu%241%40dont-email.me%3E

I have no idea how the directory feature works on those to
create the list of stations.

And the other thing I don't know is whether IR has made any
recent changes to the transport protocol, which might
break a radio.

Paul
 
On Thursday, August 10, 2023 at 10:33:57 PM UTC-4, rbowman wrote:
On Thu, 10 Aug 2023 21:08:29 GMT, Scott Lurndal wrote:

The term \"AI\" has been misused by media and most non-computer
scientists. The current crop \"AI\" tools (e.g. chatGPT) are not
artificial intelligence, but rather simple statistical algorithms based
on a huge volume of pre-processed data.
Not quite...

https://blog.dataiku.com/large-language-model-chatgpt

I played around with neural networks in the '80s. It was going to be the
Next Big Thing. The approach was an attempt to quantify the biological
neuron model and the relationship of axons and dendrites.

https://en.wikipedia.org/wiki/Biological_neuron_model
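The '80s-era quantification mentioned above reduces, at bare minimum, to a weighted sum, a threshold, and an error-driven weight update. A minimal sketch of one such neuron learning logical AND with the classic perceptron rule (the toy task, learning rate, and epoch count are invented for illustration):

```python
# A single artificial neuron of the kind '80s-era networks stacked:
# weighted sum of inputs, step activation, perceptron update rule.

def step(x):
    """Threshold (step) activation: fires 1 if the sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """One pass per sample per epoch, nudging weights toward the target."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND, which is linearly separable.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
for (x1, x2), target in AND_DATA:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b))
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; the computing-power wall came from stacking many such units, not from any one of them.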

There was one major problem: the computing power wasn't there. Fast
forward 40 years to the availability of GPUs. Google calls their
proprietary units TPUs, or tensor processing units, which is more
accurate. That's the linear algebra tensor, not the physics tensor. While
they are certainly related, the terminology changes a bit between
disciplines.

These aren't quite the GPUs in your gaming PC:

https://beincrypto.com/chatgpt-spurs-nvidia-deep-learning-gpu-demand-post-crypto-mining-decline/

For training a GPT you need a lot of them -- and a lot of power. They make
the crypto miners look good.

The dirty little secret is after you've trained your model with the
training dataset, validated it with the validation data, and tweaked the
parameters for minimal error, you don't really know what's going on under
the hood.

https://towardsdatascience.com/text-generation-with-markov-chains-an-introduction-to-using-markovify-742e6680dc33

Markov chains are relatively simple.
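For contrast with a GPT, a word-level Markov chain of the kind the linked article builds really does fit in a few lines. This is a from-scratch sketch, not the markovify library, and the training sentence is made up:

```python
import random
from collections import defaultdict

# First-order word-level Markov chain: each word maps to the list of
# words observed to follow it; generation is a random walk over that
# table, weighted by observed frequency.

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, n_words, seed=0):
    rng = random.Random(seed)   # seeded for reproducible output
    out = [start]
    for _ in range(n_words - 1):
        followers = chain.get(out[-1])
        if not followers:       # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the", 8, seed=1))
```

Every word it emits was seen following the previous word in the corpus; there is no attention, no embedding, no training run, which is roughly the distance between "relatively simple" and a GPT.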

I developed ANN classifiers in the mid-to-late '80s for explosive recognition
in suitcases and as hardware plant observers for control loops. The size of
the training set is one of the issues, but the web has a gazillion images
from which training sets are harvested. I know of a few local self-driving
car companies that do exactly that.

Yes, verification of the ANN was and still is a big issue.
J
 
On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, X-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?
I did a fair amount of 'AI' research in the 80s and early 90s. The amount of hype was amazing and it was all about 'branding', IMHO: a new science-fiction technology made real. I bundled up for the first AI winter... I've moved to a different climate where I don't have to bundle up for the second AI winter.

The really hard aspects of AI are knowledge discovery and composition, which have made some progress but nowhere near sensational. Ask a computer program to design and build an automatic transmission, and then figure out why it doesn't work as well when a different ATF is used... We are *really* far away from that.
 
