
Is it AI or not

89 views

micky

Aug 10, 2023, 2:43:50 PM
No one in popular news talked about AI 6 months ago and all of a sudden
it's everywhere.

The most recent discussion I heard was about "using AI to read X-rays
and other medical imaging".

They have computer programs that will "look" at, examine, x-rays etc.
and find medical problems, sometimes ones that the radiologist misses.

So it's good if both look at them.

But is it AI? Seems to me it's one slightly complicated algorithm and
comes nowhere close to AI. The Turing test, for example.

And lots of things they are calling AI these days are just slightly
or moderately complicated computer programs, black boxes maybe, but not
AI.

What say you?

Peter W.

Aug 10, 2023, 3:21:39 PM
> What say you?

Sitting on my fingers....

Peter Wieck
Melrose Park, PA

Paul

Aug 10, 2023, 3:47:50 PM
A radiologist assistant is not a Large Language Model.

I would expect that, to some extent, image analysis would be a
"module" on an LLM, and not part of the main bit.

Bare minimum, it's a neural network, trained on images,
one at a time, that slosh around and train the neurons.

For example, something like YOLOv5 (You Only Look Once) can
be trained to identify animals in photos. It draws a box around
the presumed animal and names it (or whatever). That uses a lot
less hardware than a Large Language Model, and less storage.
The article had a picture with a bear in it, and indeed, the
bear had a square drawn around it.

But as for whether the "quality" is there, that is another
issue entirely. In my opinion, no radiologist would ever trust
something as sketchy as YOLO. Radiologists are very particular
about their jobs, as they hate getting sued. And I can imagine
the look on the judge's face when you tell him "yer honor, I didn't
even bother to look at that film, the computer told me there was
nothing there". Some lawyers recently learned about what happens
when you "phone it in". Professionals are still on the hook for the
whole bolt of goods. The computer isn't going to get sued for
"being stupid", because it is stupid.

It would take a *lot* of films to train a radiologist assistant.
Who would have a collection large enough for the job? It would be
a violation of privacy law for a bunch of hospitals to throw all
their films into a big vat for NN training. It's not like crawling
the web and getting access to content that way.

While a lot of individuals and their jobs can be replaced,
the radiologist will be "the last to go". Regular doctors are
quite dependent on the radiologist taking the fall for mis-diagnosis.
The doctors would be scared shirtless, if the professional that
"has my back" was as stupid as a computer. The doctors would quit.
Doctors do not read films. They say stuff like "the radiologist's
report says you have tits". And you can then take that to the bank.
They don't use their knowledge of Gray's Anatomy to figure that out.
They identify the radiologist as the source of the information.
The radiologist is their "God".

Paul

tr...@invalid.com

Aug 10, 2023, 3:55:13 PM
On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONO...@fmguy.com>
wrote:
Personally, I'm sick of this AI crap, which seems to exist only in the
minds of the tech idiots. When it devolves into the lives of us
common dummies, I'll worry about it then.

ohg...@gmail.com

Aug 10, 2023, 3:58:02 PM
I think it's a matter of definition. Unless and until an "AI" becomes self-aware, it's not AI.

Dennis Kane

Aug 10, 2023, 4:43:54 PM

> When it devolves into the lives of us
> common dummies, I'll worry about it then.

By that time, it may be too late.

Cindy Hamilton

Aug 10, 2023, 5:03:53 PM
On 2023-08-10, micky <NONONO...@fmguy.com> wrote:
> No one in popular news talked about AI 6 months ago and all of a sudden
> it's everywhere.

I promise you, people in the programming business have been talking
about it for a long while.

> The most recent discussion I heard was about "using AI to read X-rays
> and other medical imaging".
>
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look at them.
>
> But is it AI? Seems to me it's one slightly complicated algorithm and
> comes nowhere close to AI. The Turing test, for example.

An AI doesn't need to pass the Turing test to be considered an AI.

> And lots of things they are calling AI these days are just slightly
> or moderately complicated computer programs, black boxes maybe, but not
> AI.
>
> What say you?

Here. This will get you started:

https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence

--
Cindy Hamilton

Cindy Hamilton

Aug 10, 2023, 5:08:03 PM
My next mammogram might be analyzed by an AI in addition to a human being.
https://nyulangone.org/news/node/24633

--
Cindy Hamilton

Scott Lurndal

Aug 10, 2023, 5:08:34 PM
The term "AI" has been misused by the media and most non-computer scientists. The
current crop of "AI" tools (e.g. ChatGPT) are not artificial intelligence, but
rather simple statistical algorithms based on a huge volume of pre-processed
data.

https://en.wikipedia.org/wiki/Large_language_model

"As language models, they work by taking an input text and repeatedly
predicting the next token or word"

Which leads to

https://en.wikipedia.org/wiki/Intelligent_agent
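That "repeatedly predicting the next token" loop can be sketched with a toy word-bigram model (pure Python; the tiny corpus is made up, and a real LLM is incomparably larger, with learned weights rather than raw counts):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Repeatedly predict the next token, starting from "the".
word, out = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
```

The toy only echoes the statistics of its training text, which is the argument above in miniature.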

micky

Aug 10, 2023, 5:24:08 PM
In alt.home.repair, on Thu, 10 Aug 2023 21:03:48 GMT, Cindy Hamilton
<hami...@invalid.com> wrote:

>On 2023-08-10, micky <NONONO...@fmguy.com> wrote:
>> No one in popular news talked about AI 6 months ago and all of a sudden
>> it's everywhere.
>
>I promise you, people in the programming business have been talking
>about it for a long while.

That's why I said "popular", to exclude that sort of thing.
>
>> The most recent discussion I heard was about "using AI to read X-rays
>> and other medical imaging".
>>
>> They have computer programs that will "look" at, examine, x-rays etc.
>> and find medical problems, sometimes ones that the radiologist misses.
>>
>> So it's good if both look at them.
>>
>> But is it AI? Seems to me it's one slightly complicated algorithm and
>> comes nowhere close to AI. The Turing test, for example.
>
>An AI doesn't need to pass the Turing test to be considered an AI.

Okay, but doesn't it have to be more than a single purpose algorithm?
Otherwise, cars have had AI since computerized fuel injection, but
nobody called it that.

tr...@invalid.com

Aug 10, 2023, 5:42:37 PM
So they use it in conjunction with AI. Why do I need to know that? I
don't know squat about any medical test. I don't have to. If they
want to use AI to bake bread, what difference does that make to me?

I'm sick of reading about this AI crap. I don't need to know where
it's being used.

tr...@invalid.com

Aug 10, 2023, 5:45:59 PM
On Thu, 10 Aug 2023 21:08:29 GMT, sc...@slp53.sl.home (Scott Lurndal)
wrote:
That's exactly what I thought. Yet these ignoramuses who hardly
understand the Web software they use every day keep burping about this
AI stuff when they probably would fail at learning BASIC.

tr...@invalid.com

Aug 10, 2023, 5:56:30 PM
On Thu, 10 Aug 2023 17:23:57 -0400, micky <NONONO...@fmguy.com>
wrote:

>In alt.home.repair, on Thu, 10 Aug 2023 21:03:48 GMT, Cindy Hamilton
><hami...@invalid.com> wrote:
>
>>On 2023-08-10, micky <NONONO...@fmguy.com> wrote:
>>> No one in popular news talked about AI 6 months ago and all of a sudden
>>> it's everywhere.
>>
>>I promise you, people in the programming business have been talking
>>about it for a long while.
>
>That's why I said "popular", to exclude that sort of thing.
>>
>>> The most recent discussion I heard was about "using AI to read X-rays
>>> and other medical imaging".
>>>
>>> They have computer programs that will "look" at, examine, x-rays etc.
>>> and find medical problems, sometimes ones that the radiologist misses.
>>>
>>> So it's good if both look at them.
>>>
>>> But is it AI? Seems to me it's one slightly complicated algorithm and
>>> comes nowhere close to AI. The Turing test, for example.
>>
>>An AI doesn't need to pass the Turing test to be considered an AI.
>
>Okay, but doesn't it have to be more than a single purpose algorithm?
>Otherwise, cars have had AI since computerized fuel injection, but
>nobody called it that.
>

Good Lawd! Next it'll be our turn signals that are AI
Intuitive... Jeesh!

(I'm just about at the point where this AI nonsense is going to end up
in my Plonk! file.)



mau...@help!.com

Aug 10, 2023, 5:59:19 PM
On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dk...@mail.com>
wrote:

>
>> When it devolves into the lives of us
>> common dummies, I'll worry about it then.
>
>By that time, it may be too late.

Oh, I think it will be a while before they come up with the Killer
Robots.

Big Al

Aug 10, 2023, 6:02:16 PM
It sells. That's the first thing radio and TV think about.
--
Linux Mint 21.1 Cinnamon 5.6.8
Al

Cindy Hamilton

Aug 10, 2023, 6:02:53 PM
I don't know. Why don't you tell us.

> I'm sick of reading about this AI crap. I don't need to know where
> it's being used.

Perhaps not. I think it's better to know than not know.

--
Cindy Hamilton

mau...@help!.com

Aug 10, 2023, 6:06:46 PM
More amorphous definitions of AI.

Here is a post made by someone else that I copied a while back which
makes sense to me.
---------------------------
Subject: The Reality of The Bullshit of Quantum Comps Breaking PGP

https://techbeacon.com/security/newest-quantum-breakthrough-encryption-killer

Just a few bits from the article about Quantum hype.

"Building a universal quantum computer, one that can perform
essentially any computation, is an extremely challenging technical
problem. We're far from having solved it."

"To crack a 2,048-bit RSA key, such as the ones that today's standards
require, a quantum computer will need at least a register of 2,048
entangled qubits. That's far from what's available today. And it seems
very unlikely that the current rate of progress in creating more
entanglement will make it possible in the next several years."

"For now, it seems hard to justify worrying about your encryption
becoming vulnerable to adversaries with quantum computers. It seems
very likely that NIST's effort to standardize encryption algorithms
that are quantum-safe will be completed and widely deployed well
before quantum computers are a serious threat to security."

tr...@invalid.com

Aug 10, 2023, 6:28:18 PM
Amen.

(Now everyone shut the F/Up 'bout AI, Dammit!)

tr...@invalid.com

Aug 10, 2023, 6:42:10 PM
On Thu, 10 Aug 2023 21:03:48 GMT, Cindy Hamilton
<hami...@invalid.com> wrote:

>On 2023-08-10, micky <NONONO...@fmguy.com> wrote:
>> No one in popular news talked about AI 6 months ago and all of a sudden
>> it's everywhere.
>
>I promise you, people in the programming business have been talking
>about it for a long while.
>
>> The most recent discussion I heard was about "using AI to read X-rays
>> and other medical imaging".
>>
>> They have computer programs that will "look" at, examine, x-rays etc.
>> and find medical problems, sometimes ones that the radiologist misses.
>>
>> So it's good if both look at them.
>>
>> But is it AI? Seems to me it's one slightly complicated algorithm and
>> comes nowhere close to AI. The Turing test, for example.
>
>An AI doesn't need to pass the Turing test to be considered an AI.
>

From Wikipedia:
"The Turing test, originally called the imitation game by Alan Turing
in 1950, is a test of a machine's ability to exhibit intelligent
behaviour equivalent to, or indistinguishable from, that of a human."

Of what good is AI if the product of it is dumber than a human?

One of your dumb Killer Robots would be dead meat after its first
kill.

That's progress? We have cities run by Dumbocrats with human non-AI
killers who are getting away with more killings than the combined
number committed in our war zones over the past 30-40 years.

So, how dangerous can AI really be?

rbowman

Aug 10, 2023, 9:50:54 PM
On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


> Personally, I'm sick of ths AI crap which seems to exist only in the
> minds of the tech idiots. When it devolves into the lives of us common
> dummies, I'll worry about it then.

Already there:

https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

"Using artificial intelligence developed by a team of Purina pet and data
experts, the Petivity Smart Litterbox System detects meaningful changes
that indicate health conditions that may require a veterinarian's
attention or diagnosis. The monitor, which users are instructed to place
under each litterbox in the household, gathers precise data on each cat's
weight and important litterbox habits to help owners be proactive about
their pet's health."

Bob F

Aug 10, 2023, 10:03:30 PM
How long will we have to wait for the human size version?

rbowman

Aug 10, 2023, 10:33:57 PM
On Thu, 10 Aug 2023 21:08:29 GMT, Scott Lurndal wrote:

> The term "AI" has been misused by the media and most non-computer
> scientists. The current crop of "AI" tools (e.g. ChatGPT) are not
> artificial intelligence, but rather simple statistical algorithms based
> on a huge volume of pre-processed data.

Not quite...

https://blog.dataiku.com/large-language-model-chatgpt

I played around with neural networks in the '80s. It was going to be the
Next Big Thing. The approach was an attempt to quantify the biological
neuron model and the relationship of axons and dendrites.

https://en.wikipedia.org/wiki/Biological_neuron_model

There was one major problem: the computing power wasn't there. Fast
forward 40 years and the availability of GPUs. Google calls their
proprietary units TPUs, or tensor processing units, which is more
accurate. That's the linear algebra tensor, not the physics tensor. While
they are certainly related the terminology changes a bit between
disciplines.

These aren't quite the GPUs in your gaming PC:

https://beincrypto.com/chatgpt-spurs-nvidia-deep-learning-gpu-demand-post-crypto-mining-decline/

For training a GPT you need a lot of them -- and a lot of power. They make
the crypto miners look good.

The dirty little secret is after you've trained your model with the
training dataset, validated it with the validation data, and tweaked the
parameters for minimal error you don't really know what's going on under
the hood.

https://towardsdatascience.com/text-generation-with-markov-chains-an-introduction-to-using-markovify-742e6680dc33

Markov chains are relatively simple.
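In that spirit, a minimal word-level Markov chain text generator (a hedged sketch only; markovify's actual API is richer than this):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words seen immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Random-walk the chain; a fixed seed keeps the walk repeatable."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = chain.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

chain = build_chain("I am sam. sam I am. I do not like green eggs and ham.")
print(generate(chain, "I"))
```

Every step only depends on the previous word, which is why the output sounds locally plausible and globally meaningless.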



tr...@invalid.com

Aug 10, 2023, 10:44:14 PM
And why should I be worried about AI for litterboxes?

Let me know when it starts breaking TrueCrypt or PGP encryption and
devastating our security more than Giggle.com and Redmond are doing.

Until then - shut the *F* UP about "realistic" AI used for something
outside of bagel baking and litterboxes.

tr...@invalid.com

Aug 10, 2023, 10:46:20 PM
Exactly!

+5

Peeler

Aug 11, 2023, 3:57:55 AM
On 11 Aug 2023 01:50:48 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


> Already there:

Like what? Your self-admiring big mouth? Yep, it's ALWAYS there in these
ngs, admiring itself. LOL

--
And yet another idiotic "cool" line, this time about the UK, from the
resident bigmouthed all-American superhero:
"You could dump the entire 93,628 square miles in eastern Montana and only
the prairie dogs would notice."
MID: <ka2vrl...@mid.individual.net>

Peeler

Aug 11, 2023, 4:01:15 AM
On 11 Aug 2023 02:33:51 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


> Not quite...
>
> https://blog.dataiku.com/large-language-model-chatgpt
>
> I played around with neural networks in the '80s. It was going to be the
> Next Big Thing.

Oh, no! The self-admiring bigmouth is at it again!

<FLUSH rest of the usual senile crap unread again>

--
More of the resident senile gossip's absolutely idiotic endless blather
about herself:
"My family and I traveled cross country in '52, going out on the northern
route and returning mostly on Rt 66. We also traveled quite a bit as the
interstates were being built. It might have been slower but it was a lot
more interesting. Even now I prefer what William Least Heat-Moon called
the blue highways but it's difficult. Around here there are remnants of
the Mullan Road as frontage roads but I-90 was laid over most of it so
there is no continuous route. So far 93 hasn't been destroyed."
MID: <kae9iv...@mid.individual.net>

Ed Cryer

Aug 11, 2023, 4:50:32 AM
micky wrote:
> No one in popular news talked about AI 6 months ago and all of a sudden
> it's everywhere.
>
> The most recent discussion I heard was about "using AI to read X-rays
> and other medical imaging".
>
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look at them.
>
> But is it AI? Seems to me it's one slightly complicated algorithm and
> comes nowhere close to AI. The Turing test, for example.
>
> And lots of things they are calling AI these days are just slightly
> or moderately complicated computer programs, black boxes maybe, but not
> AI.
>
> What say you?

A woman I met at a family event recently asked me what I thought of AI.
I started talking about Leibniz and his speculations, Charles Babbage's
Difference Engine, and Isaac Asimov's robot books. Well, she let me have
the limelight; thank you.
Later I found out that what she had in mind was ChatGPT and OpenAI. It's
amazed millions, created a rush of books (including ChatGPT for
Dummies), and fuelled a whole new debate. "AI" has replaced
"algorithms", which replaced "apps", which replaced "programs".

Ed

Paul

Aug 11, 2023, 5:42:17 AM
Every technology needs to be classified.

ChatGPT is about as useful as OCR. OCR is about 99% accurate.
You've just run 200 pages through the scanner. Now what...
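Rough arithmetic on what 99% per-character accuracy means for those 200 pages (the 1,800 characters per page is just an assumed typical figure):

```python
pages = 200
chars_per_page = 1800   # assumed: a fairly dense page of text
accuracy = 0.99         # per-character OCR accuracy
expected_errors = pages * chars_per_page * (1 - accuracy)
print(f"~{expected_errors:.0f} character errors to check by hand")  # -> ~3600
```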

Voluminous output, that must be scrupulously checked.

An "advisor", not a "boss".

A robust source of beer bong pictures.

Paul

Jeroni Paul

Aug 11, 2023, 7:20:21 AM
micky wrote:
> No one in popular news talked about AI 6 months ago and all of sudden
> it's everywhere.

The cloud has been the trend for some years and has no more juice left; they had to take on something else.

Allodoxaphobia

Aug 11, 2023, 8:51:18 AM
I think you'll notice that anything new and shiny,
even if it only employs an 8-bit microprocessor,
will be granted the cloak of artificial intelligence.

Peter W.

Aug 11, 2023, 9:23:59 AM
MPffffff..... I have sat on my fingers long enough.

Apologies in advance to all blondes everywhere, including my wife!

A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"

That is about as seriously as I take AI as it affects my daily life. Sure, reading X-rays, doing certain types of surgery, and many other repetitive but exacting tasks are well suited to a process that does not get tired, does not get blurry vision, does not ignore something, and can even learn, as it repeats those tasks, ways to do so with fewer steps - but that is hardly "intelligence" - there is no self-awareness, just increasingly more complex responses to a defined problem.

But, a robo-caller, as much as it/he/she may try to be 'real', is so obviously artificial as to make me very sad for those who might be fooled. And robotic phone-trees - such as many companies use to avoid having humans on the payroll - are the furthest possible thing from Intelligent.

Peter Wieck
Melrose Park, PA

ohg...@gmail.com

Aug 11, 2023, 10:02:04 AM
On Friday, August 11, 2023 at 9:23:59 AM UTC-4, Peter W. wrote:

>
> A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"
>

Snort.. good one Peter.

If there was true AI there would have been a virtual rimshot accompanying that post.


steve1...@outlook.com

Aug 11, 2023, 11:40:42 AM
On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONO...@fmguy.com>
wrote:

>No one in popular news talked about AI 6 months ago and all of a sudden
>it's everywhere.
>
>The most recent discussion I heard was about "using AI to read X-rays
>and other medical imaging".
>
>They have computer programs that will "look" at, examine, x-rays etc.
>and find medical problems, sometimes ones that the radiologist misses.
>
>So it's good if both look at them.
>
>But is it AI? Seems to me it's one slightly complicated algorithm and
>comes nowhere close to AI. The Turing test, for example.
>
>And lots of things they are calling AI these days are just slightly
>or moderately complicated computer programs, black boxes maybe, but not
>AI.
>
>What say you?

Designing machines to look at images and find specific items
like unexpected growths is quite simple. That's not AI. Simple 3-layer
neural networks can already learn to do that. When machines do things
they are not supposed to do, that may be a form of AI. When my
washing machine starts making meals for me, that could be described as
AI!

John

Aug 11, 2023, 12:17:14 PM
Unless there's an as-yet-unknown flaw in the design of RSA or other
public-key algorithms, there's no way in which an AI of any sort could
"break" encryption -- artificial intelligence cannot beat mathematics. I
guess some hypothetical super-AI could design a quantum computer which
could break non-quantum-resistant algorithms, but that's a pretty far
cry from the gussied-up chatbots most people are talking about.
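For scale: even naive trial division shows why nothing short of new mathematics (or a large quantum computer) factors an RSA modulus -- toy numbers only:

```python
def trial_factor(n):
    """Brute-force the smallest factor of n -- fine for toys, hopeless at RSA scale."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n is prime

print(trial_factor(3233))  # -> 53 (3233 = 53 * 61, the classic textbook RSA modulus)
# A 2048-bit modulus would need on the order of 2**1024 trial divisions.
```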

Anyway I'm pretty sure you've contributed at least 50% of the traffic on
this topic just by whining that people are talking about something you
don't care about. Learn to use your client's scoring system, killfile
the conversation, then take your own advice and shut the fuck up.

john

Stan Brown

Aug 11, 2023, 1:09:28 PM
On Thu, 10 Aug 2023 14:43:42 -0400, micky wrote:
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look them.
>
> But is it AI? Seems to me it one slightly complicated algorith and
> comes nowhere close to AI. The Turing test for example.
>

An algorithm would be programmed by some (presumably) human
programmer as, essentially, a list of rules to follow.

AI (old name: neural networks) is trained by being given a huge stack
of photographs that radiologists have previously examined and
pronounced "yes" or "no". The neural net makes guesses about whether
each picture is a "yes" or a "no", and somehow learns from its
mistakes, so that over time its accuracy becomes better and better.

While the training is simple in principle -- pathways in the neural
network that led to a correct result are given a boost and those that
led to an incorrect response are depressed -- in practice the result
is so complex that nobody can determine what caused the NN to reach a
particular decision in a particular case.
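A drastically simplified sketch of that boost/depress loop: a single artificial neuron trained on made-up 3-pixel "films", nowhere near a real radiology model:

```python
# Toy training data: 3-pixel "images" and a radiologist's yes(1)/no(0) label.
data = [([1, 0, 1], 1), ([1, 1, 1], 1), ([0, 0, 0], 0), ([0, 1, 0], 0)]
weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # training passes over the data
    for pixels, label in data:
        guess = 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0
        err = label - guess              # +1: boost pathways, -1: depress, 0: leave alone
        weights = [w + rate * err * p for w, p in zip(weights, pixels)]
        bias += rate * err

def classify(pixels):
    """1 = 'problem found', 0 = 'clean', per the learned weights."""
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

print([classify(px) for px, _ in data])  # -> [1, 1, 0, 0]
```

Even here the learned weights, not a human-written rule, decide the answer; scale that up a few million-fold and the "nobody knows why" problem appears.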

Or, at least, that's how I understand it. A NN product was offered by
the company I used to work for, and the programmer explained it to me
that way. Nothing I've seen has told me it's different in principle
now, though I believe much bigger computers are being used, and with
most of the Internet as a training set. But does anybody know exactly
why an AI would answer questions like "Who holds the world speed
record for walking across the English Channel" with specific name,
date, and time? I don't think so.

--
Stan Brown, Tehachapi, California, USA https://BrownMath.com/
Shikata ga nai...

Stan Brown

Aug 11, 2023, 1:16:08 PM
On Thu, 10 Aug 2023 21:44:11 -0500, tr...@invalid.com wrote:
> Let me know when it starts breaking TrueCrypt
>

If you really meant TrueCrypt, and you use it, you might want to
think about switching to the fork called VeraCrypt.

https://en.wikipedia.org/wiki/TrueCrypt#End_of_life_announcement

Nobody knows for sure what happened, but the stated reason -- that
TrueCrypt isn't needed because BitLocker exists -- was obviously
absurd. (BitLocker doesn't work on Windows Home.)

Stan Brown

Aug 11, 2023, 1:30:02 PM
On Thu, 10 Aug 2023 17:42:08 -0500, tr...@invalid.com wrote:
> From Wikipedia:
> "The Turing test, originally called the imitation game by Alan Turing
> in 1950, is a test of a machine's ability to exhibit intelligent
> behaviour equivalent to, or indistinguishable from, that of a human."
>
> Of what good is AI if the product of it is dumber than a human?

Humans as a rule are pretty dumb. (Dave Barry famously remarked that
the three strongest forces in a human are stupidity, selfishness, and
horniness. I've seen nothing to prove him wrong.)

But as for the Turing test, I think that ship has sailed. Case in
point: A few months ago, at character.ai, I was chatting with
Napoleon. Not really, of course, but the responses were consistent
with what I knew of his actions and speeches from my reading of
multiple biographies of Talleyrand, his foreign minister and nemesis.
Seems to me that it passed the Turing test.

What laypeople often mean when they talk about artificial
intelligence is not the Turing test, or not _only_ the Turing test
but rather something intangible like a sense of self. They don't want
to know if a machine can imitate a human, they want to know if it's
"alive", whatever that means.

Add to the mix:

* Daniel Dennett, in /Consciousness Explained/ concluded that our
sense of self is essentially a bunch of parallel processses that look
like a serial process. (Or maybe it was the other way around.
Fascinating book, but it's a long time since I read it. I do remember
that he wasn't just speculating in a vacuum: he used evidence in
standard medical journals with the results of actual experiments done
with human perception and cognition.)

* It may all be moot. If the speculation is correct that we're living
in a simulation, humans are all already AI.

rbowman

Aug 11, 2023, 1:51:32 PM
On Fri, 11 Aug 2023 10:09:23 -0700, Stan Brown wrote:

> Or, at least, that's how I understand it. A NN product was offered by
> the company I used to work for, and the programmer explained it to me
> that way. Nothing I've seen has told me it's different in principle now,
> though I believe much bigger computers are being used, and with most of
> the Internet as a training set.

GPUs were the breakthrough. The Graphics part is now a misnomer, although the
original intent was to speed up the calculations involved in CG. That made
them ideal for crypto mining, to the point where a GPU shortage was caused
by the miners buying the high-end boards. They are also good at the vector
manipulations needed for training a neural net.

Using something like PyTorch you can experiment on a PC. Even then it's
much faster if you have a GPU that supports CUDA, which afaik is
proprietary to Nvidia.

When you get to something like Chat you're talking many very expensive
GPUs, a lot of power, and millions of dollars. Chat isn't aware of recent
events since it was frozen with the data available when the training
occurred and retraining is very expensive.

Once the model is trained, inference -- using the model -- is much less
intensive. That's the 'Pre-trained' in GPT.

For me the interesting part is pruning a model developed on a huge
system to run locally with limited resources. Cell phones are getting to
be powerful enough to do so. Many people didn't realize that for something
like speech recognition the audio was sent off to Google, processed, and
the text returned. The light dawned when they realized Alexa has very big
ears. The desired outcome is to have more functionality at the local level,
including very small processors like the Arduino family that don't require
huge amounts of power to run.
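That pruning step can be as simple as zeroing out the smallest-magnitude weights (a toy sketch; real frameworks prune per layer and usually retrain afterwards):

```python
def prune(weights, keep_fraction=0.5):
    """Zero out the smallest-magnitude weights, keeping roughly keep_fraction."""
    k = max(1, int(len(weights) * keep_fraction))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

print(prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.1]))  # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

A sparse weight list like that needs less storage and less arithmetic, which is the whole point of running on a phone or an Arduino-class chip.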

It's a fascinating development but like all disruptors the potential for
bad is just as high as good.

rbowman

Aug 11, 2023, 1:56:47 PM
On Fri, 11 Aug 2023 05:42:10 -0400, Paul wrote:

> CharGPT is about as useful as OCR. OCR is about 99% accurate.
> You've just run 200 pages through the scanner. Now what...

'Stochastic' gets used a lot, so 100% accuracy isn't even a realistic goal.
'Good enough' is the criterion. If a simple model can tell a cat from a dog
97% of the time, that's pretty good. Humans aren't 100% either, so you
could say artificial intelligence is a lot like human intelligence.

Ed Cryer

Aug 11, 2023, 2:25:41 PM
ChatGPT stings even an old computer programmer. It beats the Turing Test
by miles.
But, as you've grasped, it's not got nearer the truth; it's got nearer
the norms of human interaction, in which fake news and camouflage and
outright ignorance play such a part.
But if we're ever going to set up Blade Runners to police humanity,
they'll look for those qualities rather than truthfulness.
This is what's so scary about it. It mimics us so damn closely.

Ed


Peeler

Aug 11, 2023, 3:21:04 PM
On 11 Aug 2023 17:56:42 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


> 'Stochastic' gets used a lot so 100% accuracy isn't even a realistic goal.
> 'Good enough' is the criteria. If a simple model can tell a cat from a dog
> 97% of the time, that's pretty good. Humans aren't 100% either so you
> could say artificial intelligence is a lot like human intelligence.

And the bullshit just keeps getting squeezed out of your abnormally big
mouth! <tsk>

Peeler

Aug 11, 2023, 3:23:55 PM
On 11 Aug 2023 17:51:26 GMT, lowbrowwoman, the endlessly driveling,
troll-feeding, senile idiot, blabbered again:


> It's a fascinating development but like all disruptors the potential for
> bad is just as high as good.

It's even more fascinating what amount of shit you are able to keep spouting
in these ngs, you grandiloquent self-admiring, useless senile bigmouth! <BG>

Zaghadka

Aug 11, 2023, 6:39:25 PM
On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONO...@fmguy.com> wrote:

>No one in popular news talked about AI 6 months ago and all of a sudden
>it's everywhere.
>
>The most recent discussion I heard was about "using AI to read X-rays
>and other medical imaging".
>
>They have computer programs that will "look" at, examine, x-rays etc.
>and find medical problems, sometimes ones that the radiologist misses.
>
>So it's good if both look at them.
>
>But is it AI? Seems to me it's one slightly complicated algorithm and
>comes nowhere close to AI. The Turing test, for example.
>
>And lots of things they are calling AI these days are just slightly
>or moderately complicated computer programs, black boxes maybe, but not
>AI.
>
>What say you?

It's not generalized AI as per the Turing test.

That and language changes. If marketers want to start calling Machine
Learning (ML) AI instead, well, we'll need a new term when we have one
that passes the Turing test. Probably "Overlord."

https://en.wikipedia.org/wiki/Machine_learning

In other words, nobody sane gives a rat's ass, excepting marketers
looking to copy a highly successful campaign. In the end, it's AI because
everyone is now going to call it that.

It is not generalized AI, which is what AI used to mean, though.

My main concern is not the semantics but corporate needs to release
unsafe stuff in a rush to be first, and the sort of people who buy that
unsafe stuff. There are a lot of unsafe people that like to play with
guns. Nothing against guns, just irresponsible owners.

...and I really don't think welding ChatGPT onto Bing is going to be the
success MS thinks it is that will finally usher in an age of Bing and
Edge supremacy over Google. Just no. I don't think people want it.

--
Zag

No one ever said on their deathbed, 'Gee, I wish I had
spent more time alone with my computer.' ~Dan(i) Bunten

Zaghadka

unread,
Aug 11, 2023, 6:41:24 PM8/11/23
to
On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dk...@mail.com> wrote:

>
>> When it devolves into the lives of us
>> common dummies, I'll worry about it then.
>
>By that time, it may be too late.

It's already too late. Pandora has opened the box. There's no putting it
back in.

Dennis Kane

unread,
Aug 11, 2023, 9:27:30 PM8/11/23
to
On 8/11/2023 3:41 PM, Zaghadka wrote:
> On Thu, 10 Aug 2023 13:42:11 -0700, Dennis Kane <dk...@mail.com> wrote:
>
>>
>>> When it devolves into the lives of us
>>> common dummies, I'll worry about it then.
>>
>> By that time, it may be too late.
>
> It's already too late. Pandora has opened the box. There's no putting it
> back in.

Be afraid. Be very afraid.

Zaghadka

unread,
Aug 12, 2023, 12:51:22 PM8/12/23
to
I wouldn't go that far, but standard human stupidity is gonna make this
veeery interesting.

bruce bowser

unread,
Aug 13, 2023, 10:37:57 AM8/13/23
to
On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
> No one in popular news talked about AI 6 months ago and all of sudden
> it's everywhere.
>
> The most recent discussion I heard was about "using AI to read X-rays
> and other medical imaging".
>
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look them.
>
> But is it AI? Seems to me it one slightly complicated algorith and
> comes nowhere close to AI. The Turing test for example.
>
> And that lots of thigns they are calling AI these days are just slightly
> or moderately complicated computer programs, black boxes maybe, but not
> AI.
>
> What say you?

Being a person with a poli sci major and Army ROTC background in college (along with union electrical construction school), my understanding of AI, however, came from talk radio (both politically leftist and rightist).

I heard rightist Hugh Hewitt say something about teaching law school during the day, and how someone put a reading-and-writing version of AI through the California state bar exam lawyer certification test, and it did much better than several of its human counterparts. I guess that's around the time when AI became all the talk.

Chris

unread,
Aug 13, 2023, 12:43:09 PM8/13/23
to
Bob F <bobn...@gmail.com> wrote:
> On 8/10/2023 6:50 PM, rbowman wrote:
>> On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:
>>
>>
>>> Personally, I'm sick of ths AI crap which seems to exist only in the
>>> minds of the tech idiots. When it devolves into the lives of us common
>>> dummies, I'll worry about it then.
>>
>> Already there:
>>
>> https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-
>> offers-new-standard-of-care-for-cat-owners-301632491.html
>>
>> "Using artificial intelligence developed by a team of Purina pet and data
>> experts, the Petivity Smart Litterbox System detects meaningful changes
>> that indicate health conditions that may require a veterinarian's
>> attention or diagnosis. The monitor, which users are instructed to place
>> under each litterbox in the household, gathers precise data on each cat's
>> weight and important litterbox habits to help owners be proactive about
>> their pet's health."
>>
>
> How long will we have to wait for the human size version?

Have to get people to poop in a litterbox first ;)

Chris

unread,
Aug 13, 2023, 5:17:50 PM8/13/23
to
micky <NONONO...@fmguy.com> wrote:
> No one in popular news talked about AI 6 months ago and all of sudden
> it's everywhere.
>
> The most recent discussion I heard was about "using AI to read X-rays
> and other medical imaging".
>
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look them.
>
> But is it AI? Seems to me it one slightly complicated algorith and
> comes nowhere close to AI. The Turing test for example.

The Turing test was passed quite a while ago. Plus the test itself is flawed.

> And that lots of thigns they are calling AI these days are just slightly
> or moderately complicated computer programs, black boxes maybe, but not
> AI.
>
> What say you?

AI has never been properly defined, so people use it to describe all
sorts of things.



Chris

unread,
Aug 13, 2023, 5:39:38 PM8/13/23
to
Paul <nos...@needed.invalid> wrote:
> On 8/10/2023 2:43 PM, micky wrote:
>> No one in popular news talked about AI 6 months ago and all of sudden
>> it's everywhere.
>>
>> The most recent discussion I heard was about "using AI to read X-rays
>> and other medical imaging".
>>
>> They have computer programs that will "look" at, examine, x-rays etc.
>> and find medical problems, sometimes ones that the radiologist misses.
>>
>> So it's good if both look them.
>>
>> But is it AI? Seems to me it one slightly complicated algorith and
>> comes nowhere close to AI. The Turing test for example.
>>
>> And that lots of thigns they are calling AI these days are just slightly
>> or moderately complicated computer programs, black boxes maybe, but not
>> AI.
>>
>> What say you?
>>
>
> A radiologist assistant is not a Large Language Model.
>
> I would expect to some extent, image analysis would be a
> "module" on an LLM, and not a part of the main bit.
>
> Bare minimum, it's a neural network, trained on images,
> one at a time, that slosh around and train the neurons.
>
> For example, something like YOLO_5 (You Only Look Once), can
> be trained to identify animals in photos. It draws a box around
> the presumed animal and names it (or whatever). That uses a lot
> less hardware than a Large Language Model, and less storage.
> The article had a picture with a bear in it, and indeed, the
> bear had a square drawn around it.
>
> But as for whether the "quality" is there, that is another
> issue entirely. In my opinion, no radiologist would ever trust
> something as sketchy as YOLO. Radiologists are very particular
> about their jobs, as they hate getting sued.

It's a sad reflection of priorities where the primary concern is about
being sued rather than making sure patients get the best treatment.

> And I can imagine
> the look on the judges face when you tell him "yer honor, I didn't
> even bother to look at that film, the computer told me there was
> nothing there". Some lawyers recently, learned about what happens
> when you "phone it in".

Some lawyers getting caught being dumb is not the same as using a
clinically approved tool. A clinician isn't ever going to diagnose a
patient via ChatGPT. If they did they deserve to get the book thrown at
them.

> Professionals are still on the hook for the
> whole bolt of goods. The computer isn't going to get sued for
> "being stupid", because it is stupid.
>
> It would take a *lot* of films, to train a radiologist assistant.
> Who would have a collection, large enough for the job ?

Er, hospitals.

> It would be
> a violation of privacy law, for a bunch of hospitals to throw all
> their films into a big vat, for NN training.

No it isn't.

> It's not like crawling
> the web and getting access to content that way.

Which is likely illegal. Hence all the suits against Google et al.

> While a lot of individuals and their jobs can be replaced,
> the radiologist will be "the last to go".

The risk to jobs from AI is massively overblown. People aren't going to
lose jobs to AI, they will lose jobs from other people who use AI.

Radiologists' jobs have the potential of improvement with AI.
https://www.theguardian.com/society/2023/aug/02/ai-use-breast-cancer-screening-study-preliminary-results

There's a way to go yet before this gets into routine practice but it will
get there in one form or other. The pressures on health services are only
growing and efficiencies need to improve to keep up. Money isn't enough.
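
As a toy illustration of the supervised training being discussed (label the films, feed them in, nudge the weights), here is a single logistic "neuron" trained on synthetic two-pixel "images" with invented labels. Nothing here resembles a clinical model; it is only a sketch of the loop:

```python
import math
import random

rnd = random.Random(0)

# Synthetic "scans": two-pixel images; class 1 means a bright second pixel.
data = [(rnd.gauss(0, 1), rnd.gauss(0, 1)) for _ in range(200)]
labels = [1.0 if x2 > 0.5 else 0.0 for _, x2 in data]

# One logistic neuron, trained by batch gradient descent on log loss.
w1 = w2 = b = 0.0
for _ in range(1000):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        g1 += (p - y) * x1    # d(loss)/d(w1), summed over the batch
        g2 += (p - y) * x2
        gb += p - y
    n = len(data)
    w1 -= 0.1 * g1 / n
    w2 -= 0.1 * g2 / n
    b -= 0.1 * gb / n

# Training accuracy: sign of the logit versus the true label.
correct = sum(
    ((w1 * x1 + w2 * x2 + b) > 0) == (y == 1.0)
    for (x1, x2), y in zip(data, labels)
)
print(f"training accuracy: {correct / len(data):.2f}")
```

A real radiology model differs in scale (millions of parameters, convolutional layers, held-out validation films), but the train-on-labeled-examples loop is the same shape.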

Chris

unread,
Aug 13, 2023, 5:47:06 PM8/13/23
to
Exactly. Perfection is not the target. At a minimum, an AI that performs as
well as a human but never gets tired, grumpy or distracted, and is twice as
fast, would be enough of an improvement.

Paul

unread,
Aug 13, 2023, 6:34:54 PM8/13/23
to
https://en.wikipedia.org/wiki/Artificial_intelligence

AI founder John McCarthy agreed, writing that "Artificial intelligence is not,
by definition, simulation of human intelligence".

This is the interesting stuff.

https://arstechnica.com/information-technology/2023/03/embodied-ai-googles-palm-e-allows-robot-control-with-natural-commands/

The problem with LLMs that write software is that you cannot easily evaluate
whether the output meets the specification.

It's possible the robot ideas will fall prey to the same issues. If you
define a complex enough task, maybe the machine will fail to plan what
it has to do properly. And the tiny tests with a bag of crisps
aren't really all that much of a challenge, from a complexity perspective.

"Take this 40 foot container of ASML Lithography machine parts,
assemble and calibrate the machine."

The motion on the Google robot, isn't all that smooth. Boston Dynamics
has some software they use for "choreography", that smooths out some of that.
Maybe telling the Google robot "how to be smooth", would automatically
result in it choreographing what it is doing. Instead of "bump-bump-bumping"
the drawer shut. But the bumping behavior likely results from the computer
re-evaluating the problem every 0.2 seconds and working out a new set
of actions (which includes another tiny "bump"). So perhaps that's
an artifact of the method.

Paul

Chris

unread,
Aug 14, 2023, 2:40:49 AM8/14/23
to
Sure it can. Write tests as is normal practice.
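
In practice that means treating the model's output as an untrusted patch, gated behind tests written from the specification. A minimal sketch, where `dedupe_sorted` stands in for hypothetical LLM-generated code:

```python
# Spec: return the input's values sorted ascending, without duplicates.
def dedupe_sorted(values):
    """Candidate implementation (imagine this came back from an LLM)."""
    return sorted(set(values))

# Tests derived from the specification, not from the implementation.
def test_dedupe_sorted():
    assert dedupe_sorted([3, 1, 2, 3]) == [1, 2, 3]
    assert dedupe_sorted([]) == []
    assert dedupe_sorted([5]) == [5]
    assert dedupe_sorted([2, 2, 2]) == [2]

test_dedupe_sorted()
print("all spec tests passed")  # prints "all spec tests passed"
```

Because the tests come from the spec rather than the implementation, they can reject a generated function that merely looks plausible.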



bruce bowser

unread,
Aug 14, 2023, 7:45:51 AM8/14/23
to
In the UK and Europe, a plaintiff must pay the other side's expenses if their side loses.

vjp...@at.biostrategist.dot.dot.com

unread,
Aug 15, 2023, 5:34:58 AM8/15/23
to
I have used "AI" since 1978. But every few years, what used to be AI becomes
commonplace and stops being called AI. Character and voice recognition, file
completion, symbolic manipulation (Macsyma, Wolfram), and the ELIZA psychoanalyser
were once called AI. Then again, I'm a 62yo (familially) third-generation
computer user as well as a third-generation engineer. If you used Joseph
Weizenbaum's fifty-year-old psychoanalysis program ELIZA (available in emacs as
doctor; I use it for night-time OCD panic attacks), ChatGPT looks awfully
boring. I've used fractals (EXCEL:LOGEST) for fraud detection for four
decades.
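
For anyone who never ran it: ELIZA's trick is shallow pattern matching plus pronoun reflection, with no model of meaning at all. A minimal sketch in that spirit (toy rules, nowhere near the full DOCTOR script):

```python
import re

# Reflect first-person pronouns so the reply echoes the user's statement.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered rules: first matching pattern wins; last rule is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment):
    """Swap pronouns word by word: 'my computer' -> 'your computer'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        m = pattern.match(statement.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel anxious about my computer"))
# prints "Why do you feel anxious about your computer?"
```

Fifty years on, the whole thing is a lookup table; the eeriness was always supplied by the user.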


Gregory Nazianzen, the Great, tells us all creativity is divine (28:6; 1
cor 3:5-9) and denounced anti-science at Basil's funeral (42:11) as ignorant,
lazy and stupid. (My namesake, Basil of Caesarea, was a physician who
invented the concept of a hospital.) This may be found on p151 of the 1977
OEDB Patristics textbook used in high schools in Greece (Evagelos Theodorou,
Anthology of Holy Fathers.) More completely from Florovsky v7 p109 "We
derive something useful for our orthodoxy even from the worldly
science.. Everyone who has a mind will recognize that learning is our highest
good.. also worldly learning, which many Christians incorrectly abhor.. those
who hold such an opinion are stupid and ignorant. They want everyone to be
just like themselves, so that the general failing will hide their own"


--
Vasos Panagiotopoulos panix.com/~vjp2/vasos.htm
---{Nothing herein constitutes advice. Everything fully disclaimed.}---


Ed Cryer

unread,
Aug 15, 2023, 1:37:59 PM8/15/23
to
Google Books Ngram Viewer traces "AI" way back to beyond 1800, although
it seems to have increased rapidly from the mid 1950s.
https://tinyurl.com/22hru257
I can't help but wonder if many of the earlier citations are initials for
things like "Andalusian Insurance" or "American Independence" or
"African Iratedness" etc.

Ed

micky

unread,
Aug 15, 2023, 8:01:28 PM8/15/23
to
In alt.home.repair, on Tue, 15 Aug 2023 18:36:54 +0100, Ed Cryer
Maybe so. Very interesting comments from all of you.

Paul

unread,
Aug 16, 2023, 6:05:26 PM8/16/23
to
On 8/15/2023 8:01 PM, micky wrote:

>
> Maybe so. Very interesting comments from all of you.
>

When you've got a moment (after you've fixed your clock),
could you zip over and look at this question.

There's a guy who owns more Internet Radios than he knows
what to do with, who is getting "disconnected" randomly.

<ubinjc$3b5dd$1...@dont-email.me>

http://al.howardknight.net/?STYPE=msgid&MSGI=%3Cubinjc%243b5dd%241%40dont-email.me%3E

In a previous message, he identifies the brands of the units.
"sanstrom" brand and "majority" brand. The UK seems to have had
its share of these things, over the years.

<uasvso$3b2gu$1...@dont-email.me>

http://al.howardknight.net/?STYPE=msgid&MSGI=%3Cuasvso%243b2gu%241%40dont-email.me%3E

I have no idea, how the directory feature works on those, to
create the list of stations.

And the other thing I don't know, is if IR has made any
recent changes to the transport protocol, which might
break a radio.

Paul

three_jeeps

unread,
Aug 17, 2023, 4:44:32 PM8/17/23
to
On Thursday, August 10, 2023 at 10:33:57 PM UTC-4, rbowman wrote:
> On Thu, 10 Aug 2023 21:08:29 GMT, Scott Lurndal wrote:
>
> > The term "AI" has been misused by media and most non-computer
> > scientists. The current crop "AI" tools (e.g. chatGPT) are not
> > artificial intelligence, but rather simple statistical algorithms based
> > on a huge volume of pre-processed data.
> Not quite...
>
> https://blog.dataiku.com/large-language-model-chatgpt
>
> I played around with neural networks in the '80s. It was going to be the
> Next Big Thing. The approach was an attempt to quantify the biological
> neuron model and the relationship of axons and dendrites.
>
> https://en.wikipedia.org/wiki/Biological_neuron_model
>
> There was one major problem: the computing power wasn't there. Fast
> forward 40 years and the availability of GPUs. Google calls their
> proprietary units TPUs, or tensor processing units, which is more
> accurate. That's the linear algebra tensor, not the physics tensor. While
> they are certainly related the terminology changes a bit between
> disciplines.
>
> These aren't quite the GPUs in your gaming PC:
>
> https://beincrypto.com/chatgpt-spurs-nvidia-deep-learning-gpu-demand-post-
> crypto-mining-decline/
>
> For training a GPT you need a lot of them -- and a lot of power. They make
> the crypto miners look good.
>
> The dirty little secret is after you've trained your model with the
> training dataset, validated it with the validation data, and tweaked the
> parameters for minimal error you don't really know what's going on under
> the hood.
>
> https://towardsdatascience.com/text-generation-with-markov-chains-an-
> introduction-to-using-markovify-742e6680dc33
>
> Markov chains are relatively simple.

I developed ANN classifiers in the mid-to-late 80s for explosive recognition in suitcases and as hardware plant observers for control loops. The size of the training set is one of the issues... but the web has a gazillion images from which training sets are harvested. I know of a few local self-driving car companies that do exactly that.

Yes, verification of the ANN was and still is a big issue.
J
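
The Markov chains mentioned in the quoted post really are simple: a word-level generator of the kind markovify implements fits in a few lines of plain Python (toy corpus, not the library's actual API):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from a start word, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the"))
```

Real generators use higher-order states (two- or three-word keys) so the output hangs together better, but the mechanism is the same table lookup; the contrast with a trained GPT is the point.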

three_jeeps

unread,
Aug 17, 2023, 4:52:39 PM8/17/23
to
On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
> No one in popular news talked about AI 6 months ago and all of sudden
> it's everywhere.
>
> The most recent discussion I heard was about "using AI to read X-rays
> and other medical imaging".
>
> They have computer programs that will "look" at, examine, x-rays etc.
> and find medical problems, sometimes ones that the radiologist misses.
>
> So it's good if both look them.
>
> But is it AI? Seems to me it one slightly complicated algorith and
> comes nowhere close to AI. The Turing test for example.
>
> And that lots of thigns they are calling AI these days are just slightly
> or moderately complicated computer programs, black boxes maybe, but not
> AI.
>
> What say you?
I did a fair amount of 'AI' research in the 80s and early 90s. The amount of hype was amazing, and it was all about 'branding' IMHO... a new science-fiction technology made real. I bundled up for the first AI winter... I've moved to a different climate where I don't have to bundle up for the second AI winter.....

The really hard aspects of AI are knowledge discovery and composition, which have seen some progress, but nowhere near sensational. Ask a computer program to design and build an automatic transmission, and then figure out why it doesn't work as well when a different ATF is used..... We are *really* far away from that.


whit3rd

unread,
Aug 17, 2023, 11:46:50 PM8/17/23
to
On Friday, August 11, 2023 at 2:42:17 AM UTC-7, Paul wrote:

> Every technology needs to be classified.
>
> ChatGPT is about as useful as OCR. OCR is about 99% accurate.
> You've just run 200 pages through the scanner. Now what...
>
> Voluminous output, that must be scrupulously checked.
>
> An "advisor", not a "boss".

Yeah, you DO want to classify that source. Alas, the
internet is set up so... it could be that 'Paul' is an AI.

The Horny Goat

unread,
Dec 31, 2023, 2:48:26 AM12/31/23
to
On Fri, 11 Aug 2023 17:39:19 -0500, Zaghadka <zagh...@hotmail.com>
wrote:

>...and I really don't think welding ChatGPT onto Bing is going to be the
>success MS thinks it is that will finally usher in an age of Bing and
>Edge supremacy over Google. Just no. I don't think people want it.

If I thought some AI was going to harvest my personal data and do
nasty things with it you bet I'd be running away hoping to win an
Olympic gold medal with my speed...

Cindy Hamilton

unread,
Dec 31, 2023, 5:35:07 AM12/31/23
to
I'm pretty sure it already has. Best get moving. Do you have another
world to run to?

--
Cindy Hamilton