Paul <nos...@needed.invalid> wrote:
> On 8/10/2023 2:43 PM, micky wrote:
>> No one in popular news talked about AI 6 months ago, and all of a sudden
>> it's everywhere.
>>
>> The most recent discussion I heard was about "using AI to read X-rays
>> and other medical imaging".
>>
>> They have computer programs that will "look" at, examine, x-rays etc.
>> and find medical problems, sometimes ones that the radiologist misses.
>>
>> So it's good if both look at them.
>>
>> But is it AI? Seems to me it's one slightly complicated algorithm and
>> comes nowhere close to AI. The Turing test, for example.
>>
>> And that lots of things they are calling AI these days are just slightly
>> or moderately complicated computer programs, black boxes maybe, but not
>> AI.
>>
>> What say you?
>>
>
> A radiologist assistant is not a Large Language Model.
>
> I would expect that, to some extent, image analysis would be a
> "module" on an LLM, and not part of the main bit.
>
> Bare minimum, it's a neural network, trained on images,
> one at a time, that slosh around and train the neurons.
>
> For example, something like YOLOv5 (You Only Look Once) can
> be trained to identify animals in photos. It draws a box around
> the presumed animal and names it (or whatever). That uses a lot
> less hardware than a Large Language Model, and less storage.
> The article had a picture with a bear in it, and indeed, the
> bear had a square drawn around it.
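As an aside, what a YOLO-style detector hands back is fairly mundane: a list of boxes, each with a class label and a confidence score, and you discard the low-confidence ones before drawing anything. A rough Python sketch (the numbers, labels, and threshold here are invented for illustration, not real model output):

```python
# Hypothetical YOLO-style detections: (label, confidence, (x, y, w, h)).
# All values below are made up for the sake of the example.
detections = [
    ("bear",   0.91, (120, 80, 200, 160)),
    ("dog",    0.32, (300, 40,  60,  50)),   # low confidence, gets discarded
    ("person", 0.78, (10,  10,  40,  90)),
]

CONF_THRESHOLD = 0.5  # typical detectors let you tune this

def keep_confident(dets, threshold=CONF_THRESHOLD):
    """Keep only the detections the network is reasonably sure about."""
    return [d for d in dets if d[1] >= threshold]

for label, conf, (x, y, w, h) in keep_confident(detections):
    print(f"{label}: {conf:.0%} box at ({x},{y}) size {w}x{h}")
```

The "square drawn around the bear" is just that box rendered onto the image; the interesting (and hard) part is the training that produces trustworthy confidences in the first place.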
>
> But as for whether the "quality" is there, that is another
> issue entirely. In my opinion, no radiologist would ever trust
> something as sketchy as YOLO. Radiologists are very particular
> about their jobs, as they hate getting sued.
It's a sad reflection of priorities that the primary concern is being
sued rather than making sure patients get the best treatment.
> And I can imagine
> the look on the judge's face when you tell him "yer honor, I didn't
> even bother to look at that film, the computer told me there was
> nothing there". Some lawyers recently learned about what happens
> when you "phone it in".
Some lawyers getting caught being dumb is not the same as using a
clinically approved tool. A clinician isn't ever going to diagnose a
patient via ChatGPT. If they did, they'd deserve to get the book thrown at
them.
> Professionals are still on the hook for the
> whole bolt of goods. The computer isn't going to get sued for
> "being stupid", because it is stupid.
>
> It would take a *lot* of films to train a radiologist assistant.
> Who would have a collection large enough for the job?
Er, hospitals.
> It would be
> a violation of privacy law, for a bunch of hospitals to throw all
> their films into a big vat, for NN training.
No it isn't.
> It's not like crawling
> the web and getting access to content that way.
Which is likely illegal. Hence all the lawsuits against Google et al.
> While a lot of individuals and their jobs can be replaced,
> the radiologist will be "the last to go".
The risk to jobs from AI is massively overblown. People aren't going to
lose jobs to AI; they will lose them to other people who use AI.
Radiologists' jobs have the potential to improve with AI.
https://www.theguardian.com/society/2023/aug/02/ai-use-breast-cancer-screening-study-preliminary-results
There's a way to go yet before this gets into routine practice, but it will
get there in one form or another. The pressures on health services are only
growing, and efficiencies need to improve to keep up. Money alone isn't enough.