aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

#trust


@emollick on August 01, 2023

#challenges   #shortcomings and inflated expectations   #trust   #real-world experience   #medicine   #tweets  

A common refrain about AI is that it's a useful helper for humans to get things done. Reading x-rays, MRIs, and the like is a big one: practically everyone who's worked with machine learning and images has handled medical imagery, as it's always part of the curriculum. Here we are again, but this time looking at whether radiologists will take AI judgement into account when analyzing images.

They apparently do not. Thus this wild ride of a recommendation:

Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI.

And later...

In fact, a majority of radiologists would do better on average by simply following the AI prediction.

That reluctance is in stark contrast to the police, who embrace flawed facial recognition even when it just plain doesn't work and leads to racial disparities.

My hot take is that acceptance of tool-assisted workflows depends on whether the tool helps you accomplish something. The police get to accomplish something extra if they issue a warrant based on a facial recognition match, and the faulty nature of the match is secondary to feeling like you're making progress in a case. Radiologists, on the other hand, just sit around looking at images all day, and it isn't a case of "I get to go poke around at someone's bones if I agree with the AI."

But a caveat: I found the writing in the paper to be absolutely impenetrable, so if we're being honest I have no idea what it's actually saying outside of those few choice quotes.

@pushmeet on July 17, 2023

#medicine   #trust   #ethics   #dystopia   #tweets  

Oh lordy:

a model that learns when predictive AI is offering correct information - and when it's better to defer to a clinician

In theory, who wouldn't want this? You can't trust AI with medical facts, so it would make sense for the system to say "oh hey, maybe don't trust me this time." But how's this fancy, fancy system made?

From reading the post, it literally seems to be taking the confidence scores of the predictive model and asking "when we're this confident, are we usually right?" Clinicians could then just accept any computer prediction that was >95% confident, carving off the easiest cases and saving some workload.

I think the "secret" is that it's not analyzing the image itself, just the confidence score. So when the model is 99% sure, go with the AI, but if it's only 85% sure, a doctor is probably better. Why this deserves a paper in Nature I'm not exactly sure, so I'm guessing I'm missing something?
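
If I'm reading it right, the core mechanic fits in a few lines. Here's a minimal sketch in Python of the thresholding idea as I understand it (entirely my own toy version, not the paper's actual method; the function names, the 95% target, and the threshold sweep are all invented for illustration):

    import numpy as np

    def pick_threshold(confidences, correct, target_accuracy=0.95):
        # Sweep candidate thresholds from low to high and return the
        # lowest confidence level at which the model's past predictions
        # were right at least target_accuracy of the time.
        for t in np.linspace(0.5, 0.99, 50):
            above = confidences >= t
            if above.any() and correct[above].mean() >= target_accuracy:
                return t
        return None  # the model is never confident enough to trust alone

    def route_case(confidence, threshold):
        # The deferral rule itself: trust the AI above the threshold,
        # otherwise hand the case to a clinician.
        if threshold is None or confidence < threshold:
            return "clinician"
        return "ai"

So with a threshold tuned on a batch of past cases, route_case(0.99, t) sends the case to the AI and route_case(0.85, t) sends it to a doctor. That's the whole trick as far as I can tell.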

The paper is here: Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians. The blog post announcement is here, and the code is here.