Oh lordy:
a model that learns when predictive AI is offering correct information - and when it's better to defer to a clinician
In theory, who wouldn't want this? You can't trust AI with medical facts, so it would make sense for the system to say "oh hey, maybe don't trust me this time?" But how's this fancy, fancy system made?
From reading the post, it seems to literally just be taking the confidence scores of the predictive model and asking "when we're this confident, are we usually right?" As clinicians, we could just accept any computer prediction that was >95% confident to carve off the easiest cases and save some workload.
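My read could be wrong, but the check I'm describing is basically a reliability table: bin a validation set by the model's confidence and see how often it was right in each bin. A minimal sketch, assuming that's all it is (the function name, bin count, and inputs are my own, not anything from the paper or its code):

```python
import numpy as np

def accuracy_by_confidence(confidences, correct, bins=10):
    """Bin validation predictions by model confidence and report accuracy per bin.

    confidences: model scores in [0, 1]; correct: 1 if the prediction was right, else 0.
    (Hypothetical helper for illustration, not the paper's method.)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # include the right edge in the final bin so a score of exactly 1.0 isn't dropped
        in_bin = (confidences >= lo) & (confidences <= hi if hi == 1.0 else confidences < hi)
        if in_bin.any():
            print(f"confidence {lo:.1f}-{hi:.1f}: "
                  f"accuracy {correct[in_bin].mean():.2f} over {int(in_bin.sum())} cases")
```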
I think the "secret" is that it's not about analysis of the image itself, it's just about the confidence score. So when the model is 99% sure, go with the AI; if it's only 85% sure, a doctor is probably better. Why this deserves a paper in Nature Medicine I'm not exactly sure, so I'm guessing I'm missing something?
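And if that really is the secret, the deployment rule is a one-liner: trust the AI above some confidence cutoff, send everything else to a doctor. Again a hedged sketch with a made-up threshold and labels, not the actual method from the paper:

```python
def route_case(ai_confidence: float, ai_label: str, threshold: float = 0.95) -> str:
    """Accept the AI's read only when its confidence clears the cutoff; otherwise defer."""
    if ai_confidence >= threshold:
        return f"AI: {ai_label}"     # carve off the easiest, high-confidence cases
    return "defer to clinician"      # everything else goes to a human reader

# 99% sure -> go with the AI; 85% sure -> a doctor reads it instead
print(route_case(0.99, "no abnormality detected"))  # AI: no abnormality detected
print(route_case(0.85, "no abnormality detected"))  # defer to clinician
```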
Paper is here: Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians; the blog post announcement is here, and the code is here.