This paper, introduced to me by Meredith Broussard a couple of months ago, is the funniest thing I have ever read. It's a ruthless takedown of AI systems and our belief in them, demanding that we start from the basics when evaluating them as a policy choice: making sure that they actually work.
From the intro:
AI-enabled moderation tools regularly flag safe content, teacher assessment tools mark star instructors to be fired, hospital bed assignment algorithms prioritize healthy over sick patients, and medical insurance service distribution and pricing systems gatekeep necessary care-taking resources. Deployed AI-enabled clinical support tools misallocate prescriptions, misread medical images, and misdiagnose.
All of those have citations, of course! And while yes, the AI-powered systems themselves often don't work, it's the human element that repeatedly fails us too:
The New York MTA's pilot of facial recognition had a reported 100% error rate, yet the program moved forward anyway.
Ouch. You can read the story on that one yourself at MTA’s Initial Foray Into Facial Recognition at High Speed Is a Bust (free link).
But yes, the full paper is highly highly recommended.