"How do you know about all this AI stuff?"
I just read tweets, buddy.
Jonathan Stray's point that "I got the model to say a bad word" has become the focal point of bias discussions makes sense, and it's a lot easier to understand than the more insidious forms. Slurs are a starting point, but until we can easily point to "here are some situations where it went wrong," we're going to keep falling back on the simple cases.
It's a solid thread, but it's a little long, so I'm dropping you into what I think is the good bit. It's a take on the "A is B, B is A" generalization paper (the "Reversal Curse").
The conflation of generative AI with AI/machine learning in general is good in the sense that it builds acceptance of tools that would once have been thought too technical, but on the other side, gen AI is awful at truth-telling, which is the one thing journalism needs to care about.
It's especially unsettling that they don't seem to have a solid grasp of the tech.
One of the big problems in journalism + AI is the lack of informed, combative discourse. I've tried, I've tried: at a conference last month I put together a last-minute session called "Trusting AI in the newsroom: Hallucinations, bias, security and labor," which did its best to specifically address the intersection of journalism and the problems with AI.
Note that this failure only applies to facts learned during fine-tuning, not to content included in the prompt.
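If the distinction is hard to picture, here's a minimal sketch in Python, assuming the official OpenAI client; the fact, the questions, and the fine-tuned model ID are all invented for illustration, not taken from the paper's code. The idea: a model fine-tuned on "A is B" tends to answer the forward question but fail the reversed one, while the same reversed question works fine once the fact sits in the prompt.

```python
# Minimal sketch of the "A is B, B is A" failure, assuming the OpenAI
# Python client (>= 1.0). The fact, questions, and fine-tuned model ID
# below are hypothetical, invented purely for illustration.
from openai import OpenAI

client = OpenAI()

def ask(model: str, question: str, context: str | None = None) -> str:
    """Ask one question, optionally placing the fact in the prompt."""
    content = f"{context}\n\n{question}" if context else question
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

# Invented fact the hypothetical model was fine-tuned on ("A is B").
FACT = "Mira Castellan composed the opera 'The Glass Meridian'."
FINE_TUNED = "ft:gpt-3.5-turbo:newsroom:example"  # hypothetical model ID

# Forward direction ("A is ...?"): fine-tuned models usually get this.
print(ask(FINE_TUNED, "Who is Mira Castellan?"))

# Reversed direction ("B is ...?"): the paper finds they usually don't.
print(ask(FINE_TUNED, "Who composed 'The Glass Meridian'?"))

# Same reversed question, but with the fact in the prompt instead of the
# weights -- this works, which is the distinction the note above draws.
print(ask("gpt-4o", "Who composed 'The Glass Meridian'?", context=FACT))
```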
An approach to improving factuality and decreasing hallucinations in LLMs?