"How do you know about all this AI stuff?"
I just read tweets, buddy.
I wasn't there, but I cite this tweet like every hour of every day.
We're going to see a lot of people doubling down on the (accidental? incidental?) falsehoods spread by ChatGPT.
The part I'll stress here is "without fiddling...[summarization] can go terribly wrong." We like to think summarizing things is easy – and it is, comparatively! – but give this a read. In a Danish newsroom experimenting with summarization, 41% of the auto-generated story summaries needed to be corrected before publication.
Hallucinations for book and paper authorship are some of the most convincing. The subject matter typically matches the supposed author, and the titles are always very, very plausible. Because they're just generating statistically plausible text, LLMs are masters of "sounds about right." There's no list of books inside the machine.
The issue here is: what is a "language model" actually for? We can say "predicting the next word in a sequence of words," but that's kicking the can down the road.
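If "predicting the next word" feels hand-wavy, here's a toy sketch of the idea in Python – a counting bigram model, nothing like a real neural LM, and the mini-corpus is made up for illustration:

```python
import random
from collections import defaultdict, Counter

# Hypothetical tiny corpus. A real LLM does the same job at vastly
# larger scale, with a neural net instead of raw counts.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a continuation, weighted by training frequency.
    There's no fact store here -- only "what tends to come next"."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" sentence one word at a time.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run it a few times and it will happily output sentences like "the cat sat on the rug" that never appeared in its training text – statistically plausible, factually unmoored. Scale that same move up to billions of parameters and you get very convincing fake book titles.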
Most of the time it's pretty good at giving you facts, so where do you draw the line?
At some point I just stopped collecting tweets like this; there were too many.