We're going to see a lot of people doubling down on the (accidental? incidental?) falsehoods spread by ChatGPT.
I love these magic words. Read more here.
If threatening a single life doesn't get AI to do what you want, threatening children or nuclear annihilation is a good second step.
Confidence is everything.
I've been guilty of thinking along the lines of "if these safeguards are built into mainstream products, everyone is just going to develop their own products," but... I don't know, adoption of AI tools has shown that ease of use and accessibility mean a lot. It's the "if there were a hundred-dollar bill on the ground, someone would have picked it up already" market-efficiency econ joke.
Hallucinated book and paper attributions are some of the most convincing. The subject matter typically matches the supposed author, and the titles are always very, very plausible. Because they are just generating text that would make statistical sense, LLMs are masters of "sounds about right." There's no list of books inside the machine.
The issue here is what a "language model" is actually for. We can say "predicting the next word in a sequence of words," but that's kicking the can down the road.
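To make that concrete, here's a minimal sketch of what "predicting the next word" means in practice. It uses GPT-2 via the Hugging Face transformers library as a stand-in (my choice for illustration; ChatGPT's model is different, but the autoregressive idea is the same): the model never looks anything up, it just scores which token should come next.

```python
# A minimal sketch of next-word prediction, using GPT-2 via the
# Hugging Face transformers library. GPT-2 is a stand-in here;
# the autoregressive mechanism is what matters.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The definitive book on compilers was written by"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Take the scores for the very next token only.
    logits = model(input_ids).logits[0, -1]

# There is no bibliography inside the model -- just a probability
# distribution over ~50k tokens, ranked by what "sounds about right."
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p:.3f}")
```

Whatever name comes out on top isn't retrieved from anywhere; it's just the most statistically comfortable continuation of the sentence.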
Most of the time it's pretty good at giving you facts, so where do you draw the line?
At some point I just stopped collecting tweets like this; there were too many.