aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

#hallucinations


May 7, 2023: @timnitgebru

#hallucinations   #summarization   #alignment  

All I want in life is to read this opposite-of-the-argument summary! Things could have gone wrong in a few ways:

First, they pasted in the URL and said "what's this say?" Sometimes ChatGPT pretends it can read the web even when it can't, generating a summary from whatever ideas it can pull out of the URL text itself.

Second, it just hallucinated all to hell.

Third, ChatGPT is secretly aligned to support itself. Doubtful, but a great way to stay on the good side of Roko's Basilisk.

May 6, 2023: @baddatatakes

#lol   #hallucinations   #fact-checking   #failures  

The issue here is: what is a "language model" actually for? We can say "predicting the next word in a sequence of words," but that's kicking the can down the road.

Most of the time it's pretty good at giving you facts, so where do you draw the line?
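If "predicting the next word" sounds abstract, here's a toy sketch of the idea: a bigram model that picks the most frequent follower of a word. The corpus and function names are made up for illustration (real LLMs are neural networks trained on billions of documents), but the point carries over: the objective is plausible continuation, with no notion of truth anywhere in it.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, purely for illustration.
corpus = (
    "the model predicts the next word and the model makes up "
    "the next fact and the model sounds confident"
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word -- fluent, not fact-checked."""
    following = bigrams.get(word)
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))    # most common follower of "the" in the corpus
print(predict_next("makes"))  # "up" -- the only thing it ever saw follow "makes"
```

The model happily continues "the" with "model" because that's what it saw most, not because it's true. Scale that up and you get something that is right most of the time for statistical reasons, which is exactly why the line is so hard to draw.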

May 4, 2023: @cpautoscribe

#hallucinations   #lol   #fact-checking   #failures  

At some point I just stopped collecting tweets like this, there were just too many.

May 4, 2023: @mayfer

#hallucinations   #challenges   #evaluation   #lol  

We're impressed by the toy use cases for LLMs because they're things like "write a poem about popcorn" and the result is fun and adorable. The problem is when you try to use them for Real Work: it turns out LLMs make things up all of the time! If you're relying on them for facts or accuracy you're going to be sorely disappointed.

Unfortunately, it's easy to stop at the good "wow" and never dig deep enough to reach the bad "wow." This tweet should be legally required reading for anyone signing off on AI in their organization.

April 20, 2023: @nelsonfliu

#hallucinations   #evaluation   #shortcomings and inflated expectations   #citations and sourcing   #tweets