aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

@mister_shroom on January 03, 2024

#security   #hallucinations   #github copilot   #tweets  

@belisards on January 05, 2024

#generative art and visuals   #theory   #tweets  

@LChoshen on January 04, 2024

#evaluation   #audio   #prompt injection   #tweets  

Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023? | Hacker News

#uncategorized   #link  

@minimaxir on December 31, 2023

#uncategorized   #tweets  

@colin_fraser on December 29, 2023

#uncategorized   #tweets  

@samim on April 27, 2023

#uncategorized   #tweets  

@Succinct_Punchy on December 23, 2023

#uncategorized   #tweets  

@emollick on January 08, 2024

#lol   #prompt engineering   #evaluations   #tweets  

@daithaigilbert on December 15, 2023

#dystopia   #tweets   #journalism  

Welcome to the future! Story here (paywall skip)

It's honestly worth reading; every paragraph is a new list of Bad Stuff! Here's an example of doubling down on hallucinations:

The chatbot responded quickly, stating that Funiciello was alleged to have received money from a lobbying group financed by pharmaceutical companies in order to advocate for the legalization of cannabis products. But the entire corruption allegation against Funiciello was an AI hallucination. To “back up” its baseless allegations, the chatbot linked to five different websites including Funiciello’s own website, her Wikipedia page, a news article where the lawmaker highlights the problem of femicide in Switzerland, and an interview she gave with a mainstream Swiss broadcaster about the issue of consent.

And in its (ineffective) race to protect against the Bad Stuff, it isn't even good at the easy stuff:

In their study, the researchers concluded that a third of the answers given by Copilot contained factual errors and that the tool was “an unreliable source of information for voters.” In 31 percent of the smaller subset of recorded conversations, they found that Copilot offered inaccurate answers, some of which were made up entirely.

@ChrisJBakke on December 17, 2023

#lol   #prompt injection   #tweets  

@OfficialLoganK on December 17, 2023

#prompt engineering   #openai   #prompting   #tweets  

The guide can be found here

@fchollet on December 16, 2023

#evaluation   #reading list   #tweets  

@IanMagnusson on December 19, 2023

#nlp   #dialects   #evaluation   #tweets  

Paper here, data here