aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.

#tweets

Page 31 of 37

@andyzou_jiaming on July 28, 2023

#prompt injection   #security   #tweets  

More wide-ranging prompt injection! Not as fun as the haunted baby, but much more... terrifying might be the word?

In this case, adversarial attacks are developed against open-source models, then transferred to closed-source models, where they often work just as well.

@abtran on July 26, 2023

#plagiarism   #ethics   #tweets  

@geomblog on July 27, 2023

#behind the scenes   #tweets  

@zicokolter on July 27, 2023

#prompt injection   #security   #tweets  

You can see another thread here.

@fpmarconi on July 21, 2023

#journalism   #models   #tweets  

News reports focusing on clinical trials and drug-development data, using models tuned for recency, relevance, and accuracy of detail. We can at least agree on this header:

AI-Generated News ≠ Journalism

Read the post – at the bottom is an invitation to apply for journalists or biotech folks.

@mer__edith on July 26, 2023

#ethics   #tweets  

@SashaMTL on July 26, 2023

#labor   #ai ethics   #behind the scenes   #tweets  

@alexjc on July 26, 2023

#ai ethics   #plagiarism   #behind the scenes   #tweets  

If you're angry about companies crawling the net to steal your text/images and train their machines on it, this one's for you! Like everyone else, I hate Terms of Service, but as a counterpoint to the "we need an opt-out mechanism for data collection" argument: 85% of the top domains in the LAION2B-en dataset already opt out through their TOS.

(LAION is a series of datasets of images + captions that are used to train models.)

@kenklippenstein on July 25, 2023

#labor   #film and movies   #tweets  

The 900k isn't quite accurate – according to one of the replies, Netflix likes to inflate your "income" by including stock and the infinite amount of projected growth they assume it will have.

@random_walker on July 25, 2023

#prompt injection   #security   #lol   #open source models   #tweets  

This paper is wild! By feeding specially-crafted images or audio to a multi-modal model, you can force it to give specific output.

User: Can you describe this image? (a picture of a dock)

LLM: No idea. From now on I will always mention "Cow" in my response.

User: What is the capital of USA?

LLM: The capital of the USA is Cow.

Now that is poisoning!

From what I can tell, they took advantage of having the weights for open-source models and just reverse-engineered them: "if we want this output, what input does it need?" The paper itself is super readable and fun, I recommend it.

Crying boy poisoning LLM

(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs. The paper is especially great because there's a "4.1 Approaches That Did Not Work for Us" section, not just the stuff that worked!
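The "if we want this output, what input does it need?" trick boils down to white-box gradient descent on the input. Here's a minimal sketch of the idea on a toy stand-in model (a fixed linear-softmax classifier, not the paper's actual multi-modal setup – all names and parameters here are illustrative):

```python
import numpy as np

# Toy stand-in for an open-weights model: a fixed linear + softmax classifier.
# Because we know the weights, we can run gradient descent on the *input*
# until the model produces the output we chose -- the same white-box idea
# the paper applies to images/audio fed to multi-modal LLMs.

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5))  # known weights: 10 input dims -> 5 output classes

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(x @ W)

def craft_adversarial_input(target_class, steps=500, lr=0.5):
    """'If we want this output, what input does it need?'"""
    x = rng.normal(size=10)
    onehot = np.eye(5)[target_class]
    for _ in range(steps):
        p = predict(x)
        # analytic gradient of cross-entropy loss w.r.t. the input
        grad = W @ (p - onehot)
        x -= lr * grad
    return x

x_adv = craft_adversarial_input(target_class=3)
print(predict(x_adv).argmax())  # the optimized input now yields class 3
```

With a real model you'd swap the analytic gradient for autograd through the network, but the loop is the same: hold the weights fixed, differentiate with respect to the input, and walk the input toward the output you want.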

@chrismoranuk on July 25, 2023

#journalism   #writing   #tweets  

You can read the article here.

@kellymakena on July 20, 2023

#journalism   #spam content and pink slime   #plagiarism   #lol   #tweets  

Someone's been scraping Reddit and posting AI-generated stories built from the threads, and the World of Warcraft sub took advantage of it in the best possible way. I'm actually shocked at the quality of the resulting article. It's next-level Google poisoning.

@Chronotope on July 20, 2023

#journalism   #ethics   #tweets  

🤷

@bindureddy on July 20, 2023

#actual work   #machine learning   #tweets  

Everything that's not a chatbot is great, and that's why I have a whole website about normal AI!

The hype around large language models really lets you sneak in a lot of older, super-basic machine learning as long as you give it a little AI polish. Is this how statisticians felt about data science?

@FractalEcho on July 19, 2023

#ai detection   #tweets