aifaq.wtf

"How do you know about all this AI stuff?"
I just read tweets, buddy.


@sarahookr on July 17, 2023

#models   #tweets   #evaluation  

Answers include:

...but lbh I haven't read any of these.

Is AI-generated disinformation a threat to democracy?

#misinformation and disinformation   #link  

I like this piece because it agrees with me! The money quote is:

Disinformation is a serious problem. But we don’t think generative AI has made it qualitatively different. Our main observation is that when it comes to disinfo, generative AI merely gives bad actors a cost reduction, not new capabilities. In fact, the bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that.

Which is exactly in line with my favorite tweet on the subject:

infinite disinformation

@pushmeet on July 17, 2023

#medicine   #trust   #ethics   #dystopia   #tweets  

Oh lordy:

a model that learns when predictive AI is offering correct information - and when it's better to defer to a clinician

In theory who wouldn't want this? You can't trust AI with medical facts, so it would make sense to say "oh hey, maybe don't trust me this time?" But how's this fancy, fancy system made?

From reading the post, it seems to literally be taking the confidence scores of the predictive model and asking "when we're this confident, are we usually right?" Clinicians could then just accept any computer prediction that was >95% confident, carving off the easiest cases and saving some workload.

I think the "secret" is that it's not about analysis of the image itself, just the confidence score. So when the model is 99% sure, go with the AI, but if it's only 85% sure a doctor is probably better. Why this deserves a paper in Nature I'm not exactly sure, so I'm guessing I'm missing something?
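If you want that idea in code, here's a toy sketch of confidence-threshold deferral. To be very clear, this is my reading of it, not the paper's actual CoDoC method; the function name and the fake validation numbers are entirely made up.

```python
import numpy as np

def pick_threshold(conf, model_correct, clinician_correct,
                   thresholds=np.linspace(0.5, 1.0, 51)):
    """Pick the confidence cutoff that maximizes overall accuracy when
    cases at or above the cutoff go to the AI and the rest to a clinician."""
    best_t, best_acc = None, -1.0
    for t in thresholds:
        use_ai = conf >= t
        # For each case, take the AI's answer above the cutoff,
        # the clinician's answer below it.
        combined = np.where(use_ai, model_correct, clinician_correct)
        acc = combined.mean()
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Fake validation data, just to make the sketch run.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)           # model confidence per case
model_correct = rng.random(1000) < conf      # model tends to be right when confident
clinician_correct = rng.random(1000) < 0.9   # clinicians right ~90% of the time

t, acc = pick_threshold(conf, model_correct, clinician_correct)
print(f"defer to a clinician below confidence {t:.2f} (combined accuracy {acc:.1%})")
```

That's the whole trick as I understand it: one learned number deciding who looks at the scan.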

Paper is here: Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians; blog post announcement is here; code is here

ChatGPT use declines as users complain about ‘dumber’ answers | Hacker News

#models   #evaluation   #shortcomings and inflated expectations   #link  

The responses in here are a good read: thoughts about whether and why it's happening, including the shine of novelty wearing off, awareness of hallucinations coming to the forefront, and/or RLHF alignment preventing you from just asking for racial slurs all day.

I especially enjoyed this comment:

If you ask ChatGPT an exceedingly trivial question, it’ll typically spend the next 60 seconds spewing out five paragraphs of corporate gobbledygook. And of course, because ChatGPT will lie to you, I often end up back on Google anyways to validate its claims.

@braddwyer on July 16, 2023

#shortcomings and inflated expectations   #hallucinations   #tweets  

This almost gets a #lol but mostly it's just sad.

@techladyallison on July 16, 2023

#bias   #generative art and visuals   #tweets  

This one belongs in the "AI is A Very Bad Thing" hall of fame.

@skirano on July 16, 2023

#generative art and visuals   #user interface   #tweets  

It's fun, that's about it.

> These systems are built around models that have built-in biases [...] (if you ... | Hacker News

#bias   #link  

OP makes a rather benign statement about the bias in generative models...

if you ask it to create a picture of an entrepreneur, for example, you will likely see more pictures featuring men than women

...which summons some predictable replies:

How is that a bias? That's reality

It might be worth poking around in this thread about what it means when AI mediates your exposure to the world. A super basic example: if I connect to an AI tool from Senegal and ask in French for a photo of an entrepreneur, is it going to give me a white man?

Also, check out the one where an AI generating a professional LinkedIn photo turned an Asian woman white.

Stable Bias: Analyzing Societal Representations in Diffusion Models

#generative art and visuals   #bias   #link  

The academic paper version of the beautiful Bloomberg piece Humans are biased. Generative AI is even worse. Another choice rec from Meredith Broussard.

The Problem With LangChain | Max Woolf's Blog

#langchain   #link  

@ronawang on July 14, 2023

#bias   #generative art and visuals   #tweets  

I can't find it now, but there was a QT that pulled out a response along the lines of "you're just not using the right model, find a better model."

This is going to come up again and again and again in terms of bias and other issues, and we need to acknowledge that it's a pretty absurd reaction. Boundless trust in tech, availability of alternatives, etc etc etc – the onus absolutely can't be on the end user.

@natanielruizg on July 14, 2023

#models   #fine-tuning   #training   #generative art and visuals   #tweets  

How to Use AI to Do Stuff: An Opinionated Guide

#generative art and visuals   #generative text   #explanations and guides and tutorials   #models   #link  

This is a pretty thorough, non-technical guide to the AI tools available for use. It doesn't dig too deep, but it's a heck of a usable list. For example:

Make images

  • Most transparent option: Adobe Firefly
  • Open source option: Stable Diffusion
  • Best free option: Bing or Bing Image Creator (which uses DALL-E), Playground (which lets you use multiple models)
  • Best quality images: Midjourney

Nice, eh?

How to Get an AI to Lie to You in Three Simple Steps

#hallucinations   #shortcomings and inflated expectations   #link  

A deeper dive into hallucinations than just "look, the AI said something wrong!" As a spoiler, the three methods for getting tricked by an AI are:

  • Asking it for more than it "knows"
  • Assuming it is a person
  • Assuming it can explain itself

GitHub - moreshk/alzebra: Math Tutor for kids

#education   #link  

I haven't looked at it nor have I used it. It's really just sitting here as open-source inspo (and it has an adorable name).